[ "Enhancement of extreme events through the Allee effect and its mitigation through noise in a three species system", "Enhancement of extreme events through the Allee effect and its mitigation through noise in a three species system" ]
[ "Deeptajyoti Sen \nIndian Institute of Science Education and Research Mohali\nSector 81140306Knowledge City, ManauliIndia\n", "Sudeshna Sinha \nIndian Institute of Science Education and Research Mohali\nSector 81140306Knowledge City, ManauliIndia\n" ]
[ "Indian Institute of Science Education and Research Mohali\nSector 81140306Knowledge City, ManauliIndia", "Indian Institute of Science Education and Research Mohali\nSector 81140306Knowledge City, ManauliIndia" ]
Abstract
We consider the dynamics of a three-species system incorporating the Allee Effect, focussing on its influence on the emergence of extreme events in the system. First we find that under Allee effect the regular periodic dynamics changes to chaotic. Further, we find that the system exhibits unbounded growth in the vegetation population after a critical value of the Allee parameter. The most significant finding is the observation of a critical Allee parameter beyond which the probability of obtaining extreme events becomes non-zero for all three population densities. Though the emergence of extreme events in the predator population is not affected much by the Allee effect, the prey population shows a sharp increase in the probability of obtaining extreme events after a threshold value of the Allee parameter, and the vegetation population also yields extreme events for sufficiently strong Allee effect. Lastly we consider the influence of additive noise on extreme events. First, we find that noise tames the unbounded vegetation growth induced by Allee effect. More interestingly, we demonstrate that stochasticity drastically diminishes the probability of extreme events in all three populations. In fact for sufficiently high noise, we do not observe any more extreme events in the system. This suggests that noise can mitigate extreme events, and has potentially important bearing on the observability of extreme events in naturally occurring systems.
DOI: 10.1038/s41598-021-00174-0
arXiv: 2109.05753
Scientific Reports (2021) 11:20913. Correspondence: [email protected]
The emergence of extreme events in the dynamical evolution of systems ranging from weather 1 to power grids 2,3 has catastrophic implications, so understanding the underlying mechanisms that may trigger extreme events has commanded considerable recent research interest 4 . An extreme event may be defined as an event where one or more variables of a system, arising in nature or in the laboratory, exhibit very large deviations from the mean value. The dynamics of the system is thus characterized by excursions to values that differ significantly from the average. Further, though recurrent, these large deviations are rare vis-a-vis the characteristic time scale of the system, and their occurrences are aperiodic and uncorrelated in time. Without loss of generality, an event is typically labelled 'extreme' if a state variable, in the course of its temporal evolution, takes values that are several standard deviations away from the average value, thereby signalling dynamical behaviour beyond normal variability. Such extreme events have been observed in natural systems such as rogue ocean waves 5 , laboratory systems such as optical systems 6 , as well as financial phenomena like market crashes 7 . Interestingly, other definitions of extreme events, more appropriate to their context, have also been employed. Notably, for the specific and important problem relating climate change to ecological dynamics, a synthetic definition of extreme events involving both the driver and the response system has been proposed 8 . In this work, however, we employ the most commonly used marker for extreme events: recurrent, aperiodic deviations larger than a prescribed threshold from the mean value are considered extreme events, with the threshold typically taken to be 3-8 standard deviations from the mean.
A central direction in understanding extreme events is to find generic mechanisms that can give rise to such large fluctuations. Typical studies of extreme events have involved stochastic models 9,10 , for instance random walk models of transport on networks 11 . The emergence of extreme events in deterministic dynamical systems, manifested as intermittent large-amplitude events, has also been investigated recently. Broadly speaking, the statistical features of deterministic systems are an active research direction that can lead to an understanding of extreme events arising in the context of deterministic dynamics 12,13 . The search for dynamical systems that yield extreme events, without the drive of external stochastic influences or intrinsic random fluctuations, is a focus of much ongoing research effort from the point of view of a basic understanding of complex systems 14-17 . Additionally, such probabilistic outcomes in dynamical systems are most relevant in applied contexts as well, such as in the engineering sciences, where this direction of research leads to better assessment of risks 18 . In this broad direction, in this work we explore the dynamics of a vegetation-prey-predator system, coupled through interactions of the Lotka-Volterra type. Importantly, our model system also incorporates the biologically significant Allee effect, one of the classic phenomena of population ecology. The Allee effect reflects the beneficial effects on the growth of individuals arising from conspecific interactions 19-21 , and has bearing on the long-term persistence of a population as a consequence of small size. Further, we investigate the effect of additive noise in the system, focusing on the role of stochasticity in the emergence of extreme events.
In a larger context, this research direction also has bearing on the important general question of the emergence of extremely large events in deterministic dynamical systems, and the effect of noise on their sustained prevalence. Our central findings in this work are as follows: the system under the Allee effect yields chaotic dynamics. Further, it leads to unbounded vegetation growth for sufficiently strong Allee effect. Most significantly, the Allee effect aids the emergence of extreme events in this three-species chain. Interestingly, we also observe that noise suppresses the unbounded blow-up of vegetation induced by the Allee effect. Lastly, sufficiently strong noise also subdues the extreme events in the vegetation, prey and predator populations, suggesting a significant natural mechanism to mitigate extreme events in population chains. In the second section we present results arising in the model of three interacting species incorporating the Allee effect. In the third section we explore the effect of additive noise in the system. We conclude with a discussion of the scope of our findings in the fourth section.

Three-species food chain model incorporating the Allee effect
Complex systems research in general, and theoretical ecology in particular, has seen intense research activity in networks modelling interacting species, often focusing on local and global stability properties 22-25 . Here we will focus on the emergence of extreme events and consider as our test-bed the well-known model for the dynamics of the snowshoe hare and the Canadian lynx populations, based on observed data 22 . Specifically, the system incorporates a three-species vertical food chain, consisting of vegetation (denoted by u), prey (denoted by v) and predator (denoted by w). Additionally, we will incorporate in this model a term reflecting the Allee effect in the growth of the prey.
The dynamics of this three-species trophic system is described by the following set of coupled differential equations:

$$\dot{u} = f(u, v, w) = a u - \alpha_1 f_1(u, v),$$
$$\dot{v} = g(u, v, w) = \alpha_1 f_1(u, v)\, A(v) - b v - \alpha_2 f_2(v, w), \qquad (1)$$
$$\dot{w} = h(u, v, w) = \alpha_2 f_2(v, w) - c\,(w - w^{*}).$$

The interaction between the vegetation and prey populations is considered to follow the type II functional response, described by the function $f_1(u, v) = \frac{uv}{1 + ku}$. This well-known functional response is characterized by a decelerating intake rate, stemming from the assumption that the consumer is limited by its capacity to process food. The parameter k is the average time spent processing a food item, termed the handling time in the literature. The interaction of the predator population with the prey is considered to follow the well-known Lotka-Volterra type interaction, described by the function $f_2(v, w) = vw$. Here α_1 denotes the maximum growth rate of the prey, which is in general the product of the ingestion rate and a constant factor (< 1), accounting for the fact that not all of the ingested resource (vegetation) is converted into the prey's biomass. A similar parameter for the predator is denoted by α_2. The parameters a, b and c represent the intrinsic growth rates of the three species u, v and w respectively. Further, the model allows the predator population to maintain an equilibrium population w* when the prey concentration is very low; in other words, the predator can survive in the trophic system without depending on the prey. Importantly, in contrast to work on similar systems 26 , here we also explicitly consider the Allee effect. Specifically, the Allee effect is incorporated in the growth of the prey by introducing

$$A(v) = \frac{v}{v + \theta}, \qquad (2)$$

where θ is the Allee strength parameter, representing the critical prey density at which the probability of successful mating is one half. The Allee effect here arises from the difficulty of finding mates for sexual reproduction, and A(v) describes the mating success at low population density 27,28 .

This kind of Allee effect in two- and three-dimensional population models has drawn much research attention 19-21,27,29,30 . In this work we consider the parameter values a = 1, b = 1, c = 10, w* = 0.006, α_1 = 0.5, α_2 = 1, k = 0.05 (as in Ref. 22). We explore the dynamics of the system under varying θ through numerical simulations using the fourth-order Runge-Kutta algorithm. We have ascertained the stability and convergence of our results with respect to decreasing step size. Our main focus is to explore the dynamical consequences of the Allee effect in the prey population in this three-species model system. We will first show that there is a sharp increase in the probability of obtaining unbounded vegetation growth beyond a critical value of the Allee parameter. We will then demonstrate that, interestingly, the system exhibits chaos as the Allee effect becomes more significant. We will then move on to the central focus of this work, namely the influence of the Allee effect on the emergence of extreme events, explicitly demonstrating a pronounced increase in the propensity of extreme events under increasing Allee effect.

Temporal evolution of the population densities
Our first observation is the emergence of explosive runaway growth in vegetation when the Allee effect is too strong, i.e. when the Allee parameter θ is sufficiently large, the vegetation grows in an unbounded manner 31 . In order to quantify this blow-up, we estimate the probability of unbounded vegetation growth from a large sample of random initial states, followed over a long period of time. We also ascertain that the estimated values are converged with respect to increasing sample size, and can thus be considered numerically robust.
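As a concrete illustration, the deterministic system (1)-(2) can be integrated with a fixed-step fourth-order Runge-Kutta scheme, as described above. The parameter values below follow the text; the step size, run length and initial state are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Parameter values from the text; theta is the Allee strength parameter.
a, b, c, w_star = 1.0, 1.0, 10.0, 0.006
alpha1, alpha2, k = 0.5, 1.0, 0.05

def rhs(state, theta):
    """Right-hand side of system (1), with Allee term A(v) = v/(v + theta)."""
    u, v, w = state
    f1 = u * v / (1.0 + k * u)                 # type II functional response
    f2 = v * w                                 # Lotka-Volterra interaction
    A = v / (v + theta) if (v + theta) > 0 else 0.0
    return np.array([
        a * u - alpha1 * f1,
        alpha1 * f1 * A - b * v - alpha2 * f2,
        alpha2 * f2 - c * (w - w_star),
    ])

def rk4(state, theta, dt=0.005, steps=20_000):
    """Fixed-step fourth-order Runge-Kutta integration."""
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = rhs(state, theta)
        k2 = rhs(state + 0.5 * dt * k1, theta)
        k3 = rhs(state + 0.5 * dt * k2, theta)
        k4 = rhs(state + dt * k3, theta)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[i] = state
    return traj

# Illustrative run without the Allee effect (theta = 0).
traj = rk4(np.array([1.0, 0.5, 0.5]), theta=0.0)
print(traj[-1])
```

Raising `theta` towards the values quoted in the text (e.g. 0.01776 or 0.02475) is how the periodic-to-chaotic transition discussed below would be probed numerically.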
The results thus obtained, for varying Allee parameter θ, are displayed in Fig. 1. It is clearly evident from the figure that there exists a critical value of θ, which we denote by θ_c, beyond which the vegetation has a finite probability of explosive unbounded growth. In the rest of this work we therefore restrict our analysis to the range θ ∈ [0, θ_c). Next we investigate the temporal evolution of the population densities. Figure 2 shows representative time series of vegetation, prey and predator, and the corresponding attractors in three-dimensional phase space. To broadly illustrate the influence of the Allee effect, we present this for three values of θ of increasing magnitude. When the Allee effect is absent in the prey population, i.e. θ = 0, all populations fluctuate periodically and are confined to a periodic orbit (see Fig. 2a,b). For a larger Allee parameter (θ = 0.01776), the populations of vegetation, prey and predator all evolve in an aperiodic manner, as evident in Fig. 2c, with the corresponding chaotic attractor shown in Fig. 2d. On further increasing θ to 0.02475, the size of the chaotic attractor increases, as evident from Fig. 2e,f. Therefore, an increasing Allee parameter drives the system from a periodic to a chaotic state. To corroborate this observation, we present the bifurcation diagram of the prey population with respect to the Allee parameter θ in Fig. 3. We observe the onset of chaos through the usual period-doubling cascade, initiating from period-4 at θ = 0. Subsequently, we also observe narrow periodic windows, with period-3 being the most prominent. The most significant implication of this bifurcation diagram is the emergence of chaotic dynamics over a very large range of the Allee parameter θ. That is, with no Allee effect or under very weak Allee effect the system is periodic, while a strong Allee effect typically induces chaos in this three-species system.
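The blow-up probability estimate of Fig. 1 can be sketched in Monte-Carlo form: random initial states drawn from the hyper-cube u ∈ [0, 4], v ∈ [0, 2], w ∈ [0, 5], with a run counted as a blow-up once the vegetation exceeds 10³. For brevity this sketch uses a vectorised Euler step rather than RK4, and the step size, sample size and run length are illustrative assumptions.

```python
import numpy as np

# Parameter values from the text.
a, b, c, w_star = 1.0, 1.0, 10.0, 0.006
alpha1, alpha2, k = 0.5, 1.0, 0.05

def blowup_probability(theta, n_init=200, dt=0.005, steps=20_000, seed=0):
    """Fraction of random initial states whose vegetation u exceeds 1e3."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 4.0, n_init)
    v = rng.uniform(0.0, 2.0, n_init)
    w = rng.uniform(0.0, 5.0, n_init)
    blown = np.zeros(n_init, dtype=bool)
    for _ in range(steps):
        f1 = u * v / (1.0 + k * u)
        A = np.where(v + theta > 0.0, v / (v + theta + 1e-300), 0.0)
        du = a * u - alpha1 * f1
        dv = alpha1 * f1 * A - b * v - alpha2 * v * w
        dw = alpha2 * v * w - c * (w - w_star)
        u, v, w = u + dt * du, v + dt * dv, w + dt * dw
        # Clip tiny negative excursions: densities are non-negative (sketch-level fix).
        u, v, w = [np.maximum(x, 0.0) for x in (u, v, w)]
        blown |= u > 1e3
        # Freeze blown-up runs so they no longer participate in the update.
        u, v, w = [np.where(blown, np.nan, x) for x in (u, v, w)]
    return blown.mean()

p0 = blowup_probability(theta=0.0)   # no Allee effect
print(p0)
```

Sweeping `theta` over a grid and plotting the returned fraction reproduces the shape of a Fig. 1-style curve, with the caveat that a coarse Euler step can shift the apparent θ_c.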
Extreme events induced by the Allee effect
One of the most interesting observations from the time series presented in the section above is the following: when the magnitude of the Allee parameter θ is low, vegetation and prey densities are confined to low values, while the predator densities deviate very significantly from their mean. For very small θ the system is attracted to a periodic orbit, and so the large deviations are completely correlated in time and occur periodically. They therefore cannot be considered extreme events, as they are neither aperiodic nor rare. But for larger θ, both predator and prey densities can sometimes shoot up over 7 standard deviations away from the mean value. This is clearly evident in Fig. 2c,e, where one can see that both predator and prey populations exceed the 7σ threshold from time to time. The instants at which prey and predator populations exceed the 7σ threshold are now completely uncorrelated in time. This is consistent with the underlying chaotic dynamics that emerges under increasing Allee parameter θ. In order to illustrate this, we mark the time instances at which a population exceeds the 7σ threshold, for different values of the Allee parameter θ. Figure 4 shows this for the vegetation, prey and predator populations. The density of points signifying the occurrence of extreme events is clearly the highest for the predator population, indicating that the predator population has the greatest propensity for large deviations. It is also clear that vegetation has the fewest extreme events in the same time window. The uncorrelated nature of the extreme events is also evident in the scatter of these points, except in the small periodic windows that occur for certain special ranges of θ. The increasing density of these points also illustrates the increasing probability of extreme events in the populations with increasing Allee parameter θ.
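The marker used above, an excursion beyond µ + 7σ, is simple to implement. Below is a sketch of the detection step on a synthetic bursty series; the series, helper name and burst positions are our own illustrative choices, not data from the paper.

```python
import numpy as np

def extreme_event_times(x, n_sigma=7.0):
    """Indices where a series first crosses mu + n_sigma * std from below.

    Consecutive above-threshold samples are collapsed to the first crossing,
    so one excursion counts as a single event."""
    mu, sigma = x.mean(), x.std()
    above = x > mu + n_sigma * sigma
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    return onsets, mu + n_sigma * sigma

# Synthetic series: regular small oscillations plus three rare large bursts.
rng = np.random.default_rng(0)
t = np.arange(20_000)
x = np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size)
x[[3_000, 9_000, 15_000]] += 40.0          # artificial "extreme" excursions

events, threshold = extreme_event_times(x)
print(events, round(threshold, 2))
```

Applied to the three simulated population series, the returned indices give exactly the raster of event times plotted in a Fig. 4-style diagram.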
In order to understand the phenomena quantitatively, we first estimate the maximum densities of the vegetation, prey and predator populations (denoted by u_max, v_max and w_max respectively) for varying Allee parameter θ. To estimate this, we find the global maximum of the populations sampled over a time interval T = 1000, averaged over a large set of random initial conditions. Figure 5 shows u_max, v_max and w_max for Allee parameter θ ∈ [0, θ_c), scaled by their values at θ = 0. These scaled maxima help us gauge the relative change in the maximum population densities arising from the Allee effect. It is evident from our simulation results that the magnitude of the global maximum of vegetation does not change very significantly with increasing Allee parameter θ, its magnitude near θ_c being approximately 4-fold the value at θ = 0. However, the maximum prey and predator populations change very significantly with the Allee parameter θ, exceeding 10-fold the value obtained for θ = 0. We then go on to numerically calculate the probability density of the vegetation, prey and predator population densities for increasing Allee parameter θ. The tail of this probability density function reflects the influence of the Allee effect on the probability of obtaining extreme events. To illustrate this, we show the probability density function of the prey population in Fig. 6, for three different values of θ. Extreme events are confined to the part of the tail of the distribution lying beyond the vertical red line, which marks the µ + 7σ value in the figure. It is thus clear from these probability distributions that the Allee effect in the prey population promotes the occurrence of extreme events, as the tail of the distribution becomes flatter and extends further with increasing Allee parameter θ.
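The tail-weight comparison of Fig. 6 amounts to estimating a probability density and measuring the mass beyond µ + 7σ. A minimal sketch on a synthetic heavy-tailed sample (a lognormal surrogate, our assumption, not the prey series itself):

```python
import numpy as np

rng = np.random.default_rng(2)
# Heavy right tail as a stand-in for the prey-density distribution at large theta.
x = rng.lognormal(mean=0.0, sigma=1.2, size=200_000)

mu, sd = x.mean(), x.std()
threshold = mu + 7.0 * sd
tail_fraction = np.mean(x > threshold)      # probability mass in the extreme tail

# Normalised histogram as a simple density estimate (cf. the PDFs of Fig. 6).
density, edges = np.histogram(x, bins=200, density=True)

print(round(threshold, 2), tail_fraction)
```

Comparing `tail_fraction` across parameter values is precisely the "flatter, further-extending tail" comparison made in the text.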
In order to ascertain that the extreme values are uncorrelated and aperiodic, we examine the time intervals between successive extreme events in the population. Figure 7 (left panel) shows representative results for the return map of the intervals between extreme events in the prey population, and it clearly shows no regularity. Moreover, the probability distribution of the intervals decays exponentially, as clearly evident from the right panel of the figure, and so the extreme population build-ups are uncorrelated, aperiodic events. In order to further quantify how the Allee effect influences extreme events, we estimate the probability of obtaining large deviations in a large sample of initial states tracked over a long period of time. We denote this probability by P_ext, and calculate it by following a large set of random initial conditions and recording the number of occurrences of the population crossing the threshold value in a prescribed period of time, with this time window being several orders of magnitude larger than the mean oscillation period. This time-averaged and ensemble-averaged quantity yields a good estimate of P_ext. With no loss of generality, we choose the threshold for determining extreme events to be µ + 7σ, i.e. when the variable crosses the 7σ level it is labelled extreme. This probability, estimated for all three populations, is shown in Fig. 8. First, it is clear from Fig. 8 that the probability of occurrence of extreme events is the lowest for the vegetation, and the highest for the predator population, for any value of the Allee parameter θ ∈ [0, θ_c). We also observe that for values of the Allee parameter θ lower than a critical value, denoted by θ_c^u, the probability of obtaining extreme events in the vegetation population tends to zero. Beyond the critical value θ_c^u, the vegetation population starts to exhibit extreme events. A similar trend emerges for the prey population.
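The aperiodicity check of Fig. 7 can be sketched as follows: for a memoryless (Poisson) event process the inter-event intervals are exponentially distributed, so successive intervals are uncorrelated and the standard deviation of the intervals matches their mean. We illustrate on synthetic exponential intervals, an assumption standing in for the measured intervals between threshold crossings.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic inter-event intervals drawn from an exponential law.
intervals = rng.exponential(scale=50.0, size=5_000)

# Return map (t_i, t_{i+1}): a memoryless process shows no structure,
# i.e. successive intervals are uncorrelated.
corr = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]

# For an exponential distribution the std/mean ratio is 1.
ratio = intervals.std() / intervals.mean()

print(round(corr, 3), round(ratio, 3))
```

A scattered return map together with a near-unit std/mean ratio is the signature of uncorrelated, aperiodic extreme events claimed in the text.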
However, the critical value of the Allee parameter necessary for the emergence of a finite probability of extreme events in the prey, denoted by θ_c^v, is much smaller than θ_c^u. So for the prey population, a weaker Allee effect can induce extreme events. Note that several mechanisms have been proposed for the generation of extreme events in deterministic dynamical systems, which have typically been excitable systems. These include interior crises, Pomeau-Manneville intermittency, and the breakdown of quasiperiodic motion. However, the extreme events generated by these mechanisms typically occur at very specific critical points in parameter space, or in narrow windows around them. The first important difference in our system is that the extreme events do not emerge only at some special parameter values; rather, there is a broad range of the Allee parameter where extreme events have a very significant presence. This makes our extreme-event phenomenon more robust, and thus increases its potential observability. It also rules out the intermittency-induced mechanisms that have been proposed, as is evident from the general lack of sudden expansion in attractor size in our bifurcation diagram (Fig. 3). Interestingly, however, the system does have one parameter window where there is attractor widening, and this gives rise to a markedly enhanced extreme-event count: the peak observed in Fig. 8 can be directly correlated with a sudden attractor widening, leading to a marked increase of extreme events in a narrow window of parameter space located near the crisis (see Fig. 9). Additionally, for a narrow window around θ ∼ 0.02, the emergent dynamics is periodic, so the large deviations are no longer uncorrelated and are not extreme events in the true sense. Lastly, we note that the predator population shows extreme events for all values of θ ∈ [0, θ_c). So the predator population is the most prone to experiencing unusually large deviations from the mean.
We also observe that the probability of occurrence of extreme events in the predator population is not affected significantly by the Allee effect. This is in marked contrast to the case of vegetation and prey, where the Allee effect crucially influences the advent of extreme events. Also, for the predator population there is no marked transition from zero to finite P_ext under increasing Allee parameter θ, as evident for the vegetation and prey populations.

Effect of noise in the system: mitigation of blow-ups and extreme events
Most realistic population models are not deterministic, as noise is ubiquitous in nature. Stochasticity must therefore be incorporated into the models, since many external influences, such as migration, diversity and environmental fluctuation, are present in real ecosystems. For instance, important earlier works have studied the role of environmental-fluctuation-dependent fitness in population dynamics, namely parametric noise, in extinction and persistence 32 . In this work, we explore the interplay of stochasticity and extreme events by investigating the system under the influence of additive noise. It is of much relevance to explore whether noise has any significant effect on the dynamics, for instance on the unbounded vegetation growth under increasing Allee effect and on the emergent chaotic attractors. The other question of utmost interest is the following: does noise mitigate or aid the emergence of extreme events? This is the focus of our investigation in this section. Specifically, we investigate the dynamics of the three-species system (1) under additive random noise ξ(t), given by the following dynamical equations:

$$\dot{u} = f(u, v, w) + \xi_1(t),$$
$$\dot{v} = g(u, v, w) + \xi_2(t), \qquad (3)$$
$$\dot{w} = h(u, v, w) + \xi_3(t),$$

where f(u, v, w), g(u, v, w) and h(u, v, w) have the functional forms given in Eq.
(1), and ξ_i(t), i = 1, 2, 3, are Gaussian white noises with zero mean and correlation function $\langle \xi_i(t)\, \xi_j(t') \rangle = \sigma\, \delta(t - t')\, \delta_{ij}$ for i, j = 1, 2, 3. Here σ represents the strength of the noise. To simulate the dynamics of this noise-driven system, we numerically solve the stochastic differential system by the explicit Euler-Maruyama scheme. First we investigate whether the boundedness of the system (3) is affected by the presence of additive noise. Recall that the system without noise blows up as the magnitude of the Allee parameter θ increases beyond a threshold (cf. Fig. 1). It is therefore important to examine whether noise suppresses or enhances the probability of unbounded growth in the system under the Allee effect. Figure 10 displays representative results for the probability of blow-ups in the vegetation population, estimated for varying noise strengths σ, for Allee parameter θ = 0.1. Note that the system without noise (i.e. σ = 0) had a significant probability of unbounded vegetation growth for this value of θ (see Fig. 1). It is clearly evident from the results in Fig. 10 that the probability of blow-ups for vegetation rapidly decreases to zero with increasing magnitude of the noise strength. So the presence of noise helps to keep the populations bounded, indicating the constructive role of noise in the stability of this three-species system.

Figure 7. (Left) Return map of t_{i+1} versus t_i, and (right) probability distribution of t_i fitted with an exponentially decaying function, where t_i is the ith interval between successive extreme events, an extreme event being defined as the instant when the prey population crosses the µ + 7σ line (cf. Fig. 2). Here θ = 0.024.

Figure 8. Probability of obtaining an extreme event in unit time (P_ext), with respect to Allee parameter θ, estimated by sampling a time series of length T = 5000 and averaging over 500 random initial states. Here we consider that an extreme event occurs when a population level crosses the threshold µ + 7σ. P_ext for vegetation, prey and predator is displayed in blue, red and black respectively. Note that there exists a narrow periodic window around θ ∼ 0.02 (cf. Fig. 9), so the large deviations in this window of the Allee parameter are not associated with true extreme events, as they occur periodically.

Next we examine how noise influences the extreme events that were observed to emerge in this system in the presence of the Allee effect. We first examine the temporal evolution of the population densities and their corresponding phase-space attractors. In Fig. 11 we display illustrative results for the time series and the phase-space attractors of the system governed by Eq. (3), for different noise strengths σ. When the noise strength is very low (σ ∼ 10⁻⁴), all population densities fluctuate in an aperiodic manner and settle down to a chaotic attractor, as shown in Fig. 11a. Extreme events also occur in all populations for very low noise strengths, as is clear from the figure, where the vegetation, prey and predator populations can be seen to cross the µ + 7σ threshold. However, with increasing noise strength, these extreme events disappear from all populations in the system, and the populations are seen to fluctuate in a more regular, almost-periodic manner (see Fig. 11b). On further increase of the noise strength, all populations settle down to a quasi-fixed state, as evident from Fig. 11c. This suggests that noise transforms the chaotic behaviour of the system into a noisy fixed point. Importantly, the very long time intervals we sampled did not yield a single extreme event. That is, under increased noise strengths there is no evidence of extreme events any more, in the vegetation, prey or predator populations.
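The Euler-Maruyama scheme described above can be sketched as follows: the drift is that of system (1), with additive noise on each species. Parameter values follow the text, while the step size, initial state, run length, the clipping of densities at zero, and the per-step noise amplitude convention (σ_n√dt, which may differ from the paper's σ normalisation by a square) are our own illustrative assumptions.

```python
import numpy as np

# Parameter values from the text.
a, b, c, w_star = 1.0, 1.0, 10.0, 0.006
alpha1, alpha2, k = 0.5, 1.0, 0.05

def drift(state, theta):
    """Deterministic part of system (3), i.e. the right-hand side of (1)."""
    u, v, w = state
    f1 = u * v / (1.0 + k * u)
    A = v / (v + theta) if (v + theta) > 0 else 0.0
    return np.array([
        a * u - alpha1 * f1,
        alpha1 * f1 * A - b * v - alpha2 * v * w,
        alpha2 * v * w - c * (w - w_star),
    ])

def euler_maruyama(state, theta, sigma_n, dt=0.005, steps=20_000, seed=0):
    """Explicit Euler-Maruyama: x += f(x) dt + sigma_n * sqrt(dt) * N(0, 1)."""
    rng = np.random.default_rng(seed)
    sqrt_dt = np.sqrt(dt)
    traj = np.empty((steps, 3))
    for i in range(steps):
        state = state + drift(state, theta) * dt \
                + sigma_n * sqrt_dt * rng.standard_normal(3)
        state = np.maximum(state, 0.0)   # densities stay non-negative (our choice)
        traj[i] = state
    return traj

# Strong noise at theta = 0.024 (cf. Fig. 11): the paper reports a quasi-fixed state.
traj_noise = euler_maruyama(np.array([1.0, 0.5, 0.5]), theta=0.024, sigma_n=0.1)
print(traj_noise[-1])
```

Sweeping `sigma_n` from ~10⁻⁴ upwards and re-running the extreme-event detection on the resulting trajectories is how the suppression of P_ext with noise, discussed next, would be reproduced.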
Thus we arrive at the following important conclusion: noise leads to quasi-fixed (non-zero) populations and the suppression of extreme events in this three-species system. Further, in order to quantify how noise influences the emergence of extreme events, we again estimate the probability of obtaining extreme events, P_ext, under varying noise strength σ. The results are exhibited in Fig. 12. It is clear from the figure that the probability of obtaining extreme events is the lowest for the vegetation and the highest for the predator population. This is consistent with the observations for the system without noise (see Fig. 8). The significant new result here is that the probability of obtaining extreme events decreases to zero with increasing noise strength σ. Therefore, in the presence of sufficiently strong additive noise, extreme events are suppressed in the vegetation, prey and predator populations. This points to the novel finding that stochasticity can lead to the mitigation of extreme events.

Discussion
In summary, we explored the dynamics of a three-species trophic system incorporating the Allee effect in the prey population. Our focus is on the emergence of extreme events in the system. In particular, we address the significant question of whether the Allee effect suppresses or enhances extreme events. Our key observations are as follows. First, under the Allee effect the regular periodic dynamics becomes chaotic, as evident from the emergence of chaotic attractors for increasing Allee parameter θ. Further, we find that the system exhibits unbounded growth in the vegetation population (a "blow-up") after a critical value of the Allee parameter.
The most significant result is the observation of a critical Allee parameter beyond which the probability of obtaining extreme events becomes non-zero for all three population densities. Though the emergence of extreme events in the predator population is not affected much by the Allee effect, the prey population shows a sharp increase in the probability of obtaining extreme events after a threshold value of the Allee parameter θ , and the vegetation population also yields extreme events for sufficiently strong Allee effect. An interesting open problem in this context would be to check the observation that the extreme events in the predator population are more pronounced than in prey and vegetation across other models, in order to establish the generality of this important trend in a larger class of models. Lastly we consider the influence of additive noise on extreme events. First, we find that noise tames the unbounded vegetation growth induced by Allee effect. More interestingly, we demonstrate that stochasticity drastically diminishes the probability of extreme events in all three populations. In fact for sufficiently high noise, we do not observe any more extreme events in the system. This indicates that noise can mitigate extreme events, and has potentially important impact on the observability of extreme events in naturally occurring systems. Figure 1 . 1Probability of unbounded growth of the vegetation population (u) with respect to θ . Here an explosive blow-up is considered to have occurred when the vegetation population exceeds a value of 10 3 . The probability is estimated from a sample of 10 3 initial states randomly distributed in a hyper-cube ( u ∈ [0 : 4], v ∈ [0 : 2], w ∈ [0 : 5] ) in phase space. 
Interestingly, the slight dip in the estimated probability of unbounded growth in the narrow window around θ ∼ 0.031 arises from unbounded orbits co-existing with a small set of initial states that evolve to bounded periodic orbits.

Scientific Reports | (2021) 11:20913 | https://doi.org/10.1038/s41598-021-00174-0

Figure 2. Left panels display the time series for the vegetation (u), prey (v) and predator (w) populations in the system given by Eq. (1), and the right panels display the corresponding phase space attractor. The Allee effect parameter is θ = 0 (a,b), θ = 0.01776 (c,d), and θ = 0.02475 (e,f). The red dashed line shows the mean μ, and the black dashed line represents the threshold level of 7 standard deviations above the mean (i.e. μ + 7σ).

Figure 3. Bifurcation diagram of prey populations with respect to the Allee parameter θ. Here we display the local maxima of the prey population. The parameter values in Eq. (1) are as mentioned in the text.

Figure 4. Time instances at which a population exceeds the 7σ threshold, for different values of the Allee parameter θ, for the case of (top to bottom) vegetation, prey and predator populations.

Figure 5. Global maximum of vegetation u_max (blue), prey v_max (red) and predator (black) populations, with respect to the Allee parameter θ, scaled by their values obtained for θ = 0. Clearly, when the Allee parameter θ is sufficiently large, the maximum prey and predator populations are an order of magnitude larger than those obtained in systems with no Allee effect.

Figure 6. Probability Density Function (PDF) of the prey population v, for the system given by Eq. (1), with increasing magnitude of θ: (a) θ = 0, (b) θ = 0.015 and (c) θ = 0.02. The threshold for extreme events, μ + 7σ, is denoted by the vertical red dashed line.

Figure 9. Bifurcation diagram of prey populations with respect to the Allee parameter, in the range θ ∈ [0.0189 : 0.0191]. Here we display the local maxima of the prey population. The parameter values in Eq. (1) are as mentioned in the text.

Figure 10. Probability of unbounded vegetation growth in the presence of additive noise, with respect to noise strength σ. As in Fig. 1, a blow-up is considered to occur when the vegetation population exceeds 10^3. Here the Allee parameter θ = 0.1, and the other system parameters are the same as in Fig. 1. The probability is estimated from a sample of 10^3 initial states randomly distributed in a hyper-cube (u ∈ [0:4], v ∈ [0:2], w ∈ [0:5]) in phase space.

Figure 11. Time series and phase dynamics of the system (3) with different σ: (a) σ = 10^-4, (b) σ = 10^-2 and (c) σ = 10^-1. We keep all other parameter values the same as before, except θ = 0.024.

© The Author(s) 2021

Data availability
Data will be made available on reasonable request.

Author contributions
S.S. conceived the problem, D.S. did all the simulations, D.S. and S.S.
analyzed the results and wrote the manuscript together.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to S.S.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

P_ext for vegetation, prey and predator populations are shown by blue, red and black colour respectively.
arXiv:2211.06424 (https://export.arxiv.org/pdf/2211.06424v1.pdf)
Mathematics Subject Classification (2000): 30C45, 30C50

G. M. Birajdar ([email protected]) and N. D. Sangle
School of Mathematics & Statistics, Dr. Vishwanath Karad MIT World Peace University, Pune 411038, India; Department of Mathematics, D. Y. Patil College of Engineering & Technology, Kasaba Bawda, Kolhapur (M.S.), India

Keywords: Harmonic, Univalent, Integral operator, Distortion bounds

Abstract. In this paper, we introduce the subclass SHP^{-m}(α, β) using an integral operator and give sufficient coefficient conditions for normalized harmonic univalent functions in the subclass SHP^{-m}(α, β). These conditions are also shown to be necessary when the coefficients are negative.

Introduction

The class S_H was investigated by Clunie and Sheil-Small [1]. They studied geometric subclasses and established some coefficient bounds. Several researchers have worked on the class S_H and its subclasses. By introducing new subclasses, Silverman [13], Silverman and Silvia [14], Jahangiri [7], Dixit and Porwal [4], and others presented a systematic study of harmonic univalent functions. Motivated by the research work of Jahangiri [2,3], Purohit et al. [6], Darus and Sangle [17], Ravindar et al. [16], Bhoosnurmath and Swamy [5], Yalcin [10], Al-Shaqsi et al. [11], Sangle and Birajdar [18], and Murugusundaramoorthy [15], we introduce new subclasses of harmonic mappings using an integral operator. We also determine extreme points and coefficient estimates for SHP^{-m}(α, β) and THP^{-m}(α, β).

Let A be the family of analytic functions defined in the unit disk U, and let A_0 be the class of all normalized analytic functions. Let the functions h ∈ A be of the form

  h(z) = z + Σ_{v=2}^{∞} a_v z^v.   (1.1)

The integral operator I^n of [3], applied to a function h as above, is defined by

  I^0 h(z) = h(z),
  I^1 h(z) = I h(z) = ∫_0^z h(t) t^{-1} dt,
  I^m h(z) = I(I^{m-1} h(z)), m ∈ N = {1, 2, 3, ...},

so that

  I^m h(z) = z + Σ_{v=2}^{∞} [v]^{-m} a_v z^v.   (1.2)

Harmonic functions can be expressed as f = h + ḡ, where h ∈ A_0 is given by (1.1) and g ∈ A has the power series expansion

  g(z) = Σ_{v=1}^{∞} b_v z^v, |b_1| < 1.

Clunie and Sheil-Small [1] studied functions of the form f = h + ḡ that are locally univalent, sense-preserving and harmonic in U. A sufficient condition for the function f to be univalent in U is |h′(z)| ≥ |g′(z)| in U. A function f = h + ḡ is harmonic starlike [9] for |z| = r < 1 if

  ∂/∂θ [arg f(re^{iθ})] = Re{ (z h′(z) − z̄ ḡ′(z)) / (h(z) + ḡ(z)) } > 0.

The integral operator of [3] is extended to harmonic functions f by

  I^m f(z) = I^m h(z) + (−1)^m I^m ḡ(z),   (1.3)

where I^m is defined by (1.2). Now, for 0 ≤ α < 1, m ∈ N_0 and z ∈ U, let SHP^{-m}(α, β) denote the family of harmonic univalent functions f = h + ḡ with

  h(z) = z + Σ_{v=2}^{∞} a_v z^v, g(z) = Σ_{v=1}^{∞} b_v z^v, |b_1| < 1,   (1.4)

satisfying

  Re{ (1 − α) [I^m h(z) + (−1)^m I^m ḡ(z)]/z + α [I^m h(z) + (−1)^m I^m ḡ(z)]′ } ≥ β.   (1.5)

We further denote by THP^{-m}(α, β) the subclass of SHP^{-m}(α, β) consisting of harmonic functions f = h + ḡ such that h and g are of the form

  h(z) = z − Σ_{v=2}^{∞} |a_v| z^v and g(z) = Σ_{v=1}^{∞} |b_v| z^v.   (1.6)

In this paper, we give a sufficient condition for functions f = h + ḡ to be in the subclass SHP^{-m}(α, β), and show that this coefficient condition is also necessary for functions in the class THP^{-m}(α, β). Coefficient and distortion bounds, extreme points, convolution conditions and convex combinations for this class are obtained.

Main Results

We begin by proving some sharp coefficient inequalities contained in the following theorem.

Theorem 2.1. Let the function f = h + ḡ be such that h and g are given by (1.6), and furthermore let

  Σ_{v=1}^{∞} [v]^{-m} (1 − α + αv) (|a_v| + |b_v|) ≤ 1 − β.   (2.1)

Then f(z) is harmonic univalent and sense-preserving in U, and f(z) ∈ SHP^{-m}(α, β).
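The coefficient functional on the left-hand side of condition (2.1) is easy to evaluate numerically. The sketch below reads [v]^{-m} as v^{-m}, consistent with the Salagean integral operator coefficients in (1.2); the dictionary representation of the coefficient sequences (mapping an index v to a_v or b_v, with unlisted coefficients taken as zero) is a hypothetical convenience for illustration, not part of the paper.

```python
def coefficient_sum(a, b, m, alpha):
    """Left-hand side of condition (2.1):
    sum over v of v**(-m) * (1 - alpha + alpha*v) * (|a_v| + |b_v|),
    with a and b given as {index v: coefficient} dictionaries."""
    weight = lambda v: v ** (-m) * (1 - alpha + alpha * v)
    total = sum(weight(v) * abs(av) for v, av in a.items())
    total += sum(weight(v) * abs(bv) for v, bv in b.items())
    return total

def satisfies_condition(a, b, m, alpha, beta):
    """Check the sufficient condition of Theorem 2.1."""
    return coefficient_sum(a, b, m, alpha) <= 1 - beta
```

For example, with m = 1, α = 0.5, a_2 = 0.2 and b_1 = 0.25, the weights are 2^{-1}(1 + α) = 0.75 and 1, giving a sum of 0.4, so condition (2.1) holds for any β ≤ 0.6.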
Proof: For |z_1| ≤ |z_2| < 1, we have by (2.1)

  |f(z_1) − f(z_2)| ≥ |h(z_1) − h(z_2)| − |g(z_1) − g(z_2)|
   ≥ |z_1 − z_2| [1 − Σ_{v=2}^{∞} v |a_v| |z_2|^{v-1} − Σ_{v=1}^{∞} v |b_v| |z_2|^{v-1}]
   ≥ |z_1 − z_2| [1 − Σ_{v=2}^{∞} v (|a_v| + |b_v|) |z_2|^{v-1} − |b_1|]
   ≥ |z_1 − z_2| [1 − Σ_{v=2}^{∞} v (|a_v| + |b_v|) − |b_1|]
   ≥ |z_1 − z_2| [1 − Σ_{v=2}^{∞} [v]^{-m} (1 − α + αv) (|a_v| + |b_v|) − |b_1|]
   ≥ |z_1 − z_2| [1 − (1 − β)] = β |z_1 − z_2| ≥ 0.

Hence, f(z) is univalent in U. Moreover, f(z) is sense-preserving in U, because

  |h′(z)| ≥ 1 − Σ_{v=2}^{∞} v |a_v| |z|^{v-1} > 1 − Σ_{v=2}^{∞} v |a_v|
   ≥ 1 − Σ_{v=2}^{∞} [v]^{-m} (1 − α + αv) |a_v|
   ≥ Σ_{v=1}^{∞} [v]^{-m} (1 − α + αv) |b_v|
   ≥ Σ_{v=1}^{∞} v |b_v| |z|^{v-1} ≥ |g′(z)|.

Now we show that f(z) ∈ SHP^{-m}(α, β). Using the fact that Re{w} ≥ β if and only if |1 − β + w| ≥ |1 + β − w|, it suffices to show that

  |1 − β + A(z)| − |1 + β − A(z)| ≥ 0,   (2.2)

where A(z) = (1 − α)[I^m h(z) + (−1)^m I^m ḡ(z)]/z + α [I^m h(z) + (−1)^m I^m ḡ(z)]′. Substituting the series expansions, the left-hand side of (2.2) equals

  |2 − β + Σ_{v=2}^{∞} [v]^{-m}(1 − α + αv) a_v z^{v-1} + (−1)^m Σ_{v=1}^{∞} [v]^{-m}(1 − α + αv) b_v z̄^{v-1}|
   − |β − Σ_{v=2}^{∞} [v]^{-m}(1 − α + αv) a_v z^{v-1} − (−1)^m Σ_{v=1}^{∞} [v]^{-m}(1 − α + αv) b_v z̄^{v-1}|
  ≥ 2(1 − β) − [Σ_{v=2}^{∞} [v]^{-m}(1 − α + αv) |a_v| |z|^{v-1} + Σ_{v=1}^{∞} [v]^{-m}(1 − α + αv) |b_v| |z|^{v-1}]
  > 2(1 − β) − [Σ_{v=2}^{∞} [v]^{-m}(1 − α + αv) |a_v| + Σ_{v=1}^{∞} [v]^{-m}(1 − α + αv) |b_v|].

This last expression is non-negative by (2.1). The harmonic mappings

  f(z) = z + Σ_{v=2}^{∞} [(1 − β) / ([v]^{-m}(1 − α + αv))] x_v z^v + Σ_{v=1}^{∞} [(1 − β) / ([v]^{-m}(1 − α + αv))] ȳ_v z̄^v,   (2.3)

where Σ_{v=2}^{∞} |x_v| + Σ_{v=1}^{∞} |y_v| = 1, show that the coefficient bound given by (2.1) is sharp. The functions of the form (2.3) are in SHP^{-m}(α, β) because

  Σ_{v=1}^{∞} [v]^{-m}(1 − α + αv)(|a_v| + |b_v|) = 1 + (1 − β) [Σ_{v=2}^{∞} |x_v| + Σ_{v=1}^{∞} |y_v|] = 2 − β.

Theorem 2.2.
Let the function f = h + g be such that h and g are given by (1.6) then f (z) ∈ T HP −m (α, β) if and only if ∞ v≥1 [v] m q (1 − α + αv) (|a v | + |b v |) ≤ 2 − β.Re (1 − α) I m h(z) + (−1) m I m g(z) z + α I m h(z) + (−1) m I m g(z) ′ > β or, equivalently Re    1 − ∞ v≥2 [v] −m (1 − α + αu) |a v | z v−1 − ∞ v≥1 [v] −m (1 − α + αv) |b v | z v−1    > β. If we choose z to be real and z → 1 − , we get 1 − ∞ v≥2 [v] −m (1 − α + αv) |a v | − ∞ v≥1 [v] −m (1 − α + αv) |b v | ≥ β this is precisely the assertion of (2.4). Theorem 2.3. If f (z) ∈ T HP −m (α, β) , |z| = r < 1 then |f (z)| ≤ (1 + |b 1 |) r + 1 [2] −m (1 + α) (1 − |b 1 | − β) r 2 and |f (z)| ≥ (1 − |b 1 |) r − 1 [2] −m (1 + α) (1 − |b 1 | − β) r 2 (2.5) Proof: Taking the absolute values of f (z), we obtain |f (z)| ≤ (1 + |b 1 |) r + ∞ v≥2 (|a v | + |b v |) r v ≤ (1 + |b 1 |) r + ∞ v≥2 (|a v | + |b v |) r 2 ≤ (1 + |b 1 |) r + 1 [v] −m (1 − α + αv) ∞ v≥2 [v] −m (1 − α + αv) (|a v | + |b v |) r 2 ≤ (1 + |b 1 |) r + 1 [2] −m (1 + α) ∞ v≥2 [v] −m (1 − α + αv) (|a v | + |b v |) r 2 ≤ (1 + |b 1 |) r + 1 [2] −m (1 + α) (1 − |b 1 | − β) r 2 and |f (z)| ≥ (1 − |b 1 |) r − ∞ v≥2 (|a v | + |b v |) r v ≥ (1 − |b 1 |) r − ∞ v≥2 (|a v | + |b v |) r 2 ≥ (1 − |b 1 |) r − 1 [v] −m (1 − α + αv) ∞ v≥2 [v] −m (1 − α + αv) (|a v | + |b v |) r 2 ≥ (1 − |b 1 |) r − 1 [2] −m (1 + α) ∞ v≥2 [v] −m (1 − α + αv) (|a v | + |b v |) r 2 ≥ (1 − |b 1 |) r − 1 [2] −m (1 + α) (1 − |b 1 | − β) r 2 . For the functions f (z) = z + |b 1 | z − 1 [2] m q (1 + α) (1 − |b 1 | − β) z 2 and f (z) = z − |b 1 | z − 1 [2] −m (1 + α) (1 − |b 1 | − β) z 2 . For |b 1 | ≤ 1 − β shows that the bounds given in Theorem 2.3 are sharp. Next, we determine the extreme points of the closed convex hulls of T HP −m (α, β) denoted by clcoT HP −m (α, β). Theorem 2.4. A function f (z) ∈ clco T HP −m (α, β) if and only if f (z) = ∞ v≥1 (µ v h v + η v g v ) (2.6) h 1 (z) = z where h v (z) = z − 1 − β [v] −m (1 − α + αv) z v , (v = 2, 3, ...) 
g v (z) = z − 1 − β [v] −m (1 − α + αv) z v , (v = 1, 2, 3, ...) ∞ v≥1 (µ v + η v ) = 1, µ v ≥ 0. In particular, the extreme points of T HP −m (α, β) are {h v } and {g v }. Proof: For the functions f (z) of the form (2.6), we have f (z) = ∞ v≥1 (µ v h v + η v g v ) = ∞ v≥1 (µ v + η v ) z− ∞ v≥2 1 − β [v] −m (1 − α + αv) µ v z v + (−1) m ∞ v≥1 1 − β [v] −m (1 − α + αv) η v z v then ∞ v≥2 [v] −m (1−α+αv) 1−β 1−β [v] −m (1−α+αv) µ v + ∞ v≥1 [v] −m (1−α+αv) 1−β 1−β [v] −m (1−α+αv) η v = ∞ v≥2 µ v + ∞ v≥1 η v = 1 − µ 1 ≤ 1 and so f (z) ∈ clco T HP −m (α, β). Conversely ,suppose that f (z) ∈ clco T HP −m (α, β).consider µ v = [v] −m (1 − α + αv) 1 − β |a v | , (v = 2, 3, ...) and η v = [v] −m (1 − α + αv) 1 − β |b v | , (v = 1, 2, 3, ...) Then note that by Theorem 2.2 0 ≤ µ v ≤ 1 (v = 2, 3, ...), and 0 ≤ η v ≤ 1 (v = 1, 2, 3, ...) We define µ 1 = 1 − ∞ v≥2 µ v + ∞ v≥1 η v and note that, by Theorem 2.2 µ 1 ≥ 0. Consequently, we obtain f (z) = ∞ v≥1 (µ v h v + η v g v ) as required. Using Theorem 2.2, it is easily seen that T HP −m (α, β) is convex and closed, so clcoT HP −m (α, β) = T HP −m (α, β).Then the statement of Theorem 2.4 is really for f (z) ∈ T HP −m (α, β). Theorem 2.5. Each member of T HP −m (α, β) maps U on to a starlike domain. Proof: We only need to show that if f (z) ∈ T HP −m (α, β) then Re zh ′ (z) − zg ′ (z) h(z) + g(z) > 0. Using the fact that Re {w} > 0 if and only if |1 + w| > |1 − w|,it suffices to show that h(z) + g(z) + zh ′ (z) − zg ′ (z) − h(z) + g(z) + zh ′ (z) + zg ′ (z) = 2z − ∞ v≥2 (v + 1) |a v | z v + ∞ v≥1 (v − 1) |b v | z −v − ∞ v≥2 (v − 1) |a v | z v − ∞ v≥1 (v + 1) |b v | z −v ≥ 2 |z| − ∞ v≥2 (v + 1) |a v | z v + ∞ v≥1 (v − 1) |b v | z −v − ∞ v≥2 (v − 1) |a v | z v − ∞ v≥1 (v + 1) |b v | z −v ≥ 2 |z|    1 −   ∞ v≥2 v |a v | |z| v−1 + ∞ v≥1 v |b v | |z| v−1      ≥ 2 |z|    1 −   ∞ v≥2 [v] −m (1 − α + αv) |a v | + ∞ v≥1 [v] −m (1 − α + αv) |b v |      ≥ 2 |z| [1 − (1 − β)] = 2 |z| β ≥ 0. Theorem 2.6. 
If f (z) ∈ T HP −m (α, β) then f (z) is convex in the disc |z| < min v 1 − β − |b 1 | v 1 v−1 , v = 2, 3, ..., 1 − β > |b 1 | Proof: Let f (z) ∈ T HP −m (α, β) and let r be fixed such that 0 < r < 1,then if r −1 f (rz) ∈ T HP −m (α, β) and we have ∞ v≥2 v 2 (|a v | + |b v |)r v−1 = ∞ v≥2 v (|a v | + |b v |) vr v−1 ≤ ∞ v≥2 [v] −m (1 − α + αv) (|a v | + |b v |) vr v−1 ≤ 1 − β − |b 1 | . Provided vr v−1 ≤ 1 − β − |b 1 |,which is true r < min v 1 − β − |b 1 | v 1 v−1 , v = 2, 3, ..., 1 − β > |b 1 | . Following Ruscheweyh [8], we call the set N δ f (z) =    G : G(z) = z − ∞ v≥2 |C v | z v − ∞ v≥1 |D v | z −v and ∞ v≥1 u (|a v − C v | + |b v − D v |) ≤ δ    (2.7) as the δ-neighborhood of f (z). From (2.7) we obtain ∞ v≥1 v (|a v − C v | + |b v − D v |) = |b 1 − D 1 | + ∞ u≥2 v (|a v − C v | + |b v − D v |) ≤ δ. (2.8) Theorem 2.7. Let f (z) ∈ T HP −m (α, β) and δ ≤ β.If G ∈ N δ (f ), then G is a harmonic starlike function. Proof: Let G(z) = z − ∞ v≥2 |C v | z v − ∞ v≥1 |D v | z v ∈ N δ f (z), we have ∞ v≥2 v (|C v | + |D v |) + |D 1 | ≤ ∞ v≥2 v (|a v − C v | + |b v − D v |) + ∞ v≥2 v (|a v | + |b v |) + |D 1 − b 1 | + |b 1 | ≤ ∞ v≥2 [v] −m (1 − α + αv) (|a v − C v | + |b v − D v |) + |D 1 − b 1 | + |b 1 | + ∞ u≥2 [v] −m (1 − α + αv) (|a v | + |b v |) ≤ δ + |b 1 | + (1 − β − |b 1 |) ≤ 1. Hence, G(z) is a harmonic starlike function. For next theorem, we require to define the convolution of two harmonic functions. For harmonic functions of the form f (z) = z − ∞ v≥2 |a v | z v − ∞ v≥1 |b v | z v and G(z) = z − ∞ v≥2 |C v | z v − ∞ v≥1 |D v | z v we define the convolution of two harmonic functions f (z) and G(z) as (f * G) (z) = f (z) * G(z) = z − ∞ v≥2 |a v | |C v | z u − ∞ v≥1 |b v | |D v | z v Using above definition, we show that the class T HP −m (α, β) is closed under convolution. Theorem 2.8. 
For 0 ≤ α_1 ≤ α_2 and 0 ≤ β_1 ≤ β_2, let f(z) ∈ THP^{-m}(α_2, β_2) and G(z) ∈ THP^{-m}(α_1, β_1). Then

  (f * G)(z) ∈ THP^{-m}(α_2, β_2) ⊂ THP^{-m}(α_1, β_1).

Proof: Let

  f(z) = z − Σ_{v=2}^{∞} |a_v| z^v − Σ_{v=1}^{∞} |b_v| z̄^v ∈ THP^{-m}(α_2, β_2),
  G(z) = z − Σ_{v=2}^{∞} |C_v| z^v − Σ_{v=1}^{∞} |D_v| z̄^v ∈ THP^{-m}(α_1, β_1).

Then the convolution f * G is given by (2.9). We wish to show that the coefficients of f * G satisfy the required condition given in Theorem 2.2. For G(z) ∈ THP^{-m}(α_1, β_1), we note that |C_v| < 1 and |D_v| < 1. Now, for the convolution function f * G, we obtain

  Σ_{v=2}^{∞} [v]^{-m}(1 − α_1 + vα_1)/(1 − β_1) |a_v||C_v| + Σ_{v=1}^{∞} [v]^{-m}(1 − α_1 + vα_1)/(1 − β_1) |b_v||D_v|
   ≤ Σ_{v=2}^{∞} [v]^{-m}(1 − α_1 + vα_1)/(1 − β_1) |a_v| + Σ_{v=1}^{∞} [v]^{-m}(1 − α_1 + vα_1)/(1 − β_1) |b_v|
   ≤ Σ_{v=2}^{∞} [v]^{-m}(1 − α_2 + vα_2)/(1 − β_2) |a_v| + Σ_{v=1}^{∞} [v]^{-m}(1 − α_2 + vα_2)/(1 − β_2) |b_v|
   ≤ 1.

Since 0 ≤ α_1 ≤ α_2 and 0 ≤ β_1 ≤ β_2, it follows that (f * G)(z) ∈ THP^{-m}(α_2, β_2) ⊂ THP^{-m}(α_1, β_1).

The 'if part' follows from Theorem 2.1, upon noting that if the functions h(z) and g(z) in f(z) ∈ SHP^{-m}(α, β) are of the form (1.6), then f(z) ∈ THP^{-m}(α, β). For the 'only if' part, we show that if f(z) ∈ THP^{-m}(α, β), then condition (2.4) holds. Note that a necessary and sufficient condition for f = h + ḡ given by (1.6) to be in THP^{-m}(α, β) is that

  Re{ (1 − α) [I^m h(z) + (−1)^m I^m ḡ(z)]/z + α [I^m h(z) + (−1)^m I^m ḡ(z)]′ } > β.

References

1. J. Clunie and T. Sheil-Small, Harmonic univalent functions, Ann. Acad. Sci. Fenn. Ser. A I Math. 9 (1984), 3-25.
2. J. M. Jahangiri, Harmonic functions starlike in the unit disk, J. Math. Anal. Appl. 235(2) (1999), 470-477.
3. G. S. Salagean, Subclass of univalent functions, Lecture Notes in Math. 1013, Springer-Verlag (1983), 362-372.
4. K. K. Dixit and S. Porwal, A subclass of harmonic univalent functions with positive coefficients, Tamkang J. Math. 41(3) (2010), 261-269.
5. S. S. Bhoosnurmath and S. R. Swamy, Certain classes of analytic functions with negative coefficient, Indian J. Math. 27 (1985), 89-98.
6. S. D. Purohit and R. K. Raina, Certain subclasses of analytic functions associated with fractional q-calculus operators, Math. Scand. 109(1) (2011), 55-70.
7. J. M. Jahangiri, Harmonic functions starlike in the unit disc, J. Math. Anal. Appl. 235 (1999), 470-477.
8. S. Ruscheweyh, Neighbourhoods of univalent functions, Proc. Amer. Math. Soc. 81 (1981), 521-527.
9. T. Sheil-Small, Constants for planar harmonic mappings, J. London Math. Soc. 42(2) (1990), 237-248.
10. S. Yalcin, On certain harmonic univalent functions defined by Salagean derivative, Soochow J. Math. 31(3) (2005), 321-331.
11. K. Al-Shaqsi, M. Darus and O. A. Fadipe-Joseph, A new subclass of Salagean-type harmonic univalent functions, Abstract and Applied Analysis, Article ID 821531 (2010), 12 pages.
12. G. S. Salagean, Subclasses of univalent functions, Complex Analysis - Fifth Romanian-Finnish Seminar, Bucharest 1 (1983), 362-372.
13. H. Silverman, Harmonic univalent functions with negative coefficients, J. Math. Anal. Appl. 220 (1998), 283-289.
14. H. Silverman and E. M. Silvia, Subclasses of harmonic univalent functions, New Zealand J. Math. 28 (1999), 275-284.
15. G. Murugusundaramoorthy, Certain subclasses of starlike harmonic functions associated with a convolution structure, Int. J. Open Problems Complex Analysis 2(1) (2010), 1-13.
16. B. Ravindar, R. B. Sharma and N. Magesh, On certain subclass of harmonic univalent functions defined by q-differential operator, J. Mech. Cont. Math. Sci. 14(6) (2019), 45-53.
17. M. Darus and N. D. Sangle, On certain class of harmonic univalent functions defined by generalized derivative operator, Int. J. Open Problems Compt. Math. 4(2) (2011), 83-96.
18. N. D. Sangle and G. M. Birajdar, Certain subclass of analytic function with negative coefficients defined by Catas operator, Indian Journal of Mathematics (IJM) 62(3) (2020), 335-353.
[]
[ "V2X Misbehavior in Maneuver Sharing and Coordination Service: Considerations for Standardization", "V2X Misbehavior in Maneuver Sharing and Coordination Service: Considerations for Standardization" ]
[ "Jean-Philippe Monteuuis [email protected] \nQualcomm Technologies, Inc. Boxborough\nMAUSA\n", "Jonathan Petit [email protected] \nQualcomm Technologies, Inc. Boxborough\nMAUSA\n", "Mohammad Raashid Ansari [email protected] \nQualcomm Technologies, Inc. Boxborough\nMAUSA\n", "Cong Chen [email protected] \nQualcomm Technologies, Inc. Boxborough\nMAUSA\n", "Seung Yang [email protected] \nQualcomm Technologies, Inc. Boxborough\nMAUSA\n" ]
[ "Qualcomm Technologies, Inc. Boxborough\nMAUSA", "Qualcomm Technologies, Inc. Boxborough\nMAUSA", "Qualcomm Technologies, Inc. Boxborough\nMAUSA", "Qualcomm Technologies, Inc. Boxborough\nMAUSA", "Qualcomm Technologies, Inc. Boxborough\nMAUSA" ]
[]
Connected and Automated Vehicles (CAV) use sensors and wireless communication to improve road safety and efficiency. However, attackers may target Vehicle-to-Everything (V2X) communication. Indeed, an attacker may send authenticated-but-wrong data to send false location information, alert incorrect events, or report a bogus object endangering other CAVs' safety. Standardization Development Organizations (SDO) are currently working on developing security standards against such attacks. Unfortunately, current standardization efforts do not include misbehavior specifications for advanced V2X services such as Maneuver Sharing and Coordination Service (MSCS). This work assesses the security of MSC Messages (MSCM) and proposes inputs for consideration in existing standards.
10.1109/cscn57023.2022.10051093
[ "https://export.arxiv.org/pdf/2211.02579v2.pdf" ]
253,370,226
2211.02579
20aca6b3d1ca92f5af02aa06dda5f103470e16c5
V2X Misbehavior in Maneuver Sharing and Coordination Service: Considerations for Standardization

Jean-Philippe Monteuuis [email protected] Qualcomm Technologies, Inc. Boxborough MAUSA Jonathan Petit [email protected] Qualcomm Technologies, Inc. Boxborough MAUSA Mohammad Raashid Ansari [email protected] Qualcomm Technologies, Inc. Boxborough MAUSA Cong Chen [email protected] Qualcomm Technologies, Inc. Boxborough MAUSA Seung Yang [email protected] Qualcomm Technologies, Inc. Boxborough MAUSA

Index Terms: CAV, V2X, maneuver sharing and coordination, misbehavior, threat analysis, risk assessment, standards

Connected and Automated Vehicles (CAV) use sensors and wireless communication to improve road safety and efficiency. However, attackers may target Vehicle-to-Everything (V2X) communication. Indeed, an attacker may send authenticated-but-wrong data to send false location information, alert incorrect events, or report a bogus object endangering other CAVs' safety. Standardization Development Organizations (SDO) are currently working on developing security standards against such attacks. Unfortunately, current standardization efforts do not include misbehavior specifications for advanced V2X services such as the Maneuver Sharing and Coordination Service (MSCS). This work assesses the security of MSC Messages (MSCM) and proposes inputs for consideration in existing standards.

I. INTRODUCTION

Thanks to Vehicle-to-Everything (V2X) communication, road safety can be significantly improved. It enables V2X-equipped vehicles to exchange their telematics information to create awareness, especially in non-line-of-sight (NLoS) conditions. Cooperative awareness is achieved by broadcasting a message called Basic Safety Message (BSM) or Cooperative Awareness Message (CAM). Both messages contain similar information (location and kinematic state of the sender) but are defined by two different standards.
BSM is defined in the Society of Automotive Engineers (SAE) J2735 standard [1] and CAM is defined in the European Telecommunications Standards Institute (ETSI) European Standard (EN) 302 637-2 standard [2] 1. However, BSMs do not provide the maneuver intent of the transmitting CAV. Therefore, a Maneuver Sharing and Coordination Service (MSCS) has been created to share maneuvers among V2X-enabled vehicles. Vehicles participating in the MSCS generate and consume Maneuver Sharing and Coordination Messages (MSCM) that are designed to complement the BSM/CAM service. BSMs and MSCMs are intended to be used to make driving decisions by an operator or an automated driving system. For this reason, these services are safety-critical. Thus, it is paramount that the information passed through these services is accurate. An attacker like the one discussed in Section IV can send incorrect data to negatively affect receivers' telematics awareness.

Fig. 1: Use Case for Maneuver Sharing and Coordination [6]

Commonly, attacks on BSMs jeopardize V2X applications [3]. Therefore, deploying a Misbehavior Detection System (MBDS) is mandatory to detect and protect against such attackers [4]. However, very little research exists on the security of the MSCS. This paper summarizes the results of a threat assessment (TA) on the MSCS defined in SAE J3186 [5]. Lastly, we discuss the gaps in MSCS and MBDS standards and propose items for consideration. The structure of the paper is as follows. Section II presents the standardization and academic efforts in the domain of V2X MSC and its security. Section III details the system model and the MSCM. Section IV describes the attacker model considered in our TA presented in Section V. Section VI discusses standardization and research's open challenges to achieve a secure MSCS. Finally, Section VII concludes this paper.

II. RELATED WORK

This section provides an overview of functional and security standards for MSCM.
Additionally, this section includes related academic work.

A. Standardization

This section briefly introduces existing and ongoing standards from a functional and security perspective.

1) Functional Standards: The notion of MSC has been introduced in the V2X community to share maneuver intent among CAVs and smart infrastructures. As a result, each CAV can enhance its driving tasks by considering the maneuver intent of neighboring CAVs. The supporting ETSI TR 103 460 [11] briefly mentioned the detection and reporting of MSCM, but the details are out-of-scope of version 1 of the ETSI TS 103 759.

B. Academic work

As far as we know, there is no research on MSCS security. However, the security of the planning stack (trajectory prediction) in autonomous driving is at an early research stage [12], [13], and, as explained in Section III, MSCS and planning are related. In [12], researchers fooled the planning stack by attacking the perception system. The considered attack scenario was as follows. The perception system of an automated vehicle incorrectly classifies an adversarial vehicle as a pedestrian due to an adversarial patch attack [14]. As a result, the planning stack loaded the prediction model used for pedestrian trajectory instead of the one used for vehicle trajectory. The outcome was erratic movements from the victim car and increased safety risk for the victim and its surrounding vehicles. To our knowledge, the prior art has not studied attacks targeting cooperative planning. Thus, our work aims to fill this gap.

III. SYSTEM MODEL

This section provides an overview of the MSC system, service, and messages. Lastly, this section describes the role of V2X applications and security in the context of MSC.

A. Maneuver Sharing and Coordination System

As seen in Figure 2, the MSC service requires an MSC system composed of a V2X On-Board Unit (OBU), sensors (e.g., camera), a mapping stack, a perception stack, a planning stack, and a control system (e.g., steering, brakes, engine).
The perception stack provides a view of the ego-vehicle (depicted as a truck) and its surroundings (e.g., other vehicles). The mapping stack contains map data (e.g., roads, road lanes, and buildings). Lastly, the planning stack decides and updates the vehicle's maneuvers.

Fig. 2: Maneuver Sharing and Coordination System

The planning stack relies on several components. First, the perception stack provides road obstacles to be avoided by the planning stack. Then, the mapping stack provides all the areas where the vehicle can navigate. Lastly, the V2X OBU provides the maneuver intent (MSCM) from surrounding CAVs and transmits the maneuver intent from the planning stack to the surrounding vehicles.

B. Maneuver Sharing and Coordination Service

MSCS is a service allowing connected (and automated) vehicles to share maneuvers. Unlike local planning, MSCS optimizes the planned trajectory by considering the planned trajectory of other vehicles. To achieve this goal, MSCS relies on two communication protocols: one for regular users (e.g., cars and trucks) and one for special vehicles (e.g., ambulances and police cars). The protocol for regular users has a request and response design. The requester will send a MSCM to the maneuver participants detailing all the maneuvers to be performed by each participant. Accordingly, each participant will send a response (a MSCM) to the request (agree or disagree). A single negative response ends the maneuver negotiation (protocol session). A unanimous positive answer leads to the start of the maneuver. For special vehicles, there is no maneuver negotiation. During the maneuver, participants may send a MSCM to cancel the ongoing maneuver (e.g., due to a flat tire). Lastly, a maneuver is considered complete as soon as each participant acknowledges (via a MSCM) the completion of its assigned maneuver. MSCS allows the requester to send its maneuver request via unicast, groupcast, or broadcast mode.
In unicast mode, the requester will negotiate the maneuver with a single vehicle. In groupcast mode, the requester can adjust the signal strength and orientate the signal beam to negotiate a maneuver with a subset of surrounding vehicles. Finally, in broadcast mode, the requester will negotiate a maneuver with all the vehicles within communication range.

C. Maneuver Sharing and Coordination Messages

The MSCM consists of an ITS Protocol Data Unit header and containers to include information about the transmitting station (vehicle or infrastructure), vehicles involved in the maneuver negotiation, and maneuvers. Figure 3 shows the full structure of a MSCM. The white fields are mandatory; greyed fields are situational (optional), depending on the current stage of the maneuver negotiation. For instance, a MSCM for requesting a maneuver contains fields describing the maneuver (Maneuver) and its participants. On the other hand, a MSCM for responding to a request will not contain fields describing the maneuver but fields answering this request (Reason Code). The Destination IDs are identifiers of all the vehicles involved in the maneuver negotiation. Note, Destination IDs include the identifier of vehicles requested to perform maneuvers (Executant IDs) and of vehicles that do not perform maneuvers (e.g., spectators). The Maneuver ID is the session's identifier used during the maneuver negotiation. A vehicle may be simultaneously involved in multiple maneuver sessions. Thus, the Maneuver ID helps to identify which session the vehicle is answering. The Maneuver Execution Status contains an integer describing the status (canceled or completed) of the approved and ongoing maneuver performed by the MSCM transmitter. Maneuver describes each maneuver (Sub-Maneuver) performed by an executant. For instance, a maneuver to overtake requires a first executant to move to a new lane (first Sub-Maneuver) to let a second executant overtake the first executant (second Sub-Maneuver).
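The request/response rule described above (unanimous agreement starts the maneuver; a single refusal ends the session) can be sketched as follows. The Python class is an illustrative stand-in for a few MSCM fields (Maneuver ID, Destination IDs, Executant IDs), not the ASN.1 encoding defined in SAE J3186, and treating a missing response as a refusal is an assumption of this sketch:

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    AGREE = "agree"
    DISAGREE = "disagree"

@dataclass
class ManeuverRequest:
    maneuver_id: int          # session identifier of the negotiation
    destination_ids: list     # every vehicle addressed by the request
    executant_ids: list       # subset of destinations asked to maneuver

def negotiation_outcome(request: ManeuverRequest, responses: dict) -> str:
    """Unanimous-agreement rule: every addressed vehicle must answer
    AGREE; a single DISAGREE (or, here, a missing answer) ends the
    protocol session."""
    for vid in request.destination_ids:
        if responses.get(vid) is not Response.AGREE:
            return "cancelled"
    return "start_maneuver"
```

For the overtaking example above, the requester would list both executants in `destination_ids` and start the maneuver only once both have agreed.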
Each Sub-Maneuver contains information related to its executant (e.g., Current Status) and its description (Target Road Resource, also known as TRR).

D. V2X Applications and Security

1) Applications: V2X applications rely on V2X messages as input to warn the driver or to control the vehicle dynamics to avoid road hazards or improve gas consumption. Several safety-critical ADAS applications would benefit from using MSCM, such as Cooperative Automated Overtaking (CAO) and Cooperative Automated Parking (COP) [6]. For example, CAVs performing a CAO will benefit from a MSCS. For instance, CAVs get richer information about the maneuver intent, such as the needed portion of the road, the starting time, and the maneuver duration. Also, the MSCS reduces the computation load in each vehicle. Indeed, each vehicle does not need to estimate the trajectory of each surrounding vehicle using its kinematics state. However, note that these applications are still unspecified from a standards perspective.

2) Security: The MSCS specification includes security requirements such as MSCM's integrity and transmitter's authenticity. Following IEEE 1609.2 [15], the message's integrity and the transmitter's authenticity are ensured by digitally signing every MSCM sent. Receivers use the transmitter's public key in the certificate to verify the digital signature attached to the MSCM.

IV. ATTACKER MODEL

To facilitate the threat assessment, we formalize the attacker model following the classification proposed in [16]. Internal versus External: The internal attacker is an authenticated network member that can communicate with other members. The external attacker cannot properly sign her messages, which limits the diversity of attacks. Nevertheless, she can eavesdrop on the V2X broadcast communication. Malicious versus Rational: A malicious attacker seeks no personal benefits from the attacks and aims to harm the members or the functionality of the network.
Hence, she may employ any means disregarding corresponding costs and consequences. On the contrary, a rational attacker seeks personal profit and is more predictable in terms of attack means and target. Active versus Passive: An active attacker can generate packets or signals to perform the attack, whereas a passive attacker only eavesdrops on the communication channel (i.e., wireless or in-vehicle wired network). Local versus Extended: An attacker can be limited in scope, even if she controls entities at an intersection (vehicles or base stations), which makes her local. An extended attacker controls scattered entities across the network, thus extending her scope. Direct versus Indirect: A direct attacker reaches its primary target directly, whereas an indirect attacker reaches its primary target through secondary targets. For instance, an indirect attacker may compromise an MSCM through a sensor attack via the planning stack. Figure 4 shows an example of an attack on the MSCS. This example assumes an attacker (white vehicle) proposes overlapping maneuvers to two different vehicles.

Fig. 4: Attack on CAO via MSCS to provoke a car collision

V. THREAT ASSESSMENT

A. Methodology

Several methodologies exist to assess the risk level of an attack. For example, attack trees were used to formalize attacks on V2V communication [18]. However, in our context, the large number of attacks makes the trees too large and unwieldy. Therefore, our methodology follows a matrix approach based on three criteria: reproducibility, impact, and stealthiness (see Table II). The attack reproducibility aims to assess the ease of replicating the attack. Then, the impact measures the impact of the attack on the victim's car and its surrounding vehicles (i.e., criticality and scalability). Lastly, the attack stealthiness assesses the ease by which a driver or a system can detect it. Accordingly, we assess the overall risk level for each threat based on the majority rating among the criteria.
For attacks with all three (High, Medium, Low) ratings in the criteria, the overall rating is taken as Medium.

B. Results

We performed a threat assessment of SAE J3186 [5], identifying 16 attacks (see Table IV). As a result, we found eight high, one medium, and seven low-risk attacks. Although there are more medium-risk and low-risk attacks, some attacks are very easily reproducible, and some have the capability of a very high impact on the MSCS. Hence, we selected a subset of such attacks and presented our findings in Table III. As described in Section IV, the attacker model considered can modify all of the MSCM's fields with any desired value. For example, one attack considers an attacker performing the maneuver request and response to create fake maneuvers using multiple pseudonym IDs (spoofing). This information is false, but a receiver cannot corroborate such information without other data sources. For instance, the victim could use its camera to check if the maneuvering vehicle exists. One attack is an attacker denying all the requests for maneuver (Denial-of-Service attack). The transmitting vehicle could only corroborate against this attack by observing the same behavior numerous times within a long time window or towards a specific transmitting vehicle. One attack on the maximum speed is when an attacker sets a speed value greatly above the speed limit. However, some special vehicles (e.g., ambulances) sometimes must maneuver above the speed limit. This example shows the complexity of designing robust MBDS for MSCMs.

C. Takeaways

Most attacks have high reproducibility (only one has a medium rating) since they do not require special hardware to perform the attack. In Table III, 3 out of 6 chosen attacks have a high impact rating since they have the potential to threaten the lives of drivers and pedestrians.
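The majority-rating rule above (with the all-distinct case resolved to Medium) is small enough to sketch directly; criterion names follow Table II:

```python
from collections import Counter

def overall_risk(reproducibility: str, impact: str, stealthiness: str) -> str:
    """Overall risk = majority rating among the three criteria; when
    all three ratings differ (one High, one Medium, one Low), the
    assessment falls back to Medium."""
    ratings = [reproducibility, impact, stealthiness]
    rating, count = Counter(ratings).most_common(1)[0]
    return rating if count >= 2 else "Medium"
```

This reproduces the ratings quoted in the tables, e.g. the LaneOffset attack (High, Medium, Low) rates Medium overall, while the maximum-speed attack (High, High, Low) rates High.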
Lastly, these attacks are rated low for stealthiness, as the attacker would be exposing its certificate in the malicious messages and can be easily detected if the suggested defenses are deployed. Although the attacks we developed have high reproducibility and impact, we have suggested defense mechanisms that should be able to detect such attacks and help report the malicious actors. However, these defense mechanisms mainly require redundant V2X information from other honest actors surrounding the target vehicle or sensors on the target vehicle. Thus, the defense mechanisms can only be practically applied if the standards allow room for redundant information.

VI. DISCUSSION

In this section, we propose standard-related directions to address security gaps identified by the threat assessment.

A. Misbehavior detectors and reporting

ETSI TR 103 460 and TS 103 759 list a set of misbehavior detectors for BSM. Currently, the TS draft does not specify detectors for the MSCM, leaving that for a future version. However, we can assume that detectors (designed for BSM) also apply to MSCM. For instance, in TR 103 460, the detector named implausible speed will be the same for both MSCM and BSM. Additional detectors specific to MSCM will be needed, however. For example, a detector could check if a Maneuver contains overlapping Sub-Maneuvers to prevent car collisions. For instance, an attacker sends a maneuver request with two overlapping maneuvers. The attacker aims to force one vehicle to collide with a second vehicle. At a protocol level, a second detector could check if a maneuver participant keeps declining consecutive maneuver requests within a short time window (e.g., 5 seconds) or sent by a specific requester. After being detected, a misbehavior report (MBR) may be generated and sent to authorities for further investigation. The ASN.1 definition specified in TS 103 759 is flexible enough to allow for MSCM detectors.

B. Use of MSCM as data source for V2X MBD (and vice versa)

It can be tempting to use MSCM as a data source to detect malicious BSMs (or to use BSMs to detect malicious MSCM). For instance, a reported CAV may have sent BSM data inconsistent with the corresponding MSCM. In detail, the expected maneuver described in the MSCM is inconsistent with the ongoing maneuver depicted by the BSMs. Another example is the inconsistency between the vehicle dimension in the MSCM and the vehicle dimension in the BSM. Additionally, using sensors or the mapping stack to detect malicious MSCM will be beneficial. For instance, an attacker may send a MSCM to perform a maneuver on a lane that does not exist. A CAV could detect this attack by looking at the number of lanes in the mapping stack or via the lane detection algorithm performed by its camera. To further improve the MSCS' trustworthiness and prevent attacks on TRR Location, extending the IEEE 1609.2 certificate format could be useful to include ego-vehicle capabilities. This extension would allow for (authenticated) sensing and mapping capabilities attestation. The specification of these detectors, looking at inconsistencies between different message types, will be included in a future version of TS 103 759.

C. Adversarial defense for local planning

The V2X module of a CAV assumes trustworthy sensor data. This assumption is invalid, considering recent attacks on trajectory prediction [12]. Indeed, if an external attacker fools the planning stack, a CAV could not trust the maneuvers contained in a MSCM. Also, a CAV could not perform a consistency check between the maneuvers in a received MSCM and the predicted maneuvers from its planning stack. Recent research proposed defenses. For instance, researchers used adversarial training to increase the robustness of trajectory prediction against adversarial examples [13]. Mentioning the use of defenses for local planning in standards will decrease the risk of malicious MSCM.
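A minimal sketch of the overlapping-Sub-Maneuver detector discussed above. The space model (a lane index plus a longitudinal interval) is a deliberate simplification introduced here; the real Target Road Resource geometry in SAE J3186 is richer:

```python
from dataclasses import dataclass

@dataclass
class SubManeuver:
    executant_id: str
    lane: int            # simplified stand-in for the TRR geometry
    start_pos: float     # longitudinal extent on the lane, metres
    end_pos: float
    t_start: float       # requested execution window, seconds
    t_end: float

def _overlap(a0, a1, b0, b1):
    # Interval overlap test for two ranges [a0, a1) and [b0, b1).
    return a0 < b1 and b0 < a1

def overlapping_submaneuvers(submaneuvers):
    """Flag every pair of sub-maneuvers that claims the same road
    space during the same time window -- the collision-provoking
    request an MSCM detector should reject."""
    flagged = []
    for i, a in enumerate(submaneuvers):
        for b in submaneuvers[i + 1:]:
            if (a.lane == b.lane
                    and _overlap(a.start_pos, a.end_pos, b.start_pos, b.end_pos)
                    and _overlap(a.t_start, a.t_end, b.t_start, b.t_end)):
                flagged.append((a.executant_id, b.executant_id))
    return flagged
```

A receiver would run such a check over the Maneuver container of an incoming request and generate a misbehavior report for any flagged pair.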
Such standardization effort could happen in the ISO TC22 SC32 committee as part of the future ISO 5083.

VII. CONCLUSION

Maneuver Sharing and Coordination Service (MSCS) offers V2X-equipped vehicles the ability to exchange richer data to improve their telematics awareness and safety. However, the security of MSCS and its underlying message set is critical to guarantee quality data. Standardization efforts of MSCS and V2X misbehavior detection and reporting (separately) are ongoing worldwide, but misbehavior protection in MSCS still has to be addressed. In this paper, we summarized a threat assessment done on SAE J3186, which identified 16 attacks with mainly low and high risk levels. Thanks to this assessment, we proposed four directions to consider in ongoing standardization efforts. We hope this work could serve as a starting point to tackle the question of MSCS security by standard organizations and regulators.

(Table III, continued)

Attack: Omit a mandatory field in the MSCM. Overall: Low.
• (High) Reproducibility: A malicious transmitter generates MSCMs with an incorrect format.
• (Low) Impact: The message is not decodable. However, the attacker occupies the channel.
• (Low) Stealthiness: An attacker has to transmit a signed MSCM and hence will be reported and revoked eventually.

Attack: Insert a TRR Location format that does not match the value contained in the field TRR Type. Defense: Detect inconsistency between TRR Type and TRR Location. Overall: Low.
• (High) Reproducibility: A malicious transmitter generates MSCMs with an incorrect format.
• (Low) Impact: The message is not decodable. However, the attacker occupies the channel.
• (Low) Stealthiness: An attacker has to transmit a signed MSCM and hence will be reported and revoked eventually.

Use case: The attacker misbehaves during the maneuver negotiation (MSCS protocol). Attack: The attacker performs the maneuver request and response to create fake maneuvers using pseudonym IDs (spoofing). Defense: Correlate with the camera's information. Overall: High.
• (High) Reproducibility: An attacker encodes inaccurate information into the Maneuver before signing and transmitting.
• (High) Impact: Surrounding vehicles cannot maneuver if the attacker has planned some (fake) maneuvers.
• (Medium) Stealthiness: Onboard sensors should reveal that the requester or the responder does not exist at the specified location.

Attack: The attacker denies all the requests for maneuver. Defense: Detect an abnormal number of requests for maneuver being denied. Overall: High.
• (High) Reproducibility: An attacker sets the reason code to deny some requests.
• (High) Impact: Cancelling a maneuver request leads to traffic inefficiency.
• (Low) Stealthiness: An attacker has to transmit a signed MSCM and hence will be reported and revoked eventually.

Use case: The attacker inserts an incorrect value in the MSCM request. Attack: Set the maximum speed with a value way above the speed limit (e.g., 200 km/h > 130 km/h). Defense: The maximum speed is way above the average speed of surrounding vehicles or the speed limit displayed by the map or perceived by the camera. Overall: High.
• (High) Reproducibility: An attacker inserts a malicious value into the field maximum speed.
• (High) Impact: Maneuvering vehicles maneuver way above the speed limit (safety risk).
• (Low) Stealthiness: Speed value way above the maximal speed limit (implausible value).

Attack: The attacker requests a maneuver on a nonexistent lane by setting an incorrect LaneOffset. Defense: Check the number of lanes displayed by the map or perceived by the camera. Overall: Medium.
• (High) Reproducibility: An attacker inserts a malicious value into the field LaneOffset (located in the container TRR Location).
• (Medium) Impact: Sets the vehicle off the road (safety risk).
• (Low) Stealthiness: An attacker is detectable through its certificate in the MSCM.

Defense (against the minimal-speed attack): The minimal speed is way below the average speed of surrounding vehicles or the speed limit displayed by the map or perceived by the camera. Overall: High.
• (High) Reproducibility: An attacker can insert a malicious value into the field maximum speed.
• (High) Impact: Maneuvering vehicles will maneuver way below the speed limit (safety risk).
• (Low) Stealthiness: Speed value is way below the maximal speed limit (implausible value).

Attack: Set a static field (e.g., executant width) with a plausible but incorrect value for a single MSCM to prevent other vehicles from maneuvering at a given moment. Defense: Check if the static field value is consistent with other MSCMs or with the BSMs of the attacker. Overall: Low.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (Low) Impact: Prevents vehicles from maneuvering (traffic jam).
• (Medium) Stealthiness: The value is inconsistent with other sources (e.g., MSCM) to detect the origin of the inconsistency.

Attack: Set the executant width with a size much bigger than the lane to prevent other vehicles from maneuvering (e.g., vehicle width > lane width). Defense: Check if the vehicle's width is above a threshold and check for consistency with the width value contained in the BSM sent by the attacker. Overall: Low.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (Low) Impact: Prevents vehicles from maneuvering (traffic jam).
• (Low) Stealthiness: Width value is above the lane width (implausible value).

Attack: Set the executant length with a size much bigger than a plausible length to prevent other vehicles from maneuvering (e.g., vehicle length > 30 m). Defense: Check if the vehicle's length is above a threshold and check for consistency with the length value contained in the BSM sent by the attacker. Overall: Low.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (Low) Impact: Prevents vehicles from maneuvering (traffic jam).
• (Low) Stealthiness: Length value is an outlier looking at a length distribution for vehicles.

Attack: Set the maneuver's starting time after the ending time. Defense: Check if the starting time is set before the ending time.
Overall: Low.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (Low) Impact: The vehicle must process implausible MSCM data and cannot accept other MSCMs (DoS).
• (Low) Stealthiness: The attacker is detectable through its certificate in the MSCM.

Attack: Set the maneuver's starting time before the Msg Timestamp. Defense: Check if the starting time is set after the Msg Timestamp. Overall: Low.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (Low) Impact: The vehicle must process implausible MSCM data and cannot accept other MSCMs (DoS).
• (Low) Stealthiness: The attacker is detectable through its certificate in the MSCM.

Attack: Set the Maneuver's or the Sub-Maneuver's duration too long (e.g., > 1 min), preventing other maneuver requests. Defense: Check if the duration between the smallest Sub-Maneuver's starting time and the latest Sub-Maneuver's ending time is over a threshold (e.g., 1 min). Overall: High.
• (High) Reproducibility: Setting a fake value in a field does not require advanced knowledge.
• (High) Impact: The attacker prevents other CAVs from performing a maneuver request because the maneuver is still ongoing (DoS).
• (Low) Stealthiness: The attacker is detectable through its certificate in the MSCM.

Fig. 3: Message Format (as indicated via the MSCM Type)

TABLE I: Status of MBD specification per message type

Even though the standards above have the same purpose, each document has its own set of specifications. Thus, each document might have slightly different cybersecurity threats. In this work, we present our TA of SAE J3186.

2) Security Standard: ETSI TS 103 759 [10] is a standard under development that defines V2X MBD and reporting activities for CAM and Decentralized Environmental Messages (DENM).
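The timing plausibility checks listed in the table above (start before end, start not before the message timestamp, total duration under a threshold) can be sketched as one function; the 60 s threshold is illustrative, not a value fixed by the standard:

```python
MAX_MANEUVER_DURATION_S = 60.0  # illustrative threshold only

def timing_violations(msg_timestamp, submaneuvers):
    """Return the timing-plausibility violations for a maneuver
    request; each sub-maneuver is a (t_start, t_end) pair in seconds."""
    violations = []
    for t_start, t_end in submaneuvers:
        if t_start >= t_end:
            violations.append("start_after_end")
        if t_start < msg_timestamp:
            violations.append("start_before_message")
    if submaneuvers:
        starts = [s for s, _ in submaneuvers]
        ends = [e for _, e in submaneuvers]
        # Span from the earliest sub-maneuver start to the latest end.
        if max(ends) - min(starts) > MAX_MANEUVER_DURATION_S:
            violations.append("duration_too_long")
    return violations
```

An empty list means the request passes these checks; any non-empty result would feed a misbehavior report.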
TABLE I: Status of MBD specification per message type

V2X Message | Misbehavior Detectors Status
BSM / CAM | Specified
DENM | Specified
MSCM | Specification is missing

Figure 1 illustrates a MSC scenario. This service prevents the need for a CAV to guess or predict the maneuver intent of other CAVs. Currently, several ongoing standardization initiatives exist in:
• North America (SAE J3186 [5])
• Europe (ETSI TR 103 578 [7] & TS 103 561 [8])
• China (CSAE 157 [9])

TABLE II: Risk ratings and criteria [17]

Reproducibility — High: The attack is easily reproducible. Medium: The attack is reproducible with some limitations. Low: The attack is hard to reproduce due to its complexity or operational cost.
Impact — High: The attack infects the system and can lead to catastrophic damage (e.g., an accident). Medium: The attack infects the system and can lead to moderate damage (e.g., a traffic jam). Low: The attack has no impact on the system but can inflict minor harm.
Stealthiness — High: Unknown attack occurs in certain applications. Medium: The attack needs several misbehavior detectors, message types, or data sources to be detected. Low: Broadcasted information readily explains the misbehavior.

The attacker proposes overlapping maneuvers to two different vehicles. For each vehicle, the attacker will use a unicast model. For example, in a first maneuver negotiation (time t1), the attacker proposes that vehicle A completes a maneuver at t3. Then, in a second maneuver negotiation (t2), the attacker proposes that vehicle B completes a maneuver at t3. As a result, vehicles A and B are driving towards the same location, resulting in increased safety risk (car collisions) and decreased traffic efficiency. This example demonstrates the importance of assessing data trustworthiness and detecting attacks in MSCS.

TABLE III: Threat analysis of selected use-cases from SAE J3186 (columns: Use Case | Attacks | Defense | Risk)

Attack: The attacker generates a MSCM with an incorrect ASN.1 format (the message is not decodable).
TABLE IV : IVOverall: High.• (Medium) Reproducibility: A malicious transmitter can generate MSCM with an incorrect format. • (High) Impact : The message is not decodable. However, the attacker occupies the channel • (High Stealthiness: a fake vehicle with plausible mobility data is hard to detect without the PKI or the use of sensors.Overall: High.• (High) Reproducibility: Crafting overlapping sub-maneuvers does not require advanced knowledge. • (High) Impact: The message is not decodable. However, the attacker occupies the channel • (Low) Stealthiness: Attacker has to transmit a signed MSCM and hence will be reported and revoked eventually.The attacker does not answer to some maneuver request Exclude the attacker and report evidence showing the attacker can transmit V2X message but choose to not participate (e.g., a maneuver request that has been approved by the attacker)Overall: High.• (High) Reproducibility: An attacker can encode inaccurate information into the SensorInformationContainer before signing and transmitting. • (High) Impact: Surrounding vehicle cannot maneuver in the absencense of respo • (High) Stealthiness: Hardly detectable Set the minimal speed with a value way below speed limit (e.g., 10 km/h < 130 km/h)Threat analysis of use-cases from SAE J3186 Use Case Attacks Defense Risk The attacker performs spoofing attacks (ghost vehicle, ghost maneuvers). Overloads the MSCM with fake executant IDs and fake Sub- Maneuvers to create a longer processing time for the receivers. Use the camera to detect ghost vehicle and to check for maneuver consistency Add overlapping sub- maneuvers to provoke a car collision Check if at least two sub-maneuvers are over- lapping in time and space The attacker misbehave during the maneuver negotiation (MSCS protocol) The attacker inserts an incorrect value in the MSCM- request Henceforth, we will refer to both BSM and CAM services as only BSM service. V2x communications message set dictionary. 
REFERENCES
[1] SAE, "V2X communications message set dictionary," Society of Automotive Engineers, J2735, 2020.
[2] ETSI, "Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service," European Telecommunications Standards Institute, EN 103 637-2, 2014.
[3] J.-P. Monteuuis, J. Petit, J. Zhang, H. Labiod, S. Mafrica, and A. Servel, ""My autonomous car is an elephant": A machine learning based detector for implausible dimension," in Security of Smart Cities, Industrial Control System and Communications (SSIC). IEEE, 2018.
[4] J. Petit, R. Ansari, and C. Chen, "Misbehavior detection for V2X communication," DEFCON 28 Car Hacking Village, 2020. [Online].
[5] SAE, "Application protocol and requirements for maneuver sharing and coordinating service," Society of Automotive Engineers, J3186, 2022.
[6] C2CCC, "Guidance for day 2 and beyond roadmap," Car 2 Car Communication Consortium White Paper, 2019.
[7] ETSI, "Intelligent transport system (ITS); vehicular communications; informative report for the manoeuvres' coordination service," European Telecommunications Standards Institute, TS 103 578, 2022.
[8] ETSI, "Intelligent transport systems (ITS); vehicular communications; basic set of applications; maneuver coordination service," European Telecommunications Standards Institute, TR 103 561, 2018.
[9] C-SAE, "Cooperative intelligent transportation system vehicular communication application layer specification and data exchange standard (phase 2)," China Society of Automotive Engineers, C-SAE 157, 2020.
[10] ETSI, "Intelligent transport systems (ITS); security; misbehaviour reporting service," European Telecommunications Standards Institute, TS 103 759, 2021.
[11] ETSI, "Intelligent transport systems (ITS); security; pre-standardization study on misbehaviour detection; release 2," European Telecommunications Standards Institute, TR 103 460, 2020.
[12] Y. Man, R. Muller, M. Li, Z. B. Celik, and R. Gerdes, "Evaluating perception attacks on prediction and planning of autonomous vehicles," in USENIX Security Symposium Poster Session, 2022.
[13] R. Jiao, X. Liu, T. Sato, Q. A. Chen, and Q. Zhu, "Semi-supervised semantics-guided adversarial training for trajectory prediction," arXiv preprint arXiv:2205.14230, 2022.
[14] S.-T. Chen, C. Cornelius, J. Martin, and D. H. P. Chau, "ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector," in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2018, pp. 52-68.
[15] IEEE, "Standard for wireless access in vehicular environments-security services for applications and management messages," Std 1609.2, 2016.
[16] J.-P. Monteuuis, J. Petit, J. Zhang, H. Labiod, S. Mafrica, and A. Servel, "Attacker model for connected and automated vehicles," in ACM Computer Science in Car Symposium, 2018.
[17] J.-P. Monteuuis, A. Boudguiga, J. Zhang, H. Labiod, A. Servel, and P. Urien, "SARA: Security automotive risk analysis method," in 4th ACM Workshop on Cyber-Physical System Security, 2018.
Title: AssetField: Assets Mining and Reconfiguration in Ground Feature Plane Representation

Authors: Yuanbo Xiangli (The Chinese University of Hong Kong), Linning Xu (The Chinese University of Hong Kong), Xingang Pan (Max Planck Institute for Informatics), Nanxuan Zhao (Adobe Research), Bo Dai (Shanghai AI Laboratory), Dahua Lin (The Chinese University of Hong Kong; Shanghai AI Laboratory)

Figure 1: Man-made environments are often characterized by repetitive scene objects, e.g. tables, chairs, and trees. AssetField represents these environments with a set of informative ground feature planes aligning with the physical ground, from which neural representations of scene objects are extracted and grouped into categories. The proposed mechanism allows users to manipulate and compose assets directly on the ground feature plane and produces high-quality rendering on novel scene configurations.

Abstract: Both indoor and outdoor environments are inherently structured and repetitive. Traditional modeling pipelines keep an asset library storing unique object templates, which is both versatile and memory efficient in practice. Inspired by this observation, we propose AssetField, a novel neural scene representation that learns a set of object-aware ground feature planes to represent the scene, where an asset library storing template feature patches can be constructed in an unsupervised manner. Unlike existing methods, which require object masks to query spatial points for object editing, our ground feature plane representation offers a natural visualization of the scene in the bird-eye view, allowing a variety of operations (e.g. translation, duplication, deformation) on objects to configure a new scene. With the template feature patches, group editing is enabled for scenes with many recurring items, avoiding repetitive work on object individuals. We show that AssetField not only achieves competitive performance for novel-view synthesis but also generates realistic renderings for new scene configurations.
DOI: 10.48550/arXiv.2303.13953
PDF: https://export.arxiv.org/pdf/2303.13953v1.pdf
Semantic Scholar Corpus ID: 257757425
arXiv: 2303.13953
Introduction

The demand for bringing our living environment into a virtual realm continues to increase these days, with example cases ranging from indoor scenes such as rooms and restaurants, to outdoor ones like streets and neighborhoods. Apart from realistic 3D rendering, real-world applications also require flexible and user-friendly editing of the scene. Use cases are commonly found in interior design, urban planning, etc. To save human labor and expense, users need to frequently visualize different scene configurations before finalizing a plan and bringing it to reality, as shown in Fig. 1. For these users, a virtual environment offering versatile editing choices and high rendering quality is always preferable. In these scenarios, objects are primarily located on a horizontal plane such as the ground, and can be inserted into/deleted from the scene. Translation along the plane and rotation around the vertical axis are also common operations. Furthermore, group editing becomes essential when scenes are populated with recurring items (e.g.
substitute all chairs with stools and remove all sofas in a bar). While recent advances in neural rendering [27,3,44,28] offer promising solutions to producing realistic visuals, they struggle to meet the aforementioned editing demands. Specifically, traditional neural radiance field (NeRF)-based methods such as [47,26,3] encode an entire scene into a single neural network, making it difficult to manipulate and composite due to its implicit nature and limited network capacity. Some follow-up works [41,16] tackle object-aware scene rendering in a bottom-up fashion by learning one model per object and then performing joint rendering. Another branch of methods learns object radiance fields using instance masks [42], object motions [46], and image features [39,21] as clues, but these methods are scene-specific, limiting their applicable scenarios. Recently, some approaches have attempted to combine voxel grids with neural radiance fields [23,44,28] to explicitly model the scene. Previous work [23] showed local shape editing and scene composition abilities of the hybrid representation. However, since the learned scene representation is not object-aware, users must specify which voxels are affected to achieve certain editing requirements, which is cumbersome, especially for group editing. Traditional graphical workflows build upon an asset library that stores template objects, whose copies are deployed onto a 'canvas' by designers, then rendered by professional software (e.g. interior designers arrange furniture according to floorplans). This practice significantly saves memory for large scene development and offers users versatile editing choices, which inspires us to replicate this characteristic in neural rendering. To this end, we present AssetField, a novel neural representation that bears the editing flexibility of traditional graphical workflows. Our method factorizes a 3D neural field into a ground feature plane and a vertical feature axis. As illustrated in Fig.
1, the learned ground feature plane is a 2D feature plane that is visually aligned with the bird-eye view (BEV) of the scene, allowing intuitive manipulation of individual objects. It is also able to embed multiple scenes into scene-specific ground feature planes with a shared vertical feature axis, rendered using a shared MLP. The learned ground feature planes encode scene density, color and semantics, providing rich clues for object detection and categorization. We show that assets mining and categorization, and scene layout estimation, can be directly performed on the ground feature planes. By maintaining a cross-scene asset library that stores template objects' ground feature patches, our method enables versatile editing at object-level, category-level, and scene-level. In summary, AssetField 1) learns a set of explicit ground feature planes that are intuitive and user-friendly for scene manipulation; 2) offers a novel way to discover assets and scene layout on the informative ground feature planes, from which one can construct an asset library storing feature patches of object templates from multiple scenes; 3) improves group editing efficiency and enables versatile scene composition and reconfiguration; and 4) provides realistic renderings on new scene configurations.

Related Works

Neural Implicit Representations and Semantic Fields. Since the introduction of neural radiance fields [27], many advanced scene representations have been proposed [24,44,28,11,24,10,28], demonstrating superior performance in terms of quality and speed for neural rendering. However, most of these methods are semantic- and content-agnostic, and many assume sparsity to design a more compact structure for rendering acceleration [24,11,28].
We notice that the compositional nature of a scene, and the occurrence of repetitive objects within it, can be further utilized: we can extract a reusable asset library for more scalable usage, similar to the ones adopted in the classical modeling pipeline. A line of recent neural rendering works has explored jointly learning a semantic field alongside the original radiance field. Earlier works use available semantic labels [50] or existing 2D detectors for supervision [22]. The realized semantic field can enable category- or object-level control. More recently, [39,21] explore the potential of distilling self-supervised 2D image feature extractors [8,2,14] into NeRF, and showcase their use in supporting local editing. In this work, we target an orthogonal editing goal, where accurate control of high-level scene configuration and easy editing of object instances are desired.

Object Manipulation and Scene Composition. Traditional modeling and rendering pipelines [5,7,33,34,35,17] were widely adopted for scene editing and novel view synthesis in early approaches. For example, Karsch et al. [17] propose to realistically insert synthetic objects into legacy images by creating a physical model of the scene from user annotations of geometry and lighting conditions, then compose and render the edited scene. Cossairt et al. [12] consider compositions of synthetic and real objects from a light-field perspective, where objects are captured by a specific hardware system. [49,19,18] consider the problem of manipulating existing 3D scenes by matching the objects to cuboid proxies/pre-captured 3D models. More recently, several works tackle object-decomposed rendering in the context of newly emerged neural implicit representations [27]. Ost et al. [31] target dynamic scenes and learn a scene graph representation that encodes object transformation and radiance at each node, which further allows rendering novel views and re-arranged scenes. Kundu et al.
[22] resort to existing 3D object detectors for foreground object extraction. Sharma et al. [36] disentangle static and movable scene contents, leveraging object motion as a cue. Guo et al. [16] propose to learn object-centric neural scattering functions to implicitly model per-object light transport, enabling scene rendering with moving objects and lights. Neural Rendering in a Room [41] targets indoor scenes by learning a radiance field for each pre-captured object and putting objects into a panoramic image for optimization. While these methods need to infer objects from motion, or require one model per object, ObjectNeRF [43] learns a decompositional neural radiance field, utilizing semantic masks to separate objects from the background to allow editable scene rendering. uORF [45] performs unsupervised discovery of object radiance fields without the need for semantic masks, but requires cross-scene training and is only tested on simple synthetic objects without textures.

AssetField

In this work, we primarily consider a class of real-world application scenarios that require fast and high-quality rendering of scenes whose configuration is subject to change, such as interior design, urban planning and traffic simulation. In these cases, objects are mainly placed on some dominant horizontal plane, and are commonly manipulated via insertion, deletion, translation on the horizontal plane, rotation around the vertical axis, etc. We first introduce our ground feature plane representation in Sec. 3.1 to model each neural field. Sec. 3.2 describes the process of assets mining with the inferred ground feature plane. We further leverage the color and semantic feature planes to categorize objects in an unsupervised manner, which is illustrated in Sec. 3.3. Finally, Sec. 3.4 demonstrates the construction of a cross-scene asset library that enables versatile scene editing.
Ground Feature Plane Representation

Ground plans have been commonly used for indoor and outdoor scene modeling [36,13,32]. We adopt a similar representation to parameterize a 3D neural field with a 2D ground feature plane M of shape L×W×N, and a globally encoded vertical feature axis H of shape H×N, where N is the feature dimension. A query point at coordinate (x, y, z) is projected onto M (plane) and H (axis) to retrieve its feature values m and h via bilinear/linear interpolation:

m = Interp(M, (x, y)),  h = Interp(H, z),    (1)

which are then combined and decoded into the 3D scene feature via an MLP decoder. Concretely, a 3D radiance field is parameterized by a set of ground feature planes, with pixel colors rendered via

C(r) = Σ_{i=1}^{N} T_i (1 − exp(−σ_i δ_i)) c_i,    (2)

where T_i = exp(−Σ_{j=1}^{i−1} σ_j δ_j), supervised by the 2D image reconstruction loss Σ_r ||Ĉ(r) − C(r)||₂², where C(r) is the ground-truth pixel color.

Figure 2: TensoRF with full 3D factorization produces noisy feature planes; our ground plane representation yields informative features that clearly illustrate scene contents and layout after discretization, especially in the density field. Red boxes: two spatially close objects can be clearly separated on the density plane but not the RGB plane. Blue boxes: objects with similar geometry but different appearance can be distinguished on the RGB plane but not the density plane.

Such a neural representation is beneficial to our scenario. Firstly, the ground feature planes are naturally aligned with the BEV of the scene, mirroring the human approach to high-level editing and graphic design, where artists and designers mainly sketch on a 2D canvas to reflect a 3D scene. Secondly, the globally encoded vertical feature axis encourages the ground feature plane to encode more scene information, which aligns better with scene contents.
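As a concrete illustration of Eq. (1) and Eq. (2), the sketch below queries a factorized field by bilinear interpolation on the ground plane M and linear interpolation on the vertical axis H, and alpha-composites per-sample densities and colors along one ray. This is a minimal NumPy sketch under assumed array shapes: the MLP decoder and the feature-combination step are omitted, and the helper names are not from the paper.

```python
import numpy as np

def interp1d(axis_feat, z):
    """Linearly interpolate the vertical feature axis H (shape H_res x N) at height z."""
    h_res = axis_feat.shape[0]
    z = float(np.clip(z, 0, h_res - 1))
    z0 = int(np.floor(z)); z1 = min(z0 + 1, h_res - 1)
    w = z - z0
    return (1 - w) * axis_feat[z0] + w * axis_feat[z1]

def interp2d(plane_feat, x, y):
    """Bilinearly interpolate the ground feature plane M (shape L x W x N) at (x, y)."""
    L, W, _ = plane_feat.shape
    x = float(np.clip(x, 0, L - 1)); y = float(np.clip(y, 0, W - 1))
    x0 = int(np.floor(x)); x1 = min(x0 + 1, L - 1)
    y0 = int(np.floor(y)); y1 = min(y0 + 1, W - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane_feat[x0, y0]
            + wx * (1 - wy) * plane_feat[x1, y0]
            + (1 - wx) * wy * plane_feat[x0, y1]
            + wx * wy * plane_feat[x1, y1])

def volume_render(sigmas, colors, deltas):
    """Eq. (2): alpha-composite per-sample densities and colors along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                        # 1 - exp(-sigma_i * delta_i)
    acc = np.concatenate([[0.0], np.cumsum(sigmas[:-1] * deltas[:-1])])
    trans = np.exp(-acc)                                           # T_i = exp(-sum_{j<i} sigma_j * delta_j)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                 # C(r)
```

In the paper's TensoRF-style setup, the plane and axis features are combined (e.g. by element-wise product) before being decoded by a shared MLP; the interpolation and compositing above are the parts fixed by Eqs. (1) and (2).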
Thirdly, this compact representation is more robust when trained with sparse-view images, where full 3D feature grids easily overfit under insufficient supervision, producing noisy values, as depicted in Fig. 2.

Assets Mining on Ground Feature Plane

For ease of demonstration, let us first consider a simplified case where objects are scattered on an invisible horizontal plane, as in Fig. 3 (a). On scenes with a background, a pre-filtering step can be performed on the learned ground feature plane, as illustrated in Fig. 4. We start from modeling the radiance field, where a set of ground feature planes M = (M_σ, M_c) describing scene density and color is inferred following the formulation in Sec. 3.1. It can be observed that M_σ tends to exhibit sharper object boundaries compared to the color feature plane, as shown in the red boxes in Fig. 2. This could be attributed to the mechanism of neural rendering (Eq. 2), where the model first learns a clean and accurate density field to guide the learning of the color field. We therefore prefer to use M_σ for assets mining. In the example scene, the feature plane is segmented into two clusters with K-means [25] to obtain a binary mask of the objects. Contour detection [37,6] is then applied to locate each object, resulting in a set of bounding boxes. Note that the number of clusters can be customized according to the objects users want to highlight. In more complex scenarios where objects are arranged in a hierarchical structure (e.g. computer - table - floor), the clustering step can be repeated to gradually unpack the scene, as illustrated in Fig. 4.

Figure 3: Our ground feature plane representation factorizes a neural field into a horizontal feature plane and a vertical feature axis. (c) We further integrate the color and semantic fields into a 2D neural plane, which is decoded into 3D-aware features with geometry guidance from scene density. The inferred RGB-DINO plane is rich in object appearance and semantic clues whilst being less sensitive to vertical displacement between objects, on which we can (d) detect assets and group them into categories. (e) For each category, we select a template object and store its density and color ground feature patches into the asset library. A cross-scene asset library can be constructed by letting different scenes fit their own ground feature planes whilst sharing the same vertical feature axes and decoders/renderers.

With the bounding boxes, a collection of object neural representations P = {(p_σ, p_c)} can be obtained, which are the enclosed feature patches on M_σ and M_rgb. To address complex real-world scenes, we take inspiration from previous works [21,39] that model a DINO [9] field to guide the learning of a semantic-aware radiance field. Similarly, we can learn a separate DINO ground feature plane M_dino to provide more explicit indications of object presence. As AssetField models a set of separate fields, object discovery can be conducted on any field that offers the most distinctive features, in a scene-dependent manner. At this point, users can intuitively edit the scene with the object feature patches P, e.g., "paste" (p_σ^i, p_c^i) ∈ P to the designated location on M_σ and M_rgb to insert object i. The edited ground feature planes M' = (M'_σ, M'_c) are paired with the original vertical feature axes H = (H_σ, H_c) to decode 3D information using the original Dec_σ and Dec_rgb.

Unsupervised Asset Grouping

Despite being versatile, users can only interact with individual instances in P from the learned ground planes, whereas group editing is also a desirable feature in real-world applications, especially when objects of the same category need to be altered together.
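A minimal sketch of the assets-mining step described above. Here a simple mean threshold stands in for the two-cluster K-means, and a 4-connected flood fill stands in for the contour detector [37,6]; `mine_assets` is a hypothetical helper name, not the paper's implementation.

```python
import numpy as np
from collections import deque

def mine_assets(density_plane, thresh=None):
    """Binarize a (L x W) density map and return per-object bounding boxes
    as (x0, y0, x1, y1) tuples in plane coordinates."""
    if thresh is None:
        thresh = density_plane.mean()          # crude stand-in for a 2-way K-means split
    mask = density_plane > thresh
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    L, W = mask.shape
    for i in range(L):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one connected component and track its extent
                q = deque([(i, j)]); seen[i, j] = True
                x0 = x1 = i; y0 = y1 = j
                while q:
                    a, b = q.popleft()
                    x0, x1 = min(x0, a), max(x1, a)
                    y0, y1 = min(y0, b), max(y1, b)
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < L and 0 <= nb < W and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True; q.append((na, nb))
                boxes.append((x0, y0, x1, y1))
    return boxes
```

On real scenes the paper clusters the N-dimensional plane features and uses contour detection; the simplifications here only preserve the control flow (binarize, then enclose each component in a box).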
While the definition of an object category can be narrow or broad, here we assume that objects with close appearance and semantics are analogues, and use the RGB and semantic feature planes for asset grouping. A case where the density features fail to distinguish two visually different objects is highlighted in Fig. 2.

Occupancy-Guided RGB-DINO Field. As our goal is to "self-discover" assets from the neural scene representation, there is no extra prior on object category to regularize scene features. 3D voxel-based methods, such as those described in [24,44], may learn different sets of features to express the same objects, as grid features are independently optimized. Such an issue can be alleviated by our proposed neural representation, where the ground feature plane M is constrained by the globally shared vertical feature axis H. Concretely, given two identical objects i, j placed on a horizontal flat surface, the same feature chunk on H will be queried during training, which constrains their corresponding feature patches p^i and p^j to be as similar as possible so that they can be decoded into the same set of 3D features. However, such a constraint no longer holds when there is a vertical displacement between identical objects (e.g. one on the ground and one on the table), where different feature chunks on H are queried, leading to divergent p^i and p^j. To learn a more object-centric ground feature plane rich in color and semantic clues, we propose to integrate the color and semantic fields by letting them share the same set of ground feature planes, denoted by M_rgb-dino. Instead of appending a vertical feature axis, here we use scene density features to guide the decoding of M_rgb-dino into 3D-aware features, as illustrated in Fig. 3(c). It can be interpreted as follows: M_σ and H_σ fully capture the scene geometry, while M_rgb-dino captures the 'floorplan' of scene semantic layout and appearance.
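To make the occupancy-guided decoding concrete, the sketch below wires up the data flow with randomly initialized toy weights: density features (m_σ, h_σ) are mapped to a color feature v_rgb, which is then decoded together with the shared RGB-DINO plane feature. Feature dimensions, weight scales and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical feature dimensions and toy random weights; they only
# demonstrate the occupancy-guided data flow, not the trained model.
rng = np.random.default_rng(0)
N_SIGMA, N_RGBD, HIDDEN = 16, 24, 32
W1 = rng.normal(scale=0.05, size=(HIDDEN, 2 * N_SIGMA))   # (m_sigma, h_sigma) -> hidden
W2 = rng.normal(scale=0.05, size=(N_RGBD, HIDDEN))        # hidden -> color feature v_rgb
D1 = rng.normal(scale=0.05, size=(HIDDEN, 2 * N_RGBD))    # (v_rgb, m_rgb_dino) -> hidden
D2 = rng.normal(scale=0.05, size=(3, HIDDEN))             # hidden -> RGB

def decode_color(m_sigma, h_sigma, m_rgb_dino):
    """Density features guide the decoding of the shared RGB-DINO plane feature."""
    v_rgb = W2 @ np.maximum(W1 @ np.concatenate([m_sigma, h_sigma]), 0.0)
    hid = np.maximum(D1 @ np.concatenate([v_rgb, m_rgb_dino]), 0.0)
    return 1.0 / (1.0 + np.exp(-(D2 @ hid)))              # RGB in (0, 1)
```

A parallel branch with the same inputs would produce the semantic feature f_dino via Dec_dino; it is omitted here for brevity.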
For a query point at (x, y, z), its retrieved density features m_σ and h_σ are mapped to a color feature v_rgb and a semantic feature v_dino via two MLPs, which are then decoded into scene color c and semantics f_dino, together with the RGB-DINO plane feature m_rgb-dino = Interp(M_rgb-dino, (x, y)), via Dec_rgb and Dec_dino.

Assets Grouping and Template Matching. On the inferred RGB-DINO ground feature plane, we then categorize the discovered objects by comparing their RGB-DINO feature patches enclosed in the bounding boxes. However, due to the absence of object pose information, pixel-wise comparison is not ideal. Instead, we compare the distributions of color and semantic features among patches. To do this, we first discretize M_rgb-dino with clustering (e.g. K-means), which results in a set of labeled object feature patches K. The similarity between two object patches k_i, k_j ∈ K is measured by the Jensen-Shannon divergence over the distribution of labels, denoted by JSD(k_i || k_j). Agglomerative clustering [29] is then performed using the JS-divergence as the distance metric. The number of clusters can be set by inspecting training views, and can be flexibly adjusted to fit users' desired categorization granularity. With scene assets grouped into categories, a template object can be selected from each cluster either randomly or in a user-defined manner. We can further extract the scene layout in BEV by computing the relative pose between the template object and its copies, i.e. optimizing a rotation angle θ that minimizes the pixel-wise loss between the RGB-DINO feature patches of the template and each copy:

θ* = argmin_θ Σ_{i=1}^{N} ||p^i − R_θ(p̄)||₂²,  for p^i ∈ P_rgb-dino,

where p̄ is the template RGB-DINO feature patch and R_θ rotates the input feature patch by θ.
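The patch-similarity and pose-estimation steps above can be sketched as follows: a label histogram per patch, the base-2 Jensen-Shannon divergence between histograms, and a coarse rotation search. The paper optimizes a continuous θ; restricting the search to 90-degree steps here is a simplification, and all helper names are assumptions.

```python
import numpy as np

def label_hist(patch, n_labels):
    """Distribution of discrete cluster labels inside a feature patch."""
    h = np.bincount(patch.ravel(), minlength=n_labels).astype(float)
    return h / h.sum()

def jsd(p, q, eps=1e-12):
    """Base-2 Jensen-Shannon divergence: 0 = identical, 1 = disjoint supports."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log2(a + eps) - np.log2(b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def best_rotation(template, patch):
    """Coarse pose estimate: the 90-degree rotation of the template that
    minimizes the pixel-wise L2 distance to the patch."""
    errs = [np.linalg.norm(np.rot90(template, k) - patch) for k in range(4)]
    return 90 * int(np.argmin(errs))
```

Because the JS-divergence compares label distributions rather than pixels, two copies of an object in different poses still score as similar, which is exactly why the distribution-based comparison precedes the rotation search.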
Cross-scene Asset Library

Following the proposed framework, a scene can be represented with (1) a set of template feature patches P = {(p_σ, p_rgb)}, (2) a layout describing object position and pose in the BEV, (3) the shared vertical feature axes H = (H_σ, H_rgb), and (4) the MLP decoders Dec_σ, Dec_rgb, which enables versatile scene editing at object-, category-, and scene-level. Newly configured scenes can be directly rendered without retraining. An optional template refinement step is also allowed. Examples are given in Sec. 4. Previous work [24] demonstrates that voxel-based neural representations support multi-scene modeling by learning different voxel embeddings for each scene whilst sharing the same MLP renderer. However, it does not support cross-scene analogue discovery due to the aforementioned lack-of-constraints issue, whereas in reality, objects are not exclusive to a scene. Our proposed neural representation has the potential to discover cross-scene analogues by also sharing the vertical feature axes among different scenes. Consequently, we can construct a cross-scene asset library storing template feature patches, and continuously expand it to accommodate new ones.

Experiment

In this section, we first describe our experimental setup, then evaluate AssetField on novel view synthesis both quantitatively and qualitatively, demonstrating its advantages in asset mining, categorization, and editing flexibility. More training details and ablation results for hyperparameters (e.g. the number of clusters, the pairing of plane and axis features) are provided in the supplementary.

Experimental Setup

Dataset. A synthetic dataset is created for evaluation. We compose 10 scenes resembling common man-made environments such as conference rooms, living rooms, dining halls and offices. Each scene contains objects from 3∼12 categories with a fixed light source. For each scene, we render 50 views with viewpoints sampled on a half-sphere,
We report PSNR(↑), SSIM(↑) [40] and LPIPS(↓) [48] for evalution. The best and second best results are highlighted. among which 40 are used for training and the rest for testing. We demonstrate flexible scene manipulation with As-setField on both the synthetic and real-world data, including scenes from Mip-NeRF 360 [4], DONeRF [30], and Ob-jectNeRF [42]. We also show manipulation results on city scenes collected from Google Earth Studio [1]. Implementation. We use NeRF [27] and TensoRF [11] as baselines to evaluate the rendering quality of the original scenes. For a fair comparison, all methods are implemented to model an additional DINO field. Specifically, (1) NeRF is extended with an extra head to predict view-independent DINO feature [2] in parallel with density. (2) For TensoRF, we additionally construct the DINO field which is factorized along 3 directions the same as its radiance field. (3) S(tandard)-AssetField separately models the density, RGB, and DINO fields. (4) I(ntegrated)-AssetField models the density field the same as S-AssetField, and an integrated RGB-DINO ground feature plane. Both S-AssetField and I-AssetField adopt outerproduct to combine ground plane features and vertical axis features, following [11]. The resolution of feature planes in TensoRF baseline and AssetField are set to 300×300. Detailed model adaptation can be found in the supplementary. We train NeRF for 200k iterations, and 50k iterations for TensoRF and AssetField using Adam [20] optimization with a learning rate set to 5e −4 for NeRF and 0.02 for Ten-soRF and AssetField. Results Novel View Rendering. We compare S-AssetField and I-AssetField with the adapted NeRF [27] and TensoRF [11] as described above. Quantitative results are provided in Tab. 1. It is noticeable that AssetField's ground feature plane representation (i.e. xy-z) achieves comparable performance with TensoRF's 3-mode factorization (i.e. 
xy-z, xz-y, yz-x), indicating the suitability of ground-plane representations for such scenes. Our method also inherits the efficient training and rendering of grid-based methods: compared to NeRF, our model converges 40x faster in training and renders 30x faster at inference.

Object Detection and Categorization. In Fig. 2 we already showed an example of the ground feature planes learned by AssetField compared to the xy-planes learned by TensoRF. While TensoRF produces noisy and less informative feature planes that are unfriendly for object discovery in the first place, AssetField is able to identify and categorize most of the scene contents, as shown in Fig. 7 (b). Furthermore, I-AssetField is more robust to vertical displacement, as shown in Fig. 5.

Figure 6: Multi-scene learning on the Toydesk dataset [42]. As real-world scenes usually exhibit noisier color and density features, we apply the object mask obtained from the density plane before categorization. The common object between scenes (yellow) is correctly clustered with I-AssetField's occupancy-guided RGB-DINO plane features (green), whilst the independently modeled neural planes of S-AssetField fail (red).

Recall that I-AssetField is able to identify object analogues across different scenes. To demonstrate this ability, we jointly model the two toy desk scenes from [42] by letting them share the same vertical feature axes and MLPs, as described in Sec. 3.4. The inferred feature planes are shown in Fig. 6. Since the coordinate systems of these two scenes are not aligned with the physical world, we perform PCA [15] on the camera poses such that the xy-plane extends along the ground/table-top. However, we cannot guarantee that the table surfaces are at the same height, so vertical displacement among objects is inevitable.
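The PCA alignment step can be sketched as follows: compute the principal axes of the camera centers and rotate the coordinate system so that the two highest-variance directions span the ground plane. The function name and the synthetic camera rig below are illustrative, not from the paper:

```python
import numpy as np

def ground_plane_rotation(cam_centers):
    """PCA on camera positions: return a rotation whose first two axes span
    the dominant (ground) plane and whose last axis is the 'up' direction."""
    X = cam_centers - cam_centers.mean(axis=0)
    cov = X.T @ X / len(X)
    w, V = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return V[:, ::-1]           # columns reordered: high -> low variance

# Cameras spread widely in x/y, with little spread in z (half-sphere-like rig).
rng = np.random.default_rng(1)
cams = np.column_stack([rng.normal(0, 5.0, 200),
                        rng.normal(0, 4.0, 200),
                        rng.normal(2, 0.3, 200)])
R = ground_plane_rotation(cams)
aligned = (cams - cams.mean(0)) @ R
print(aligned.var(axis=0).argmin())  # 2 -> the least-variance axis becomes z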
I-AssetField is able to infer similar RGB-DINO feature values for the common cube plush (yellow circle), whilst the independently learned RGB/DINO planes in S-AssetField are affected by the height difference. Scene Editing. Techniques on 2D image manipulation can be directly applied to ground feature planes. Fig. 7 shows that AssetField supports a variety of operations, such as object removal, insertion, translation and rescaling. Scenelevel reconfiguration is also intuitive by composing objects' density and color ground feature patches. In particular, I-AssetField associates the RGB-DINO field with space occupancy, producing more plausible warping results. Original Rendering Rendering from the Edited Scene Remove ceiling lights Figure 9: Expanding the 2D ground plane back to 3D feature grids, explicit control on full 3D space is allowed. We remove the ceiling light by setting the density grids as zero at the target region. [30]. We use RGB-DINO plane for assets discovery. ture unchanged. Results show that I-AssetField successfully preserves object structure and part semantics, whereas S-AssetField fails to render the cork correctly. Despite the convenience of ground feature plane repre- Original Scene Figure 13: We expand the asset library from the living room with the newly included assets mics from [27]. The template of mics is in the shared latent space with the living room and can thus naturally composed together for rendering. Figure 14: Feature plane refinement. The object template, when trained among all instances within the scene, produces more accurate feature map compared to the isolated ones. sentation, it does not directly support manipulating overlapping/stacked objects. However, one can expand the ground feature plane back to 3D feature grids with its pairing vertical feature axis, and control the scene in the conventional way as described in [24]. An example is given in Fig. 9. Fig. 
10 shows AssetField's editing capability on realworld datasets [4,42,30]. Additionally, on a self-collected city scene from Google Earth, we find a construction site and complete it with different nearby buildings (withinscene editing), even borrow Colosseum from Rome (crossscene editing). Results are shown in Fig 11. The test view PSNR for the original scene is NeRF/TensoRF/S-AssetField/I-AssetField: 24.55/27.61/ 27.54/ 27.95. Group Editing and Scene Reconfiguration. Recall that a template object can be selected for each asset category to substitute all its instances in the scene (on the ground feature planes). Consequently, we are allowed to perform group editing like changing the color of a specific category as depicted in Fig. 12. Scene-level reconfiguration is also intuitive, where users can freely compose objects from the asset library on a neural 'canvas' to obtain a set of new ground feature planes, as demonstrated in Fig. 13. The environments or containers (e.g. the floor or an empty house) can also be considered as a special asset category, where small objects (e.g. furniture) can be placed into the container to deliver immersive experience. The final scene can be composited with summed density value and weighted color, as has been discussed in [38]. Template Refinement. Grid-based neural fields are sensitive to training views with insufficient point supervision, leading to noisy and inaccurate feature values. Appearance differences caused by lighting variation, occlusion, etc., interferes the obtaining of a clean template feature patch. An example can be found in Fig. 14. Due to imbalanced training view distribution, the chair in the corner receives less supervision, resulting in inconsistent object feature patch within a category. Such issue can be alleviated with a following-up template refinement step. With the inferred scene layout and the selected object templates (Sec. 3.3). 
We propose to replace all instances p ∈ P with their representative category templatep and optimize this set of feature patches to reconstruct the scene instead of the full ground planes. Consequently, the template feature patch integrates supervisions from all instances in the scene to overcome appearance variations and sparse views. Discussion and Conclusion We present AssetField, a novel framework that mines assets from neural fields. We adopt a ground feature plane representation to model scene density, color and semantic fields, on which assets mining and grouping can be directly conducted. The novel occupancy-guided RGB-DINO feature plane enables cross-scene asset grouping and the construction of an expandable neural asset library, enabling a variety of intuitive scene editing at object-, category-and scene-level. Extensive experiments are conducted to show the easy control over multiple scenes and the realistic rendering results given novel scene configurations. However, AssetField still suffer from limitations like: separating connected objects in the scene; handling stacked/overlapped objects; and performing vertical translations. Rendering quality might also be compromised due to complex scene background in real-world. More limitations are discussed in the supplementary. We believe the proposed representation can be further explored for the manipulation and construction of large-scale scenes, e.g., by following floorplans or via a programmable scheme like procedural modeling. M=(M σ , M c ), and vertical feature axes H=(H σ , H c ), for the density and color fields respectively. The retrieved feature values m=(m σ , m c ) and h=(h σ , h c ) are then combined and decoded into point density σ and view-dependent color c values by two decoders Dec σ , Dec rgb . Points along a ray r are volumetrically integrated following[27]: Figure 3 : 3Overview of AssetField. (a) We demonstrate on a scene without background for clearer visuals. 
(b) The proposed ground Figure 4 : 4Top: Simple scene background can be filtered on the learned ground feature plane in advance using feature clustering. Bottom: (a) Nested structure can be separated by (c) firstly identify the enclosed chair, then set its value to background feature for table patch. (b) Items placed on top of a surface can be detected by (d) another round of filtering that treats table surface as background. Figure 5 : 5The RGB-DINO ground feature plane from I-AssetField yields consistent features for analogues with vertical displacement, whereas S-AssetField infers different set of features due to the lack of constraints. Figure 7 :Figure 8 : 78Results of assets mining and scene editing with I-AssetField on synthetic scenes. (a) Our approach learns informative density and RGB-DINO ground feature planes that support object detection and categorization. (b) With joint training, an asset library can be constructed by storing ground feature plane patches of the radiance field (we show label patches here for easy visualization). (c) The proposed ground plane representation provides an explicit visualization of the scene configuration, which can be directly manipulated by users. The altered ground feature planes are then fed to the global MLP renderer along with the shared vertical feature axes to render the novel scenes. Operations such as object removal, translation, rotation and rescaling are demonstrated on the right. Density warping from the blue bottle to the region of the brown one. S-AssetField loses the structure of the brown bottle in terms of part semantics, while I-AssetField gives plausible editing result with appropriate structure transfer. Fig. 
8 8demonstrates a case of topology deformation, where the blue bottle's density field is warped to the region of the brown bottle, while keeping their RGB(-DINO) fea- Figure 10 : 10Example editings on real-world scenes[4] and indoor scenarios Figure 11 : 11Editing two city scenes collected from Google Earth ©2023 Google. AssetField is versatile where users can directly operate on the ground feature plane, supporting both withinscene and cross-scene editing with realistic rendering results.category-wise color manipulationReference GT Views of Original SceneNovel View Renderings of the Edited Scene Figure 12 : 12We apply batch-wise color changing for all instances of the chair, by replacing the template RGB feature map solely.Extended asset (cross-scene finetune) PSNR SSIM LPIPS PSNR SSIM LPIPS PSNR SSIM LPIPS PSNR SSIM LPIPSScene1 Scene2 Scene3 Scene4 NeRF 32.977 0.969 0.067 35.743 0.967 0.051 32.521 0.959 0.058 34.212 0.964 0.072 TensoRF 35.751 0.990 0.057 38.184 0.995 0.027 36.933 0.994 0.034 37.795 0.993 0.059 S-AssetField 36.471 0.992 0.049 36.856 0.993 0.037 36.753 0.994 0.038 37.445 0.990 0.065 I-AssetField 36.526 0.992 0.047 37.271 0.994 0.035 37.249 0.995 0.032 37.716 0.991 0.060 Table 1: Quantitative comparison on test views for the 4 scenes in Shir Amir, Yossi Gandelsman, Shai Bagon, Tali Dekel, arXiv:2112.05814Deep vit features as dense visual descriptors. 26arXiv preprintShir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814, 2021. 2, 6 Jonathan T Barron, Ben Mildenhall, Dor Verbin, P Pratul, Peter Srinivasan, Hedman, Mip-nerf 360: Unbounded anti-aliased neural radiance fields. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. 
2022 IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition (CVPR), pages 5460-5469, 2021. 2 Mip-nerf 360: Unbounded anti-aliased neural radiance fields. T Jonathan, Ben Barron, Dor Mildenhall, Verbin, P Pratul, Peter Srinivasan, Hedman, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition7Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470-5479, 2022. 6, 7, 8 Patchmatch stereo -stereo matching with slanted support windows. Michael Bleyer, Christoph Rhemann, Carsten Rother, BMVC. Michael Bleyer, Christoph Rhemann, and Carsten Rother. Patchmatch stereo -stereo matching with slanted support windows. In BMVC, 2011. 2 . G Bradski, 4G. Bradski. 4 A probabilistic framework for space carving. Adrian Broadhurst, Tom Drummond, Roberto Cipolla, Proceedings Eighth IEEE International Conference on Computer Vision. ICCV. Eighth IEEE International Conference on Computer Vision. ICCV1Adrian Broadhurst, Tom Drummond, and Roberto Cipolla. A probabilistic framework for space carving. Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, 1:388-393 vol.1, 2001. 2 Emerging properties in self-supervised vision transformers. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionMathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerg- ing properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650-9660, 2021. 
2 Emerging properties in self-supervised vision transformers. Mathilde Caron, Hugo Touvron, Ishan Misra, Julien Herv&apos;e J&apos;egou, Piotr Mairal, Armand Bojanowski, Joulin, 2021Mathilde Caron, Hugo Touvron, Ishan Misra, Herv'e J'egou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerg- ing properties in self-supervised vision transformers. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE/CVF International Conference on Computer Vision (ICCV), pages 9630-9640, 2021. 4 Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Orazio Shalini De Mello, Gallo, arXiv, 2021. 2Leonidas Guibas, Jonathan Tremblay, Sameh KhamisEric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In arXiv, 2021. 2 Tensorf: Tensorial radiance fields. Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su, European Conference on Computer Vision (ECCV), 2022. 6Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022. 2, 6 Light field transfer: global illumination between real and synthetic objects. S Oliver, Shree K Cossairt, Ravi Nayar, Ramamoorthi, ACM SIGGRAPH 2008 papers. Oliver S Cossairt, Shree K. Nayar, and Ravi Ramamoorthi. Light field transfer: global illumination between real and synthetic objects. ACM SIGGRAPH 2008 papers, 2008. 2 Unconstrained scene generation with locally conditioned radiance fields. Terrance Devries, Miguel Angel Bautista, Nitish Srivastava, W Graham, Joshua M Taylor, Susskind, Proceedings of the IEEE/CVF International Conference on Computer Vision. 
the IEEE/CVF International Conference on Computer VisionTerrance DeVries, Miguel Angel Bautista, Nitish Srivastava, Graham W Taylor, and Joshua M Susskind. Unconstrained scene generation with locally conditioned radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14304-14313, 2021. 3 Nerf-sos: Any-view selfsupervised object segmentation on complex scenes. Zhiwen Fan, Peihao Wang, Yifan Jiang, Xinyu Gong, Dejia Xu, Zhangyang Wang, arXiv:2209.08776arXiv preprintZhiwen Fan, Peihao Wang, Yifan Jiang, Xinyu Gong, De- jia Xu, and Zhangyang Wang. Nerf-sos: Any-view self- supervised object segmentation on complex scenes. arXiv preprint arXiv:2209.08776, 2022. 2 on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine. Karl Pearson, F R S Liii, Journal of Science. 211Karl Pearson F.R.S. Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572, 1901. 6 Object-centric neural scene rendering. Michelle Guo, Alireza Fathi, Jiajun Wu, Thomas A Funkhouser, abs/2012.08503ArXiv. 2Michelle Guo, Alireza Fathi, Jiajun Wu, and Thomas A. Funkhouser. Object-centric neural scene rendering. ArXiv, abs/2012.08503, 2020. 2 Rendering synthetic objects into legacy photographs. Kevin Karsch, Varsha Hedau, David A Forsyth, Derek Hoiem, Proceedings of the 2011 SIGGRAPH Asia Conference. the 2011 SIGGRAPH Asia ConferenceKevin Karsch, Varsha Hedau, David A. Forsyth, and Derek Hoiem. Rendering synthetic objects into legacy photographs. Proceedings of the 2011 SIGGRAPH Asia Conference, 2011. 2 Alexei Efros, and Yaser Sheikh. 3d object manipulation in a single photograph using stock 3d models. Natasha Kholgade, Tomas Simon, ACM Transactions on Graphics (TOG). 334Natasha Kholgade, Tomas Simon, Alexei Efros, and Yaser Sheikh. 
3d object manipulation in a single photograph using stock 3d models. ACM Transactions on Graphics (TOG), 33(4):1-12, 2014. 2 Acquiring 3d indoor environments with variability and repetition. Young Min Kim, J Niloy, Dong-Ming Mitra, Leonidas Yan, Guibas, ACM Transactions on Graphics (TOG). 316Young Min Kim, Niloy J Mitra, Dong-Ming Yan, and Leonidas Guibas. Acquiring 3d indoor environments with variability and repetition. ACM Transactions on Graphics (TOG), 31(6):1-11, 2012. 2 Adam: A method for stochastic optimization. CoRR, abs/1412. P Diederik, Jimmy Kingma, Ba, 6980Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2015. 6 Decomposing nerf for editing via feature field distillation. Sosuke Kobayashi, Eiichi Matsumoto, Vincent Sitzmann, Advances in Neural Information Processing Systems. 354Sosuke Kobayashi, Eiichi Matsumoto, and Vincent Sitz- mann. Decomposing nerf for editing via feature field distilla- tion. In Advances in Neural Information Processing Systems, volume 35, 2022. 2, 4 Panoptic neural fields: A semantic object-aware neural scene representation. Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition2022Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Car- oline Pantofaru, Leonidas J Guibas, Andrea Tagliasacchi, Frank Dellaert, and Thomas Funkhouser. Panoptic neural fields: A semantic object-aware neural scene representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12871-12881, 2022. 2 Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. ArXiv, abs. Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. 
ArXiv, abs/2007.11571, 2020. 2 Neural sparse voxel fields. Lingjie Liu, Jiatao Gu, Tat-Seng Kyaw Zaw Lin, Christian Chua, Theobalt, NeurIPS. 8Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. NeurIPS, 2020. 2, 5, 8 Least squares quantization in pcm. P Stuart, Lloyd, IEEE Trans. Inf. Theory. 283Stuart P. Lloyd. Least squares quantization in pcm. IEEE Trans. Inf. Theory, 28:129-136, 1982. 3 Ricardo Martin-Brualla, Noha Radwan, S M Mehdi, Jonathan T Sajjadi, Alexey Barron, Daniel Dosovitskiy, Duckworth, Nerf in the wild: Neural radiance fields for unconstrained photo collections. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Saj- jadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for un- constrained photo collections. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7206-7215, 2020. 2 Nerf: Representing scenes as neural radiance fields for view synthesis. Ben Mildenhall, P Pratul, Matthew Srinivasan, Jonathan T Tancik, Ravi Barron, Ren Ramamoorthi, Ng, ECCV. 6Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view syn- thesis. In ECCV, 2020. 2, 3, 6, 8 Instant neural graphics primitives with a multiresolution hash encoding. Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller, 102:1- 102:15ACM Trans. Graph. 414Thomas Müller, Alex Evans, Christoph Schied, and Alexan- der Keller. Instant neural graphics primitives with a multires- olution hash encoding. ACM Trans. Graph., 41(4):102:1- 102:15, July 2022. 2 Modern hierarchical, agglomerative clustering algorithms. ArXiv, abs/1109.2378. Daniel Müllner, Daniel Müllner. Modern hierarchical, agglomerative cluster- ing algorithms. ArXiv, abs/1109.2378, 2011. 
5 Donerf: Towards realtime rendering of compact neural radiance fields using depth oracle networks. Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, H Joerg, Chakravarty R Alla Mueller, Anton Chaitanya, Markus Kaplanyan, Steinberger, Computer Graphics Forum. Wiley Online Library40Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H Mueller, Chakravarty R Alla Chaitanya, Anton Kaplanyan, and Markus Steinberger. Donerf: Towards real- time rendering of compact neural radiance fields using depth oracle networks. In Computer Graphics Forum, volume 40, pages 45-59. Wiley Online Library, 2021. 6, 7, 8 Neural scene graphs for dynamic scenes. Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, and Felix Heide. Neural scene graphs for dynamic scenes. 2021 IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 2855-2864, 2021. 2 Translating images into maps. Avishkar Saha, Oscar Alejandro Mendez Maldonado, Chris Russell, R Bowden, 2022 International Conference on Robotics and Automation (ICRA). 2022Avishkar Saha, Oscar Alejandro Mendez Maldonado, Chris Russell, and R. Bowden. Translating images into maps. 2022 International Conference on Robotics and Automation (ICRA), pages 9200-9206, 2022. 3 Structurefrom-motion revisited. L Johannes, Jan-Michael Schönberger, Frahm, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Johannes L. Schönberger and Jan-Michael Frahm. Structure- from-motion revisited. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4104-4113, 2016. 2 Pixelwise view selection for unstructured multi-view stereo. Johannes L Schönberger, Enliang Zheng, Jan-Michael Frahm, Marc Pollefeys, ECCV. Johannes L. Schönberger, Enliang Zheng, Jan-Michael Frahm, and Marc Pollefeys. Pixelwise view selection for unstructured multi-view stereo. 
In ECCV, 2016. 2 Photorealistic scene reconstruction by voxel coloring. M Steven, Charles R Seitz, Dyer, International Journal of Computer Vision. 352Steven M. Seitz and Charles R. Dyer. Photorealistic scene reconstruction by voxel coloring. International Journal of Computer Vision, 35:151-173, 1997. 2 Seeing 3d objects in a single image via self-supervised static. Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon, T William, Fredo Freeman, Joshua B Durand, Vincent Tenenbaum, Sitzmann, arXiv:2207.1123223dynamic disentanglement. arXiv preprintPrafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon, William T Freeman, Fredo Durand, Joshua B Tenenbaum, and Vincent Sitzmann. See- ing 3d objects in a single image via self-supervised static- dynamic disentanglement. arXiv preprint arXiv:2207.11232, 2022. 2, 3 Topological structural analysis of digitized binary images by border following. Satoshi Suzuki, Keiichi Abe, Comput. Vis. Graph. Image Process. 304Satoshi Suzuki and Keiichi Abe. Topological structural anal- ysis of digitized binary images by border following. Comput. Vis. Graph. Image Process., 30:32-46, 1985. 4 Compressible-composable nerf via rank-residual decomposition. Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, Gang Zeng, abs/2205.14870ArXiv. 8Jiaxiang Tang, Xiaokang Chen, Jingbo Wang, and Gang Zeng. Compressible-composable nerf via rank-residual de- composition. ArXiv, abs/2205.14870, 2022. 8 Neural Feature Fusion Fields: 3D distillation of self-supervised 2D image representations. Vadim Tschernezki, Iro Laina, Diane Larlus, Andrea Vedaldi, Proceedings of the International Conference on 3D Vision (3DV), 2022. the International Conference on 3D Vision (3DV), 202224Vadim Tschernezki, Iro Laina, Diane Larlus, and Andrea Vedaldi. Neural Feature Fusion Fields: 3D distillation of self-supervised 2D image representations. In Proceedings of the International Conference on 3D Vision (3DV), 2022. 
2, 4 Image quality assessment: from error visibility to structural similarity. Zhou Wang, Alan Conrad Bovik, Hamid R Sheikh, Eero P Simoncelli, IEEE Transactions on Image Processing. 136Zhou Wang, Alan Conrad Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Im- age Processing, 13:600-612, 2004. 6 Neural rendering in a room. Bangbang Yang, Yinda Zhang, Yijin Li, Zhaopeng Cui, S Fanello, Hujun Bao, Guofeng Zhang, ACM Transactions on Graphics (TOG). 413Bangbang Yang, Yinda Zhang, Yijin Li, Zhaopeng Cui, S. Fanello, Hujun Bao, and Guofeng Zhang. Neural rendering in a room. ACM Transactions on Graphics (TOG), 41:1 - 10, 2022. 2, 3 Learning object-compositional neural radiance field for editable scene rendering. Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, Zhaopeng Cui, International Conference on Computer Vision (ICCV). Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. Learning object-compositional neural radiance field for ed- itable scene rendering. In International Conference on Com- puter Vision (ICCV), October 2021. 2, 6, 8 Learning object-compositional neural radiance field for editable scene rendering. Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, Zhaopeng Cui, IEEE/CVF International Conference on Computer Vision (ICCV). Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. Learning object-compositional neural radiance field for ed- itable scene rendering. 2021 IEEE/CVF International Con- ference on Computer Vision (ICCV), pages 13759-13768, 2021. 3 Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa, Plenoxels, Radiance fields without neural networks. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 
25Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5491-5500, 2022. 2, 5 Unsupervised discovery of object radiance fields. Hong-Xing Yu, Leonidas J Guibas, Jiajun Wu, International Conference on Learning Representations. 2022Hong-Xing Yu, Leonidas J. Guibas, and Jiajun Wu. Unsu- pervised discovery of object radiance fields. In International Conference on Learning Representations, 2022. 3 Star: Self-supervised tracking and reconstruction of rigid objects in motion with neural rendering. Wentao Yuan, Zhaoyang Lv, Tanner Schmidt, S Lovegrove, 2021Wentao Yuan, Zhaoyang Lv, Tanner Schmidt, and S. Love- grove. Star: Self-supervised tracking and reconstruction of rigid objects in motion with neural rendering. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13139-13147, 2021. 2 Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun, abs/2010.07492Nerf++: Analyzing and improving neural radiance fields. ArXiv. Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. ArXiv, abs/2010.07492, 2020. 2 The unreasonable effectiveness of deep features as a perceptual metric. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, Oliver Wang, IEEE/CVF Conference on Computer Vision and Pattern Recognition. Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shecht- man, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. 2018 IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 6 Interactive images: Cuboid proxies for smart image manipulation. Youyi Zheng, Xiang Chen, Ming-Ming Cheng, Kun Zhou, Shi-Min, Niloy J Hu, Mitra, ACM Trans. Graph. 
314Youyi Zheng, Xiang Chen, Ming-Ming Cheng, Kun Zhou, Shi-Min Hu, and Niloy J Mitra. Interactive images: Cuboid proxies for smart image manipulation. ACM Trans. Graph., 31(4):99-1, 2012. 2 In-place scene labelling and understanding with implicit scene representation. Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, Andrew J Davison, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionShuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, and An- drew J Davison. In-place scene labelling and understanding with implicit scene representation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15838-15847, 2021. 2
[]
[ "Korg: a modern 1D LTE spectral synthesis package", "Korg: a modern 1D LTE spectral synthesis package", "Korg: a modern 1D LTE spectral synthesis package", "Korg: a modern 1D LTE spectral synthesis package" ]
[ "Adam J Wheeler \nDepartment of Astronomy\nMcPherson Laboratory\nOhio State University\n140 West 18th AvenueColumbusOhio\n", "Matthew W Abruzzo \nDepartment of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA\n", "Andrew R Casey \nSchool of Physics & Astronomy\nMonash University\nVictoriaAustralia\n\nCenter of Excellence for Astrophysics in Three Dimensions (ASTRO-3D)\n\n", "Melissa K Ness \nDepartment of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA\n", "Adam J Wheeler \nDepartment of Astronomy\nMcPherson Laboratory\nOhio State University\n140 West 18th AvenueColumbusOhio\n", "Matthew W Abruzzo \nDepartment of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA\n", "Andrew R Casey \nSchool of Physics & Astronomy\nMonash University\nVictoriaAustralia\n\nCenter of Excellence for Astrophysics in Three Dimensions (ASTRO-3D)\n\n", "Melissa K Ness \nDepartment of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA\n" ]
[ "Department of Astronomy\nMcPherson Laboratory\nOhio State University\n140 West 18th AvenueColumbusOhio", "Department of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA", "School of Physics & Astronomy\nMonash University\nVictoriaAustralia", "Center of Excellence for Astrophysics in Three Dimensions (ASTRO-3D)\n", "Department of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA", "Department of Astronomy\nMcPherson Laboratory\nOhio State University\n140 West 18th AvenueColumbusOhio", "Department of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA", "School of Physics & Astronomy\nMonash University\nVictoriaAustralia", "Center of Excellence for Astrophysics in Three Dimensions (ASTRO-3D)\n", "Department of Astronomy\nPupin Physics Laboratories\nColumbia University\n10027New YorkNYUSA" ]
[]
We present Korg, a new package for 1D LTE (local thermodynamic equilibrium) spectral synthesis of FGK stars, which computes theoretical spectra from the near-ultraviolet to the near-infrared, and implements both plane-parallel and spherical radiative transfer. We outline the inputs and internals of Korg, and compare synthetic spectra from Korg, Moog, Turbospectrum, and SME. The disagreements between Korg and the other codes are no larger than those between the other codes, although disagreement between codes is substantial. We examine the case of a C 2 band in detail, finding that uncertainties on physical inputs to spectral synthesis account for a significant fraction of the disagreement. Korg is 1-100 times faster than other codes in typical use, compatible with automatic differentiation libraries, and easily extensible, making it ideal for statistical inference and parameter estimation applied to large data sets. Documentation and installation instructions are available at https://ajwheeler.github.io/Korg.jl/stable/.
DOI: 10.3847/1538-3881/acaaad
PDF: https://export.arxiv.org/pdf/2211.00029v1.pdf
arXiv: 2211.00029
Korg: a modern 1D LTE spectral synthesis package
November 2, 2022 (draft version; typeset using LaTeX, AASTeX63 two-column style)
Keywords: spectroscopy

INTRODUCTION

Improvements in instrumentation have yielded exponential growth in the amount of spectral data to analyse. Creating analysis pipelines that can keep up with this data is nontrivial.
There are several extant codes for 1D LTE spectral synthesis, including Turbospectrum (Plez et al. 1993; Plez 2012; Gerber et al. 2022), Moog (Sneden 1973; Sneden et al. 2012), SYNTHE (Kurucz 1993; Sbordone et al. 2004), SME (Valenti & Piskunov 1996, 2012; Piskunov & Valenti 2017; Wehrhahn et al. 2022), SPECTRUM (Gray & Corbally 1994), and SYNSPEC (Hubeny & Lanz 2011; Hubeny et al. 2021). While they have enabled a huge volume of research, these codes can be difficult to use for the uninitiated, and require input and output through custom file formats, impeding integration into analysis code. Here we present Korg, a new 1D LTE synthesis package, written in Julia and suitable for easy integration with scripts and use in an interactive environment. As the first such new code in more than two decades, Korg benefits from numerical libraries not available at the time the earlier packages were authored, principally modern automatic differentiation packages and optimization libraries.

The two fundamental assumptions made by Korg are that the stellar atmosphere is hydrostatic and 1D, and that it is in LTE (local thermodynamic equilibrium). Eliminating the first two assumptions means calculating the atmospheric structure using the full hydrodynamic equations (e.g. Freytag et al. 2012; Magic et al. 2015; Schultz et al. 2022), or the magnetohydrodynamic equations (e.g. Vögler et al. 2005) if the internal magnetic field is strong. Fortunately, corrections to 1D LTE level populations (e.g. Amarsi et al. 2020, 2022), equivalent widths, or abundances (e.g. Lind et al. 2011; Amarsi et al. 2015; Bergemann et al. 2012; Osorio & Barklem 2016) can be calculated from NLTE simulations. These can be applied to LTE results or codes to produce approximate NLTE spectra at relatively little computational cost. Additionally, biases from the assumption of LTE roughly cancel for similar stars, yielding differential abundance estimates with high precision.
The performance of spectral synthesis codes is most important when fitting observational data. Because synthesis must be embedded in an inference loop, the analysis of a single spectrum may trigger tens or hundreds of syntheses. Even when many parameters may be estimated by interpolating over (or otherwise comparing to) a precomputed grid of spectra (e.g. Recio-Blanco et al. 2006; Smiljanic et al. 2014; Holtzman et al. 2018; Boeche et al. 2021; Buder et al. 2021), individual abundances are best measured with targeted syntheses of individual lines. Furthermore, generating a grid from which to interpolate can be computationally expensive. Inference and optimization can also be sped up by fast and accurate derivatives of the function being sampled or minimized, most easily produced via automatic differentiation. Korg is designed to be compatible with automatic differentiation packages (e.g. ForwardDiff), which can provide derivative spectra in roughly the same amount of time required for a single synthesis (as discussed in Section 4; see Figure 1).

[Figure 1 (caption fragment): Vertical offsets have been applied, but the relative scaling is preserved, which is why some derivative spectra appear flat here (but show fluctuations at a sub-percent level).]

DESCRIPTION OF CODE

To synthesize a spectrum, Korg calculates the number density of each species (e.g. H I, C II, CO) at each layer in the atmosphere (Section 2.2), then computes the absorption coefficient at each wavelength and atmospheric layer due to continuum (Section 2.3) and line absorption (Section 2.4). Given the total absorption coefficient at each wavelength and atmospheric layer, it then solves the radiative transfer equation to produce the flux at the stellar surface (Section 2.5).

Inputs

Korg takes as inputs a model atmosphere, a linelist, and abundances for each element in A(X) form,¹ assumed to be constant throughout the atmosphere. Korg includes functions for parsing .mod-format MARCS (Gustafsson et al.
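As a toy illustration of why forward-mode automatic differentiation is cheap, the following sketch (ours, not Korg's or ForwardDiff's actual implementation) propagates a derivative through a made-up one-parameter "flux" model using dual numbers; ForwardDiff applies the same idea to Korg's full synthesis, which is why a derivative spectrum costs about the same as one synthesis.

```python
import math

class Dual:
    """Minimal forward-mode dual number: a value plus one derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def __neg__(self):
        return Dual(-self.val, -self.der)

def dexp(x):
    # chain rule through exp
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# Hypothetical one-line "flux" model: F(A) = exp(-0.2 * A), with A an
# abundance-like knob (illustrative only, not a real spectral model).
def flux(A):
    return dexp(-(0.2 * A))

F = flux(Dual(1.5, 1.0))  # seed the derivative dA/dA = 1
# One forward pass yields both the value and the exact derivative.
assert abs(F.val - math.exp(-0.3)) < 1e-12
assert abs(F.der - (-0.2 * math.exp(-0.3))) < 1e-12
```

The marginal cost of each extra derivative direction is one more slot carried through the same arithmetic, which is the scaling reported for Korg's gradient spectra in Section 4.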
2008) model atmospheres, and linelists in the formats supplied by VALD (Piskunov et al. 1995; Kupka et al. 1999, 2000; Ryabchikova et al. 2015; Pakhomov et al. 2019) or Kurucz (Kurucz 2011), or those accepted by Moog. When parsing VALD linelists, Korg will automatically apply corrections to unscaled log(gf) values for transitions with isotope information, using the isotopic abundance values supplied by Berglund & Wieser (2011), for linelists which are not already adjusted for isotopic abundances. Korg detects and uses packed Anstee, Barklem, and O'Mara (ABO; Anstee & O'Mara 1991; Barklem et al. 2000a) broadening parameters if they are provided (as VALD optionally does). It uses vacuum wavelengths internally, and will automatically convert air-wavelength VALD linelists to vacuum (Birch & Downs 1994). Users may also construct linelists or model atmospheres using custom code and pass them directly into Korg.

Chemical equilibrium

Korg uses the assumption of LTE, i.e. that in a sufficiently small region, baryonic matter is described by thermal distributions, and radiation is only slightly out of detailed balance. The source function (the ratio of the per-volume emission and absorption coefficients) is dominated by collisions of baryonic matter and is Planckian. While non-LTE (NLTE) calculations are important for producing the least biased possible model spectra, they are prohibitively slow, and not yet suitable for applications that require computing many spectra over large wavelength ranges.

In order to compute the absorption coefficient, α_λ, at each layer of the atmosphere, Korg must solve for the number density of each species. Treating the number density of each neutral atomic species as the free parameters, Korg uses the NLsolve library (Mogensen et al.
2020) to solve the system of Saha and molecular equilibrium equations, with the temperature, total number density, and number density of free electrons set by the model atmosphere, and the total abundance of each element set by the user. By default, Korg uses molecular equilibrium constants from Barklem & Collet (2016) (the ionization energies are originally from Haynes 2010), but alternatives can be passed by the user. These are defined only up to 10,000 K, so we treat the molecular partition functions as constant above this temperature. This is unlikely to be problematic, as few molecules are present above this threshold. The default atomic partition functions are calculated using energy levels from NIST² (Kramida & Ralchenko 2021).

In dense environments, upper energy levels are perturbed or dissolved. This can be crudely accounted for by truncating the terms of the partition function (e.g. Hubeny & Mihalas 2014, section 4.1), or with the "occupation probability formalism" developed in Hummer & Mihalas (1988) and generalized by Hubeny et al. (1994). This effect is most important for hydrogen, where it strongly impacts the partition function above 10,000 K. Based on energy-level truncation, the partition functions of lithium and vanadium are affected in the same regime at the 5% and 3% level, respectively, but other elements are unchanged at the 1% level. For FGK stars, these effects are most important for higher-order hydrogen transitions. Figure 2 shows the occupation probability correction factor, w, for several energy levels of hydrogen in the solar atmosphere. For n = 1, 2, 3, the correction is smaller than 10⁻³ everywhere. At present, we do not include these effects in Korg, because they do not make a large difference for FGK stars, and because of disagreement with observational data (see Section 3.2).
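The Saha step of the equilibrium solve can be illustrated in isolation. The sketch below (illustrative only, not Korg's solver; the photospheric temperature and electron density are round numbers we chose) evaluates the Saha equation for hydrogen ionization at solar-photosphere-like conditions, with the temperature and electron density treated as given, as they are in Korg's solve:

```python
import math

# SI constants
m_e, k_B, h = 9.109e-31, 1.381e-23, 6.626e-34
chi_H = 13.6 * 1.602e-19           # H I ionization energy (J)

def saha_ratio(T, n_e, U_i=2.0, U_ii=1.0, chi=chi_H):
    """n(H II)/n(H I) from the Saha equation at temperature T (K) and
    electron number density n_e (m^-3)."""
    phase_space = (2.0 * math.pi * m_e * k_B * T / h**2) ** 1.5
    return 2.0 * (U_ii / U_i) * phase_space * math.exp(-chi / (k_B * T)) / n_e

ratio = saha_ratio(5778.0, 1e19)   # ~solar photosphere (illustrative n_e)
# Hydrogen is overwhelmingly neutral there: roughly one part in 10^4 ionized.
assert 1e-5 < ratio < 1e-3
```

In the full problem this relation is one equation in a coupled nonlinear system (one Saha equation per ionization stage of each element, plus one equilibrium equation per molecule), which is why a root-finding library is used rather than a closed-form solution.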
By default, Korg solves the equilibrium equations taking into account all elements up to uranium (as neutral, singly ionized, and doubly ionized species), as well as 247 diatomic molecules. As they exist in extremely small quantities in LTE atmospheres, Korg neglects species which are more than doubly ionized. It also neglects ionic and triatomic molecules, which are present in cool stars, although we plan to address this in a future version.

² https://physics.nist.gov/PhysRefData/ASD/levels_form.html
³ Note that these are named as though the electron were bound, even for free-free interactions. For example, "H I bf" refers to the ionization of neutral hydrogen, and "H I ff" refers to absorption by the interaction of free electrons and free protons.
⁴ https://norad.astronomy.osu.edu/#AtomicDataTbl1
⁵ https://cds.u-strasbg.fr/topbase/topbase.html

Continuum absorption

Korg computes contributions to the continuum absorption coefficient from a number of sources, listed in Table 1. Continuum absorption mechanisms involving an atom and an electron can be classified as bound-free (bf) or free-free (ff), depending on whether the electron is initially bound to the atom.³ In addition to bound-free and free-free absorption for H⁻, H I, He II, and H₂⁺ (as well as He⁻ and He), Korg includes treatments of bound-free and free-free interactions with metals. Free-free interactions are treated with the hydrogenic approximation using Gaunt factors from van Hoof et al. (2014), with corrections from Peach (1970) for neutral He, C, Si, and Mg. Bound-free interactions are treated with cross-sections from NORAD⁴ when available, and TOPBase⁵ (Seaton et al. 1994) otherwise. Following Gustafsson et al. (2008), we shifted the theoretical energy levels to the empirical values from NIST, leaving out levels for which an empirical counterpart wasn't present. We considered all species with bound-free interactions included in Gustafsson et al.
(2008), but included only those which contribute to the total continuum absorption above the 10⁻³ level anywhere in any of the atmospheres used in Section 3. Figure 3 shows the contribution of various mechanisms to the continuum absorption coefficient at the solar photosphere.

At present, Korg treats all scattering as absorption, which is correct in the LTE regime. In the future we plan to support quasi-LTE radiative transfer with isotropic scattering, which will yield more accurate spectra when Rayleigh scattering dominates or is a significant source of opacity.

Figure 4 shows the ratio of the continuum absorption according to Korg and the one used by MARCS through the solar atmosphere. Agreement is good, except in the violet and ultraviolet, where Korg uses more recent bound-free metal absorption coefficients with more structure in wavelength, which shows up as vertical bands in the figure. Compounding this, the MARCS values also include the effects of line blanketing, which is handled via the linelist in Korg. The agreement between the continua calculated by Korg and other codes is good (see Section 3).

Line absorption

The contribution of each line to the total absorption coefficient is

\alpha_\lambda^{\rm line} = \sigma n \, \frac{e^{-\beta E_{\rm lo}} - e^{-\beta E_{\rm up}}}{U(T)} \, \phi(\lambda),    (1)

where the wavelength-integrated cross-section, σ, is given by

\sigma = g_{\rm lo} f \, \frac{\pi e^2}{m_e c^2} \, \lambda_0^2.    (2)

Here, n is the number density of the line's species, E_up and E_lo are the upper and lower energy levels of the transition, β = 1/kT is the thermodynamic beta, φ is the normalized line profile, U is the species partition function, T is the temperature, f is the oscillator strength, g_lo is the degeneracy of the lower level, λ₀ is the line center, and m_e is the electron mass. For all lines of all species besides hydrogen (including autoionizing lines), φ is approximated with a Voigt profile using the numerical approximation from Hunger (1956).
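Equations 1 and 2 can be exercised with a toy line. The sketch below (illustrative numbers throughout; a pure Gaussian Doppler core stands in for the full Voigt profile) checks that the adopted profile is normalized, which is what makes σ the wavelength-integrated cross-section:

```python
import math

def gaussian_profile(lam, lam0, sigma):
    """Normalized Gaussian line profile (the Doppler core of a Voigt profile)."""
    return (math.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

lam0, sigma = 5.0e-5, 5.0e-10      # line center and width in cm (5000 A, 0.05 A)
dlam = 1.0e-11                     # 0.001 A grid spacing
grid = [lam0 + (i - 500) * dlam for i in range(1001)]

# The integral of phi over wavelength should be ~1, so that alpha_line
# integrates to sigma * n * (Boltzmann factors) / U, as in eq. 1.
integral = sum(gaussian_profile(l, lam0, sigma) for l in grid) * dlam
assert abs(integral - 1.0) < 1e-3
```

The Voigt profile used in practice obeys the same normalization; only its wings fall off more slowly, which is why the truncation criterion described below is needed.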
The width of the Gaussian component is

\sigma_D = \frac{\lambda_0}{c} \sqrt{\frac{2kT}{m} + \xi^2},    (3)

where m is the species mass, and ξ is the microturbulent velocity, a fudge factor used to account for convective Doppler shifts. The width of the Lorentz component (in frequency rather than wavelength units) is given by

\Gamma = \Gamma_{\rm rad} + \gamma_{\rm stark} n_e + \gamma_{\rm vdW} n_{\rm H\,I},    (4)

where Γ_rad is the radiative broadening parameter, γ_stark the per-electron Stark broadening parameter, γ_vdW the per-neutral-hydrogen van der Waals broadening parameter, and n_e and n_H I are the number densities of free electrons and neutral hydrogen, respectively. We neglect pressure broadening of molecular lines, setting γ_stark and γ_vdW to zero. When Γ_rad is not supplied in the linelist, Korg approximates it with

\Gamma_{\rm rad} = \frac{8\pi^2 e^2}{m c \lambda^2} \, gf,    (5)

where m is the mass of the atom or molecule, g is the degeneracy of the lower level of the transition, and f is the transition oscillator strength. This can be obtained by assuming that spontaneous de-excitation dominates the transition's energy uncertainty, and that the upper level's degeneracy is unity.

The values of γ_stark and γ_vdW at 10⁵ K, γ⁰_stark and γ⁰_vdW, are provided by the linelist, then scaled by their temperature dependence according to semiclassical impact theory (e.g. Hubeny & Mihalas 2014, ch. 8.3) to obtain the per-particle broadening parameters:

\gamma_{\rm stark} = \gamma^0_{\rm stark} \left(\frac{T}{10^5\,{\rm K}}\right)^{1/6}    (6)

\gamma_{\rm vdW} = \gamma^0_{\rm vdW} \left(\frac{T}{10^5\,{\rm K}}\right)^{3/10}    (7)

If provided, ABO parameters, which describe a more nuanced temperature dependence of broadening by neutral hydrogen, will be used to calculate γ_vdW instead. When a Stark broadening parameter is not provided in the linelist, the approximation from Cowley (1971) is used.
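A direct transcription of the scaling laws in equations 6 and 7, with made-up reference values for γ⁰ (the absolute numbers are placeholders; only the temperature scaling is the point):

```python
def scale_broadening(gamma0_stark, gamma0_vdw, T):
    """Scale linelist broadening parameters from their 1e5 K reference values
    to temperature T using the semiclassical exponents of eqs. 6-7."""
    gamma_stark = gamma0_stark * (T / 1e5) ** (1.0 / 6.0)
    gamma_vdw = gamma0_vdw * (T / 1e5) ** (3.0 / 10.0)
    return gamma_stark, gamma_vdw

# illustrative (placeholder) per-particle parameters at the solar photosphere
gs, gv = scale_broadening(1.0e-15, 1.0e-7, 5778.0)
# cooler than the 1e5 K reference, so both are scaled down
assert gs < 1.0e-15 and gv < 1.0e-7
# the vdW exponent (3/10) shrinks its value more than the Stark exponent (1/6)
assert (gv / 1.0e-7) < (gs / 1.0e-15)
```

The resulting per-particle values are then multiplied by n_e and n_H I, as in equation 4, to obtain the Lorentz width for each atmospheric layer.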
When a van der Waals broadening parameter is not provided, a form of the Unsöld approximation (Unsöld 1955; Warner 1967) is used, in which the angular momentum quantum number is neglected and the mean square radius, \overline{r^2}, is approximated by

\overline{r^2} = \frac{5}{2} \frac{n_{\rm eff}^2}{Z^2},    (8)

where n_eff is the effective principal quantum number and Z is the atomic number. Pressure broadening is neglected for autoionizing lines with no provided parameters.

The absorption coefficient for each line (except those of hydrogen) is calculated over a dynamically determined wavelength range. The maximum detuning (wavelength difference from the line center) is set to the value at which a pure Gaussian or pure Lorentzian profile takes on a value of α_crit, whichever is larger. By default, Korg truncates profiles at α_crit = 10⁻³ α_cntm, where α_cntm is the local continuum absorption coefficient. This ratio, α_crit/α_cntm, can be set by the user.

The broadening of hydrogen lines is treated separately. We use the tabulated Stark broadening profiles from Stehlé & Hutcheon (1999), which are pre-convolved with Doppler profiles. For Hα, Hβ, and Hγ, where self-broadening is important, we add to φ a Voigt profile using the p-d approximation for self-broadening from Barklem et al. (2000b).

Radiative transfer

Given the absorption coefficient, α_λ, at each wavelength and atmospheric layer, the final step is to solve the radiative transfer equation:

\frac{\mu}{\alpha_\lambda} \frac{dI_\lambda}{dz} = S_\lambda - I_\lambda    (9)

in a plane-parallel atmosphere, and

\frac{\mu}{\alpha_\lambda} \frac{\partial I_\lambda}{\partial r} + \frac{1 - \mu^2}{\alpha_\lambda r} \frac{\partial I_\lambda}{\partial \mu} = S_\lambda - I_\lambda    (10)

in a spherical atmosphere, where I_λ is the intensity of radiation at wavelength λ, S_λ the source function, α_λ the absorption coefficient, z the negative depth into the atmosphere, r the distance to the center of the star, and μ the cosine of the angle between the r/z axis and the line of sight.
When the thickness of the atmosphere is small relative to the stellar radius, curvature can be neglected and a plane-parallel atmosphere is a good approximation; otherwise sphericity must be taken into account. By default, Korg does its radiative transfer calculations in the same geometry as the model atmosphere. What Korg actually returns is the disk-averaged intensity, i.e. the astrophysical flux,

\mathcal{F}_\lambda = 2\pi \int_0^1 \mu \, I_\lambda^{\rm top}(\mu) \, d\mu.    (11)

Here, I_λ^top(μ) stands for I_λ(z = 0, μ) in the plane-parallel case and I_λ(r = R, μ) in the spherical case, where R is the radius of the outermost atmospheric layer. The total flux from the star is then F_λ = (R/d)² \mathcal{F}_λ, where d is the distance to the star.

Since Korg assumes that the stellar atmosphere is in LTE, S_λ is the blackbody spectrum. In the plane-parallel case,

\mathcal{F}_\lambda = 2\pi \int_0^{\tau_\lambda^*} S_\lambda(\tau_\lambda) E_2(\tau_\lambda) \, d\tau_\lambda,    (12)

where E₂ is the second-order exponential integral, τ_λ is the optical depth (dτ_λ = α_λ dz), and τ_λ^* is the optical depth at the bottom of the atmosphere. The spherical case admits no further analytic simplifications. When using a spherical model atmosphere, to obtain the astrophysical flux, Korg calculates the intensity, I_λ^top, at a discrete grid of μ values by integrating along rays from the lowest atmospheric layer to the top atmospheric layer for many surface μ values. This is valid since the source function is isotropic under LTE. Rays which do not intersect the lowest atmospheric layer are cast from the far side of the star. The integral over μ (Equation 11) is performed using Gauss–Legendre quadrature.

Korg has two radiative transfer calculation schemes: a lower-order default, and the quadratic Bézier scheme from de la Cruz Rodríguez & Piskunov (2013). They agree at the sub-percent level in flux, and complications with the Bézier scheme led us to make the lower-order scheme the default. Appendix A contains numerical details and an extended discussion of the calculation of transfer integrals.
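The μ integral of Equation 11 can be sketched with the same Gauss–Legendre approach (a minimal version of ours, not Korg's implementation; the emergent intensity here is a hypothetical linear limb-darkening law, for which two-point quadrature is already exact):

```python
import math

def astrophysical_flux(I_top):
    """Disk-average the emergent intensity: F = 2*pi * int_0^1 mu*I(mu) dmu,
    using 2-point Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]."""
    nodes = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))  # G-L nodes on [-1, 1]
    weights = (1.0, 1.0)
    total = 0.0
    for x, w in zip(nodes, weights):
        mu = 0.5 * (x + 1.0)       # map node to [0, 1]; Jacobian is 1/2
        total += 0.5 * w * mu * I_top(mu)
    return 2.0 * math.pi * total

# Hypothetical linear limb-darkening intensity (illustrative coefficients)
I = lambda mu: 1.0 + 0.6 * mu
F = astrophysical_flux(I)
# exact value: 2*pi * (1/2 + 0.6/3)
assert abs(F - 2.0 * math.pi * (0.5 + 0.2)) < 1e-12
```

Two-point Gauss–Legendre quadrature is exact for polynomials up to degree three, which covers the degree-two integrand μ·I(μ) here; Korg uses more nodes because the true emergent intensity is not polynomial in μ.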
COMPARISON TO OTHER CODES

To test Korg's correctness, we compare its spectra to those produced by Moog, Turbospectrum, and SME for four sets of parameters (T_eff, log g, and metallicity). The first set of parameters is solar, and we have chosen the others to be similar to three well-studied stars: the red giant branch star Arcturus, HD122563 (an extremely metal-poor red giant), and HD49933 (an F-type main-sequence star), but shifted slightly to the nearest existing MARCS model atmosphere to avoid the need to interpolate the atmospheres. For all stars, the same solar abundance pattern was assumed, which is not problematic since we are comparing synthetic spectra to each other and not to observational data.

For the first four regions, we used an "extract all" linelist from VALD, which includes all known lines within the wavelength bounds. The near-infrared spectrum was synthesized using the APOGEE DR16 linelist (Smith et al. 2021). The VALD linelists are pre-adjusted for isotopic abundances, which eliminates a possible source of disagreement. For the APOGEE linelist, Turbospectrum's runtime isotopic adjustment may not be consistent with Korg's or with the pre-adjustment performed for SME or Moog, but agreement is generally good nevertheless. As Moog has a limit of 2500 lines which can contribute at a given wavelength, we culled the weakest TiO lines from each list, reducing the total number of transitions passed to Moog to at most 10,000. This is expected to have negligible impact on the spectrum. Since Korg (see Section 2.3) and Moog (excepting the fork presented in Sobeck et al. 2011) do not support true scattering, we set the PURE-LTE flag to true in babsma.par, setting Turbospectrum to turn off this functionality. Scattering can't be turned off in SME, which presumably explains some of the disagreement between it and the other codes.
Disagreement between the codes is substantial, with fluxes disagreeing at the ten percent level in many cases, as established by Blanco-Cuaresma (2019). Agreement is strongest in the infrared, where continuum opacities are simpler. We discuss some of the discrepancies in more detail here.

Continuum absorption in the violet and ultraviolet

Near the Balmer jump and blueward, agreement between the synthetic continua is generally poor. This is the primary cause of the disagreement between the codes in the first two wavelength regions, which are at the shortest wavelengths. The defined structure seen in the Korg continuum is due to resonances in the metal bf cross-sections, not lines. Unfortunately, reliable spectral synthesis in the blue and ultraviolet remains out of reach, as metal opacities are theoretically very challenging to predict at the energy resolution required (not to mention the difficult-to-model effects of line blanketing).

Balmer series

There are eight Balmer series lines in the 3660 Å – 3670 Å (vacuum) wavelength region (upper levels n = 26 to n = 33), five of which are included in Korg (those up to n = 30). Typically, these have a minimal impact on the observed spectrum, but in the metal-poor giant, Korg predicts that these lines are visible as broad shallow features, in contrast to Turbospectrum (hydrogen lines are omitted from the Moog linelist in this region). They are located at 3668.76 Å, 3667.17 Å, 3665.75 Å, 3664.48 Å, and 3663.33 Å (vacuum), and are more clearly visible in the residuals panel. This is because Turbospectrum uses the occupation probability formalism discussed in Section 2.2, which eliminates the relevant hydrogen orbitals throughout most of the stellar atmospheres. In a high-resolution ultraviolet spectrum from the Ultraviolet and Visual Echelle Spectrograph⁶ (Dekker et al. 2000) of HD122563, a star with similar parameters, the lines are present.
For this reason, we are not confident that the occupation probability formalism offers a significant advantage over unmodified partition functions, and we do not use it in Korg. The top-left panel in Figure 9 shows part of this wavelength region in more detail, along with the observational data. SME predicts the strongest absorption by these lines in the metal-poor giant, but it also predicts strong absorption for the other stars, where it is not present in observations. The bottom-left panel of Figure 9 demonstrates this for the Sun, using the Wallace et al. (2011) solar atlas.

There is relatively good agreement between Korg, Turbospectrum, and SME in the Hα wings (Moog lacks special treatment of hydrogen lines, and generally disagrees). Turbospectrum and SME both use forms of HLINOP (Barklem & Piskunov 2003), so strong agreement is to be expected (though they presumably have differing treatments of the equation of state). Korg uses roughly the same treatment as HLINOP, with Stark broadening from Stehlé & Hutcheon (1999) and self-broadening via the Voigt approximation from Barklem & Piskunov (2003), but Figure 9 indicates that there are differences in implementation. For the metal-poor giant and hot dwarf (Figures 7 and 8), Turbospectrum and SME agree with each other almost exactly, and disagree with Korg at the ∼1% level in the Hα wings. In contrast, Korg and Turbospectrum agree more closely with each other than they do with SME for the Sun. In the Hα core, disagreement is similarly minor, except in the metal-poor giant. The right panel in Figure 9 shows this in detail, alongside the observed spectrum of HD122563, a star with similar parameters, from GALAH (Buder et al. 2021). Given that accurate modelling of the Hα core requires techniques beyond 1D LTE (e.g. Barklem 2007; Amarsi et al. 2018), and that the observational data doesn't match the prediction of any of the codes, we consider this disagreement permissible.
C₂ band

While tracking down the causes of disagreement between codes for all wavelengths and stars isn't feasible, it is instructive to focus on one as a case study. In the Sun, Korg predicts deeper lines from roughly 5160 Å to 5165 Å than the other codes. These are due to absorption by C₂, and the disagreement arises from the varying molecular equilibrium constants,

K_{\rm C_2} = \frac{p_{\rm C}^2}{p_{\rm C_2}},    (13)

adopted for the species by the codes. Most of the difference in K_C₂ values comes from different values of the dissociation energy, D₀, of C₂. Molecular dissociation energies are one of the most dominant sources of systematic uncertainty in spectral synthesis (see the discussion in Barklem & Collet 2016). To summarize the treatment of K_C₂ in each code:

• Korg uses the data from Barklem & Collet (2016): D₀ = 6.371 eV.
• Based on logging information, Turbospectrum appears to use the polynomial expansions from Tsuji (1973): estimated D₀ = 6.07 eV.
• The Moog molecular equilibrium constants come from Kurucz.⁷ We were able to extract the polynomial approximation for the C₂ molecular equilibrium constant from the source code: D₀ = 6.21 eV.
• For C₂, the SME equilibrium constant comes from Sauval & Tatum (1984): D₀ = 6.297 eV.

Figure 10 shows the effects of these choices of K_C₂. They account for nearly all of the disagreement in the synthesized spectra. While the numerical scheme used to represent the temperature dependence of K_C₂ plays some role, the value of D₀ adopted is far more important. The differing D₀ values result in number densities of C₂ differing by up to a factor of 2 at the photosphere, and by orders of magnitude at the top of the atmosphere.

BENCHMARKS

The first panel in Figure 11 shows the time taken by each code to produce the spectra in the first four wavelength regions in Section 3. These are relatively small regions with linelists including all transitions in VALD, including many which have essentially no effect on the spectrum.
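Returning to the C₂ case study above: because p_C₂ = p_C²/K_C₂ and K_C₂ carries a Boltzmann factor proportional to exp(−D₀/kT), the C₂ number density scales roughly as exp(D₀/kT) at fixed atomic carbon pressure. A back-of-the-envelope sketch (the photospheric temperature is a round number we chose; this is an order-of-magnitude argument, not Korg's full equilibrium solve) reproduces the factor-of-2 photospheric difference quoted in Section 3.3:

```python
import math

k_B_eV = 8.617e-5                  # Boltzmann constant in eV/K
T_phot = 5778.0                    # rough solar photospheric temperature (K)

def c2_density_factor(d0_a, d0_b, T):
    """Approximate ratio of C2 number densities implied by two D0 values,
    holding the atomic carbon partial pressure fixed: n_C2 ~ exp(D0 / kT)."""
    return math.exp((d0_a - d0_b) / (k_B_eV * T))

factor = c2_density_factor(6.371, 6.07, T_phot)  # Korg vs. Turbospectrum D0
assert 1.5 < factor < 2.2          # roughly a factor of 2 at the photosphere
```

At the much cooler upper layers of the atmosphere, the same exponent divided by a smaller kT produces the orders-of-magnitude divergence noted above.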
[Figure 10 caption: Barklem & Collet (2016) data adjusted for the D₀ used by another code. Uncertainty in D₀ drives most of the disagreement in the amount of C₂ present at the photosphere, and most of the disagreement in the synthesized spectra at these wavelengths.]

The second panel shows the time to compute a spectrum from 15,000 Å to 15,500 Å using the APOGEE DR16 linelist. This provides a more realistic example of synthesizing a spectrum as part of the analysis for a large survey. All tests are single-core, and run on the same machine with an AMD Epyc 7702P.

Korg only needs to load the linelist and model atmosphere into memory once, so repeat syntheses which use the same inputs (as would be the case when varying individual abundances) avoid that step. For these comparisons, loading the linelist (10⁵–10⁶ transitions) takes 1–4 seconds, and loading the model atmosphere takes roughly 0.05 ms. As nearly all cases requiring performance involve repeated synthesis with the same (or nearly the same) linelist, we did not optimize input parsing for speed and do not include this time in the Korg benchmarks. We note that for the small wavelength regions, the Moog times are technically lower limits, since the linelists provided to Moog were reduced in size by an order of magnitude or more (though the synthesized spectra are essentially unchanged).

Figure 11 has separate markers for Turbospectrum with and without hydrogen lines, as we found them to slow down synthesis by a very large factor. In wavelength ranges where hydrogen lines are not present or unimportant, they can be omitted, and Turbospectrum executes much more quickly. The comparison to SME is included for completeness, but is not entirely fair, since SME includes the effects of scattering, which is numerically expensive. Scattering is turned off for Turbospectrum to make the comparison as fair as possible.
In the top panel of Figure 12 we plot the time required by Korg to compute the gradient of the solar spectrum, ∂F/∂A(X), with respect to a varying number of element abundances, N. (See Figure 1 for a subset of the derivative spectra.) For this demonstration we used the 15,000–15,500 Å range and the APOGEE DR16 linelist. A dashed horizontal line marks the time required to synthesize the spectrum, just under one second. Calculating the N-dimensional gradient spectrum takes roughly 2 + 0.15N seconds, meaning that the marginal time required for each derivative is an order of magnitude smaller than the time required to obtain it via finite differences. As Julia's automatic differentiation ecosystem improves, Korg may benefit from further speedups in the calculation of gradient spectra. Calculating derivative spectra with respect to atmospheric parameters, e.g. T_eff, log g, or v_mic, requires code to interpolate between model atmospheres, which we plan to add to Korg.

CONCLUSIONS AND FUTURE DEVELOPMENT

We have presented a new code, Korg, for 1D LTE spectral synthesis, i.e. computing stellar spectra given a model atmosphere, linelist, and element abundances. Korg is both fast (faster than all the other codes tested) and auto-differentiable, which yields further speedups when synthesis is embedded in an optimization loop. The code is publicly available at https://github.com/ajwheeler/Korg.jl and installable via the Julia package manager. Detailed documentation, usage examples, and installation instructions are available at https://ajwheeler.github.io/Korg.jl/stable.

In comparing Korg to other codes, we have highlighted that the level of disagreement between them (and thus the systematic error) is substantial. Much of this can be attributed to uncertain physical parameters (e.g. the C₂ dissociation energy, discussed in Section 3.3) and
[Figure 11 caption: The time taken by each code for the syntheses in Section 3; legend: Sol, Arcturus, HD122563, HD49933.]
[Figure 11 caption, continued: top: average compute time for the optical wavelength regions, which used very dense linelists; bottom: compute time to synthesize spectra from 15,000 Å to 15,500 Å with the APOGEE DR16 linelist, a more realistic use case. Note that the wavelength regions synthesized were larger than those plotted in Figures 5–8. SME includes a true scattering treatment which can't be turned off, increasing its execution times.]

models of continuum absorption processes, but varying numerical methods also play a role (e.g. the discussion in Appendix A). Unfortunately, similar problems extend to model atmospheres, which require the same physics used for synthesis, and to linelists, which include many poorly-constrained atomic parameters. We have shown in this work that Korg produces results for FGK stars that are similar to those of other codes. We recommend its use for these spectral classes. We plan to soon address the factors limiting Korg's applicability outside this regime, most importantly its lack of polyatomic molecules (present in cool stars) and true scattering (important in hot stars). We also plan to add support for NLTE departure coefficients.

There are also limitations affecting ease-of-use that we plan to address. As mentioned in Section 4, calculating derivative spectra with respect to atmospheric parameters like log g or T_eff requires code, written in Julia, to interpolate model atmospheres. Relatedly, while inferring stellar abundances and parameters with Korg is not overly burdensome, the user must still apply the line spread function to synthesized spectra and calculate goodness-of-fit themselves, processes we would like to automate.

There are some limitations which apply to all spectroscopic synthesis codes. In the blue and ultraviolet, the poorly-constrained continuum results in very uncertain predictions, manifesting in the poor agreement found between codes.
Many of the chemical equilibrium constants necessary for spectroscopic synthesis are also poorly-constrained, as discussed in Barklem & Collet (2016). There are doubtless many features (like the one discussed in Section 3.3) where chemical equilibrium constants are the primary driver of uncertainty. Uncertain atomic parameters in linelists pose a similar problem in all regimes.

Our goal is that Korg will be useful both for inference of stellar parameters and abundances from large survey data, e.g. Gaia (Gaia Collaboration et al. 2016), SDSS-V (Kollmeier et al. 2017), MOONS, WEAVE (Dalton et al. 2012), 4MOST (de Jong et al. 2019), LAMOST (Zhao et al. 2012), and GALAH (De Silva et al. 2015), and for boutique analyses of individual spectra. We aim to make Korg fast and flexible enough to enable better survey pipelines and novel analyses, such as the propagation of error from synthesis inputs to synthesized spectra, or the joint inference of line parameters with observational data. In addition, we hope that by making Korg as easy to use as possible, more researchers will find it worthwhile to synthesize spectra when the need arises.

APPENDIX A. NUMERICAL DETAILS OF RADIATIVE TRANSFER

A.1. Korg's default implementation

To solve the equation of radiative transfer along a ray, Korg approximates the source function, S (λ subscripts dropped for brevity), with linear interpolation over optical depth, τ. Between each adjacent pair of atmospheric layers, we have S(τ) ≈ mτ + b, so the (indefinite) transfer integral has the solution

∫ (mτ + b) exp(−τ) dτ = −exp(−τ) (b + m(τ + 1)).   (A1)

When evaluating equation 12, Korg uses the same approximation:

∫ (mτ + b) E₂(τ) dτ = (1/6) [τ E₂(τ)(3b + 2mτ) − exp(−τ)(3b + 2m(τ + 1))].   (A2)

Calculating the optical depth, τ, requires numerically integrating

τ_λ(s) = ∫₀^s α_λ (ᾱ₅/α₅) ds′,   (A3)

where α₅ is the absorption coefficient at the reference wavelength, 5000 Å, calculated by Korg, and ᾱ₅ is the absorption coefficient at the reference wavelength supplied by the model atmosphere. Their ratio is the correction factor to α_λ which enforces greater consistency between the model atmosphere and the spectral synthesis.
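The closed forms for the linear-source-function integrals (A1) and (A2) can be verified numerically. The sketch below (an illustration, not Korg code) uses only composite Simpson quadrature from the standard library, computing E₂ by brute-force integration of its defining integral:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

def E2(tau):
    """Second exponential integral. With the substitution t = 1/u,
    E2(tau) = ∫_1^∞ e^(-tau·t)/t² dt = ∫_0^1 e^(-tau/u) du."""
    return simpson(lambda u: math.exp(-tau / u), 1e-9, 1.0)

m, b = 0.7, 0.3    # linear source function S(τ) = mτ + b between two layers
t1, t2 = 0.2, 1.5  # optical depths of the layer pair

# (A1): ∫ (mτ + b) e^(-τ) dτ = -e^(-τ)(b + m(τ + 1))
F1 = lambda t: -math.exp(-t) * (b + m * (t + 1))
lhs1 = simpson(lambda t: (m * t + b) * math.exp(-t), t1, t2)

# (A2): ∫ (mτ + b) E2(τ) dτ = (1/6)[τ E2(τ)(3b + 2mτ) - e^(-τ)(3b + 2m(τ + 1))]
F2 = lambda t: (t * E2(t) * (3 * b + 2 * m * t)
                - math.exp(-t) * (3 * b + 2 * m * (t + 1))) / 6
lhs2 = simpson(lambda t: (m * t + b) * E2(t), t1, t2, n=200)

print(lhs1 - (F1(t2) - F1(t1)), lhs2 - (F2(t2) - F2(t1)))  # both ≈ 0
```

Both antiderivatives can also be confirmed symbolically by differentiation, using E₂′(τ) = −E₁(τ) and the recurrence τE₁(τ) = e^(−τ) − E₂(τ).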
Korg computes τ_λ by evaluating the equivalent integral

τ_λ(s) = ∫₀^{ln τ₅(s)} τ₅ (α_λ/α₅) d(ln τ₅),   (A4)

where τ₅ is the optical depth at the reference wavelength per the model atmosphere. This integral is numerically preferable, since atmospheric layers are spaced uniformly in ln τ₅, and the integrand of A4 is more nearly linear than that of A3.

A.2. Comparison to quadratic Bézier scheme

Korg has a secondary radiative transfer scheme based on de la Cruz Rodríguez & Piskunov (2013), which approximates the source function with monotonic quadratic Bézier interpolation. They suggest that τ_λ might be computed using the same scheme, but this causes numerical problems when the absorption coefficient is non-monotonic in depth. We modified the interpolant by requiring the Bézier control point (equation 10 in de la Cruz Rodríguez & Piskunov 2013) to lie within a reasonable range, which eliminates the issue but is physically unmotivated. This is the implementation available in Korg. As an alternative, we tried computing τ_λ by interpolating α_λ with a cubic spline. Unfortunately, the relatively sparse sampling of model atmospheres means that this choice has an impact on the synthesized spectrum similar to the difference between the default method and the de la Cruz Rodríguez & Piskunov (2013) method.

Figure 13 shows the difference in a portion of the solar spectrum resulting from using Bézier interpolation of the source function compared to the default implementation. The difference in synthesized spectra is at the sub-percent level, significantly smaller than the level of disagreement between codes. While we plan to adopt a higher-order scheme as the default in the future, we provide this scheme as a secondary option until we better understand the impact of the choices involved.

Figure 13. The difference in the rectified spectrum obtained for the Sun using the quadratic Bézier scheme and Korg's default scheme. The purple and brown lines show the result obtained using modified Bézier interpolation and cubic spline interpolation of α_λ, respectively. (This is separate from the interpolation of the source function, which uses Bézier curves in both cases.) The difference between all three methods is small compared to the difference in spectra obtained from different codes, but nevertheless larger than is ideal.

Figure 1. Derivatives of a synthesized solar spectrum with respect to various abundances, ∂F/∂A(X).

Figure 2. The Hummer & Mihalas (1988) occupation probability correction factor, w, for several values of the principal quantum number, n, in hydrogen, evaluated at each layer in the solar atmosphere (indexed by optical depth at 5000 Å). While the formalism affects transitions involving outer energy levels, those which don't, e.g. Hα, are unaffected. The results are similar for the other atmospheres considered in Section 3.

Figure 3. Major sources of opacity at the solar photosphere, defined here at the MARCS model atmosphere layers where the optical depth at 5000 Å is most nearly unity.

Figure 4. The ratio of the absorption coefficient per Korg and per MARCS in the solar atmosphere (indexed by Rosseland mean opacity, τ_ros). Agreement is good, except in the blue and ultraviolet, where Korg uses more recent metal opacities and the effects of line blanketing (handled separately in Korg) in MARCS become large.

The wavelength regions used for the code comparison are:
• 3660 Å - 3670 Å, near the Balmer jump
• 3935 Å - 3948 Å, in the wing of the Ca II Fraunhofer K line
• 5160 Å - 5175 Å, including two lines of the Fraunhofer b Mg triplet
• 6540 Å - 6580 Å, including Hα
• 15000 Å - 15050 Å, in the near-infrared, part of the APOGEE wavelength region
• the continuum across 2000 Å - 10000 Å, computed without any lines

Figure 5. Portions of a synthetic solar spectrum generated with Korg, Moog, Turbospectrum, and SME, as well as the continuum generated with an empty linelist from 4000 Å to 10000 Å.
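The equivalence of the depth integral (A3) and its ln τ₅ form (A4) can be checked on a toy atmosphere. In the Python sketch below (all quantities invented for illustration; Korg itself is Julia and works on real model-atmosphere grids), the model atmosphere and the synthesis are taken to be consistent, so ᾱ₅ = α₅ and only the depth-dependent ratio α_λ/α₅ matters. A coarse trapezoidal quadrature on a grid uniform in ln τ₅, mimicking model-atmosphere layers, reproduces a fine direct integration in depth:

```python
import math

H = 1.0    # toy scale height
a0 = 0.01  # toy α5 at the top of the atmosphere (s = 0)

def alpha5(s):  return a0 * math.exp(s / H)            # reference-λ absorption
def tau5(s):    return a0 * H * (math.exp(s / H) - 1)  # its optical depth
def ratio(s):   return 1.0 + 0.5 * math.exp(-s / H)    # α_λ / α_5, varying with depth
def alpha_l(s): return ratio(s) * alpha5(s)

s_max = 6.0

# "Truth": fine trapezoidal integration of eq. (A3) directly in s.
ns = 20000
hs = s_max / ns
truth = sum((alpha_l(i * hs) + alpha_l((i + 1) * hs)) * hs / 2 for i in range(ns))

# Eq. (A4): trapezoid on a coarse grid uniform in ln τ5 (like model-atmosphere layers).
n_layers = 56
lt_top = math.log(tau5(0.05))  # topmost layer, slightly below the surface
lt_bot = math.log(tau5(s_max))

def s_of_lt(lt):
    # invert τ5(s) for the toy atmosphere
    return H * math.log(math.exp(lt) / (a0 * H) + 1)

lts = [lt_top + (lt_bot - lt_top) * i / (n_layers - 1) for i in range(n_layers)]
g = [math.exp(lt) * ratio(s_of_lt(lt)) for lt in lts]  # integrand τ5 · (α_λ/α_5)
tau_lambda = sum((g[i] + g[i + 1]) * (lts[i + 1] - lts[i]) / 2
                 for i in range(n_layers - 1))

print(truth, tau_lambda)  # close, despite only ~56 layers for eq. (A4)
```

With only a few dozen layers the ln τ₅ quadrature stays within about a percent of the fine result, which is the practical point of the change of variables.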
All wavelengths are vacuum. We emphasize that the structure in the blue and ultraviolet in the Korg continuum is due to resonances in the metal bound-free cross-sections, not lines. The residuals (other − Korg) are shown underneath the rectified flux for each wavelength region. The level of agreement between Korg and the other codes is similar to their agreement with each other. See the text for a discussion of the discrepancies.

Figure 6. Same as Figure 5, but showing the synthetic spectrum of an Arcturus-like star.

Figure 7. Same as Figure 5, but showing the synthetic spectrum of an HD122563-like star.

Figure 8. Same as Figure 5, but showing the synthetic spectrum of an HD49933-like star.

Figure 9. top-left: part of the region with high-order Balmer lines for the metal-poor giant, along with the observed spectrum of HD122563. bottom-left: the same for solar parameters, alongside the observed solar spectrum. right: Hα in the metal-poor giant, along with the observed spectrum of HD122563. None of the codes correctly model the hydrogen lines in the metal-poor regime. Though SME is the closest to the observed high-order Balmer lines in the metal-poor regime, it predicts strong high-order Balmer lines in stars where they are not present. All synthetic spectra have been convolved to the observational resolution.

Figure 10. left: Some lines from a C₂ band in the Sun synthesized by the four codes, and synthesized by Korg using the prescriptions for the chemical equilibrium constant, K_C₂, from the other codes. Differences in these prescriptions, primarily arising from differences in the dissociation energy, D₀, drive most of the disagreement. right: the ratio of K_C₂ values per Barklem & Collet (2016) (Korg's adopted values) and per the treatment used by another code. The dashed lines show the values obtained using the

Figure 12. top: Time required to compute the simultaneous derivative of the spectrum with regard to N different element abundances, for varying N.

ACKNOWLEDGMENTS

AJW would like to thank Chris Sneden for his advice; Samuel Potter, Paul Barklem, Karen Lind, and Thomas Nordlander for answers to naive questions; and Rob Rutten, David F. Gray, Ivan Hubeny, Robert Kurucz, and Dimitri Mihalas for writing excellent reference material. He would also like to thank Charlie Conroy for his interest and encouragement, and David Hogg for putting him in touch with Mike O'Neil and Samuel Potter. The authors would like to thank the anonymous reviewer for their thoughtful comments and expertise. AJW is supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1644869. MKN is in part supported by a Sloan Research Fellowship. A. R. C. is supported in part by the Australian Research Council through a Discovery Early Career Researcher Award (DE190100656) and through a Monash University Network of Excellence grant (NOE170024). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. This research has made use of the services of the ESO Science Archive Facility. Based on observations collected at the European Southern Observatory under ESO programme 266.D-5655. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. The authors acknowledge support and resources from the Center for High-Performance Computing at the University of Utah.

Footnotes: A(X) = log₁₀(n_X/n_H) + 12, where n_X is the total number density of element X and n_H that of hydrogen. This spectrum (archive ID: ADP.2021-08-26T17:20:56.312). http://kurucz.harvard.edu/molecules.html
REFERENCES

Amarsi, A. M., Asplund, M., Collet, R., & Leenaarts, J. 2015, Monthly Notices of the Royal Astronomical Society, 454, L11, doi: 10.1093/mnrasl/slv122
Amarsi, A. M., Liljegren, S., & Nissen, P. E. 2022, 3D Non-LTE Iron Abundances in FG-type Dwarfs, arXiv. https://arxiv.org/abs/2209.13449
Amarsi, A. M., Nordlander, T., Barklem, P. S., et al. 2018, Astronomy and Astrophysics, 615, A139, doi: 10.1051/0004-6361/201732546
Amarsi, A. M., Lind, K., Osorio, Y., et al. 2020, Astronomy and Astrophysics, 642, A62, doi: 10.1051/0004-6361/202038650
Anstee, S. D., & O'Mara, B. J. 1991, Monthly Notices of the Royal Astronomical Society, 253, 549, doi: 10.1093/mnras/253.3.549
Barklem, P. S. 2007, Astronomy and Astrophysics, 466, 327, doi: 10.1051/0004-6361:20066686
Barklem, P. S., & Collet, R. 2016, Astronomy and Astrophysics, 588, A96, doi: 10.1051/0004-6361/201526961
Barklem, P. S., & Piskunov, N. 2003, 210, E28
Barklem, P. S., Piskunov, N., & O'Mara, B. J. 2000a, Astronomy and Astrophysics Supplement Series, 142, 467, doi: 10.1051/aas:2000167
-. 2000b, arXiv:astro-ph/0010022. https://arxiv.org/abs/astro-ph/0010022
Bautista, M. A. 1997, Astronomy and Astrophysics Supplement Series, 122, 167, doi: 10.1051/aas:1997327
Bell, K. L., & Berrington, K. A. 1987, Journal of Physics B: Atomic and Molecular Physics, 20, 801, doi: 10.1088/0022-3700/20/4/019
Bergemann, M., Hansen, C. J., Bautista, M., & Ruchti, G. 2012, Astronomy and Astrophysics, 546, A90, doi: 10.1051/0004-6361/201219406
Berglund, M., & Wieser, M. E. 2011, Pure and Applied Chemistry, 83, 397, doi: 10.1351/PAC-REP-10-06-02
Birch, K. P., & Downs, M. J. 1994, Metrologia, 31, 315, doi: 10.1088/0026-1394/31/4/006
Blanco-Cuaresma, S. 2019, Monthly Notices of the Royal Astronomical Society, 486, 2075, doi: 10.1093/mnras/stz549
Boeche, C., Vallenari, A., & Lucatello, S. 2021, Astronomy and Astrophysics, 645, A35, doi: 10.1051/0004-6361/202038973
Buder, S., Sharma, S., Kos, J., et al. 2021, Monthly Notices of the Royal Astronomical Society, 506, 150, doi: 10.1093/mnras/stab1242
Colgan, J., Kilcrease, D. P., Magee, N. H., et al. 2016, The Astrophysical Journal, 817, 116, doi: 10.3847/0004-637X/817/2/116
Cowley, C. R. 1971, The Observatory, 91, 139
Dalgarno, A. 1962, Spectral Reflectivity of the Earth's Atmosphere III: The Scattering of Light by Atomic Systems, Tech. Rep. 62-20-A, Geophysical Corporation of America
Dalgarno, A., & Kingston, A. E. 1960, Proceedings of the Royal Society of London Series A, 259, 424, doi: 10.1098/rspa.1960.0237
Dalgarno, A., & Williams, D. A. 1962, The Astrophysical Journal, 136, 690, doi: 10.1086/147428
Dalton, G., Trager, S. C., Abrams, D. C., et al. 2012, 8446, 84460P, doi: 10.1117/12.925950
de Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, The Messenger, 175, 3, doi: 10.18727/0722-6691/5117
de la Cruz Rodríguez, J., & Piskunov, N. 2013, The Astrophysical Journal, 764, 33, doi: 10.1088/0004-637X/764/1/33
De Silva, G. M., Freeman, K. C., Bland-Hawthorn, J., et al. 2015, Monthly Notices of the Royal Astronomical Society, 449, 2604, doi: 10.1093/mnras/stv327
Dekker, H., D'Odorico, S., Kaufer, A., Delabre, B., & Kotzlowski, H. 2000, 4008, 534, doi: 10.1117/12.395512
Deng, L.-C., Newberg, H. J., Liu, C., et al. 2012, Research in Astronomy and Astrophysics, 12, 735, doi: 10.1088/1674-4527/12/7/003
Freytag, B., Steffen, M., Ludwig, H. G., et al. 2012, Journal of Computational Physics, 231, 919, doi: 10.1016/j.jcp.2011.09.026
Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, Astronomy and Astrophysics, 595, A1, doi: 10.1051/0004-6361/201629272
Gerber, J. M., Magg, E., Plez, B., et al. 2022, Non-LTE Radiative Transfer with Turbospectrum
Gingerich, O. 1969, Theory and Observation of Normal Stellar Atmospheres
Gray, R. O., & Corbally, C. J. 1994, The Astronomical Journal, 107, 742, doi: 10.1086/116893
Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, Astronomy and Astrophysics, 486, 951, doi: 10.1051/0004-6361:200809724
Haynes, W. M., ed. 2010, CRC Handbook of Chemistry and Physics, 91st edn. (Boca Raton, Fla.: CRC Press)
Holtzman, J. A., Hasselquist, S., Shetrone, M., et al. 2018, The Astronomical Journal, 156, 125, doi: 10.3847/1538-3881/aad4f9
Hubeny, I., Allende Prieto, C., Osorio, Y., & Lanz, T. 2021, TLUSTY and SYNSPEC Users's Guide IV: Upgraded Versions 208 and 54
Hubeny, I., Hummer, D. G., & Lanz, T. 1994, Astronomy and Astrophysics, 282, 151
Hubeny, I., & Lanz, T. 2011, Astrophysics Source Code Library, ascl:1109.022
-. 2017, A Brief Introductory Guide to TLUSTY and SYNSPEC
Hubeny, I., & Mihalas, D. 2014, Theory of Stellar Atmospheres
Hummer, D. G., & Mihalas, D. 1988, The Astrophysical Journal, 331, 794, doi: 10.1086/166600
Hunger, K. 1956, Zeitschrift fur Astrophysik, 39, 36
John, T. L. 1994, Monthly Notices of the Royal Astronomical Society, 269, 871, doi: 10.1093/mnras/269.4.871
Karzas, W. J., & Latter, R. 1961, The Astrophysical Journal Supplement Series, 6, 167, doi: 10.1086/190063
Kollmeier, J. A., Zasowski, G., Rix, H.-W., et al. 2017, arXiv e-prints, arXiv:1711.03234
Kramida, A., & Ralchenko, Y. 2021, NIST Atomic Spectra Database, NIST Standard Reference Database 78, National Institute of Standards and Technology, doi: 10.18434/T4W30F
Kupka, F., Piskunov, N., Ryabchikova, T. A., Stempels, H. C., & Weiss, W. W. 1999, Astronomy and Astrophysics Supplement Series, 138, 119, doi: 10.1051/aas:1999267
Kupka, F. G., Ryabchikova, T. A., Piskunov, N. E., Stempels, H. C., & Weiss, W. W. 2000, Baltic Astronomy, 9, 590, doi: 10.1515/astro-2000-0420
Kurucz, R. L. 1970, SAO Special Report, 309
-. 1993, Kurucz CD-ROM (Cambridge, MA: Smithsonian Astrophysical Observatory)
-. 2011, Canadian Journal of Physics, 89, 417, doi: 10.1139/p10-104
Lee, H.-W. 2005, Monthly Notices of the Royal Astronomical Society, 358, 1472, doi: 10.1111/j.1365-2966.2005.08859.x
Lind, K., Asplund, M., Barklem, P. S., & Belyaev, A. K. 2011, Astronomy and Astrophysics, 528, A103, doi: 10.1051/0004-6361/201016095
Magic, Z., Weiss, A., & Asplund, M. 2015, Astronomy and Astrophysics, 573, A89, doi: 10.1051/0004-6361/201423760
McLaughlin, B. M., Stancil, P. C., Sadeghpour, H. R., & Forrey, R. C. 2017, Journal of Physics B: Atomic, Molecular and Optical Physics, 50, 114001, doi: 10.1088/1361-6455/aa6c1f
Mogensen, P. K., Carlsson, K., Villemot, S., et al. 2020, JuliaNLSolvers/NLsolve.jl: v4.5.1, Zenodo, doi: 10.5281/zenodo.4404703
Nahar, S. N., & Pradhan, A. K. 1991, Physical Review A, 44, 2935, doi: 10.1103/PhysRevA.44.2935
-. 1993, Journal of Physics B: Atomic, Molecular and Optical Physics, 26, 1109, doi: 10.1088/0953-4075/26/6/012
Osorio, Y., & Barklem, P. S. 2016, Astronomy and Astrophysics, 586, A120, doi: 10.1051/0004-6361/201526958
Pakhomov, Y. V., Ryabchikova, T. A., & Piskunov, N. E. 2019, Astronomy Reports, 63, 1010, doi: 10.1134/S1063772919120047
Peach, G. 1970, Memoirs of the Royal Astronomical Society, 73, 1
Piskunov, N., & Valenti, J. A. 2017, Astronomy & Astrophysics, 597, A16, doi: 10.1051/0004-6361/201629124
Piskunov, N. E., Kupka, F., Ryabchikova, T. A., Weiss, W. W., & Jeffery, C. S. 1995, Astronomy and Astrophysics Supplement Series, 112, 525
Plez, B. 2012, Astrophysics Source Code Library, ascl:1205.004
Plez, B., Smith, V. V., & Lambert, D. L. 1993, The Astrophysical Journal, 418, 812, doi: 10.1086/173438
Recio-Blanco, A., Bijaoui, A., & de Laverny, P. 2006, Monthly Notices of the Royal Astronomical Society, 370, 141, doi: 10.1111/j.1365-2966.2006.10455.x
Ryabchikova, T., Piskunov, N., Kurucz, R. L., et al. 2015, Physica Scripta, 90, 054005, doi: 10.1088/0031-8949/90/5/054005
Sauval, A. J., & Tatum, J. B. 1984, The Astrophysical Journal Supplement Series, 56, 193, doi: 10.1086/190980
Sbordone, L., Bonifacio, P., Castelli, F., & Kurucz, R. L. 2004, Memorie della Societa Astronomica Italiana Supplementi, 5, 93
Schultz, W. C., Tsang, B. T. H., Bildsten, L., & Jiang, Y.-F. 2022, Synthesizing Spectra from 3D Radiation Hydrodynamic Models of Massive Stars Using Monte Carlo Radiation Transport, arXiv. https://arxiv.org/abs/2209.14772
Schwerdtfeger, P. 2006, in Computational, Numerical and Mathematical Methods in Sciences and Engineering, Vol. 1, Atoms, Molecules and Clusters in Electric Fields (Imperial College Press), 1-32, doi: 10.1142/9781860948862_0001
Seaton, M. J., Yan, Y., Mihalas, D., & Pradhan, A. K. 1994, Monthly Notices of the Royal Astronomical Society, 266, 805, doi: 10.1093/mnras/266.4.805
Smiljanic, R., Korn, A. J., Bergemann, M., et al. 2014, Astronomy and Astrophysics, 570, A122, doi: 10.1051/0004-6361/201423937
Smith, V. V., Bizyaev, D., Cunha, K., et al. 2021, The Astronomical Journal, 161, 254, doi: 10.3847/1538-3881/abefdc
Sneden, C. 1973, The Astrophysical Journal, 184, 839, doi: 10.1086/152374
Sneden, C., Bean, J., Ivans, I., Lucatello, S., & Sobeck, J. 2012, Astrophysics Source Code Library, ascl:1202.009
Sobeck, J. S., Kraft, R. P., Sneden, C., et al. 2011, The Astronomical Journal, 141, 175, doi: 10.1088/0004-6256/141/6/175
Stancil, P. C. 1994, The Astrophysical Journal, 430, 360, doi: 10.1086/174411
Stehlé, C., & Hutcheon, R. 1999, Astronomy and Astrophysics Supplement Series, 140, 93, doi: 10.1051/aas:1999118
Thomson, J. J. 1912, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, doi: 10.1080/14786440408637241
Tsuji, T. 1973, Astronomy and Astrophysics, 23, 411
Unsold, A. 1955, Physik der Sternatmospharen, mit besonderer Berucksichtigung der Sonne
Valenti, J. A., & Piskunov, N. 1996, Astronomy and Astrophysics Supplement Series, 118, 595
-. 2012, Astrophysics Source Code Library, ascl:1202.013
van Hoof, P. A. M., Williams, R. J. R., Volk, K., et al. 2014, Monthly Notices of the Royal Astronomical Society, 444, 420, doi: 10.1093/mnras/stu1438
Vögler, A., Shelyag, S., Schüssler, M., et al. 2005, Astronomy & Astrophysics, 429, 335, doi: 10.1051/0004-6361:20041507
Wallace, L., Hinkle, K. H., Livingston, W. C., & Davis, S. P. 2011, The Astrophysical Journal Supplement Series, 195, 6, doi: 10.1088/0067-0049/195/1/6
Warner, B. 1967, 136, 8
Wehrhahn, A., Piskunov, N., & Ryabchikova, T. 2022, PySME -- Spectroscopy Made Easier, arXiv. https://arxiv.org/abs/2210.04755
Zhao, G., Zhao, Y., Chu, Y., Jing, Y., & Deng, L. 2012, arXiv:1206.3569 [astro-ph]
The Simplicial Ricci Tensor

Paul M. Alsing (Information Directorate, Air Force Research Laboratory, Rome, New York 13441), Jonathan R. McDonald (Information Directorate, Air Force Research Laboratory, Rome, New York 13441; Institut für Angewandte Mathematik, Friedrich-Schiller-Universität Jena, 07743 Jena, Germany; [email protected]), and Warner A. Miller (Department of Physics, Florida Atlantic University, Boca Raton, FL 33431)

doi: 10.1088/0264-9381/28/15/155007; arXiv:1107.2458

Abstract. The Ricci tensor (Ric) is fundamental to Einstein's geometric theory of gravitation. The 3-dimensional Ric of a spacelike surface vanishes at the moment of time symmetry for vacuum spacetimes. The 4-dimensional Ric is the Einstein tensor for such spacetimes. More recently the Ric was used by Hamilton to define a non-linear, diffusive Ricci flow (RF) that was fundamental to Perelman's proof of the Poincaré conjecture. Analytic applications of RF can be found in many fields, including general relativity and mathematics. Numerically it has been applied broadly to communication networks, medical physics, computer design, and more. In this paper, we use Regge calculus (RC) to provide the first geometric discretization of the Ric. This result is fundamental for higher-dimensional generalizations of discrete RF. We construct this tensor on both the simplicial lattice and its dual and prove their equivalence. We show that the Ric is an edge-based weighted average of deficit divided by an edge-based weighted average of dual area, an expression similar to the vertex-based weighted average of the scalar curvature reported recently. We use this Ric in a third and independent geometric derivation of the RC Einstein tensor in arbitrary dimension.
arXiv preprint, 13 Jul 2011

Introduction

The Ricci curvature tensor (Ric) governs the dynamics of geometry in vacuum general relativity. It also has been pivotal in the mathematical classification of manifolds.
It can therefore have a profound impact on our understanding of geometry and deepen our insights into classical and quantum gravity. Hamilton used the Ric to define a diffusive curvature flow referred to as Ricci flow (RF) [1]: the rate of change of the metric is $-2\,{\rm Ric}$, i.e. $\partial g_{ab}/\partial t = -2 R_{ab}$. This was instrumental in Perelman's proof of the Poincaré conjecture [2,3,4]. In addition to its mathematical applications, RF has been applied to a broad range of problems ranging from medical physics to network routing, and from face recognition to general relativity and cosmology. Many of the applications of RF are for discrete, unstructured meshes. Regge calculus (RC) provides a natural discrete description of Einstein's geometric theory of gravitation [5]. Here we apply RC to define the Ric in arbitrary dimension, so that RF can be extended to higher dimensions.

Evolutions of the Ric have found recent applications in the physics of spacetime. RF is expected to be an important tool for the study of generic black hole solutions of spacetime. For example, RF provides a means for a better understanding of quasilocal mass in non-trivial asymptotically flat spacetimes [6]. Moreover, it may be useful for a mathematically rigorous prescription for black hole boundary conditions in the numerical relativity community [7]. Similarly, RF has also been applied to black-hole physics as a means for determining the Bekenstein-Hawking entropy [8,9]. In cosmology, there has been increased interest in RF as a means for understanding the averaging problem in ΛCDM cosmological models [10,11].

Numerical methods using RF techniques require discrete representations of the Ric and its corresponding evolution equation. Current RF techniques in computational geometry on complex topologies focus on 2-dimensional representations of higher-dimensional data [12,13]. Meanwhile, recent numerical simulations of relativistic models examined RF on higher dimensional manifolds with lower complexity topologies [14,15,16,17].
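Before turning to the discretization, the continuum flow is easy to illustrate on the simplest geometry. For a round $d$-sphere of radius $r$, ${\rm Ric} = \frac{d-1}{r^2}\,g$, so $\partial g/\partial t = -2\,{\rm Ric}$ reduces to the scalar ODE $d(r^2)/dt = -2(d-1)$: the squared radius shrinks linearly and the sphere collapses in finite time. A minimal numeric sketch of this (illustrative only, not from the paper):

```python
# Ricci flow dg/dt = -2 Ric for a round d-sphere, where g = r(t)^2 * (unit
# sphere metric) and Ric = (d-1)/r^2 * g.  The flow reduces to the ODE
#   d(r^2)/dt = -2(d-1),
# whose exact solution is r(t)^2 = r0^2 - 2(d-1) t, collapsing at
# t = r0^2 / (2(d-1)).

def ricci_flow_radius(r0, d, t, steps=100000):
    """Integrate d(r^2)/dt = -2(d-1) with forward Euler; returns r(t)^2."""
    r2 = r0 * r0
    dt = t / steps
    for _ in range(steps):
        r2 += -2.0 * (d - 1) * dt
    return r2

d, r0, t = 3, 2.0, 0.5
numeric = ricci_flow_radius(r0, d, t)
exact = r0**2 - 2 * (d - 1) * t
print(numeric, exact)   # both 2.0: Euler is exact for a constant right-hand side
```

The example also shows why RF is diffusive: regions of larger curvature (smaller $r$) shrink at the same absolute rate in $r^2$, hence faster in $r$.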
Geometric discretizations of the Ric are needed for numerical simulation of RF on higher dimensional manifolds with arbitrary topology. RC is a natural setting for investigating the Ric and RF due to its piecewise-flat, coordinate-free construction, which naturally captures the Riemannian curvature on each codimension-2 hinge, h, of the simplicial lattice. Here we use this RC Riemann curvature to derive a simplicial representation of the Ric. This one-form expression is valid in arbitrary dimension.

We start by reviewing some of the principles related to the representation of differential forms in RC, and the notation used in this article, in Section 2. In Section 3 we develop the simplicial Ric on edges of the simplicial and dual lattices. In Section 4 we use our expression for the simplicial Ric to provide a third and independent geometric derivation of the RC Einstein tensor in arbitrary dimension. In particular, we utilize the simplicial Ric and scalar curvature to explicitly construct the Einstein tensor as the trace-reversed Ric.

Dual Lattices and Discrete Differential Forms

Geometric discretizations [12,13,18,19,20,21] are generally characterized by the association of tensors with lattice elements of a discrete manifold. Tensors decomposed into the space of values and tangent-space components become weighted distributions over the skeleton of the discrete manifold and obtain their geometric properties from the skeleton itself. Differential quantities in the lattice are formulated such that pointwise evaluation gives way to averaged evaluation over an integral domain. Tensors thus become integrated measures on the discrete manifold, and their associated scalar weights may be interpreted as densities assigned to a lattice element. This integrated representation of tensors over lattice elements is a form of discrete exterior calculus, or discrete differential forms (DDF), in which one explicitly discretizes the tangent-space values of a differential form.
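The DDF idea of storing a form as integrated weights on lattice elements can be made concrete with a one-form on edges: the weight of each edge is the line integral of the form along it, and summing the weights around a closed loop reproduces the integral of the exterior derivative over the enclosed region (a discrete Stokes theorem). A small sketch, using the illustrative choice $\alpha = x\,dy$ so that $d\alpha = dx\wedge dy$ is the area form (the specific form and vertices are assumptions for the example):

```python
# Discrete differential forms: a one-form is stored as a scalar weight per
# edge, the weight being the integral of the form along that edge.  For
# alpha = x dy, the integral along a straight edge p -> q is
#   ((p_x + q_x)/2) * (q_y - p_y).
# Since d(alpha) = dx ^ dy, summing edge weights around a closed polygon
# must return the enclosed area (discrete Stokes theorem).

def edge_weight(p, q):
    """Integral of alpha = x dy along the straight edge from p to q."""
    return 0.5 * (p[0] + q[0]) * (q[1] - p[1])

def loop_sum(vertices):
    """Sum of edge weights around a closed polygon."""
    n = len(vertices)
    return sum(edge_weight(vertices[i], vertices[(i + 1) % n]) for i in range(n))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(loop_sum(tri))   # 0.5, the area of the triangle
```

The weights are the entire discrete representation of $\alpha$; no pointwise components are stored, which is exactly the integrated character of tensors in RC described above.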
The simplicial lattice in RC provides one set of differential forms onto which a tensor may be projected. The simplicial d-volumes of a d-dimensional manifold provide an anchor, the tangent space, for the differential forms. However, to incorporate dual forms we require a lattice structure obtained by some duality relation with the simplicial skeleton, i.e. the dual lattice. We will often use the more generic phrasing of dual lattice to refer to the circumcentric dual lattice. The circumcentric dual lattice is the unique lattice defined by connecting the circumcenter of a d-simplex to the circumcenters of each neighboring d-simplex. This lattice is of special interest since it creates a pairwise orthogonality between the two lattices: for each k-element of the simplicial lattice there exists a (d − k)-element in the circumcentric dual. Moreover, if we constrain the simplicial lattice to be a Delaunay lattice [22], then the circumcentric dual is identified as a Voronoi lattice. In this particular case, the d-dimensional Voronoi cells are uniquely determined as the set of all points closer to a given simplicial vertex than to any other simplicial vertex. Likewise, a general (d − k)-Voronoi element is the set of all points in the codimension-k hyperplane closer to a given k-simplex than to any other k-simplex in the simplicial lattice. Thus, a d-volume constructed from a simplicial element and its Voronoi dual has a natural interpretation as a local, compact integral measure on the simplicial lattice. (See Appendix A for more details.)

Some of the notation used in this article will denote elements of the simplicial (dual) lattice, volumes in the lattices, or measures of curvature. In particular, we will distinguish between the simplicial and dual lattices using Latin and Greek lettering. The Latin letters v, ℓ, and t will label simplicial elements of dimension 0, 1, and 2, respectively. Arbitrary k-simplexes are labeled by s^{(k)}.
Meanwhile, the elements of the dual lattice are labeled by the Greek letter counterparts ν, λ, τ, and σ^{(k)}. We will also be using the notation ∆V_a to denote the d-volume associated with the element a. For an edge ℓ of the simplicial lattice on a 3-dimensional lattice, the label ∆V_ℓ represents a 3-volume associated with ℓ. The label ∆V_{ab} denotes the d-volume associated with a and restricted to the element b. This restriction can be formulated as taking the intersection of the individual d-volumes from a and b. Extending this notation to arbitrary restrictions, we can write ∆V_{a_1 a_2 ··· a_k} as the restriction of the volume ∆V_{a_1} to all of the elements a_2, ..., a_k. Indeed, one can convince oneself of this notation by considering the case of the simplicial manifold restricted to a given element of either lattice. In this case, the entire manifold contains the d-volume of every lattice element, so ∆V_a can be seen to be the restriction of the simplicial manifold to the element a. These notations, and others, are summarized below:

ν : dual vertex
λ : dual edge
τ, h* : dual polygon
σ^{(k)} : dual polytope of dimension k
v : simplicial vertex
ℓ : simplicial edge
t : triangle on the simplicial skeleton
s^{(k)} : k-simplex
h : simplicial hinge
St(a) : star of a lattice element a, i.e. ∪_{s^{(k)} ⊃ a} s^{(k)} for the simplicial lattice
A_h, |h| : volume of h
A*_h, |h*| : area of h*
|s^{(k)}|_{σ^{(k)}} : volume of s^{(k)} restricted to σ^{(k)}
θ_{hℓ} : angle opposite the edge ℓ on a hinge h in 4 dimensions
ε_h : deficit angle associated with a hinge
R_h : Riemann tensor projected on a hinge
R_λ : Ric projected on a dual edge λ
R_ℓ : Ric projected on a simplicial edge ℓ
R_ν : Ricci scalar at a dual vertex ν
R_v : Ricci scalar at a simplicial vertex v
A_{hℓ} : volume of the hinge h restricted to ℓ
A*_{hλ} : area of the dual to a hinge restricted to λ
|a|_b : volume of a restricted to b, i.e. the norm of a restricted to b
|a|_{b_1···b_m} : volume of a restricted to all the b_i
∆V_a : d-volume associated with the element (either dual or simplicial) a
∆V_{ab} : d-volume of a restricted to b
(α^{(k)}, s^{(k)}) : local projection, or metric inner product, of two k-forms
⟨α^{(k)}, β^{(k)}⟩ : standard L² inner product of two simplicial (dual) k-forms
⟨C_a⟩_b : volume-weighted average of the C_a's hinging on the element b, $\sum_{a:\,b\in a} C_a \Delta V_{ab} \big/ \sum_{a:\,b\in a} \Delta V_{ab}$
$\overline{\langle C_a\rangle}_b$ : area-weighted average of the C_a's hinging on the element b, $\sum_{a:\,b\in a} C_a A_{ab} \big/ \sum_{a:\,b\in a} A_{ab}$
$\bar{C}_a|_b$ : arithmetic mean of the C_a's hinging on b

Discretizing the Ricci Tensor

Here we construct a geometric representation of the Ric on a piecewise-flat simplicial geometry. The geometric discretization we use is based on discrete differential forms (DDF), in which the (dual) simplicial lattice is used as the (co-)chain complex for embedding continuum forms in the discrete manifold. It has been found that such discretizations preserve the geometric properties of the tensors and can be useful for solving differential equations for tensor fields on geometries with complex topology [20,23,21].

Piecewise-flat geometries are characterized by curvature distributions concentrated at each codimension-2 hinge, h, on the simplicial manifold, S. The curvature on a given hinge h is a conical singularity with deficit angle ε_h. We have shown that standard RC is consistent with distributing this curvature evenly over the polyhedron, h* (with area A*_h), dual to hinge h. It admits a natural interpretation as the sole independent component of the Riemann curvature tensor in the d-volume associated with the hinge [24]. From this local representation of curvature distributed over a hinge one can explicitly and geometrically define the Einstein tensor in 4 dimensions [25] and a vertex-based scalar curvature [26]. The Einstein tensor encodes the geometrodynamics of General Relativity through the Einstein equations.
The scalar curvature provides a point-wise average of curvature that an observer can set out to measure. However, these curvature measures are insufficient to examine geometric flows, where the Ric plays the predominant role. When discretizing evolution processes that can be reformulated as an evolution of the Ric itself, e.g. RF, we seek first to represent the Ric directly in the geometry, then to develop the evolution equations for the new representation. We provide two equivalent derivations of the Ric. First, we start with the continuum construction and apply it directly to discrete curvature forms. Second, we derive an equivalent expression directly from the action principle of RC.

Derivation of the Ric from the Continuum using Discrete Curvature Forms

In the continuum the Ric is defined as the first contraction of the Riemann curvature tensor,

$$R^{a}{}_{b} = R^{ac}{}_{bc}. \qquad (2)$$

As a bivector-valued two-form, the curvature tensor takes in a bivector for the loop of parallel transport and outputs a bivector characterizing the change in a vector transported around the loop,

$$R = \frac{1}{4}\, e_a \wedge e_b\, R^{ab}{}_{cd}\, dx^c \wedge dx^d, \qquad (3)$$

where {e_a} are the basis tangent vectors dual to the basis one-forms {dx^c}. In RC, curvature is given exclusively by the sectional curvature, K, associated with a codimension-2 hinge. On a hinge, the sectional curvature is given by

$$K = \frac{\epsilon_h}{A^*_h}, \qquad (4)$$

which is just the ratio of the angle rotated (the deficit angle ε_h) to the area traversed (A*_h) by the loop of parallel transport. The sectional curvature is the double projection of the Riemann tensor onto a given plane [27],

$$K = R(e_a, e_b, e_a, e_b), \qquad (5)$$

where e_a and e_b are an orthonormal basis for the plane. Hence the Riemann curvature tensor on a hinge is proportional to the sectional curvature of the polygonal dual, h*, to the hinge,

$$R_h = R(h^*_{ab}, h^{*\,ab}) = d(d-1)\,\frac{\epsilon_h}{A^*_h}. \qquad (6)$$

For this reason, one can generally denote the Riemann tensor for a hinge as R_{h*h*}.
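The hinge-based curvature of Eq. (4) is straightforward to evaluate: the deficit angle is $2\pi$ minus the sum of the dihedral angles of the simplexes meeting on the hinge. A small numeric sketch (the configuration of five regular tetrahedra around an edge, and the dual area value, are illustrative assumptions, not data from the paper):

```python
import math

# Regge curvature on a hinge: the deficit angle is 2*pi minus the sum of
# the dihedral angles of the simplexes meeting on the hinge, and the
# sectional curvature is K = eps_h / A*_h (Eq. 4).  Example: in d = 3 the
# hinge is an edge; take five regular tetrahedra sharing that edge.

def deficit_angle(dihedral_angles):
    """Deficit angle of a hinge given the dihedral angles meeting on it."""
    return 2.0 * math.pi - sum(dihedral_angles)

dihedral = math.acos(1.0 / 3.0)        # dihedral angle of a regular tetrahedron
eps = deficit_angle([dihedral] * 5)    # five tetrahedra around the hinge
dual_area = 0.3                        # A*_h: hypothetical dual-polygon area
K = eps / dual_area                    # sectional curvature concentrated on the hinge
print(eps, K)                          # eps ≈ 0.128 rad: positive, sphere-like curvature
```

With six such tetrahedra the deficit would be negative (saddle-like), showing how the same local formula encodes both signs of curvature.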
We will, in general, only keep track of the two-form components and write R_{h*} = R_h, where the equality is a result of the duality. Taking the trace of the Riemann tensor requires summation over the curvature associated with loops spanned, in part, by a given one-form e_b. This summation of loops hinging on a given one-form reduces the curvature two-form to a one-form doubly projected on e_b.

RC is at its heart a weak variational formulation of General Relativity. This is easily seen since the geometric content of RC is encoded not through pointwise-defined tensors, but through tensors distributed over elements of the lattice. Indeed, the Regge equations are integral equations, given by the Einstein tensor integrated over the associated 4-volume. Hence, we evaluate the discrete Ric as an integrated quantity on the simplicial manifold. Locally, the Ric becomes a one-form projected on the dual edges of the lattice and integrated over the d-dimensional domain, ∆V_λ, associated with the dual edge, λ.

To take the trace of the Riemann tensor, one must sum over the independent directions orthogonal to a dual edge λ. In general, one will sum over all independent two-forms λ ∧ e_a. However, when e_a lies in the plane of a hinge h, there is no curvature associated with such a loop of parallel transport. Therefore, the Ricci one-form on λ depends only on the polyhedral 2-faces, h*, hinging on a given dual edge, λ:

$$R_\lambda\,\Delta V_\lambda = \sum_{h^*:\,\lambda\in h^*} R_{h^*}\,\Delta V_{h^*\lambda}. \qquad (7)$$

We have introduced the volume ∆V_{h*λ} (Figure 1), which is a restriction of the d-volume for h* to the dual edge λ, i.e. the intersection of the d-volumes associated with h* and λ. This is the discrete equivalent of decomposing a domain and integrating over distinct representations on the subdomains,

$$\int_\Omega \alpha = \sum_i \int_{\Omega_i} \alpha'(\Omega_i). \qquad (8)$$

Here the Ω_i form a non-overlapping domain decomposition of Ω.
Using the Voronoi-Delaunay orthogonal decomposition of volumes and the RC definition of curvature on a hinge, $R_h = d(d-1)\,\epsilon_h/A^*_h$, we obtain an explicit expression for the integrated Ricci one-form on a dual edge,

$$R_\lambda\,\Delta V_\lambda = \sum_{h^*:\,\lambda\in h^*} d(d-1)\,\frac{\epsilon_h}{A^*_h}\,\frac{1}{\binom{d}{2}}\,A_h\,A^*_{h\lambda} = \sum_{h^*:\,\lambda\in h^*} 2\,\epsilon_h\,\frac{A_h\,A^*_{h\lambda}}{A^*_h}. \qquad (9)$$

We have decomposed the restricted d-volume (see Appendix A) into its Voronoi and Delaunay components and restricted the Voronoi area, A*_h, to the dual edge, λ, denoted A*_{hλ}. This restricted area is the set of all points in A*_h closer to λ than to any other dual edge λ′ in the skeleton of h*. Dividing by the integral domain, we obtain

$$R_\lambda = \frac{\sum_{h^*:\,\lambda\in h^*} d(d-1)\,\frac{\epsilon_h}{A^*_h}\,A^*_{h\lambda}\,A_h}{\sum_{h^*:\,\lambda\in h^*} A^*_{h\lambda}\,A_h} = \frac{\sum_{h^*:\,\lambda\in h^*} R_h\,\Delta V_{h\lambda}}{\sum_{h^*:\,\lambda\in h^*} \Delta V_{h\lambda}}. \qquad (10)$$

Defining the volume-weighted average $\langle C_a\rangle_b = \sum_{a:\,b\in a} C_a\,\Delta V_{ab} \big/ \sum_{a:\,b\in a} \Delta V_{ab}$, the Ricci one-form in the dual lattice becomes

$$R_\lambda = \langle R_h\rangle_\lambda. \qquad (11)$$

This is an explicit expression for the Ric in the dual lattice as a weighted average of the curvatures meeting on the dual lattice one-form λ.

In RC, it is customary for measures of curvature to be associated with elements of the simplicial lattice. This more readily allows for evolution equations in terms of the degrees of freedom, the edge lengths {ℓ} of the simplexes. For applications of the Ric, such as RF, this is particularly important, since a straightforward weak evolution equation for the edge lengths (synonymous with the components of the metric) will require an integration of the Ric over the d-volume associated with an edge, i.e. the integrated Ricci one-form at a given ℓ. We thus seek to re-express the Ricci one-form on the simplicial skeleton. Taking the dual of the above expression gives us a Ricci three-form on the simplicial lattice. However, it is beneficial to write an explicit expression for the Ricci one-form on simplicial edges.
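The volume-weighted average of Eq. (11) is a simple operation once the hinge curvatures and restricted volumes are known. A minimal sketch, with hypothetical curvature and volume values standing in for an actual lattice:

```python
# The dual-edge Ric of Eq. (11) is the volume-weighted average of the hinge
# curvatures R_h meeting on the dual edge lambda:
#   R_lambda = sum_h R_h * dV_{h,lambda} / sum_h dV_{h,lambda}.
# The curvatures and restricted volumes below are hypothetical inputs.

def volume_weighted_average(values, restricted_volumes):
    """Weighted average <C_a>_b with the restricted d-volumes as weights."""
    num = sum(v * dv for v, dv in zip(values, restricted_volumes))
    den = sum(restricted_volumes)
    return num / den

R_h = [0.8, 1.2, 1.0]   # curvature R_h on each hinge h* containing lambda
dV = [0.2, 0.3, 0.5]    # d-volumes dV_{h,lambda} restricted to lambda
print(volume_weighted_average(R_h, dV))   # 1.02
```

The same helper evaluates the area-weighted averages used later for the simplicial Ric; only the weights change from restricted volumes to restricted hinge areas.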
We can transform the above expression into an edge-based expression on the simplicial skeleton via a lowering (raising) operator which transforms r-forms in the dual (simplicial) lattice to r-forms in the simplicial (dual) lattice (see Appendix B). We first rewrite the association of the Ric on a dual edge by restricting the domain to that closest to a simplicial edge, ℓ. This is the result of the projection of the dual-edge Ric onto the domain of the edge, ℓ:

$$R_\lambda\,\Delta V_{\lambda\ell} = R_\lambda\,\Delta V_\lambda\,\frac{\Delta V_{\lambda\ell}}{\Delta V_\lambda} = \sum_{h^*:\,\lambda\in h^*} R_{h^*}\,\Delta V_{h^*\lambda\ell}. \qquad (12)$$

For d > 2 this newly projected volume can be decomposed as before, except that now we must restrict the hinge area to that which is closest to ℓ. Suitably rearranging the terms in the sums gives

$$R_\ell\,\Delta V_\ell = \sum_{\lambda\in\ell^*} R_\lambda\,\Delta V_{\lambda\ell} \qquad (13)$$
$$= \sum_{\lambda\in\ell^*}\;\sum_{h^*:\,\lambda\in h^*} 2\,\epsilon_h\,\frac{A_{h\ell}\,A^*_{h\lambda}}{A^*_h} \quad (\text{for } d>2)$$
$$= \sum_{h:\,\ell\in h} 2\,\epsilon_h\,\frac{A_{h\ell}}{A^*_h}\sum_{\lambda\in h^*} A^*_{h\lambda} = 2\,\overline{\langle\epsilon_h\rangle}_\ell\,\sum_{h:\,\ell\in h} A_{h\ell}, \qquad (14)$$

where we have defined the edge-based area-weighted average $\overline{\langle C_h\rangle}_\ell = \sum_{h:\,\ell\in h} C_h\,A_{h\ell} \big/ \sum_{h:\,\ell\in h} A_{h\ell}$. It is key to note here that swapping the summations is allowed, given that the Voronoi-Delaunay decomposition of the volumes determines a tiling of the manifold without overlap. This will generally be true for arbitrary triangulations with circumcentric duals as long as volume orientation is also carried over in the calculation. Again, dividing by the integral volume, we get an explicit expression for the Ric weighting on an edge of the simplicial lattice,

$$R_\ell = d(d-1)\,\frac{\overline{\langle\epsilon_h\rangle}_\ell}{\overline{\langle A^*_h\rangle}_\ell} \quad (\text{for } d>2). \qquad (15)$$

For the special case of d = 3, the Riemann tensor is proportional to the Ric, i.e. all curvature content is encoded directly in the Ricci tensor. In Figure 2 we look at the Ric on a simplicial edge in d = 3 and illustrate the volumes associated with the construction.

Figure 2 (panel descriptions): (Top) A simplicial edge ℓ is shown with all triangles t = λ* hinging on ℓ. The notation t = λ* indicates that for each triangle containing ℓ there is an edge λ of the dual lattice orthogonal and dual to t. (Bottom-left) The 3-volume for a given λ is depicted here. In general, only a portion of this volume will overlap with the 3-volume associated with ℓ. To construct the Ric for λ, the integral volumes used must coincide, so we take the restriction of ∆V_λ to ℓ, ∆V_{λℓ}. (Bottom-right) The 3-volume ∆V_ℓ is shown, and we indicate the part of ∆V_ℓ corresponding to ∆V_{λℓ} as the volume spanned by the vertexes [ABDEO]. Since ℓ* = h*, summing over all λ contained in h* carries us around the loop orthogonal to ℓ. In the restriction of ∆V_λ to ℓ, the only contribution with non-trivial restricted volume is h* = ℓ* for the given ℓ. Hence, substituting the expression for R_λ into Eq. (13) and summing over all λ ∈ ℓ* = h*, we obtain the Regge curvature on an edge/hinge in d = 3. Hence we have R_ℓ ∆V_ℓ = R_h ∆V_h, as expected.

We now turn to the special case of d = 2. The duality between λ and ℓ is such that

$$R_\ell\,\Delta V_\ell = \sum_{\lambda\in\ell^*} R_\lambda\,\Delta V_{\lambda\ell} = \sum_\lambda R_\lambda\,\Delta V_\lambda\,\delta_{\lambda,\ell^*} = R_\lambda\,\Delta V_\lambda. \qquad (16)$$

Again, the duality can be used to show R_ℓ = R_λ. Using the expression $R_\lambda = \langle R_{h=v}\rangle_\lambda$ we have

$$R_\ell = \frac{\sum_{h=v\ni\ell} R_h\,A^*_{h\ell}}{\sum_{h=v\ni\ell} A^*_{h\ell}} = \frac{\sum_{h=v\ni\ell} R_h\,\tfrac{1}{4}\,|\ell\times\ell^*|}{\tfrac{1}{2}\,|\ell\times\ell^*|} = \bar{R}_h|_\ell, \qquad (17)$$

where $\bar{R}_h|_\ell$ is the arithmetic average of the curvature evaluated at the endpoints of ℓ. We have used the normalization Vol(h = v) = 1 in the first equality, and the relation $A^*_{h\ell} = \tfrac{1}{4}|\ell\times\ell^*| = A^*_{h'\ell}$ for both endpoints (hinges), h and h′, of ℓ in the second equality. This is a discrete expression showing explicitly that the Ricci one-form is determined solely by the scalar curvature (on vertexes) in 2 dimensions.

Derivation of the Ric from the RC Action Principle

These expressions can also be derived from the Regge action principle, in a similar way to the authors' previous construction of the scalar curvature invariant in RC [26].
Since the curvature is locally proportional to the sectional curvature, we obtain a simple relation between the action using the curvature two-form and the canonical Einstein-Hilbert action,

$$I = \frac{1}{\kappa}\sum_h (R, h^*) = \frac{1}{\kappa}\sum_h d(d-1)\,K_h\,\Delta V_h = \frac{2}{\kappa}\sum_h \epsilon_h\,A_h = I_{\rm Regge}, \qquad (18)$$

where h* is the two-form for the dual loop to a hinge and $K_h = \epsilon_h/A^*_h$ is the sectional curvature. The factor of d(d − 1) comes about from contracting the curvature two-form with the dual polygon two-form, which gives equal contributions from all non-zero components. Using duality, we can also change the first expression to a hinge-based, instead of a dual-polygon, expression,

$$I = \frac{1}{\kappa}\sum_h (\star R, h). \qquad (19)$$

Now tracing over directions orthogonal to edges and summing over the edges we get

$$I = \frac{1}{\kappa}\sum_h\sum_{\ell\in h} (d-1)\,(R_{h\ell}, \ell), \qquad (20)$$

where R_{hℓ} is the Ric on hinge h directed along ℓ. Contracting the Ric with its associated one-form gives an additional factor of d, such that we obtain

$$I = \frac{1}{\kappa}\sum_h\sum_{\ell\in h} d(d-1)\,K_h\,\Delta V_{\ell h}. \qquad (21)$$

To get the action in terms of the Ricci one-form we decompose the integral measures and rearrange the summations,

$$I = \frac{1}{\kappa}\sum_h d(d-1)\,\frac{\epsilon_h}{A^*_h}\,\frac{1}{\binom{d}{2}}\sum_{\ell\in h} A^*_h\,A_{h\ell} = \frac{1}{\kappa}\sum_\ell\sum_{h:\,\ell\in h} d(d-1)\,\frac{1}{\binom{d}{2}}\,\frac{\epsilon_h}{A^*_h}\,A^*_h\,A_{h\ell} = \frac{1}{\kappa}\sum_\ell R_\ell\,\Delta V_\ell. \qquad (22)$$

Using the equality of the individual terms in the sum over edges, we get an expression for the curvature on an edge of the simplicial lattice,

$$R_\ell\,\Delta V_\ell = \sum_{h:\,\ell\in h} d(d-1)\,\frac{1}{\binom{d}{2}}\,\frac{\epsilon_h}{A^*_h}\,A^*_h\,A_{h\ell}. \qquad (23)$$

We have absorbed the combinatoric factor of d(d − 1) into the definition of R_ℓ, as we will do in general. This helps keep in mind that the expression for R_ℓ is a scalar weight on the edge element. Formally, these scalar weights are part of an integrated quantity and are not necessarily assigned to a point on the lattice, but rather across the domain of integration associated with the given element.
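The Regge action of Eq. (18) has a well-known topological check in two dimensions, where the hinges are vertices with unit-normalized volume: the sum of the vertex deficits equals $2\pi\chi$ (discrete Gauss-Bonnet). A quick numeric verification on the icosahedral triangulation of the sphere (the icosahedron as test lattice is our illustrative choice):

```python
import math

# In d = 2 the hinges are vertices and the Regge action sum_v eps_v is
# topological: it equals 2*pi*chi (discrete Gauss-Bonnet).  On the
# icosahedron, five equilateral triangles (vertex angle pi/3) meet at each
# of the 12 vertices.

def vertex_deficit(angles):
    """2D deficit angle of a vertex given the triangle angles meeting there."""
    return 2.0 * math.pi - sum(angles)

eps_v = vertex_deficit([math.pi / 3.0] * 5)   # deficit pi/3 per vertex
total = 12 * eps_v                            # sum over all 12 vertices
print(total, 4.0 * math.pi)                   # both 4*pi = 2*pi*chi(S^2)
```

The invariance of this sum under refinement of the triangulation is the lattice analogue of the statement that the 2D Einstein-Hilbert action carries no local dynamics.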
Hence, the curvature forms used in RC are to be understood as R_h ∆V_h, R_ℓ ∆V_ℓ, and R_v ∆V_v for the Riemann, Ricci and scalar curvature, respectively.

We can raise the simplicial Ricci one-form to obtain the dual Ricci one-form. To do so, we first restrict the integrative domain to the volume closest to the dual edge,

$$R_\ell\,\Delta V_{\ell\lambda} = \sum_{h:\,\ell\in h} \frac{d(d-1)}{\binom{d}{2}}\,\frac{\epsilon_h}{A^*_h}\,A^*_{h\lambda}\,A_{h\ell}. \qquad (24)$$

We define the raising (lowering) operation applied to the simplicial Ric by summing over all integrated R_ℓ for which λ ∈ ℓ*. The restriction of the domain above is necessary to ensure that lowering (raising) this expression gives a quantity integrated over the appropriate d-volume. Doing so we obtain

$$R_\lambda\,\Delta V_\lambda = \sum_{\ell:\,\lambda\in\ell^*} R_\ell\,\Delta V_{\ell\lambda} = \sum_{\ell:\,\lambda\in\ell^*} \frac{d(d-1)}{\binom{d}{2}} \sum_{h:\,\ell\in h} \frac{\epsilon_h}{A^*_h}\,A^*_{h\lambda}\,A_{h\ell} = \sum_{h:\,\lambda\in h^*} \frac{d(d-1)}{\binom{d}{2}}\,\frac{\epsilon_h}{A^*_h}\,A^*_{h\lambda} \sum_{\ell\in h} A_{h\ell} = \sum_{h:\,\lambda\in h^*} 2\,\epsilon_h\,\frac{A^*_{h\lambda}\,A_h}{A^*_h}. \qquad (25)$$

Comparing with Eq. (9) shows exact agreement. The independence of the local and global derivations shown here is indicative of the decomposition of the lattice into elements with compact support. The global derivation in terms of the action thus becomes just an additional sum over the local terms defined over the domains of compact, local support. This highlights the reason that RC, as a weak variational principle, reduces to locally simple characterizations of the manifold geometry.

The Canonical Einstein Tensor

In previous work, the Cartan moment-of-rotation trivector view was used to derive the embedding of the Einstein tensor in RC [25,28]. Here we present an alternative derivation using the more familiar definition of the Einstein tensor,

$$G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\,g_{\mu\nu}\,R, \qquad (26)$$

which can be rewritten as an Einstein one-form

$$G_a \equiv G_{\mu\nu}\,e^{\nu}{}_a = R_a - \tfrac{1}{2}\,e_a\,R. \qquad (27)$$

Using the simplicial Ric and the previously derived scalar curvature [26], we have all the necessary tools to provide a direct reconstruction of the Einstein tensor on an edge.
The isomorphism between forms on the dual and forms on the simplicial lattice allows us the freedom to define curvature forms on either lattice. However, we should start off on a sound geometric footing by following the projection of the continuum object onto the lattice structure. Eq. (26) identifies the quantitative construction of the Einstein tensor, but does not indicate the geometric character of the Einstein one-form. However, it is known that the Einstein tensor is the double dual of the Riemann curvature tensor [29],

$$G^{j}{}_{i} = (*R*)^{jm}{}_{im} = \tfrac{1}{4}\,\epsilon_{mnil}\,R^{mn}{}_{ab}\,\epsilon^{ablj}. \qquad (28)$$

The Hodge duals transform the two-form components on the dual lattice to forms on the simplicial lattice. The trace over the second and third indices reduces the two-form to a one-form. Hence, the Einstein tensor is a one-form on edges of the simplicial lattice. Equivalently, in 4-d the Einstein one-form is the dual of the moment-of-rotation 3-form projected on the 3-volume dual to an edge [25,28]. We take the natural embedding for the Einstein one-form in RC to be on the simplicial 1-skeleton. One could easily construct a dual-lattice Einstein one-form, though we see no particular benefit.

We must also be careful in how we introduce the vertex-based scalar curvature, R_v, in the edge-based representation. This is most directly accomplished by projecting the integrated scalar curvature at a vertex onto the d-volume associated with an edge,

$$e_a\,R \;\longrightarrow\; R_v\,\Delta V_{v\ell}. \qquad (29)$$

This contributes non-trivially only when the vertex v is a vertex of ℓ. Moreover, since the scalar curvature is decomposed into volumes associated with the hinges meeting at v, this projection introduces a Kronecker delta into each term. This results from projecting the vertex-based volume associated with a hinge, ∆V_{hv}, onto a given edge.
Hence, only those hinges meeting at ℓ contribute to the edge-restricted scalar curvature,

$$R_v\,\Delta V_{v\ell} = d(d-1)\sum_{h:\,v,\ell\in h} \frac{\epsilon_h}{A^*_h}\,\frac{1}{\binom{d}{2}}\,A_{h\ell v}\,A^*_h = 2\sum_{h:\,v,\ell\in h} \epsilon_h\,A_{h\ell v}, \qquad (30)$$

where A_{hℓv} is the area of the hinge h restricted to both the edge ℓ and the vertex v; both ℓ and v are assumed to be on h, otherwise A_{hℓv} = 0. Using this representation of the scalar curvature and the simplicial Ricci one-form defined above, we are in a position to explicitly define the canonical form of the Einstein tensor,

$$G_\ell\,\Delta V_\ell = R_\ell\,\Delta V_\ell - \frac{1}{2}\sum_{v\in\ell} R_v\,\Delta V_{v\ell} = 2\sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell} - \sum_{v\in\ell}\sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell v} = 2\sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell} - \sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell} = \sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell}. \qquad (31)$$

In d = 4 this becomes

$$G_\ell\,\tfrac{1}{4}\,\ell\cdot\ell^* = \sum_{h:\,\ell\in h} \epsilon_h\,A_{h\ell} = \sum_{h:\,\ell\in h} \epsilon_h\,\tfrac{1}{2}\,\ell\cdot\tfrac{1}{2}\,\ell\cot\theta_{h\ell}, \qquad G_\ell\,\ell^* = \sum_{h:\,\ell\in h} \epsilon_h\,\ell\cot\theta_{h\ell}, \qquad (32)$$

where θ_{hℓ} is the angle on the hinge h opposite ℓ. Staying in d = 4, we can check this result against the result obtained from varying the Regge action. In the continuum the integrated Einstein tensor is obtained from the variational principle,

$$\int \sqrt{-g}\;G^{\alpha\beta}\,d^4x = \kappa\,\frac{\delta I_{\rm geom}}{\delta g_{\alpha\beta}}, \qquad (33)$$

where κ = 16πGc⁻⁴ and I_geom is the Einstein-Hilbert action. In RC, with the action given by $\frac{2}{\kappa}\sum_h \epsilon_h A_h$, this becomes

$$G_\ell\,\ell^* = \kappa\,\frac{\delta I_{\rm Regge}}{\delta\ell}. \qquad (34)$$

Regge showed that the variation of the deficit angle ε_h in the Regge action does not contribute to the final equations of motion; only the variation of the hinge volume contributes. Using this result we obtain the standard Regge equations for an edge,

$$\frac{\delta I_{\rm Regge}}{\delta\ell} = \frac{2}{\kappa}\sum_{h:\,\ell\in h} \epsilon_h\,\tfrac{1}{2}\,\ell\cot\theta_{\ell h} = \frac{1}{\kappa}\sum_{h:\,\ell\in h} \epsilon_h\,\ell\cot\theta_{\ell h}. \qquad (35)$$

The integrated Einstein tensor from the variational principle is thus found to match the result obtained from the Regge version of the canonical Einstein tensor definition,

$$G_\ell\,\ell^* = \sum_{h:\,\ell\in h} \epsilon_h\,\ell\cot\theta_{\ell h}. \qquad (36)$$

This agrees with the results from the moment-of-rotation three-form derivations [25,28]. The factor of 2 that explicitly appears in Eq.
(35), which cancels the 1/2 factor in the moment arm, is due to combinatoric factors coming from the symmetry in the moment of rotation, i.e. dP ∧ R = R ∧ dP. In particular, the integrated moment of rotation assigned to an edge is not dependent on the ordering of the wedge product of the moment arm with the curvature, and gives rise to this numerical factor.

It is particularly instructive to confirm this result by way of Eq. (28). Since the first dual acts on the space of values, we need only note that one component survives, while the second component of the bivector contributes to the trace. Acting on the two-form components is the fundamental volume form, ε_{ablj}, which acts as a given 4-volume. Choosing a given component of G_i is akin to choosing an edge ℓ on the simplicial lattice. Since R_h = R_{h*} takes non-zero components only in the directions orthogonal to hinges, the trace is the sum over directions orthogonal to ℓ and h*,

$$(G^j, \ell_j) = \frac{1}{2}\sum_{h:\,\ell\in h} R_{h^*}\,\Delta V_{h^*\ell} = \frac{1}{2}\sum_{h:\,\ell\in h} d(d-1)\,\frac{\epsilon_h}{A^*_h}\,\frac{1}{\binom{d}{2}}\,A_{h\ell}\,A^*_h, \qquad G_\ell\,\Delta V_\ell = \frac{1}{4}\sum_{h:\,\ell\in h} \epsilon_h\,\ell^2\cot\theta_{\ell h}, \qquad (37)$$

where we have used $A_{h\ell} = \tfrac{1}{4}\,\ell^2\cot\theta_{\ell h}$. Doing the usual trick of decomposing the volume on the LHS and dividing by ℓ, we have

$$G_\ell\,\ell^* = \sum_{h:\,\ell\in h} \epsilon_h\,\ell\cot\theta_{\ell h}, \qquad (38)$$

as before. In general, the Einstein tensor in arbitrary dimension is given by

$$G_\ell\,\ell^* = \frac{d}{\ell}\sum_h \epsilon_h\,A_{h\ell}, \qquad (39)$$

in agreement with Eq. (31). We thus have multiple methodologies for deriving the Einstein tensor, and we have shown that the Einstein tensor is the sum of the restricted areas of hinges times their associated deficit angles.

Conclusion

We have presented here the first geometric discretization of the Ric in RC in arbitrary dimension. The tracing of the Riemann tensor over loops of parallel transport produces a one-form in the dual lattice. Moreover, we are able to use the isomorphism between forms on the dual and forms on the simplicial lattice to construct a simplicial counterpart to the dual lattice Ricci one-form.
Both formulations provide explicit meaning to the simplicial analog of the trace of the Riemann tensor as an edge-based "weighted average" of curvature. In the dual representation the Ric is a volume-weighted average, while in the simplicial representation it becomes a ratio of area-weighted averages. The Ric defined as a one-form in the simplicial or dual lattices is one step towards accurately embedding the machinery of RF into the piecewise-flat discretization of RC. By representing the Ric, and eventually RF, in the RC framework, we expect to be able to use RF on geometries of arbitrary topology in arbitrary dimension. In particular, the 3-dimensional Ric carries the full information about the curvature of the manifold and can be used for manifold comparison using techniques developed by Perelman [2,3,4,30]. Ongoing future work will develop the RF equations and apply them to discrete manifolds in higher dimension.

The definition of a Ric in arbitrary dimension has further allowed us to provide a third and independent derivation of the Einstein tensor in RC. By using our simplicial Ricci one-form and the recent definition of the vertex-based scalar curvature, we are able to write an explicit expression for the trace-reversed Ric in terms of restricted volumes in the simplicial lattice. This shows further the utility of the inherent Voronoi-Delaunay duality and the associated hybrid cells as natural volumes in RC.

Appendix A

We begin by defining the simplicial volume via the inner product of forms. The volume of a simplicial cell is given by the inner product of the simplicial d-form with itself,

$$\langle s^{(d)}, s^{(d)}\rangle = \int s^{(d)}\wedge * s^{(d)} = \frac{1}{\binom{d}{d}}\,|s^{(d)}|\,|{*}s^{(d)}| = |s^{(d)}|, \qquad ({\rm A.1})$$

where we use the usual notation, |·|, to indicate the norm. Since *s^{(d)} is a vertex of the dual lattice, i.e. the circumcenter of s^{(d)}, it contributes only a scalar constant to the integral.
To ensure that the integral yields the appropriate d-volume, we choose to assign to any vertex a volume with unit normalization. Likewise, a polytope σ^{(d)} dual to a vertex v in the simplicial lattice is given by

$$\langle {*}\sigma^{(d)}, {*}\sigma^{(d)}\rangle = \frac{1}{\binom{d}{0}}\,|{*}\sigma^{(d)}|\,|\sigma^{(d)}| = |\sigma^{(d)}|, \qquad ({\rm A.2})$$

where again we have |*σ^{(d)}| = |v| = 1. Explicitly, this volume is constructed by building local domains interior to each simplex in the star of the vertex v dual to σ^{(d)}. Using the Voronoi construction, this volume localized on a simplex is the set of points in s^{(d)} closer to v than to any other vertex in the simplex. This portion of the simplex will be called the restriction of the simplex to v, ∆V_{s^{(d)} v} = |s^{(d)}|_v. Summing over each simplex in the star of v, St(v), gives the complete dual volume,

$$|{*}v| = \sum_{s^{(d)}\in {\rm St}(v)} |s^{(d)}|_v. \qquad ({\rm A.3})$$

We can construct arbitrary volumes that are hybrid Delaunay-Voronoi cells through inner products of the simplicial (dual) r-forms with themselves,

$$\langle s^{(r)}, s^{(r)}\rangle = \int s^{(r)}\wedge * s^{(r)} = \frac{1}{\binom{d}{r}}\,|s^{(r)}|\,|{*}s^{(r)}|. \qquad ({\rm A.4})$$

The factorization given by the last equality is a direct result of the inherent orthogonality between the Voronoi and Delaunay lattices. This canonical factorization is one of many. One may also decompose the volume associated with a given simplicial or dual element into volumes determined by m-forms (m < r) contained in a given s^{(r)}, or n-forms (n > r) in St(s^{(r)}),

$$\Delta V_{s^{(r)}} = \sum_{s^{(m)}\in s^{(r)}} \Delta V_{s^{(r)} s^{(m)}}.$$

Here, the Voronoi-Delaunay duality is again particularly useful, as it allows us to construct the restricted volume via restriction of only a subspace of a given volume. The restriction is applied to the subspace such that the restriction makes sense, i.e. the restriction of s^{(m)} to s^{(r)} (r > m) trivially yields the norm |s^{(m)}|. Such restrictions are explicitly used in the definition of the vertex-based scalar curvature, which requires vertex d-volumes to be decomposed using the vertex-restriction of the hinge area [26].

Appendix B.
Operations on Discrete Forms

In the lattice we endow the geometry with two distinct spaces of differential forms: (1) the simplicial skeleton as the representation of the homology, and (2) the dual skeleton as the representation of the cohomology. The representation of differential forms on a simplicial complex is based on the ideas of Whitney [18] and has been used in computational electromagnetism [19,20,21] and computational geometry [12,13]. The purpose of such a representation is not just to discretize tensor and differential-form fields by representing their components point-wise on some discrete set of points, but to embed the full geometric character of a field in the discretization. In this way, one hopes to preserve the general geometric properties and symmetries of the field in the discretization. In this appendix we review some useful isomorphisms between the spaces of forms in the simplicial and dual lattices.

The first and most straightforward isomorphism is the Hodge dual. The Hodge dual maps an element of $\Lambda^{(r)}$ ($\Lambda^{*(r)}$) to $\Lambda^{*(d-r)}$ ($\Lambda^{(d-r)}$). This is defined by mapping the scalar weighting of a given simplicial (dual) element of the skeleton to its geometric dual, i.e.

$$\alpha_{s^{(r)}} \longrightarrow \alpha_{*s^{(r)}}. \tag{B.1}$$

This is done via the formal mapping [23]

$$\frac{1}{|s^{(r)}|} \langle \alpha, s^{(r)} \rangle = \frac{1}{|{*s^{(r)}}|} \langle {*\alpha}, {*s^{(r)}} \rangle, \tag{B.2}$$

where $\langle \alpha, \Omega \rangle = \int_{\Omega} \alpha$. Since differential forms in RC are represented as scalar weights on elements of the lattice, this isomorphism is a simple mapping of the weight from an element on one lattice to its dual element.

We can also construct the raising (lowering) operations in the lattice. In the continuum, this operation is carried out via the metric or its inverse applied to the components of the form. In the lattice, we must construct a way of identifying a scalar weighting for an r-form of the simplicial (dual) lattice using the weights of the r-forms in the dual (simplicial) lattice.
We define the isomorphism taking dual r-forms to a simplicial r-form as in Equation (B.3). Using the orthogonal decomposition and restriction of volumes defined in Appendix A, the volumes on the RHS are given by

$$\Delta V_{\sigma^{(r)} s^{(r)}} = \begin{cases} \dfrac{1}{\binom{d}{r}} \left| \sigma^{(r)} \right| \left| {*\sigma^{(r)}}_{\,s^{(r)}} \right|, & \text{if } 2r \le d \\[2ex] \dfrac{1}{\binom{d}{r}} \left| {*\sigma^{(r)}} \right| \left| \sigma^{(r)}_{\,s^{(r)}} \right|, & \text{if } 2r > d \end{cases} \tag{B.4}$$

One can define a similar isomorphism from the simplicial lattice to the dual lattice by taking the sum over elements of the simplicial skeleton. It is important here that we incorporate the restriction of the integral domain into the definition, to ensure that applying the inverse isomorphism re-obtains the initial r-form. This can be easily checked.

Figure 1. Restricting the Hinge Volume to a Dual Edge: Here we explicitly show the decomposition of the d-volume of a hinge $h$ (in d = 4) and its restriction to a dual edge $\lambda$. (Top) Here we show the orthogonal decomposition of the d-volume into the area of a hinge, $A_h$, and the area of the dual polygon to a hinge, $A_{h}^{*}$. Struts (not shown) connecting each vertex of $h^*$ to each vertex of $h$ complete the boundary of the domain spanned by $h$ and $h^*$. (Bottom-left) We focus attention on the dual polygon $h^*$ and have shown (shaded) the restriction to the dual edge $\lambda$. This restricted area is the 2-simplex constructed from the endpoints of $\lambda$ and the circumcenter of the hinge $h$. (Bottom-right) By connecting the vertexes of the restricted area of $h^*$, $A^{*}_{h\lambda}$, to each of the vertexes of the hinge $h$, we obtain a new d-volume, $\Delta V_{h^*\lambda} = \Delta V_{h\lambda}$. The thick red (dashed) lines are struts connecting vertexes on the boundary of $\Delta V_{h^*\lambda}$. The struts connecting the circumcenter $O$ of $h$ to the vertexes of $h$ (thin dashed, red) do not contribute to the boundary of $\Delta V_{h\lambda}$ and can be routinely dropped from the construction.

Figure 2. Volumes for the Ric on a Simplicial Hinge: Here we use the case of d = 3 as a concrete example of the construction of the simplicial Ric from the dual edge-based Ric.
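To make the weight transfer concrete, here is a small worked case of the Hodge-dual mapping (B.2) for an edge-based weight. This example is ours and simply restates (B.2) with the integrated weight written as a subscripted scalar:

```latex
% Worked example (ours): the Hodge dual of an edge weight via (B.2).
% Let \ell be a simplicial edge and *\ell its dual (d-1)-polytope.
% Writing \alpha_\ell = \langle\alpha,\ell\rangle for the integrated
% weight, (B.2) reads
\[
  \frac{\alpha_\ell}{|\ell|} \;=\; \frac{\alpha_{*\ell}}{|{*\ell}|}
  \qquad\Longrightarrow\qquad
  \alpha_{*\ell} \;=\; \frac{|{*\ell}|}{|\ell|}\,\alpha_\ell ,
\]
% i.e. the weight is carried over by the ratio of dual to simplicial
% measures (the familiar diagonal discrete Hodge star), and applying the
% map twice returns the original edge weight \alpha_\ell.
```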
$$\Delta V_{s^{(r)}} = \sum_{s^{(n)} \in \mathrm{St}(s^{(r)})} \frac{1}{\binom{d}{n}} \left| s^{(n)}_{\,s^{(r)}} \right| \left| {*s^{(n)}} \right| \quad (\text{for } r < n). \tag{A.6}$$

$$\alpha_{s^{(r)}} \Delta V_{s^{(r)}} = \begin{cases} \displaystyle\sum_{\sigma^{(r)}:\; s^{(r)} \in {*\sigma^{(r)}}} \alpha_{\sigma^{(r)}} \, \Delta V_{\sigma^{(r)} s^{(r)}}, & \text{if } 2r \le d \\[2ex] \displaystyle\sum_{\sigma^{(r)}:\; {*\sigma^{(r)}} \in s^{(r)}} \alpha_{\sigma^{(r)}} \, \Delta V_{\sigma^{(r)} s^{(r)}}, & \text{if } 2r > d \end{cases} \tag{B.3}$$

Acknowledgments

We would like to thank Shing-Tung Yau and Xianfeng Gu for stimulating our interest in this topic and pointing out useful references. JRM would like to acknowledge partial support from the SFB/TR7 "Gravitational Wave Astronomy" grant funded by the German Research Foundation (DFG) and is currently supported through a National Research Council Research Associateship Award at AFRL Information Directorate. WAM acknowledges partial support from the Information Directorate at Air Force Research Laboratory. PMA wishes to acknowledge the support of the Air Force Office of Scientific Research (AFOSR) for this work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of AFRL.

Appendix A. Integral Volumes in Regge Calculus

The canonical volumes of RC are the simplicial blocks of the lattice. These domains define the locally flat subspaces of the geometry. The simplicial blocks also supply the lattice with an intrinsic definition of local tangent spaces on which we explicitly define vectors, tensors, and differential forms. It is useful to decompose these simplicial domains to fit the character of the geometric objects we construct. Since all embeddings of geometric variables are essentially integrated quantities, as opposed to the point-based representation in the continuum, we wish the integral volumes to reflect the nature of the object itself. Here we provide a short review of the methods for constructing integral volumes used in this manuscript.

References

[1] R.S. Hamilton. Three-manifolds with positive Ricci curvature. J. Diff. Geom., 17:255-306, 1982.
[2] Grisha Perelman. The entropy formula for the Ricci flow and its geometric applications. Preprint arXiv:0211159 [math.DG], 2002.
[3] Grisha Perelman. Ricci flow with surgery on three-manifolds. Preprint arXiv:0303109 [math.DG], 2003.
[4] Grisha Perelman. Finite extinction time for the solutions to the Ricci flow on certain three-manifolds. Preprint arXiv:0307245 [math.DG], 2003.
[5] Tullio Regge. General relativity without coordinates. Nuovo Cimento, 19:558-571, 1961.
[6] E. Woolgar. Some applications of Ricci flow in physics. Can. J. Phys., 86(4):645-651, 2008.
[7] Shing-Tung Yau. Private communications.
[8] Joseph Samuel and Sutirtha Roy Chowdhury. Geometric flows and black hole entropy. Class. Quantum Grav., 24(11):F47, 2007.
[9] Sergey N. Solodukhin. Entanglement entropy and the Ricci flow. Phys. Lett. B, 646(5-6):268-274, 2007.
[10] M. Carfora and A. Marzuoli. Smoothing out spatially closed cosmologies. Phys. Rev. Lett., 53(25):2445-2448, Dec 1984.
[11] M. Carfora and T. Buchert. Ricci flow deformation of cosmological initial data sets. In Proceedings of the 14th International Conference on Waves and Stability in Continuous Media, pages 118-128, Hackensack, NJ, 2008. World Scientific Publishing Co.
[12] Xianfeng Gu and Shing-Tung Yau. Computing conformal structures of surfaces. Comm. Info. Sys., 2:121-146, 2002.
[13] Xianfeng Gu and Shing-Tung Yau. Global conformal surface parameterization. In SGP '03: Proceedings of the 2003 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pages 127-137, Aire-la-Ville, Switzerland, 2003. Eurographics Association.
[14] Matthew Headrick and Toby Wiseman. Ricci flow and black holes. Class. Quantum Grav., 23(23):6683, 2006.
[15] Matthew Headrick, Sam Kitchen, and Toby Wiseman. A new approach to static numerical relativity and its application to Kaluza-Klein black holes. Class. Quantum Grav., 27(3):035002, 2010.
[16] G. Holzegel, T. Schmelzer, and C. Warnick. Ricci flow of biaxial Bianchi IX metrics. Preprint arXiv:0706.1694 [hep-th], 2007.
[17] David Garfinkle and James Isenberg. The modeling of degenerate neck pinch singularities in Ricci flow by Bryant solitons. J. Math. Phys., 49(7):073505, 2008.
[18] H. Whitney. Geometric Integration Theory. Princeton University Press, Princeton, NJ, 1957.
[19] A. Bossavit. Differential forms and the computation of fields and forces in electromagnetism. Eur. J. Mech., B10:474-488, 1991.
[20] A. Bossavit. Computational Electromagnetism: Variational Formulations, Complementarity, Edge Elements. Academic Press, Chestnut Hill, MA, 1998.
[21] Doug N. Arnold, Richard S. Falk, and R. Winther. Finite element exterior calculus, homological techniques, and applications. Acta Numerica, 15:1-155, 2006.
[22] A. Okabe, B. Boots, and K. Sugihara. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. Wiley, New York, 1992.
[23] Mathieu Desbrun, Eva Kanso, and Yiying Tong. Discrete differential forms for computational modeling. In SIGGRAPH '06: ACM SIGGRAPH 2006 Courses, pages 39-54, New York, NY, 2006. ACM.
[24] Warner A. Miller. The Hilbert action in Regge calculus. Class. Quantum Grav., 14:L199-L204, 1997.
[25] Warner A. Miller. Geometrodynamic content of the Regge equations as illuminated by the boundary of a boundary principle. Foundations of Physics, 16:143-169, 1986.
[26] Jonathan R. McDonald and Warner A. Miller. A geometric construction of the Riemann scalar curvature in Regge calculus. Class. Quantum Grav., 25:196017, 2008.
[27] S. Kobayashi and K. Nomizu. Foundations of Differential Geometry, volume 1. Interscience Publishers, New York, 1963.
[28] Adrian P. Gentle, Arkady Kheyfets, Jonathan R. McDonald, and Warner A. Miller. A Kirchhoff-like conservation law in Regge calculus. Class. Quantum Grav., 26:015005, 2009.
[29] John Lighton Synge. Relativity: The General Theory. North-Holland Publishing Co., Amsterdam, 1960.
[30] Karsten Grove and Peter Peterson, editors. Comparison Geometry. Cambridge Univ. Press, Cambridge, 1997.
[]
[ "Vehicle trajectory prediction works, but not everywhere" ]
[ "Mohammadhossein Bahari [email protected]", "Saeed Saadatnejad [email protected]", "Ahmad Rahimi", "Mohammad Shaverdikondori", "Amir Hossein Shahidzadeh", "Seyed-Mohsen Moosavi-Dezfooli", "Alexandre Alahi" ]
[ "Switzerland", "Switzerland", "Sharif University of Technology", "Sharif University of Technology", "Sharif University of Technology", "Imperial College London", "Switzerland" ]
[]
Vehicle trajectory prediction is nowadays a fundamental pillar of self-driving cars. Both the industry and research communities have acknowledged the need for such a pillar by providing public benchmarks. While state-of-the-art methods are impressive, i.e., they have no off-road prediction, their generalization to cities outside of the benchmark remains unexplored. In this work, we show that those methods do not generalize to new scenes. We present a method that automatically generates realistic scenes causing state-of-the-art models to go off-road. We frame the problem through the lens of adversarial scene generation. The method is a simple yet effective generative model based on atomic scene generation functions along with physical constraints. Our experiments show that more than 60% of existing scenes from the current benchmarks can be modified in a way to make prediction methods fail (i.e., predicting off-road). We further show that the generated scenes (i) are realistic since they do exist in the real world, and (ii) can be used to make existing models more robust, yielding 30-40% reductions in the off-road rate. The code is available online: https://s-attack.github.io/.
10.1109/cvpr52688.2022.01661
[ "https://arxiv.org/pdf/2112.03909v2.pdf" ]
244,920,918
2112.03909
af3629d54a13b684c318a7bb79d1d94b6f6b4261
Vehicle trajectory prediction works, but not everywhere

Mohammadhossein Bahari (Switzerland), Saeed Saadatnejad (Switzerland), Ahmad Rahimi (Sharif University of Technology), Mohammad Shaverdikondori (Sharif University of Technology), Amir Hossein Shahidzadeh (Sharif University of Technology), Seyed-Mohsen Moosavi-Dezfooli (Imperial College London), Alexandre Alahi (Switzerland)

Introduction

Vehicle trajectory prediction is one of the main building blocks of a self-driving car, which forecasts how the future might unfold based on the road structure (i.e., the scene) and the traffic participants. State-of-the-art models are commonly trained and evaluated on datasets collected from a few cities [14,19,23].
While their evaluation has shown impressive performance, i.e., almost no off-road predictions, their generalization to other types of possible scenes, e.g., other cities, remains unknown. Figure 1 shows a real-world example where a state-of-the-art model reaching zero off-road on the known benchmark [19] failed in South St, New York, USA.

* Equal contribution as the first authors.

Figure 1. A real-world place (location) in New York where the trajectory prediction model (here [32]) fails. We find this place by retrieving real-world locations which resemble our conditionally generated scenes for the prediction model.

Since collecting and annotating data of all real-world scenes is not a viable and affordable solution, we present a method that automatically investigates the robustness of vehicle trajectory prediction to the scene. We tackle the problem through the lens of realistic adversarial scene generation. Given an observed scene, we want to generate a realistic modification of it such that the prediction models fail on it. Having an off-road prediction is a clear indication of a failure in the model's scene reasoning and has been used in some previous works [8,16,36,38]. To find a realistic example where the models go off-road, the huge space of possible scenes should be explored. One solution is data-driven generative models that mimic the distribution of a dataset [35]. Yet, they do not necessarily produce realistic scenes, due to possible artifacts. Moreover, they only represent a portion of real-world scenes, as they cannot generate scenes beyond what they have observed in the dataset (they cannot extrapolate). We therefore suggest a simple yet efficient alternative. We show that it is possible to use a limited number of simple functions for transforming the scene into new realistic but challenging ones. Our method can explicitly extrapolate to new scenes.
We introduce atomic scene generation functions where, given a scene in the dataset, the functions generate multiple new ones. These functions are chosen such that they can cover a range of realistic scenes. We then choose the scenes where the prediction model produces an off-road trajectory. Using three state-of-the-art trajectory prediction models trained on the Argoverse public dataset [19], we demonstrate that more than 60% of the existing scenes in the dataset can be modified in such a way that it will make state-of-the-art methods fail (i.e., predict off-road). We confirm that the generated scenes are realistic by finding real-world locations that partially resemble the generated scenes. We also demonstrate off-road predictions of the models in those locations. To this end, we extract appropriate features from each scene and use image retrieval techniques to search public maps [1]. We finally show that these generated scenes can be used to improve the robustness of the models.

Our contributions are fourfold:
• we highlight the need for a more in-depth evaluation of the robustness of vehicle trajectory prediction models;
• our work proposes an open-source evaluation framework through the lens of realistic adversarial scene generation by promoting an effective generative model based on atomic scene generation functions;
• we demonstrate that our generated scenes are realistic by finding similar real-world locations where the models fail;
• we show that we can leverage our generated scenes to make the models more robust.

Related work

Vehicle trajectory prediction. The scene plays an important role in vehicle trajectory prediction as it constrains the future positions of the agents. Therefore, modeling the scene is common, in contrast to some human trajectory prediction models [13,39]. In order to reason over the scene in the predictions, some works suggested using a semantically segmented map to build circular distributions and output the most probable regions [21].
Another solution is reasoning over raw scene images using convolutional neural networks (CNNs) [31]. Many follow-up works represented scenes in the segmented image format and used the learning capability of CNNs over images to account for the scene [10,17,18,25,40]. Carnet [45] used an attention mechanism to determine the scene regions that are attended to more, leading to an interpretable solution. Some recent works showed that the scene can be represented in a vector format instead of images [7,24,32,47]. To further improve the reasoning of the model and generate predictions admissible with respect to the scene, the use of a symmetric cross-entropy loss [38,41], an off-road loss [8], and a REINFORCE loss [16] has been proposed. Despite all these efforts, there has been limited attention to assessing the performance of trajectory prediction models on new scenes. Our work proposes a framework for such assessments.

Evaluating self-driving systems. Self-driving cars deal with dynamic agents nearby and the static environment around them. Several works studied the robustness of self-driving car modules with respect to the status of dynamic agents on the road, e.g., other vehicles. Some previous works change the behavior of other agents on the road to act as attackers and evaluate the model's performance with regard to the interaction with other agents [3,4,20,26,28,30,43,52]. Others directly modify the raw sensory inputs to change the status of the agents in an adversarial way [15,49,51,53]. In addition to the dynamic agents, driving is highly dependent on the static scene around the vehicle. The scene understanding of the models can be assessed by modifying the input scene. Previous works modify the raw sensory input by changing weather conditions [33,50,54], generating adversarial drive-by billboards [29,55], and adding carefully crafted patches/lines to the road [12,46]. These works have not changed the shape of the scene, i.e., the structure of the road.
In contrast, we propose a conditional scene generation method to assess the scene reasoning capability of trajectory prediction models. Our approach also differs from data-driven scene generation based on graphs [35] or semantic maps [44]: data-driven generative models are prone to artifacts and cannot extrapolate beyond the training data, while ours is an adversarial method which can extrapolate to new scenes.

Realistic scene generation

In this section, we explain in detail our approach for generating realistic scenes. After introducing the notations in Section 3.1, we show how we generate each scene in Section 3.2 and satisfy physical constraints in Section 3.3. Finally, we introduce our search method in Section 3.4.

Problem setup

The vehicle trajectory prediction task is usually defined as predicting the future trajectory of a vehicle, z, given its observation trajectory h, the status of surrounding vehicles a, and the scene S. For the sake of brevity, we assume S is in the vector representation format [19]. Specifically, S is a matrix of stacked 2d coordinates of all lanes' points in xy coordinate space, where each row represents a point $s = (s_x, s_y)$. Formally, the output trajectory z of the predictor g is:

$$z = g(h, S, a). \tag{1}$$

Given a scene S, our goal is to create a challenging realistic scene S*, as we will explain in Section 3.2.

Conditional scene generation

Our controllable scene generation method generates diverse scenes conditioned on existing scenes. Specifically, we opt for a set of atomic functions which represent the turn as a typical road topology. To this end, we normalize the scene (i.e., translation and rotation with respect to h), apply the transformation functions, and finally denormalize to return the generated scene to the original view. Note that every transformation of S is followed by the same transformations on h and a.
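As an illustration of the normalize-transform-denormalize pipeline, here is a minimal sketch of the normalization step. The function names and the convention of estimating the heading from the last observed step are our assumptions, not from the paper:

```python
import numpy as np

def normalize_scene(scene, history):
    """Translate and rotate the scene so the agent's last observed position
    sits at the origin and its heading points along +x (sketch).

    scene:   (N, 2) array of lane points, one point s = (s_x, s_y) per row.
    history: (T, 2) array of observed positions h.
    Returns the transformed scene and history, plus (R, t) so the generated
    scene can be denormalized back to the original view."""
    t = history[-1]                       # translation: last observed point
    heading = history[-1] - history[-2]   # heading estimated from last step
    theta = np.arctan2(heading[1], heading[0])
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])       # rotation taking heading to +x
    return (scene - t) @ R.T, (history - t) @ R.T, (R, t)

def denormalize_scene(scene, R, t):
    """Invert normalize_scene for the transformed points."""
    return scene @ R + t
```

The same (R, t) pair would also be applied to the surrounding agents a, mirroring the note above that every transformation of S is followed by the same transformations on h and a.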
We define transformations on each scene point in the following form:

$$\tilde{s} = (s_x,\; s_y + f(s_x - b)) \tag{2}$$

where $\tilde{s}$ is the transformed point, f is a single-variable transformation function, and b is the border parameter that determines the region of applying the transformation. In other words, we define f(x) = 0 for x < 0, so the areas where $s_x < b$ are not modified. This confines the changes to the regions containing the prediction. One example is shown in Figure 2. The new scene is named $\tilde{S}$, a matrix of stacked $\tilde{s}$. We propose three interpretable analytical functions for the choice of f.

Smooth-turn: this function represents different types of single turns in the road:

$$f_{st,\alpha}(s_x) = \begin{cases} 0, & s_x < 0 \\ q_\alpha(s_x), & 0 \le s_x \le \alpha_1 \\ (s_x - \alpha_1)\, q'_\alpha(\alpha_1) + q_\alpha(\alpha_1), & \alpha_1 < s_x \end{cases} \qquad q_\alpha(s_x) = \alpha_2 s_x^{\alpha_3}, \quad \alpha = (\alpha_1, \alpha_2, \alpha_3), \tag{3}$$

where $\alpha_1$ determines the length of the turn, $\alpha_2, \alpha_3$ control its sharpness, and $q'_\alpha$ indicates the derivative of the auxiliary function $q_\alpha$. Note that, by definition, $f_{st,\alpha}$ is continuously differentiable and makes a smooth turn. One such turn is depicted in Figure 2b.

Double-turn: these functions represent two consecutive turns with opposite directions, with a variable indicating the distance between them:

$$f_{dt,\beta}(s_x) = f_{st,\beta_1}(s_x) - f_{st,\beta_1}(s_x - \beta_2), \qquad \beta = (\beta_{11}, \beta_{12}, \beta_{13}, \beta_2), \quad \beta_1 = (\beta_{11}, \beta_{12}, \beta_{13}), \tag{4}$$

where $\beta_1$ is the set of parameters of each turn described in Equation (3) and $\beta_2$ is the distance between the two turns. One example is shown in Figure 2c.

Ripple-road: one type of scene that can be challenging for the prediction model is the ripple road:

$$f_{rr,\gamma}(s_x) = \begin{cases} 0, & s_x < 0 \\ \gamma_1 (1 - \cos(2\pi \gamma_2 s_x)), & s_x \ge 0 \end{cases} \qquad \gamma = (\gamma_1, \gamma_2), \tag{5}$$

where $\gamma_1$ determines the turn curvatures and $\gamma_2$ determines the sharpness of the turns. One such turn is depicted in Figure 2d.

Physical constraints

Every scenario consists of a scene and vehicle trajectories in it.
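Before turning to the constraints, a direct NumPy transcription of Equations (2)-(5) might look as follows (a sketch; function and parameter names are ours):

```python
import numpy as np

def f_smooth_turn(sx, a1, a2, a3):
    """Equation (3): zero for sx < 0, power-law turn q(x) = a2 * x**a3 up to
    a1, then a straight continuation with matched slope (C^1 everywhere)."""
    q = lambda x: a2 * x ** a3
    dq_a1 = a2 * a3 * a1 ** (a3 - 1)          # q'(a1)
    xs = np.clip(sx, 0.0, None)               # avoid x**a3 on negatives
    return np.where(sx < 0, 0.0,
           np.where(sx <= a1, q(xs), (sx - a1) * dq_a1 + q(a1)))

def f_double_turn(sx, b1, b2):
    """Equation (4): two opposite consecutive turns separated by b2."""
    return f_smooth_turn(sx, *b1) - f_smooth_turn(sx - b2, *b1)

def f_ripple_road(sx, g1, g2):
    """Equation (5): ripples with curvature g1 and sharpness g2."""
    return np.where(sx < 0, 0.0, g1 * (1.0 - np.cos(2.0 * np.pi * g2 * sx)))

def transform_points(points, f, b=5.0, **params):
    """Equation (2): points with s_x < b are untouched; the rest are bent."""
    sx, sy = points[:, 0], points[:, 1]
    return np.stack([sx, sy + f(sx - b, **params)], axis=1)
```

In the paper the same transformation is applied to the scene matrix S, the history h, and the surrounding agents a after normalization; here `transform_points` would simply be called on each of them.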
The generated scenarios must be feasible; otherwise, they cannot represent possible real-world cases. We consider a scenario feasible if a human driver can pass it safely. This means that the physical constraints (i.e., Newton's laws) should not be violated. Newton's law indicates a maximum feasible speed for each road based on its curvature [22]:

$$v_{max} = \sqrt{\mu g R}, \tag{6}$$

where R is the radius of the road, µ is the friction coefficient, and g is the gravitational acceleration. To consider the most conservative situation, we pick the maximum curvature (minimum radius) existing in the generated road. Then, we slow down the history trajectory when its speed is higher than the maximum feasible speed, and we name it $\tilde{h}$. Note that this conservative speed scaling ensures a feasible acceleration too. We will show in Section 4 that a model with hard-coded physical constraints successfully predicts the future trajectory for the generated scenes, which indicates that our constraints are sufficient.

Scene search method

In the previous sections, we defined a realistic controllable scene generation method. Now, we introduce a search method to find a challenging scene specific to each trajectory prediction model. We define m as a function of a predicted trajectory and a scene, measuring the percentage of prediction points that are off-road, obtained using a binary mask of the drivable area. We aim to solve the following problem to find a scene in which the prediction model fails:

$$S^* = \arg\min_{\tilde{S}} \; \ell(\tilde{z}, \tilde{S}), \qquad \ell(\tilde{z}, \tilde{S}) = \left(1 - m(\tilde{z}, \tilde{S})\right)^2, \tag{7}$$

where $\tilde{S}$ is a modification of S according to Equation (2), using one of the transformation functions in Equation (3), Equation (4), or Equation (5). Moreover, $\tilde{z} = g(\tilde{h}, \tilde{S}, \tilde{a})$ is the model's predicted trajectory given the modified scene and the modified history trajectories. The optimization problem finds the parameters that yield the scene S* giving the highest number of off-road prediction points. Equation (7) can be optimized using any black-box optimization technique.
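Below is a self-contained sketch of the feasibility check of Equation (6) together with a brute-force variant of Equation (7). The circumradius-based curvature estimate, the helper names, and the candidate interface are our assumptions; the paper only specifies the speed bound with the minimum radius and the loss (1 - m)^2:

```python
import numpy as np

def max_feasible_speed(centerline, mu=0.7, g=9.81):
    """Equation (6): v_max = sqrt(mu*g*R), taking R as the minimum turning
    radius along the road. Each radius is estimated as the circumradius of
    three consecutive centerline points (our choice of estimator)."""
    radii = []
    for p, q, r in zip(centerline[:-2], centerline[1:-1], centerline[2:]):
        a = np.linalg.norm(q - r)
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        twice_area = abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))
        if twice_area > 1e-12:                 # skip collinear triplets
            radii.append(a * b * c / (2.0 * twice_area))
    R = min(radii) if radii else np.inf
    return np.sqrt(mu * g * R)

def slow_down(history, v_max, dt=0.1):
    """Rescale the observed displacements so the peak speed is at most
    v_max, keeping the last observed position fixed (Sec. 3.3 sketch)."""
    steps = np.diff(history, axis=0)
    peak = np.linalg.norm(steps, axis=1).max() / dt
    if peak <= v_max:
        return history
    steps = steps * (v_max / peak)
    tails = history[-1] - np.cumsum(steps[::-1], axis=0)[::-1]
    return np.concatenate([tails, history[-1:]], axis=0)

def brute_force_search(predict, offroad_frac, candidates):
    """Equation (7): among candidate (scene, history) pairs, keep the one
    whose prediction maximizes the off-road fraction m, i.e. minimizes
    the loss (1 - m)^2."""
    best, best_loss = None, np.inf
    for scene, history in candidates:
        z = predict(history, scene)
        loss = (1.0 - offroad_frac(z, scene)) ** 2
        if loss < best_loss:
            best, best_loss = scene, loss
    return best, best_loss
```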
We have studied Bayesian optimization [42,48], genetic algorithms [5,34], the Tree-structured Parzen Estimator (TPE) approach [9], and brute force. The overall algorithm is described in Appendix A.

Experiments

We conduct experiments to answer the following questions: 1) How do the prediction models perform on our generated scenes? 2) Are the generated scenes realistic and possibly similar to real-world scenes? 3) Can we leverage the generated scenes to improve the robustness of the model?

Experimental setup

4.1.1 Baselines and datasets

We conduct our experiments on baselines with different scene reasoning approaches (lane-graph attention [32], symmetric cross-entropy [38], and counterfactual reasoning [27]), which are among the top-performing models and are open source.

LaneGCN [32]. It constructs a lane graph from the vectorized scene and uses self-attention to learn the predictions. This method was among the top methods in the Argoverse Forecasting Challenge 2020 [2]. It is a multi-modal prediction model which also provides the probability of each mode. Therefore, in our experiments, we consider the mode with the highest probability.

DATF [38]. It is a flow-based method which uses a symmetric cross-entropy loss to encourage producing on-road predictions. This multi-modal prediction model does not provide the probability of each mode. We therefore consider the mode which is closest to the ground truth.

WIMP [27]. It employs a scene attention module and a dynamic interaction graph to capture geometric and social relationships. Since it does not provide probabilities for each mode of its multi-modal predictions, we consider the one which is closest to the ground truth.

MPC [6,56]. We report the performance of a rule-based model with satisfied kinematic constraints. We used a well-known rule-based model which follows the centers of the lanes [56].
While many approaches can be used to satisfy kinematic constraints in trajectory prediction, similar to [6], we used Model Predictive Control (MPC) with a bicycle dynamics model.

We leveraged the Argoverse dataset [19], the same dataset our baselines were trained on. Given the 2-second observation trajectory, the goal is to predict the next 3 seconds as the future motion of the vehicle. It is a large-scale vehicle trajectory dataset, covering parts of Pittsburgh and Miami with a total size of 290 kilometers of lanes.

Metrics

Hard Off-road Rate (HOR): in order to measure the percentage of samples with an inadmissible prediction with regard to the scene, we define HOR as the percentage of scenarios in which at least one off-road point occurs in the predicted trajectory. It is rounded to the nearest integer.

Soft Off-road Rate (SOR): to measure the performance in each scenario more thoroughly, we measure the percentage of off-road prediction points over all prediction points; the average over all scenarios is reported. The reported values are rounded to the nearest integer.

Implementation details

We set the number of iterations to 60, the friction coefficient µ to 0.7 [11], and b equal to 5 for all experiments. For the choice of the black-box algorithm, since the search space of parameters is small in our case, we opt for the brute-force algorithm. We developed our model using a 32GB V100 NVIDIA GPU.

Results

We first provide the quantitative results of applying our method to the baselines in Table 1. The last column (All) represents the results of the search method described in Section 3.4. In the other columns of the table, we also report the performance when considering only one category of scene generation functions in the optimization problem of Equation (7). The results indicate a substantial increase in SOR and HOR across all baselines for the different categories of generated scenes. This shows that the generated scenes are difficult for the models to handle.
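For reference, the two off-road metrics reported in these tables can be computed from per-scenario boolean masks marking which prediction points fall off the drivable area; a minimal sketch (the function name is ours):

```python
import numpy as np

def offroad_rates(masks):
    """Compute SOR and HOR from boolean arrays, one per scenario, marking
    which prediction points fall outside the drivable area.

    SOR: average over scenarios of the fraction of off-road points.
    HOR: fraction of scenarios with at least one off-road point.
    Both are returned in percent; the paper rounds them to integers."""
    sor = 100.0 * float(np.mean([m.mean() for m in masks]))
    hor = 100.0 * float(np.mean([m.any() for m in masks]))
    return sor, hor
```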
LaneGCN and WIMP have competitive performance, but WIMP's run-time is 50 times slower than LaneGCN's. Hence, we use LaneGCN for our remaining experiments. Figure 3 visualizes the performance of the baselines on our generated scenes. We observe that all models are challenged by the generated scenes. More cases are provided in Appendix B.

In Table 1, we observe that SOR is less than or equal to 1% for all methods in the original scenes. Our exploration shows that more than 90% of these off-road cases are due to annotation noise in the drivable-area maps of the dataset, and the models are almost error-free with respect to the scene. Some figures are provided in Appendix B. While this might lead to the conclusion that the models are flawless, the results on the generated scenes question this conclusion. We confirm our claim in the next section by retrieving real-world scenes where the model fails.

Feasibility is an important property of a generated scenario. As mentioned in Section 3.3, we added physical constraints to guarantee the physical feasibility of the scenes. Table 1 indicates that MPC, as a rule-based model, predicts almost without any off-road points in the generated scenarios, which confirms that the scenes are feasible given the observed history trajectory. To study the importance of the added constraints, we relax them for the generated scenes and report the performance of the baseline and MPC on the cases where the maximum speed in h exceeds v_max. In Table 2, we observe that without these feasibility-assurance constraints there are more cases where MPC is unable to follow the road, with 3× more off-road predictions. We conclude that the constraints are necessary to make the scenes feasible, and we keep them in all of our experiments.

Real-world retrieval

So far, we have shown that the generated scenes, together with the constraints, are feasible and realistic.
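The speed bound v_max in the constraint above can be tied to the friction coefficient µ = 0.7 used in the experiments. A standard physics choice, assumed here rather than taken from the paper, is the no-slip condition for a curve of radius r:

```python
import math

# Sketch of a feasibility check based on friction. The no-slip bound
# v_max = sqrt(mu * g * r) is a standard physics result; treating it
# as the paper's exact formula is an assumption.

def max_feasible_speed(radius, mu=0.7, g=9.81):
    """Maximum speed [m/s] to negotiate a curve of the given radius [m]
    without sliding, under a simple friction-circle model."""
    return math.sqrt(mu * g * radius)

def history_is_feasible(history_speeds, radius):
    # Keep a scenario only if the observed history never exceeds the
    # bound implied by the (transformed) road geometry.
    return max(history_speeds) <= max_feasible_speed(radius)

print(round(max_feasible_speed(100.0), 1))  # ~26.2 m/s for a 100 m curve
```

Under this check, a history driven at highway speed would be rejected (or rescaled) when the generated road bends too sharply.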
Next, we want to study the plausibility of the generated scenes, i.e., whether similar scenes exist in the real world. Inspired by image retrieval methods [37], we develop a retrieval method to find similar roads in the real world. First, we extract data for 4 arbitrary cities (New York, Paris, Hong Kong, and New Mexico) using OSM [1]. Then, 20,000 random samples of 200 × 200 meters are collected from each city; note that this is the same view size as in Argoverse samples. Next, a feature extractor is required to obtain a feature vector for each scene. We use the scene feature extractor of LaneGCN, named MapNet, to obtain a 128-dimensional feature vector for each sample. We then use the well-known K-tree image retrieval algorithm [37]. It first applies the K-Means algorithm multiple times to cluster the feature vectors of all scenes into a predefined number of clusters (in our case 1,000). Then, given a generated scene as the query, it sorts the real scenes by similarity to the query and retrieves the 10 closest ones. Finally, we test the prediction model on these examples. Some examples are provided in Figure 4; more scenes can be found in Appendix B.

Robustness

Here, we study whether we can make the models robust against the generated scenes. To this end, we fine-tune the trained model for 10 epochs using a combination of the original training data and the examples generated by our method. We report the performance of these models on generated scenes with different transformation powers. The transformation power is determined by α2 × 3,000, β12 × 3,000, and γ1 for Equation (3), Equation (4), and Equation (5), respectively; it represents the amount of curvature in the scene. Table 3 indicates that, without losing performance on the original accuracy metrics, the fine-tuned model is less vulnerable to the generated scenes, with 40% lower SOR and 30% lower HOR in the Full setting.
While the results show improvements for all transformation powers, the gains in extreme cases are higher, i.e., the model handles them better after fine-tuning. In Figure 5, the prediction of the original model is compared with that of the robust model: the original model cannot predict without going off-road, while the fine-tuned model produces a reasonable prediction without any off-road points.

Discussions

In this section, we perform experiments and offer speculations to shed light on the weaknesses of the models.

Table 2. Impact of the physical constraints. We report the performance with and without the physical constraints explained in Section 3.3. The numbers are reported on samples of data with speed higher than v_max in their h.

1. We study the ability to transfer the generated scenes to new models, i.e., how models perform on scenes generated for other models. We conduct this experiment by storing the generated scenes that lead to an off-road prediction for a source model, and evaluating the performance of target models on the stored scenes. Table 4 shows that the transferred scenes are still difficult cases for the other models.

2. We study how the models perform as the transformation function parameters change smoothly. To this end, we smoothly change the transformation parameters for 100 random scenes and visualize the heatmap of HOR for the generated scenes. Figure 6 demonstrates that the models are more vulnerable to larger transformation parameters, i.e., sharper turns. It also shows more off-road predictions in left turns compared with right ones, which could be due to biases in the dataset [36]. A clear improvement is visible in the robust model.

3. Our experiments showed that while the model has an almost zero off-road rate in the original scenes, it suffers from an over 60% off-road rate in the generated ones. To hypothesize the causes of this gap, we explored the training data.
We observed that in most samples, the history h carries enough information about the future trajectory, which reduces the need for scene reasoning. However, our scene generation approach changes the scene such that h contains almost no information about the future trajectory, creating a situation that genuinely requires scene reasoning. We speculate that this is one factor that makes the generated scenes challenging. Note that this does not contradict the ablations in [32], as their performance measure is accuracy. Figure 7a shows a failure of the model where the prediction is based only on h instead of reasoning over the scene; the robust model, however, learned to reason over the scene, as shown in Figure 7b. While our discussion is an observational hypothesis, we leave further studies for future work.

Table 3. Comparing the original model and the model fine-tuned with data augmentation of the generated scenes. The performance is reported on generated scenes with different transformation powers (Pow). The transformation power is determined by α2 × 3,000, β12 × 3,000, and γ1 for Equation (3), Equation (4), and Equation (5), respectively, and represents the amount of curvature in the scene. The average / final displacement errors on the original scenes are equal to 1.35/2.98m for both the original and the fine-tuned models.

4. In some cases, our generated scene could not lead to an off-road prediction. One such example is depicted in Figure 8a.

5. While our method offers a new approach for assessing trajectory prediction models, it has some limitations. First, our transformation functions are limited and cannot cover all real-world cases.

Figure 6. The qualitative results of the baselines (WIMP [27], DATF [38], LaneGCN [32], and our robustified LaneGCN; axes are the X/Y spatial coordinates). The red color indicates more off-road predictions in those scenes and green indicates more admissible ones. The models usually fail in turns with high curvature.
We could successfully make the LaneGCN model more robust by fine-tuning.

Figure 7. The output of the model before and after robustification on a sample which requires reasoning over the scene. We observe that before robustification the model mainly uses h to predict instead of reasoning over the scene; after robustification, it reasons more over the scene.

Moreover, Figure 8b shows one scenario in which the predictions of the model are in the drivable area but the sudden lane change is abnormal.

Conclusion

In this work, we presented a conditional scene generation method. We showed that several state-of-the-art trajectory prediction models fail in our generated scenes; notably, they have a high off-road rate in their predictions. Next, leveraging image retrieval techniques, we retrieved real-world locations which partially resemble the generated scenes and demonstrated the models' failures in those locations. Finally, we made the model robust against the generated scenes. We hope that this framework helps to better evaluate the prediction models involved in autonomous driving systems.

Acknowledgments

A. Overall algorithm

In this section, we present the overall algorithm for the chosen search method. The pseudo-code of the algorithm for generating a scene is shown in Algorithm 1. The goal is to generate the scene S* for a given scenario x, a, S and predictor g. The process runs for k_max iterations. In each iteration, we start by selecting a transformation function (L. 3). Then, the transformation function generates the corresponding scene (L. 4). After that, the observation trajectory is scaled to ensure the feasibility of the scenario (L. 5). Next, the prediction of the model in the new scenario is computed and used to calculate the loss (L. 6, L. 7). The best-achieved loss determines the final generated scene.

2. More generated scenes. Figure 10 provides more visualizations of the performance of the baselines on our generated scenes.
3. Noise in the drivable area map. The models predict nearly perfectly on the original dataset, with an HOR of less than 1%. Our exploration shows that most of the 1% failed cases are due to annotation noise in the drivable-area maps of the dataset, and the models are almost error-free with respect to the scene. Some figures are provided in Figure 11.

4. Gifs. We provide gifs of the performance of the model when smoothly transforming the scene in Figure 12. We observe that the model fails in some cases and succeeds in others.

C. Additional quantitative results

C.1. Excluding trivial scenes

In this section, we remove some trivial scenes, i.e., scenes where fooling is nearly impossible, e.g., scenes with zero velocity. Excluding them, we report the results in Table 5; compared to Table 1 of the paper, the off-road numbers substantially increase.

C.2. Exploring black-box algorithms

In the paper, we mentioned that we used a brute-force approach for finding the optimal values, as the search space is not huge. Here, we investigate different black-box algorithms for the search. The results of applying different search algorithms are provided in Table 6. They cannot outperform the brute-force approach because of their bigger search spaces (the continuous space instead of the discrete one) and the large computation time required.

D. Generalization to rasterized scenes

In the paper, we assumed S is in the vector representation, i.e., it includes the x-y coordinates of road lane points. In the case of a rasterized scene, an RGB value is provided for each pixel of the image. Therefore, it is the same as the vector representation, except that here we have information (an RGB value) about other parts of the scene in addition to the lanes. Hence, the transformation function can be applied directly to all pixels of the image. In other words, in the image representation, s is the coordinate of each pixel, which has an RGB value, and ŝ represents the new coordinate with the same RGB value as s.
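As a concrete illustration of applying such a coordinate transformation to every scene point (or, for a rasterized scene, every pixel coordinate), here is a hypothetical ripple-road-style function. The sinusoidal form and the (amplitude, frequency) parametrization only mirror the spirit of the paper's γ parameters; the actual functional form may differ.

```python
import math

# Hypothetical "ripple-road"-style transformation: every point receives
# a lateral offset that is sinusoidal in its longitudinal position.
# In the rasterized case, (x, y) would be a pixel coordinate and its
# RGB value would simply move with it.
def ripple(points, amplitude=6.0, frequency=0.017):
    return [(x, y + amplitude * math.sin(frequency * x)) for x, y in points]

# A straight lane sampled every 10 m over 200 m becomes a wavy lane.
straight_lane = [(float(x), 0.0) for x in range(0, 200, 10)]
curvy_lane = ripple(straight_lane)
print(curvy_lane[0])  # the origin stays fixed: (0.0, 0.0)
```

Because the same mapping is applied to lane points and history points alike, the transformed scenario stays internally consistent.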
Figure 11. Some examples showing the noise in the drivable area map (legend: other vehicles, observation, ground truth, prediction). All these predictions were considered off-road because of an inaccurate drivable area map.

Table 6. Comparing the performance and computation time of different optimization algorithms on the generated scenes.

Figure 2. Visualization of different transformation functions. The scene before transformation is followed by three different transformations. Here, α = (10, 0.002, 3) for the single-turn, β = (10, 0.002, 3, 10) for the double-turn, and γ = (6, 0.017) for the ripple-road. b is the border parameter and is set to 5 meters in all figures.

Table 1. Comparing the performance of different baselines on the original dataset scenes and our generated scenes. SOR and HOR are reported in percent; lower values represent better reasoning over the scene by the model. MPC, as a rule-based model, always has on-road predictions in both the original and our generated scenes.

Figure 3. The predictions of different models in some generated scenes. All models are challenged by the generated scenes and fail to predict within the drivable area.

Figure 4. Retrieving some real-world locations similar to the generated scenes using our real-world retrieval algorithm. We observe that the model fails in Paris (a), Hong Kong (b), and New Mexico (c).

Figure 5. The output of the original model (left) vs. the robust model (right) in a generated scene. While the original model predicts a trajectory in the non-drivable area, the robust model predicts without any off-road points.

Figure 8. Some successful cases of the prediction model. In (a), the model follows the road and predicts without any off-road points. In (b), while the model predicts on-road, it suddenly changes its lane.
This project was funded by Honda R&D Co., Ltd and by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 754354. We thank Lorenzo Bertoni and Kirell Benzi for their valuable feedback.

Algorithm 1: Scene search method. Input: sequence h, scene S, predictor g, surrounding vehicles a, transformation set f, number of iterations k_max. Output: generated scene S*. (1) Initialize l* ← 1. (2) For k = 1 to k_max ...

1. Real-world retrieval images. We show more real-world examples, for both cases where the trajectory prediction model fails and where it succeeds, in Figure 9.

Figure 9. Retrieving real-world places using our real-world retrieval algorithm. We observe that the model fails in Paris (a), New York (b), Hong Kong (c), and New Mexico (d). The model also successfully predicts in the drivable area in the remaining figures.

Figure 10. The predictions of different models in some generated scenes. All models are challenged by the generated scenes and fail to predict within the drivable area.

Figure 12. The animations show the changes in the models' predictions in different scenes. Best viewed with Adobe Acrobat Reader.

Table 1 (data; all entries SOR / HOR in percent):

Model          Original   Generated (Ours)
                          Smooth-turn   Double-turn   Ripple-road   All
DATF [38]      1 / 2      37 / 77       36 / 76       42 / 80       43 / 82
WIMP [27]      0 / 1      13 / 46       14 / 50       20 / 58       22 / 63
LaneGCN [32]   0 / 1      8 / 40        19 / 60       21 / 62       23 / 66
MPC [56]       0 / 0      0 / 0         0 / 0         0 / 0         0 / 0

Table 4. Studying the transferability of the generated scenes. We generate scenes for a source model and keep the ones that receive an off-road prediction from the source model. The target models are evaluated on those scenes. The reported numbers are SOR/HOR values, rounded to the nearest integer.
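Algorithm 1 can be paraphrased in a few lines of Python. Here, `transforms`, `predictor`, and `loss` are placeholders for the paper's transformation set f, model g, and loss function, and the round-robin choice of transformation is an assumption (the paper only states that a transformation function is selected per iteration).

```python
# Sketch of Algorithm 1 (scene search): keep the generated scene that
# achieves the best (lowest) loss over k_max iterations. All callables
# are placeholders, not the paper's actual components.
def scene_search(h, scene, predictor, transforms, loss, k_max):
    best_loss, best_scene = float("inf"), scene
    for k in range(k_max):
        f = transforms[k % len(transforms)]    # L.3: select a transformation
        new_scene, new_h = f(scene, h)         # L.4-5: new scene, scaled history
        y_hat = predictor(new_h, new_scene)    # L.6: model prediction
        l = loss(y_hat, new_scene)             # L.7: e.g. soft off-road loss
        if l < best_loss:
            best_loss, best_scene = l, new_scene
    return best_scene

# Toy usage with scalar "scenes": the search returns the transformed
# scene whose prediction is judged worst (lowest loss here).
transforms = [lambda s, h: (s + 1, h), lambda s, h: (s + 2, h)]
result = scene_search(0, 0, lambda h, s: s, transforms, lambda y, s: abs(y - 5), 4)
print(result)  # 2
```

With brute force, the loop simply enumerates the discretized parameter grid of each transformation instead of sampling.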
They cannot cover all real-world cases; however, we propose a general methodology that can be expanded by adding other types of transformations. To demonstrate this, we add lane merging to the framework, which causes 14% HOR. Second, in addition to the off-road criterion, there exist other failure criteria, for instance collision with other agents or abnormal behaviors like sudden lane changes. Choosing collision with other agents as the criterion, HOR becomes 1.68% in the generated scenes while it is 0.55% in the original data.

Table 5 (data; all entries SOR / HOR in percent):

Model          Original   Generated (Ours)
                          Smooth-turn   Double-turn   Ripple-road   All
DATF [38]      1 / 2      44 / 92       43 / 91       50 / 95       51 / 99
WIMP [27]      0 / 1      30 / 80       23 / 71       29 / 77       31 / 82
LaneGCN [32]   0 / 1      23 / 65       32 / 75       34 / 77       37 / 81
MPC [56]       0 / 0      0 / 0         0 / 0         0 / 0         0 / 0

Table 5. Comparing the performance of different baselines on the original dataset scenes and our generated scenes after removing trivial scenarios. SOR and HOR are reported in percent; lower values represent better reasoning over the scene by the model. Numbers are rounded to the nearest integer.

Table 6 (data):

Optimization method (on LaneGCN [32])   SOR / HOR   GPU hours
Bayesian [42, 48]                       13 / 40     17.5
GA [34]                                 14 / 45     25.0
TPE [9]                                 14 / 45     12.1
Brute force                             23 / 66     4.2

We show in Appendix D that our method is seamlessly applicable when S is in image representation.

References

[1] OpenStreetMap. https://www.openstreetmap.org.
[2] ArgoAI challenge. CVPR Workshop on Autonomous Driving, 2020.
[3] Yasasa Abeysirigoonawardena, Florian Shkurti, and Gregory Dudek. Generating adversarial driving scenarios in high-fidelity simulators. In International Conference on Robotics and Automation (ICRA). IEEE, 2019.
[4] Matthias Althoff and Sebastian Lutz. Automatic generation of safety-critical test scenarios for collision avoidance of road vehicles. In IEEE Intelligent Vehicles Symposium (IV), 2018.
[5] Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B. Srivastava. Genattack: Practical black-box attacks with gradient-free optimization. In GECCO, 2019.
[6] Mohammadhossein Bahari, Ismail Nejjar, and Alexandre Alahi. Injecting knowledge in data-driven vehicle trajectory predictors. Transportation Research Part C: Emerging Technologies, 128, 2021.
[7] Mohammadhossein Bahari, Vahid Zehtab, Sadegh Khorasani, Sana Ayramlou, Saeed Saadatnejad, and Alexandre Alahi. Svg-net: An svg-based trajectory prediction model. arXiv preprint arXiv:2110.03706, 2021.
[8] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. In Robotics: Science and Systems (RSS), 2019.
[9] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems (NeurIPS), 2011.
[10] Yuriy Biktairov, Maxim Stebelev, Irina Rudenko, Oleh Shliazhko, and Boris Yangel. Prank: Motion prediction based on ranking. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
[11] Peter J. Blau. Friction Science and Technology: From Concepts to Applications. CRC Press, 2005.
[12] Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, and Xuan Zhang. Attacking vision-based perception in end-to-end autonomous driving models. Journal of Systems Architecture, 110, 2020.
[13] Smail Ait Bouhsain, Saeed Saadatnejad, and Alexandre Alahi. Pedestrian intention prediction: A multi-task perspective. In European Association for Research in Transportation (hEART), 2020.
[14] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[15] Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, and Bo Li. Adversarial objects against lidar-based autonomous driving systems. arXiv preprint arXiv:1907.05418, 2019.
[16] Sergio Casas, Cole Gulino, Simon Suo, and Raquel Urtasun. The importance of prior knowledge in precise multimodal prediction, 2020.
[17] Sergio Casas, Wenjie Luo, and Raquel Urtasun. Intentnet: Learning to predict intention from raw sensor data. In Conference on Robot Learning. PMLR, 2018.
[18] Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. In Conference on Robot Learning, 2019.
[19] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, et al. Argoverse: 3D tracking and forecasting with rich maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[20] Anthony Corso, Robert J. Moss, Mark Koren, Ritchie Lee, and Mykel J. Kochenderfer. A survey of algorithms for black-box safety validation. arXiv preprint arXiv:2005.02979, 2020.
[21] Pasquale Coscia, Francesco Castaldo, Francesco A. N. Palmieri, Alexandre Alahi, Silvio Savarese, and Lamberto Ballan. Long-term path prediction in urban scenarios using circular distributions. Image and Vision Computing, 69, 2018.
[22] David Halliday, Robert Resnick, and Jearl Walker. Fundamentals of Physics. Wiley, 1997.
[23] Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles Qi, Yin Zhou, et al. Large scale interactive motion forecasting for autonomous driving: The Waymo Open Motion Dataset. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
[24] Jiyang Gao, Chen Sun, Hang Zhao, Yi Shen, Dragomir Anguelov, Congcong Li, and Cordelia Schmid. Vectornet: Encoding HD maps and agent dynamics from vectorized representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[25] Joey Hong, Benjamin Sapp, and James Philbin. Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[26] Francis Indaheng, Edward Kim, Kesav Viswanadha, Jay Shenoy, Jinkyu Kim, Daniel J. Fremont, and Sanjit A. Seshia. A scenario-based platform for testing autonomous vehicle behavior prediction models in simulation. arXiv preprint arXiv:2110.14870, 2021.
[27] Siddhesh Khandelwal, William Qi, Jagjeet Singh, Andrew Hartnett, and Deva Ramanan. What-if motion prediction for autonomous driving. arXiv preprint arXiv:2008.10587, 2020.
[28] Moritz Klischat and Matthias Althoff. Generating critical test scenarios for automated vehicles with evolutionary algorithms. In IEEE Intelligent Vehicles Symposium (IV), 2019.
[29] Zelun Kong, Junfeng Guo, Ang Li, and Cong Liu. Physgan: Generating physical-world-resilient adversarial examples for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[30] Parth Kothari, Sven Kreiss, and Alexandre Alahi. Human trajectory forecasting in crowds: A deep learning perspective. IEEE Transactions on Intelligent Transportation Systems, 2021.
[31] Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B. Choy, Philip H. S. Torr, and Manmohan Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[32] Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In European Conference on Computer Vision (ECCV). Springer, 2020.
[33] Harshitha Machiraju and Vineeth N. Balasubramanian. A little fog for a large turn. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020.
[34] Kim-Fung Man, Wallace Kit-Sang Tang, and Sam Kwong. Genetic algorithms: Concepts and applications [in engineering design]. IEEE Transactions on Industrial Electronics, 1996.
[35] Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, and Dragomir Anguelov. Hdmapgen: A hierarchical graph generative model of high definition maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[36] Matthew Niedoba, Henggang Cui, Kevin Luo, Darshan Hegde, Fang-Chieh Chou, and Nemanja Djuric. Improving movement prediction of traffic actors using off-road loss and bias mitigation. In Advances in Neural Information Processing Systems (NeurIPS) Workshop, 2019.
[37] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2006.
[38] Seong Hyeon Park, Gyubok Lee, Jimin Seo, Manoj Bhat, Minseok Kang, Jonathan Francis, Ashwin Jadhav, Paul Pu Liang, and Louis-Philippe Morency. Diverse and admissible trajectory forecasting through multimodal context understanding. In European Conference on Computer Vision (ECCV). Springer, 2020.
[39] Behnam Parsaeifard, Saeed Saadatnejad, Yuejiang Liu, Taylor Mordan, and Alexandre Alahi. Learning decoupled representations for human pose forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pages 2294-2303, 2021.
[40] Xuanchi Ren, Tao Yang, Li Erran Li, Alexandre Alahi, and Qifeng Chen. Safety-aware motion prediction with unseen vehicles for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
[41] Nicholas Rhinehart, Kris M. Kitani, and Paul Vernaza. R2P2: A reparameterized pushforward policy for diverse, precise generative path forecasting. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
[42] Binxin Ru, Adam D. Cobb, Arno Blaas, and Yarin Gal. Bayesopt adversarial attack. In International Conference on Learning Representations (ICLR), 2020.
[43] Saeed Saadatnejad, Mohammadhossein Bahari, Pedram Khorsandi, Mohammad Saneian, Seyed-Mohsen Moosavi-Dezfooli, and Alexandre Alahi. Are socially-aware trajectory prediction models really socially-aware? arXiv preprint arXiv:2108.10879, 2021.
[44] Saeed Saadatnejad, Siyuan Li, Taylor Mordan, and Alexandre Alahi. A shared representation for photorealistic driving simulators. IEEE Transactions on Intelligent Transportation Systems, 2021.
[45] Amir Sadeghian, Ferdinand Legros, Maxime Voisin, Ricky Vesel, Alexandre Alahi, and Silvio Savarese. Car-net: Clairvoyant attentive recurrent network. In European Conference on Computer Vision (ECCV), 2018.
[46] Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jia, Xue Lin, and Qi Alfred Chen. Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack. In Proceedings of the 29th USENIX Security Symposium (USENIX Security '21), 2021.
[47] Haoran Song, Di Luan, Wenchao Ding, Michael Yu Wang, and Qifeng Chen. Learning to predict vehicle trajectories with model-based planning. In Conference on Robot Learning, 2021.
[48] Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning (ICML), 2010.
4, 15 Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. Jiachen Sun, Yulong Cao, Alfred Qi, Z Morley Chen, Mao, 29th USENIX Security Symposium. 2020Jiachen Sun, Yulong Cao, Qi Alfred Chen, and Z Morley Mao. Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In 29th USENIX Security Symposium, 2020. 2 Deeptest: Automated testing of deep-neural-network-driven autonomous cars. Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray, Proceedings of the 40th international conference on software engineering. the 40th international conference on software engineeringYuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. Deeptest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th international conference on software engineering, 2018. 2 Frank Cheng, and Raquel Urtasun. Physically realizable adversarial examples for lidar object detection. James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)2020James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Ur- tasun. Physically realizable adversarial examples for lidar object detection. In Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition (CVPR), 2020. 2 Failure-scenario maker for rule-based agent using multi-agent adversarial reinforcement learning and its application to autonomous driving. Akifumi Wachi, Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI). Akifumi Wachi. Failure-scenario maker for rule-based agent using multi-agent adversarial reinforcement learning and its application to autonomous driving. 
Twenty-Eighth Interna- tional Joint Conference on Artificial Intelligence (IJCAI), 2019. 2 Advsim: generating safety-critical scenarios for self-driving vehicles. Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat, Sergio Casas, Mengye Ren, Raquel Urtasun, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)2021Jingkang Wang, Ava Pun, James Tu, Sivabalan Mani- vasagam, Abbas Sadat, Sergio Casas, Mengye Ren, and Raquel Urtasun. Advsim: generating safety-critical sce- narios for self-driving vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2 Deeproad: Gan-based metamorphic testing and input validation framework for autonomous driving systems. Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, Sarfraz Khurshid, 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE). Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, and Sarfraz Khurshid. Deeproad: Gan-based metamorphic testing and input validation framework for autonomous driv- ing systems. In 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE), 2018. 2 Deepbillboard: Systematic physical-world testing of autonomous driving systems. Husheng Zhou, Wei Li, Zelun Kong, Junfeng Guo, Yuqun Zhang, Bei Yu, Lingming Zhang, Cong Liu, Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. the ACM/IEEE 42nd International Conference on Software EngineeringHusheng Zhou, Wei Li, Zelun Kong, Junfeng Guo, Yuqun Zhang, Bei Yu, Lingming Zhang, and Cong Liu. Deep- billboard: Systematic physical-world testing of autonomous driving systems. In Proceedings of the ACM/IEEE 42nd In- ternational Conference on Software Engineering, 2020. 2 Making bertha drive-an autonomous journey on a historic route. 
Julius Ziegler, Philipp Bender, Markus Schreiber, Henning Lategahn, Tobias Strauß, Christoph Stiller, Thao Dang, Uwe Franke, Nils Appenrodt, Christoph Gustav Keller, Eberhard Kaus, Ralf G Herrtwich, Clemens Rabe, David Pfeiffer, Frank Lindner, Fridtjof Stein, Friedrich Erbs, Markus Enzweiler, Carsten Knöppel, Jochen Hipp, IEEE Intelligent Transportation Systems Magazine. Mohammad Ghanaat, Markus Braun, Armin Joos, Hans Fritz, Horst Mock, Martin Hein, and Eberhard Zeeb615Julius Ziegler, Philipp Bender, Markus Schreiber, Henning Lategahn, Tobias Strauß, Christoph Stiller, Thao Dang, Uwe Franke, Nils Appenrodt, Christoph Gustav Keller, Eber- hard Kaus, Ralf G. Herrtwich, Clemens Rabe, David Pfeif- fer, Frank Lindner, Fridtjof Stein, Friedrich Erbs, Markus Enzweiler, Carsten Knöppel, Jochen Hipp, Martin Haueis, Maximilian Trepte, Carsten Brenk, Andreas Tamke, Mo- hammad Ghanaat, Markus Braun, Armin Joos, Hans Fritz, Horst Mock, Martin Hein, and Eberhard Zeeb. Making bertha drive-an autonomous journey on a historic route. IEEE Intelligent Transportation Systems Magazine, 6, 2014. 4, 6, 15
Pretrained Cost Model for Distributed Constraint Optimization Problems

Yanchen Deng ([email protected]), Shufeng Kong ([email protected]), Bo An ([email protected])
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Distributed Constraint Optimization Problems (DCOPs) are an important subclass of combinatorial optimization problems, where information and controls are distributed among multiple autonomous agents. Previously, Machine Learning (ML) has been largely applied to solve combinatorial optimization problems by learning effective heuristics. However, existing ML-based heuristic methods are often not generalizable to different search algorithms. Most importantly, these methods usually require full knowledge about the problems to be solved, which are not suitable for distributed settings where centralization is not realistic due to geographical limitations or privacy concerns. To address the generality issue, we propose a novel directed acyclic graph representation schema for DCOPs and leverage the Graph Attention Networks (GATs) to embed graph representations. Our model, GAT-PCM, is then pretrained with optimally labelled data in an offline manner, so as to construct effective heuristics to boost a broad range of DCOP algorithms where evaluating the quality of a partial assignment is critical, such as local search or backtracking search. Furthermore, to enable decentralized model inference, we propose a distributed embedding schema of GAT-PCM where each agent exchanges only embedded vectors, and show its soundness and complexity. Finally, we demonstrate the effectiveness of our model by combining it with a local search or a backtracking search algorithm. Extensive empirical evaluations indicate that the GAT-PCM-boosted algorithms significantly outperform the state-of-the-art methods in various benchmarks. Our pretrained cost model is available at
DOI: 10.1609/aaai.v36i9.21164
arXiv: 2112.04187
Introduction

As a fundamental formalism in multi-agent systems, Distributed Constraint Optimization Problems (DCOPs) (Modi et al. 2005) capture the essentials of cooperative distributed problem solving and have been successfully applied to model problems in many real-world domains like radio channel allocation (Monteiro et al. 2012), vessel navigation (Hirayama et al. 2019), and smart grid (Fioretto et al. 2017).

Over the past two decades, numerous algorithms have been proposed to solve DCOPs; they can be generally classified as complete and incomplete algorithms. Complete algorithms aim to exhaust the search space and find the optimal solution by either distributed backtracking search (Hirayama and Yokoo 1997; Modi et al. 2005; Litov and Meisels 2017; Yeoh, Felner, and Koenig 2010) or dynamic programming (Chen et al. 2020; Faltings 2005, 2007). However, complete algorithms scale poorly and are unsuitable for large real-world applications. Therefore, considerable research effort has been devoted to incomplete algorithms that trade solution quality for smaller computational overheads, including local search (Maheswaran, Pearce, and Tambe 2004; Okamoto, Zivan, and Nahon 2016; Zhang et al. 2005), belief propagation (Cohen, Galiki, and Zivan 2020; Farinelli et al. 2008; Rogers et al. 2011; Zivan et al. 2017; Chen et al. 2018), and sampling (Nguyen et al. 2019; Ottens, Dimitrakakis, and Faltings 2017).

However, existing DCOP algorithms usually rely on handcrafted heuristics which need expertise to tune for different settings. In contrast, Machine Learning (ML) based techniques learn effective heuristics for existing methods automatically (Bengio, Lodi, and Prouvost 2021; Gasse et al. 2019; Lederman et al. 2020), achieving state-of-the-art performance in various challenging problems like Mixed Integer Programming (MIP), Capacitated Vehicle Routing Problems (CVRPs), and Boolean Satisfiability Problems (SATs). Unfortunately, these methods are often not generalizable to different search algorithms. Most importantly, many of them require full knowledge about the problems to be solved, making them unsuitable for a distributed setting where centralization is not realistic due to geographical limitations or privacy concerns.

Therefore, we develop the first general-purpose ML model, named GAT-PCM, to generate effective heuristics for a wide range of DCOP algorithms, and propose a distributed embedding schema of GAT-PCM for decentralized model inference. Specifically, we make the following key contributions:

(1) We propose a novel directed tripartite graph representation based on microstructure (Jégou 1993) to encode a partially instantiated DCOP instance, and use Graph Attention Networks (GATs) (Veličković et al. 2017) to learn generalizable embeddings.

(2) Instead of generating heuristics for a particular algorithm, GAT-PCM predicts the optimal cost of a target assignment given a partial assignment, such that it can be applied to boost the performance of a wide range of DCOP algorithms where evaluating the quality of an assignment is critical. To this end, we pretrain our model on a dataset where DCOP instances are sampled from a problem distribution, partial assignments are constructed according to pseudo trees, and cost labels are generated by a complete algorithm.

(3) We propose a Distributed Embedding Schema (DES) to perform decentralized model inference without disclosing local constraints, where each agent exchanges only embedded vectors via localized communication. We also theoretically show the correctness and complexity of DES.

(4) As a case study, we develop two efficient GAT-PCM-based heuristics, for DLNS (Hoang et al. 2018) and for backtracking search for DCOPs, respectively. Specifically, by greedily constructing a solution, GAT-PCM can serve as a subroutine of DLNS to repair assignments. Besides, the predicted cost of each assignment is used as a criterion for domain ordering in backtracking search.

(5) Extensive empirical evaluations indicate that the GAT-PCM-boosted algorithms significantly outperform state-of-the-art methods on various standard benchmarks.

Related Work

There has been increasing interest in applying neural networks to solve SAT problems in recent years. Selsam et al. (2019) proposed NeuroSAT, a message passing neural network built upon LSTMs (Hochreiter and Schmidhuber 1997), to predict the satisfiability of a SAT instance and further decode the satisfying assignments. Yolcu and Póczos (2019) proposed to use Graph Neural Networks (GNNs) to encode a SAT instance and REINFORCE (Williams 1992) to learn local search heuristics. Similarly, Kurin et al. (2020) proposed to learn branching heuristics for a CDCL solver (Eén and Sörensson 2003) using GNNs and DQN (Mnih et al. 2015). Besides Boolean formulas, Xu et al. (2018) proposed to use CNNs (LeCun et al. 1989) to predict the satisfiability of a general Constraint Satisfaction Problem (CSP). However, all of these methods require total knowledge of a problem, making them unsuitable for distributed settings. Differently, our method uses an efficient distributed embedding schema to cooperatively compute the embeddings without disclosing constraints.

Very recently, two concurrent works (Deng et al. 2021; Razeghi et al. 2021) use Multi-layer Perceptrons (MLPs) to parameterize the high-dimensional data in traditional constraint reasoning techniques, e.g., Bucket Elimination (Dechter 1998). Unfortunately, they follow an online learning strategy, which forgoes the most attractive feature offered by neural networks: generalizing to new instances.
As a result, they require a significantly long runtime in order to train the MLPs. In contrast, we aim to develop an ML model for DCOPs which is pretrained in a supervised manner on large-scale datasets beforehand. When applying the model to a new instance, we need only several steps of model inference, which substantially reduces the overall overhead.

Backgrounds

In this section, we present preliminaries including DCOPs, GATs, and pretrained models.

Distributed Constraint Optimization Problems

A Distributed Constraint Optimization Problem (DCOP) (Modi et al. 2005) can be defined by a tuple ⟨I, X, D, F⟩ where I = {1, ..., |I|} is the set of agents, X = {x_1, ..., x_|X|} is the set of variables, D = {D_1, ..., D_|X|} is the set of discrete domains, and F = {f_1, ..., f_|F|} is the set of constraint functions. Each variable x_i takes a value from its domain D_i, and each function f_i : D_{i_1} × ... × D_{i_k} → R_{≥0} defines the cost for each possible combination of D_{i_1}, ..., D_{i_k}. Finally, the objective is to find a joint assignment X ∈ D_1 × ... × D_|X| such that the total cost is minimized:

$$\min_{X} \sum_{f_i \in F} f_i(X). \qquad (1)$$

For the sake of simplicity, we follow the common assumptions that each agent controls exactly one variable (i.e., |I| = |X|) and all constraints are binary (i.e., f_ij : D_i × D_j → R_{≥0}, ∀f_ij ∈ F). Therefore, the terms "agent" and "variable" can be used interchangeably, and a DCOP can be visualized by a constraint graph in which vertices and edges represent the variables and constraints of the DCOP, respectively.

Graph Attention Networks

Graph attention networks (GATs) (Veličković et al. 2017) are constructed by stacking a number of graph attention layers in which nodes are able to attend over their neighborhoods' features via the self-attention mechanism.
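As a toy illustration of such a self-attention update (a pure-Python sketch with hand-picked 2-d features, an identity weight matrix, and a sum-based scoring function; not the paper's implementation):

```python
import math

def attn_update(h, neighbors, W, a):
    """One graph-attention update: for each node i, score its neighborhood N_i
    (self included), softmax the coefficients, then aggregate W·h_j."""
    def matvec(M, v):
        return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]
    out = {}
    for i, nbrs in neighbors.items():
        # e_ij = a(W h_i, W h_j) for every neighbor j (self included)
        e = {j: a(matvec(W, h[i]), matvec(W, h[j])) for j in nbrs}
        z = sum(math.exp(v) for v in e.values())
        alpha = {j: math.exp(v) / z for j, v in e.items()}  # attention weights
        dim = len(matvec(W, h[i]))
        agg = [sum(alpha[j] * matvec(W, h[j])[k] for j in nbrs) for k in range(dim)]
        out[i] = [1.0 / (1.0 + math.exp(-x)) for x in agg]  # sigmoid nonlinearity
    return out

# Tiny 3-node path graph 0-1-2 with 2-d features.
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
W = [[1.0, 0.0], [0.0, 1.0]]          # identity stand-in for the learned matrix
a = lambda u, v: sum(u) + sum(v)      # toy stand-in for the learned scorer
h1 = attn_update(h, nbrs, W, a)
```

Here `W` and `a` are arbitrary stand-ins; in GAT-PCM they are learned parameters, and multiple such heads run in parallel.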
Specifically, the attention coefficient between every pair of neighboring nodes is computed as $e_{ij} = a(\mathbf{W}h_i, \mathbf{W}h_j)$, where $h_i, h_j \in \mathbb{R}^d$ are node features, $\mathbf{W} \in \mathbb{R}^{d \times d}$ is a weight matrix, and $a$ is a single-layer feed-forward neural network. Then the attention weight $\alpha_{ij}$ for nodes $j \in N_i$ is computed as $\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in N_i} \exp(e_{ik})}$, where $N_i$ is the neighborhood of node $v_i$ in the graph (including $v_i$). At last, node $v_i$'s feature $h_i$ is updated as $h_i = g(\sum_{j \in N_i} \alpha_{ij} \mathbf{W} h_j)$, where $g$ is some nonlinear function such as the sigmoid. Multi-head attention (Vaswani et al. 2017) is also used, where $K$ independent attention mechanisms are executed and their feature vectors are averaged as $h_i = g(\frac{1}{K}\sum_{k=1}^{K}\sum_{j \in N_i} \alpha_{ij}^{k} \mathbf{W}^{k} h_j)$.

Pretrained Models

The idea behind pretrained models is to first pretrain the models on large-scale datasets beforehand, and then apply them to downstream tasks to achieve state-of-the-art results. Besides significantly reducing the training overhead, pretrained models also offer substantial performance improvements over learning from scratch, leading to great successes in natural language processing (Brown et al. 2020; Devlin et al. 2018) and computer vision (He et al. 2016; Krizhevsky, Sutskever, and Hinton 2017; Simonyan and Zisserman 2014). In this work, we aim to develop the first effective and general-purpose pretrained model for DCOPs. In particular, we are interested in training a cost model M_θ to predict the optimal cost of a partially instantiated DCOP instance, which is a core task in many DCOP algorithms:

Figure 1: An illustration of the architecture of GAT-PCM with a small DCOP instance. A DCOP instance in (a) is first transformed into an equivalent microstructure G in (b); then G is instantiated with a partial assignment Γ by removing some nodes and edges in (c), and further compiled into a directed tripartite graph with a given target assignment (cf. the section "Graph Representations" for details) in (d). Finally, GATs are used to learn an embedding with supervised training (cf. the sections "Graph Embeddings" and "Pretraining" for details) in (e).
$$M_\theta(P, x_i = d_i; \Gamma) \to \mathbb{R}, \qquad (2)$$

where Γ is a partial assignment and x_i = d_i is a target assignment such that x_i does not appear in Γ. This way, our cost model can be applied to a wide range of DCOP algorithms where evaluating the quality of an assignment is critical.

Pretrained Cost Model for DCOPs

In this section, we elaborate on our pretrained cost model GAT-PCM. We begin by illustrating the architecture of the model in Fig. 1. We then outline the centralized pretraining procedure (Algo. 1) used to learn generalizable cost heuristics, and further propose a distributed embedding schema (Algo. 2) for decentralized model inference. Finally, we show how to use GAT-PCM to construct effective heuristics that boost DCOP algorithms.

The architecture of GAT-PCM is illustrated in Fig. 1. Recall that we aim to train a model to predict the optimal cost of a partially instantiated DCOP instance (cf. Eq. (2)); thus, we first need to embed such an instance, and the key is to build a suitable graph representation for it.

Graph Representations

Since DCOPs can be naturally represented by graphs of arbitrary sizes, we resort to GATs to learn generalizable and permutation-invariant representations of DCOPs. To this end, we first transform a DCOP instance P ≡ ⟨I, X, D, F⟩ into a microstructure representation (Jégou 1993) where each variable assignment corresponds to a vertex and the constraint cost between a pair of vertices is represented by a weighted edge (cf. Fig. 1(b)). After that, for each assignment x_i = d_i in the partial assignment Γ, we remove all the other variable-assignment vertices of x_i, except ⟨x_i, d_i⟩, together with their related edges, from the microstructure (cf. Fig. 1(c)). The reduced microstructure then represents the partially instantiated DCOP instance w.r.t. Γ.
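A minimal sketch of this reduction (illustrative data structures, not the authors' code): the microstructure is a set of assignment vertices plus weighted edges, and instantiating a partial assignment simply deletes the ruled-out vertices and their incident edges.

```python
def build_microstructure(domains, costs):
    """domains: {var: [values]}; costs: {(vi, vj): {(di, dj): cost}}.
    Vertices are (var, value) pairs; edges carry the binary constraint cost."""
    vertices = {(v, d) for v, dom in domains.items() for d in dom}
    edges = {((vi, di), (vj, dj)): c
             for (vi, vj), table in costs.items()
             for (di, dj), c in table.items()}
    return vertices, edges

def reduce_microstructure(vertices, edges, partial):
    """Keep only <x_i, d_i> for each assigned x_i; drop incident edges."""
    keep = {v for v in vertices if v[0] not in partial or partial[v[0]] == v[1]}
    kept_edges = {e: c for e, c in edges.items() if e[0] in keep and e[1] in keep}
    return keep, kept_edges

# Toy instance: two variables with domain {0, 1} and one binary constraint.
domains = {"x1": [0, 1], "x2": [0, 1]}
costs = {("x1", "x2"): {(0, 0): 2, (0, 1): 5, (1, 0): 1, (1, 1): 3}}
V, E = build_microstructure(domains, costs)
V2, E2 = reduce_microstructure(V, E, {"x1": 1})  # instantiate x1 = 1
```

After the reduction only the assignment vertex (x1, 1) survives for x1, and just the two edges consistent with it remain.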
The reduced microstructure is further compiled into a directed tripartite graph TG = ⟨(V_X, V_C, V_F), E_TG⟩ which serves as the input of our GAT-PCM model (cf. Fig. 1(d)). Specifically, for each edge in the microstructure, we insert a constraint-cost node v_c ∈ V_C which corresponds to the constraint cost between the related pair of variable assignments. For each constraint function f ∈ F, we also create a function node v_f ∈ V_F in the graph, and each related constraint-cost node is directed to v_f. Note that v_f can be regarded as the proxy of all its related constraint-cost nodes. Besides, variable-assignment nodes related to Γ are also removed from the tripartite graph, since they are subsumed by their related constraint-cost nodes.

Finally, we note that the loopy nature of the undirected microstructure may lead to missense propagation and potentially cause an over-smoothing problem. For example, ⟨x_3, R⟩ should be independent of ⟨x_3, L⟩ since they are two different assignments of the same variable. However, ⟨x_3, L⟩ could indirectly influence ⟨x_3, R⟩ through multiple paths (e.g., ⟨x_3, L⟩ − ⟨x_2, R⟩ − ⟨x_3, R⟩) when applying GATs. Therefore, we require the tripartite graph to be directed and acyclic such that each constraint-cost node or variable-assignment node has a path to the target variable-assignment node. Specifically, we determine the directions between constraint-cost nodes and variable-assignment nodes through a two-phase procedure. First, we build a Directed Acyclic Graph (DAG) for the constraint graph induced by the set of unassigned variables such that every unassigned variable has a path to the target variable. To this end, we build a pseudo tree PT (Freuder and Quinn 1985) with the target variable as its root and use PT as the DAG, where each node of PT is directed to its parent and pseudo parents.
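A pseudo tree of this kind can be obtained by a depth-first traversal of the constraint graph rooted at the target variable: tree edges give each node its parent, and every remaining constraint back to an ancestor yields a pseudo parent. A hedged sketch (assuming an undirected adjacency-list constraint graph; not the authors' exact routine):

```python
def build_pseudo_tree(adj, root):
    """Recursive DFS from `root` over the constraint graph.
    Returns {node: (parent, pseudo_parents)}; pseudo parents are ancestors
    reached by non-tree (back) edges."""
    result = {}
    visited = set()

    def dfs(u, par, ancestors):
        visited.add(u)
        # back edges from u to ancestors other than its DFS parent
        pseudo = [w for w in adj[u] if w in ancestors and w != par]
        result[u] = (par, pseudo)
        for v in adj[u]:
            if v not in visited:
                dfs(v, u, ancestors | {u})

    dfs(root, None, set())
    return result

# Triangle x1-x2-x3 plus a leaf x4 attached to x2, rooted at target x1.
adj = {"x1": ["x2", "x3"], "x2": ["x1", "x3", "x4"],
       "x3": ["x1", "x2"], "x4": ["x2"]}
pt = build_pseudo_tree(adj, "x1")
```

In the toy graph above, x3's tree parent is x2 and its constraint to x1 becomes a pseudo-parent edge, so directing every node to its parent and pseudo parents yields a DAG in which all paths lead to the root x1.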
Second, for any pair of constrained variables x_i and x_j in the DAG, where x_i is a precursor of x_j, and for any related pair of variable assignments ⟨x_i, d_i⟩ and ⟨x_j, d_j⟩, we make the node of ⟨x_i, d_i⟩ a precursor of the constraint-cost node of f_ij(d_i, d_j), and make that constraint-cost node a precursor of the node of ⟨x_j, d_j⟩ in the tripartite graph. Note that constraint-cost nodes related to a unary function are set to be precursors of their corresponding variable-assignment nodes. As for space complexity, given an instance with |I| variables and a maximum domain size of d, the directed acyclic tripartite graph has O(d|I|) variable-assignment nodes, O(|I|^2) function nodes, and O(d^2 |I|^2) constraint-cost nodes.

Graph Embeddings

Given a directed tripartite graph representation, we use GATs to learn an embedding with supervised training (cf. Fig. 1(e)). Each node v_i has a four-dimensional initial feature vector h_i^(0) ∈ H^(0), where the first three elements are the one-hot encoding of the node type (variable-assignment, constraint-cost, or function node) and the last element is set to the constraint cost of v_i if it is a constraint-cost node, and 0 otherwise. The initial feature matrix H^(0) is then embedded through T layers of the GAT. Formally,

$$H^{(t)} = M_{\theta,(t)}(H^{(t-1)}), \quad t = 1, \dots, T, \qquad (3)$$

where H^(t) is the embedding at the t-th timestep and M_θ,(t) is the t-th layer of the GAT. Finally, given a target variable assignment x_m = d_m and a partial assignment Γ, we predict the optimal cost of the partially instantiated problem instance induced by Γ ∪ {x_m = d_m} based on the embedding of the node v_m = ⟨x_m, d_m⟩ ∈ V_X and the accumulated embedding of all function nodes of the tripartite graph, as follows:

$$\hat{c}_m = M_{\theta,(T+1)}\Big(h_m^{(T)} \oplus \sum_{v_i \in V_F} h_i^{(T)}\Big), \qquad (4)$$

where M_θ,(T+1) is a fully-connected layer and ⊕ is the concatenation operation.
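The initial featurization and the readout of Eq. (4) can be sketched as follows (plain-Python stand-ins; the real model replaces `fc` with a trained fully-connected layer):

```python
# One-hot node-type prefixes for the 4-d initial features.
TYPES = {"var": [1, 0, 0], "cost": [0, 1, 0], "func": [0, 0, 1]}

def init_feature(node_type, cost=0.0):
    """First three elements: one-hot node type; last element: constraint cost
    (non-zero only for constraint-cost nodes)."""
    return TYPES[node_type] + [cost if node_type == "cost" else 0.0]

def readout(h_target, func_embeddings, fc):
    """Eq. (4): concatenate the target node's embedding with the element-wise
    sum of all function-node embeddings, then apply a final layer `fc`."""
    acc = [sum(col) for col in zip(*func_embeddings)]
    return fc(h_target + acc)  # list concatenation stands in for the ⊕ operator

h_m = [0.2, 0.7]                  # embedding of <x_m, d_m> after T layers
funcs = [[0.1, 0.3], [0.4, 0.2]]  # embeddings of the function nodes
fc = lambda v: sum(v)             # toy stand-in for the trained layer
c_hat = readout(h_m, funcs, fc)
```

The summation over function nodes keeps the readout size fixed regardless of how many constraints the instance has, which is what lets one pretrained head generalize across problem sizes.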
Note that, by our construction of the tripartite graph, function nodes are the proxies of all constraint-cost nodes, and all the other variable-assignment nodes have been directed to the target variable-assignment node. Therefore, we do not need to include in Eq. (4) the embeddings of constraint-cost nodes, or of variable-assignment nodes other than the target variable-assignment node.

Pretraining

Algorithm 1 sketches the training procedure. For each epoch, we first generate labelled data (i.e., partial assignments, target assignments, and the corresponding optimal costs) in phase I, and then train our model in phase II.

Algorithm 1: Offline pretraining procedure
Require: number of training epochs N, number of training iterations K, problem distribution P, optimal DCOP algorithm A, capacitated FIFO buffer B
 1: for n = 1, ..., N do
    // Phase I: generating labelled data
 2:   P ≡ ⟨I, X, D, F⟩ ∼ P; PT ← build a pseudo tree for P
 3:   for all x_i ∈ X do
 4:     Sep(x_i) ← ancestors connecting x_i and its descendants in PT
 5:     for all contexts Γ_i ∈ Π_{x_j ∈ Sep(x_i)} D_j do
 6:       for all d_i ∈ D_i do
 7:         P' ← REDUCE(P, Γ_i, x_i = d_i)
 8:         c* ← A(P'); B ← B ∪ {⟨P', Γ_i, x_i = d_i, c*⟩}
    // Phase II: training the model
 9:   for k = 1, ..., K do
10:     B' ← sample a batch of data from B
11:     train the model M_θ to minimize Eq. (5)
12: return M_θ

Specifically, we first sample a DCOP instance P from the problem distribution P. For each target variable x_i, instead of randomly generating partial assignments, we build a pseudo tree PT and use the contexts of x_i w.r.t. PT as partial assignments (lines 2-5). In this way, we avoid redundant partial assignments by focusing only on the variables that are constrained with x_i or its descendants. After obtaining the subproblem rooted at x_i (cf. procedure REDUCE), we apply any off-the-shelf optimal DCOP algorithm A to solve it and obtain the optimal cost c* (lines 6-8). Each tuple of partial assignment, target assignment, optimal cost, and problem instance is stored in a capacitated FIFO buffer B.
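The capacitated FIFO buffer and the uniform batch sampling can be sketched with the standard library; the labelled tuples below are toy stand-ins for real phase-I output:

```python
import random
from collections import deque

# A capacitated FIFO buffer: once full, appending evicts the oldest tuple,
# so training always sees the most recently generated labels.
CAPACITY = 3
buffer = deque(maxlen=CAPACITY)

# Toy labelled tuples <P, Γ, x_i = d_i, c*>: (instance id, partial assignment,
# target assignment, optimal cost).
labels = [
    ("P0", {"x2": 0}, ("x1", 0), 4.0),
    ("P0", {"x2": 0}, ("x1", 1), 2.0),
    ("P1", {},        ("x3", 0), 7.0),
    ("P1", {},        ("x3", 1), 5.0),
]
for t in labels:
    buffer.append(t)          # the first tuple is evicted once capacity is hit

batch = random.sample(list(buffer), 2)  # uniform batch for phase II
```

`deque(maxlen=...)` gives the eviction behavior for free: appends beyond capacity silently discard from the opposite end.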
In phase II, we uniformly sample a batch B of data from the buffer to train our model using the mean squared error loss:

L(θ) = (1/|B|) Σ_{⟨P, Γ, x_i = d_i, c*⟩ ∈ B} (M_θ(P, x_i = d_i; Γ) − c*)².  (5)

Distributed Embedding Schema

Different from the pretraining stage, where the model has access to all the knowledge (e.g., variables, domains, constraints, etc.) about the instance to be solved, an agent in real-world scenarios can usually only be aware of its local problem due to privacy concerns and/or geographical limitations, which poses a significant challenge when applying our model to solve DCOPs. Moreover, centralized model inference could overwhelm a single agent. Therefore, we aim to develop a distributed schema for model inference in which each agent only uses its local knowledge to cooperatively compute Eq. (3) and Eq. (4). We exploit the directed and acyclic nature of our tripartite graph and propose an efficient Distributed Embedding Schema (DES) in Algorithm 2. The general idea is that each agent maintains the embeddings w.r.t. its local problem. Specifically, an agent i maintains the following components: (1) its own variable-assignment nodes and (induced) unary constraint-cost and function nodes; (2) all function nodes f_ij where x_j is a successor of x_i; and (3) all constraint-cost nodes f_ij(d_i, d_j) where x_j is a successor of x_i. Each round, the agent updates the local embeddings via a single step of model inference after receiving the embeddings from its precursors.

Algorithm 2: Distributed Embedding Schema (for agent i)
Require: trained model M_θ, precursors P_i, successors S_i, target assignment x_m = d_m, initial variable-assignment node feature h_X^(0), initial function node feature h_F^(0), one-hot encoding for constraint-cost nodes h_C^(0)
1:  When INITIALIZATION:
2:    Cache_i ← [], H_i^(0) ← empty tensor
3:    for t = 1, ..., T do Cache_i[t] ← empty map
4:    if x_i = x_m then H_i^(0) ← STACK(H_i^(0), h_X^(0))
5:    else
6:      for all d_i ∈ D_i do H_i^(0) ← STACK(H_i^(0), h_X^(0))
7:    for all j ∈ S_i do
8:      H_i^(0) ← STACK(H_i^(0), h_F^(0))
9:      for all cost values c_ij ∈ f_ij(·, ·) do
10:       H_i^(0) ← STACK(H_i^(0), h_C^(0) ⊕ c_ij)
11:   for all j ∈ P_i do
12:     for all cost values c_ji ∈ f_ji(·, ·) do
13:       H_i^(0) ← STACK(H_i^(0), h_C^(0) ⊕ c_ji)
14:   ℓ_i ← zero vector, t_i ← 1, H_i^(t_i) ← M_θ,(t_i)(H_i^(t_i − 1))
15:   send H_i^(t_i)[f_ij(·, ·)] to j, ∀j ∈ S_i
16:   if P_i = ∅ then
17:     for t_i = 2, ..., T do
18:       H_i^(t_i) ← M_θ,(t_i)(H_i^(t_i − 1))
19:       if t_i < T then
20:         send H_i^(t_i)[f_ij(·, ·)] to j, ∀j ∈ S_i
21:     ℓ_i ← Σ_{j ∈ S_i} H_i^(T)[f_ij], send ℓ_i to one j ∈ S_i
22: When RECEIVE embeddings H^(t_j) from j ∈ P_i:
23:   Cache_i[t_j][j] ← H^(t_j)
24:   if |Cache_i[t_i]| = |P_i| then
25:     for all j′ ∈ P_i do
26:       H_i^(t_i)[f_{j′i}(·, ·)] ← Cache_i[t_i][j′]
27:     t_i ← t_i + 1
28:     H_i^(t_i) ← M_θ,(t_i)(H_i^(t_i − 1))
29:     if t_i < T then
30:       send H_i^(t_i)[f_ij(·, ·)] to j′, ∀j′ ∈ S_i
31:     else
32:       ℓ_i ← Σ_{j′ ∈ S_i} H_i^(T)[f_ij′], send ℓ_i to one j′ ∈ S_i
33: When RECEIVE accumulated embedding ℓ_j from j ∈ P_i:
34:   if x_i ≠ x_m then
35:     send ℓ_j to one j′ ∈ S_i
36:   else
37:     ℓ_i ← ADD(ℓ_i, ℓ_j)
38:   if all accumulated embeddings have arrived then
39:     compute Eq. (6)

Taking the tripartite graph in Fig. 1(d) as an example, x_2 maintains embeddings for ⟨x_2, R⟩, f_12(R, R), f_12, f_23(R, R) and f_23(R, L). To update its local embeddings for ⟨x_2, R⟩, f_12(R, R) and f_12, x_2 only needs one step of model inference after receiving the latest embeddings of the constraint-cost nodes f_23(R, R) and f_23(R, L) from its precursor x_3.

Next, we give the details of the schema. First, we use the primitive STACK to concatenate the initial features of the local nodes (where we omit unary functions for simplicity) to construct the initial embeddings H_i^(0) (line 2-13). After that, agent i computes its first-round embeddings and sends the updated embeddings of constraint-cost nodes to each of its successors (line 14-15). If agent i is a source node, i.e., P_i = ∅, it directly updates the subsequent embeddings and sends the constraint-cost node embeddings at each timestep to its successors (line 16-20), since it does not need to wait for the embeddings from its precursors. Besides, the agent also sends the local accumulated function node embedding ℓ_i to one of its successors (line 21).

After receiving the constraint-cost node embeddings from its precursor j, agent i temporarily stores the embeddings in Cache_i according to the timestamp t_j (line 23). If all the precursors' constraint-cost node embeddings for the t_i-th layer have arrived, agent i updates the local embeddings with the ones stored in Cache_i[t_i] (line 24-26). Then the agent computes the embeddings H_i^(t_i + 1) and sends the up-to-date embeddings to its successors (line 27-30). If all GAT layers are exhausted, the agent computes the local accumulated function node embedding ℓ_i and sends it to one of its successors (line 31-32).
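The send/wait discipline of DES (lines 14-30) can be mimicked with plain message passing. The sketch below is our own illustration, not the paper's code: model inference is elided entirely, and only the bookkeeping of who sends which constraint-cost embedding message at which timestep is replayed, so that the message counts of Lemma 1 can be checked on a toy DAG.

```python
from collections import deque

def run_des_bookkeeping(precursors, T):
    """Replay the DES message pattern on a DAG (agent -> precursor list)
    for T GAT layers, counting how many constraint-cost embedding
    messages each agent receives from each precursor."""
    successors = {i: [] for i in precursors}
    for i, ps in precursors.items():
        for p in ps:
            successors[p].append(i)
    received = {i: {p: 0 for p in ps} for i, ps in precursors.items()}
    cache = {i: {} for i in precursors}   # agent -> timestep -> senders seen
    t_i = {i: 1 for i in precursors}
    queue = deque()
    for i in precursors:
        for j in successors[i]:
            queue.append((i, j, 1))        # first-round message (line 15)
        if not precursors[i]:              # source agents push all rounds
            for t in range(2, T):          # (lines 16-20)
                for j in successors[i]:
                    queue.append((i, j, t))
    while queue:
        src, dst, t = queue.popleft()
        received[dst][src] += 1
        cache[dst].setdefault(t, set()).add(src)           # line 23
        # advance while all precursor messages for step t_i have arrived
        while (t_i[dst] < T and
               len(cache[dst].get(t_i[dst], ())) == len(precursors[dst])):
            t_i[dst] += 1                                  # lines 27-28
            if t_i[dst] < T:                               # lines 29-30
                for j in successors[dst]:
                    queue.append((dst, j, t_i[dst]))
    return received

# A chain x3 -> x2 -> x1 with T = 4 layers: per Lemma 1, each non-source
# agent should receive exactly T - 1 = 3 messages from each precursor.
counts = run_des_bookkeeping({"x1": ["x2"], "x2": ["x3"], "x3": []}, T=4)
```

Running this on the chain yields three messages per precursor edge, matching the T − 1 count that Lemma 1 establishes for the full schema.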
After receiving an accumulated function-node embedding, agent i either directly forwards the embedding to one of its successors or adds it to its own accumulated function-node embedding, depending on whether it is the target agent (line 34-37). After receiving the accumulated embedding messages of all the other agents, the target agent m outputs the predicted optimal cost by

ĉ_m = M_θ,(T+1)(H_m^(T)[⟨x_m, d_m⟩] ⊕ ℓ_m),  (6)

where H_m^(T)[⟨x_m, d_m⟩] is the embedding of the variable-assignment node ⟨x_m, d_m⟩ in H_m^(T) (line 38-39).

We now show the soundness and complexity of DES. We first show that DES results in the same embeddings as its centralized counterpart.

Lemma 1. In DES, each agent i with P_i ≠ ∅ receives exactly T − 1 constraint-cost node embedding messages from each j ∈ P_i, one for each timestep t_j = 1, ..., T − 1.

Proof. Consider the base case where all the precursors are sources, i.e., P_j = ∅, ∀j ∈ P_i. Since such a precursor cannot receive an embedding from any other agent, each precursor j sends exactly T − 1 constraint-cost node embeddings to i, one for each timestep t_j = 1, ..., T − 1, according to lines 15 and 19-20. Now assume that the lemma holds for all j ∈ P_i with P_j ≠ ∅. By the assumption, the condition of line 24 holds for t_j = 1, ..., T − 1, and hence precursor j sends an embedding to i for each t_j = 2, ..., T − 1 (line 27-30). Together with the embedding sent in line 15, each precursor j sends T − 1 constraint-cost node embedding messages to i in total, one for each timestep t_j = 1, ..., T − 1, which concludes the lemma.

Lemma 2. For any agent i and timestep t = 1, ..., T, after performing DES, its local embeddings are the same as the ones in H^(t), i.e., H_i^(t)[⟨x_i, d_i⟩] = H^(t)[⟨x_i, d_i⟩], H_i^(t)[f_ij(d_i, d_j)] = H^(t)[f_ij(d_i, d_j)], and H_i^(t)[f_ij] = H^(t)[f_ij], ∀d_i ∈ D_i, x_j ∈ S_i, d_j ∈ D_j.

Proof. We only show the proof for variable-assignment nodes.
A similar argument applies to constraint-cost nodes and function nodes. In the first timestep, i.e., t = 1, for each node ⟨x_i, d_i⟩, Eq. (3) computes H^(1)[⟨x_i, d_i⟩] based on the initial features H^(0)[f_ij(d_i, d_j)], ∀j ∈ P_i, d_j ∈ D_j, which are the same as in DES, i.e., H_i^(1)[⟨x_i, d_i⟩] = H^(1)[⟨x_i, d_i⟩]. Assume that the lemma holds for some t ≥ 1. Before computing the embeddings for the (t + 1)-th timestep, agent i must have received the embeddings H_j^(t)[f_ij(d_i, d_j)], which equal H^(t)[f_ij(d_i, d_j)] by the assumption, from j, ∀j ∈ P_i, d_i ∈ D_i, d_j ∈ D_j (line 20, 30, 23-26, Lemma 1). Therefore, agent i computes H_i^(t+1)[⟨x_i, d_i⟩] according to H^(t)[f_ij(d_i, d_j)], ∀j ∈ P_i, d_j ∈ D_j, which is equivalent to Eq. (3). Consequently, H_i^(t+1)[⟨x_i, d_i⟩] = H^(t+1)[⟨x_i, d_i⟩], and the lemma holds by induction.

Lemma 3. For the target agent m, after performing DES, ℓ_m = Σ_{v_i ∈ V_F} h_i^(T).

Proof. We prove the lemma by showing that each agent sends exactly one accumulated embedding message w.r.t. its local function nodes to one of its successors (i.e., line 21 and 32). This is trivial for the agents without precursors, since they do not receive any message (line 28) and only send one accumulated embedding message by the end of procedure INITIALIZATION (line 21). Consider an agent i with P_i ≠ ∅. According to Lemma 1, i executes line 27-32 T − 1 times. Given the initial value of 1 (line 14), t_i will eventually equal T, implying that line 32 will be executed only once. Since it does not perform line 21, i sends exactly one accumulated embedding message w.r.t. its local function nodes. Since by construction each agent in the DAG has a path to the target agent m, all the accumulated embeddings will be forwarded to m (line 34-37). Therefore, by Lemma 2, ℓ_m = Σ_{i ∈ I} Σ_{j ∈ S_i} H_i^(T)[f_ij] = Σ_{i ∈ I} Σ_{j ∈ S_i} H^(T)[f_ij]. Note that for every f_ij ∈ F, it must be either the case that j ∈ S_i if i ≺ j, or that i ∈ S_j if j ≺ i in the DAG. Hence, ℓ_m = Σ_{i ∈ I} Σ_{j ∈ S_i} H^(T)[f_ij] = Σ_{f_ij ∈ F} H^(T)[f_ij] = Σ_{v_i ∈ V_F} h_i^(T).

Then we show the soundness of our DES as follows:

Proposition 1. DES is sound, i.e., Eq. (6) returns the same result as Eq. (4).

Proof. According to Lemma 2 and Lemma 3, by the end of DES the target agent has the same variable-assignment embedding and accumulated function node embedding as the ones computed by Eq. (3). Therefore, Eq. (6) is equivalent to Eq. (4).

Finally, we show the complexity of our DES as follows:

Proposition 2. Each agent in DES requires T steps of model inference, O(|I|d²) space, and communicates O(T|I|d²) information.

Proof. By line 14, 27-28 and Lemma 1, each agent performs model inference T times. Each agent i needs to maintain embeddings for O(d) variable-assignment nodes (line 4-6), O((|S_i| + |P_i|)d²) constraint-cost nodes (line 7, 9-13), and O(|S_i|) function nodes (line 8). Since in the worst case the agent is constrained with all the other |I| − 1 agents, i's space complexity is O(|I|d²). Finally, since for each timestep t_i = 1, ..., T − 1 agent i sends the constraint-cost node embeddings to its successors, its communication overhead is O(T|S_i|d²) = O(T|I|d²).

GAT-PCM as Heuristics

Since our model GAT-PCM predicts the optimal cost of a target assignment given a partial assignment, it can serve as a general heuristic to boost the performance of a wide range of DCOP algorithms whose core operation is to evaluate the quality of an assignment. We consider two kinds of well-known algorithms and show how our model can boost them as follows:

• Local search. A key task in local search is to find good assignments for a set of variables given the other variables' assignments. For example, in Distributed Large Neighborhood Search (DLNS) (Hoang et al. 2018), in each round a subroutine is called to solve a subproblem induced by the destroyed variables (also called the repair phase).
Currently, DPOP (Petcu and Faltings 2005) is used to solve a tree-structured relaxation of the subproblem, which ignores a large proportion of constraints and thus leads to poor performance on general problems. Instead, we use our GAT-PCM to solve the subproblem without relaxation (i.e., all constraints between all pairs of destroyed variables are included), since the overhead is polynomial in the number of agents (cf. Proposition 2). Specifically, for each connected subproblem, we assume a variable ordering (e.g., lexicographical ordering, pseudo tree). Then we greedily assign each variable according to the costs predicted by GAT-PCM, i.e., we select the assignment with the smallest predicted cost for each variable.

• Backtracking search. Domain ordering is another important task in backtracking search for DCOPs. Previously, domain ordering utilized local information only, e.g., prioritizing the assignment with the minimum conflicts w.r.t. each unassigned variable (Frost and Dechter 1995) or querying a lower bound lookup table. Our GAT-PCM, on the other hand, offers a more general and systematic way of domain ordering. Specifically, for an unassigned variable, we can query GAT-PCM for the optimal cost of each assignment under the current partial assignment and give priority to the one with the minimum predicted cost.

Empirical Evaluation

In this section, we perform extensive empirical studies. We begin by introducing the details of the experiments and the pretraining stage. Then we analyze the results and demonstrate the capability of our GAT-PCM to boost DCOP algorithms.

Benchmarks

We consider four types of benchmarks in our experiments, i.e., random DCOPs, scale-free networks, grid networks, and weighted graph coloring problems. For random DCOPs and weighted graph coloring problems, given a density p_1 ∈ (0, 1), we randomly create a constraint for each pair of variables with probability p_1.
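The random binary-DCOP benchmarks described above are straightforward to generate; the sketch below is our own illustration (not the authors' generator): it draws a constraint for each variable pair with probability p_1 and fills each binary constraint with integer costs drawn uniformly from [0, 100].

```python
import random
from itertools import combinations

def random_dcop(n_vars, domain_size, p1, seed=0):
    """Generate a random binary DCOP: each pair of variables is
    constrained with probability p1, and every pair of
    variable-assignments gets a cost uniform in [0, 100]."""
    rng = random.Random(seed)
    domains = {i: list(range(domain_size)) for i in range(n_vars)}
    constraints = {}
    for i, j in combinations(range(n_vars), 2):
        if rng.random() < p1:
            constraints[(i, j)] = {
                (di, dj): rng.randint(0, 100)
                for di in domains[i] for dj in domains[j]
            }
    return domains, constraints

domains, constraints = random_dcop(n_vars=10, domain_size=3, p1=0.25, seed=42)
```

A weighted graph coloring instance would differ only in the cost table: a uniformly sampled cost when the two assignments are equal, and zero otherwise.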
For scale-free networks, we use the BA model (Barabási and Albert 1999) with parameters m_0 and m_1 to generate the constraint relations: starting from a connected graph with m_0 vertices, in each iteration a new vertex is connected to m_1 existing vertices with a probability proportional to the degree of each existing vertex. Besides, the variables in a grid network are arranged into a 2D grid, where each variable is constrained with its four neighboring variables except for the ones located at the boundary. Finally, for each constraint in random DCOPs, scale-free networks and grid networks, we uniformly sample a cost from [0, 100] for each pair of variable-assignments. In contrast, a constraint of a weighted graph coloring problem incurs a cost, also uniformly sampled from [0, 100], only if the two constrained variables have the same assignment.

Baselines

We consider four types of baselines: local search, belief propagation, region-optimal methods, and large neighborhood search. We use DSA (Zhang et al. 2005) with p = 0.8 and GDBA (Okamoto, Zivan, and Nahon 2016) with ⟨M, NM, T⟩ as two representative local search methods, Max-sum ADVP (Zivan et al. 2017) as a representative belief propagation method, RODA (Grinshpoun et al. 2019) with t = 2, k = 3 as a representative region-optimal method, and T-DLNS (Hoang et al. 2018) with destroy probability p = 0.5 as a representative large neighborhood search method. All experiments are conducted on an Intel i9-9820X workstation with GeForce RTX 3090 GPUs. For each data point, we average the results over 50 instances and report the standard error of the mean (SEM) as confidence intervals.

Implementation and Hyperparameters

Our GAT-PCM model has four GAT layers (i.e., T = 4). Each of the first three layers has 8 output channels and 8 heads of attention, while the last layer has 16 output channels and 4 heads of attention.
Each GAT layer uses ELU (Clevert, Unterthiner, and Hochreiter 2016) as the activation function. Finally, we use DPOP (Petcu and Faltings 2005) to generate the optimal cost labels. For the hyperparameters, we set the batch size and the number of training epochs to 64 and 5000, respectively. Our model was implemented with the PyTorch Geometric framework (Fey and Lenssen 2019) and trained with the Adam optimizer (Kingma and Ba 2014) using a learning rate of 0.0001 and a 5 × 10⁻⁵ weight decay ratio.

Results

In the first set of experiments, we evaluate the performance of our GAT-PCM when combined with the DLNS framework, which we name GAT-PCM-DLNS, in solving large-scale DCOPs. We run GAT-PCM-DLNS with a destroy probability of 0.2 for 1000 iterations and report the normalized anytime cost (i.e., the best solution cost divided by the number of constraints) as the result. Fig. 2 presents the results on solution quality, where all baselines run for the same simulated runtime as GAT-PCM-DLNS. It can be seen that DSA explores low-quality solutions since it iteratively approaches a Nash equilibrium, resulting in 1-opt solutions similar to Max-sum ADVP. GDBA improves by increasing the weights when agents get trapped in quasi-local minima. RODA finds solutions better than 1-opt by coordinating the variables in coalitions of size 3. T-DLNS, on the other hand, tries to optimize by optimally solving a tree-structured relaxation of the subproblem induced by the destroyed variables in each round. However, T-DLNS can ignore a large proportion of constraints and therefore performs poorly when solving complex problems (e.g., the problems with more than 200 variables). In contrast, our GAT-PCM-DLNS solves the induced subproblem without relaxation, leading to a significant improvement over the state of the art when solving unstructured problems (i.e., Fig. 2(a-b)). Interestingly, T-DLNS achieves the best performance when solving small grid networks.
That is because the variables in these problems are under-constrained and T-DLNS only needs to drop a few edges to obtain a tree-structured problem. In fact, the average degree in these problems is less than 3.8. However, our GAT-PCM-DLNS still outperforms T-DLNS when the grid size is larger than 14.

We display the average memory footprint per agent of GAT-PCM-DLNS in the first set of experiments in Fig. 3, where "Conf #1" to "Conf #5" refer to the growing complexity of each experiment. Specifically, the memory overhead of each agent consists of two parts, i.e., storing the pretrained model and the local embeddings. The former consumes about 60KB of memory, while the latter requires space proportional to the number of agents and the size of each constraint matrix (cf. Prop. 2). It can be concluded that our method has a modest memory requirement and scales well to large instances in various settings. In particular, our method has a (nearly) constant memory footprint when solving grid network problems, since each agent is constrained with at most four other agents regardless of the grid size.

To investigate how fast our GAT-PCM-DLNS finds a good solution, we conduct a convergence analysis which measures the performance in terms of simulated time (Sultanik, Lass, and Regli 2008) on the problems with |I| = 1000, p_1 = 0.005 and d = 3, and present the results in Fig. 4. It can be seen that local search algorithms including DSA and GDBA quickly converge to a poor local optimum, while RODA finds a better solution in the first three seconds. T-DLNS slowly improves the solution but is strictly dominated by RODA. In contrast, our GAT-PCM-DLNS improves much more steadily, outperforming all baselines after 18 seconds.

Finally, we demonstrate the merit of our GAT-PCM in accelerating backtracking search for random DCOPs with p_1 = 0.25 and scale-free networks with |I| = 18, m_0 = 5 by conducting a case study on the symmetric version of PT-ISABB (Deng et al. 2019) (referred to as PT-ISBB), and present the results in Fig. 5. Specifically, we set the memory budget k = 2 and compare the simulated runtime of PT-ISBB using three domain ordering generation techniques: alphabetical ordering, lower bound lookup tables (PT-ISBB-LB), and our GAT-PCM (PT-ISBB-GAT-PCM). For PT-ISBB-GAT-PCM, we only perform domain ordering for the variables in the first three levels of the pseudo tree. It can be observed that backtracking search with alphabetic domain ordering performs poorly and is dominated by the one with the lower-bound-induced domain ordering in most cases. Notably, when solving the problems with 22 variables, PT-ISBB-LB exhibits the worst performance, because the lower bounds generated by approximated inference are not tight in complex problems, and hence the induced domain ordering may not prioritize promising assignments properly. On the other hand, our GAT-PCM-powered backtracking search uses the predicted total cost of a subproblem as the criterion, resulting in a more efficient domain ordering and thus achieving the best results in solving complex problems.

Conclusion

In this paper, we present GAT-PCM, the first effective and general-purpose deep pretrained model for DCOPs. We propose a novel directed acyclic graph representation schema for DCOPs and leverage Graph Attention Networks (GATs) to embed our graph representations. Instead of generating heuristics for a particular algorithm, we train the model with optimally labelled data to predict the optimal cost of a target assignment given a partial assignment, such that GAT-PCM can be applied to boost the performance of a wide range of DCOP algorithms where evaluating the quality of an assignment is critical. To enable efficient graph embedding in a distributed environment, we propose DES to perform decentralized model inference without disclosing local constraints, where each agent exchanges only the embedded vectors via localized communication.
Finally, we develop several heuristics based on GAT-PCM to improve local search and backtracking search algorithms. Extensive empirical evaluations confirm the superiority of GAT-PCM-based algorithms over the state of the art. In the future, we plan to extend GAT-PCM to deal with problems with higher-arity constraints and hard constraints. Besides, since agents in DES exchange embedded vectors instead of constraint costs, it is promising to extend our methods to an asymmetric setting (Grinshpoun et al. 2013).

Figure 2: Solution quality comparisons: (a) random DCOPs; (b) weighted graph coloring problems; (c) grid networks.
Figure 3: Memory footprint of GAT-PCM-DLNS.
Figure 4: Convergence analysis.
Figure 5: Runtime comparison on the problems with d = 5.
In the pretraining stage, we consider a random DCOP distribution with |I| ∈ [15, 30], d ∈ [3, 15] and p_1 ∈ [0.1, 0.4].

Acknowledgement

This research was supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2019-0013), National Satellite of Excellence in Trustworthy Software Systems (Award No: NSOE-TSS2019-01), and NTU.

References

Barabási, A.-L.; and Albert, R. 1999. Emergence of scaling in random networks. Science, 286(5439): 509-512.
Bengio, Y.; Lodi, A.; and Prouvost, A. 2021. Machine learning for combinatorial optimization: A methodological tour d'horizon. European Journal of Operational Research, 290(2): 405-421.
Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Chen, Z.; Deng, Y.; Wu, T.; and He, Z. 2018. A class of iterative refined Max-sum algorithms via non-consecutive value propagation strategies. Autonomous Agents and Multi-Agent Systems, 32(6): 822-860.
Chen, Z.; Zhang, W.; Deng, Y.; Chen, D.; and Li, Q. 2020. RMB-DPOP: Refining MB-DPOP by reducing redundant inference. In AAMAS, 249-257.
Clevert, D.; Unterthiner, T.; and Hochreiter, S. 2016. Fast and accurate deep network learning by exponential linear units (ELUs). In ICLR.
Cohen, L.; Galiki, R.; and Zivan, R. 2020. Governing convergence of Max-sum on DCOPs through damping and splitting. Artificial Intelligence, 279: 103212.
Dechter, R. 1998. Bucket elimination: A unifying framework for probabilistic inference. In Learning in Graphical Models, volume 89 of NATO ASI Series, 75-104. Springer.
Deng, Y.; Chen, Z.; Chen, D.; Jiang, X.; and Li, Q. 2019. PT-ISABB: A hybrid tree-based complete algorithm to solve asymmetric distributed constraint optimization problems. In AAMAS, 1506-1514.
Deng, Y.; Yu, R.; Wang, X.; and An, B. 2021. Neural regret-matching for distributed constraint optimization problems. In IJCAI, 146-153.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Eén, N.; and Sörensson, N. 2003. An extensible SAT-solver. In International Conference on Theory and Applications of Satisfiability Testing, 502-518.
Farinelli, A.; Rogers, A.; Petcu, A.; and Jennings, N. R. 2008. Decentralised coordination of low-power embedded devices using the Max-sum algorithm. In AAMAS, 639-646.
Fey, M.; and Lenssen, J. E. 2019. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Fioretto, F.; Yeoh, W.; Pontelli, E.; Ma, Y.; and Ranade, S. J. 2017. A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. In AAMAS, 999-1007.
Freuder, E. C.; and Quinn, M. J. 1985. Taking advantage of stable sets of variables in constraint satisfaction problems. In IJCAI, 1076-1078.
Frost, D.; and Dechter, R. 1995. Look-ahead value ordering for constraint satisfaction problems. In IJCAI, 572-578.
Gasse, M.; Chételat, D.; Ferroni, N.; Charlin, L.; and Lodi, A. 2019. Exact combinatorial optimization with graph convolutional neural networks. In NeurIPS, 15554-15566.
Grinshpoun, T.; Grubshtein, A.; Zivan, R.; Netzer, A.; and Meisels, A. 2013. Asymmetric distributed constraint optimization problems. Journal of Artificial Intelligence Research, 47: 613-647.
Grinshpoun, T.; Tassa, T.; Levit, V.; and Zivan, R. 2019. Privacy preserving region optimal algorithms for symmetric and asymmetric DCOPs. Artificial Intelligence, 266: 27-50.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR, 770-778.
Hirayama, K.; Miyake, K.; Shiotani, T.; and Okimoto, T. 2019. DSSA+: Distributed collision avoidance algorithm in an environment where both course and speed changes are allowed. International Journal on Marine Navigation and Safety of Sea Transportation, 13(1): 117-123.
Hirayama, K.; and Yokoo, M. 1997. Distributed partial constraint satisfaction problem. In CP, 222-236.
Hoang, K. D.; Fioretto, F.; Yeoh, W.; Pontelli, E.; and Zivan, R. 2018. A large neighboring search schema for multi-agent optimization. In CP, 688-706.
Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural Computation, 9(8): 1735-1780.
Jégou, P. 1993. Decomposition of domains based on the micro-structure of finite constraint-satisfaction problems. In AAAI, 731-736.
Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6): 84-90.
Kurin, V.; Godil, S.; Whiteson, S.; and Catanzaro, B. 2020. Can Q-learning with graph networks learn a generalizable branching heuristic for a SAT solver? In NeurIPS, 9608-9621.
LeCun, Y.; Boser, B. E.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W. E.; and Jackel, L. D. 1989. Handwritten digit recognition with a back-propagation network. In NeurIPS, 396-404.
Lederman, G.; Rabe, M.; Seshia, S.; and Lee, E. A. 2020. Learning heuristics for quantified Boolean formulas through reinforcement learning. In ICLR.
Li, G.; Muller, M.; Thabet, A.; and Ghanem, B. 2019. DeepGCNs: Can GCNs go as deep as CNNs? In ICCV, 9267-9276.
Litov, O.; and Meisels, A. 2017. Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence, 252: 83-99.
Maheswaran, R. T.; Pearce, J. P.; and Tambe, M. 2004. Distributed algorithms for DCOP: A graphical-game-based approach. In ISCA PDCS, 432-439.
Mnih, V.; Kavukcuoglu, K.; Silver, D.; et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540): 529-533.
Modi, P. J.; Shen, W.-M.; Tambe, M.; and Yokoo, M. 2005. ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence, 161(1-2): 149-180.
Monteiro, T. L.; Pujolle, G.; Pellenz, M. E.; Penna, M. C.; and Souza, R. D. 2012. A multi-agent approach to optimal channel assignment in WLANs. In WCNC, 2637-2642.
Nguyen, D. T.; Yeoh, W.; Lau, H. C.; and Zivan, R. 2019. Distributed Gibbs: A linear-space sampling-based DCOP algorithm. Journal of Artificial Intelligence Research, 64: 705-748.
Okamoto, S.; Zivan, R.; and Nahon, A. 2016. Distributed breakout: Beyond satisfaction. In IJCAI, 447-453.
Ottens, B.; Dimitrakakis, C.; and Faltings, B. 2017. DUCT: An upper confidence bound approach to distributed constraint optimization problems. ACM Transactions on Intelligent Systems and Technology, 8(5): 69:1-69:27.
Petcu, A.; and Faltings, B. 2005. A scalable method for multiagent constraint optimization. In IJCAI, 266-271.
Petcu, A.; and Faltings, B. 2007. MB-DPOP: A new memory-bounded algorithm for distributed optimization. In IJCAI, 1452-1457.
Razeghi, Y.; Kask, K.; Lu, Y.; Baldi, P.; Agarwal, S.; and Dechter, R. 2021. Deep Bucket Elimination. In IJCAI, 4235-4242.
Rogers, A.; Farinelli, A.; Stranders, R.; and Jennings, N. R. 2011. Bounded approximate decentralised coordination via the max-sum algorithm. Artificial Intelligence, 175(2): 730-759.
Selsam, D.; Lamm, M.; Bünz, B.; Liang, P.; de Moura, L.; and Dill, D. L. 2019. Learning a SAT solver from single-bit supervision. In ICLR.
Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Sultanik, E. A.; Lass, R. N.; and Regli, W. C. 2008. DCOPolis: A framework for simulating and deploying distributed constraint reasoning algorithms. In AAMAS, 1667-1668.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; and Bengio, Y. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3): 229-256.
Xu, H.; Koenig, S.; and Kumar, T. S. 2018. Towards effective deep learning for constraint satisfaction problems. In CP, 588-597.
Yeoh, W.; Felner, A.; and Koenig, S. 2010. BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research, 38: 85-133.
Yolcu, E.; and Póczos, B. 2019. Learning local search heuristics for Boolean satisfiability. In NeurIPS, 7990-8001.
Zhang, W.; Wang, G.; Xing, Z.; and Wittenburg, L. 2005. Distributed stochastic search and distributed breakout: Properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence, 161(1-2): 55-87.
Zivan, R.; Parash, T.; Cohen, L.; Peled, H.; and Okamoto, S. 2017. Balancing exploration and exploitation in incomplete min/max-sum inference for distributed constraint optimization. Autonomous Agents and Multi-Agent Systems, 31(5): 1165-1207.
[]
[ "Hole propagation in the orbital compass models" ]
[ "Wojciech Brzezicki \nMarian Smoluchowski Institute of Physics\nJagellonian University\nReymonta 4PL-30059KrakówPoland\n\nDipartimento di Fisica 'E.R. Caianiello'\nUniversita di Salerno\nI-84084Fisciano (Salerno)Italy\n", "Maria Daghofer \nInstitut für Theoretische Festkörperphysik\nIFW DresdenD-01171DresdenGermany\n", "Andrzej M Oleś \nMarian Smoluchowski Institute of Physics\nJagellonian University\nReymonta 4PL-30059KrakówPoland\n\nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 1D-70569StuttgartGermany\n" ]
[ "Marian Smoluchowski Institute of Physics\nJagellonian University\nReymonta 4PL-30059KrakówPoland", "Dipartimento di Fisica 'E.R. Caianiello'\nUniversita di Salerno\nI-84084Fisciano (Salerno)Italy", "Institut für Theoretische Festkörperphysik\nIFW DresdenD-01171DresdenGermany", "Marian Smoluchowski Institute of Physics\nJagellonian University\nReymonta 4PL-30059KrakówPoland", "Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 1D-70569StuttgartGermany" ]
[]
We explore the propagation of a single hole in the generalized quantum compass model, which interpolates between the fully isotropic antiferromagnetic (AF) phase of the Ising model and the nematic order of decoupled AF chains for frustrated compass interactions. We observe coherent hole motion due to either interorbital hopping or three-site effective hopping, while quantum spin fluctuations in the ordered background do not play any role.
10.12693/aphyspola.127.263
[ "https://arxiv.org/pdf/1405.5322v1.pdf" ]
118,570,897
1405.5322
acce87c9af0a0399f3a9716eb55cef09204bb783
Hole propagation in the orbital compass models

arXiv:1405.5322v1 [cond-mat.str-el], 21 May 2014 (Dated: 15 May, 2014)

Wojciech Brzezicki (Marian Smoluchowski Institute of Physics, Jagellonian University, Reymonta 4, PL-30059 Kraków, Poland; Dipartimento di Fisica 'E.R. Caianiello', Universita di Salerno, I-84084 Fisciano (Salerno), Italy), Maria Daghofer (Institut für Theoretische Festkörperphysik, IFW Dresden, D-01171 Dresden, Germany), and Andrzej M. Oleś (Marian Smoluchowski Institute of Physics, Jagellonian University, Reymonta 4, PL-30059 Kraków, Poland; Max-Planck-Institut für Festkörperforschung, Heisenbergstrasse 1, D-70569 Stuttgart, Germany)

PACS numbers: 75.10.Jm, 03.65.Ud, 64.70.Tg, 75.25.Dk

We explore the propagation of a single hole in the generalized quantum compass model, which interpolates between the fully isotropic antiferromagnetic (AF) phase of the Ising model and the nematic order of decoupled AF chains for frustrated compass interactions. We observe coherent hole motion due to either interorbital hopping or three-site effective hopping, while quantum spin fluctuations in the ordered background do not play any role.

Properties of strongly correlated transition metal oxides are determined by effective interactions in the form of spin-orbital superexchange, introduced first long ago by Kugel and Khomskii [1]. The spin-orbital interactions have enhanced quantum fluctuations [2] and are characterized by frustration and entanglement [3]. This leads, for instance, to rather cute topological order in an exactly solvable SU(2)⊗XY ring [4]. To understand better the consequences of directional orbital interactions, it is of interest to investigate doped orbital systems [5].

Probably the simplest model that describes orbital-like superexchange is the two-dimensional (2D) orbital compass model (OCM) [6]. The so-called generalized compass model (GCM) introduced later [7] provides a possibility to investigate a second-order quantum phase transition (QPT) between the Ising model and the generic OCM when frustration increases. The orbital anisotropies are captured in the OCM with different spin components coupled along each bond, J_x σ^x_i σ^x_j and J_z σ^z_i σ^z_j along the a and b axes of the square lattice. Recent interest in this model is motivated by its interdisciplinary character, as it plays a role in a variety of phenomena beyond the correlated oxides: (i) it is dual to recently studied models of p + ip superconducting arrays [8], (ii) it provides an effective description for Josephson arrays of protected qubits [9] realized in recent experiments [10], and (iii) it could also describe polar molecules in optical lattices [11], as well as nitrogen-vacancy centers in a diamond matrix [12].

An exact solution of the one-dimensional (1D) generalized variant of the compass model [13] gives a QPT at J_x = J_z. A similar QCP occurs in the 2D OCM between two types of 1D nematic order: for J_x > J_z (J_x < J_z), antiferromagnetic (AF) chains form along a (b) that are, in the thermodynamic limit, decoupled along b (a). It has been shown that the symmetry allows one to reduce the original L×L compass cluster to a smaller (L−1)×(L−1) one with modified interactions, which made it possible to obtain the full exact spectra and the specific heat for larger clusters [14]. Electron itinerancy has been addressed in the weak-coupling limit at temperatures above the ordering transition [15].

In this paper we discuss the motion of a single hole in the ordered phases of the GCM, including the nematic phases of the simple OCM. Following [16], we obtain the spectral functions of the itinerant models that reproduce the GCM in the strong-coupling regime. A great advantage of using the itinerant models is that a variational cluster approach (VCA) can be used to obtain unbiased results in both the weak- and strong-coupling regimes. The VCA was introduced to study strongly correlated electrons in models with local interactions [17,18]. Since the interactions here are Ising-like, quantum fluctuations are suppressed and the paradigm for hole propagation known from the spin t-J model no longer applies. This happens for the t_2g electrons in the ab planes of Sr2VO4, where instead holes move mostly via three-site terms [19,20]. In the case of e_g electrons, interorbital hopping delocalizes holes within ferromagnetic LaMnO3 planes [21]. In the present case, hole propagation occurs through quantum processes involving the hole itself, rather than those of the ordered background.

The 2D GCM with AF interactions (J > 0) is
$$H^J_\theta = J \sum_i \left\{ \tilde\sigma_i(\theta)\,\tilde\sigma_{i+a}(\theta) + \tilde\sigma_i(-\theta)\,\tilde\sigma_{i+b}(-\theta) \right\}, \qquad (1)$$
with σ̃_i(θ) being the composed pseudospins,
$$\tilde\sigma_i(\theta) = \cos(\theta/2)\,\sigma^x_i + \sin(\theta/2)\,\sigma^z_i, \qquad (2)$$
interpolating between σ^x_i for θ = 0 and (σ^x_i ± σ^z_i)/√2 for θ = π/2; {σ^x_i, σ^z_i} are S = 1/2 pseudospin operators.
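As a quick numerical illustration of Eqs. (1) and (2), the composed pseudospins and the GCM Hamiltonian can be assembled by exact diagonalization on a tiny cluster. This sketch is not from the paper: the 2×2 open cluster, the site ordering, and the NumPy representation are our own illustrative choices.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def sigma_tilde(theta):
    # Composed pseudospin of Eq. (2): cos(theta/2) sigma^x + sin(theta/2) sigma^z
    return np.cos(theta / 2) * sx + np.sin(theta / 2) * sz

def site_op(op, site, n_sites):
    # Embed a single-site operator into the full 2^n-dimensional Hilbert space
    ops = [op if i == site else I2 for i in range(n_sites)]
    return reduce(np.kron, ops)

def gcm_hamiltonian(theta, Lx=2, Ly=2, J=1.0):
    # Eq. (1) on a small open Lx x Ly cluster; site index i = x + Lx*y
    n = Lx * Ly
    H = np.zeros((2**n, 2**n))
    for y in range(Ly):
        for x in range(Lx):
            i = x + Lx * y
            if x + 1 < Lx:  # bond along a couples sigma(theta) with sigma(theta)
                j = (x + 1) + Lx * y
                H += J * site_op(sigma_tilde(theta), i, n) @ site_op(sigma_tilde(theta), j, n)
            if y + 1 < Ly:  # bond along b couples sigma(-theta) with sigma(-theta)
                j = x + Lx * (y + 1)
                H += J * site_op(sigma_tilde(-theta), i, n) @ site_op(sigma_tilde(-theta), j, n)
    return H

H_ising = gcm_hamiltonian(0.0)           # theta = 0: classical Ising limit
H_compass = gcm_hamiltonian(np.pi / 2)   # theta = pi/2: compass limit
assert np.allclose(sigma_tilde(0.0), sx)
assert np.allclose(H_ising, H_ising.T)   # real symmetric, i.e. Hermitian
```

At θ = 0 the model reduces to the Ising limit; on this four-bond open 2×2 cluster the AF ground-state energy is −4J, which provides a simple sanity check of the construction.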
{i+a(b)} is a shorthand notation for the nearest neighbor of site i along the axis a(b). For θ = 0 the GCM corresponds to the classical Ising model with S^x_i components coupled on all the bonds. The opposite limit θ = π/2 describes the OCM in a rotated spin space: bonds along a couple the spin component (S^x_i + S^z_i) and bonds along b the orthogonal one (S^x_i − S^z_i). For 0 < θ < π/2, the GCM interpolates between the Ising and compass models [7]. The rotation of orbital operators (2) provides a convenient way to detect the phase transition between 2D Ising and nematic compass order: in the former, moments lie along x, while in the latter they lie along either x + z (in the following identified with lattice axis a) or x − z.

The GCM follows from the two-orbital Hubbard model [16],
$$H_{t-U} = t \sum_i \sum_{\mu,\nu=\alpha,\beta} \left( A_{\mu\nu}\, c^\dagger_{i,\mu} c_{i+a,\nu} + B_{\mu\nu}\, c^\dagger_{i,\mu} c_{i+b,\nu} + {\rm H.c.} \right) + U \sum_i n_{i,\alpha} n_{i,\beta}, \qquad (3)$$
at large U and half filling, where A_{μν} and B_{μν} are hopping matrices in the a, b directions between orbitals α and β. These can be obtained using standard perturbation theory for two neighboring sites as
$$A_\theta = \frac{1}{\sqrt{2}} \begin{pmatrix} 1+\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \\ \cos\frac{\theta}{2} & 1-\sin\frac{\theta}{2} \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ 1+\tilde\sigma(\theta) \right], \qquad (4)$$
$$B_\theta = \frac{1}{\sqrt{2}} \begin{pmatrix} 1+\sin\frac{\theta}{2} & -\cos\frac{\theta}{2} \\ -\cos\frac{\theta}{2} & 1-\sin\frac{\theta}{2} \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ 1-\tilde\sigma(-\theta) \right]. \qquad (5)$$
The pseudospins {σ^x_i, σ^z_i} are quadratic forms of the fermions c†_i, i.e., σ^z_i = n_{i,α} − n_{i,β} and σ^x_i = c†_{i,α} c_{i,β} + c†_{i,β} c_{i,α}, and the superexchange is J = t²/U. In the small-U regime the properties of the itinerant model of Eq. (3) can be well described by a mean-field (MF) approach [16].

Let us first discuss in more depth the somewhat surprising result that a hole does not couple the AF chains of the OCM. Relevant here are the row/column flips along the x or z axis. To see their impact on the itinerant model it is more convenient to look at the OCM in its original basis at site i, {τ^z_i, τ^x_i}. With a suitable setting of the τ operators one can easily transform the GCM at θ = π/2 into the OCM with τ^z_i τ^z_{i+a} bonds along the a axis and τ^x_i τ^x_{i+b} bonds along the b one. Now we can see that the OCM commutes with the operators P_i = ∏_n τ^z_{i+nb} and Q_i = ∏_n τ^x_{i+na}, and the hopping matrices take the form of

To see the action of the row flips in the fermion space, the operator Q_i should first be generalized to the case of double and zero occupancy of site i. This is achieved by modifying τ^x_i as follows, τ^x_i → τ̃^x_i = (1 − n_i)² + τ^x_i, so that (τ̃^x_i)² = 1. Now we can produce the new Q̃_i operator in the same way as before and see its action on the fermion operators, which is Q̃_i (c_{j,α(β)}) Q̃_i = c_{j,β(α)} for all c_{j,μ} lying on the line of Q̃_i, and unity for the others. Under this change the interaction part of H_{t−U} remains unchanged, i.e., Q̃_i H_U Q̃_i = U Σ_i n_{i,α} n_{i,β}. After a single row flip B_0 remains invariant and A_0 changes as
$$A_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \;\to\; \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \qquad (7)$$
This brings us to the conclusion that H^0_{t−U} is covariant under the action of the Q̃_i: the exact form of the Hamiltonian changes, but the change is such that the properties of the new Hamiltonian are the same as before; only the orbitals along one line are renamed, which is irrelevant for the physics. This is important for the VCA calculation, as it allows us to calculate one-particle spectra in one of the nematic ground states, e.g. the AF one, instead of having to average over many of them [16]. For the OCM, results were tested for finite-size effects by changing cluster geometry and size; data presented here are for a 3 × 4 cluster.

In what follows we compare results obtained by the VCA and by MF. A first difference concerns the critical angle θ_c of the QPT from the AFx phase to the AFa one: whereas the VCA value θ_c^VCA ≈ 88° is close to the quasi-exact θ_c^MERA ≈ 84.8° [7], MF deviates more strongly with θ_c^MF ≈ 68°. While such a discrepancy might suggest the importance of quantum fluctuations within the AF background, we are going to see that processes related to the hole itself are more important.

Figure 1 illustrates how the spectral density changes across the QPT from the Ising to the nematic order for increasing θ. For θ = π/6, see Fig. 1(a), which is very close to the classical AF Ising model, we see a ladder spectrum typical for the θ = 0 limit, because at weak quantum fluctuations the hole is confined in a string potential. The two MF bands can naturally not reflect such a ladder spectrum. Nevertheless, the MF bands reflect the limited hole mobility and thereby qualitatively reproduce the shape of the topmost VCA band. For θ = 5π/12, see Fig. 1(b), the bands become significantly more dispersive, especially the ones at the top. The shape of the topmost band continues to be qualitatively well reproduced by the MF, and this band is the sharpest feature seen in the spectral function at θ = 5π/12. We observe that the bands predicted by MF repel each other in the VCA and new features emerge at intermediate energies, with rather incoherent weight. Similarly to the generic OCM case, see Fig. 1(c) in [16], bands are most dispersive along the direction (0,0) → (π,π). Since the ground state is still Ising ordered (AFx phase), the increased dispersion, especially of the rather coherent topmost band, is here not primarily driven by quantum fluctuations. Instead, interorbital hopping is now significant, see Eqs. (4) and (5), which allows the hole to propagate, similar to the case of a hole in e_g orbital order [21].

Finally, in Fig. 1(c) we show the spectral function at θ > θ_c in the AFa nematic order (θ = 89°). The bottom band is seen as a coherent feature which roughly agrees with the MF prediction, but is much less dispersive. The upper band cannot be identified so easily, even though the features around k = (π/2, π/2) resemble the MF bands. Strong-coupling differences from the MF bands are, on the one hand, the incoherent weight and, on the other, the separation of the bottom and top bands. One finds that the MF bands do not really cross at k = (π/2, π/2), but they remain very close to each other. In the VCA they are better separated, suggesting a strong effective interaction at this value of k that cannot be captured by a simple MF approach. A distinct feature observed in Fig. 1(c) is a rather coherent band in the middle of the spectrum, absent in the MF approach. It seems to strongly repel the two bands at the top and bottom of the spectrum, thus making them flatter and widening the overall spectrum. We have shown that three-site hopping is the mechanism responsible for the observed dispersion of this additional band [16].

Summarizing, we have seen that the coherent motion of a single hole (present for any θ > 0) is due to: (i) interorbital hopping in the AF phase, and (ii) three-site hopping for the nematic order. MF cannot fully describe either case; it misses the ladder spectrum due to the string potential (AF order) and the three-site hopping (nematic order). In both cases, motion is thus due to the quantum fluctuations caused by the hole itself rather than by the fluctuations of the ordered background.

Figure 1. Spectral functions obtained in the VCA at strong coupling (U = 20t) for the GCM with increasing frustration of interactions at: (a) θ = π/6, (b) θ = 5π/12, and (c) θ = 89°. Plots (a) and (b) refer to the AFx phase of the GCM and plot (c) to the AFa one (θ_c^VCA ≈ 88°). Solid lines stand for the MF bands.

W.B. acknowledges the kind hospitality of the Leibniz Institute for Solid State and Materials Research in Dresden. W.B. and A.M.O. acknowledge support by the Polish National Science Center (NCN) under Project No. 2012/04/A/ST3/00331. M.D.
thanks Deutsche Forschungsgemeinschaft (grant No. DA 1235/1-1 under Emmy-Noether Program) for support.

[1] K.I. Kugel, D.I. Khomskii, JETP 37, 725 (1973); Sov. Phys. Usp. 25, 231 (1982).
[2] G. Khaliullin, Prog. Theor. Phys. Suppl. 160, 155 (2005).
[3] A.M. Oleś, J. Phys.: Condens. Matter 24, 313201 (2012).
[4] W. Brzezicki, J. Dziarmaga, A.M. Oleś, Phys. Rev. Lett. 112, 117204 (2014).
[5] P. Wróbel, A.M. Oleś, Phys. Rev. Lett. 104, 206401 (2010).
[6] Z. Nussinov, J. van den Brink, arXiv:1303.5922 (2013).
[7] L. Cincio, J. Dziarmaga, A.M. Oleś, Phys. Rev. B 82, 104416 (2010).
[8] Z. Nussinov, E. Fradkin, Phys. Rev. B 71, 195120 (2005).
[9] B. Douçot, M.V. Feigel'man, L.B. Ioffe, A.S. Ioselevich, Phys. Rev. B 71, 024505 (2005).
[10] S. Gladchenko, D. Olaya, E. Dupont-Ferrier, B. Douçot, L.B. Ioffe, M.E. Gershenson, J. Phys. Soc. Jpn. 96, 1606 (2009).
[11] P. Milman, W. Maineult, S. Guibal, L. Guidoni, B. Douçot, L. Ioffe, T. Coudreau, Phys. Rev. Lett. 99, 020503 (2007).
[12] F. Trousselet, A.M. Oleś, P. Horsch, Europhys. Lett. 91, 40005 (2010); Phys. Rev. B 86, 134412 (2012).
[13] W. Brzezicki, J. Dziarmaga, A.M. Oleś, Phys. Rev. B 75, 134415 (2007); W. Brzezicki, A.M. Oleś, Acta Phys. Polon. A 115, 162 (2009).
[14] W. Brzezicki, A.M. Oleś, Phys. Rev. B 82, 060401 (2010); Phys. Rev. B 87, 214421 (2013); J. Phys.: Conf. Ser. 200, 012017 (2010).
[15] J. Nusu, S. Ishihara, Europhys. Lett. 97, 27002 (2012).
[16] W. Brzezicki, M. Daghofer, A.M. Oleś, Phys. Rev. B 89, 024417 (2014).
[17] C. Dahnken, M. Aichhorn, W. Hanke, E. Arrigoni, M. Potthoff, Phys. Rev. B 70, 245110 (2004).
[18] M. Potthoff, M. Aichhorn, C. Dahnken, Phys. Rev. Lett. 91, 206402 (2003).
[19] M. Daghofer, K. Wohlfeld, A.M. Oleś, E. Arrigoni, P. Horsch, Phys. Rev. Lett. 100, 066403 (2008).
[20] K. Wohlfeld, M. Daghofer, A.M. Oleś, P. Horsch, Phys. Rev. B 78, 214423 (2008).
[21] J. van den Brink, P. Horsch, A.M. Oleś, Phys. Rev. Lett. 85, 5174 (2000).
[]
[ "Computations and Complexities of Tarski's Fixed Points and Supermodular Games" ]
[ "Chuangyin Dang \nDept. of Systems Engineering & Engineering Management\nDept. of Industrial Engineering & Decision Analytics\nCity University of Hong Kong Kowloon\nHong Kong SARChina\n", "Qi Qi \nDept. of Management Science & Engineering\nThe Hong Kong University of Science and Technology Kowloon\nHong Kong SARChina\n", "Yinyu Ye [email protected] \nStanford University Stanford\n94305-4026CA\n" ]
[ "Dept. of Systems Engineering & Engineering Management\nDept. of Industrial Engineering & Decision Analytics\nCity University of Hong Kong Kowloon\nHong Kong SARChina", "Dept. of Management Science & Engineering\nThe Hong Kong University of Science and Technology Kowloon\nHong Kong SARChina", "Stanford University Stanford\n94305-4026CA" ]
[]
We consider two models of computation for Tarski's order preserving function f related to fixed points in a complete lattice: the oracle function model and the polynomial function model. In both models, we find the first polynomial time algorithm for finding a Tarski's fixed point. In addition, we provide a matching oracle bound for determining the uniqueness in the oracle function model and prove it is Co-NP hard in the polynomial function model. The existence of the pure Nash equilibrium in supermodular games is proved by Tarski's fixed point theorem. Exploring the difference between supermodular games and Tarski's fixed point, we also develop the computational results for finding one pure Nash equilibrium and determining the uniqueness of the equilibrium in supermodular games.
null
null
218,718,488
2005.09836
d4c845b00ac53647dce08ccd3ff28a4e80b96317
Computations and Complexities of Tarski's Fixed Points and Supermodular Games

20 May 2020

Chuangyin Dang (Dept. of Systems Engineering & Engineering Management and Dept. of Industrial Engineering & Decision Analytics, City University of Hong Kong, Kowloon, Hong Kong SAR, China), Qi Qi (Dept. of Management Science & Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong SAR, China), Yinyu Ye ([email protected]; Stanford University, Stanford, CA 94305-4026)

Keywords: Fixed Point Theorem, Equilibrium Computation, Supermodular Game, Order Preserving Mapping, Co-NP Hardness

We consider two models of computation for Tarski's order preserving function f related to fixed points in a complete lattice: the oracle function model and the polynomial function model. In both models, we find the first polynomial time algorithm for finding a Tarski's fixed point. In addition, we provide a matching oracle bound for determining the uniqueness in the oracle function model and prove it is Co-NP hard in the polynomial function model. The existence of the pure Nash equilibrium in supermodular games is proved by Tarski's fixed point theorem. Exploring the difference between supermodular games and Tarski's fixed point, we also develop the computational results for finding one pure Nash equilibrium and determining the uniqueness of the equilibrium in supermodular games.

Introduction

Supermodular games, also known as games of strategic complements, were formalized by Topkis in 1979 [23] and have been extensively studied in the literature, e.g., by Bernstein and Federgruen [2][1], Cachon [4], Cachon and Lariviere [5], Fudenberg and Tirole [11], Lippman and McCardle [16], Milgrom and Roberts [17][18], Milgrom and Shannon [19], Topkis [24], and Vives [25][26]. In supermodular games, the utility function of every player has increasing differences.
Then the best response of a player is a nondecreasing function of the other players' strategies. For example, if firm A's competitor B starts spending more money on research, it becomes more advisable for firm A to do the same. Supermodular games arise in many applied models and cover most static market models: investment games, Bertrand oligopoly, and Cournot oligopoly can all be modeled as supermodular games. Many models in operations research have also been analyzed as supermodular games, for example supply chain analysis, revenue management games, price and service competition, and inventory competition. Recently, the problem of power control in cellular CDMA wireless networks has also been modeled as a supermodular game. The existence of a pure Nash equilibrium in any supermodular game is proved by Tarski's fixed point theorem [21]. The well-known Tarski's fixed point theorem asserts that if (L, ⪯) is a complete lattice and f is order preserving from L into itself, then there exists some x* ∈ L such that f(x*) = x*. This theorem plays a crucial role in the study of supermodular games for economic analysis and has other important applications. To compute a Nash equilibrium of a supermodular game, a generic approach is to convert it into the computation of a fixed point of an order preserving mapping. Recently, an algorithm was proposed by Echenique [10] to find all pure strategy Nash equilibria of a supermodular game, which motivated the study in this paper. An efficient computational algorithm for finding a Nash equilibrium is a recognized technical advantage in applications. Further, it is sometimes desirable to know whether an already-found equilibrium is unique, in order to decide whether additional resources should be spent to improve the already found solution.
There are some interesting complexity results in algorithmic game theory along this line, on determining whether or not a game has a unique equilibrium point. For the bimatrix game, Gilboa and Zemel [12] showed that it is NP-hard to determine whether or not there is a second Nash equilibrium. For this problem, computing even one equilibrium (which is known to exist) is already difficult, and no polynomial time algorithms are known: finding a Nash equilibrium of a bimatrix game is PPAD-complete [9]. Similar cases are known for other problems such as market equilibrium computation (Codenotti et al. [3]). In this work, we first consider the fixed point computation of order preserving functions over a complete lattice, both for finding a solution and for determining the uniqueness of an already-found solution. We then study the computational problems of finding one pure Nash equilibrium and of determining the uniqueness of the equilibrium in supermodular games. We are interested in both the oracle function model and the polynomial function model. For both the fixed point problem and supermodular games, the domain space can be huge. Most interesting discussions consider a succinct representation (see Section 2.2) of the lattice (L, ⪯) such that the input size is related to log |L|, which is enough to represent a variable in a lattice of size |L|. Both the oracle function model and the polynomial time function model return the function value f(x) on a lattice node x, where x is of size log |L|. They differ in the way the function is computed. The polynomial time function model computes f(x) by an explicitly given algorithm, in time polynomial in log |L|. The oracle model, on the other hand, always returns the value in one oracle step. More details comparing these two models can be found in Section 2.3.
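The distinction between the two function models can be made concrete with a small sketch (a hypothetical illustration; the class name `ConsistentOracle` and the `parity` example below are our own, not from the paper): an oracle is an arbitrary black box that must answer consistently across repeated queries, while a polynomial function is given as explicit code.

```python
# Illustration of the two function models discussed above (our own sketch).
# An "oracle" is any black box that must stay consistent across repeated
# queries; we model this with a cache, mirroring the memory-cache analogy
# used later in Section 2.3.

class ConsistentOracle:
    """Wraps an arbitrary function so repeated queries return the same value."""

    def __init__(self, fn):
        self._fn = fn          # may be arbitrarily expensive or adversarial
        self._cache = {}       # once answered, an oracle cannot change its answer

    def query(self, x):
        if x not in self._cache:
            self._cache[x] = self._fn(x)
        return self._cache[x]

# A polynomial-time function, by contrast, is given as explicit code:
def parity(i):
    return i & 1  # computable in time polynomial in the bit-length of i

oracle = ConsistentOracle(parity)
assert oracle.query(6) == 0
assert oracle.query(6) == 0  # the repeated query must give the same answer
```

The oracle model admits every function from the domain to the range, whereas the polynomial function model only admits functions computable in polynomial time, which is why lower bounds in the oracle model are information-theoretic counting arguments while hardness in the polynomial model rests on complexity assumptions.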
Main Results and Related Work

A partially ordered set L is defined with ⪯ as a binary relation on the set L such that ⪯ is reflexive, transitive, and anti-symmetric. A lattice is a partially ordered set (L, ⪯) in which any two elements x and y have a least upper bound (supremum), sup_L(x, y) = inf{z ∈ L | x ⪯ z and y ⪯ z}, and a greatest lower bound (infimum), inf_L(x, y) = sup{z ∈ L | z ⪯ x and z ⪯ y}, in the set. A lattice (L, ⪯) is complete if every nonempty subset of L has a supremum and an infimum in L. Let f be a mapping from L to itself. f is order preserving if f(x) ⪯ f(y) for any x and y of L with x ⪯ y. We focus on finite lattices under the componentwise ordering and the lexicographic ordering. Let L_d = {x ∈ Z^d | a ≤ x ≤ b}, where a and b are two finite vectors of Z^d with a < b. We denote the componentwise ordering and the lexicographic ordering by ≤_c and ≤_l respectively. Clearly, (L_d, ≤_c) is a finite lattice with componentwise ordering and (L_d, ≤_l) is a finite lattice with lexicographic ordering. Let f_c and f_l be order preserving mappings from L_d into itself under the componentwise ordering and the lexicographic ordering respectively.

Tarski's Fixed Points: Oracle Function Model

When f_l(·) and f_c(·) are given as oracle functions, we develop a complete understanding of finding a Tarski's fixed point, as well as of determining the uniqueness of the Tarski's fixed point, in both the lexicographic ordering and the componentwise ordering lattices. We develop an algorithm of time complexity O(log^d |L|) to find a Tarski's fixed point on the componentwise ordering lattice (L, ≤_c), for any constant dimension d. The algorithm is based on the binary search method. We first present the algorithm for d = 2; following the same principle, it can be generalized to any constant dimension. This is the first known polynomial time algorithm for finding the Tarski's fixed point in terms of the componentwise ordering.
In the literature, a polynomial time algorithm was known only for total order lattices (Chang et al. [6]). Recently, Mihalis, Kousha and Papadimitriou stated in a private communication that they proved a lower bound of Ω(log² |L|) in the oracle function model for finding a Tarski's fixed point in the two dimensional case, and conjectured a lower bound of Ω(log^d |L|) for general d (Christos H. Papadimitriou, private communication, March 2019). Together with our upper bound results, this establishes a matching bound of Θ(log² |L|) for finding a Tarski's fixed point in the two dimensional case. On the other hand, given a general lattice (L, ⪯) with one already known fixed point, finding out whether it is unique takes Ω(|L|) time for any algorithm. For the componentwise ordering lattice, we derive a Θ(N_1 + N_2 + · · · + N_d) matching bound for determining the uniqueness of the fixed point, where L = {x ∈ Z^d | a ≤ x ≤ b} and N_i = b_i − a_i. In addition, we prove this matching bound for both deterministic and randomized algorithms. A lexicographic ordering lattice can be viewed as a componentwise ordering lattice of dimension one via an appropriate polynomial time transformation that changes the oracle function on the d-dimensional space into an oracle function on the 1-dimensional space. All the above results can be transplanted onto the lexicographic ordering lattice with a set of related parameters. In the literature, a polynomial time algorithm was known only for total order lattices: when the lattice (L, ⪯) has a total order, i.e., all points of the lattice are comparable, there is a matching bound of Θ(log |L|), whereas an Ω(|L|) lower bound is known for general lattices (when the lattice is given as an oracle) in Chang et al. [6].

Tarski's Fixed Points: Polynomial Function Model

Under the polynomial time function model, our polynomial time algorithm applies when the dimension is any finite constant.
When the dimension is used as part of the input size in unary, we first present a polynomial-time reduction of a 3-SAT problem to an order preserving mapping f from a componentwise ordering lattice L into itself. As a result of this reduction, we obtain that, given f as a polynomial time function, determining whether f has a unique fixed point in L is a Co-NP hard problem. Furthermore, even when the dimension is one, we show that determining the uniqueness of Tarski's fixed point in a lexicographic lattice is Co-NP hard, even though there exists a polynomial-time algorithm for computing a Tarski's fixed point in a lexicographic lattice in any dimension. Our main results for Tarski's fixed point computation are summarized in Table 1 and Table 2.

Supermodular Games

For supermodular games, we develop an algorithm that finds a pure Nash equilibrium in polynomial time O(log N_1 · · · log N_{d−1}) in the oracle function model, where d is the total number of players, N_i is the number of strategies of player i, and N_1 ≤ N_2 ≤ · · · ≤ N_d. It is the first polynomial time algorithm when d is a constant. Thus a pure Nash equilibrium can be found in time O(poly(log |L|) · log N_1 · · · log N_{d−1}) in the polynomial function model, where |L| = N_1 × N_2 × · · · × N_d. In the polynomial function model, we prove that determining uniqueness is Co-NP-hard. In the literature, Robinson (1951) [20] introduced the iterative method to solve a game, and Topkis (1979) [23] used this method to find a pure Nash equilibrium of a supermodular game in time O(N_1 + N_2 + · · · + N_d). The first non-trivial algorithm for finding pure Nash equilibria was proposed by Echenique in 2007 [10]. However, that algorithm takes exponential time O(N_1 × N_2 × · · · × N_d) to find the first pure equilibrium in the worst case.
Table 1: Finding a Tarski's fixed point

                  Polynomial Function                         Oracle Function
  Componentwise   O(poly(log |L|) · log N_1 · · · log N_d)    O(log N_1 · · · log N_d)
  Lexicographic   O(poly(log |L|) · log |L|)                  O(log |L|)

Table 2: Determining uniqueness of the fixed point

                  Polynomial Function    Oracle Function
  Componentwise   Co-NP-hard             Θ(N_1 + N_2 + · · · + N_d)
  Lexicographic   Co-NP-Complete         Θ(|L|)

Organization

The rest of the paper is organized as follows. In Section 2, we present definitions as well as the difference between the polynomial function model and the oracle function model. We develop polynomial time algorithms in the oracle function model for componentwise ordering and lexicographic ordering in Section 3. In Section 4, we derive the matching bound for determining the uniqueness of Tarski's fixed point under the oracle function model. We prove Co-NP hardness of determining the uniqueness of Tarski's fixed point under the polynomial function model in Section 5. In Section 6, we develop the computational results for finding one pure Nash equilibrium and determining the uniqueness of the equilibrium in supermodular games. We conclude with discussion and remarks on our results and open problems in Section 7.

Preliminaries

In this section, we first introduce the formal definitions of the related concepts as well as Tarski's fixed point theorem. We then compare the oracle function model and the polynomial function model. A lattice is a complete lattice if for any subset A = {a_1, a_2, · · · , a_k} ⊆ L, there is a unique meet ∧A = a_1 ∧ a_2 ∧ · · · ∧ a_k and a unique join ∨A = a_1 ∨ a_2 ∨ · · · ∨ a_k. For simplicity, we write L for a lattice when no ambiguity exists about ⪯, and specify ⪯ whenever necessary.

Theorem 1 (Tarski [21]). If L is a complete lattice and f is an order preserving function from L to itself, then there exists some x* ∈ L such that f(x*) = x*, which is a fixed point of f.

This theorem guarantees the existence of fixed points of any order preserving function f : L → L on any nonempty complete lattice.

Definition 4 (Lexicographic Ordering Function). Given points in a d-dimensional space, the lexicographic ordering ≤_l is defined as: ∀x, y ∈ R^d, x ≤_l y if either x = y, or x_i = y_i for i = 1, 2, . . .
, k − 1, and x_k < y_k, for some k ≤ d.

Definition 5 (Componentwise Ordering Function). Given points in a d-dimensional space, the componentwise ordering ≤_c is defined as: ∀x, y ∈ R^d, x ≤_c y if x_i ≤ y_i for all i ∈ {1, 2, · · · , d}.

Big Input Data and Succinct Representation

For the problems we consider in this work, there are usually 2^{d·n} nodes, where d is a constant and n is an input parameter. Therefore, the input size is exponential in the input parameter n, and we need to represent such input data succinctly. As an example, for the set N = {0, 1, 2, · · · , 2^n − 1}, the input can be described as all integers i with 0 ≤ i ≤ 2^n − 1; each such integer i can be written with up to n bits. When a computational problem involves a function such as f : N → N, there is always the question of how this function is given as input. As an example, let f(i) represent the parity of the integer i ∈ N. Then, as input to the computational problem, f can be a circuit that takes the last bit of the input i, so the size of f is polynomial in the number of bits, n, of the input data. In general, however, input functions are not that simple. We therefore define two models of succinctly represented functions for computational problems whose input involves functions on big data sets.

The Oracle Function Model Versus the Polynomial Time Function Model

The two succinctly represented function models are oracle functions and polynomial functions. In the oracle model, we treat the function as a black box that outputs the function value for a domain variable once a request is sent to the oracle. The output of the oracle is arbitrary on the first query, but it cannot change a function value after a query has already been made to the oracle on the same variable. For example, let N = {0, 1} and let f : N → N be an oracle function. When we ask for f(0), the oracle could answer anything, either 0 or 1.
Suppose the oracle answers f(0) = 1 on the first query in one run of our algorithm. Later, if we need f(0) again in the same run of the algorithm, it must still be 1. Equivalently, we may assume that the function values are stored on a hard disk: after a query, the value is saved in a memory cache, and later uses of the same query take the value from the cache without checking the hard disk again. It is important to note that the oracle function model contains all functions f : N → M, where N is the domain and M is the range. This is very different from the polynomial function model introduced next. In the polynomial function model, the input function is an algorithm that returns the function value on the input data in time polynomial in the input parameter n. Alternatively, the polynomial time algorithm can be replaced by a polynomial size logical circuit consisting of gates {AND, NOT, OR} over Boolean variables. Clearly, the oracle function model admits many more functions than those computable in polynomial time. Therefore a problem is usually much harder under the oracle function model than under the polynomial time function model.

Polynomial Time Algorithm under the Oracle Function Model

In this section, we consider the complexity of finding a Tarski's fixed point in any constant dimension d with the function value f given by an oracle. Chang et al. [6] proved that a fixed point can be found in polynomial time when the given lattice is a total order. Define L = {x ∈ Z^d | a ≤ x ≤ b}, where a and b are two finite vectors of Z^d with a < b.

Theorem 2 (Chang et al. [6]). When (L, ⪯) is given as input and the order preserving function f is given as an oracle, a Tarski's fixed point can be found in time O(log |L|) on a finite lattice when ⪯ is a total order on L.

Since any two vectors are comparable in the lexicographic ordering, the lexicographic ordering is a total order.
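For intuition, the total-order case underlying Theorem 2 can be sketched as a plain binary search (a minimal illustration under our own assumptions; the map `f` below is a made-up order preserving example, not from the paper). The key invariant is that an order preserving self-map of the chain {lo, ..., hi} restricts to a self-map of whichever half survives, so Tarski's theorem keeps guaranteeing a fixed point there.

```python
# Binary search for a Tarski fixed point on a totally ordered chain
# {a, ..., b}; f must be order preserving and map the chain into itself.
def tarski_fixed_point_1d(a, b, f):
    lo, hi = a, b
    while lo <= hi:
        mid = (lo + hi) // 2
        v = f(mid)
        if v == mid:
            return mid              # f(mid) = mid: a Tarski fixed point
        if v > mid:
            lo = mid + 1            # f maps {mid+1, ..., hi} into itself
        else:
            hi = mid - 1            # f maps {lo, ..., mid-1} into itself
    raise AssertionError("f is not an order preserving self-map of the chain")

# illustrative order preserving self-map of {0, ..., 15} (our own example)
f = lambda x: min(15, max(0, (x + 9) // 2))
assert all(f(x) <= f(x + 1) for x in range(15))   # order preserving
x_star = tarski_fixed_point_1d(0, 15, f)
assert f(x_star) == x_star
```

Each iteration halves the interval and costs one oracle query, matching the O(log |L|) bound of Theorem 2; the componentwise algorithm below nests this idea across dimensions.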
We therefore have:

Corollary 1. When (L, ⪯) is given as input and the order preserving function f is given as an oracle, a Tarski's fixed point can be found in time O(log |L|) on a finite lattice when ⪯ is a lexicographic ordering on L.

The proof is standard, utilizing the total order property of the lexicographic ordering. As the componentwise ordering lattice cannot be modeled as a total order, the oracle complexity of finding a fixed point in a componentwise ordering lattice was left open. Here we show that this problem is also polynomial time solvable, by designing a polynomial algorithm that finds a fixed point of f in time O(log^d |L|) on a componentwise ordering lattice L. The algorithm exploits the order properties of the componentwise lattice and applies the binary search method with a dimension reduction technique. To illustrate the main ideas, we first consider the 2D case before moving on to the general case. WLOG, we assume L is an N × N square centered at the point (0, 0). The componentwise ordering is denoted ≤_c.

Algorithm 3.1. Point_check (a polynomial algorithm for a 2D lattice)

Input: a 2-dimensional lattice (L, ≤_c) with |L| = N² (the input size to the oracle is 2 log N, since the input size of each dimension is log N), and an oracle function f that is order preserving: ∀x ∈ L, f(x) ∈ L, and f(x) ≤_c f(y) whenever x ≤_c y.

Point_check(L, f):
Let x_0 be the center point of L. Let x_L be the leftmost point of L with (x_L)_2 = (x_0)_2, and let x_R be the rightmost point of L with (x_R)_2 = (x_0)_2.
1. If f(x_0) = x_0, return x_0; end.
2. If f(x_0) ≥_c x_0, let L' = {x ∈ L | x ≥_c x_0}; Point_check(L', f).
3. If f(x_0) ≤_c x_0, let L' = {x ∈ L | x ≤_c x_0}; Point_check(L', f).
4. If f(x_0)_1 < (x_0)_1 and f(x_0)_2 > (x_0)_2, Binary_Search(x_L, x_0).
5. If f(x_0)_1 > (x_0)_1 and f(x_0)_2 < (x_0)_2, Binary_Search(x_0, x_R).

Binary_Search(x, y):
Let x_m = (x + y)/2.
1. If f(x_m) = x_m, return x_m; end.
2.
If f(x_m) ≥_c x_m, let L' = {x ∈ L | x ≥_c x_m}; Point_check(L', f).
3. If f(x_m) ≤_c x_m, let L' = {x ∈ L | x ≤_c x_m}; Point_check(L', f).
4. If f(x_m)_1 < (x_m)_1 and f(x_m)_2 > (x_m)_2, Binary_Search(x, x_m).
5. If f(x_m)_1 > (x_m)_1 and f(x_m)_2 < (x_m)_2, Binary_Search(x_m, y).

Theorem 3. When the order preserving function f is given as an oracle, a Tarski's fixed point can be found in time O(log² N) on a finite 2D lattice formed by the integer points of a box with side length N, using Algorithm 3.1 (Point_check).

Proof. Starting from a lattice of size |L|, we first prove that in at most O(log N) steps the above algorithm either finds a fixed point or reduces the input lattice to size at most |L|/2.
Case I: If f(x_0) = x_0, then x_0 is a fixed point, found in one step.
Case II: If f(x_0) ≥_c x_0, then since f is order preserving, ∀y ≥_c x_0 we have f(y) ≥_c f(x_0) ≥_c x_0. Let L' = {x ∈ L | x ≥_c x_0} and define f'(x) = f(x) for all x ∈ L'. Then f' : L' → L' is an order preserving function on the complete lattice L'. By Tarski's fixed point theorem, there must exist a fixed point in L'. Next we only need to check L', which is only 1/4 the size of L.
Case III: If f(x_0) ≤_c x_0, then similarly to the analysis in Case II, we only need to consider L' = {x ∈ L | x ≤_c x_0}, which is only 1/4 the size of L, in the next step.
Case IV: If f(x_0)_1 < (x_0)_1 and f(x_0)_2 > (x_0)_2, we prove that Binary_Search(x_L, x_0) finds a fixed point or reduces the size of the lattice by half in log(N/2) steps. Since f is order preserving, for any adjacent points u ≤_c v in L it is impossible that f(u)_1 > u_1 and f(v)_1 < v_1. Thus, on a line segment [x, y] with x_2 = y_2, if f(x)_1 ≥ x_1 and f(y)_1 < y_1, there must exist a point z such that f(z)_1 = z_1. On the other hand, we have f(x_0)_1 < (x_0)_1 and, by the boundary condition, f(x_L)_1 ≥ (x_L)_1; therefore, there must exist a point x' ∈ [x_L, x_0) such that f(x')_1 = x'_1.
This point x' can be found in log(N/2) steps using binary search. If f(x')_2 > x'_2, then similarly to the analysis in Case II, we only need to consider L' = {x ∈ L | x ≥_c x'}, which is at most 1/2 the size of L, in the next step. If f(x')_2 < x'_2, we only need to consider L' = {x ∈ L | x ≤_c x'}, which is at most 1/4 the size of L, in the next step. If f(x')_2 = x'_2, then x' is a fixed point.
Case V: If f(x_0)_1 > (x_0)_1 and f(x_0)_2 < (x_0)_2, we can similarly prove that Binary_Search(x_0, x_R) finds a fixed point or reduces the size of the lattice by half in log(N/2) steps.
The size of the lattice is thus reduced by half every O(log N) steps. Therefore, the algorithm finds a fixed point in at most O(log N × log |L|) = O(log² N) steps.

The above algorithm can be generalized to any constant dimensional lattice L = {x ∈ Z^d | a ≤ x ≤ b}, where a and b are two finite vectors of Z^d with a < b, by reducing a (d + 1)-dimensional problem to a d-dimensional one. Assume we have an algorithm A_d(L, f) for the d-dimensional problem with time complexity O(log^d |L|). Consider a (d + 1)-dimensional lattice (L, ≤_c). Choose the central point of L and denote it by O = (O_1, O_2, · · · , O_{d+1})^T. Take the section of L by a hyperplane parallel to x_{d+1} = 0 passing through O, and denote it by L_d; clearly, it is a d-dimensional lattice. We define a new oracle function f_d on L_d, based on the oracle function f on L: f_d(x_1, x_2, · · · , x_d) = (y_1, y_2, · · · , y_d) if f(x_1, x_2, · · · , x_d, O_{d+1}) = (y_1, y_2, · · · , y_d, y_{d+1}). We apply the algorithm A_d(L_d, f_d) to obtain a Tarski's fixed point of f_d in time O(log^d |L|); denote it by x*. Therefore, f((x*, O_{d+1})) = (x*, O_{d+1}) + a·e_{d+1} or f((x*, O_{d+1})) = (x*, O_{d+1}) − a·e_{d+1}, where a is some nonnegative constant and e_{d+1} is the (d + 1)-dimensional unit vector with 1 in its (d + 1)-th position.
In either case, we obtain a new box B of size no more than half of the original box defined by [a, b], such that f(·) maps all points of B into B and is order preserving on B. We can apply the algorithm recursively on B, and the base case is handled easily. Therefore the total time satisfies T(|L|) ≤ T(|L|/2) + O(log^d |L|), from which it follows that T(|L|) = O(log^{d+1} |L|). Formally, the polynomial time algorithm for finding a Tarski's fixed point in a d-dimensional componentwise ordering lattice is described as follows.

Input: a d-dimensional lattice L_d with, WLOG, |L_d| = N^d (the input size to the oracle is d log N, since the input size of each dimension is log N), and an oracle function f_d that is order preserving: ∀x ∈ L_d, f_d(x) ∈ L_d, and f_d(x) ≤_c f_d(y) whenever x ≤_c y.

Fixed_point(L_d):
1. If d > 1:
 (a) Let x_0 be the center point of L_d.
 (b) Let L_{d−1} = {x = (x_1, x_2, · · · , x_{d−1}) | (x, (x_0)_d) ∈ L_d}.
 (c) Let f_{d−1}(x) = (f_d(x, (x_0)_d)_1, f_d(x, (x_0)_d)_2, · · · , f_d(x, (x_0)_d)_{d−1}).
 (d) x* = Fixed_point(L_{d−1}).
 (e) If f_d(x*, (x_0)_d)_d > (x_0)_d, let L_d = {x | x ≥_c (x*, (x_0)_d)}; Fixed_point(L_d).
 (f) If f_d(x*, (x_0)_d)_d < (x_0)_d, let L_d = {x | x ≤_c (x*, (x_0)_d)}; Fixed_point(L_d).
 (g) If f_d(x*, (x_0)_d)_d = (x_0)_d, return (x*, (x_0)_d); end.
2. If d = 1, let x_L be the left end point and x_R the right end point; binary_search(x_L, x_R, f_d).

binary_search(x_L, x_R, f):
1. If f(x_L) = x_L, output x_L;
2. else if f(x_R) = x_R, output x_R;
3. else:
 (a) If f(⌊(x_L + x_R)/2⌋) < ⌊(x_L + x_R)/2⌋, binary_search(x_L, ⌊(x_L + x_R)/2⌋, f);
 (b) If f(⌊(x_L + x_R)/2⌋) > ⌊(x_L + x_R)/2⌋, binary_search(⌊(x_L + x_R)/2⌋, x_R, f);
 (c) else output x* = ⌊(x_L + x_R)/2⌋.

Determining Uniqueness under the Oracle Function Model

It is a natural question to check whether there is another fixed point after finding the first one, for example in applications that find all Nash equilibria (Echenique [10]).
In this section we prove a lower bound showing that, given a general lattice L with one already known fixed point, deciding whether it is unique takes Ω(|L|) time for any algorithm. For the componentwise ordering lattice, we also derive a Θ(N_1 + N_2 + · · · + N_d) matching bound for determining the uniqueness of the fixed point, even for randomized algorithms. The technique builds on and further reveals crucial properties of the mathematical structure of fixed points.

Theorem 5. Given a lattice (L, ⪯), an order preserving function f, and a fixed point x_0, it takes Ω(|L|) time for any deterministic algorithm to decide whether there is a unique fixed point.

Proof. Consider the lattice on a real line: 0 ≺ 1 ≺ 2 ≺ · · · ≺ |L| − 1. Let x_0 = 0, define f(0) = 0 and f(x) = x − 1 for all x ≥ 1 except at a possible fixed point x*, where f(x*) = x* or f(x*) = x* − 1 is not known until we query x*. Given a deterministic algorithm A, let y_j be the j-th item A queries in its effort to find x*. Our adversary answers x − 1 whenever A asks for f(x), until the last unqueried item, for which the adversary answers x. Clearly this yields a lower bound of Ω(|L|). For a randomized algorithm R, let p_{ij} be the probability that R queries x = i on its j-th query, and let k be the total number of queries R makes. We have Σ_{j=1}^{k} Σ_{i=0}^{|L|−1} p_{ij} = k. Therefore, there exists i* such that Σ_{j=1}^{k} p_{i*,j} ≤ k/|L|. The adversary places f(i*) = i*, which is queried with probability k/|L| < 1/2 when we choose k = (|L| − 1)/2. Therefore, we have:

Theorem 6. Given a lattice (L, ⪯), an order preserving function f, and a fixed point x_0, it takes Ω(|L|) time for any randomized algorithm to decide, with probability at least 1/2, whether there is a unique fixed point.
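The adversary in the proofs of Theorems 5 and 6 can be made concrete with a toy instance (the chain length 16 and the hidden point 11 below are our own illustrative choices, not from the paper): the "unique" instance and the "hidden second fixed point" instance are both order preserving and differ at exactly one query, so an algorithm that has not queried that point cannot distinguish them.

```python
# A toy demonstration of the adversary construction on the chain
# 0 < 1 < ... < L-1 (illustrative values only).

L = 16
known_fixed_point = 0

def f_unique(x):
    return 0 if x == 0 else x - 1          # only fixed point is 0

def f_with_hidden(xstar):
    # identical to f_unique except that xstar is also a fixed point
    return lambda x: x if x in (0, xstar) else x - 1

# both instances are order preserving on the chain ...
for g in [f_unique, f_with_hidden(11)]:
    assert all(g(x) <= g(x + 1) for x in range(L - 1))

# ... and differ at exactly one point, so an algorithm that never queried
# xstar cannot tell whether the known fixed point 0 is unique.
g = f_with_hidden(11)
diffs = [x for x in range(L) if g(x) != f_unique(x)]
assert diffs == [11]
```

Since the hidden point can be placed at any of the |L| − 1 positions the algorithm has not yet queried, roughly |L| queries are forced, which is exactly the Ω(|L|) bound of Theorem 5.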
As we noted before, a lexicographic ordering lattice can be viewed as a total order lattice, or a componentwise ordering lattice of dimension one, via an appropriate polynomial time transformation that changes the oracle function on the d-dimensional space into an oracle function on the 1-dimensional space. Therefore:

Corollary 2. Given a lattice (L, ≤_l), an order preserving function f, and a fixed point x_0, it takes Ω(|L|) time for any deterministic algorithm, and for any randomized algorithm with success probability at least 1/2, to decide whether there is a unique fixed point.

Next we consider a componentwise lattice.

Theorem 7. Given the componentwise lattice L = N_1 × N_2 × · · · × N_d of d dimensions, an order preserving function f, and a fixed point x_0, the deterministic oracle complexity of deciding whether there is a unique fixed point is Θ(N_1 + N_2 + · · · + N_d).

The adversary sets g(x) = f(x) − x to be auxi(x) except at certain points (decided according to the algorithm) where it may hide a zero point.

Proof. For dimension d ≥ 2, let L = {x ∈ Z^d : 0 ≤_c x ≤_c (N_1, N_2, · · · , N_d)}, and write x = (x_1, x_2, · · · , x_d).

Proof of the Lower Bound: First consider x such that x_d = 0. These points constitute an instance of dimension d − 1. By the inductive hypothesis, it requires time N_1 + N_2 + · · · + N_{d−1} to decide whether or not there is a zero point with x_d = 0. Second, when there is no such zero point, we need to decide whether there is a zero point at some x with x_d > 0. Fixing any i > 0, we set, for all x with x_d = i, g(x) = 0 whenever none of these x has been queried, and g(x) = −e_d otherwise. This takes N_d queries. One may note that the adversary always answers a non-zero value. In fact, for any pair i = maxindex(x) and j = x_i not queried, the adversary can make g(x) = 0 without violating the order preserving property.
Proof of the Upper Bound: We design an algorithm that always queries the componentwise maximum point of the lattice, x_max = (N_1, N_2, · · · , N_d). We must have g(x_max) ≤_c 0, and we are done if it is zero. Otherwise, there must exist some i such that g(x_max)_i < 0. The problem is then reduced to the smaller lattice L' = {x ∈ Z^d : 0 ≤_c x ≤_c (N_1, · · · , N_{i−1}, N_i − 1, N_{i+1}, · · · , N_d)}, whose total sum of side lengths is at most N_1 + N_2 + · · · + N_d − 1. The claim follows. The randomized lower bound follows in the same way as in the one-dimensional case for general lattices: we can always set g(x) = 0 for any x with i = maxindex(x) and j = x_i such that no such x has been queried.

Corollary 3. Given the componentwise lattice L = N_1 × N_2 × · · · × N_d of d dimensions, an order preserving function f, and a fixed point x_0, it takes Θ(N_1 + N_2 + · · · + N_d) time for any randomized algorithm to decide, with probability at least 1/2, whether there is a unique fixed point.

Determining Uniqueness under the Polynomial Function Model

In this section, we consider the dimension as part of the input size in unary and develop a hardness proof, in the polynomial function model, for determining the uniqueness of a given fixed point. We start with a polynomial-time reduction from the 3-SAT problem, which is NP-complete, to the problem of finding a second Tarski's fixed point, by deriving an order preserving mapping f from a componentwise ordering lattice L into itself with a given fixed point. Therefore, given f as a polynomial time function with a known fixed point, determining whether f has another fixed point in L is an NP-hard problem; in other words, determining the uniqueness of a Tarski's fixed point is Co-NP-hard. Furthermore, even when the dimension is one, the uniqueness problem is still Co-NP-hard. This can be shown by designing a polynomial-time reduction from the 3-SAT problem to the uniqueness of Tarski's fixed point in a lexicographic lattice.
As the lexicographic order defines a total order, the problem can be reduced to a one dimensional one by giving a polynomial time algorithm for evaluating the order function. It then follows that determining the uniqueness of Tarski's fixed point in a lexicographic lattice is Co-NP hard, even though there exists a polynomial-time algorithm for finding one Tarski's fixed point in a lexicographic lattice in any dimension. We start with one of the classical NP-complete problems, 3-SAT, defined as follows.

Definition 6 (3CNF-formula). A literal is a boolean variable or its negation. A clause is several literals connected with ∨'s. A boolean formula is in conjunctive normal form (CNF) if it is made of clauses connected with ∧'s. If every clause has exactly 3 literals, the CNF-formula is called a 3CNF-formula.

Definition 7 (3-SAT Problem). Input: n boolean variables x_1, x_2, · · · , x_n and m clauses C_1, C_2, · · · , C_m, each consisting of three literals from the set {x_1, x̄_1, x_2, x̄_2, · · · , x_n, x̄_n}. Output: an assignment of {0, 1} to the boolean variables x_1, x_2, · · · , x_n such that the 3CNF-formula F := C_1 ∧ C_2 ∧ · · · ∧ C_m evaluates to true, i.e., there is at least one true literal in every clause.

Theorem 8 ([13]). 3-SAT is NP-complete.

For both the lexicographic ordering and the componentwise ordering, the Co-NP-hardness results are derived by a reduction from the 3-SAT problem.

Proof of Co-NP-hardness in Lexicographic Ordering

Corollary 4. Given a lattice (L, ≤_l) and an order preserving mapping f as a polynomial function, determining whether f has a unique fixed point in L is a Co-NP hard problem.

Proof. Consider a 3-SAT problem with a 3CNF-formula F(x_1, x_2, · · · , x_n). We define the function f as follows: f(−1) = −1, and for all i ≥ 0, writing i in binary form i_1 i_2 · · · i_n, let f(i) = i if F(i_1, i_2, · · · , i_n) = true and f(i) = i − 1 otherwise. Then f is an order preserving function on the lexicographic ordering lattice L = {−1, 0, · · · , 2^n − 1}.
If we find a fixed point f(i*) = i* with i* ≠ −1 on the lattice L, we find an assignment (i*_1, i*_2, · · · , i*_n) such that F(i*_1, i*_2, · · · , i*_n) = true for the 3-SAT problem. Since the 3-SAT problem is NP-hard, finding a second Tarski's fixed point in a lexicographic ordering lattice is NP-hard. Therefore, determining uniqueness is Co-NP-hard.

Proof of Co-NP-hardness in Componentwise Ordering

Corollary 5. Given a lattice (L, ≤_c) and an order preserving mapping f as a polynomial function, determining whether f has a unique fixed point in L is a Co-NP hard problem.

Proof. Again consider a 3-SAT problem with a 3CNF-formula F(x_1, x_2, · · · , x_n). For any node v = (v_1, v_2, · · · , v_d) in a d-dimensional componentwise ordering lattice, we define the function f(v) as follows:
1) f(v) = f((i, i, · · · , i)), where i = max{v_1, v_2, · · · , v_d};
2) f((−1, −1, · · · , −1)) = (−1, −1, · · · , −1);
3) for all i ≥ 0, writing i in binary form i_1 i_2 · · · i_n, f((i, i, · · · , i)) = (i, i, · · · , i) if F(i_1, i_2, · · · , i_n) = true, and f((i, i, · · · , i)) = (i − 1, i − 1, · · · , i − 1) otherwise.
Then f(v) is an order preserving function on the componentwise ordering lattice L = {(v_1, · · · , v_d) : v_i ∈ {−1, 0, · · · , 2^n − 1} for all i}. If we find a fixed point f(v*) = v* with v* ≠ (−1, −1, · · · , −1) on the lattice L, we find an assignment (i*_1, i*_2, · · · , i*_n) such that F(i*_1, i*_2, · · · , i*_n) = true for the 3-SAT problem. Since the 3-SAT problem is NP-hard, finding a second Tarski's fixed point in a componentwise ordering lattice is NP-hard. Therefore, determining uniqueness is Co-NP-hard.

Finding Equilibria in Supermodular Games

In the previous sections, we solved the computational problems for Tarski's fixed points. We are still interested in how to find a pure Nash equilibrium and how to determine the uniqueness of the pure Nash equilibrium in a supermodular game.
Given the strong connection between the equilibria of supermodular games and Tarski's fixed points, the question is whether the previous results carry over to supermodular games. In this section, we develop a polynomial-time algorithm that finds a pure Nash equilibrium and is more efficient than the algorithm we designed for Tarski's fixed point above. The Co-NP-hardness result, however, still holds for supermodular games.

Supermodular Games and Tarski's Fixed Points

We will start with the formal definition of supermodular games.

Definition 8. (Supermodular Games) Let Γ = {(S_i, u_i) : i = 1, · · · , d} be a finite supermodular game with d players:
• S_i is a finite subset of R;
• u_i has increasing differences in (s_i, s_{−i}), where s_{−i} is the strategy profile of all players other than player i, i.e., u_i(s′_i, s′_{−i}) − u_i(s_i, s′_{−i}) ≥ u_i(s′_i, s_{−i}) − u_i(s_i, s_{−i}), ∀ s′_i ≥ s_i, s′_{−i} ≥ s_{−i}.

In the following discussion, W.L.O.G., we assume S_i = {0, 1, · · · , N_i − 1}. The model can be viewed as a discretized version of a game with continuous strategy spaces, where each S_i is an interval. Then S = ×_{i=1}^d S_i is a componentwise lattice (see Example 1). Let B_i(s_{−i}) := arg max_{s_i ∈ S_i} u_i(s_i, s_{−i}). Denote by B̄_i(s_{−i}) the greatest element and by B_i(s_{−i}) the least element in this best-response set. In supermodular games, by Topkis' theorem [22], we have B̄_i(s_{−i}) ≥ B̄_i(s′_{−i}) and B_i(s_{−i}) ≥ B_i(s′_{−i}) whenever s_{−i} ≥ s′_{−i}. Let B(s) = {B_i(s_{−i}) : i = 1, · · · , d} be the least best-response function of the game; then B : S → S is order preserving. Tarski's fixed point theorem guarantees the existence of fixed points of any order preserving function f : L → L on any nonempty complete lattice. In a supermodular game, (S, ≤) is a complete lattice and the least best-response function B is order preserving from S to itself. Therefore, there exists an equilibrium point x* ∈ S such that B(x*) = x*.

Equilibrium Computation in Supermodular Games

Recall that B(s) = {B_i(s_{−i}) : i = 1, · · · , d}, where s ∈ S and S = ×_{i=1}^d S_i, is the least best-response function of the supermodular game.
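As a hypothetical numerical illustration of these definitions (the payoff u below is my own toy choice, not an example from the paper), one can check increasing differences directly and observe Topkis-style monotonicity of the least best response:

```python
S = [0, 1, 2]                              # common strategy set for both players

def u(si, sj):
    # Toy symmetric payoff u_i(s_i, s_j) = s_i * s_j - s_i.
    return si * sj - si

def least_br(sj):
    """Least element of argmax_{s_i} u(s_i, s_j): the least best response."""
    best = max(u(si, sj) for si in S)
    return min(si for si in S if u(si, sj) == best)

# Increasing differences: u(b,t) - u(a,t) >= u(b,s) - u(a,s) for b >= a, t >= s.
inc_diff = all(u(b, t) - u(a, t) >= u(b, s) - u(a, s)
               for a in S for b in S if b >= a
               for s in S for t in S if t >= s)

# Topkis: the least best response is order preserving in the opponent's strategy.
monotone = all(least_br(s) <= least_br(t) for s in S for t in S if s <= t)
```

For this game the least best responses at s_j = 0, 1, 2 are 0, 0, 2, so (0, 0) and (2, 2) are both pure Nash equilibria, illustrating that uniqueness need not hold.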
We assume the strategy set of each player i is S_i = {0, 1, · · · , N_i − 1}. In Tarski's fixed point theorem, the only requirement on the function f is that it is order preserving. In a supermodular game, B is not only order preserving but also consistent: for the same s_{−i}, the value of B_i(s_{−i}) is always the same. Therefore, if there exists an algorithm A that finds a Tarski fixed point for any componentwise lattice with an order preserving function f in time T, then A finds an equilibrium of a supermodular game in time T. The converse, however, does not hold. W.L.O.G., assume N_1 ≤ N_2 ≤ · · · ≤ N_d.

Theorem 9. When the best response function B is given as an oracle, a pure Nash equilibrium can be found in time O(log N_1 log N_2 · · · log N_{d−1}) in a supermodular game Γ.

Proof. The algorithm is similar to the proof of Theorem 4 for finding one Tarski fixed point. The only difference is in the 2D case. On an N_1 × N_2 box, we start with the node (N_1/2, y), where 1 ≤ y ≤ N_2 can be any integer. We query the value of B(N_1/2, y); assume the value is (x′, y′). Next we query the value of B(N_1/2, y′). Because the previous query already told us that B_2(N_1/2) = y′, we must have B(N_1/2, y′) = (x″, y′) for some x″.
1. If x″ = N_1/2, then (N_1/2, y′) is a pure Nash equilibrium.
2. If x″ < N_1/2, all nodes smaller than (x″, y′) form a complete lattice whose size is less than half of the original lattice.
3. If x″ > N_1/2, all nodes greater than (x″, y′) form a complete lattice whose size is also less than half of the original lattice.
Therefore, by using this property of the best response function, a pure Nash equilibrium can be found in time 2 log N_1 in the 2D case. Recall that finding a Tarski fixed point takes time O(log N_1 log N_2) in the 2D case. The generalization to higher dimensions is similar to the one used for finding one Tarski fixed point. Thus we can find a pure Nash equilibrium in time O(log N_1 log N_2 · · · log N_{d−1}).
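A rough executable sketch of the 2D halving step, in the spirit of the proof above (the oracle B below is a stand-in supermodular game of my own, and the bookkeeping is simplified rather than the paper's exact algorithm): each round spends two oracle queries and shrinks player 1's remaining range.

```python
def find_nash_2d(B, n1):
    """Halving-search sketch for a 2-player supermodular game with strategy
    sets {0,...,n1-1} x {0,...,n2-1}; B is the least joint best-response oracle."""
    lo, hi, y = 0, n1 - 1, 0
    while True:
        mid = (lo + hi) // 2
        _, y1 = B((mid, y))            # learn B_2(mid) = y1
        x2, _ = B((mid, y1))           # by consistency the second coordinate stays y1
        if x2 == mid:
            return (mid, y1)           # (mid, y1) is a pure Nash equilibrium
        if x2 < mid:
            hi, y = x2, y1             # an equilibrium survives in the lower sublattice
        else:
            lo, y = x2, y1             # ... or in the upper one

# Stand-in oracle from the toy game u_i(s_i, s_j) = s_i * s_j - s_i on {0,1,2}.
def B(s):
    def lbr(t):
        vals = [si * t - si for si in range(3)]
        return vals.index(max(vals))   # first maximizer = least best response
    return (lbr(s[1]), lbr(s[0]))

eq = find_nash_2d(B, 3)
```

On this instance the search descends to the least equilibrium (0, 0) in two rounds, matching the two-queries-per-halving count in the proof.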
Again, by a reduction from the 3-SAT problem, we obtain:

Theorem 10. Given a d-player supermodular game Γ with strategy sets S = ×_{i=1}^d S_i, where S_i = {−1, 0, · · · , 2^n − 1} for every i, and a best response function B given as a polynomial function, determining whether Γ has a unique Nash equilibrium is a Co-NP-hard problem.

Proof. Consider a 3-SAT problem with a 3CNF-formula F(x_1, x_2, · · · , x_n). Let B(s) = {B_i(s_{−i}) : i = 1, · · · , d}, where s ∈ S and S = ×_{i=1}^d S_i, be the best response function of the supermodular game. We define each B_i(s_{−i}) as follows: 1) B_i(s_{−i}) = B_i((j, j, · · · , j)), where j = max{s_{−i}}. 2) B_i((−1, −1, · · · , −1)) = −1. 3) For every j ≥ 0, we write j in binary form j_1 j_2 · · · j_n; B_i((j, j, · · · , j)) = j if F(j_1, j_2, · · · , j_n) = true, and B_i((j, j, · · · , j)) = j − 1 otherwise. For all s = (s_1, s_2, · · · , s_d) ≥ (s′_1, s′_2, · · · , s′_d) = s′, we have B(s) ≥ B(s′), which implies that B(s) is an order preserving function. Then B(s) = {B_i(s_{−i}) : i = 1, · · · , d} is a best response function for the supermodular game Γ.

Next we prove that if s* = (s*_1, s*_2, · · · , s*_d) is an equilibrium of the supermodular game Γ with the above best response function B, then we must have s*_1 = s*_2 = · · · = s*_d, i.e., all the elements of s* must be identical. Suppose (x_1, x_2, · · · , x_d) is an equilibrium. By the definition of B_i, max(x_2, x_3, · · · , x_d) = x_1 or x_1 + 1.
1) Case 1: max(x_2, x_3, · · · , x_d) = x_1. Then max(x_1, x_3, · · · , x_d) = x_1, so B_2(x_{−2}) = B_2(x_1, x_1, · · · , x_1) = x_1. Since x is an equilibrium, x_2 = B_2(x_{−2}). Therefore, x_2 = x_1. Similarly, we can prove x_i = x_1 for all i.
2) Case 2: max(x_2, x_3, · · · , x_d) = x_1 + 1. We consider two cases: a) only one element of {x_2, x_3, · · · , x_d} is x_1 + 1, and b) at least two elements equal x_1 + 1.
a) W.L.O.G., assume only x_2 = x_1 + 1. Then B_2(x_{−2}) = B_2(x_1, x_1, · · · , x_1) = x_1 or x_1 − 1 < x_2.
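The structure of this proof can be checked by brute force on a tiny instance; the predicate `satisfies` below is a hypothetical stand-in for F (not from the paper), and the search confirms that every equilibrium of the constructed game is diagonal:

```python
from itertools import product

satisfies = lambda j: j in (1, 3)          # hypothetical satisfying inputs of F

def g(j):
    """One-dimensional reduction map shared by all B_i."""
    if j == -1:
        return -1
    return j if satisfies(j) else j - 1

def B_i(s_minus_i):
    # Route through the diagonal node of the maximum opposing strategy.
    return g(max(s_minus_i))

d = 3
nodes = product(range(-1, 4), repeat=d)
equilibria = [s for s in nodes
              if all(s[i] == B_i(s[:i] + s[i + 1:]) for i in range(d))]
```

On this instance the only equilibria are the diagonal profiles at −1 and at the satisfying values, exactly as the case analysis predicts.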
This contradicts the assumption that x is an equilibrium. b) W.L.O.G., assume x_2 = x_3 = x_1 + 1. Then B_2(x_{−2}) = B_2(x_1 + 1, x_1 + 1, · · · , x_1 + 1) = B_1(x_{−1}) = x_1 < x_2. Again this contradicts the assumption that x is an equilibrium. Therefore, if s* = (s*_1, s*_2, · · · , s*_d) is an equilibrium of the supermodular game Γ with the above best response function B, we must have s*_1 = s*_2 = · · · = s*_d. Hence, if we find an equilibrium s* with s* ≠ (−1, −1, · · · , −1) in Γ, we find an assignment (j*_1, j*_2, · · · , j*_n) such that F(j*_1, j*_2, · · · , j*_n) = true for the 3-SAT problem. Since 3-SAT is NP-hard, finding a second equilibrium is NP-hard. Therefore, determining the uniqueness is Co-NP-hard.

Conclusion and Open Problems

Our results on Tarski's fixed points contrast with past results on general fixed point computation in several ways. First, in the oracle function model, several fixed point computational problems are known to require an exponential number of queries even in constant dimensions, including the two dimensional case (Chen and Deng; Hirsch et al.) [8, 14]. Our results show that finding a Tarski fixed point is polynomial in the oracle model. It follows that the same holds in the polynomial function model, which again differs from those fixed point computational problems that are known to be PPAD-complete in constant dimensions, including the two dimensional case (Chen and Deng) [7]. Recently, Mihalis, Kusha and Papadimitriou stated in a private communication that they proved a lower bound of Ω(log^2 |L|) in the oracle function model for finding a Tarski fixed point in the two dimensional case. Together with our upper bound results, we conjecture a matching bound of Θ(log^d |L|) for general d. In the polynomial function model, we prove that determining uniqueness is Co-NP-complete. In comparison, the uniqueness of Nash equilibrium is known to be Co-NP-complete, while its existence is in PPAD.
The above comparisons with previous work leave the following outstanding open problem: Is it PPAD-complete to find a Tarski fixed point in variable dimension n in the polynomial function model? The analogous problem of finding a Sperner simplex in dimension n is known to be PPAD-complete when n is a variable. We conjecture that the same is true for finding a Tarski fixed point.

Appendix: Alternative proofs of Co-NP-hardness

Let P = {x ∈ R^n | Ax ≤ b} be a full-dimensional polytope, where A is an m × n rational matrix such that each row of A has at most one positive entry, and b is a rational vector of R^m. It has been shown in (Lagarias) [15] that:

Theorem 11. [15] Determining whether there is an integer point in P is an NP-complete problem.

Proof of Co-NP-hardness in Lexicographic Ordering

Corollary 6. Given a lattice (L, ≤_l) and an order preserving mapping f given as a polynomial function, determining whether f has a unique fixed point in L is a Co-NP-hard problem.

We assume n ≥ 2. Similarly, let x^max = (x^max_1, x^max_2, . . . , x^max_n) with x^max_j = max_{x∈P} x_j, j = 1, 2, . . . , n, and x^min = (x^min_1, x^min_2, . . . , x^min_n) with x^min_j = min_{x∈P} x_j, j = 1, 2, . . . , n. Let D(P) = {x ∈ Z^n | x^l ≤_l x ≤_l x^u}, where x^u = ⌈x^max⌉ and x^l = ⌊x^min⌋. For y ∈ R^n and k ∈ N ∪ {0}, let P(y, k) = P if k = 0, and P(y, k) = {x ∈ P | x_i = y_i, i = 1, 2, . . . , k} otherwise.

Definition 9. For y ∈ D(P), h(y) = (h_1(y), h_2(y), . . . , h_n(y)) ∈ D(P) is given as follows:
Step 1: If y_1 = x^l_1, let h(y) = x^l. If y ∈ P, let h(y) = y. Otherwise, let k = 2 and go to Step 2.
Step 2: Solve the linear program min x_k − v_k subject to x ∈ P(y, k − 1) and v ∈ P(y, k − 1), to obtain an optimal solution (x*, v*). Let d^min_k(y) = x*_k and d^max_k(y) = v*_k. If y_k ≥ ⌈d^min_k(y)⌉, go to Step 3. Otherwise, go to Step 4.
Step 3: If ⌊d^max_k(y)⌋ < ⌈d^min_k(y)⌉, go to Step 4. Otherwise, go to Step 5.
Step 4: Let p(y) = k.
If y k−1 ≤ x l k−1 + 1, let h i (y) = y i if 1 ≤ i ≤ k − 2, x l i if k − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Otherwise, let h i (y) =    y i if 1 ≤ i ≤ k − 2, y k−1 − 1 if i = k − 1, x u i if k ≤ i ≤ n, i = 1, 2, . . . , n. Step 5: If y k > d max k (y) , let p(y) = k and h i (y) =    y i if 1 ≤ i ≤ k − 1, d max k (y) if i = k, x u i if k + 1 ≤ i ≤ n, i = 1, 2, . . . , n. Otherwise, let k = k + 1 and go to Step 2. Lemma 1. x l ≤ h(y) ≤ l y and h(y) = y for all y ∈ D(P ) with y = x l and y / ∈ P . Proof. Clearly, the lemma holds for all y ∈ D(P ) with y 1 = x l 1 and y = x l . Let y be any given point in D(P ) with y 1 = x l 1 and y / ∈ P and k = p(y). From the definition of h(y), we obtain that k is well defined, k ≥ 2, and x l i < d min k (y) ≤ y i ≤ d max k (y) , i = 1, 2, . . . , k − 1. Furthermore, one of the following five cases must occur. Case 1: y k ≥ d min k (y) , d max k (y) < d min k (y) and y k−1 ≤ x l k−1 + 1. From Step 4, we find that h i (y) = y i if 1 ≤ i ≤ k − 2, x l i if k − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Thus, it follows from y k−1 > x l k−1 that x l ≤ h(y) ≤ l y and h(y) = y. Case 2: y k ≥ d min k (y) , d max k (y) < d min k (y) and y k−1 > x l k−1 + 1. From Step 4, we find that h i (y) =    y i if 1 ≤ i ≤ k − 2, y k−1 − 1 if i = k − 1, x u i if k ≤ i ≤ n, i = 1, 2, . . . , n. Thus, it follows from y k−1 − 1 < y k−1 that x l ≤ h(y) ≤ l y and h(y) = y. Case 3: y k ≥ d min k (y) , d max k (y) ≥ d min k (y) and y k > d max k (y) . From Step 5, we find that 1, 2, . . . , n. Thus, it follows from y k > d max k (y) that x l ≤ h(y) ≤ l y and h(y) = y. h i (y) =    y i if 1 ≤ i ≤ k − 1, d max k (y) if i = k, x u i if k + 1 ≤ i ≤ n, i = Case 4: y k < d min k (y) and y k−1 ≤ x l k−1 + 1. From Step 4, we find that h i (y) = y i if 1 ≤ i ≤ k − 2, x l i if k − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Thus, it follows from y k−1 > x l k−1 that x l ≤ h(y) ≤ l y and h(y) = y. Case 5: Consider the case that y k < d min k (y) and y k−1 > x l k−1 + 1. 
From Step 4, we find that h i (y) =    y i if 1 ≤ i ≤ k − 2, y k−1 − 1 if i = k − 1, x u i if k ≤ i ≤ n, i = 1, 2, . . . , n. Thus, it follows from y k−1 − 1 < y k−1 that x l ≤ h(y) ≤ l y and h(y) = y. Therefore, it always holds that x l ≤ h(y) ≤ l y and h(y) = y. The proof is completed. As a corollary of Lemma 1, we obtain that Corollary 7. For any given x * ∈ D(P ), x * ∈ P if and only if h(x * ) = x * and x * = x l . Theorem 12. Under the lexicographic ordering, h is an order preserving mapping from D(P ) into itself. Proof. Let y 1 and y 2 be any given two points in D(P ) with y 1 ≤ l y 2 and y 1 = y 2 . Let q be the index in N satisfying that y 1 i = y 2 i , i = 1, 2, . . . , q − 1, and y 1 q < y 2 q . From the definition of h, we obtain that h(y 1 ) = x l if y 1 1 = x l 1 and that h(y 2 ) = y 2 if y 2 ∈ P . Thus, when y 1 1 = x l 1 or y 2 ∈ P , it follows from Lemma 1 that h(y 1 ) ≤ l h(y 2 ). Suppose that y 1 1 = x l 1 and y 2 / ∈ P . Let k 1 = p(y 1 ) and k 2 = p(y 2 ). From the definition of h(y 2 ), we obtain that k 2 is well defined and k 2 ≥ 2. Case 1: 2 ≤ k 2 ≤ q − 1. From y 1 i = y 2 i , i = 1, 2, . . . , q − 1, we derive that k 1 = k 2 . Thus, h(y 1 ) = h(y 2 ). Therefore, h(y 1 ) ≤ l h(y 2 ). Case 2: 2 ≤ k 2 = q. From the definition of k 2 = p(y 2 ), we know that x l i < d min i (y 2 ) ≤ y 2 i ≤ d max i (y 2 ) , i = 1, 2, . . . , q − 1. Since y 1 i = y 2 i , i = 1, 2, . . . , q − 1, hence, d min i (y 1 ) = d min i (y 2 ) and d max 1. Suppose that y 2 q ≥ d min q (y 2 ) , d max q (y 2 ) < d min q (y 2 ) and y 2 q−1 ≤ x l q−1 + 1. From Step 4, we find that h i (y 2 ) = y 2 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Since d max q (y 2 ) < d min q (y 2 ) , d min q (y 1 ) = d min q (y 2 ) and d max q (y 1 ) = d max q (y 2 ) , we derive that k 1 = k 2 = q. Thus, it follows from y 1 q−1 = y 2 q−1 ≤ x l q−1 + 1 and Step 4 that h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n. 
Therefore, h(y 1 ) = h(y 2 ), and consequently h(y 1 ) ≤ l h(y 2 ). 2. Suppose that y 2 q ≥ d min q (y 2 ) , d max q (y 2 ) < d min q (y 2 ) and y 2 q−1 > x l q−1 + 1. From Step 4, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q − 2, y 2 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Since d max q (y 2 ) < d min q (y 2 ) , d min q (y 1 ) = d min q (y 2 ) and d max q (y 1 ) = d max q (y 2 ) , we derive that k 1 = k 2 = q. Thus, it follows from y 1 q−1 = y 2 q−1 > x l q−1 + 1 and Step 4 that h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, h(y 1 ) = h(y 2 ), and consequently h(y 1 ) ≤ l h(y 2 ). 3. Suppose that y 2 q ≥ d min q (y 2 ) , d max q (y 2 ) ≥ d min q (y 2 ) and y 2 q > d max q (y 2 ) . From Step 5, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q − 1, d max q (y 2 ) if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. • Consider that y 1 ∈ P . Then, h(y 1 ) = y 1 and d min i (y 1 ) ≤ y 1 i ≤ d max i (y 1 ) , i = 1, 2, . . . , n. Thus, from d max q (y 1 ) = d max q (y 2 ) , we ob- tain that h q (y 1 ) = y 1 q ≤ d max q (y 2 ) = h q (y 2 ) . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h i (y 1 ) ≤ h i (y 2 ), i = q, q + 1, . . . , n. • Consider that y 1 / ∈ P . From y 1 i = y 2 i , i = 1, 2, . . . , q − 1, and k 2 = q, we derive that k 1 ≥ q. (a) Assume that k 1 = q. Since d max q (y 1 ) ≥ d min q (y 1 ) , hence, either y 1 q > d max q (y 1 ) or y 1 q < d min q (y 1 ) . Thus, from the definition of h(y 1 ), we obtain that, when y 1 q > d max q (y 1 ) , h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 1, d max q (y 1 ) if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n; and when y 1 q < d min q (y 1 ) , if y 1 q−1 ≤ x l q−1 + 1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i otheriwse, i = 1, 2, . . . 
, n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, when y 1 q > d max q (y 1 ) , h(y 1 ) ≤ l h(y 2 ) follows from h(y 1 ) = h(y 2 ); and when y 1 q < d min q (y 1 ) , if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = x l q−1 < y 2 q−1 = h q−1 (y 2 ), and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ). (b) Assume that k 1 > q. Then, k 1 − 1 ≥ q and d min i (y 1 ) ≤ y 1 i ≤ d max i (y 1 ) , i = 1, 2, . . . , k 1 − 1. Thus, from the definition of h(y 1 ), we obtain that h i (y 1 ) = y 1 i = y 2 i = h i (y 2 ), i = 1, 2, . . . , q − 1, h q (y 1 ) ≤ y 1 q ≤ d max q (y 1 ) = d max q (y 2 ) = h q (y 2 ), and h i (y 1 ) ≤ x u i = h i (y 2 ), i = q + 1, q + 2, . . . , n. Therefore, h(y 1 ) ≤ l h(y 2 ). 4. Suppose that y 2 q < d min q (y 2 ) and y 2 q−1 ≤ x l q−1 + 1. From Step 4, we find that h i (y 2 ) = y 2 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Since y 1 i = y 2 i , i = 1, 2, . . . , q − 1, y 1 q < y 2 q , d min q (y 1 ) = d min q (y 2 ) , and k 2 = q, hence, we derive that k 1 = k 2 = q and y 1 q < d min q (y 1 ) . Thus, it follows from y 1 q−1 = y 2 q−1 and Step 4 that h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, h(y 1 ) = h(y 2 ) and h(y 1 ) ≤ l h(y 2 ). 5. Suppose that y 2 q < d min q (y 2 ) and y 2 q−1 > x l q−1 + 1. From Step 4, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q − 2, y 2 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Since y 1 i = y 2 i , i = 1, 2, . . . , q − 1, y 1 q < y 2 q , d min q (y 1 ) = d min q (y 2 ) , and k 2 = q, hence, we derive that k 1 = k 2 = q and y 1 q < d min q (y 1 ) . 
Thus, it follows from y 1 q−1 = y 2 q−1 and Step 4 that h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, h(y 1 ) = h(y 2 ) and h(y 1 ) ≤ l h(y 2 ). Case 3: 2 ≤ k 2 = q + 1. From the definition of k 2 , we know that x l i < d min i (y 2 ) ≤ y 2 i ≤ d max i (y 2 ) , i = 1, 2, . . . , q. Since y 1 i = y 2 i , i = 1, 2, . . . , q − 1, hence, d min i (y 1 ) = d min i (y 2 ) and d max i (y 1 ) = d max i (y 2 ) , i = 1, 2, . . . , q, x l i < d min i (y 1 ) ≤ y 1 i ≤ d max i (y 1 ) , i = 1, 2, . . . , q − 1, and d min q (y 1 ) ≤ d max q (y 1 ) . 1. Suppose that y 2 q+1 ≥ d min q+1 (y 2 ) , d max q+1 (y 2 ) < d min q+1 (y 2 ) and y 2 q ≤ x l q + 1. From Step 4, we find that h i (y 2 ) = y 2 i if 1 ≤ i ≤ q − 1, x l i if q ≤ i ≤ n, i = 1, 2, . . . , n. Since y 1 q < y 2 q ≤ x l q + 1, we get that y 1 q = x l q and k 1 = q ≥ 2. Thus, it follows from y 1 q = x l q < d min q (y 1 ) and Step 4 that, if y 1 q−1 ≤ x l q−1 +1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = x l q−1 < y 2 q−1 = h q−1 (y 2 ), and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 2 q−1 = h q−1 (y 2 ). 2. Suppose that y 2 q+1 ≥ d min q+1 (y 2 ) , d max q+1 (y 2 ) < d min q+1 (y 2 ) and y 2 q > x l q + 1. From Step 4, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q − 1, y 2 q − 1 if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. • Assume that y 1 ∈ P . Thus, h(y 1 ) = y 1 . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . 
, q − 1, h q (y 1 ) = y 1 q ≤ y 2 q − 1 = h q (y 2 ), and h i (y 1 ) ≤ x u i = h i (y 2 ), i = q + 1, q + 2, . . . , n. • Assume that y 1 / ∈ P . Then, we must have k 1 ≥ q. Consider that k 1 = q. Since d min q (y 1 ) ≤ d max q (y 1 ) , hence, either y 1 q > d max q (y 1 ) or y 1 q < d min q (y 1 ) . (a) Suppose that y 1 q > d max q (y 1 ) . From Step 5, we obtain that h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 1, d max q (y 1 ) if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. Thus, h q (y 1 ) < y 1 q . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h q (y 1 ) < y 1 q ≤ y 2 q − 1 = h q (y 2 ). (b) Suppose that y 1 q < d min q (y 1 ) . From Step 4, we obtain that, if y 1 q−1 ≤ x l q−1 + 1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i otheriwse, i = 1, 2, . . . , n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q−2, and h q−1 (y 1 ) = x l q−1 < y 2 q−1 = h q−1 (y 2 ), and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 2 q−1 = h q−1 (y 2 ). Consider that k 1 > q. From the definition of h(y 1 ), we derive that h i (y 1 ) = y 1 i , i = 1, 2, . . . , q − 1, and h q (y 1 ) ≤ y 1 q . Thus, h(y 1 ) ≤ l h(y 2 ) follows immediately from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, h q (y 1 ) ≤ y 1 q ≤ y 2 q − 1 = h q (y 2 ), and h i (y 1 ) ≤ x u i = h i (y 2 ), i = q + 1, q + 2, . . . , n. 3. Suppose that y 2 q+1 ≥ d min q+1 (y 2 ) , d max q+1 (y 2 ) ≥ d min q+1 (y 2 ) and y 2 q+1 > d max q+1 (y 2 ) . From Step 5, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q, d max q+1 (y 2 ) if i = q + 1, x u i if q + 2 ≤ i ≤ n, i = 1, 2, . . . , n. • Assume that y 1 ∈ P . Then, h(y 1 ) = y 1 . 
Thus, from y 1 q < y 2 q , we obtain that h q (y 1 ) < y 2 q = h q (y 2 ). Therefore, h(y 1 ) ≤ l h(y 2 ) follows immediately from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h q (y 1 ) < h q (y 2 ). • Assume that y 1 / ∈ P . Then, we must have k 1 ≥ q. Consider that k 1 = q. Since d min q (y 1 ) ≤ d max q (y 1 ) , hence, either y 1 q > d max q (y 1 ) or y 1 q < d min q (y 1 ) . (a) Suppose that y 1 q > d max q (y 1 ) . From Step 5, we obtain that h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 1, d max q (y 1 ) if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. Thus, h q (y 1 ) < y 1 q . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h q (y 1 ) < y 1 q < y 2 q = h q (y 2 ). (b) Suppose that y 1 q < d min q (y 1 ) . From Step 4, we obtain that, if y 1 q−1 ≤ x l q−1 + 1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i otheriwse, i = 1, 2, . . . , n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q−2, and h q−1 (y 1 ) = x l q−1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ) , and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ). Consider that k 1 > q. From the definition of h(y 1 ), we derive that h i (y 1 ) = y 1 i , i = 1, 2, . . . , q − 1, and h q (y 1 ) ≤ y 1 q . Thus, h(y 1 ) ≤ l h(y 2 ) follows immediately from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h q (y 1 ) ≤ y 1 q < y 2 q = h q (y 2 ). 4. Suppose that y 2 q+1 < d min q+1 (y 2 ) and y 2 q ≤ x l q + 1. From Step 4, we find that h i (y 2 ) = y 2 i if 1 ≤ i ≤ q − 1, x l i if q ≤ i ≤ n, i = 1, 2, . . . , n. Since y 1 q < y 2 q ≤ x l q + 1, we get that y 1 q = x l q and k 1 = q ≥ 2. 
Thus, we obtain from Step 4 that, if y 1 q−1 ≤ x l q−1 + 1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i if q − 1 ≤ i ≤ n, i = 1, 2, . . . , n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. Therefore, if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = x l q−1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ), and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ). 5. Suppose that y 2 q+1 < d min q+1 (y 2 ) and y 2 q > x l k 2 −1 + 1. From Step 4, we find that h i (y 2 ) =    y 2 i if 1 ≤ i ≤ q − 1, y 2 q − 1 if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. • Assume that y 1 ∈ P . Thus, h(y 1 ) = y 1 . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, h q (y 1 ) = y 1 q ≤ y 2 q − 1 = h q (y 2 ), and h i (y 1 ) ≤ x u i = h i (y 2 ), i = q + 1, q + 2, . . . , n. • Assume that y 1 / ∈ P . Then, we must have that k 1 ≥ q. Consider that k 1 = q. Since d min q (y 1 ) ≤ d max q (y 1 ) , hence, either y 1 q > d max q (y 1 ) or y 1 q < d min q (y 1 ) . (a) Suppose that y 1 q > d max q (y 1 ) . From Step 5, we obtain that h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 1, d max q (y 1 ) if i = q, x u i if q + 1 ≤ i ≤ n, i = 1, 2, . . . , n. Thus, h q (y 1 ) < y 1 q . Therefore, h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, and h q (y 1 ) < y 1 q ≤ y 2 q − 1 = h q (y 2 ). (b) Suppose that y 1 q < d min q (y 1 ) . From Step 4, we obtain that, if y 1 q−1 ≤ x l q−1 + 1, h i (y 1 ) = y 1 i if 1 ≤ i ≤ q − 2, x l i otheriwse, i = 1, 2, . . . , n, and if y 1 q−1 > x l q−1 + 1, h i (y 1 ) =    y 1 i if 1 ≤ i ≤ q − 2, y 1 q−1 − 1 if i = q − 1, x u i if q ≤ i ≤ n, i = 1, 2, . . . , n. 
Therefore, if y 1 q−1 ≤ x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q−2, and h q−1 (y 1 ) = x l q−1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ), and if y 1 q−1 > x l q−1 + 1, then h(y 1 ) ≤ l h(y 2 ) follows from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 2, and h q−1 (y 1 ) = y 1 q−1 − 1 < y 1 q−1 = y 2 q−1 = h q−1 (y 2 ). Consider that k 1 > q. From the definition of h(y 1 ), we derive that h i (y 1 ) = y 1 i , i = 1, 2, . . . , q − 1, and h q (y 1 ) ≤ y 1 q . Thus, h(y 1 ) ≤ l h(y 2 ) follows immediately from h i (y 1 ) = h i (y 2 ), i = 1, 2, . . . , q − 1, h q (y 1 ) ≤ y 1 q ≤ y 2 q − 1 = h q (y 2 ), and h i (y 1 ) ≤ x u i = h i (y 2 ), i = q + 1, q + 2, . . . , n. Case 4: k 2 > q + 1. From k 2 − 1 > q, we obtain that h i (y 2 ) = y 2 i , i = 1, 2, . . . , q. Thus, y 1 ≤ l h(y 2 ) since y 1 i = y 2 i , i = 1, 2, . . . , q − 1, and y 1 q < y 2 q . Therefore, it follows immediately from Lemma 1 that h(y 1 ) ≤ l h(y 2 ). For x ∈ R n , we define h(x) = d(x) with d(x) =    x l if P (x) = ∅, argmax y∈P (x) e y otherwise. It follows from Lemma 3 that d(x) is well defined. Proof. Let x 1 and x 2 be two different points of R n with x 1 ≤ c x 2 . Then, P (x 1 ) ⊆ P (x 2 ). Thus, from the definition of d(x), we obtain that x min ≤ c d(x 1 ) ≤ c d(x 2 ) ≤ c x max . The first part of the lemma follows immediately. Let x * be an integer point in P . Then, d(x * ) = argmax y∈P (x * ) e y = x * . Thus, h(x * ) = x * = x l . Let x * be a point in R n satisfying that h(x * ) = x * = x l . Suppose that P (x * ) = ∅. Then, d(x * ) = x l . Thus, x * = h(x * ) = x l = x l . A contradiction occurs. Therefore, P (x * ) = ∅ and, consequently, d(x * ) ∈ P . Since x * ≥ c d(x * ) ≥ c d(x * ) = h(x * ) = x * , hence, d(x * ) = d(x * ) = x * . This completes the proof. Let (L, ≤ c ) be a finite lattice and f an order preserving mapping from L into itself. As a corollary of Theorem 11 and Lemma 4, we obtain that Corollary 9. 
Given a lattice (L, ≤_c) and an order preserving mapping f given as a polynomial function, determining whether f has a unique fixed point in L is a Co-NP-hard problem.

2.1 The Lattice and Tarski's Fixed Point Theorem

Definition 1. (Partial Order vs. Total Order) A relationship ⪯ on a set L is a partial order if it satisfies reflexivity (∀a ∈ L : a ⪯ a); antisymmetry (a ⪯ b and b ⪯ a implies a = b); and transitivity (a ⪯ b and b ⪯ c implies a ⪯ c). It is a total order if ∀a, b ∈ L: either a ⪯ b or b ⪯ a.

Definition 2. (Lattice) (L, ⪯) is a lattice if 1. L is a partially ordered set; 2. there are two operations, meet ∧ and join ∨, defined on any pair of elements a, b of L such that a, b ⪯ a ∨ b and a ∧ b ⪯ a, b.

Definition 3. (Order Preserving Function) A function f on a lattice (L, ⪯) is order preserving if a ⪯ b implies f(a) ⪯ f(b).

Theorem 1. (Tarski's Fixed Point Theorem)

Definition 4. (Lexicographic Ordering Function) Given a set of points in a d-dimensional space R^d, the lexicographic ordering function ≤_l is defined as:

Figure 1: A polynomial algorithm for a 2D lattice. Consider the algorithm Point check(L, f).

Algorithm 3.2. Fixed point() (A polynomial algorithm for any constant dimensional lattice)

Theorem 4. When the order preserving function f is given as an oracle, a Tarski fixed point can be found in time O(log^d |L|) on a componentwise ordering lattice (L, ≤_c). Let maxindex(x) = max{i : x_i > 0} for any non-zero vector x. Define auxi(x) = −e_{maxindex(x)}, where e_i is the unit vector in the i-th coordinate. Therefore, auxi(·) is well defined on non-zero vectors in the lattice L. An example of the two dimensional case is demonstrated in Fig. 2. The fixed point is denoted by the red color. The directions of all the other points are defined by the function auxi(·).

Figure 2: auxi(x)

Example 1. (Supermodular game and componentwise lattice) Consider a supermodular game of two players: S_1 = {0, 1, 2}, S_2 = {0, 1}. Then S = S_1 × S_2 is a componentwise ordering lattice, as shown in Fig. 3.

Figure 3: An example of a componentwise ordering lattice. Let B_i denote player i's best-response function in Γ.

Example 2. Consider P = {x ∈ R^3 | Ax ≤ b}, b = (0, −10, 10, 0)^⊤. For y = (−3, −4, 5)^⊤, h(y) = (−3, −5, 5)^⊤. An illustration of h can be found in Fig. 4.

Figure 4: An Illustration of h

Lemma 4. h is an order preserving mapping from R^n to D(P). Moreover, h(x*) = x* ≠ x^l if and only if x* is an integer point in P.

Table 1: Main Results for Finding one Tarski's Fixed Point

Table 2: Main Results for Determining the Uniqueness of Tarski's Fixed Points

Corollary 8. Given a lattice (L, ≤_l) and an order preserving mapping f given as a polynomial function, determining whether f has a unique fixed point in L is a Co-NP-hard problem.

Proof of Co-NP-hardness in Componentwise Ordering

Let N = {1, 2, . . . , n} and N_0 = {0, 1, . . . , n}. For any real number α, let ⌊α⌋ denote the greatest integer less than or equal to α and ⌈α⌉ the smallest integer greater than or equal to α. For any vector x = (x_1, x_2, . . . , x_n) ∈ R^n, let ⌊x⌋ = (⌊x_1⌋, ⌊x_2⌋, . . . , ⌊x_n⌋) and ⌈x⌉ = (⌈x_1⌉, ⌈x_2⌉, . . . , ⌈x_n⌉). Given these notations, we present a polynomial-time reduction from integer programming, which is as follows. For any x ∈ R^n, let P(x) = {y ∈ P | y ≤_c x}. Then, as a direct result of the property of the matrix A, one can easily obtain that if x and y are in P, then x ∨ y ∈ P (Lemma 2). Let e = (1, 1, . . . , 1)^⊤ ∈ R^n. For any given v ∈ R^n, if P(v) ≠ ∅, Lemma 2 implies that max_{x∈P(v)} e^⊤x has a unique solution, which we denote by x^v.

Lemma 3. For any given v ∈ R^n with P(v) ≠ ∅, x ≤_c x^v for every x ∈ P(v).

Proof. Suppose that there is a point x^0 = (x^0_1, x^0_2, . . . , x^0_n) ∈ P(v) with x^0_k > x^v_k for some k ∈ N.
Then, Lemma 2 implies that x^0 ∨ x^v ∈ P(v). Thus, e^⊤(x^0 ∨ x^v) > e^⊤x^v = max_{x∈P(v)} e^⊤x. A contradiction arises. This completes the proof.

Let x^max = (x^max_1, x^max_2, . . . , x^max_n) be the unique solution of max_{x∈P} e^⊤x and x^min = (x^min_1, x^min_2, . . . , x^min_n) with x^min_j = min_{x∈P} x_j, j = 1, 2, . . . , n. Then D(P) = {x ∈ Z^n | x^l ≤_c x ≤_c x^u}, where x^u = ⌈x^max⌉ and x^l = ⌊x^min⌋. Thus, D(P) contains all integer points in P. Without loss of generality, we assume that x^l <_c ⌊x^min⌋ (let x^l_i = ⌊x^min_i⌋ − 1 for some i ∈ N). Obviously, the sizes of both x^l and x^u are bounded by polynomials in the sizes of the matrix A and the vector b, since x^l and x^u are obtained from solutions of linear programs with rational data.

References

F. Bernstein and A. Federgruen (2005). Decentralized supply chains with competing retailers under demand uncertainty, Management Science 51: 18-29.
F. Bernstein and A. Federgruen (2004). A general equilibrium model for industries with price and service competition, Operations Research 52: 868-886.
B. Codenotti, A. Saberi, K.R. Varadarajan and Y. Ye (2008). The complexity of equilibria: Hardness results for economies via a correspondence with games, Theoretical Computer Science 408(2-3): 188-198.
G.P. Cachon (2001). Stock wars: inventory competition in a two echelon supply chain, Operations Research 49: 658-674.
G.P. Cachon and M.A. Lariviere (1999).
Capacity choice and allocation: strategic behavior and supply chain performance, Management Sceince 45: 1091-1108. The complexity of Tarski's fixed point theorem. C L Chang, Y D Lyuu, Y W Ti, Theoretical Computer Science. 401C.L. Chang, Y.D. Lyuu and Y.W. Ti (2008). The complexity of Tarski's fixed point theorem, Theoretical Computer Science 401: 228-235. On the Complexity of 2D Discrete Fixed Point Problem. X Chen, X Deng, Theoretical Computer Science. 41044X. Chen and X. Deng (2009). On the Complexity of 2D Discrete Fixed Point Prob- lem, Theoretical Computer Science 410(44), 4448-4456. Matching algorithmic bounds for finding a Brouwer fixed point. X Chen, X Deng, Journal of the ACM. 55326X. Chen and X. Deng (2008). Matching algorithmic bounds for finding a Brouwer fixed point, Journal of the ACM 55(3), 13:1-13:26. Settling the computational complexity of two player Nash equilibrium. X Chen, X Deng, S Teng, Journal of the ACM (JACM). 356X. Chen, X. Deng and S. Teng (2008). Settling the computational complexity of two player Nash equilibrium, Journal of the ACM (JACM) 56(3). Finding all equilibria in games of strategic complements. F Echenique, Journal of Economic Theory. 135F. Echenique (2007). Finding all equilibria in games of strategic complements, Jour- nal of Economic Theory 135: 514-532. Game Theory. D Fudenberg, J Tirole, MIT PressD. Fudenberg and J. Tirole (1991). Game Theory, MIT Press. Nash and Correlated Equilibria: Some Complexity Considerations. I Gilboa, E Zemmel, Games and Economic Behavior. 1I. Gilboa and E. Zemmel (1989). Nash and Correlated Equilibria: Some Complexity Considerations, Games and Economic Behavior 1: 80-93. Reducibility Among Combinatorial Problems. R M Karp, R. E. MillerR.M. Karp (1972). Reducibility Among Combinatorial Problems. In: R. E. Miller; J W Thatcher, Complexity of Computer Computations. J.D. BohlingerNew YorkPlenumJ. W. Thatcher; J.D. Bohlinger (eds.) Complexity of Computer Computations. New York: Plenum: 85-103. 
Exponential Lower Bounds for Finding Brouwer Fixed Points. M D Hirsch, C H Papadimitriou, S Vavasis, Journal of Complexity. 5M.D. Hirsch, C.H. Papadimitriou and S. Vavasis (1989). Exponential Lower Bounds for Finding Brouwer Fixed Points, Journal of Complexity, Vol. 5, pp.379-416. The computational complexity of simultaneous Diophantine approximation problems. J C Lagarias, SIAM Journal on Computing. 14J.C. Lagarias (1985). The computational complexity of simultaneous Diophantine approximation problems, SIAM Journal on Computing 14: 196-209. The competitive newsboy. S A Lippman, K F Mccardle, Operations Research. 45S.A. Lippman and K.F. McCardle (1997). The competitive newsboy, Operations Research 45: 54-65. Rationalizability, learning, and equilibrium in games with strategic complementarities. P Milgrom, J Roberts, Econometrica. 58P. Milgrom and J. Roberts (1990). Rationalizability, learning, and equilibrium in games with strategic complementarities, Econometrica 58: 155-1277. Comparing equilibria. P Milgrom, J Roberts, American Economic Review. 84P. Milgrom and J. Roberts (1994). Comparing equilibria, American Economic Review 84: 441-459. P Milgrom, C Shannon, Monotone comparative statics. 62P. Milgrom and C. Shannon (1994). Monotone comparative statics, Econometrica 62: 157-180. An iterative method of solving a game. J Robinson, Annals of Mathematics. 542J. Robinson (1951). An iterative method of solving a game, Annals of Mathematics 54(2):296-301. A lattice-theoretical fixpoint theorem and its applications. A Tarski, Pacific Journal of Mathematics. 5A. Tarski (1955). A lattice-theoretical fixpoint theorem and its applications, Pacific Journal of Mathematics 5: 285-308. Ordered Optimal Decisions. D M Topkis, Ph.D. Dissertation, Stanford UniversityD.M. Topkis (1968). Ordered Optimal Decisions. Ph.D. Dissertation, Stanford Uni- versity. Equilibrium points in nonzero-sum n-person submodular games. D M Topkis, SIAM Journal on Control and Optimization. 17D.M. 
Topkis (1979). Equilibrium points in nonzero-sum n-person submodular games, SIAM Journal on Control and Optimization 17: 773-787. Supermodularity and Complementarity. D M Topkis, Princeton University PressD.M. Topkis (1998). Supermodularity and Complementarity, Princeton University Press. Nash equilibrium with strategic complementarities. X Vives, Journal of Mathematical Economics. 19X. Vives (1990). Nash equilibrium with strategic complementarities, Journal of Mathematical Economics 19: 305-321. Complemetarities and games: new developments. X Vives, Journal of Economic Literature XLIII. X. Vives (2005). Complemetarities and games: new developments, Journal of Eco- nomic Literature XLIII: 437-479. This completes the proof. From Definition 9, one can see that, for each y ∈ D(P ), it takes at most 2n linear programs to compute h(y). Therefore, h(y) is polynomial-time defined for any given y ∈ D(P ). Since 2 ≤ k 2 ≤ n, hence, one of the above four cases must occur. The above results show that, for every case, it always holds that h(y 1 ) ≤ l h(y 2 ). As a corollary of Theorem 12 and Theorem 11, we obtain thatSince 2 ≤ k 2 ≤ n, hence, one of the above four cases must occur. The above results show that, for every case, it always holds that h(y 1 ) ≤ l h(y 2 ). This completes the proof. From Definition 9, one can see that, for each y ∈ D(P ), it takes at most 2n linear programs to compute h(y). Therefore, h(y) is polynomial-time defined for any given y ∈ D(P ). As a corollary of Theorem 12 and Theorem 11, we obtain that
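The constructive side of Tarski's theorem can be illustrated on a toy finite lattice. The sketch below is our own minimal example, not the reduction above: for an order preserving map on a finite componentwise lattice, Kleene-style iteration from the bottom element reaches the least fixed point that Tarski's theorem guarantees. The map `f` and the lattice are invented for illustration.

```python
def tarski_least_fixed_point(f, bottom):
    """For an order preserving f on a finite componentwise lattice,
    iterating x <- f(x) from the bottom element converges (monotonically)
    to the least fixed point guaranteed by Tarski's theorem."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# An order preserving toy map on the lattice {0,1,2} x {0,1} (componentwise order).
def f(v):
    x, y = v
    return (min(x + 1, 2), y)

lfp = tarski_least_fixed_point(f, (0, 0))
print(lfp)  # (2, 0)
```

Starting from the bottom element $(0,0)$, the iteration passes through $(1,0)$ and stops at $(2,0)$, the least fixed point of this particular map.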
BOUNDEDNESS OF MULTILINEAR PSEUDO-DIFFERENTIAL OPERATORS ON MODULATION SPACES

Shahla Molahajloo, Kasso A. Okoudjou, and Götz E. Pfander

Abstract. Boundedness results for multilinear pseudodifferential operators on products of modulation spaces are derived based on ordered integrability conditions on the short-time Fourier transform of the operators' symbols. The flexibility and strength of the introduced methods is demonstrated by their application to the bilinear and trilinear Hilbert transform.

DOI: 10.1007/s00041-016-9461-2
arXiv: 1502.03317 [math.FA]
PDF: https://arxiv.org/pdf/1502.03317v1.pdf
11 Feb 2015 (arXiv:1502.03317v1 [math.FA])

1. Introduction and motivation

Pseudodifferential operators have long been studied in the context of partial differential equations [39,40,42,57,59,67,69]. Among the most investigated topics on such operators are minimal smoothness and decay conditions on their symbols that guarantee their boundedness on function spaces of interest. In recent years, results from time-frequency analysis have been exploited to obtain boundedness results on so-called modulation spaces, which in turn yield boundedness on Bessel potential spaces, Sobolev spaces, and Lebesgue spaces via well established embedding results. In this paper, we develop time-frequency analysis based methods in order to establish boundedness of classes of multilinear pseudodifferential operators on products of modulation spaces.

1.1. Pseudodifferential operators. A pseudodifferential operator is an operator $T_\sigma$ formally defined through its symbol $\sigma$ by
$$T_\sigma f(x) = \int_{\mathbb{R}^d} \sigma(x,\xi)\, \widehat f(\xi)\, e^{2\pi i x\cdot\xi}\, d\xi,$$
where the Fourier transformation is formally given by
$$(\mathcal F f)(\xi) = \widehat f(\xi) = \int_{\mathbb{R}^d} e^{-2\pi i x\cdot\xi} f(x)\, dx.$$
Hörmander symbol classes are arguably the most used in investigating pseudodifferential operators. In particular, the class of smooth symbols with bounded derivatives was shown to yield bounded operators on $L^2$ in the celebrated work of Calderón and Vaillancourt [11].
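The defining formula for $T_\sigma$ can be mimicked on the finite cyclic group $\mathbb Z/N\mathbb Z$, with the DFT in place of the Fourier transform. The sketch below is a toy discretization of ours, not a construction from the paper: the constant symbol acts as the identity (Fourier inversion), and an $x$-independent symbol acts as a Fourier multiplier, here a translation.

```python
import numpy as np

def pseudodiff(sigma, f):
    """Discrete analogue on Z/NZ of (T_sigma f)(x) = ∫ sigma(x, xi) f̂(xi) e^{2πi x·xi} dxi.
    sigma is an (N, N) array indexed by (x, xi)."""
    N = len(f)
    fhat = np.fft.fft(f)                        # f̂(xi), xi = 0, ..., N-1
    x = np.arange(N)[:, None]
    xi = np.arange(N)[None, :]
    phase = np.exp(2j * np.pi * x * xi / N)
    return (sigma * phase * fhat[None, :]).sum(axis=1) / N   # 1/N inverts the DFT

N = 64
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

# The constant symbol sigma ≡ 1 is Fourier inversion: T_1 f = f.
assert np.allclose(pseudodiff(np.ones((N, N)), f), f)

# An x-independent symbol (a Fourier multiplier) e^{-2πi xi m0 / N} translates by m0.
m0 = 5
xi = np.arange(N)[None, :]
multiplier = np.exp(-2j * np.pi * xi * m0 / N) * np.ones((N, 1))
assert np.allclose(pseudodiff(multiplier, f), np.roll(f, m0))
print("ok")
```

The same toy model extends verbatim to symbols that genuinely depend on $x$, where no multiplier interpretation is available.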
More specifically, if $\sigma \in S^0_{0,0}$, that is, if for all non-negative integers $\alpha, \beta$ there exists $C_{\alpha,\beta}$ with
$$(1.1)\qquad |\partial_x^\alpha \partial_\xi^\beta \sigma(x,\xi)| \le C_{\alpha,\beta},$$
then $T_\sigma$ maps $L^2$ into itself.

1.2. Time-frequency analysis of pseudodifferential operators. In [55], J. Sjöstrand defined a class of bounded operators on $L^2$ whose symbols do not have to satisfy a differentiability assumption and which contains those operators with symbol in $S^0_{0,0}$. He proved that this class of symbols forms an algebra under the so-called twisted convolution [30,34,55,56]. Incidentally, symbols of Sjöstrand's class operators are characterized by their membership in the modulation space $M^{\infty,1}$, a space of tempered distributions introduced by Feichtinger via integrability and decay conditions on the distributions' short-time Fourier transform [20]. Gröchenig and Heil then significantly extended Sjöstrand's results by establishing the boundedness of his pseudodifferential operators on all modulation spaces [35]. These and similar results on pseudodifferential operators were recently extended by Molahajloo and Pfander through the introduction of ordered integrability conditions on the short-time Fourier transform of the operators' symbols [49]. Similar approaches have been used to derive other boundedness results of pseudodifferential operators on modulation-space-like spaces [10]. The approach of varying integration orders of short-time Fourier transforms of, here, symbols of multilinear operators lies at the center of this paper.

Today, the functional analytical tools developed to analyze pseudodifferential operators on modulation spaces form an integral part of time-frequency analysis. They are used, for example, to model time-varying filters prevalent in signal processing. By now, a robust body of work stemming from this point of view has been developed [18,35,36,37,54,60,61,63,66], and has led to a number of applications to areas such as seismic imaging and communication theory [47,58].
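The short-time Fourier transform and the mixed $L^{p,q}$ norms underlying modulation spaces have straightforward discrete analogues. The following sketch is a toy model of ours, not the paper's construction: it computes a discrete $V_\phi f$ with a Gaussian-type window and checks the discrete Gabor-Parseval identity, which corresponds to the $M^{2,2} = L^2$ case.

```python
import numpy as np

def stft(f, window):
    """Discrete short-time Fourier transform V_phi f(t, nu) on Z/NZ;
    row t holds the DFT of f multiplied by the translated window T_t(window)."""
    N = len(f)
    V = np.empty((N, N), dtype=complex)
    for t in range(N):
        V[t] = np.fft.fft(f * np.conj(np.roll(window, t)))
    return V

def mixed_norm(V, p, q):
    """||V||_{L^{p,q}}: an l^p norm in the time shift t first, then l^q in nu."""
    inner = (np.abs(V) ** p).sum(axis=0) ** (1.0 / p)
    return float((inner ** q).sum() ** (1.0 / q))

N = 32
x = np.arange(N)
phi = np.exp(-((x - N / 2) ** 2) / 8.0)   # Gaussian-type window
f = np.exp(-((x - N / 2) ** 2) / 4.0)

V = stft(f, phi)
# Discrete Gabor-Parseval identity: sum_{t,nu} |V|^2 = N ||f||^2 ||phi||^2.
lhs = (np.abs(V) ** 2).sum()
rhs = N * (np.abs(f) ** 2).sum() * (np.abs(phi) ** 2).sum()
assert np.allclose(lhs, rhs)
print("M^{2,1}-style norm of V:", round(mixed_norm(V, 2, 1), 3))
```

Replacing the exponents in `mixed_norm` gives discrete surrogates for the $M^{p,q}$ norms used throughout the paper.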
1.3. Multilinear pseudodifferential operators. A multilinear pseudodifferential operator $T_\sigma$ with distributional symbol $\sigma$ on $\mathbb{R}^{(m+1)d}$ is formally given by
$$(1.2)\qquad (T_\sigma \mathbf f)(x) = \int_{\mathbb{R}^{md}} e^{2\pi i x\cdot(\sum_{i=1}^m \xi_i)}\, \sigma(x, \boldsymbol\xi)\, \widehat{f_1}(\xi_1)\, \widehat{f_2}(\xi_2)\cdots \widehat{f_m}(\xi_m)\, d\boldsymbol\xi.$$
Here and in the following we use boldface characters such as $\boldsymbol\xi = (\xi_1, \dots, \xi_m)$ to denote products of $m$ vectors $\xi_i \in \mathbb{R}^d$, and it will not cause confusion to use the symbol $\mathbf f$ for both a vector of $m$ functions or distributions $\mathbf f = (f_1, \dots, f_m)$, that is, a vector-valued function or distribution on $\mathbb{R}^d$, and the rank one tensor $\mathbf f = f_1 \otimes \dots \otimes f_m$, a function or distribution on $\mathbb{R}^{md}$. For example, we write $\mathbf f(\boldsymbol\xi) = f_1(\xi_1)\cdots f_m(\xi_m)$, while $\mathbf f(\xi) = (f_1(\xi), \dots, f_m(\xi))$.

A trivial example of a multilinear operator is given by the constant symbol $\sigma \equiv 1$. Clearly, $T_\sigma\mathbf f$ is simply the product $f_1(x) f_2(x) \cdots f_m(x)$. Thus, Hölder's inequality determines boundedness on products of Lebesgue spaces. On the other hand, when the symbol is independent of the space variable $x$, that is, when $\sigma(x,\boldsymbol\xi) \equiv \tau(\boldsymbol\xi)$, then $T_\sigma = T_\tau$ is a multilinear Fourier multiplier. We refer to [2,3,17,32,48,50] and the references therein for a small sample of the vast literature on multilinear pseudodifferential operators.

One of the questions that has been repeatedly investigated relates to (minimal) conditions on the symbols $\sigma$ that would guarantee the boundedness of (1.2) on products of certain function spaces; see [17, Theorem 34]. For example, one can ask if a multilinear version of (1.1) exists. Bényi and Torres [2] proved that, unless additional conditions are added, there exist symbols which satisfy such multilinear estimates but for which the corresponding multilinear pseudodifferential operators are unbounded on products of certain Lebesgue spaces.
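The trivial example $\sigma \equiv 1$ mentioned above can be checked in a toy discrete model: a direct discretization of (1.2) on $\mathbb Z/N\mathbb Z$ with constant symbol returns the pointwise product $f_1 \cdot f_2$. The sketch below (bilinear case, our own model) is a hedged illustration, not code from the paper.

```python
import numpy as np

def bilinear_op(sigma, f1, f2):
    """Discrete analogue of the bilinear operator (1.2) on Z/NZ:
    sigma is an (N, N, N) array indexed by (x, xi1, xi2)."""
    N = len(f1)
    F1, F2 = np.fft.fft(f1), np.fft.fft(f2)    # f̂_1, f̂_2
    xi1 = np.arange(N)[:, None]
    xi2 = np.arange(N)[None, :]
    out = np.empty(N, dtype=complex)
    for n in range(N):
        phase = np.exp(2j * np.pi * n * (xi1 + xi2) / N)
        out[n] = (sigma[n] * phase * F1[:, None] * F2[None, :]).sum() / N**2
    return out

rng = np.random.default_rng(1)
N = 16
f1, f2 = rng.standard_normal(N), rng.standard_normal(N)

# With sigma ≡ 1 the operator reduces to the pointwise product f1·f2.
sigma = np.ones((N, N, N))
assert np.allclose(bilinear_op(sigma, f1, f2), f1 * f2)
print("ok")
```

Any $(N, N, N)$ array can be supplied as `sigma`, so the same function serves as a sandbox for genuinely $x$-dependent bilinear symbols.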
Indeed, in the bilinear case, that is, when $m = 2$, the class of operators whose symbols satisfy, for all non-negative integers $\alpha, \beta, \gamma$,
$$(1.3)\qquad |\partial_x^\alpha \partial_\xi^\beta \partial_\eta^\gamma \sigma(x,\xi,\eta)| \le C_{\alpha,\beta,\gamma}$$
contains operators that do not map $L^2 \times L^2$ into $L^1$. Multilinear pseudodifferential operators, in the context of their boundedness on modulation spaces, were first investigated in [6,7]. Results obtained in this setting have been used to establish well-posedness for a number of non-linear PDEs in these spaces [5,9]. For example, and as opposed to the classical analysis of multilinear pseudodifferential operators, it was proved in [7] that symbols satisfying (1.3) yield boundedness from $L^2 \times L^2$ into the modulation space $M^{1,\infty}$, a space that contains $L^1$. The current paper offers some new insights and results in this line of investigation.

1.4. Our contributions. Modulation spaces are defined by imposing integrability conditions on the short-time Fourier transform of the distribution at hand. Following ideas from Molahajloo and Pfander [49], we impose various ordered integrability conditions on the short-time Fourier transform of a tempered distribution $\sigma$ on $\mathbb{R}^{(m+1)d}$ which is a symbol of a multilinear pseudodifferential operator. By using this new setting, we establish new boundedness results for multilinear pseudodifferential operators on products of modulation spaces. For example, the following result follows from our main result, Theorem 4.1.

Theorem 1.1.
If $1 \le p_0, p_1, p_2, q_1, q_2, q_3 \le \infty$ satisfy
$$\frac{1}{p_0} \le \frac{1}{p_1} + \frac{1}{p_2} \qquad\text{and}\qquad 1 + \frac{1}{q_3} \le \frac{1}{q_1} + \frac{1}{q_2},$$
and if for some Schwartz class function $\varphi$, the symbol short-time Fourier transform
$$\mathcal V_{\boldsymbol\varphi}\sigma(x, t_1, t_2, \xi_1, \xi_2, \nu) = \int\!\!\int\!\!\int \sigma(\tilde x, \tilde\xi_1, \tilde\xi_2)\, \varphi(\tilde x - x)\, \varphi(\tilde\xi_1 - \xi_1)\, \varphi(\tilde\xi_2 - \xi_2)\, e^{-2\pi i(\tilde x\nu - t_1\tilde\xi_1 - t_2\tilde\xi_2)}\, d\tilde x\, d\tilde\xi_1\, d\tilde\xi_2$$
satisfies
$$(1.4)\qquad \|\sigma\|_{M^{(\infty,1,1);(\infty,\infty,1)}} = \sup_{\xi_1,\xi_2} \int\!\!\int\!\!\int \sup_x |\mathcal V_{\boldsymbol\varphi}\sigma(x, t_1, t_2, \xi_1, \xi_2, \nu)|\, dt_1\, dt_2\, d\nu < \infty,$$
then the pseudodifferential operator $T_\sigma$, initially defined on $\mathcal S(\mathbb R^d) \times \mathcal S(\mathbb R^d)$ by
$$T_\sigma(f_1, f_2)(x) = \int\!\!\int e^{2\pi i x\cdot(\xi_1 + \xi_2)}\, \sigma(x, \xi_1, \xi_2)\, \widehat{f_1}(\xi_1)\, \widehat{f_2}(\xi_2)\, d\xi_2\, d\xi_1,$$
extends to a bounded bilinear operator from $M^{p_1,q_1} \times M^{p_2,q_2}$ into $M^{p_0,q_3}$. Moreover, there exists a constant $C > 0$ that only depends on $d$, the $p_i$, and the $q_i$ with
$$\|T_\sigma(f_1,f_2)\|_{M^{p_0,q_3}} \le C\, \|\sigma\|_{M^{(\infty,1,1);(\infty,\infty,1)}}\, \|f_1\|_{M^{p_1,q_1}}\, \|f_2\|_{M^{p_2,q_2}}.$$

We note that the classical modulation space $M^{\infty,1}(\mathbb R^{3d})$ can be continuously embedded into the space $M^{(\infty,1,1);(\infty,\infty,1)}(\mathbb R^{3d})$ implicitly defined by (1.4). Indeed,
$$\|\sigma\|_{M^{(\infty,1,1);(\infty,\infty,1)}} = \sup_{\xi_1,\xi_2} \int\!\!\int\!\!\int \sup_x |\mathcal V_{\boldsymbol\varphi}\sigma(x,t_1,t_2,\xi_1,\xi_2,\nu)|\, dt_1\, dt_2\, d\nu \le \int\!\!\int\!\!\int \sup_{x,\xi_1,\xi_2} |\mathcal V_{\boldsymbol\varphi}\sigma(x,t_1,t_2,\xi_1,\xi_2,\nu)|\, dt_1\, dt_2\, d\nu = \|\sigma\|_{M^{\infty;1}}.$$
As a consequence, Theorem 1.1 already extends the main result, Theorem 3.1, in [7].

The herein presented new approach allows us to investigate the boundedness of the bilinear Hilbert transform on products of modulation spaces. Indeed, in the one dimensional setting, $d = 1$, it can be shown that the symbol of the bilinear Hilbert transform satisfies $\sigma_H \in M^{(\infty,1,r);(\infty,\infty,1)} \setminus M^{(\infty,1,1);(\infty,\infty,1)}$ for all $r > 1$. Hence, $\sigma_H \notin M^{\infty,1}$, and existing methods to investigate multilinear pseudodifferential operators on products of modulation spaces are not applicable. Using the techniques developed below, we obtain novel and wide-reaching boundedness results for the bilinear Hilbert transform on the product of modulation spaces.
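The exponent conditions of Theorem 1.1 are purely arithmetic and can be checked mechanically with exact rational arithmetic. The helper names below are ours, not the paper's; the sketch verifies that the classical target pair $L^2 \times L^2 = M^{2,2} \times M^{2,2} \to M^{1,\infty}$ is admissible, while the target $M^{1,1}$ violates the $q$-condition.

```python
from fractions import Fraction

def inv(p):
    """Reciprocal of an exponent, with the convention 1/inf = 0."""
    return Fraction(0) if p == float("inf") else Fraction(1, p)

def thm_1_1_admissible(p0, p1, p2, q1, q2, q3):
    """Index conditions of Theorem 1.1:
    1/p0 <= 1/p1 + 1/p2  and  1 + 1/q3 <= 1/q1 + 1/q2."""
    return inv(p0) <= inv(p1) + inv(p2) and 1 + inv(q3) <= inv(q1) + inv(q2)

inf = float("inf")
# M^{2,2} x M^{2,2} = L^2 x L^2 mapped into M^{1,inf}: admissible.
assert thm_1_1_admissible(1, 2, 2, 2, 2, inf)
# The target M^{1,1} with the same sources fails the q-condition.
assert not thm_1_1_admissible(1, 2, 2, 2, 2, 1)
print("ok")
```

Such a checker is convenient when scanning the full range of exponents for which the theorem applies.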
For example, as a special case of our result, we prove that the bilinear Hilbert transform is bounded from $L^2 \times L^2$ into the modulation space $M^{1+\epsilon,1}$ for any $\epsilon > 0$. The results established here aim at generality and differ in technique from the groundbreaking results about the bilinear Hilbert transform as obtained by Lacey and Thiele [44,43,45,46]. They are therefore not easily compared to those obtained using "hard analysis" techniques. Nonetheless, using our results and some embeddings of modulation spaces into Lebesgue spaces, we discuss the relation of our results on the boundedness of the bilinear Hilbert transform to the known classical results. The herein given framework is flexible enough to allow an initial investigation of the trilinear Hilbert transform. Here we did not try to optimize our results but just show through some examples how one can tackle this more difficult operator in the context of modulation spaces.

1.5. Outline. We introduce our new class of symbols based on a modification of the short-time Fourier transform in Section 2. We then prove a number of technical results, including some Young-type inequalities, that form the foundation of our main results. Section 3 contains most of the key results needed to establish our results. This naturally leads to our main results concerning the boundedness of multilinear pseudodifferential operators on products of modulation spaces. Section 4 is devoted to applications of our results. In Section 4.1 we specialize our results to the bilinear case, proving boundedness results of bilinear pseudodifferential operators on products of modulation spaces. We then consider as example the bilinear Hilbert transform in Section 4.2. In Section 4.3 we initiate an investigation of the boundedness of the trilinear Hilbert transform on products of modulation spaces.

2. Symbol classes for multilinear pseudodifferential operators

2.1. Background on modulation spaces. Let $\mathbf r = (r_1, r_2, \dots, r_m)$, where $1 \le r_i < \infty$, $i = 1, 2, \dots, m$. The mixed norm space $L^{\mathbf r}(\mathbb R^{md})$ is the Banach space of measurable functions $F$ on $\mathbb R^{md}$ with finite norm [1]
$$\|F\|_{L^{\mathbf r}} = \Bigg( \int_{\mathbb R^d} \cdots \Bigg( \int_{\mathbb R^d} \Big( \int_{\mathbb R^d} |F(x_1, \dots, x_m)|^{r_1}\, dx_1 \Big)^{r_2/r_1} dx_2 \Bigg)^{r_3/r_2} \cdots\, dx_m \Bigg)^{1/r_m}.$$
Similarly, we define $L^{\mathbf r}(\mathbb R^{md})$ when $r_i = \infty$ for some indices $i$. For a nonnegative measurable function $w$ on $\mathbb R^{md}$ we define $L^{\mathbf r}_w(\mathbb R^{md})$ to be the space of all $F$ on $\mathbb R^{md}$ for which $Fw$ is in $L^{\mathbf r}(\mathbb R^{md})$, that is, $\|F\|_{L^{\mathbf r}_w} = \|Fw\|_{L^{\mathbf r}} < \infty$.

For the purpose of this paper, we define a mixed norm space depending on a permutation that determines the order of integration. For a permutation $\rho$ on $\{1, 2, \dots, n\}$, the weighted mixed norm space $L^{\mathbf r;\rho}_w(\mathbb R^{md})$ is the set of all measurable functions $F$ on $\mathbb R^{md}$ for which
$$\|F\|_{L^{\mathbf r;\rho}_w} = \Bigg( \int_{\mathbb R^d} \cdots \Bigg( \int_{\mathbb R^d} \Big( \int_{\mathbb R^d} |F(x_1, x_2, \dots, x_n)\, w(x_1, x_2, \dots, x_n)|^{r_{\rho(1)}}\, dx_{\rho(1)} \Big)^{r_{\rho(2)}/r_{\rho(1)}} dx_{\rho(2)} \Bigg)^{r_{\rho(3)}/r_{\rho(2)}} \cdots\, dx_{\rho(n)} \Bigg)^{1/r_{\rho(n)}}$$
is finite.

Let $M_\nu$ denote modulation by $\nu \in \mathbb R^d$, namely, $M_\nu f(x) = e^{2\pi i x\cdot\nu} f(x)$, and let $T_t$ be translation by $t \in \mathbb R^d$, that is, $T_t f(x) = f(x - t)$. The short-time Fourier transform $V_\phi f$ of $f \in \mathcal S'(\mathbb R^d)$ with respect to the Gaussian window $\phi(x) = e^{-\|x\|^2}$ is given by
$$V_\phi f(t, \nu) = \mathcal F\big(f\, T_t\phi\big)(\nu) = (f, M_\nu T_t\phi) = \int f(x)\, e^{-2\pi i x\cdot\nu}\, \phi(x - t)\, dx.$$
The modulation space $M^{p,q}(\mathbb R^d)$, $1 \le p, q \le \infty$, is a Banach space consisting of those $f \in \mathcal S'(\mathbb R^d)$ with
$$\|f\|_{M^{p,q}} = \|V_\phi f\|_{L^{p,q}} = \Bigg( \int \Big( \int |V_\phi f(t, \nu)|^p\, dt \Big)^{q/p} d\nu \Bigg)^{1/q} < \infty,$$
with the usual adjustment of the mixed norm space if $p = \infty$ and/or $q = \infty$. We refer to [20,34] for background on modulation spaces.

In the sequel we consider weight functions $w$ on $\mathbb R^{2(m+1)d}$. We assume that $w$ is continuous and sub-multiplicative, that is, $w(x + y) \le C w(x) w(y)$. Associated to $w$ will be a family of $w$-moderate weight functions $v$; that is, $v$ is positive, continuous, and satisfies $v(x + y) \le C w(x) v(y)$.

2.2. A new class of symbols.
The commonly used short-time Fourier transform analyzes functions in time; as symbols have time and frequency variables, we base the herein used short-time Fourier transform on a Fourier transform that takes Fourier transforms in time variables and inverse Fourier transforms in frequency variables. We then order the variables, first time, then frequency. That is, we follow the idea of symplectic Fourier transforms $\mathcal F_s$ on phase space,
$$\mathcal F_s F(t, \nu) = \int_{\mathbb R^{(m+1)d}} F(x, \xi)\, e^{2\pi i(\xi\cdot t - x\cdot\nu)}\, d\xi\, dx.$$
For $F \in \mathcal S'(\mathbb R^{(m+1)d})$ and $\phi \in \mathcal S(\mathbb R^{(m+1)d})$, we define the symbol short-time Fourier transform $\mathcal V_\phi F$ of $F$ with respect to $\phi$ by
$$\mathcal V_\phi F(x, t, \xi, \nu) = \mathcal F_s\big(F\, T_{(x,\xi)}\phi\big)(t, \nu) = \big(F, M_{(-\nu,t)} T_{(x,\xi)}\phi\big) = \int_{\mathbb R^{md}} \int_{\mathbb R^d} e^{-2\pi i(\tilde x\cdot\nu - t\cdot\tilde\xi)}\, F(\tilde x, \tilde\xi)\, \phi(\tilde x - x, \tilde\xi - \xi)\, d\tilde x\, d\tilde\xi,$$
where $x, \nu \in \mathbb R^d$ and $t, \xi \in \mathbb R^{md}$. Note that the symbol short-time Fourier transform is related to the ordinary short-time Fourier transform by $\mathcal V_\phi F(x, t, \xi, \nu) = V_\phi F(x, \xi, \nu, -t)$.

Modulation spaces for symbols of multilinear operators are then defined by requiring the symbol short-time Fourier transform of an operator to be in certain weighted $L^p$ spaces. To describe these, we fix decay parameters $1 \le p_0, p_1, \dots, p_m, q_1, q_2, \dots, q_m, q_{m+1} \le \infty$, and permutations $\kappa$ on $\{0, 1, \dots, m\}$ and $\rho$ on $\{1, \dots, m, m+1\}$. The latter indicate the integration order of the time, respectively frequency, variables. Put $\mathbf p = (p_1, p_2, \dots, p_m)$, $\mathbf q = (q_1, q_2, \dots, q_m)$ and let $w$ be a weight function on $\mathbb R^{2(m+1)d}$. Then $L^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w(\mathbb R^{2(m+1)d})$ is the mixed norm space consisting of those measurable functions $F$ for which the norm
$$\|F\|_{L^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w} = \Bigg( \int_{\mathbb R^d} \cdots \Bigg( \int_{\mathbb R^d} \Bigg( \int_{\mathbb R^d} \cdots \Bigg( \int_{\mathbb R^d} |w(t_0, t_1, \dots, t_m, \xi_1, \dots, \xi_m, \xi_{m+1})\, F(t_0, t_1, \dots, t_m, \xi_1, \dots, \xi_m, \xi_{m+1})|^{p_{\kappa(0)}}\, dt_{\kappa(0)} \Bigg)^{p_{\kappa(1)}/p_{\kappa(0)}} dt_{\kappa(1)} \Bigg)^{p_{\kappa(2)}/p_{\kappa(1)}} \cdots\, dt_{\kappa(m)} \Bigg)^{q_{\rho(1)}/p_{\kappa(m)}} d\xi_{\rho(1)} \Bigg)^{q_{\rho(2)}/q_{\rho(1)}} \cdots\, d\xi_{\rho(m+1)} \Bigg)^{1/q_{\rho(m+1)}}$$
is finite.
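That the value of such an iterated norm genuinely depends on the order of integration, which the permutation $\rho$ encodes, is easy to see numerically: already for a 0/1 matrix, taking an $\ell^1$ norm before a supremum gives a different value than the reverse order. The sketch below is our own toy example, not a computation from the paper.

```python
import numpy as np

def mixed_norm(F, spec):
    """Iterated l^p norm of an array F. spec lists (axis, exponent) pairs in
    the order of integration (innermost first); exponent np.inf is a sup."""
    A = np.abs(F).astype(float)
    labels = list(range(A.ndim))          # track the original axis labels
    for lab, p in spec:
        ax = labels.index(lab)
        A = A.max(axis=ax) if np.isinf(p) else (A ** p).sum(axis=ax) ** (1.0 / p)
        labels.pop(ax)
    return float(A)

F = np.eye(5)                              # F(x, y) = 1 exactly when x == y
a = mixed_norm(F, [(0, 1), (1, np.inf)])   # l^1 in x first, then sup in y -> 1.0
b = mixed_norm(F, [(1, np.inf), (0, 1)])   # sup in y first, then l^1 in x -> 5.0
assert (a, b) == (1.0, 5.0)
print(a, b)
```

This is precisely the phenomenon that makes the permuted spaces $L^{\mathbf r;\rho}_w$ with different $\rho$ genuinely different spaces.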
The weighted symbol modulation space $M^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w(\mathbb R^{(m+1)d})$ is composed of those $F \in \mathcal S'(\mathbb R^{(m+1)d})$ with
$$\|F\|_{M^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w} = \|\mathcal V_\phi F\|_{L^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w} < \infty.$$
When $\kappa$ and $\rho$ are identity permutations, we denote $L^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w(\mathbb R^{2(m+1)d})$ and $M^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\rho}_w(\mathbb R^{(m+1)d})$ by $L^{(p_0,\mathbf p);(\mathbf q,q_0)}_w(\mathbb R^{2(m+1)d})$ and $M^{(p_0,\mathbf p);(\mathbf q,q_0)}_w(\mathbb R^{(m+1)d})$, respectively. The dependence of the norm on the choice of $\kappa$, $\rho$, as well as the advantage of choosing a particular order, will be discussed in Section 2.4.

For simplicity of notation, we set $S(\boldsymbol\xi) = \sum_{i=1}^m \xi_i$. For functions $g$ and components of $\mathbf f$ in $\mathcal S(\mathbb R^d)$, the Rihaczek transform $R(\mathbf f, g)$ of $\mathbf f$ and $g$ is defined by
$$R(\mathbf f, g)(x, \boldsymbol\xi) = e^{2\pi i x\cdot(\xi_1 + \dots + \xi_m)}\, \widehat{f_1}(\xi_1)\cdots \widehat{f_m}(\xi_m)\, g(x) = e^{2\pi i x\cdot S(\boldsymbol\xi)}\, \widehat{\mathbf f}(\boldsymbol\xi)\, g(x).$$
Multilinear pseudodifferential operators are related to Rihaczek transforms by
$$\langle T_\sigma \mathbf f, g\rangle = \langle \sigma, R(\mathbf f, g)\rangle,$$
a priori for all functions $f_i$ and $g$ in $\mathcal S(\mathbb R^d)$ and symbols $\sigma \in \mathcal S(\mathbb R^{(m+1)d})$. With $x \pm \mathbf t = x \pm (t_1, \dots, t_m) = (x \pm t_1, \dots, x \pm t_m)$, it can easily be seen that
$$R(\mathbf f, g)(x, \boldsymbol\xi) = \mathcal F_{\mathbf t\to\boldsymbol\xi}\big(\mathbf f(\cdot + x)\big)\, g(x), \qquad\text{where}\quad \mathcal F_{\mathbf t\to\boldsymbol\xi}\big(\mathbf f(\cdot + x)\big)(\boldsymbol\xi) = \int_{\mathbb R^{md}} e^{-2\pi i \mathbf t\cdot\boldsymbol\xi}\, \mathbf f(\mathbf t + x)\, d\mathbf t.$$

Lemma 2.1. For $\varphi$ real-valued, $\boldsymbol\varphi = (\varphi, \dots, \varphi)$, $\mathbf f = (f_1, f_2, \dots, f_m) \in \mathcal S(\mathbb R^d)^m$, and $g \in \mathcal S(\mathbb R^d)$,
$$V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, -\boldsymbol\xi, \mathbf t, \nu) = V_\varphi f_1(x - t_1, \xi_1)\cdots V_\varphi f_m(x - t_m, \xi_m)\cdot V_\varphi g(x, \nu - S(\boldsymbol\xi)).$$
Moreover,
$$\mathcal V_{R(\boldsymbol\varphi,\varphi)} R(\mathbf f, g)(x, \boldsymbol\xi, \nu, \mathbf t) = e^{-2\pi i\boldsymbol\xi\cdot\mathbf t}\, V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, -\mathbf t, \nu, \boldsymbol\xi),$$
and in particular,
$$|\mathcal V_{R(\boldsymbol\varphi,\varphi)} R(\mathbf f, g)(x, \boldsymbol\xi, \nu, \mathbf t)| = |V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, -\boldsymbol\xi, -\mathbf t, \nu)|.$$

Proof.
We compute
$$\begin{aligned} V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, -\boldsymbol\xi, \mathbf t, \nu) &= \int_{\mathbb R^{md}}\int_{\mathbb R^d} e^{-2\pi i(\tilde x\nu + \tilde{\mathbf t}\cdot\boldsymbol\xi)}\, T_A(\mathbf f \otimes g)(\tilde x, \tilde{\mathbf t})\, T_A(\boldsymbol\varphi\otimes\varphi)(\tilde x - x, \tilde{\mathbf t} - \mathbf t)\, d\tilde x\, d\tilde{\mathbf t} \\ &= \int_{\mathbb R^d} \Big( \int_{\mathbb R^{md}} e^{-2\pi i\tilde{\mathbf t}\cdot\boldsymbol\xi}\, \mathbf f(\tilde x - \tilde{\mathbf t})\, \boldsymbol\varphi(\tilde x - x - \tilde{\mathbf t} + \mathbf t)\, d\tilde{\mathbf t} \Big)\, e^{-2\pi i\tilde x\nu}\, g(\tilde x)\, \varphi(\tilde x - x)\, d\tilde x \\ &= \int_{\mathbb R^d}\int_{\mathbb R^{md}} \mathbf f(\mathbf s)\, g(\tilde x)\, e^{-2\pi i(\nu\tilde x + \boldsymbol\xi\cdot(\tilde x - \mathbf s))}\, \boldsymbol\varphi(\mathbf s - (x - \mathbf t))\, \varphi(\tilde x - x)\, d\tilde x\, d\mathbf s \\ &= \int_{\mathbb R^{md}} e^{-2\pi i\boldsymbol\xi\cdot\mathbf s}\, \mathbf f(\mathbf s)\, \boldsymbol\varphi(\mathbf s - (x - \mathbf t))\, d\mathbf s\ \int_{\mathbb R^d} e^{-2\pi i(\nu + S(\boldsymbol\xi))\tilde x}\, g(\tilde x)\, \varphi(\tilde x - x)\, d\tilde x \\ &= (V_{\boldsymbol\varphi}\mathbf f)(x - \mathbf t, \boldsymbol\xi)\, (V_\varphi g)(x, \nu + S(\boldsymbol\xi)). \end{aligned}$$
Further,
$$\begin{aligned} \mathcal V_{R(\boldsymbol\varphi,\varphi)} R(\mathbf f, g)(x, \boldsymbol\xi, \nu, \mathbf t) &= \int_{\mathbb R^{md}}\int_{\mathbb R^d} e^{-2\pi i(\nu\tilde x + \mathbf t\cdot\tilde{\boldsymbol\xi})}\, R(\mathbf f, g)(\tilde x, \tilde{\boldsymbol\xi})\, R(\boldsymbol\varphi, \varphi)(\tilde x - x, \tilde{\boldsymbol\xi} - \boldsymbol\xi)\, d\tilde x\, d\tilde{\boldsymbol\xi} \\ &= \int_{\mathbb R^{md}}\int_{\mathbb R^d} e^{-2\pi i(\nu\tilde x + \mathbf t\cdot\tilde{\boldsymbol\xi})}\, \mathcal F_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(\mathbf f(\tilde x - \cdot)\big)\, g(\tilde x)\, \mathcal F_{\mathbf t\to\tilde{\boldsymbol\xi}-\boldsymbol\xi}\big(\boldsymbol\varphi(\tilde x - x - \cdot)\big)\, \varphi(\tilde x - x)\, d\tilde x\, d\tilde{\boldsymbol\xi} \\ &= \int_{\mathbb R^{md}}\int_{\mathbb R^d} e^{-2\pi i(\nu\tilde x + \mathbf t\cdot\tilde{\boldsymbol\xi})}\, \mathcal F_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(\mathbf f(\tilde x - \cdot)\big)\, g(\tilde x)\, \mathcal F_{\mathbf t\to\boldsymbol\xi-\tilde{\boldsymbol\xi}}\big(\boldsymbol\varphi(\tilde x - x - \cdot)\big)\, \varphi(\tilde x - x)\, d\tilde x\, d\tilde{\boldsymbol\xi}. \end{aligned}$$
On the other hand, by using the Parseval identity, we have
$$\begin{aligned} V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, \mathbf t, \nu, \boldsymbol\xi) &= \int_{\mathbb R^d}\int_{\mathbb R^{md}} e^{-2\pi i(\tilde x\nu + \tilde{\mathbf t}\cdot\boldsymbol\xi)}\, T_A(\mathbf f \otimes g)(\tilde x, \tilde{\mathbf t})\, T_A(\boldsymbol\varphi\otimes\varphi)(\tilde x - x, \tilde{\mathbf t} - \mathbf t)\, d\tilde x\, d\tilde{\mathbf t} \\ &= \int_{\mathbb R^d} \Big( \int_{\mathbb R^{md}} e^{-2\pi i\tilde{\mathbf t}\cdot\boldsymbol\xi}\, \mathbf f(\tilde x - \tilde{\mathbf t})\, \boldsymbol\varphi(\tilde x - x - \tilde{\mathbf t} + \mathbf t)\, d\tilde{\mathbf t} \Big)\, e^{-2\pi i\tilde x\nu}\, g(\tilde x)\, \varphi(\tilde x - x)\, d\tilde x \\ &= \int_{\mathbb R^d}\int_{\mathbb R^{md}} \mathcal F_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(\mathbf f(\tilde x - \cdot)\big)\, \mathcal F^{-1}_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(e^{-2\pi i\tilde{\mathbf t}\cdot\boldsymbol\xi}\, \boldsymbol\varphi(\tilde x - x + \mathbf t - \cdot)\big)\, e^{-2\pi i\tilde x\nu}\, g(\tilde x)\, \varphi(\tilde x - x)\, d\tilde{\boldsymbol\xi}\, d\tilde x. \end{aligned}$$
But,
$$\mathcal F^{-1}_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(e^{-2\pi i\tilde{\mathbf t}\cdot\boldsymbol\xi}\, \boldsymbol\varphi(\tilde x - x + \mathbf t - \cdot)\big) = e^{-2\pi i\mathbf t\cdot(\boldsymbol\xi - \tilde{\boldsymbol\xi})}\, \mathcal F_{\gamma\to\boldsymbol\xi-\tilde{\boldsymbol\xi}}\big(\boldsymbol\varphi(\tilde x - x - \cdot)\big);$$
therefore,
$$V_{T_A(\boldsymbol\varphi\otimes\varphi)} T_A(\mathbf f \otimes g)(x, \mathbf t, \nu, \boldsymbol\xi) = e^{-2\pi i\mathbf t\cdot\boldsymbol\xi} \int_{\mathbb R^{md}}\int_{\mathbb R^d} e^{2\pi i(\mathbf t\cdot\tilde{\boldsymbol\xi} - \nu\tilde x)}\, \mathcal F_{\mathbf t\to\tilde{\boldsymbol\xi}}\big(\mathbf f(\tilde x - \cdot)\big)\, \mathcal F_{\mathbf t\to\boldsymbol\xi-\tilde{\boldsymbol\xi}}\big(\boldsymbol\varphi(\tilde x - x - \cdot)\big)\, g(\tilde x)\, \varphi(\tilde x - x)\, d\tilde x\, d\tilde{\boldsymbol\xi}.$$

2.3. Young type results. The following results are consequences of Young's inequality and will be central in proving our main results. We use the convention that summation over the empty set is equal to 0.

Lemma 2.2. Suppose that $1 \le p_k, r_k \le \infty$ for $k = 0, 1, \dots, m$ and
(A1) $p_k \le r_k$, $k = 1, \dots, m$;
(A2) $\sum_{\ell=1}^{k} \big(\frac{1}{p_\ell} - \frac{1}{r_\ell}\big) \le \frac{1}{r_0} - \frac{1}{p_{k+1}}$, $k = 0, \dots, m-1$;
(A3) $\sum_{\ell=1}^{m} \big(\frac{1}{p_\ell} - \frac{1}{r_\ell}\big) = \frac{1}{r_0} - \frac{1}{p_0}$;
then $F(x, \mathbf t) = \mathbf f(x - \mathbf t)\, g(x)$ satisfies $\|F\|_{L^{(r_0,\mathbf r)}} \le \|g\|_{L^{p_0}}\, \|\mathbf f\|_{L^{\mathbf p}}$.

Proof. For simplicity, we use capital letters for the reciprocals of $p_k, r_k$, that is, $P_k = 1/p_k$, $R_k = 1/r_k$, $k = 0, \dots, m$.
Recalling that summation over the empty set is defined as 0, our assumptions (A1)-(A3) are simply
(A1) $P_k \ge R_k$, $k = 1, \dots, m$;
(A2) $R_0 - P_{k+1} \ge \sum_{\ell=1}^{k} (P_\ell - R_\ell)$, $k = 0, \dots, m-1$;
(A3) $\sum_{\ell=0}^{m} R_\ell = \sum_{\ell=0}^{m} P_\ell$.
Define $1/b_1 = B_1 = R_0 + R_1 - P_1$, and for $k = 2, \dots, m$, $1/b_k = B_k = B_{k-1} + R_k - P_k = R_0 + \sum_{\ell=1}^{k} (R_\ell - P_\ell)$.

The first application of Young's inequality below requires that $p_1/r_0,\ r_1/r_0,\ b_1/r_0 \ge 1$ and $\frac{1}{p_1/r_0} + \frac{1}{b_1/r_0} = 1 + \frac{1}{r_1/r_0}$. This translates to $R_0 \ge R_1, B_1, P_1$ and $P_1 + B_1 = R_0 + R_1$, which is equivalent to $R_0 \ge R_1,\ P_1,\ R_0 + R_1 - P_1$. But condition (A1) of the hypothesis implies that $P_1 \ge R_1$. Thus we have $R_0 \ge R_1, P_1$ and $P_1 \ge R_1$, that is, $R_0 \ge P_1 \ge R_1$. Similarly, the successive applications of Young's inequality follow by replacing $p_1, r_1, b_1, r_0$ by $p_k, r_k, b_k, b_{k-1}$, respectively. That is, we require $B_{k-1} \ge R_k,\ B_{k-1} + R_k - P_k,\ P_k$, which is equivalent to $B_{k-1} \ge P_k \ge R_k$, which follows from (A1) and (A2).

We shall also use the standard fact that, for $0 < \alpha, \beta, \gamma, \delta < \infty$, $\| |f|^\alpha \|_{L^\beta}^\gamma = \| |f|^{\alpha\delta} \|_{L^{\beta/\delta}}^{\gamma/\delta}$, and set $\tilde f(x) = f(-x)$. We compute
$$\begin{aligned} \|F\|_{L^{(r_0,\mathbf r)}}^{r_m} &= \int_{\mathbb R^d} \Bigg( \cdots \Bigg( \int_{\mathbb R^d} \Big( \int_{\mathbb R^d} |f_1(x - t_1)\cdots f_m(x - t_m)\, g(x)|^{r_0}\, dx \Big)^{r_1/r_0} dt_1 \Bigg)^{r_2/r_1} \cdots \Bigg)^{r_m/r_{m-1}} dt_m \\ &= \int_{\mathbb R^d} \Bigg( \cdots \Bigg( \int_{\mathbb R^d} \big( |\tilde f_1|^{r_0} * |T_{t_2}f_2 \cdots T_{t_m}f_m\, g|^{r_0} \big)(t_1)^{r_1/r_0}\, dt_1 \Bigg)^{r_2/r_1} \cdots \Bigg)^{r_m/r_{m-1}} dt_m \\ &= \int_{\mathbb R^d} \Bigg( \cdots \int_{\mathbb R^d} \big\| |\tilde f_1|^{r_0} * |T_{t_2}f_2 \cdots T_{t_m}f_m\, g|^{r_0} \big\|_{L^{r_1/r_0}}^{r_2/r_0}\, dt_2 \Bigg)^{r_3/r_2} \cdots\, dt_m \\ &\le \int_{\mathbb R^d} \Bigg( \cdots \int_{\mathbb R^d} \big\| |\tilde f_1|^{r_0} \big\|_{L^{p_1/r_0}}^{r_2/r_0}\, \big\| |T_{t_2}f_2 \cdots T_{t_m}f_m\, g|^{r_0} \big\|_{L^{b_1/r_0}}^{r_2/r_0}\, dt_2 \Bigg)^{r_3/r_2} \cdots\, dt_m \\ &= \|f_1\|_{L^{p_1}}^{r_m} \int_{\mathbb R^d} \Bigg( \cdots \Bigg( \int_{\mathbb R^d} \Big( \int_{\mathbb R^d} |f_2(x - t_2)\cdots f_m(x - t_m)\, g(x)|^{b_1}\, dx \Big)^{r_2/b_1} dt_2 \Bigg)^{r_3/r_2} \cdots \Bigg)\, dt_m \\ &\;\;\vdots \\ &\le \|f_1\|_{L^{p_1}}^{r_m} \cdots \|f_{m-1}\|_{L^{p_{m-1}}}^{r_m} \int_{\mathbb R^d} \Big( \int_{\mathbb R^d} |f_m(x - t_m)\, g(x)|^{b_{m-1}}\, dx \Big)^{r_m/b_{m-1}} dt_m \\ &= \|f_1\|_{L^{p_1}}^{r_m} \cdots \|f_{m-1}\|_{L^{p_{m-1}}}^{r_m}\, \big\| |\tilde f_m|^{b_{m-1}} * |g|^{b_{m-1}} \big\|_{L^{r_m/b_{m-1}}}^{r_m/b_{m-1}} \\ &\le \|f_1\|_{L^{p_1}}^{r_m} \cdots \|f_m\|_{L^{p_m}}^{r_m}\, \big\| |g|^{p_0} \big\|_{L^1}^{r_m/p_0} = \|f_1\|_{L^{p_1}}^{r_m} \cdots \|f_m\|_{L^{p_m}}^{r_m}\, \|g\|_{L^{p_0}}^{r_m}, \end{aligned}$$
where each inequality stems from an application of Young's inequality for convolutions. In the final step, we used $b_m = p_0$, which follows by combining the definition of $b_m$ with hypothesis (A3).

Remark 2.3. Observe that if we would add the condition $p_0 \le r_0$ in hypothesis (A1) of Lemma 2.2, then (A1) and (A3) would combine to imply $p_k = r_k$ for $k = 0, \dots, m$. Indeed, the strength of Lemma 2.2 lies in the fact that $p_0 \le r_0$ and $p_k = r_k$ for $k = 0, \dots, m$ are not implied by the hypotheses. Setting $\Delta_k = \frac{1}{p_k} - \frac{1}{r_k}$ for $k = 0, \dots, m$, condition (A1) in Lemma 2.2 is $\Delta_1, \dots, \Delta_m \ge 0$ and condition (A3) becomes $\Delta_0 + \sum_{k=1}^m \Delta_k = 0$, a condition that allows $\Delta_0$ to be negative, that is, $p_0 > r_0$. In short, all $\Delta_k > 0$ contribute to compensate for $\Delta_0 = \frac{1}{p_0} - \frac{1}{r_0}$ being negative.

Let us now briefly discuss condition (A2) in Lemma 2.2. For $k = 0$, we have $0 \le \frac{1}{r_0} - \frac{1}{p_1}$. To satisfy condition (A2) for $k = 1$, we increase the left hand side by $\Delta_1 = \frac{1}{p_1} - \frac{1}{r_1} \ge 0$, add to the right hand side the possibly negative term $\frac{1}{p_1} - \frac{1}{p_2}$, and require that the sum on the left remains bounded above by the sum on the right. For $k = 2$, we increase the left hand side by $\Delta_2 = \frac{1}{p_2} - \frac{1}{r_2} \ge 0$ and add to the right hand side $\frac{1}{p_2} - \frac{1}{p_3}$, maintaining that the right hand side dominates the left hand side. This is illustrated in Figure 1 below.

In the case $m = 1$, the conditions $\Delta_1 \ge 0$ and $\Delta_0 + \Delta_1 = 0$ from Lemma 2.2 are amended by the requirement $r_0 \le p_1$, and, for example, if $r_0 = 1$, $p_0 = 2$, then Lemma 2.2 is applicable whenever $1 \ge \frac{1}{p_1} = \frac{1}{r_1} + \frac{1}{2}$, that is, if $1 \le p_1 = \frac{2r_1}{r_1 + 2}$.
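The admissibility conditions (A1)-(A3) are finitely many inequalities among reciprocals, so they can be tested exactly with rational arithmetic. The helper names in the sketch below are ours; it confirms the $m = 1$ example above (with $r_0 = 1$, $p_0 = 2$, $r_1 = 2$, which forces $p_1 = 1$), and also the order sensitivity that the remark goes on to discuss for $m = 2$.

```python
from fractions import Fraction

def inv(p):
    """Reciprocal of an exponent, with the convention 1/inf = 0."""
    return Fraction(0) if p == float("inf") else Fraction(1, p)

def lemma22_admissible(p0, p, r0, r):
    """Conditions (A1)-(A3) of Lemma 2.2 for p = (p_1, ..., p_m), r = (r_1, ..., r_m)."""
    m = len(p)
    a1 = all(pk <= rk for pk, rk in zip(p, r))
    a2 = all(sum(inv(p[l]) - inv(r[l]) for l in range(k)) <= inv(r0) - inv(p[k])
             for k in range(m))
    a3 = sum(inv(p[l]) - inv(r[l]) for l in range(m)) == inv(r0) - inv(p0)
    return a1 and a2 and a3

# m = 1 with r0 = 1, p0 = 2: taking r1 = 2 forces p1 = 2*2/(2+2) = 1.
assert lemma22_admissible(2, (1,), 1, (2,))

# m = 2: the conditions are order sensitive.
assert lemma22_admissible(2, (1, 1), 1, (1, 2))        # (p1,r1)=(1,1), (p2,r2)=(1,2)
assert not lemma22_admissible(2, (1, 1), 1, (2, 1))    # indices swapped: (A2) fails
print("ok")
```

Exact `Fraction` arithmetic avoids the floating-point pitfalls that the equality in (A3) would otherwise invite.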
If $m = 2$, then $\Delta_1, \Delta_2 \ge 0$ and $\Delta_0 + \Delta_1 + \Delta_2 = 0$ from Lemma 2.2 are combined with the conditions $r_0 \le p_1$ and $\Delta_1 \le \frac{1}{r_0} - \frac{1}{p_2}$. It is crucial in what follows to observe that these conditions are sensitive to the order of the $p_k$ and the $r_k$. For example, the parameters $r_0 = 1$, $p_0 = 2$, $r_1 = 1 = p_1$, $p_2 = 1$, $r_2 = 2$ satisfy the hypothesis, while $r_0 = 1$, $p_0 = 2$, $r_2 = 1 = p_2$, $p_1 = 1$, $r_1 = 2$ do not. Indeed, if for some $k$, $\Delta_k = \frac{1}{p_k} - \frac{1}{r_k}$ is much smaller than $\frac{1}{p_k} - \frac{1}{p_{k+1}}$, then we would profit more from this if $k$ is a small index, that is, if the respective summands play a role early on in the summation. Below, we shall use this idea and reorder the indices. This allows us to first choose $\kappa(1) = k_1 \in \{1, \dots, d\}$ with $\Delta_{\kappa(1)} = \frac{1}{p_{\kappa(1)}} - \frac{1}{r_{\kappa(1)}}$ small, and then $\kappa(2) = k_2$ so that $\frac{1}{p_{\kappa(1)}} - \frac{1}{p_{\kappa(2)}}$ is large. Clearly, the feasibility of $\kappa(2)$ also depends on the size of $\Delta_{\kappa(2)} = \frac{1}{p_{\kappa(2)}} - \frac{1}{r_{\kappa(2)}}$, so finding an optimal order cannot be achieved with a greedy algorithm. Moreover, note that the spaces $M^{(p_0,\mathbf p),\kappa;(\mathbf q,q_{m+1}),\sigma}$ and $M^{(p_0,\mathbf p),\mathrm{id};(\mathbf q,q_{m+1}),\mathrm{id}}$ are not identical; hence, we cannot choose $\kappa$ and $\rho$ arbitrarily.

Remark 2.4. Note that conditions (A1) and (A2) follow from (but are not equivalent to) the simpler condition
(A4) $1 \le r_0 \le p_1 \le r_1 \le p_2 \le \dots \le r_{m-1} \le p_m \le r_m \le \infty$.
Equality (A3) can then be satisfied by choosing an appropriate $p_0 \ge 1$. The inequalities in (A1) imply that the LHS of (A2) is nonnegative and, hence, always $r_0 \le p_k \le r_k$ for all $k$. Also, (A1) and (A2) necessitate $p_k \le r_k \le p_{m+1}$.

Similarly to Lemma 2.2, we show the following.

Lemma 2.5. Suppose that $1 \le q_k, s_k \le \infty$ for $k = 1, \dots, m+1$ and
(B1) $q_k \ge s_k$, $k = 1, \dots, m$;
(B2) $\sum_{\ell=k+1}^{m} \big(\frac{1}{q_\ell} - \frac{1}{s_\ell}\big) \ge \frac{1}{s_{m+1}} - \frac{1}{q_k}$, $k = 1, \dots, m$;
(B3) $\sum_{\ell=1}^{m} \big(\frac{1}{q_\ell} - \frac{1}{s_\ell}\big) = \frac{1}{s_{m+1}} - \frac{1}{q_{m+1}}$;
then for $G(\mathbf t, x) = \mathbf f(\mathbf t)\, g(x + S(\mathbf t))$ we have $\|G\|_{L^{(\mathbf s, s_{m+1})}} \le \|\mathbf f\|_{L^{\mathbf q}}\, \|g\|_{L^{q_{m+1}}}$.

Proof.
As before, our computations involve the introduction of an auxiliary parameter b k . We start with a formal computation, namely, . . G s m+1 L r,s m+1 = R d R d . . . R d |f 1 (t 1 ) . . . f m (t m ) g(x + t 1 + . . . + t m )| s 1 dt 1 s 2 s 1 . . . dt m s m+1 sm dx = R d R d | f m (t m )| sm . . . R d | f 2 (t 2 )| s 2 R d | f 1 (t 1 )g(x − t 1 −t 2 − . . . −t m )| s 1 dt 1 s 2 s 1 dt 2 s 3 s 2 . . . s m+1 sm dx = R d R d | f m (t m )| sm . . . R d | f 2 (t 2 )| s 2 | f 1 | s 1 * |g| s 1 (x − t 2 −t 3 − . . . −t m ) s 2 s 1 dt 2 s 3 s 2 dt 3 s 4 s 3 . . . s m+1 sm dx . . . = R d | f m | sm * | f m−1 | s m−1 * . . . | f 2 | s 2 * | f 1 | s 1 * |g(x)| s 1 s 2 s 1 s 3 s 2 . . . sm s m−1 s m+1 sm dx = | f m | sm * | f m−1 | s m−1 * . . . | f 2 | s 2 * | f 1 | s 1 * |g| s 1 s 2 s 1 s 3 s 2 . . . sm s m−1 s m+1 sm L s m+1 sm ≤ | f m | sm s m+1 sm L qm/sm | f m−1 | s m−1 * . . . | f 2 | s 2 * | f 1 | s 1 * |g| s 1 s 2 s 1 s 3 s 2 . . . sm s m−1 s m+1 sm L bm /sm = f m s m+1 L qm | f m−1 | s m−1 * . . . | f 2 | s 2 * | f 1 | s 1 * |g| s 1= f m s m+1 L qm . . . f 2 s m+1 L q 2 | f 1 | s 1 * |g| s 1 s m+1 s 1 L b 2 /s 1 ≤ f m s m+1 L qm . . . f 2 s m+1 L q 2 | f 1 | s 1 s m+1 s 1 L q 1 /s 1 |g| s 1 s m+1 s 1 L q m+1 /s 1 = f m s m+1 L qm . . . f 1 s m+1 L q 1 g s m+1 L q m+1 . To justify the first application of Young's inequality, we require 1 qm sm + 1 bm sm = 1 + 1 s m+1 sm , q m s m , b m s m , s m+1 s m ≥ 1. Using reciprocals, this is equivalent to Q m + B m = S m + S m+1 , S m ≥ Q m , B m , S m+1 , that is, B m = S m − Q m + S m+1 , S m ≥ Q m , B m , S m+1 . The subsequent application of Young's inequality requires 1 q m−1 s m−1 + 1 b m−1 s m−1 = 1 + 1 bm s m−1 , q m−1 s m−1 , b m−1 s m−1 , b m s m−1 ≥ 1. Using reciprocals, this is equivalent to B m−1 = S m−1 − Q m−1 + B m = S m+1 + m ℓ=m−1 S ℓ − Q ℓ , S m−1 ≥ Q m−1 , B m−1 , B m . In general, for k = 1, . . . 
, m − 2, we require B m−k = S m−k − Q m−k + B m−k+1 = S m+1 + m ℓ=m−k S ℓ − Q ℓ , S m−k ≥ Q m−k , B m−k , B m−k+1 , and finally, for the last application of Young's inequality, we require Q m+1 = S 1 − Q 1 + B 2 = S m+1 + m ℓ=m−k S ℓ − Q ℓ , S 1 ≥ Q 1 , Q m+1 , B 2 . Now, S k ≥ Q k for k = 1, . . . , m implies 0 ≤ S m+1 ≤ B m ≤ B m−1 ≤ . . . ≤ B 3 ≤ B 2 ≤ Q m+1 , hence, it suffices to postulate aside of S k ≥ Q k for k = 1, . . . , m the conditions S k ≥ B k for k = 2, . . . , m and S 1 ≥ Q m+1 , B 2 . For k = 2, . . . , m, we use that m+1 ℓ=1 S ℓ − Q ℓ = 0 implies m ℓ=k S ℓ − Q ℓ = −S m+1 + Q m+1 − k−1 ℓ=1 S ℓ − Q ℓ in order to rewrite S k ≥ B k in form of S k ≥ B k = S m+1 + m ℓ=k S ℓ − Q ℓ = Q m+1 − k−1 ℓ=1 S ℓ − Q ℓ which is Q m+1 − S k ≤ k−1 ℓ=1 S ℓ − Q ℓ . For k = 1, the above covers the condition Q m+1 ≤ S 1 . In summary, for k = 1, . . . , m + 1 we obtained the sufficient conditions (B1') q k ≥ s k , k = 1, . . . , m (B2') k ℓ=1 1 s ℓ − 1 q ℓ ≥ 1 q m+1 − 1 s k+1 , k = 0, . . . , m − 1; (B3') m ℓ=1 1 s ℓ − 1 q ℓ = 1 q m+1 − 1 s m+1 . Forming the difference of (B3') and (B2') gives (B2") m ℓ=k+1 1 s ℓ − 1 q ℓ ≤ 1 s k+1 − 1 s m+1 , k = 0, . . . , m − 1. Reindexing leads to (B2") m ℓ=k 1 s ℓ − 1 q ℓ ≤ 1 s k − 1 s m+1 , k = 1, . . . , m, and adding 1 q k − 1 s k to both sides, and then multiplying both sides by -1 gives (B2) m ℓ=k+1 1 q ℓ − 1 s ℓ ≥ 1 s m+1 − 1 q k , k = 1, . . . , m. Remark 2.6. The conditions (B1)-(B3) are similar to those in (A1)-(A3). Indeed, a change of variable k → m + 1 − k, that is, renaming q k = q m+1−k and s k = s m+1−k , k = 1, . . . , m + 1, turns (B2) into m ℓ=k+1 1 q m+1−ℓ − 1 s m+1−ℓ ≥ 1 s m+1−(m+1) − 1 q m+1−k = 1 s 0 − 1 q m+1−k , k = 1, . . . , m. We have m ℓ=k+1 1 q m+1−ℓ − 1 s m+1−ℓ = m−k ℓ ′ =1 1 q ℓ ′ − 1 s ℓ ′ , hence, we obtain for k ′ = m − k the conditions k ′ ℓ ′ =1 1 q ℓ ′ − 1 s ℓ ′ ≥ 1 s 0 − 1 q k ′ +1 , k ′ = 0, . . . , m − 1. 
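The order sensitivity of these exponent conditions (Remark 2.3) is easy to test numerically. The following sketch is ours, not from the paper: it encodes the m = 2 specialization of Lemma 2.2 quoted in Remark 2.3, namely ∆_1, ∆_2 ≥ 0, ∆_0 + ∆_1 + ∆_2 = 0, r_0 ≤ p_1, and ∆_1 ≤ 1/r_0 − 1/p_2, and confirms that the first parameter set of Remark 2.3 passes while the reordered one fails.

```python
from fractions import Fraction as F

def delta(p, r):
    # Delta_k = 1/p_k - 1/r_k, computed exactly
    return F(1, p) - F(1, r)

def young_conditions_m2(r0, p0, p1, r1, p2, r2):
    """Check the m = 2 specialization of Lemma 2.2:
    Delta_1, Delta_2 >= 0, Delta_0 + Delta_1 + Delta_2 = 0,
    r_0 <= p_1, and Delta_1 <= 1/r_0 - 1/p_2."""
    d0, d1, d2 = delta(p0, r0), delta(p1, r1), delta(p2, r2)
    return (d1 >= 0 and d2 >= 0
            and d0 + d1 + d2 == 0
            and r0 <= p1
            and d1 <= F(1, r0) - F(1, p2))

# parameters from Remark 2.3: the first order works, the swapped one does not
print(young_conditions_m2(r0=1, p0=2, p1=1, r1=1, p2=1, r2=2))  # True
print(young_conditions_m2(r0=1, p0=2, p1=1, r1=2, p2=1, r2=1))  # False
```

Exact rational arithmetic avoids any floating-point ambiguity in the equality constraint ∆_0 + ∆_1 + ∆_2 = 0.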
We conclude that the difference between the conditions in Lemma 2.2 and in Lemma 2.5 lies, aside from the naming of the decay parameters, simply in replacing ≤ in (A1) and (A2) by ≥ in (B1) and (B2). Hence, it comes as no surprise that (B1) and (B2) follow from, but are not equivalent to, (B4) 1 ≤ s_1 ≤ q_1 ≤ s_2 ≤ . . . ≤ q_{m−1} ≤ s_m ≤ q_m ≤ s_{m+1} ≤ ∞. Moreover, (B1) implies ∑_{ℓ=k+1}^{m} (1/q_ℓ − 1/s_ℓ) ≤ 0 and, hence, q_{m+1} ≥ q_k for k = 1, . . . , m. 2.4. Young type results with permutations. As observed in Remark 2.3, condition (A2) in Lemma 2.2 and, similarly, (B2) in Lemma 2.5 are sensitive to the order of the p_k, r_k, q_k, and s_k. To obtain a bound for operators as desired, we may have to reorder the parameters. This motivates the introduction of the permutations κ and ρ. In addition to the flexibility obtained at the cost of notational complexity, we observe that the permutation of the integration order will allow us to pull out integration with respect to some variables. In fact, setting t_0 = x and choosing j = κ^{−1}(0), we arrive at ‖F‖^{r_{κ(m)}}_{L^{(r_0,r);κ}} = ∫_{R^d} ( ∫_{R^d} . . . ( ∫_{R^d} |f_1(t_0 − t_1) . . . f_m(t_0 − t_m) g(t_0)|^{r_{κ(0)}} dt_{κ(0)} )^{r_{κ(1)}/r_{κ(0)}} dt_{κ(1)} )^{r_{κ(2)}/r_{κ(1)}} . . . dt_{κ(m)} = ∫_{R^d} ( ∫_{R^d} . . . ( ∫_{R^d} ∏_{ℓ=0}^{m} |f_{κ(ℓ)}(t_0 − t_{κ(ℓ)}) g(t_0)|^{r_{κ(0)}} dt_{κ(0)} )^{r_{κ(1)}/r_{κ(0)}} dt_{κ(1)} )^{r_{κ(2)}/r_{κ(1)}} . . . dt_{κ(m)} = ‖f_{κ(0)}‖^{r_{κ(m)}}_{L^{p_{κ(0)}}} ‖f_{κ(1)}‖^{r_{κ(m)}}_{L^{p_{κ(1)}}} . . . ‖f_{κ(j−1)}‖^{r_{κ(m)}}_{L^{p_{κ(j−1)}}} × ∫ . . . ( ∫ ( ∫ |f_{κ(j+1)}(x − t_{κ(j+1)}) . . . f_{κ(m)}(x − t_{κ(m)}) g(x)|^{r_{κ(j)}} dx )^{r_{κ(j+1)}/r_{κ(j)}} dt_{κ(j+1)} )^{r_{κ(j+2)}/r_{κ(j+1)}} . . . dt_{κ(m)}. We can then apply Lemma 2.2 to the iterated integral on the right hand side. This observation leads us to the following result. Lemma 2.7. Let κ be a permutation on {0, 1, . . . , m}, z = κ^{−1}(0), and let 1 ≤ p_k, r_k ≤ ∞, k = 0, 1, . . . , m, satisfy (A0) p_{κ(ℓ)} = r_{κ(ℓ)}, ℓ = 0, . . . , z − 1; (A1) p_{κ(ℓ)} ≤ r_{κ(ℓ)}, ℓ = z, . . . , m; (A2) ∑_{ℓ=z+1}^{k} (1/p_{κ(ℓ)} − 1/r_{κ(ℓ)}) ≤ 1/r_0 − 1/p_{κ(k+1)}, k = z, . . .
, m − 1; (A3) m ℓ=z+1 1 p κ(ℓ) − 1 r κ(ℓ) = 1 r 0 − 1 p 0 . Then for F (x, t) = f (x − t)g(x) it holds F L (r 0 ,r)κ ≤ g L p 0 f L p . Remark 2.8. Loosely speaking, the decay of a function F (x, t 1 , . . . , t d ) in the variables (x, t 1 , . . . , t d ) is given by the parameters (p 0 , p 1 , . . . , p d ), that is, L p 0 -decay in x, L p 1 -decay in t 1 , . . ., L p d -decay in t d . As we then use the flexibility of order of integration, it is worth noting that Minkowski's inequality for integrals implies that integrating with respect to variables with large exponents last, increases the size of the space. For example, if q ≥ p, we have Similarly to Lemma 2.7, we formulate the following. (B2) F L (p,q);(0,1) = |F (x, t 1 )| p dx q/p dt 1 1/q ≤ |F (x, t 1 )| q dx p/q dt 1 1/p = F L (p,w−1 ℓ=k+1 1 q ρ(ℓ) − 1 s ρ(ℓ) ≥ 1 s m+1 − 1 q ρ(k) , k = 1, . . . , w − 1 (B3) w−1 ℓ=1 1 q ρ(ℓ) − 1 s ρ(ℓ) = 1 s m+1 − 1 q m+1 . Then G(ξ, ν) = f (ξ)g(ν + S(ξ)) satisfies G L (s,s m+1 ),ρ ≤ f L q g L q m+1 . Boundedness on modulation spaces When applying Lemmas 2.2, 2.5, 2.7, and 2.9 in the context of modulation spaces, we can use the property that M p 1 ,q 1 embeds continuously in M p 2 ,q 2 if p 1 ≤ p 2 and q 1 ≤ q 2 . To exploit this in full, the introduction of auxiliary parameters p and s is required as illustrated by Example 3.2 below. Proposition 3.1. Given 1 ≤ p 0 , p, p, q, q, q m+1 , r 0 , r, s, s m+1 ≤ ∞ with p ≤ p ≤ r and s, q ≤ q. Let κ be a permutation on {0, . . . , m} and let z = κ −1 (0). Similarly, let ρ be a permutation on {1, 2, . . . , m + 1} and w = ρ −1 (m + 1). Assume (1) k ℓ=z+1 1 p κ(ℓ) − 1 r κ(ℓ) ≤ 1 r 0 − 1 p κ(k+1) , k = z, . . . , m − 1; (2) m ℓ=z+1 1 p κ(ℓ) − 1 r κ(ℓ) ≥ 1 r 0 − 1 p 0 ; (3) w−1 ℓ=k+1 1 q ρ(ℓ) − 1 s ρ(ℓ) ≥ 1 s m+1 − 1 q k , k = 1, . . . , w − 1; (4) w−1 ℓ=1 1 q ρ(ℓ) − 1 s ρ(ℓ) ≥ 1 s m+1 − 1 q m+1 . Let v be a weight function on R 2(m+1)d and assume that w 0 , w 1 , . . . 
, w m are weights on R 2d such that (3.1) v(x, t, ξ, ν) ≤ w 0 (x, ν + S(ξ))w 1 (x − t 1 , ξ 1 ) · . . . · w m (x − t m , ξ m ). For ϕ ∈ S(R d ) real valued, f ∈ M p,κ;q,ρ w (R md ), and g ∈ M p 0 ,q m+1 w 0 (R d ), we have V T A (ϕ⊗ϕ) T A (f ⊗ g) ∈ L (r 0 ,r)κ,(s,s m+1 )ρ v (R 2(m+1)d ) with V T A (ϕ⊗ϕ) T A (f ⊗ g) L (r 0 ,r),κ;(s,s m+1 ),ρ v ≤ C f 1 M p 1 ,q 1 w 1 . . . f m M pm,qm wm g M p 0 ,q m+1 w 0 , (3.2) where the LHS is defined by integrating the variables in the index order κ(0), κ(1), . . . , κ(m), ρ(1), . . . , ρ(m), ρ(m + 1). In particular, T A (f ⊗ g) M (r 0 ,r)κ,(s,s m+1 )ρ v ≤ C f 1 M p 1 ,q 1 w 1 . . . f m M pm,qm wm g M p 0 ,q m+1 w 0 . Note that C depends only on the parameters p i , r i , q i , s i and d. Proof. For simplicity we assume ρ = κ = id and use Lemma 2.2 and Lemma 2.5. The general case follows as Lemma 2.7 and Lemma 2.9 followed from Lemma 2.2 and Lemma 2.5. Let f = (f 1 , f 2 , . . . , f m ), φ = (φ, φ, . . . , φ) and w = (w 1 .w 2 . . . . .w m ). Then V φ f = V φ f 1 ⊗ V φ f 2 ⊗ · · · ⊗ V φ f m , and by Lemma 2.1, we have V T A (ϕ⊗ϕ) T A (f ⊗ g)(x, −ξ, t, ν) = V ϕ f (x − t, ξ)V ϕ g(x, ν − S(ξ)), where x, ν ∈ R d , and t, ξ ∈ R md . So, if (3.1) and conditions (2) and (4) v(x, t, ξ, ν)V T A (ϕ⊗ϕ) T A (f ⊗ g)(x, t, ξ, ν) L (r 0 ,r),(s,s m+1 ) (x,t,ξ,ν) ≤ w(x − t, ξ) (V ϕ f ) (x − t, ξ)w 0 (x, ν + S(ξ)) (V ϕ g) (x, ν + S(ξ)) L r 0 ,r (x,t) L s,s m+1 (ξ,ν) ≤ w(t, ξ) (V ϕ f ) (t, ξ) L p (t) w 0 (x, ν + S(ξ)) (V ϕ g) (x, ν + S(ξ)) L p 0 (x) L s,s m+1 (ξ,ν) ≤ w(t, ξ) (V ϕ f ) (t, ξ) L p (t) L q (ξ) w 0 (x, ν + S(ξ)) (V ϕ g) (x, ν + S(ξ)) L p 0 (x) L q m+1 (ν) = V ϕ f L p,q w V ϕ g L p 0 q m+1 w 0 . We now use that p ≤ p and q ≤ q implies f M p, q f M p,q , a property that clearly carries through to the class of weighted modulation spaces considered in this paper. If hypotheses (2) and (4) hold with strict inequalities, then we can increase p 0 to appropriate p 0 and q m+1 to appropriate q m+1 so that (2) and (4) will hold with equalities. 
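The conditions of Proposition 3.1 can also be explored by brute force. The sketch below is ours (the helper names and the grid discretization of the auxiliary exponents are assumptions, restricted to m = 2 and the unweighted case): it searches over permutations κ and grid-rational auxiliary exponents p̃_k ∈ [p_k, r_k] satisfying (A0) of Lemma 2.7 together with conditions (1) and (2) of Proposition 3.1, and it reproduces the three claims made in Example 3.2 below.

```python
from fractions import Fraction as F
from itertools import permutations, product

def inv(x):
    # reciprocal as an exact fraction (no infinite exponents needed here)
    return F(1, 1) / F(x)

def feasible(r0, p0, p, r, grid=8):
    """Brute-force search (our sketch, m = 2 only, unweighted) for a permutation
    kappa of {0, 1, 2} and auxiliary exponents ptilde_k in [p_k, r_k] satisfying
    (A0) of Lemma 2.7 and conditions (1), (2) of Proposition 3.1."""
    m = 2
    # candidate reciprocals 1/ptilde_k on a grid between 1/r_k and 1/p_k
    cand = [[inv(r[k]) + F(j, grid) * (inv(p[k]) - inv(r[k]))
             for j in range(grid + 1)] for k in range(m)]
    ir = {0: inv(r0), 1: inv(r[0]), 2: inv(r[1])}
    for kappa in permutations(range(m + 1)):
        z = kappa.index(0)
        for c1, c2 in product(cand[0], cand[1]):
            ipt = {0: inv(p0), 1: c1, 2: c2}
            # (A0): variables integrated before x need ptilde_k = r_k
            if any(ipt[kappa[l]] != ir[kappa[l]] for l in range(z)):
                continue
            # (1): partial sums for k = z, ..., m - 1
            if any(sum(ipt[kappa[l]] - ir[kappa[l]] for l in range(z + 1, k + 1))
                   > ir[0] - ipt[kappa[k + 1]] for k in range(z, m)):
                continue
            # (2): full sum dominates 1/r_0 - 1/p_0
            if sum(ipt[kappa[l]] - ir[kappa[l]] for l in range(z + 1, m + 1)) >= ir[0] - ipt[0]:
                return True
    return False

p, r = [F(10, 9), F(10, 9)], [2, 2]
print(feasible(1, 5, p, r))        # False: no kappa works for p_0 = 5
print(feasible(1, F(5, 3), p, r))  # True:  kappa = (1, 0, 2) works
print(feasible(1, 2, p, r))        # True:  identity works with ptilde_2 = 15/9
```

As Remark 2.3 anticipates, no greedy choice of κ suffices; the search above simply enumerates all orders.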
The resulting inequalities involving p̃_0 and q̃_{m+1} then again imply the weaker inequalities involving p_0 and q_{m+1}. Example 3.2. The conditions r_0 ≤ p_k ≤ r_k, k = 1, . . . , m, and ∑_{ℓ=1}^{m} (1/p_ℓ − 1/r_ℓ) ≥ 1/r_0 − 1/p_0 do not guarantee the existence of a permutation κ so that also ∑_{ℓ=1}^{k} (1/p_{κ(ℓ)} − 1/r_{κ(ℓ)}) ≤ 1/r_0 − 1/p_{κ(k+1)}, k = 0, . . . , m − 1. Indeed, consider for m = 2 the case r_0 = 1, p_1 = p_2 = 10/9, r_1 = r_2 = 2, and p_0 = 5. It is easy to see that no κ exists that allows us to apply Proposition 3.1 to obtain (3.2) for these parameters. Using r_0 = 1, p_1 = p_2 = 10/9, r_1 = r_2 = 2, we can choose κ(0) = 1, κ(1) = 0, κ(2) = 2 to obtain (3.2) for p_0 ≤ 5/3. Unfortunately, this is again not the best we can do. In fact, we can replace p_2 by p̃_2 = 15/9 ∈ [10/9, 18/9] = [p_2, r_2]. This choice allows us to choose for κ the identity, which leads to sufficiency for p_0 ≤ 2, which by inclusion also gives boundedness with r_0 = 1, p_1 = p_2 = 10/9, r_1 = r_2 = 2. Remark 3.3. Observe that those k with r_0 > r_k must satisfy κ^{−1}(k) < z; possibly there are also k with r_0 ≤ r_k and κ^{−1}(k) < z. Importantly, only those k with p_k < r_k and κ^{−1}(k) > z contribute to filling the gap between p_0 and r_0, see Remark 2.3. As an immediate consequence, we obtain our first main result. Theorem 3.4. Given 1 ≤ p_0, p, p̃, q, q̃, q_{m+1}, r_0, r, s, s_{m+1} ≤ ∞ with p ≤ p̃ ≤ r′ and q, s′ ≤ q̃. Let κ be a permutation on {0, . . . , m} and let z = κ^{−1}(0). Similarly, let ρ be a permutation on {1, 2, . . . , m + 1} and w = ρ^{−1}(m + 1), and assume (1) 1/r_0 + 1/p_{κ(k+1)} + ∑_{ℓ=z+1}^{k} (1/p_{κ(ℓ)} + 1/r_{κ(ℓ)}) ≤ k − z + 1, k = z, . . . , m − 1; (2) 1/r_0 + ∑_{ℓ=z+1}^{m} (1/p_{κ(ℓ)} + 1/r_{κ(ℓ)}) ≥ m − z + 1/p_0; (3) 1/s_{m+1} + 1/q_{ρ(k)} + ∑_{ℓ=k+1}^{w−1} (1/q_{ρ(ℓ)} + 1/s_{ρ(ℓ)}) ≥ w − k, k = 1, . . . , w − 1; (4) 1/s_{m+1} + ∑_{ℓ=1}^{w−1} (1/q_{ρ(ℓ)} + 1/s_{ρ(ℓ)}) ≥ w − 1 + 1/q_{m+1}. Let v be a weight function on R^{2(m+1)d} and assume that w_0, w_1, . . .
, w_m are weights on R^{2d} such that v(x, t, −ξ, ν)^{−1} ≤ w_0(x, ν + S(ξ))^{−1} w_1(x − t_1, ξ_1) · . . . · w_m(x − t_m, ξ_m). Assume that σ ∈ M^{(r_0,r),κ;(s,s_{m+1}),ρ}_v. Then the multilinear pseudodifferential operator T_σ extends to a bounded multilinear operator from M^{p_1,q_1}_{w_1} × . . . × M^{p_m,q_m}_{w_m} into M^{p_0,q_{m+1}}_{w_0}. Proof. Let f_k ∈ M^{p_k,q_k}_{w_k}, k = 1, . . . , m, ϕ ∈ S(R^d), and denote ϕ = (ϕ, . . . , ϕ). Note that sup{|⟨·, g⟩|, g ∈ M^{p′_0,q′_{m+1}}_{1/w_0}} defines a norm which is equivalent to ‖·‖_{M^{p_0,q_{m+1}}_{w_0}} for p_0, q_{m+1} ∈ [1, ∞]. For g ∈ M^{p′_0,q′_{m+1}}_{1/w_0} we estimate as follows: |⟨T_σ f, g⟩| = |⟨σ, R(f, g)⟩| = |⟨V_{R(ϕ,ϕ)}σ, V_{R(ϕ,ϕ)}R(f, g)⟩| ≤ ‖σ‖_{M^{(r_0,r),κ;(s,s_{m+1}),ρ}_v} ‖R(f, g)‖_{M^{(r′_0,r′)_κ,(s′,s′_{m+1})_ρ}_{1/w}}. Using the conjugate indices r′_0, r′_κ, s′_{m+1}, s′_ρ, it is easy to see that the conditions on the indices (1)-(4) are equivalent to those in Proposition 3.1. Therefore, ‖R(f, g)‖_{M^{(r′_0,r′)_κ,(s′,s′_{m+1})_ρ}_{1/w}} ≤ C ‖f_1‖_{M^{p_1,q_1}_{w_1}} . . . ‖f_m‖_{M^{p_m,q_m}_{w_m}} ‖g‖_{M^{p′_0,q′_{m+1}}_{1/w_0}}. Note that the criteria on time and frequency are separated. Even when it comes to the order of integration, we do not link these; that is, the permutations κ and ρ are not necessarily identical. Corollary 3.5. If 1 ≤ r′_0 ≤ p_1 ≤ r′_1 ≤ p_2 ≤ . . . ≤ r′_{m−1} ≤ p_m ≤ r′_m ≤ ∞; 1 ≤ s′_1 ≤ q_1 ≤ s′_2 ≤ q_2 ≤ . . . ≤ q_{m−1} ≤ s′_m ≤ q_m ≤ s′_{m+1} ≤ ∞; and 1/r_0 + ∑_{ℓ=1}^{m} (1/p_ℓ + 1/r_ℓ) ≥ m + 1/p_0; 1/s_{m+1} + ∑_{ℓ=1}^{m} (1/q_ℓ + 1/s_ℓ) ≥ m + 1/q_{m+1}; then the conclusion of Theorem 3.4 holds for any symbol σ ∈ M^{(r_0,r)_κ,(s,s_0)_ρ}_v, where κ, ρ are the identity permutations. Proof. Note that since κ, ρ are the identity permutations, z = 0 and w = m + 1. The conditions (1) 1/r_0 + 1/p_{k+1} + ∑_{ℓ=1}^{k} (1/p_ℓ + 1/r_ℓ) ≤ k + 1, k = 0, . . . , m − 1, and (2) 1/s_{m+1} + 1/q_k + ∑_{ℓ=k+1}^{m} (1/q_ℓ + 1/s_ℓ) ≥ m − k + 1, k = 1, . . . , m, follow from the monotonicity conditions. Applications. In Section 4.1 we simplify the conditions of Theorem 3.4 in the case of bilinear operators, that is, m = 2. The focus of Section 4.2 lies on establishing boundedness of the bilinear Hilbert transform on products of modulation spaces.
We stress that these results are beyond the reach of existing methods of time-frequency analysis of bilinear pseudodifferential operators as developed in [7,6,8,9]. Finally, in Section 4.3 we consider the trilinear Hilbert transform. Bilinear pseudodifferential operators. A bilinear pseudodifferential operator with symbol σ is formally defined by (4.1) T σ (f, g)(x) = R×R σ(x, ξ 1 , ξ 2 )f (ξ 1 )ĝ(ξ 2 ) dξ 1 dξ 2 . For m = 2, Theorem 3.4 simplifies to the following. Theorem 4.1. Let 1 ≤ p 0 , p 1 , p 2 , q 1 , q 2 , q 3 , r 0 , r 1 , r 2 , s 1 , s 2 , s 3 ≤ ∞. If 1/p 1 + 1/r 1 , 1/p 2 + 1/r 2 ≥ 1 and one of the following 1 p 0 ≤ 1 r 0 , (using κ = (1, 2, 0) or (2, 1, 0)); (1) 1 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 p 1 , r 1 ≤ p 0 , r 0 , (κ = (2, 0, 1)); (2) 1 + 1 p 0 ≤ 1 r 0 + 1 r 2 + 1 p 2 , r 2 ≤ p 0 , r 0 , (κ = (1, 0, 2)); (3) 2 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 r 2 + 1 max{p 1 , r ′ 0 } + 1 p 2 , r 2 ≤ p 0 , r 1 , r 2 ≤ r 0 , (κ = (0, 1, 2)); (4) 2 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 r 2 + 1 max{p 2 , r ′ 0 } + 1 p 1 , r 1 ≤ p 0 , r 1 , r 2 ≤ r 0 , (κ = (0, 2, 1)); (5) as well as one of 1 q 3 ≤ 1 s 3 , (using ρ = (3, 1, 2) or (3, 2, 1)); (1) 1 + 1 q 3 ≤ 1 q 1 + 1 s 1 + 1 s 3 , s 3 ≤ q 3 , s 1 , q ′ 1 , (ρ = (1, 3, 2)); (2) 1 + 1 q 3 ≤ 1 q 2 + 1 s 2 + 1 s 3 , s 3 ≤ q 3 , s 2 , q ′ 2 , (ρ = (2, 3, 1)); (3) 2 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 2 + 1 s 3 , s 3 ≤ q ′ 2 , s 2 ,(4)2 + 1 q 3 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 1 + 1 s 2 + 1 s 3 , (ρ = (1, 2, 3)); 2 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 1 + 1 s 3 , s 3 ≤ q ′ 1 , s 1 ,(5)2 + 1 q 3 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 1 + 1 s 2 + 1 s 3 , (ρ = (1, 3, 2)), hold. Assume that w 0 , w 1 , w 2 , and v are weight functions satisfying v(x, t 1 , t 2 , ξ 1 , ξ 2 , ν) −1 ≤ w 0 (x, ν + ξ 1 + ξ 2 ) −1 · w 1 (x − t 1 , ξ 1 ) · w 2 (x − t 2 , ξ 2 ). 
If σ ∈ M (r 0 ,r 1 ,r 2 ),κ;(s 1 ,s 2 ,s 3 ),ρ , the bilinear pseudodifferential operator T σ initially defined on S(R d ) × S(R d ) by (4.1) extends to a bounded bilinear operator from M p 1 ,q 1 w 1 × M p 2 ,q 2 w 2 into M p 0 ,q 3 w 0 . Moreover, there exists a constant C > 0, such that we have Proof. This result is derived from Theorem 3.4 for m = 2, namely, we establish conditions on the p 0 , p 1 , p 2 , r 0 , r 1 , r 2 , q 1 , q 2 , q 3 , s 1 , s 2 , s 3 for the existence of p 1 , p 2 , q 1 , q 2 satisfying the conditions of Theorem 3.4. T σ (f 1 , f 2 ) M p 0 ,q 3 w 0 ≤ C σ M ( If κ = (1 2 0) or κ = (2 1 0), then z = 2 in Theorem 3.4 and we require in addition only 1 r 0 ≥ 1 p 0 . For the remaining cases, we have to show that the conditions above imply the existence of p 1 ≥ p 1 , p 2 ≥ p 2 which allow for the application of Theorem 3.4. If κ = (1 0 2), we have z = 1, and we seek, with notation as before, P 1 and P 2 with P 1 ≤ P 1 ; P 2 ≤ P 2 ; P 1 + R 1 ≥ 1; P 2 + R 2 ≥ 1; R 0 + P 2 ≤ 1; R 0 + P 2 + R 2 ≥ 1 + P 0 ; that is, 1 − R 1 ≤ P 1 ≤ P 1 ; 1 − R 0 − R 2 + P 0 , 1 − R 2 ≤ P 2 ≤ P 2 , 1 − R 0 ; (4.2) which defines a non empty set if and only if P 1 + R 1 ≥ 1, P 2 + R 2 ≥ 1, R 2 ≥ P 0 , R 0 , 1 + P 0 ≤ R 0 + R 1 + P 2 . For κ = (0 1 2) we have z = 0 in Theorem 3.4 and we require that some P 1 and P 2 satisfy P 1 ≤ P 1 ; P 2 ≤ P 2 ; P 1 + R 1 ≥ 1; P 2 + R 2 ≥ 1; R 0 + P 2 ≤ 1; R 0 + P 2 + P 1 + R 1 ≤ 2; R 0 + P 2 + R 2 + P 1 + R 1 ≥ 2 + P 0 ; that is, 1 − R 1 ≤ P 1 ≤ P 1 , 1 − R 0 ; (4.3) 1 − R 2 ≤ P 2 ≤ P 2 ; (4.4) 2 + P 0 − R 0 − R 1 − R 2 ≤ P 1 + P 2 ≤ 2 − R 0 − R 1 . (4.5) Note that (4.3) defines a vertical strip in the ( P 1 , P 2 ) plane which is non-empty if and only if P 1 + R 1 ≥ 1 and R 0 ≤ R 1 . Similarly, (4.4) defines a horizontal strip which is not empty if we assume P 2 + R 2 ≥ 1. Lastly, the diagonal strip given by (4.5) is nonempty if and only if P 0 ≤ R 2 . 
To obtain a boundedness result, we still need to establish that the diagonal strip meets the rectangle given by the intersection of the horizontal and vertical strips. This is the case if the upper right hand corner of the rectangle is above the lower diagonal given by P̃_1 + P̃_2 = 2 + P_0 − R_0 − R_1 − R_2, that is, if min{P_1, 1 − R_0} + P_2 ≥ 2 + P_0 − R_0 − R_1 − R_2, and if the lower left corner of the rectangle lies below the upper diagonal, that is, if 1 − R_1 + 1 − R_2 ≤ 2 − R_0 − R_1, which holds if R_0 ≤ R_2. Let us now turn to the frequency side. If ρ = (3 2 1) or ρ = (3 1 2), we have w = 1 and an application of Theorem 3.4 requires the single but strong assumption Q_3 ≤ S_3. For ρ = (1 3 2) we have w = 2 in Theorem 3.4. To satisfy the conditions, we need to establish the existence of Q̃_1 and Q̃_2 satisfying Q̃_1 ≤ Q_1; Q̃_2 ≤ Q_2; Q̃_1 + S_1 ≤ 1; Q̃_2 + S_2 ≤ 1; S_3 + Q̃_1 ≥ 1; S_3 + Q̃_1 + S_1 ≥ 1 + Q_3. The existence of such Q̃_2 is trivial, so we are left with 1 + Q_3 − S_1 − S_3, 1 − S_3 ≤ Q̃_1 ≤ Q_1, 1 − S_1. Note that this inequality is exactly (4.2) with S_3 replacing R_2, S_1 replacing R_0, Q_3 replacing P_0, and Q̃_1, Q_1 in place of P̃_2, P_2. We conclude that for the existence of Q̃_1, we require S_3 ≥ S_1, Q_3, 1 − Q_1, and 1 + Q_3 ≤ Q_1 + S_1 + S_3. For ρ = (1 2 3) we have w = 3 in Theorem 3.4. We need to establish the existence of Q̃_1 and Q̃_2 satisfying Q̃_1 ≤ Q_1; Q̃_2 ≤ Q_2; Q̃_1 + S_1 ≤ 1; Q̃_2 + S_2 ≤ 1; S_3 + Q̃_2 ≥ 1; S_3 + Q̃_2 + Q̃_1 + S_2 ≥ 2; S_3 + Q̃_1 + Q̃_2 + S_2 + S_1 ≥ 2 + Q_3; that is, choosing Q̃_1 = min{Q_1, 1 − S_1} and Q̃_2 = min{Q_2, 1 − S_2}, we require 1 ≤ min{Q_2, 1 − S_2} + S_3; 2 ≤ min{Q_1, 1 − S_1} + min{Q_2, 1 − S_2} + S_2 + S_3; 2 + Q_3 ≤ min{Q_1, 1 − S_1} + min{Q_2, 1 − S_2} + S_1 + S_2 + S_3. Proof of Theorem 1.1. Theorem 1.1 now follows from choosing κ and ρ to be the identity permutations, and r_0 = s_1 = s_2 = ∞, r_1 = r_2 = s_3 = 1.
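As a sanity check on the case analysis above, the following sketch (ours; the helper names are assumptions) evaluates two of the sufficient conditions of Theorem 4.1, the time-side condition for κ = (0, 1, 2) and the frequency-side condition for ρ = (1, 2, 3), at the endpoint used in Example 4.3 below: p_1 = p_2 = q_1 = q_2 = 2 with σ ∈ M^{(∞,1,1),(2,2,1)} and p_0 = q_3 = 1.

```python
from fractions import Fraction as F

INF = float("inf")
inv = lambda x: F(0) if x == INF else F(1, 1) / F(x)
# Hoelder conjugate exponent; conventions 1' = inf and inf' = 1
conj = lambda x: INF if x == 1 else (1 if x == INF else F(1, 1) / (1 - inv(x)))

def time_side_identity(p0, p1, p2, r0, r1, r2):
    # time-side condition of Theorem 4.1 for kappa = (0, 1, 2)
    return (r2 <= p0 and r1 <= r0 and r2 <= r0 and
            2 + inv(p0) <= inv(r0) + inv(r1) + inv(r2)
            + inv(max(p1, conj(r0))) + inv(p2))

def freq_side_identity(q1, q2, q3, s1, s2, s3):
    # frequency-side condition of Theorem 4.1 for rho = (1, 2, 3)
    a = inv(max(q1, conj(s1))) + inv(max(q2, conj(s2)))
    return (s3 <= conj(q2) and s3 <= s2 and
            2 <= a + inv(s2) + inv(s3) and
            2 + inv(q3) <= a + inv(s1) + inv(s2) + inv(s3))

# Example 4.3 endpoint: sigma in M^{(inf,1,1),(2,2,1)} maps L^2 x L^2 to M^{1,1}
print(time_side_identity(p0=1, p1=2, p2=2, r0=INF, r1=1, r2=1))   # True
print(freq_side_identity(q1=2, q2=2, q3=1, s1=2, s2=2, s3=1))     # True
```

Both inequalities hold with equality at this endpoint, which is why p_0 = q_3 = 1 cannot be improved within this scheme.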
Note that this result covers and extends Theorem 3.1 in [7]. Remark 4.2. Using Remark 2.8, we observe that M ∞,1 (R 3d ) M (∞,1,1),(∞,∞,1) (R 3d ). Indeed, in both cases we have the same decay parameters, but different integration orders, namely M ∞,1 x → ∞, ξ 1 → ∞, ξ 2 → ∞, ν → 1, t 1 → 1, t 2 → 1; M (∞,1,1),(∞,∞,1) x → ∞, t 1 → 1, t 2 → 1, ξ 1 → ∞, ξ 2 → ∞, ν → 1. Inclusion follows from the fact that we always moved a large exponent to the right of a small exponent. Note that for any r ∈ M 1,∞ (R) \ M ∞,1 (R), for example, a chirped signal r(ξ) = e 2πiξ 2 u(ξ) with u(ξ) ∈ L 2 \ L 1 , we have σ(x, ξ 1 , ξ 2 ) = r(ξ 1 ) ∈ M (∞,1,1),(∞,∞,1) \ M ∞,1 . Example 4.3. With κ and ρ are the identity, that is, κ = (0, 1, 2) and ρ = (1, 2, 3), we illustrate the applicability of Theorem 4.1 for maps on L 2 × L 2 = M 2,2 × M 2,2 , that is, p 1 = p 2 = q 1 = q 2 = 2. On the time side, we require r 1 , r 2 ≤ 2, r 0 and r 2 ≤ p 0 as well as 3 2 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 r 2 + 1 max{2, r ′ 0 } . Our goal is to obtain results for r 0 large, hence, we assume r 0 ≥ 2. (In case of r 0 ≤ 2, the last inequality above does not depend on r 0 , and we can improve the result by fixing r 0 = 2.) We obtain the range of applicability r 1 , r 2 ≤ 2 ≤ r 0 , and r 2 ≤ p 0 , and 1 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 r 2 . On the frequency side, we have to satisfy the conditions s 3 ≤ 2, s 2 , 2 ≤ 1 max{2, s ′ 1 } + 1 max{2, s ′ 2 } + 1 s 2 + 1 s 3 , 2 + 1 q 3 ≤ 1 max{2, s ′ 1 } + 1 max{2, s ′ 2 } + 1 s 1 + 1 s 2 + 1 s 3 . Let us assume s 1 ≤ 2 ≤ s 2 , then we have the range of applicability s 1 , s 3 ≤ 2 ≤ s 2 , 1 2 + 1 s 1 , 1 2 + 1 q 3 ≤ 1 s 2 + 1 s 3 . The range of applicability gives exponents that guarantee that a bilinear pseudodifferential operator maps boundedly L 2 × L 2 into M p 0 ,q 3 if σ ∈ M (r 0 ,r 1 ,r 2 ),(s 1 ,s 2 ,s 3 ) . In particular, when σ ∈ M (∞,1,1),(2,2,1) we can take p 0 = q 3 = 1. So we get that T σ maps L 2 × L 2 into M 1,1 ⊂ M 1,∞ . 4.2. The bilinear Hilbert transform. 
We now consider boundedness properties of the bilinear Hilbert transform on modulation spaces. Recall that this operator is defined for f, g ∈ S(R) by BH(f, g)(x) = lim ǫ→0 |y|>ǫ f (x + y)g(x − y) dy y . Equivalently, this operator can be written as a Fourier multiplier, that is, a bilinear pseudodifferential operator whose symbol is independent of the space variable, with symbol σ BH (x, ξ 1 , ξ 2 ) = σ(ξ 1 − ξ 2 ), where σ(x) = −πisign (x), x = 0. Our first goal is to identify which of the (unweighted) spaces M (r 0 ,r 1 ,r 2 ),κ;(s 1 ,s 2 ,s 0 ),ρ the symbol σ BH belongs to. To this end consider the window function Ψ(x, ξ 1 , ξ 2 ) = ψ(x)ψ(ξ 2 )ψ(ξ 1 − ξ 2 ), where ψ ∈ S(R) such that ψ(x) = ψ 1 (x) − ψ 1 (−x) with ψ 1 ∈ S(R), 0 ≤ ψ 1 (x) ≤ 1 for all x ∈ R. In addition, we require that the support of ψ 1 is strictly included in (0, 1). Then V Ψ σ BH (x, t 1 , t 2 , ξ 1 , ξ 2 , ν) = V ψ 1(x, ν) V ψ σ(ξ 1 − ξ 2 , t 1 ) V ψ 1(ξ 2 , t 1 + t 2 ) Assume that the two permutations κ of {0, 1, 2}, and ρ of {1, 2, 3} are identities. Moreover, suppose that all the weights are identically equal to 1. Proof. Let r > 1. We shall integrate V Ψ σ BH (x, t, ξ, ν) = V ψ 1(x, ν) V ψ σ(ξ 1 − ξ 2 , t 1 ) V ψ 1(ξ 2 , t 1 + t 2 ) in the order x → r 0 = ∞ t 1 → r 1 = 1 t 2 → r 2 = r > 1 ξ 1 → s 1 = ∞ ξ 2 → s 2 = ∞ ν → s 0 = 1. We estimate σ BH M (∞,∞,1),(∞,1,r) = R sup ξ 1 ,ξ 2 R R sup x |V Ψ σ BH (x, t, ξ, ν)|dt 1 r dt 2 1/r dν = R sup ξ 1 ,ξ 2 R R sup x |V ψ 1(x, ν) V ψ σ(ξ 1 − ξ 2 , t 1 ) V ψ 1(ξ 2 , t 1 + t 2 )|dt 1 r dt 2 1/r dν = ψ L 1 sup ξ 1 ,ξ 2 R R |V ψ σ(ξ 1 − ξ 2 , t 1 ) V ψ 1(ξ 2 , t 1 + t 2 )|dt 1 r dt 2 1/r ≤ ψ L 1 sup ξ 1 ,ξ 2 |V ψ σ(ξ 1 − ξ 2 , ·)| * |V ψ 1(ξ 2 , ·)| L r ≤ ψ L 1 sup ξ 1 ,ξ 2 V ψ σ(ξ 1 − ξ 2 , ·) L r V ψ 1(ξ 2 , ·) L 1 = ψ 2 L 1 sup ξ 1 V ψ σ(ξ 1 ·) L r , where we have repeatedly used the fact that V ψ 1(x, ν) = e 2πixνψ (ν), and V ψ 1 ∈ L ∞ (x)L 1 (ν), that is R sup x |V ψ 1(x, ν)|dν = ψ L 1 < ∞. Thus, we are left to estimate sup ξ V ψ σ(ξ·) L r . 
Recall that ψ(x) = ψ_1(x) − ψ_1(−x); hence, we have V_ψσ(ξ, t) = e^{−2πiξt} ( − ∫_{−∞}^{−ξ} e^{−2πiyt} ψ(y) dy + ∫_{−ξ}^{∞} e^{−2πity} ψ(y) dy ). A series of straightforward calculations yields |V_ψσ(ξ, t)| = |ψ̂_1(t) − ψ̂_1(−t)| if |ξ| ≥ 1; |ψ̂_1(−t) − χ_{[0,−ξ]} * ψ̂_1(t) + χ_{[−ξ,1]} * ψ̂_1(t)| if −1 ≤ ξ ≤ 0; |ψ̂_1(t) − χ_{[ξ,1]} * ψ̂_1(−t) + χ_{[0,ξ]} * ψ̂_1(−t)| if 0 ≤ ξ ≤ 1, where χ_{[a,b]} denotes the characteristic function of [a, b]. We note that χ_{[0,−ξ]}, χ_{[−ξ,1]}, χ_{[ξ,1]}, χ_{[0,ξ]} ∈ L^r uniformly for |ξ| ≤ 1 for each r > 1. For |ξ| ≥ 1, we have ‖V_ψσ(ξ, ·)‖_{L^q} ≤ 2‖ψ̂_1‖_{L^q} for any q ≥ 1. Now consider −1 ≤ ξ ≤ 0; then ‖V_ψσ(ξ, ·)‖_{L^r} ≤ ‖ψ̂_1‖_{L^r} + ‖χ_{[0,−ξ]} * ψ̂_1‖_{L^r} + ‖χ_{[−ξ,1]} * ψ̂_1‖_{L^r} ≤ ‖ψ̂_1‖_{L^r} + ‖ψ̂_1‖_{L^1} (‖χ_{[0,−ξ]}‖_{L^r} + ‖χ_{[−ξ,1]}‖_{L^r}) ≤ ‖ψ̂_1‖_{L^r} + C‖ψ̂_1‖_{L^1}, where C > 0 is a constant that depends only on r. Using a similar estimate for 0 ≤ ξ ≤ 1, we conclude that sup_ξ ‖V_ψσ(ξ, ·)‖_{L^r} ≤ C < ∞, where C depends only on ψ_1 and r. Observe that σ_BH ∈ M^{(∞,1,r),(∞,∞,1)}(R^3) \ M^{(∞,1,1),(∞,∞,1)}(R^3) for all r > 1. Consequently, to obtain a boundedness result for the bilinear Hilbert transform, we cannot apply any of the existing results on bilinear pseudodifferential operators. However, using the symbol classes introduced we obtain the following result: Theorem 4.5. Let 1 ≤ p_0, p_1, p_2, q_1, q_2, q_3 ≤ ∞ satisfy 1/p_1 + 1/p_2 > 1/p_0 and 1/q_1 + 1/q_2 ≥ 1 + 1/q_3. Then the bilinear Hilbert transform extends to a bounded bilinear operator from M^{p_1,q_1} × M^{p_2,q_2} into M^{p_0,q_3}. Moreover, there exists a constant C > 0 such that ‖BH(f, g)‖_{M^{p_0,q_3}} ≤ C ‖f‖_{M^{p_1,q_1}} ‖g‖_{M^{p_2,q_2}}. In particular, for any 1 ≤ p, q ≤ ∞ and ǫ > 0, BH continuously maps M^{p,q} × M^{p′,q′} into M^{1+ǫ,∞} and we have ‖BH(f, g)‖_{M^{1+ǫ,∞}} ≤ C ‖f‖_{M^{p,q}} ‖g‖_{M^{p′,q′}}. Proof. Since the symbol σ_BH of BH satisfies σ_BH ∈ M^{(∞,1,r),(∞,∞,1)}, the proof follows from Theorem 4.1.
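Before verifying the remaining inequalities, note that the exponent hypotheses of Theorem 4.5 are easy to evaluate mechanically. The following sketch (ours, not from the paper) checks them at the duality-type endpoint M^{p,q} × M^{p′,q′} → M^{1+ǫ,∞} stated in the theorem.

```python
from fractions import Fraction as F

INF = float("inf")
inv = lambda x: F(0) if x == INF else F(1, 1) / F(x)

def bh_bounded(p1, q1, p2, q2, p0, q3):
    """Exponent test of Theorem 4.5 for BH: M^{p1,q1} x M^{p2,q2} -> M^{p0,q3},
    i.e. 1/p1 + 1/p2 > 1/p0 and 1/q1 + 1/q2 >= 1 + 1/q3."""
    return inv(p1) + inv(p2) > inv(p0) and inv(q1) + inv(q2) >= 1 + inv(q3)

# duality-type endpoint: M^{p,q} x M^{p',q'} -> M^{1+eps,inf}
p, q, eps = 3, 2, F(1, 100)
pp, qq = F(1, 1) / (1 - inv(p)), F(1, 1) / (1 - inv(q))   # conjugate exponents
print(bh_bounded(p, q, pp, qq, 1 + eps, INF))  # True
```

The strict inequality 1/p_1 + 1/p_2 > 1/p_0 is what forces ǫ > 0: the endpoint p_0 = 1 itself fails the test.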
Indeed, on the time side, all simple inequalities hold and we are left to check 2 + 1 p 0 ≤ 1 r 0 + 1 r 1 + 1 r 2 + 1 max{p 1 , r ′ 0 } + 1 p 2 , which is with 1 r = 1 − ǫ 2 + 1 p 0 ≤ 0 + 1 + 1 − ǫ + 1 max{p 1 , 1} + 1 p 2 . On the frequency side, the conditions 2 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 2 + 1 s 3 , s 3 ≤ q ′ 2 , s 2 , 2 + 1 q 3 ≤ 1 max{q 1 , s ′ 1 } + 1 max{q 2 , s ′ 2 } + 1 s 1 + 1 s 2 + 1 s 3 , are clearly satisfied whenever 2 + 1 q 3 ≤ 1 max{q 1 , 1} + 1 max{q 2 , 1} + 0 + 0 + 0. Remark 4.6. It was proved in [44,45] that the bilinear Hilbert transform BH continuously maps L p 1 × L p 2 into L p where 1 p = 1 p 1 + 1 p 2 , 1 ≤ p 1 , p 2 ≤ ∞ and 2/3 < p ≤ ∞. Our results give that if 1 < p, q, p 1 < ∞ then H maps continuously M p 1 ,q × M p ′ 1 ,q ′ into M p,∞ . One can use embeddings between modulation spaces and Lebesgue spaces to get some "mixed" boundedness results. For example, assume that q ≥ 2 and q ′ ≤ p 1 ≤ q, then it is known that (see [60,Proposition 1.7 ]) L p 1 ⊂ M p 1 ,q and M p ′ 1 ,q ′ ⊂ L p ′ 1 . Consequently, it follows from Theorem 4.5 that BH continuously maps L p 1 ×M p ′ 1 ,q ′ into M p,∞ ⊃ L p . 4.3. The trilinear Hilbert transform. In this final section we consider the trilinear Hilbert transform TH given formally by TH(f, g, h)(x) = lim ǫ→0 |t|>ǫ f (x − t)g(x + t)h(x + 2t) dt t . The trilinear Hilbert transform can be written as a trilinear pseudodifferential operator, or more specifically as a trilinear Fourier multiplier given by TH(f, g, h)(x) = R×R×R σ TH (x, ξ 1 , ξ 2 , ξ 3 )f (ξ 1 )ĝ(ξ 2 )ĥ(ξ 3 )e 2πix(ξ 1 +ξ 2 +ξ 3 ) dξ 1 dξ 2 dξ 3 where σ TH (x, ξ 1 , ξ 2 , ξ 3 ) = σ(ξ 1 − ξ 2 − 2ξ 3 ) = πisign(ξ 1 − ξ 2 − 2ξ 3 ). Recall from Section 4.2 that ψ ∈ S(R) is chosen such that ψ(x) = ψ 1 (x) − ψ 1 (−x) with ψ 1 ∈ S(R), 0 ≤ ψ 1 (x) ≤ 1 for all x ∈ R. Next we define Ψ(x, ξ 1 , ξ 2 , ξ 3 ) = ψ(x)ψ(ξ 2 )ψ(ξ 3 )ψ(ξ 1 − ξ 2 −2ξ 3 ). 
We can now compute the symbol window Fourier transform V Ψ σ TH of σ TH with respect to Ψ and obtain V Ψ σ TH (x, t, ξ, ν) = V ψ 1(x, ν)V ψ 1(ξ 2 , −t 1 − t 2 )V ψ 1(ξ 2 , −2t 1 − t 3 )V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −t 1 )|. Observe that |V g 1(x, η)| = |ĝ(η)|. Hence, |V Ψ σ TH (x, t, ξ, ν)| = |ψ(ν)||ψ(−t 1 − t 2 )||ψ(−2t 1 − t 3 )||V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −t 1 )|. But by the choice of ψ we see thatψ(−η) = −ψ(η). Proposition 4.7. For r > 1, we have σ TH ∈ M (∞,1,r,r),(∞,∞,∞,1) . In particular, this conclusion holds when r = 1 + ǫ for all ǫ > 0. Proof. Let r > 1. We proceed as in the proof of Proposition 4.4, and integrate V Ψ σ TH (x, t, ξ, ν) in the following order: x → r 0 = ∞, t 1 → r 1 = 1, t 2 → r 2 = r > 1, t 3 → r 3 = r > 1, ξ 1 → s 1 = ∞, ξ 2 → s 2 = ∞, ξ 3 → s 3 = ∞, ν → s 0 = 1. In particular, we estimate σ TH M (∞,1,r,r),(∞,∞,∞,1) = R dν sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 sup x |V Ψ σ H (x, t, ξ, ν)| r 1/r = R dν sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 sup x |ψ(ν)| r |ψ(−t 1 − t 2 )| r |ψ(−2t 1 − t 3 )| r |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −t 1 )| r 1/r = ψ 1 sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 |ψ(−t 1 − t 2 )| r |ψ(−2t 1 − t 3 )| r |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −t 1 )| r 1/r = ψ 1 sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 |ψ(t 2 + t 1 )| r |ψ(t 3 + 2t 1 )| r |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −t 1 )| r 1/r = ψ 1 sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 |ψ(t 2 − t 1 )| r |ψ(2(t 3 − t 1 ))| r |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , t 1 )| r 1/r = ψ 1 sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 R dt 1 |ψ(t 1 )| r |ψ(2(t 2 − t 3 − t 1 ))| r |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , t 2 − t 1 )| r 1/r = ψ 1 sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 |ψ| r * (|T t 3ψ 2 | r | V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , ·)| r )(t 2 ) 1/r whereψ 2 (ξ) =ψ(2ξ), and V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , η) = V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , −η). 
Consequently, σ TH M (∞,1,r,r),(∞,∞,∞,1) ≤ ψ 1 ψ r sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 R dt 2 |ψ 2 (t 2 − t 3 )| r | V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , t 2 )| r ) 1/r = ψ 1 ψ r sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 |ψ 2 | r * | V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , ·)| r (t 3 ) 1/r ≤ ψ 1 ψ r ψ 2 r sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 | V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , t 3 )| r 1/r = ψ 1 ψ r ψ 2 r sup ξ 1 ,ξ 2 ,ξ 3 R dt 3 |V ψ σ(ξ 1 − ξ 2 − 2ξ 3 , t 3 )| r 1/r . Note that the condition for k = 1 follows from the k = 2 condition since p ′ 3 ≤ r ′ . For the existence of p 1 ≥ p 1 , satisfying the k = 2 and k = 3 conditions, we require 2 − 1 r ≥ 2 − 2 r + 1 p 0 , which is r ≤ p 0 , a condition that is met. Some p 1 ≥ p 1 will satisfy all conditions if 1 p 1 ≥ 1 − 2 r + 1 p 0 . Indeed, 1 − 2 r + 1 p 0 = 1 − 1 r + 1 p 0 − 1 r ≤ 1 − 1 r ≤ 1 p 1 . We now consider the conditions of Theorem 3.4 on the frequency side. We choose ρ to be the identity permutation on {1, 2, 3, 4}, s 1 = s 2 = s 3 = ∞, s 4 = 1. We now have to consider existence of q 1 ≥ s ′ 1 = 1, q 2 ≥ s ′ 2 = 1, and q 3 ≥ s ′ 3 = 1 with k = 1 : 1 q 1 + 1 q 2 + 1 q 3 ≥ 2; k = 2 : 1 q 2 + 1 q 3 ≥ 1; k = 3 : 1 q 3 ≥ 0 ; k = 4 : 1 q 1 + 1 q 2 + 1 q 3 ≥ 2 + 1 q 4 . These conditions reduce to 1 q 2 + 1 q 3 ≥ 1, 1 q 1 + 1 q 2 + 1 q 3 ≥ 2 + 1 q 4 . To assume optimally large q 1 , q 2 , q 3 , we chooseq 2 = q 2 = q, q ′ 3 =q ′ 3 = q ′ and 1 q 1 = 1 + 1 q 4 , the latter only being satisfied if q 1 = 1 and q 4 = ∞. In [50,Theorem 13] it is proved that the trilinear Hilbert transform is bounded from L p ×L q ×A into L r whenever 1 < p, q ≤ ∞, 2/3 < r < ∞ and 1 p + 1 q = 1 r , where A is the Fourier algebra. In particular, for p = q = 2, then r = 1 and the operator maps boundedly L 2 × L 2 × A into L 1 . From [60, Proposition 1.7] we know that when p ∈ (1, 2) and p < q ′ < p ′ , then FL q ′ ⊂ M p ′ ,q ′ . We can then conclude that TH continuously maps M p 1 ,1 × M p,q × FL q ′ into M p 0 ,∞ . Figure 1 . 1Depiction of condition (A2) in Lemma 2.2. 
After adding a pair of colored fields, the top row must always exceed the lower row, with the lower row finally catching up in the last step, see Remark 2.3. If q ≥ p, Minkowski's inequality for integrals implies L^{(p,q);(0,1)} ⊆ L^{(p,q);(1,0)}; for example, L^{(1,∞);(0,1)} ⊆ L^{(1,∞);(1,0)}. This inclusion is strict in general: for example, choose F(x, t_1) = g(x − t_1) ∈ L^{(1,∞);(1,0)} \ L^{(1,∞);(0,1)} for any function g ∈ L^1. Lemma 2.9. Let ρ be a permutation on {1, . . . , m + 1}, w = ρ^{−1}(m + 1), and let 1 ≤ q_k, s_k ≤ ∞, k = 1, . . . , m + 1, satisfy (B0) q_{ρ(ℓ)} = s_{ρ(ℓ)}, ℓ = w, . . . , m; (B1) q_{ρ(k)} ≥ s_{ρ(k)}, k = 1, . . . , w − 1. If (3.1) and conditions (2) and (4) above hold with equality, then (A1)-(A3) in Lemma 2.2 and (B1)-(B3) in Lemma 2.5 will hold. Then for σ ∈ M^{(r_0,r),κ;(s,s_{m+1}),ρ}_v, the multilinear pseudodifferential operator T_σ, defined initially for f_k ∈ S(R^d), k = 1, 2, . . . , m, by (1.2), extends to a bounded multilinear operator from M^{p_1,q_1}_{w_1} × M^{p_2,q_2}_{w_2} × . . . × M^{p_m,q_m}_{w_m} into M^{p_0,q_{m+1}}_{w_0}. Moreover, there exists a constant C so that for all f we have ‖T_σ f‖_{M^{p_0,q_{m+1}}_{w_0}} ≤ C ‖σ‖_{M^{(r_0,r),κ;(s,s_{m+1}),ρ}_v} ‖f_1‖_{M^{p_1,q_1}_{w_1}} . . . ‖f_m‖_{M^{p_m,q_m}_{w_m}}, and ‖T_σ(f_1, f_2)‖_{M^{p_0,q_3}_{w_0}} ≤ C ‖σ‖_{M^{(r_0,r_1,r_2),κ;(s_1,s_2,s_3),ρ}} ‖f_1‖_{M^{p_1,q_1}_{w_1}} ‖f_2‖_{M^{p_2,q_2}_{w_2}}, with appropriately chosen order of integration κ, ρ. Proposition 4.4. For r > 1, we have that σ_BH ∈ M^{(∞,1,r),(∞,∞,1)}. Hence, to complete our result on the basis of Lemma 2.1, we estimate (see, for example, [65, Proposition 1.2(3)]). For clarity, we always refer to the variables x, y, t as time variables, even though a physical interpretation of time necessitates d = 1. Alternatively, one can consider multivariate x, y, t as spatial variables. Acknowledgment. K. A. Okoudjou was partially supported by a RASA from the Graduate School of UMCP, the Alexander von Humboldt foundation, and by a grant from the Simons Foundation (#319197 to Kasso Okoudjou). G. E. Pfander appreciates the hospitality of the mathematics departments at MIT and at the TU Munich.
This project originated during a sabbatical at MIT and was completed during a visit to TU Munich as John von Neumann Visiting Professor. G. E. Pfander also appreciates funding from the German Science Foundation (DFG) within the project Sampling of Operators.

The proof is complete by observing that the proof of Proposition 4.4 implies that the corresponding symbol estimate holds. Using this result and Theorem 3.4 for $m = 3$, we can give the following initial result on the boundedness of $T_H$ on products of modulation spaces.

Theorem 4.8. For $p, p_0, p_1 \in (1,\infty)$ and $1 \leq q \leq \infty$, the trilinear Hilbert transform $T_H$ is bounded from $M^{p_1,1} \times M^{p,q} \times M^{p',q'}$ into $M^{p_0,\infty}$, and we have the corresponding estimate.

Remark 4.9. Before proving this result we point out that the strongest results are obtained by choosing $p_0$ as close to 1 as possible and $p_1$ as close to $\infty$ as possible. As a special case, we see that $T_H$ boundedly maps for every $r < \infty$ and $\epsilon > 0$.

Proof. We set $r = \min\{p_0, p_1', p, p'\} > 1$. The symbol of $T_H$ lies in the symbol modulation space with decay parameters $r_0 = \infty$, $r_1 = 1$, $r_2 = r_3 = r > 1$, as used in Theorem 3.4. Note that here $\kappa$ is the identity permutation, so $z = 0$. The boundedness conditions in Theorem 3.4 now read as four conditions, which reduce to
$k = 1$: $\tfrac{1}{p_1} \leq 1 - \tfrac{1}{r}$; $k = 3$: $\tfrac{1}{p_1} \geq 1 - \tfrac{2}{r} + \tfrac{1}{p_0}$;
that is, $k = 1$: $p_1 \geq p_3$; $k = 2$: $p_1 \geq r'$; $k = 3$: ...

References

[1] A. Benedek and R. Panzone, The Space $L^p$, with Mixed Norm, Duke Math. J. 28 (1961), 301-324.
[2] Á. Bényi and R. Torres, Almost orthogonality and a class of bounded bilinear pseudodifferential operators, Math. Res. Lett. 11 (2004), no. 1, 1-11.
[3] Á. Bényi and N. Tzirakis, Multilinear almost diagonal estimates and applications, Studia Math. 164 (2004), no. 1, 75-89.
[4] Á. Bényi, L. Grafakos, K. Gröchenig, and K. Okoudjou, A class of Fourier multipliers for modulation spaces, Appl. Comput. Harmon. Anal. 19 (2005), no. 1, 131-139.
[5] Á. Bényi, K. Gröchenig, K. Okoudjou, and L. Rogers, Unimodular Fourier multipliers for modulation spaces, J. Funct. Anal. 246 (2007), no. 2, 366-384.
[6] Á. Bényi and K. Okoudjou, Bilinear pseudodifferential operators on modulation spaces, J. Fourier Anal. Appl. 10 (2004), no. 3, 301-313.
[7] Á. Bényi, K. Gröchenig, C. Heil, and K. Okoudjou, Modulation spaces and a class of bounded multilinear pseudodifferential operators, J. Operator Theory 54 (2005), no. 2, 387-399.
[8] Á. Bényi and K. A. Okoudjou, Modulation space estimates for multilinear pseudodifferential operators, Studia Math. 172 (2006), no. 2, 169-180.
[9] Á. Bényi and K. A. Okoudjou, Local well-posedness of nonlinear dispersive equations on modulation spaces, Bull. Lond. Math. Soc. 41 (2009), no. 3, 549-558.
[10] S. Bishop, Mixed modulation spaces and their application to pseudodifferential operators, J. Math. Anal. Appl. 363 (2010), no. 1, 255-264.
[11] A. P. Calderón and R. Vaillancourt, On the boundedness of pseudo-differential operators, J. Math. Soc. Japan 23 (1971), 374-378.
[12] V. Catană, S. Molahajloo, and M. W. Wong, $L^p$-Boundedness of Multilinear Pseudo-Differential Operators, in Pseudo-Differential Operators: Complex Analysis and Partial Differential Equations, Operator Theory: Advances and Applications 205, Birkhäuser, 2010, 167-180.
[13] L. Carleson, On convergence and growth of partial sums of Fourier series, Acta Math. 116 (1966), 135-157.
[14] E. Cordero and F. Nicola, Metaplectic Representation on Wiener Amalgam Spaces and Applications to the Schrödinger Equation, J. Funct. Anal. 254 (2008), 506-534.
[15] E. Cordero and F. Nicola, Pseudodifferential Operators on $L^p$, Wiener Amalgam and Modulation Spaces, Int. Math. Res. Notices 10 (2010), 1860-1893.
[16] F. Concetti and J. Toft, Trace Ideals for Fourier Integral Operators with Non-Smooth Symbols, in Pseudo-Differential Operators: Partial Differential Equations and Time Frequency Analysis, Fields Institute Communications 52 (2007), 255-264.
[17] R. R. Coifman and Y. Meyer, "Au delà des opérateurs pseudo-différentiels," Astérisque 57, Société Mathématique de France, Paris, 1978.
[18] W. Czaja, Boundedness of Pseudodifferential Operators on Modulation Spaces, J. Math. Anal. Appl. 284 (2003), no. 1, 389-396.
[19] C. Fefferman, Pointwise convergence of Fourier series, Ann. of Math. 98 (1973), 551-571.
[20] H. G. Feichtinger, Modulation spaces on locally compact Abelian groups, Technical report, University of Vienna, 1983; updated version in: Proceedings of International Conference on Wavelets and Applications 2002, Allied Publishers, Chennai, India, 2003, 99-140.
[21] H. G. Feichtinger, Atomic Characterization of Modulation Spaces through Gabor-Type Representations, Rocky Mountain J. Math. 19 (1989), 113-126.
[22] H. G. Feichtinger, On a New Segal Algebra, Monatsh. Math. 92 (1981), 269-289.
[23] H. G. Feichtinger and K. Gröchenig, Banach Spaces Related to Integrable Group Representations and Their Atomic Decompositions I, J. Funct. Anal. 86 (1989), 307-340.
[24] H. G. Feichtinger and K. Gröchenig, Banach Spaces Related to Integrable Group Representations and Their Atomic Decompositions II, Monatsh. Math. 108 (1989), 129-148.
[25] H. G. Feichtinger and K. Gröchenig, Gabor Wavelets and the Heisenberg Group: Gabor Expansions and Short Time Fourier Transform from the Group Theoretical Point of View, in Wavelets: A Tutorial in Theory and Applications, Academic Press, Boston, 1992.
[26] H. G. Feichtinger and K. Gröchenig, Gabor Frames and Time-Frequency Analysis of Distributions, J. Funct. Anal. 146 (1997), 464-495.
[27] H. G. Feichtinger and F. Weisz, Wiener amalgams and pointwise summability of Fourier transforms and Fourier series, Math. Proc. Cambridge Philos. Soc. 140 (2006), no. 3, 509-536.
[28] H. G. Feichtinger and G. Narimani, Fourier multipliers of classical modulation spaces, Appl. Comput. Harmon. Anal. 21 (2006), no. 3, 349-359.
[29] H. G. Feichtinger and F. Weisz, Gabor analysis on Wiener amalgams, Sampl. Theory Signal Image Process. 6 (2007), no. 2, 129-150.
[30] G. B. Folland, "Harmonic Analysis in Phase Space," Annals of Mathematics Studies 122, Princeton University Press, Princeton, NJ, 1989.
[31] L. Grafakos and C. Lennard, Characterization of $L^p(\mathbb{R}^n)$ using Gabor frames, J. Fourier Anal. Appl. 7 (2001), no. 2, 101-126.
[32] L. Grafakos and R. H. Torres, A multilinear Schur test and multiplier operators, J. Funct. Anal. 187 (2001), no. 1, 1-24.
[33] K. Gröchenig and C. Heil, Gabor meets Littlewood-Paley: Gabor expansions in $L^p(\mathbb{R}^d)$, Studia Math. 146 (2001), no. 1, 15-33.
[34] K. Gröchenig, Foundations of Time-Frequency Analysis, Birkhäuser, Boston, 2001.
[35] K. Gröchenig and C. Heil, Counterexamples for Boundedness of Pseudodifferential Operators, Osaka J. Math. 41 (2004), no. 3, 681-691.
[36] K. Gröchenig and C. Heil, Modulation Spaces and Pseudodifferential Operators, Integr. Equat. Oper. Th. 34 (1999), no. 4, 439-457.
[37] C. Heil, J. Ramanathan, and P. Topiwala, Singular values of compact pseudodifferential operators, J. Funct. Anal. 150 (1997), no. 2, 426-452.
[38] Y. M. Hong and G. E. Pfander, Irregular and multi-channel sampling of operators, 2009, Preprint.
[39] L. Hörmander, The Analysis of Linear Partial Differential Operators I, Second Edition, Springer-Verlag, Berlin, 1990.
[40] L. Hörmander, The Weyl Calculus of Pseudodifferential Operators, Comm. Pure Appl. Math. 32 (1979), 360-444.
[41] I. L. Hwang and R. B. Lee, $L^p$-Boundedness of Pseudo-Differential Operators of Class $S_{0,0}$, Trans. Amer. Math. Soc. 346 (1994), no. 2, 489-510.
[42] H. Kumano-Go, Pseudo-Differential Operators, translated by Hitoshi Kumano-Go, Rémi Vaillancourt, and Michihiro Nagase, MIT Press, 1982.
[43] M. T. Lacey, On the bilinear Hilbert transform, Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998), Doc. Math. 1998, Extra Vol. II, 647-656.
[44] M. T. Lacey and C. Thiele, $L^p$ estimates on the bilinear Hilbert transform for $2 < p < \infty$, Ann. of Math. (2) 146 (1997), no. 3, 693-724.
[45] M. Lacey and C. Thiele, On Calderón's conjecture, Ann. of Math. (2) 149 (1999), no. 2, 475-496.
[46] M. Lacey and C. Thiele, A proof of boundedness of the Carleson operator, Math. Res. Lett. 7 (2000), no. 4, 361-370.
[47] G. F. Margrave, M. P. Lamoureux, J. P. Grossman, D. C. Henley, and V. Iliescu, The Gabor transform, pseudodifferential operators, and seismic deconvolution, Integrated Computer-Aided Engineering 12 (2005), no. 1, 43-55.
[48] Y. Meyer and R. Coifman, "Wavelets: Calderón-Zygmund and Multilinear Operators," translated from the 1990 and 1991 French originals by David Salinger, Cambridge Studies in Advanced Mathematics 48, Cambridge University Press, Cambridge, 1997.
[49] S. Molahajloo and G. E. Pfander, Boundedness of pseudo-differential operators on $L^p$, Sobolev and modulation spaces, Math. Model. Nat. Phenom. 8 (2013), no. 1, 175-192.
[50] C. Muscalu, T. Tao, and C. Thiele, Multi-linear operators given by singular multipliers, J. Amer. Math. Soc. 15 (2002), no. 2, 469-496.
[51] K. A. Okoudjou, A Beurling-Helson Type Theorem for Modulation Spaces, J. Func. Spaces Appl. 7 (2009), no. 1, 33-41.
[52] G. E. Pfander and D. Walnut, Operator Identification and Feichtinger's Algebra, Sampl. Theory Signal Image Process. 5 (2006), no. 2, 151-168.
[53] G. E. Pfander, Sampling of Operators, arXiv:1010.6165.
[54] R. Rochberg and K. Tachizawa, Pseudodifferential operators, Gabor frames, and local trigonometric bases, in Gabor Analysis and Algorithms: Theory and Applications (H. G. Feichtinger and T. Strohmer, eds.), Birkhäuser, Boston, 1997, 171-192.
[55] J. Sjöstrand, An Algebra of Pseudodifferential Operators, Math. Res. Lett. 1 (1994), no. 2, 185-192.
[56] J. Sjöstrand, Wiener Type Algebras of Pseudodifferential Operators, in Séminaire Équations aux dérivées Partielles, 1994-1995, exp. 4, 1-19.
[57] E. M. Stein, "Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals," Princeton University Press, Princeton, NJ, 1993.
[58] T. Strohmer, Pseudodifferential operators and Banach algebras in mobile communications, Appl. Comput. Harmon. Anal. 20 (2006), no. 2, 237-249.
[59] M. E. Taylor, "Pseudodifferential Operators," Princeton University Press, Princeton, NJ, 1981.
[60] J. Toft, Continuity Properties for Modulation Spaces, with Applications to Pseudo-Differential Calculus I, J. Funct. Anal. 207 (2004), 399-429.
[61] J. Toft, Continuity Properties for Modulation Spaces, with Applications to Pseudo-Differential Calculus II, Ann. Glob. Anal. Geom. 26 (2004), 73-106.
[62] J. Toft, Fourier Modulation Spaces and Positivity in Twisted Convolution Algebra, Integral Transforms and Special Functions 17 (2006), nos. 2-3, 193-198.
[63] J. Toft, Pseudo-Differential Operators with Smooth Symbols on Modulation Spaces, CUBO 11 (2009), 87-107.
[64] J. Toft, S. Pilipovic, and N. Teofanov, Micro-Local Analysis in Fourier Lebesgue and Modulation Spaces, Part II, J. Pseudo-Differ. Oper. Appl. 1 (2010), 341-376.
[65] J. Toft, Continuity and Schatten properties for pseudo-differential operators on modulation spaces, in "Modern Trends in Pseudo-Differential Operators," Oper. Theory Adv. Appl. 172, Birkhäuser, Basel, 2007, 173-206.
[66] K. Tachizawa, The boundedness of pseudodifferential operators on modulation spaces, Math. Nachr. 168 (1994), 263-277.
[67] M. W. Wong, An Introduction to Pseudo-Differential Operators, Second Edition, World Scientific, 1999.
[68] M. W. Wong, Fredholm Pseudo-Differential Operators on Weighted Sobolev Spaces, Ark. Mat. 21 (1983), no. 2, 271-282.
[69] M. W. Wong, Weyl Transforms, Springer-Verlag, 1998.

Shahla Molahajloo, Department of Mathematics, Institute for Advanced Studies in Basic Sciences (IASBS), P. O. Box 45195-1159, Gava Zang, Zanjan 45137-66731, Iran. E-mail address: [email protected]
Kasso A. Okoudjou, Department of Mathematics, University of Maryland, College Park, MD 20742, USA. E-mail address: [email protected]
Götz E. Pfander, School of Science and Engineering, Jacobs University, 28759 Bremen, Germany. E-mail address: [email protected]
DOI: 10.1103/physreva.85.032504
arXiv: 1201.6237 (https://arxiv.org/pdf/1201.6237v2.pdf)
Ionization potentials and electron affinities from reduced density matrix functional theory

E. N. Zarkadoula (Theoretical and Physical Chemistry Institute, National Hellenic Research Foundation, Vass. Constantinou 48, GR-11635 Athens, Greece, and School of Physics and Astronomy, Queen Mary, University of London, Mile End Road, London E1 4NS, United Kingdom), S. Sharma, J. K. Dewhurst, and E. K. U. Gross (Max-Planck-Institut für Mikrostrukturphysik, Weinberg 2, D-06120 Halle, Germany), and N. N. Lathiotakis (Theoretical and Physical Chemistry Institute, National Hellenic Research Foundation, Vass. Constantinou 48, GR-11635 Athens, Greece)

(arXiv submission: 2 Mar 2012; Dated: December 21, 2013)

Abstract. In the recent work of S. Sharma et al. (arxiv.org: cond-mat/0912.1118), a single-electron spectrum associated with the natural orbitals was defined as the derivative of the total energy with respect to the occupation numbers at half filling for the orbital of interest. This idea reproduces the bands of various periodic systems quite accurately using the appropriate functional. In the present work we apply this approximation to the calculation of the ionization potentials and electron affinities of molecular systems using various functionals within reduced density-matrix functional theory. We demonstrate that this approximation is very successful in general and that, for certain functionals in particular, it performs better than the direct determination of the ionization potentials and electron affinities through the calculation of positive and negative ions, respectively. The reason for this is identified to be the inaccuracy that arises from the different handling of open- and closed-shell systems.

PACS numbers: 31.15.ve, 71.15.-m

I. INTRODUCTION

It is generally accepted today that the Fermi surfaces of metallic systems obtained with density functional theory (DFT), even at the level of the local density approximation (LDA), are in good agreement with experiments. Unfortunately, this is not the case for the band gaps of insulators and semiconductors, which are highly underestimated by most exchange-correlation (xc) functionals within DFT. Even with the exact xc functional of DFT, the Kohn-Sham (KS) gap is not expected to reproduce the experimental gap [1]. This deviation from experiment is most dramatic for Mott insulators, most of which are predicted by their KS spectrum to be metallic while they are experimentally known to be insulating. In this regard, reduced density matrix functional theory (RDMFT) has shown great promise in improving on DFT results for a wide class of systems, in that it not only improves the KS band gaps of insulators in general but also predicts the correct insulating nature of Mott insulators [2].
Within RDMFT, the total energy of a system of interacting electrons is expressed in terms of the one-body reduced density matrix (1-RDM), $\gamma(\mathbf{r},\mathbf{r}')$. This energy functional is then minimized with respect to $\gamma$ under the $N$-representability conditions [3], which restrict the minimization to the domain of 1-RDMs that correspond to ensembles of $N$-electron wave functions. A major advantage of RDMFT comes from the fact that the exact kinetic energy is easily expressed as a functional of the 1-RDM of the ground state. In addition, due to the departure from the idempotent single-determinant solution, static electronic correlations are well described [4]. The total ground-state energy as a functional of $\gamma$ reads (atomic units are used throughout)
$$E[\gamma] = -\frac{1}{2}\int \lim_{\mathbf{r}\to\mathbf{r}'} \nabla^2_{\mathbf{r}}\, \gamma(\mathbf{r},\mathbf{r}')\, d^3r' + \int \rho(\mathbf{r})\, V_{\mathrm{ext}}(\mathbf{r})\, d^3r + \frac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^3r\, d^3r' + E_{\mathrm{xc}}[\gamma], \quad (1)$$
where $\rho(\mathbf{r}) = \gamma(\mathbf{r},\mathbf{r})$, $V_{\mathrm{ext}}$ is a given external potential, and $E_{\mathrm{xc}}$ is what we call the xc energy functional. In practice, the xc functional is an unknown functional of the 1-RDM and needs to be approximated. A milestone in the development of approximate functionals of the 1-RDM is the Müller functional [4,5], which has the form
$$E_{\mathrm{xc}}[\gamma] = E_{\mathrm{xc}}[\{\phi_j\},\{n_j\}] = -\frac{1}{2}\iint \frac{|\gamma^{1/2}(\mathbf{r},\mathbf{r}')|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3r\, d^3r', \quad (2)$$
where the power $1/2$ is taken in the operator sense. Diagonalization of $\gamma$ produces a set of natural orbitals (the eigenvectors of $\gamma$), $\phi_j$, and occupation numbers (the eigenvalues of $\gamma$), $n_j$. The Müller functional is known to overcorrelate [6-10]; however, there exist several other approximations, most of which are modifications of this functional and are known to improve results for finite systems [8,10-29]. Several of these RDMFT functionals reproduce the discontinuity of the chemical potential at integer numbers of electrons, which is a measure of the fundamental gap of the system [2,16,30,31].
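In the basis of natural orbitals, Eq. (2) reduces to the algebraic form $E_{\mathrm{xc}} = -\frac{1}{2}\sum_{jk}\sqrt{n_j n_k}\, K_{jk}$, where $K_{jk}$ are exchange integrals between natural orbitals. The minimal Python sketch below evaluates this expression for a toy pair of occupation numbers and an invented exchange-integral matrix $K$; it illustrates the functional form only, not the self-consistent minimization of the actual RDMFT code.

```python
import math

def mueller_xc(n, K):
    """Mueller xc energy in the natural-orbital basis:
    E_xc = -1/2 * sum_{j,k} sqrt(n_j * n_k) * K[j][k],
    with K[j][k] the exchange integral between natural orbitals j and k."""
    m = len(n)
    return -0.5 * sum(math.sqrt(n[j] * n[k]) * K[j][k]
                      for j in range(m) for k in range(m))

# Toy example: two natural orbitals, made-up exchange integrals (hartree).
n = [1.0, 0.25]                 # occupation numbers, 0 <= n_j <= 1
K = [[0.60, 0.20],
     [0.20, 0.40]]              # symmetric, positive exchange matrix
print(mueller_xc(n, K))         # -0.45
```

For fully occupied orbitals ($n_j = 1$) the expression reduces to the Hartree-Fock exchange energy; the square roots enhance the weight of fractionally occupied orbitals, which is the origin of the overcorrelation mentioned above.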
More precisely, it was demonstrated [16,30,31] that the complete removal of the self-interaction (SI) terms leads to discontinuities in the chemical potential that are in good agreement with the fundamental gap for finite systems. Unfortunately, this removal has no effect on the total energy for infinitely extended natural orbitals in periodic systems, since their contribution vanishes in the limit of the system size going to infinity. To overcome this problem, Sharma et al. [2] introduced the power functional [2,32], which reproduces discontinuities without requiring the removal of SI terms. This functional has the form
$$E_{\mathrm{xc}}[\gamma] = E_{\mathrm{xc}}[\{\phi_j\},\{n_j\}] = -\frac{1}{2}\iint \frac{|\gamma^{\alpha}(\mathbf{r},\mathbf{r}')|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3r\, d^3r', \quad (3)$$
where the power $\alpha$ is taken in the operator sense. The power functional was applied to the calculation of the fundamental gap of various systems [2,27], including transition metal oxides [2]. An optimal value of $\alpha$ between 0.6 and 0.7 was found to reproduce the gaps of all systems in close agreement with experiments. These gaps were obtained from the discontinuity of the chemical potential, $\mu(N)$, at an integer total number of electrons $N$. One problem with this way of predicting the gap is that the shape of $\mu(N)$ differs substantially from a step function, leading to large error bars in the prediction of the gap. A second problem is that one needs to calculate the total energy (and $\mu$) for several values of $N$, making the calculation time consuming. Finally, a third problem is that this method does not allow for the calculation of quantities other than the gap for direct comparison with experiments, like, for example, the density of states for extended systems and ionization potentials (IPs) and electron affinities (EAs) for finite systems. An advantage of DFT is that the KS eigenvalues can be used as an approximate single-electron spectrum of the system. Thus, quantities like the IP and EA can be easily estimated using the KS spectrum.
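In the same natural-orbital representation, the power functional of Eq. (3) replaces $\sqrt{n_j n_k}$ by $(n_j n_k)^{\alpha}$, so that $\alpha = 1/2$ recovers the Müller functional and larger $\alpha$ reduces the amount of correlation energy. A hedged sketch (the exchange matrix $K$ and the occupations are again invented for illustration):

```python
def power_xc(n, K, alpha):
    """Power-functional xc energy in the natural-orbital basis:
    E_xc = -1/2 * sum_{j,k} (n_j * n_k)**alpha * K[j][k].
    alpha = 0.5 recovers the Mueller functional; the text quotes an
    optimal alpha between 0.6 and 0.7."""
    m = len(n)
    return -0.5 * sum((n[j] * n[k]) ** alpha * K[j][k]
                      for j in range(m) for k in range(m))

n = [1.0, 0.25]
K = [[0.60, 0.20], [0.20, 0.40]]
for alpha in (0.5, 0.65, 1.0):
    print(alpha, power_xc(n, K, alpha))
```

For fractional occupations the magnitude of $E_{\mathrm{xc}}$ decreases monotonically as $\alpha$ grows from $1/2$ toward 1, which is how the single parameter interpolates between the overcorrelating Müller limit and an exchange-only-like limit.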
A fundamental difference between RDMFT and DFT is the lack of a KS system within RDMFT, and the lack of an eigenvalue equation makes it difficult to obtain (even approximately) the IPs and the EAs. One way to calculate IPs in RDMFT is to use the extended Koopmans' theorem (EKT), as was proposed by Pernal and Cioslowski [33]. They demonstrated that the Lagrangian matrix in RDMFT is identical to the generalized Fock matrix entering the EKT; thus, IPs can be calculated by diagonalization of this matrix. They used this idea in the calculation of IPs for small molecular systems using the so-called Buijse-Baerends-corrected (BBC) [8] and Goedecker-Umrigar (GU) [11] functionals and showed that the error in the obtained IPs is of the order of 4-6%. The same idea was employed, in combination with yet another xc functional, namely the PNOF1 functional [13,14], for the calculation of the first IPs (FIPs) as well as higher IPs (HIPs) of molecular systems, yielding results of similar quality [34]. However, the application of this method is restricted to finite systems since, for solids, it would require the diagonalization of a large matrix in wave-vector space. Sharma et al., in Ref. [35], introduced an alternative technique to obtain spectral information. We refer to this technique as the "derivative" (DER) method, as it entails that, for each natural orbital $k$, the associated energy $\epsilon_k$ is obtained as the derivative of the total energy with respect to the occupation number $n_k$ at $n_k = 1/2$, with the rest of the occupation numbers set equal to their ground-state optimal values. This technique has been applied to the calculation of densities of states of transition-metal oxides (NiO, FeO, CoO, and MnO), and the results were found to be in excellent agreement with experiments [35] and with other state-of-the-art many-body techniques like dynamical mean-field theory and the GW method. In the present work this technique is applied to finite systems.
We discuss the validity of the approximations necessary for the accuracy of the method. We present results for the FIPs as well as the HIPs of atoms and molecules, as well as the EAs of atoms, molecules, and radicals, adopting several present-day functionals of the 1-RDM. We compare the results with the EKT, QCI(T), and experiment.

II. THEORY

By definition, the ionization potential and electron affinity are given by
$$\mathrm{IP} = E(N-1) - E(N), \qquad \mathrm{EA} = E(N) - E(N+1), \quad (4)$$
where $E(N)$ is the ground-state total energy of the charge-neutral system, and $E(N-1)$ ($E(N+1)$) is the energy of the system with one electron removed (added). In the rest of the article we refer to this way of calculating the IP and EA as the definition method (DEF). Due to Koopmans' theorem, within Hartree-Fock (HF) theory the IP in Eq. (4) is well approximated by the eigenvalue of the highest occupied molecular orbital (HOMO). On the other hand, within DFT, the KS energy of the HOMO is exactly equal to the IP in Eq. (4) for the exact xc functional. Likewise, the exact EA equals the orbital energy of the HOMO of the $(N+1)$-electron system calculated with the exact xc functional.

Within RDMFT, there is no effective single-particle KS system reproducing the non-idempotent 1-RDM of the interacting system, and quantities like the IP and EA cannot be obtained from an eigenvalue equation. However, approximate but meaningful single-particle energies associated with the natural orbitals can be defined as in Ref. [35]:
$$\epsilon_k = E(\{n_j\})\big|_{n_k=1} - E(\{n_j\})\big|_{n_k=0}. \quad (5)$$
The two energies on the right-hand side are the energies of the system with all natural orbitals and occupation numbers having their optimal ground-state values, except for the natural orbital of interest, $k$, for which the occupation number is set to either $n_k = 1$ or $n_k = 0$. In this way, these energies are approximate electron addition and/or removal energies for the natural orbital $k$.
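As a concrete illustration of Eqs. (4) and (5), the sketch below uses a deliberately simple toy energy functional $E(\{n_j\})$ (single-particle levels plus a mean-field repulsion term; all numbers are invented, and this is not the functional of the actual code of Ref. [44]). It computes the IP via the DEF route, $E(N-1) - E(N)$, and the HOMO orbital energy via the occupation difference of Eq. (5); in this frozen-occupation toy the two routes coincide exactly.

```python
def toy_energy(n, levels=(-1.0, -0.5, -0.2), U=0.3):
    """Toy total-energy functional E({n_j}): single-particle levels plus a
    mean-field repulsion U/2 * N*(N-1), with N = sum of occupations.
    Purely illustrative; not an actual RDMFT functional."""
    N = sum(n)
    return sum(e * nj for e, nj in zip(levels, n)) + 0.5 * U * N * (N - 1)

# DEF method, Eq. (4): IP = E(N-1) - E(N)
n_neutral = [1.0, 1.0, 0.0]     # two "electrons"
n_cation  = [1.0, 0.0, 0.0]     # HOMO (orbital index 1) emptied
IP_def = toy_energy(n_cation) - toy_energy(n_neutral)

# Eq. (5): eps_k = E(..., n_k=1) - E(..., n_k=0), other occupations frozen
def eps_dif(n, k):
    up, down = list(n), list(n)
    up[k], down[k] = 1.0, 0.0
    return toy_energy(up) - toy_energy(down)

print(IP_def, -eps_dif(n_neutral, 1))   # the two estimates agree here
```

Because the remaining occupations are frozen, $-\epsilon_{\mathrm{HOMO}}$ from Eq. (5) reproduces the DEF ionization energy of this toy model; in a real calculation the two differ through orbital and occupation relaxation, which is the point of the comparison in the text.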
We refer to this method for calculating IPs and EAs as the "energy-difference" method (DIF). It has been shown for extended systems [35] that the total energy is almost linear if a particular occupation, n_k, is varied between zero and one. If the dependence were exactly linear, then the energy difference in Eq. (5) would be given by the tangent of E({n_j}). In the absence of exact linearity, a good choice is to use the Slater trick and approximate ε_k by

    ε_k = ∂E[{φ_j}, {n_j}]/∂n_k |_{n_k=1/2},    (6)

where the derivative is calculated at the ground-state natural orbitals {φ_j} and occupations {n_j}, except for the k-th occupation, which is set to n_k = 1/2. This is a good approximation because, if one expands E({n_j}) around n_k = 1/2, the term in Eq. (6) is the leading-order term, and the second-order term is identically equal to zero. At first sight Eq. (6) looks similar to Janak's theorem [43], which gives the eigenenergies of the Kohn-Sham system within DFT. However, it is important to note that within RDMFT the lack of single-particle eigenvalue equations does not permit the direct use of Janak's theorem: Janak's theorem would force all orbital energies of fractionally occupied states to be degenerate, with value equal to the chemical potential.

III. METHODOLOGY

We calculate the IPs and EAs of a set of atoms and molecules using the DEF, DIF, and DER methods [i.e., using Eqs. (4), (5), and (6)]. For comparison, results are also calculated using the EKT method. Our implementation is included in a computer code for finite systems [44] which minimizes 1-RDM functionals with respect to occupation numbers and natural orbitals and is based on the expansion of the orbitals in Gaussian basis sets. The one- and two-electron integrals are calculated with the GAMESS program [45]. Addition or removal of an electron requires the extension of the theory to open shells. In the present work, as in Refs. [16,30,31,46], we use the simple extension proposed in Ref. [12].
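The relation between the energy-difference method of Eq. (5) and the derivative method of Eq. (6) can be checked on a toy model. In the sketch below, E(n) is an arbitrary low-order polynomial standing in for the dependence of the total energy on a single occupation number (the coefficients are made up); it verifies that the chord E(1) − E(0) and the tangent at n = 1/2 coincide exactly for a quadratic E(n), and differ only by E'''(1/2)/24 once a cubic term is added, which is precisely the statement that the second-order term in the expansion around 1/2 drops out.

```python
# Toy model of DIF [Eq. (5)] vs DER [Eq. (6)]:
# E(n) = e0 + b*n + c*n**2 + d*n**3, with arbitrary illustrative coefficients.
# DIF is the chord E(1) - E(0); DER is the tangent dE/dn at n = 1/2.

def E(n, e0, b, c, d):
    return e0 + b * n + c * n**2 + d * n**3

def dif(e0, b, c, d):
    return E(1, e0, b, c, d) - E(0, e0, b, c, d)   # = b + c + d

def der(e0, b, c, d):
    return b + 2 * c * 0.5 + 3 * d * 0.5**2        # = b + c + 3d/4

# Quadratic E(n): the second-order term contributes equally to chord and
# tangent, so DIF and DER agree exactly.
print(dif(-1.0, -0.9, 0.3, 0.0), der(-1.0, -0.9, 0.3, 0.0))   # both close to -0.6

# Cubic E(n): chord and tangent differ by E'''(1/2)/24 = 6d/24 = d/4.
d = 0.2
print(dif(-1.0, -0.9, 0.3, d) - der(-1.0, -0.9, 0.3, d))      # close to d/4 = 0.05
```

The discrepancy starting only at third order is why the tangent at 1/2 is such a robust approximation to the energy difference whenever the occupation dependence is nearly linear.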
In other words, we assume that the orbitals are spin-independent while the occupation numbers are spin-dependent. For the calculation of IPs we adopt the cc-pVDZ basis set [47] for all elements. EAs are calculated as the IPs of the negative ions, i.e., of the systems with N + 1 electrons. In other words, for both IPs and EAs, the orbital energy of the HOMO (of either the neutral or the ionic system) is calculated. Since the HOMOs of the negative ions are relatively diffuse states, the aug-cc-pVDZ basis set is used [47]. We should mention that many neutral systems do not bind an extra electron. In that case the EA is equal to zero, i.e., the extra electron is completely delocalized. However, for a small positive EA, the state of the extra electron can be delocalized and impossible to describe with localized basis sets. To ensure a fair comparison with experiment, a set of atoms, molecules, and radicals which are known experimentally to have relatively large and positive EAs is used. The calculation of IPs and EAs with the DEF method requires the difference of two energies, one for a closed-shell and another for an open-shell system. The broken spin symmetry in open-shell systems leads to twice as many variational parameters as there are in a closed-shell system. This extra variational freedom over-correlates the open-shell systems, leading to systematic errors in the IPs and EAs. Given the exact xc functional, the DEF method would be exact; for an approximate functional, however, the DEF method shows these systematic errors in the IPs and EAs. The DIF and DER methods, on the other hand, do not suffer from this error, since only one minimization, for the charge-neutral system, is performed. In addition, it should not come as a surprise if the DER method performs better than DEF and DIF in many cases, as it suffers less from possible inaccuracies introduced by the functional and its non-unique extension to the case of open shells, mainly because only the 1/2 electron is present in the open shell.
One could also consider the DER method in conjunction with orbital relaxation (at fixed n_j = 1/2). However, this procedure requires a full orbital minimization for each j, making it computationally very demanding, while the aim of the present work is to define a computationally inexpensive single-electron spectrum in terms of the optimal 1-RDM of the charge-neutral system. Another point to be considered is that the application of the DER method requires the total-energy functional to be continuous at n_k = 1/2. However, there are functionals that introduce a discontinuity at n_k = 1/2 to distinguish between strongly and weakly occupied orbitals [34]. In all cases studied here, we do not find optimal occupation numbers equal to 1/2. Thus, the step function can be safely shifted slightly away from n_k = 1/2 without affecting the results. However, this procedure cannot be used in cases with optimal occupations equal to 1/2, like H_2 at the dissociation limit, or when they vary continuously from one to zero.

IV. RESULTS

Our results for the average absolute errors in the calculation of IPs with the three methods are included in Table I; the errors obtained with the EKT method are also included in the table. The actual values of the IPs obtained using the DER method are compiled in Table II. (Full results for all methods as well as EKT can be found in the supplementary material [50].) It is clear from Table I that all functionals in combination with the DER method give reasonable results for IPs, with errors ranging from 4 to 13%. The ML, AC3, and power functionals perform slightly better, giving an average error of only 4-6%. For the systems considered here, the ML functional with the DER method is the most accurate for the FIPs (with an error of only 2%). It is important to note that the errors from the DER and EKT methods are of the same order (using the same functionals and basis set), while less computational effort is involved in the DER method.
The comparison of the DIF and DER methods allows us to assess the validity of the linear approximation, i.e., the linearity of the total energy with respect to the variation of one occupation number while keeping the rest of the occupation numbers as well as the natural orbitals frozen (this was demonstrated for solids in Ref. [35]). As we see in Table II, the DER method gives good results also for finite systems, and for the best-performing functionals the average difference between the results of the DIF and DER methods is in the range of 2-7%. This percentage may be regarded as the magnitude of the non-linearity of the total energy with respect to the variation of a single occupation number. As mentioned in Sec. III, the DEF method suffers from the over-correlation error due to the approximate nature of the xc functional and the difference in variational freedom between closed- and open-shell systems. Since the DER method is less prone to this error, it is not a surprise that for the power and ML functionals the DER method improves the results over the DEF method. This indicates that the dependence of the total energy on a particular occupation number deviates from linearity, but this deviation works in favor of the DER method by further improving the results. In order to understand this, in Fig. 1 (top) we show the tangents at 1/2 of the total energy as a function of the occupation number of the HOMO, while keeping the rest frozen. The plots are made for various functionals. It is clear from Fig. 1 that although the total-energy functional itself deviates from linearity, the tangent at 1/2 is very close to the one that reproduces the experimental results. One reason for this improvement over the DIF method might be that the extension of the theory to open shells is minimal in the case of the DER method, since only half an electron is unpaired, and the DER method thus reduces the error introduced by the extension of functionals to open shells.
The average percentage errors and the values of the HIPs for various atoms and molecules are also presented in Table I and Table II, respectively. The average error in the HIPs obtained using the Müller, GU, power, and PNOF1 functionals is substantially lower (4-7%) than for the FIPs. The rest of the functionals are less accurate for HIPs, with average absolute errors slightly higher than those for the FIPs (5-10%). The average absolute errors in the calculation of EAs with the DEF, DIF, and DER methods are shown in Table III. The actual values of the EAs obtained using the DER method are included in Table IV. As already mentioned, EAs are more difficult quantities to calculate: errors are introduced by describing negative ions with localized basis sets which are usually optimized for the description of the ground states of neutral systems. In addition, being a small quantity, EAs are more prone to errors in the differences of total energies corresponding to two different shell structures (for example, the difference in energy between a doublet and a singlet state). Thus, it is not surprising that the errors in Tables III and IV are substantially higher than for the IPs. However, state-of-the-art quantum chemical methods like QCI(T) also exhibit large errors within the adopted basis set. Under these considerations, the AC3, PNOF1, and ML functionals perform surprisingly well for EAs, with average errors of 21.4%, 28.6%, and 21.5%, respectively. These errors are close to that of QCI(T) (17.3%). It is interesting to note (see Table IV) that there are only a few cases (5 of 60) for which the Müller, GU, BBC3, and power functionals give a zero EA [E(N + 1) > E(N)], i.e., the system is not predicted by the corresponding functional to bind an extra electron. For the best performers, like the AC3, PNOF1, and ML functionals, no such case exists. In order to compare the DIF and DER methods for the determination of EAs, Fig.
1 (bottom) shows the tangents at 1/2 to the dependence of the total energy on the occupation of the HOMO of the negative ions F− and Cl−. The exact tangent that reproduces the experimental EAs is also shown in the figure. Again, as in the case of the IPs, the DER method not only looks like a reasonable approximation but also improves on the results of the DEF method (for the functionals considered and in all cases studied in the present work). In particular, for the case of Cl− [see Fig. 1 (bottom)] the tangents at 1/2 are in very good agreement with the exact tangent that reproduces the experimental EA, although the dependence of the total energy on the HOMO occupation number deviates significantly from linearity. A striking example of the pathological behavior mentioned in Sec. III is the negative ion F−. This system is found experimentally to be energetically lower than the neutral F atom by 3.34 eV (see Table IV). All functionals underestimate this energy substantially (see supplementary material [50]), as a result of the enhanced variational freedom of the open-shell neutral F atom compared to the closed-shell F−. In two extreme cases (the ML and AC3 functionals) F− is found energetically above the neutral F atom.

V. SUMMARY

In summary, we examined the performance of the derivative method proposed in Ref. [35] for the calculation of the IPs and EAs of finite systems. The accuracy of the IPs and EAs calculated using the derivative method is compared to that of the IPs and EAs calculated using the definition of these quantities [which involves two total-energy minimizations, for the system and for the positive (for IP) or negative (for EA) ion]. In order to have a complete analysis we have also considered an intermediate method (the difference method), in which the IPs and EAs are determined by the difference of the total energies obtained by fixing one occupation number to one and/or zero. All these results are further compared to the state-of-the-art CI method as well as to experiments.
We find that, for the IPs, both the difference and derivative methods are good approximations to the definition of this quantity. Furthermore, it was found that the derivative-method results obtained using the Müller, power, and ML functionals are better than the values obtained using the definition method itself (with errors of the order of 4-8% only). Among these functionals, the ML functional in conjunction with the derivative method is the most accurate, with errors of only up to 2%. For the EAs the errors are significantly larger, with the ML functional in conjunction with the derivative method being the most accurate (with an error of 21%). The errors in the EAs were found to be comparable to the errors in the CI results. From the present study we conclude that the results of the derivative method for IPs and EAs are in good agreement with experiments, and this method is a promising technique to obtain the single-electron spectrum for systems where state-of-the-art quantum chemical methods cannot be applied and DFT results deviate significantly from experiment.

FIG. 1: (Color online) The total energy, E, as a function of the occupation number, n, of the HOMO, for the Ne atom and CH4 (top) and the negative ions F− and Cl− (bottom). The line giving the correct experimental values for the IP/EA is also shown. Curves are shifted to coincide at n = 1/2 in order to compare the tangents with the straight line reproducing the experimental results. The values at the two ends are used in DIF for the calculation of the IPs of Ne and CH4 and the affinities of F and Cl. The derivatives at n = 1/2 are used for the calculation of the same quantities with the DER method.

TABLE II: Ionization potentials (FIPs and HIPs), in eV, for various molecules calculated with different functionals using the DER method. These results are compared with Hartree-Fock, QCI(T), and the experimental data. QCI(T) values were calculated with the Gaussian 09 program [42] using the same basis set through Eq. (4).
In the bottom row are included the percentage absolute average errors ∆_FIP, ∆_HIP, and ∆ in the calculation of the FIPs, the HIPs, and all IPs, respectively. For comparison, the errors ∆(EKT)_FIP, ∆(EKT)_HIP, and ∆(EKT) obtained using the EKT are also included.

TABLE III: Average absolute errors ∆_DEF, ∆_DIF, and ∆_DER in the calculation of EAs for a set of atoms, molecules, and radicals, calculated with various xc functionals using DEF, DIF, and DER, respectively. Average errors in the results obtained with the EKT are also included.

TABLE IV: Electron affinities for various atoms, molecules, and radicals, calculated as the IP of the system of N + 1 electrons with different functionals using the DER method. For systems where the (N + 1)-electron energy is found to be higher than the N-electron energy, zero affinity is assumed. The results are compared with QCI(T) and experiments. QCI(T) values were calculated with the Gaussian 09 program [42].

R. W. Godby, L. J. Sham, and M. Schlüter, Phys. Rev. Lett. 56, 2415 (1986).
S. Sharma, J. K. Dewhurst, N. N. Lathiotakis, and E. K. U. Gross, Phys. Rev. B 78, 201103 (2008).
A. Coleman, Rev. Mod. Phys. 35, 668 (1963).
M. A. Buijse and E. J. Baerends, Mol. Phys. 100, 401 (2002).
A. M. K. Müller, Phys. Lett. 105A, 446 (1984).
V. N. Staroverov and G. E. Scuseria, J. Chem. Phys. 117, 2489 (2002).
J. M. Herbert and J. E. Harriman, Chem. Phys. Lett. 382, 142 (2003).
O. Gritsenko, K. Pernal, and E. J. Baerends, J. Chem. Phys. 122, 204102 (2005).
R. L. Frank, E. H. Lieb, R. Seiringer, and H. Siedentop, Phys. Rev. A 76, 052517 (2007).
N. N. Lathiotakis, N. Helbig, and E. K. U. Gross, Phys. Rev. B 75, 195120 (2007).
S. Goedecker and C. J. Umrigar, Phys. Rev. Lett. 81, 866 (1998).
N. N. Lathiotakis, N. Helbig, and E. K. U. Gross, Phys. Rev. A 72, 030501 (2005).
M. Piris, Int. J. Quant. Chem. 106, 1093 (2006).
P. Leiva and M. Piris, J. Chem. Phys. 123, 214102 (2005).
P. Leiva and M. Piris, Int. J. Quant. Chem. 107, 1 (2007).
N. Helbig, N. N. Lathiotakis, M. Albrecht, and E. K. U. Gross, Europhys. Lett. 77, 67003 (2007).
K. Pernal, Phys. Rev. Lett. 94, 233002 (2005).
N. N. Lathiotakis and M. A. L. Marques, J. Chem. Phys. 128, 184103 (2008).
M. A. L. Marques and N. N. Lathiotakis, Phys. Rev. A 77, 032509 (2008).
R. Requist and O. Pankratov, Phys. Rev. B 77, 235121 (2008).
M. Piris, J. Matxain, and X. Lopez, J. Chem. Phys. 131, 021102 (2009).
D. R. Rohr, K. Pernal, O. V. Gritsenko, and E. J. Baerends, J. Chem. Phys. 129, 164105 (2008).
N. N. Lathiotakis, N. Helbig, A. Zacarias, and E. K. U. Gross, J. Chem. Phys. 130, 064109 (2009).
M. Piris, J. M. Matxain, X. Lopez, and J. M. Ugalde, J. Chem. Phys. 132, 031103 (2010).
F. G. Eich, S. Kurth, C. R. Proetto, S. Sharma, and E. K. U. Gross, Phys. Rev. B 81, 024430 (2010).
N. N. Lathiotakis, N. I. Gidopoulos, and N. Helbig, J. Chem. Phys. 132, 084105 (2010).
E. Tölö and A. Harju, Phys. Rev. B 81, 075321 (2010).
D. R. Rohr, J. Toulouse, and K. Pernal, Phys. Rev. A 82, 052502 (2010).
M. Piris, X. Lopez, F. Ruiprez, J. M. Matxain, and J. M. Ugalde, J. Chem. Phys. 134, 164102 (2011).
N. Helbig, N. N. Lathiotakis, and E. K. U. Gross, Phys. Rev. A 79, 022504 (2009).
N. N. Lathiotakis, S. Sharma, N. Helbig, J. K. Dewhurst, M. A. L. Marques, F. Eich, T. Baldsiefen, A. Zacarias, and E. K. U. Gross, Zeitschrift für Physikalische Chemie 224, 467 (2010).
N. N. Lathiotakis, S. Sharma, J. K. Dewhurst, F. G. Eich, M. A. L. Marques, and E. K. U. Gross, Phys. Rev. A 79, 040501 (2009).
K. Pernal and J. Cioslowski, Chem. Phys. Lett. 412, 71 (2005).
P. Leiva and M. Piris, Journal of Molecular Structure: THEOCHEM 770, 45 (2006).
S. Sharma, S. Shallcross, J. K. Dewhurst, and E. K. U. Gross, arXiv:0912.1118 [cond-mat] (2009).
L. A. Curtiss, P. C. Redfern, K. Raghavachari, and J. A. Pople, J. Chem. Phys. 109, 42 (1998).
E. McCormack, J. M. Gilligan, C. Cornaggia, and E. E. Eyler, Phys. Rev. A 39, 2260 (1989).
H. R. Ihle and C. H. Wu, J. Chem. Phys. 63, 1605 (1975).
J. L. Bahr, A. J. Blake, J. H. Carver, and V. Kumar, J. Quant. Spectrosc. Radiat. Transfer 9, 1359 (1969).
L. S. Cederbaum and W. von Niessen, Chem. Phys. Lett. 24, 263 (1974).
G. Bieri and L. Asbrink, Journal of Electron Spectroscopy and Related Phenomena 20, 149 (1980).
M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman, G. Scalmani, V. Barone, B. Mennucci, G. A. Petersson, et al., Gaussian 09, Revision A.1, Gaussian Inc., Wallingford CT, 2009.
J. F. Janak, Phys. Rev. B 18, 7165 (1978).
HIPPO computer program; info: [email protected].
M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. J. Su, et al., J. Comp. Chem. 14, 1347 (1993).
N. Helbig, G. Theodorakopoulos, and N. N. Lathiotakis, J. Chem. Phys. 135, 054109 (2011).
T. H. Dunning, Jr., J. Chem. Phys. 90, 1007 (1989).
H. W. Sarkas, J. H. Hendricks, S. T. Arnold, and K. H. Bowen, J. Chem. Phys. 100, 1884 (1994).
M. Meunier, N. Quirke, and D. Binesti, Molecular Simulation 23, 109 (1999).
See supplementary material at [URL will be inserted by AIP] for a set of tables containing our calculation results for the IPs and EAs using RDMFT functionals with methods DEF, DIF, and DER, as well as the extended Koopmans' theorem.
Quantitative Estimates for Operator-Valued and Infinitesimal Boolean and Monotone Limit Theorems
Octavio Arizmendi, Marwa Banna, and Pei-Lun Tseng
arXiv:2211.08054 (https://export.arxiv.org/pdf/2211.08054v1.pdf)

Abstract: We provide Berry-Esseen bounds for sums of operator-valued Boolean and monotone independent variables, in terms of the first moments of the summands. Our bounds are on the level of Cauchy transforms as well as the Lévy distance. As applications, we obtain quantitative bounds for the corresponding CLTs, provide a quantitative "fourth moment theorem" for monotone independent random variables including the operator-valued case, and generalize the results by Hao and Popa on matrices with Boolean entries. Our approach relies on a Lindeberg method that we develop for sums of Boolean/monotone independent random variables. Furthermore, we push this approach to the infinitesimal setting to obtain the first quantitative estimates for the operator-valued infinitesimal free, Boolean and monotone CLT.
Quantitative Estimates for Operator-Valued and Infinitesimal Boolean and Monotone Limit Theorems

Octavio Arizmendi, Marwa Banna, and Pei-Lun Tseng

15 Nov 2022

1. INTRODUCTION

Noncommutative probability theory deals with operators from a probabilistic viewpoint. Like any probabilistic theory, one of its main notions is that of independence. A special feature of noncommutative probability is, however, the diversity of such notions: five of them are considered to be the most fundamental, following the classification of natural products of Muraki [24] from the categorical axiomatization viewpoint of Ben Ghorbal and Schürmann [13]. Besides the well-studied tensor independence and free independence, there are three other notions satisfying such axioms: Boolean independence, monotone independence, and its mirror image, antimonotone independence.
Operator-valued (OV) and operator-valued infinitesimal (OVI) extensions of these different notions of independence also exist, where independence is defined with respect to some conditional expectation and some completely positive linear B-bimodule map, respectively; see Sections 2.1 and 2.2. In this paper, we are mainly concerned with operator-valued Boolean and monotone independence. We are interested in studying distributions of sums of independent variables x_1 + · · · + x_n, for which we prove Berry-Esseen results that are further extended to the operator-valued infinitesimal setting. One should note that Boolean and monotone independence have the special feature that constants are not independent from any non-trivial algebra A. Thus adapting strategies for free independence, as for example in [6], is not direct. For instance, when working with Cauchy transforms one has the problem that (Boolean or monotone) independence of X from Y does not imply the independence of X from (z − Y)^{−1} for a complex number z. Thus, in order to make precise estimates of joint moments, one needs a more detailed analysis.

Our main results, stated in Section 1.1, have various applications, including quantitative bounds for the operator-valued Boolean and monotone central limit theorems. These bounds are on the level of the operator-valued Cauchy transforms as well as the Lévy distance, and are merely in terms of the moments of fourth and second order; see Sections 4.1 and 4.2. As further applications, we provide in Section 4.3 a quantitative "fourth moment theorem" for the class of infinitely divisible measures with respect to the monotone convolution, including the operator-valued case. Our approach also generalizes the results by Hao and Popa [30] on matrices with Boolean entries to include general variance profiles; see Section 5.
Finally, as our results extend to the infinitesimal setting, we provide in Section 4.4 the first quantitative estimates for the operator-valued infinitesimal free, Boolean and monotone central limit theorems. Moreover, we provide in Appendix A an algebraic construction of the operator-valued infinitesimal product in the free, Boolean and monotone settings.

Our approach relies on the Lindeberg method [19], a powerful method that allows comparing distributions of sums of independent variables. The idea is to compare the distribution of x_1 + · · · + x_n with that of y_1 + · · · + y_n by keeping track of the effect of exchanging x_i for y_i, one at a time. This method has been used in noncommutative probability previously, in the context of random matrices and of free and exchangeable random variables; see the papers [5, 6, 10].

1.1. Statement of results. Let (A, E, B) be an operator-valued C*-probability space and let x = {x_1, . . . , x_N} and y = {y_1, . . . , y_N} be two self-adjoint families in A whose elements are infinitesimally free, Boolean, or monotone independent over B. Our aim is to compare the analytic distribution, in terms of Cauchy transforms, of x_N = Σ_{i=1}^N x_i and y_N = Σ_{i=1}^N y_i, whenever the variables are centered and have matching moments of second order. More precisely, we assume the following:

Assumption 1. For each 1 ≤ i ≤ N, E[x_i] = E[y_i] = 0 and E[x_i b x_i] = E[y_i b y_i] for all b ∈ B.

To state our main estimate, we first need to introduce some notation on operator-valued moments:

α_2(x) = max_{1≤i≤N} ‖E[x_i^2]‖,   α̃_4(x) := max_{1≤i≤N} sup ‖E[x_i b* x_i^2 b x_i]‖   and   α_4(x) = max_{1≤i≤N} ‖E[x_i^4]‖,

where the supremum is taken over b ∈ B such that ‖b‖ = 1.

Theorem 1.1 (Boolean Case). Let (A, E, B) be an operator-valued C*-probability space. Let N ∈ N and consider two families x = {x_1, . . . , x_N} and y = {y_1, . . . , y_N} of self-adjoint elements in A satisfying Assumption 1 and whose elements are Boolean independent over B. Then for any b ∈ H^+(B),

‖E[G_{x_N}(b)] − E[G_{y_N}(b)]‖ ≤ ‖ℑm(b)^{−1}‖^4 √α_2(x) (√α_4(x) + √α_4(y)) N.   (1.1)

If, in addition, (A, ϕ) is a W*-probability space with ϕ = ϕ ∘ E, we have that, for any ε > 0,

∫_R |ℑm ϕ[G_{x_N}(t + iε)] − ℑm ϕ[G_{y_N}(t + iε)]| dt ≤ (π/ε^3) √α_2(x) (√α_4(x) + √α_4(y)) N.   (1.2)

Let (A, E, B) be an operator-valued C*-probability space and a an element of A. We denote by A_a the B-bimodule algebra generated by a. For given x_1, . . .
, x_k ∈ A, we write A_{x_1} ≺ A_{x_2} ≺ · · · ≺ A_{x_k} over B whenever A_{x_i} is monotone independent from A_{x_j} over B for all i < j.

Theorem 1.2 (Monotone Case). Let (A, E, B) be an operator-valued C*-probability space. Let N ∈ N and consider two families x = {x_1, . . . , x_N} and y = {y_1, . . . , y_N} of self-adjoint elements in A satisfying Assumption 1 and such that A_{x_1} ≺ A_{x_2} ≺ · · · ≺ A_{x_N} over B and A_{y_1} ≺ A_{y_2} ≺ · · · ≺ A_{y_N} over B. Then for any b ∈ H^+(B),

‖E[G_{x_N}(b)] − E[G_{y_N}(b)]‖ ≤ ‖ℑm(b)^{−1}‖^4 √α_2(x) (√α_4(x) + √α_4(y)) N.   (1.3)

In the case of a W*-probability space (A, ϕ), where B = C and E = ϕ, we have in addition, for any ε > 0,

∫_R |ℑm ϕ[G_{x_N}(t + iε)] − ℑm ϕ[G_{y_N}(t + iε)]| dt ≤ (π/ε^3) √α_2(x) (√α_4(x) + √α_4(y)) N.   (1.4)

The above estimates allow us to quantify the CLT for sums of Boolean or monotone independent random variables with amalgamation in terms of the second and fourth moments; see Section 4 for the precise statements. More precisely, the bounds (1.1) and (1.3) allow us to prove convergence in distribution over B, while the bounds (1.2) and (1.4) yield estimates on the Lévy distance. The method of proof is also adapted to provide estimates on the distribution of matrices with Boolean entries; see Section 5.

Suppose that (A, E, E′, B) is an OV C*-infinitesimal probability space, and let x = {x_1, . . . , x_N} and y = {y_1, . . . , y_N} be two self-adjoint families in A. Then for any b ∈ H^+(B),

‖E′[G_{x_N}(b)] − E′[G_{y_N}(b)]‖ ≤ 2N ‖ℑm(b)^{−1}‖^4 (max_{1≤i≤N} ‖x_i‖^3 + max_{1≤i≤N} ‖y_i‖^3).   (1.5)

Here x and y are infinitesimally monotone independent in the sense that A_{x_1} ≺≺ A_{x_2} ≺≺ · · · ≺≺ A_{x_N} and A_{y_1} ≺≺ A_{y_2} ≺≺ · · · ≺≺ A_{y_N} over B; see Section 2.2 for more details on the order.

2. PRELIMINARIES

2.1. OV Probability Theory. Operator-valued probability theory allows one to consider conditional expectations and gives a wider range of applicability. To define such spaces, we start by introducing some notions and recalling some definitions.

Let A be a unital algebra over C and B a unital subalgebra of A. We say that C is a subalgebra of A over B if C is a subalgebra of A and bc, cb ∈ C for all b ∈ B and c ∈ C. A subalgebra of A over B may not contain the unit of A. For x_1, . . . , x_r ∈ A, let B⟨x_1, . . . , x_r⟩_0 denote the subalgebra of A over B consisting of finite sums of elements of {b_1 x_{i_1} b_2 · · · x_{i_n} b_{n+1} : b_i ∈ B, n ≥ 1, i_1, . . . , i_n ∈ {1, . . . , r}}. Note that, in general, B is not contained in B⟨x_1, . . . , x_r⟩_0. Let D be another unital algebra containing B as a subalgebra. A map f from A to D is called B-linear if f(b_1 x b_2 + y) = b_1 f(x) b_2 + f(y) for all b_1, b_2 ∈ B and x, y ∈ A. A B-linear map E with values in B is called a conditional expectation if E(b) = b for all b ∈ B. From now on we assume that E is a conditional expectation in the above sense. A random variable is an element of A, and a random vector is an element of A^r for any r ≥ 1.

A W*-probability space is a pair (A, ϕ) where A is a von Neumann algebra and ϕ is a faithful normal state on A. (A, E, B) is called an operator-valued C*-probability space if A is a unital C*-algebra, B is a unital subalgebra, and E : A → B is a conditional expectation that is completely positive. Working in such spaces allows conditioning with respect to E, which carries many properties similar to those of the scalar-valued expectation ϕ. In addition, if (A, ϕ) is a tracial W*-probability space and B is a von Neumann subalgebra of A, then there exists a unique conditional expectation E : A → B that preserves the trace, i.e., ϕ ∘ E = ϕ. Note that in the tracial setting, the existence of a trace-preserving conditional expectation E was proved in [39]. Since in the Boolean and monotone cases the state ϕ is non-tracial, we will only consider settings where there exists a conditional expectation E with ϕ ∘ E = ϕ.
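A concrete finite-dimensional instance of the above definitions, purely for illustration: take A = M_2(C) ⊗ M_n(C), embed B = M_2(C) as b ↦ b ⊗ 1_n, and let E = id ⊗ tr_n be the partial trace with respect to the normalized trace on the second factor. The sketch below checks numerically that this E is a conditional expectation in the sense just defined, i.e., that it is B-bilinear and restricts to the identity on B; all matrices are randomly generated test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # size of the traced-out factor (arbitrary)

def embed(b):
    """Embed b in B = M_2 into A = M_2 (x) M_n as b (x) 1_n."""
    return np.kron(b, np.eye(n))

def E(a):
    """Conditional expectation E = id (x) tr_n : M_2 (x) M_n -> M_2,
    the partial trace with the normalized trace on M_n."""
    a = a.reshape(2, n, 2, n)
    return np.trace(a, axis1=1, axis2=3) / n

# E restricted to B is the identity: E(b (x) 1_n) = b.
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
assert np.allclose(E(embed(b)), b)

# B-bimodule property: E(b1 a b2) = b1 E(a) b2 for b1, b2 in B and a in A.
a = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
b1, b2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
lhs = E(embed(b1) @ a @ embed(b2))
rhs = b1 @ E(a) @ b2
assert np.allclose(lhs, rhs)
print("E is a B-bimodule conditional expectation on this example")
```

This E is also completely positive, so (M_2 ⊗ M_n, E, M_2) is an operator-valued C*-probability space of the kind considered in the paper.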
A main difference, however, is that notions of positivity for operator-valued distributions need to hold for all matrix amplifications. It is also needed to extend the different notions of independence to the operator-valued setting, where independence is defined with respect to the conditional expectation E. This was done over time, where the notion of OV free independence was first introduced in [40]. On the other hand, Boolean independence, introduced by Speicher and Woroudi [35], and monotone independence, introduced by Muraki [21,22,23], were extended later on to the operator-valued setting (see [29,28,16,33]). We recall these notions of independence in the following definition.
• Sub-algebras (A i ) i∈I of A that contain B are called freely independent (or just free for short) over B if for i 1 , i 2 , . . . , i n ∈ I with i 1 ≠ i 2 ≠ i 3 ≠ . . . ≠ i n , and x j ∈ A i j with E[x j ] = 0 for all j = 1, 2, . . . , n, we have E[x 1 · · · x n ] = 0.
• (The possibly non-unital) B-bimodule subalgebras (A i ) i∈I of A are called Boolean independent over B if for all n ∈ N and x 1 , . . . , x n ∈ A such that x j ∈ A i j where i 1 ≠ i 2 ≠ . . . ≠ i n ∈ I, we have E[x 1 · · · x n ] = E[x 1 ] · · · E[x n ].
• Assume that Λ is equipped with a linear order <. (The possibly non-unital) B-bimodule subalgebras (A i ) i∈Λ of A are called monotone independent over B if E[x 1 · · · x j · · · x n ] = E[x 1 · · · x j−1 E[x j ]x j+1 · · · x n ] whenever x j ∈ A i j , i j ∈ Λ for all j and i j−1 < i j > i j+1 , where one of the inequalities is eliminated if j = 1 or j = n.
A crucial difference between monotone independence and the free and Boolean notions is the lack of symmetry, so when talking about monotone tuples of variables we need to specify an order. Therefore, we say subalgebras A α and A β are monotone independent in the sense that they are monotone independent with the order α < β, and we denote it by A α ≺ A β .
Given an OV probability space (A, E, B), elements (x i ) i∈I of A are said to be freely independent over B if the unital algebras generated by the elements x i over B are freely independent. Similarly, (x i ) i∈I are said to be Boolean (respectively monotone) independent if the non-unital B-bimodule subalgebras generated by x i , (i ∈ I) over B form a Boolean (respectively monotone) independent family. Since the notion of monotone independence depends on the order, we write x ≺ y or A x ≺ A y whenever x, y are monotone independent. As in the scalar case, noncommutative joint distributions are defined in the operator-valued realm as the collection of all possible B-valued mixed moments.
Definition 2.2 (B-valued joint distributions). Let X = (x 1 , . . . , x r ) be a random vector in an OV C * -probability space (A, E, B) and let i 1 , . . . , i k ∈ {1, . . . , r} for k ≥ 1. The multilinear map µ X i 1 ,...,i k defined by µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) = E(x i 1 b 1 · · · b k−1 x i k ) is called the (i 1 , . . . , i k )-moment of X. Then the B-distribution of X is defined as the collection of all possible (i 1 , . . . , i k )-moments of X: µ X = {µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) | i 1 , . . . , i k ∈ {1, . . . , r}, b 1 , . . . , b k−1 ∈ B}. Moreover, in the OV setting, convergence in distribution means convergence in norm of all joint B-valued moments. More precisely, for each n ∈ N, X n = (x (i) n ) i∈I ∈ A n and X = (x i ) i∈I ∈ A, we say (X n ) n converges to X in distribution over B if for any k ≥ 1, i 1 , . . . , i k ∈ {1, . . . , r} and b 1 , . . . , b k−1 ∈ B, µ Xn i 1 ,...,i k (b 1 , . . . , b k−1 ) − µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) −→ 0 as n → ∞. Let us now introduce the notion of Cauchy transforms, which is a crucial tool to study distributions. Suppose that (A, E, B) is an OV C * -probability space. The resolvent of an element x ∈ A is given by G x (b) = (b − x) −1 for any b ∈ B such that b − x is invertible.
If x = x * ∈ A, then the OV Cauchy transform of x is given by G B x (b) := E[G x (b)] for all b ∈ H + (B), where H + (B) denotes the OV upper half plane of B defined by H + (B) := {b ∈ B | ℑm(b) = (1/2i)(b − b * ) > 0}. Note that for each x = x * ∈ A, b − x is invertible for all b ∈ H + (B), over which G B x is hence well-defined. If B = C, then we have the (scalar-valued) Cauchy transform of x, G x (z) := ϕ[G x (z)] for all z ∈ C + , where C + denotes the upper half complex plane. Note that G x encodes all the moments of x; indeed, for any z ∈ C + such that |z| > ‖x‖, we have G x (z) = Σ n≥0 ϕ(x n )/z n+1 . For later use, we note that for a given x = x * ∈ A, we obtain by the resolvent identity G x (z) 2 L 2 (A,ϕ) = ϕ[(z̄ − x) −1 (z − x) −1 ] = − ℑm(G x (z))/ℑm(z) for all z ∈ C + , which together with the fact that − (1/π) ℑm(G x (t + iε)) dt is a probability measure for every ε > 0 yields that ∫ R G x (t + iε) 2 L 2 (A,ϕ) dt = π/ε for all ε > 0. However, in the OV setting, the OV Cauchy transform G B x alone does not encode all possible moments of x, and one would need for this purpose all the matrix amplifications {G M k (B) 1 k ⊗x } k∈N of G B x , defined for any k ∈ N by G M k (B) 1 k ⊗x (b) := id k ⊗ E[G 1 k ⊗x (b)] for all b ∈ M k (B) such that b − 1 k ⊗ x is invertible. For more details on this matter, see [40]. The combinatorial approach for OV free probability theory was first introduced by Speicher [34]. The OV Boolean case was later developed by Popa [29] while the OV monotone case was investigated in [16,28]. We review now notions and properties of OV cumulants, which play a key role in the combinatorial counterpart of OV non-commutative probability theory. We denote by NC(n) the set of non-crossing partitions of [n] := {1, . . . , n}, namely, partitions π = {V 1 , . . . , V r } such that V i and V j do not cross for any 1 ≤ i, j ≤ r. The blocks of π are V 1 , . . . , V r and the size (in this case r) of the partition is denoted by |π|.
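Returning for a moment to the scalar Cauchy transform: both the moment expansion G x (z) = Σ n≥0 ϕ(x n )/z n+1 and the L 2 identity ∫ R ‖G x (t + iε)‖ 2 dt = π/ε can be checked numerically. A minimal sketch for the symmetric Bernoulli law (the measure, evaluation point, and grid are arbitrary choices):

```python
import numpy as np

# symmetric Bernoulli law mu = (delta_{-1} + delta_{+1})/2 as a toy example
atoms = np.array([-1.0, 1.0])
weights = np.array([0.5, 0.5])

G = lambda z: np.sum(weights / (z - atoms))        # Cauchy transform G_x(z)
moment = lambda n: np.sum(weights * atoms**n)      # phi(x^n)

# moment expansion G_x(z) = sum_{n>=0} phi(x^n)/z^{n+1}, valid for |z| > ||x||
z = 3j
series = sum(moment(n) / z ** (n + 1) for n in range(60))
assert abs(series - G(z)) < 1e-12

# integral identity: int_R ||G_x(t + i eps)||_{L^2}^2 dt = pi/eps, where
# ||G_x(t + i eps)||_{L^2}^2 = -Im G_x(t + i eps)/eps = int 1/|t + i eps - s|^2 dmu(s)
eps = 0.5
t = np.linspace(-2000.0, 2000.0, 2_000_001)
vals = sum(w / ((t - a) ** 2 + eps**2) for a, w in zip(atoms, weights))
integral = vals.sum() * (t[1] - t[0])
assert abs(integral - np.pi / eps) < 1e-2
```

The small residual error in the integral comes from truncating the real line and from the Riemann-sum discretization, both of which vanish as the grid grows.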
We denote furthermore by NC 2 (n) the set of non-crossing pair partitions of [n]. In order to describe cumulants for the various notions of independence, we give further notation:
Notation 2.3. Suppose that A is a unital algebra, and B is a unital sub-algebra of A. Let {f B n : A n → B} n be a family of multilinear maps. Given a partition π ∈ NC(n), we define the map f B π : A n → B recursively as follows:
• For π = 1 n := {{1, . . . , n}}, define f B π := f B n .
• For π ∈ NC(n) \ {1 n }, pick an interval block V = {l + 1, . . . , l + k} ∈ π and define f B π (x 1 , . . . , x n ) = f B π ′ (x 1 , . . . , x l f B k (x l+1 , . . . , x l+k ), x l+k+1 , . . . , x n ) = f B π ′ (x 1 , . . . , x l , f B k (x l+1 , . . . , x l+k )x l+k+1 , . . . , x n ), where π ′ = π \ V ∈ NC(n − k).
Definition 2.4. Let (A, E, B) be an OV C * -probability space. The OV free cumulants {r B n : A n → B} n , the OV Boolean cumulants {β B n : A n → B} n , and the OV monotone cumulants {h B n : A n → B} n are recursively defined for all n ∈ N and x 1 , . . . , x n ∈ A via the following moment-cumulant formulas: E[x 1 . . . x n ] = Σ π∈NC(n) r B π (x 1 , . . . , x n ); E[x 1 . . . x n ] = Σ π∈I(n) β B π (x 1 , . . . , x n ); E[x 1 . . . x n ] = Σ π∈NC(n) (1/τ(π)!) h B π (x 1 , . . . , x n ); where I(n) denotes the subset of NC(n) consisting of all interval partitions of [n] and τ(π)! := ∏ V∈π #{W ∈ π | W ⊆ {min(V), . . . , max(V)}}.
A crucial property of OV free and Boolean cumulants is the fact that they capture independence over B. Indeed, the notions of free and Boolean independence can be characterized by the property of vanishing mixed cumulants as described below:
Proposition 2.5. Suppose that (A, E, B) is an OV C * -probability space.
• Unital subalgebras A 1 , . . . , A n that contain B are freely independent if and only if for all s ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal, and for x 1 ∈ A i 1 , . . . , x s ∈ A i s , we have r B s (x 1 , . . . , x s ) = 0.
• B-bimodule subalgebras A 1 , . . . , A n are Boolean independent if and only if for all s ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal, and for x 1 ∈ A i 1 , . . . , x s ∈ A i s , we have β B s (x 1 , . . . , x s ) = 0.
In the free and Boolean case, it is easy to see -- using the above property of vanishing of mixed cumulants -- that independence is preserved when lifted to matrices.
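The three moment-cumulant formulas of Definition 2.4 can be tested in the scalar case B = C for a standardized variable (second cumulant equal to 1, all other cumulants 0): only pair partitions contribute, and one recovers the known even moments of the semicircular law (Catalan numbers), the Bernoulli law (all equal to 1), and the arcsine law (binom(2k, k)/2 k ). A brute-force sketch; the enumeration strategy is ours, not from the paper.

```python
import math

def pair_partitions(elems):
    """All partitions of the (even-sized) tuple `elems` into pairs (a, b), a < b."""
    if not elems:
        yield []
        return
    first = elems[0]
    for j in range(1, len(elems)):
        for rest in pair_partitions(elems[1:j] + elems[j + 1:]):
            yield [(first, elems[j])] + rest

def noncrossing(p):
    return not any(a < c < b < d for (a, b) in p for (c, d) in p)

def tau_factorial(p):
    """tau(pi)! = prod over blocks V of #{W in pi : W inside [min V, max V]}."""
    out = 1
    for (a, b) in p:
        out *= sum(1 for (c, d) in p if a <= c and d <= b)
    return out

def moments(k):
    nc2 = [p for p in pair_partitions(tuple(range(2 * k))) if noncrossing(p)]
    free = len(nc2)                                    # semicircular m_{2k}
    boolean = sum(1 for p in nc2
                  if all(b == a + 1 for (a, b) in p))  # interval pairings: Bernoulli
    monotone = sum(1 / tau_factorial(p) for p in nc2)  # weighted pairings: arcsine
    return free, boolean, monotone

for k in range(1, 5):
    free, boolean, monotone = moments(k)
    assert free == math.comb(2 * k, k) // (k + 1)      # Catalan number
    assert boolean == 1
    assert abs(monotone - math.comb(2 * k, k) / 2**k) < 1e-12
```

For instance, for k = 2 the two non-crossing pairings of [4] have tau-factorials 1 and 2, giving the fourth arcsine moment 1 + 1/2 = 3/2, while the single interval pairing gives the fourth Bernoulli moment 1.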
In the following proposition, we prove directly, and without relying on cumulants, that the statement remains valid in the case of monotone independence.
Proposition 2.6. Let (A, E, B) be an OV C * -probability space and A 1 , . . . , A n be subalgebras over B. If A 1 ≺ A 2 ≺ · · · ≺ A n over B, then for each N ∈ N, M N (A 1 ) ≺ M N (A 2 ) ≺ · · · ≺ M N (A n ) over M N (B) in the OV C * -probability space (M N (A), id N ⊗ E, M N (B)).
Proof. Let A m = [a (m) r,s ] N r,s=1 ∈ M N (A i m ) for each m, where i 1 , . . . , i n ∈ [n] with i 1 ≠ i 2 ≠ · · · ≠ i n . Suppose that i k 0 −1 < i k 0 > i k 0 +1 ; then we observe that for any i, j ∈ [N],
(id N ⊗ E)[A 1 A 2 · · · A k 0 −1 A k 0 A k 0 +1 · · · A n ] i,j = Σ i 1 ,...,i n−1 ∈[N] E[a (1) i,i 1 a (2) i 1 ,i 2 · · · a (k 0 −1) i k 0 −2 ,i k 0 −1 a (k 0 ) i k 0 −1 ,i k 0 a (k 0 +1) i k 0 ,i k 0 +1 · · · a (n) i n−1 ,j ]
= Σ i 1 ,...,i n−1 ∈[N] E[a (1) i,i 1 a (2) i 1 ,i 2 · · · a (k 0 −1) i k 0 −2 ,i k 0 −1 E[a (k 0 ) i k 0 −1 ,i k 0 ] a (k 0 +1) i k 0 ,i k 0 +1 · · · a (n) i n−1 ,j ]
= E[ Σ i 1 ,...,i n−1 ∈[N] a (1) i,i 1 a (2) i 1 ,i 2 · · · a (k 0 −1) i k 0 −2 ,i k 0 −1 E[a (k 0 ) i k 0 −1 ,i k 0 ] a (k 0 +1) i k 0 ,i k 0 +1 · · · a (n) i n−1 ,j ]
= (id N ⊗ E)[A 1 A 2 · · · A k 0 −1 (id N ⊗ E)[A k 0 ] A k 0 +1 · · · A n ] i,j ,
which implies that (id N ⊗ E)[A 1 · · · A k 0 −1 A k 0 A k 0 +1 · · · A n ] = (id N ⊗ E)[A 1 · · · A k 0 −1 (id N ⊗ E)[A k 0 ]A k 0 +1 · · · A n ], from which we conclude that monotone independence over B lifts to the matrix level.
The development of OV probability spaces and the different notions of independence led naturally to proving OV counterparts of the central limit theorem and studying their limiting laws. As in the scalar setting, the limiting distributions are universal in the sense that they only depend on the initial distribution of the variables through their variance.
Note that for a B-valued element x with E[x] = 0, the variance is given by the completely positive map η x : B → B, η x (b) := E[xbx], rather than a positive scalar σ 2 x . The free case and its OV semicircular limiting distribution were first proven in [40] and [34], followed by the Boolean and monotone cases with their OV Bernoulli and arcsine limiting distributions in [8] and [16]. Before recalling their definitions precisely, we refer the reader to [17, Chapter 6] for more discussion around the OV central limit theorems.
Definition 2.7. Let (A, E, B) be an OV C * -probability space and x ∈ A with E[x] = 0. We say that x is
• a B-valued semi-circular element with variance η if E[b 0 xb 1 . . . xb k ] = Σ π∈NC 2 (k) b 0 η π (b 1 , . . . , b k−1 )b k if k is even, and 0 if k is odd;
• a B-valued Bernoulli element with variance η if E[b 0 xb 1 . . . xb k ] = b 0 η(b 1 )b 2 . . . η(b k−1 )b k if k is even, and 0 if k is odd;
• a B-valued arcsine element with variance η if E[b 0 xb 1 . . . xb k ] = Σ π∈NC 2 (k) (1/τ(π)!) b 0 η π (b 1 , . . . , b k−1 )b k if k is even, and 0 if k is odd;
where η π : B k−1 → B, η π (b 1 , . . . , b k−1 ) = E π [xb 1 , xb 2 , . . . , xb k−1 , x]. Note that by Notation 2.3, for a pairing π ∈ NC 2 (k), E π breaks the product into pairs of the form E[xbx] = η x (b), where π just indicates the way in which the pairs are nested. We refer to [8] and [17, Chapter 6] for more details.
Theorem 2.9. Suppose that (A, E, B) is an OV C * -probability space, and {x i } i∈N is a sequence of identically distributed freely (respectively Boolean, monotone) independent elements that are centered, E[x 1 ] = 0, and whose variance is given by a completely positive map η. For each given N, we let S N = (1/√N)(x 1 + · · · + x N ). Then (S N ) N converges to s in distribution over B, where s is a B-valued semi-circular (respectively Bernoulli, arcsine) element with variance η.

2.2. OV Infinitesimal Probability.
One of the generalizations of OV free probability is OV infinitesimal (OVI) free probability, which was first studied in [11]. Notions of OVI Boolean independence and OVI monotone independence were later developed in [27]. In this subsection, we review the framework of the various notions of infinitesimal independence.
• Sub-algebras (A i ) i∈I of A that contain B are called infinitesimally freely independent (or just infinitesimally free for short) over B if for i 1 , i 2 , . . . , i n ∈ I with i 1 ≠ i 2 ≠ i 3 ≠ . . . ≠ i n , and x j ∈ A i j with E[x j ] = 0 for all j = 1, 2, . . . , n, we have E[x 1 . . . x n ] = 0; E ′ [x 1 . . . x n ] = Σ n j=1 E[x 1 . . . x j−1 E ′ [x j ]x j+1 . . . x n ].
• (The possibly non-unital) B-bimodule subalgebras (A i ) i∈I of A are called infinitesimally Boolean independent over B if for all n ∈ N and x 1 , . . . , x n ∈ A such that x j ∈ A i j where i 1 ≠ i 2 ≠ . . . ≠ i n ∈ I, we have E[x 1 . . . x n ] = E[x 1 ] . . . E[x n ]; E ′ [x 1 . . . x n ] = Σ n j=1 E[x 1 ] . . . E[x j−1 ]E ′ [x j ]E[x j+1 ] . . . E[x n ].
• Assume that Λ is equipped with a linear order <. (The possibly non-unital) B-bimodule subalgebras (A i ) i∈Λ of A are called infinitesimally monotone independent over B if E[x 1 . . . x j . . . x n ] = E[x 1 . . . x j−1 E[x j ]x j+1 . . . x n ]; E ′ [x 1 . . . x j . . . x n ] = E ′ [x 1 . . . x j−1 E[x j ]x j+1 . . . x n ] + E[x 1 . . . x j−1 E ′ [x j ]x j+1 . . . x n ], whenever x j ∈ A i j , i j ∈ Λ for all j and i j−1 < i j > i j+1 , where one of the inequalities is eliminated if j = 1 or j = n.
Note that infinitesimal monotone independence, like monotone independence, lacks symmetry. In this context, A α and A β are infinitesimally monotone independent if they are infinitesimally monotone independent with the order α < β. We write in this case A α ≺≺ ≺ A β .
The elements (x i ) i∈I of A are infinitesimally free if the unital algebras generated by x i , (i ∈ I) are infinitesimally free over B. In addition, (x i ) i∈I are infinitesimally Boolean (respectively monotone) independent if the non-unital B-bimodule subalgebras generated by x i , (i ∈ I) over B form an infinitesimally Boolean (respectively monotone) independent family.
Definition 2.11. Let X = (x 1 , . . . , x r ) be a random vector and i 1 , . . . , i k ∈ {1, . . . , r} for k ≥ 1. Then the (i 1 , . . . , i k )-infinitesimal moment of X is the multilinear map ∂µ X i 1 ,...,i k that is defined by ∂µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) = E ′ (x i 1 b 1 . . . b k−1 x i k ). The infinitesimal distribution of X is the collection of all possible (i 1 , . . . , i k )-moments and infinitesimal moments of X. Moreover, in the OV-infinitesimal setting, convergence in infinitesimal distribution means convergence in norm of all joint B-valued moments and B-valued infinitesimal moments. More precisely, for each n ∈ N, X n = (x (i) n ) i∈I ∈ A n and X = (x i ) i∈I ∈ A, we say (X n ) n converges to X in infinitesimal distribution over B if for any k ≥ 1, i 1 , . . . , i k ∈ {1, . . . , r} and b 1 , . . . , b k−1 ∈ B, µ Xn i 1 ,...,i k (b 1 , . . . , b k−1 ) − µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) −→ 0 and ∂µ Xn i 1 ,...,i k (b 1 , . . . , b k−1 ) − ∂µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) −→ 0 as n → ∞.
Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space, and let x = x * ∈ A. The OV infinitesimal Cauchy transform of x is defined by ∂G B x (b) := E ′ [G x (b)] for all b ∈ B such that b − x is invertible. Note that the OV infinitesimal moments of x are obtained by considering all matrix amplifications {∂G M k (B) 1 k ⊗x } k∈N of ∂G B x , defined for any k ∈ N by ∂G M k (B) 1 k ⊗x (b) := id k ⊗ E ′ [G 1 k ⊗x (b)] for all b ∈ M k (B) such that b − 1 k ⊗ x is invertible. For more details, we refer the reader to [37].
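In the scalar case B = C, the infinitesimal Cauchy transform ∂G x = E ′ [G x ] is simply the derivative of the Cauchy transform along a perturbation of the state, and it generates the infinitesimal moments. A toy sketch with an illustrative two-atom law and a hypothetical choice of ϕ ′ (the weights and evaluation point are ours, not from the paper):

```python
import numpy as np

# toy scalar infinitesimal probability space: law mu_t = (1/2 + qt) delta_{-1}
# + (1/2 - qt) delta_{+1}; phi is the state at t = 0 and phi' = d/dt|_{t=0}
atoms = np.array([-1.0, 1.0])
w0 = np.array([0.5, 0.5])        # phi
w1 = np.array([0.3, -0.3])       # phi'; note phi'(1) = 0

G = lambda z: np.sum(w0 / (z - atoms))     # Cauchy transform  E[G_x(z)]
dG = lambda z: np.sum(w1 / (z - atoms))    # infinitesimal Cauchy transform E'[G_x(z)]

z = 2.0 + 1.0j
# dG is the derivative at t = 0 of the Cauchy transform of mu_t
G_t = lambda t: np.sum((w0 + t * w1) / (z - atoms))
h = 1e-6
assert abs(G(z) - G_t(0.0)) < 1e-15
assert abs(dG(z) - (G_t(h) - G_t(-h)) / (2 * h)) < 1e-8

# dG generates the infinitesimal moments: dG(z) = sum_{n>=0} phi'(x^n)/z^{n+1}
series = sum(np.sum(w1 * atoms**n) / z ** (n + 1) for n in range(80))
assert abs(series - dG(z)) < 1e-12
```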
Note that for a given OV C * -infinitesimal probability space (A, E, E ′ , B), the corresponding upper triangular probability space (Ã, Ẽ, B̃) is an OV probability space where Ã = { ( x x ′ 0 x ) | x, x ′ ∈ A }, B̃ = { ( b b ′ 0 b ) | b, b ′ ∈ B }, and Ẽ is the map from Ã to B̃ defined by Ẽ ( x x ′ 0 x ) = ( E[x] E ′ [x] + E[x ′ ] 0 E[x] ). The following proposition provides the connection between (A, E, E ′ , B) and (Ã, Ẽ, B̃) (see [27,38]). Note that (Ã, Ẽ, B̃) is an OV Banach non-commutative probability space (that is, Ã is a unital Banach algebra, B̃ is a subalgebra with 1 ∈ B̃, and Ẽ : Ã → B̃ is a linear, bounded, B̃-bimodule projection) with the norm · on Ã defined by ã = x A + x ′ A where ã = ( x x ′ 0 x ).
Before recalling the OVI cumulants, we introduce some further notation.
Notation 2.13. Suppose that A is a unital algebra, and B is a unital sub-algebra of A. Let {f B n : A n → B} n and {∂f B n : A n → B} n be two families of multilinear maps. Given a partition π ∈ NC(n) and a block V ∈ π, we define ∂f B π,V : A n → B to be the map that is equal to f B π , as given in Notation 2.3, except that for the block V, we replace f B |V| by ∂f B |V| . Then, the map ∂f B π : A n → B is defined by ∂f B π (x 1 , x 2 , . . . , x n ) := Σ V∈π ∂f B π,V (x 1 , . . . , x n ).
Definition 2.14. Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space. We define the
• OV infinitesimally free cumulants {∂r B n : A n → B} n to be families of multilinear maps such that for all n ∈ N and x 1 , . . . , x n ∈ A, E ′ [x 1 . . . x n ] = Σ π∈NC(n) ∂r B π (x 1 , . . . , x n ), where {r B n : A n → B} n denotes the OV free cumulants.
• OV infinitesimally Boolean cumulants {∂β B n : A n → B} n to be families of multilinear maps such that for all n ∈ N and x 1 , . . . , x n ∈ A, E ′ [x 1 . . . x n ] = Σ π∈I(n) ∂β B π (x 1 , . . . , x n ), where {β B n : A n → B} n denotes the OV Boolean cumulants.
• OV infinitesimally monotone cumulants {∂h B n : A n → B} n to be families of multilinear maps such that for all n ∈ N and x 1 , . . . , x n ∈ A, E ′ [x 1 . . . x n ] = Σ π∈NC(n) (1/τ(π)!) ∂h B π (x 1 , . . . , x n ), where {h B n : A n → B} n denotes the OV monotone cumulants.
Note that the notions of OV infinitesimally free and Boolean independence still share the property of vanishing mixed cumulants; see [38] and [27].
Proposition 2.15. Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space.
• Unital subalgebras A 1 , . . . , A n that contain B are infinitesimally free if and only if for all s ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal, and for x 1 ∈ A i 1 , . . . , x s ∈ A i s , we have r B s (x 1 , . . . , x s ) = ∂r B s (x 1 , . . . , x s ) = 0.
• B-bimodule subalgebras A 1 , . . . , A n are infinitesimally Boolean independent if and only if for all s ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal, and for x 1 ∈ A i 1 , . . . , x s ∈ A i s , we have β B s (x 1 , . . . , x s ) = ∂β B s (x 1 , . . . , x s ) = 0.
Finally, to recall and address the OV infinitesimal central limit theorems that were also proved in [27,37], we recall the definition of the associated limiting distributions.
Definition 2.16. Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space. Let x ∈ A with E[x] = E ′ [x] = 0. We denote by η ′ x : B → B the B-valued map given by η ′ x (b) := E ′ [xbx] and call the pair (η x , η ′ x ), or simply (η, η ′ ), the infinitesimal variance of x. We say that x is
• a B-valued infinitesimal semi-circular element with infinitesimal variance (η, η ′ ) if x is a semi-circular element with variance η and E ′ [b 0 xb 1 . . . xb k ] = Σ π∈NC 2 (k) b 0 ∂η π (b 1 , . . . , b k−1 )b k if k is even, and 0 if k is odd.
• a B-valued infinitesimal Bernoulli element with infinitesimal variance (η, η ′ ) if x is a Bernoulli element with variance η and E ′ [b 0 xb 1 . . . xb k ] = Σ k/2−1 j=0 b 0 η(b 1 ) . . . η ′ (b 2j+1 ) . . . η(b k−1 )b k if k is even, and 0 if k is odd.
• a B-valued infinitesimal arcsine element with infinitesimal variance (η, η ′ ) if x is an arcsine element with variance η and E ′ [b 0 xb 1 . . . xb k ] = Σ π∈NC 2 (k) (1/τ(π)!) b 0 ∂η π (b 1 , . . . , b k−1 )b k if k is even, and 0 if k is odd.
Theorem 2.17. Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space, and {x i } i∈N is a sequence of identically distributed infinitesimally freely (respectively Boolean, monotone) independent elements with infinitesimal variance (η, η ′ ) and such that E[x 1 ] = E ′ [x 1 ] = 0. For each given N ∈ N, we let S N = (1/√N)(x 1 + · · · + x N ). Then (S N ) N converges to s in infinitesimal distribution over B, where s is a B-valued infinitesimal semi-circular (respectively Bernoulli, arcsine) element with infinitesimal variance (η, η ′ ).
Remark 2.18. By applying Proposition 2.12 and [27, Lemma 2.1], we conclude that x is a B-valued infinitesimal semi-circular (respectively Bernoulli, arcsine) element with infinitesimal variance (η, η ′ ) if and only if the OV free (respectively Boolean, monotone) cumulants (κ B n ) n are as in (2.1) and the OVI free (respectively Boolean, monotone) cumulants (∂κ B n ) n are as follows: ∂κ B n [xb 1 , xb 2 , . . . , xb n−1 , x] = η ′ (b 1 ) if n = 2, and 0 if n ≠ 2.
Let x 1 and x 2 be elements in A such that E[x i ] = E ′ [x i ] = 0 for i = 1, 2. Assume that x 1 and x 2 are B-valued semicircular (respectively Bernoulli) elements with variances η x 1 and η x 2 respectively. Then x 1 + x 2 is a semicircular (respectively Bernoulli) element with variance η x 1 +x 2 = η x 1 + η x 2 . This easily follows from independence and the property of vanishing mixed cumulants (see also [17]).
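The upper-triangular space (Ã, Ẽ, B̃) introduced earlier in this subsection can be emulated numerically by representing an element of Ã as a pair (x, x ′ ) with the multiplication of 2 × 2 upper-triangular matrices. The sketch below, in a commutative scalar toy model with illustrative choices of ϕ and ϕ ′ , checks that Ẽ restricts to the identity on B̃ and that Ẽ applied to powers of x̃ = (x, 0) packages the moments and infinitesimal moments of x:

```python
import numpy as np

# elements of A~ are pairs (x, x') standing for [[x, x'], [0, x]]; here A is the
# (commutative) algebra of functions on {-1, +1}, and phi, phi' are illustrative
one = np.ones(2)
x = np.array([-1.0, 1.0])
phi = lambda f: float(np.sum(np.array([0.5, 0.5]) * f))
phid = lambda f: float(np.sum(np.array([0.3, -0.3]) * f))   # phi'(1) = 0

def mul(a, b):
    # [[x, x'], [0, x]] [[y, y'], [0, y]] = [[xy, xy' + x'y], [0, xy]]
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def Etilde(a):
    return (phi(a[0]), phid(a[0]) + phi(a[1]))

# Etilde restricts to the identity on B~ (pairs of constant functions)
val = Etilde((2.0 * one, -1.5 * one))
assert np.isclose(val[0], 2.0) and np.isclose(val[1], -1.5)

# Etilde of powers of x~ = (x, 0) packages moments and infinitesimal moments of x
p = (one, 0.0 * one)                       # the unit of A~
for n in range(1, 6):
    p = mul(p, (x, 0.0 * one))
    m, dm = Etilde(p)
    assert np.isclose(m, phi(x**n)) and np.isclose(dm, phid(x**n))
```

This is only the commutative shadow of the construction; the point of the paper's setting is that the same bookkeeping works for genuinely noncommutative A and B.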
We prove now the infinitesimal analogue as follows.
Lemma 2.19. For each i = 1, 2, assume x i is an infinitesimal semicircular (respectively Bernoulli) element with infinitesimal variance (η x i , η ′ x i ). If x 1 and x 2 are infinitesimally free (respectively Boolean), then x 1 + x 2 is an infinitesimal semicircular (respectively Bernoulli) element with infinitesimal variance (η x 1 +x 2 , η ′ x 1 +x 2 ) = (η x 1 + η x 2 , η ′ x 1 + η ′ x 2 ).
Proof. The statement follows from independence and the property of vanishing mixed infinitesimal cumulants in Proposition 2.15, which yield directly for any n ≠ 2 and b 1 , . . . , b n−1 ∈ B that ∂κ B n ((x 1 + x 2 )b 1 , . . . , (x 1 + x 2 )b n−1 , (x 1 + x 2 )) = 0, while for n = 2 and any b ∈ B, η ′ x 1 +x 2 (b) = ∂κ B 2 ((x 1 + x 2 )b, (x 1 + x 2 )) = ∂κ B 2 (x 1 b, x 1 ) + ∂κ B 2 (x 2 b, x 2 ) = E ′ [x 1 bx 1 ] + E ′ [x 2 bx 2 ] = η ′ x 1 (b) + η ′ x 2 (b).
Remark 2.20. Note that there is no analogous lemma for the infinitesimal monotone case. To be precise, suppose that x 1 , . . . , x n are infinitesimally monotone independent arcsine elements in A such that E[x i ] = E ′ [x i ] = 0 for i ∈ [n]. It is not guaranteed that x 1 + · · · + x n is still an infinitesimal arcsine element. We call, in this case, x 1 + · · · + x n an infinitesimal generalized arcsine element.

2.3. Positivity of conditional expectations. Since the conditional expectation E is positive, it induces a B-valued pre-inner product ·, · : A × A → B, (x, y) → E[x * y], with respect to which A becomes a right pre-Hilbert B-module. In particular, we have the following analogue of the Cauchy-Schwarz inequality: E[x * y] 2 ≤ E[x * x] E[y * y] . (2.2) The positivity of E also implies the following important inequality E[x * wx] ≤ w E[x * x] , (2.3) which holds for all x ∈ A n and w ∈ M n (A) satisfying w ≥ 0.
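Both positivity inequalities (2.2) and (2.3) are exact and can be stress-tested with the pinching conditional expectation onto the diagonal of M n (C); the dimension and random inputs below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def E(x):
    """Pinching of M_n(C) onto its diagonal subalgebra (a positive conditional expectation)."""
    return np.diag(np.diag(x))

adj = lambda a: a.conj().T
opnorm = lambda a: np.linalg.norm(a, 2)

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = adj(x + y) @ (x + y)                  # an arbitrary positive element, w >= 0

# Cauchy-Schwarz (2.2): ||E[x* y]||^2 <= ||E[x* x]|| ||E[y* y]||
assert opnorm(E(adj(x) @ y)) ** 2 <= opnorm(E(adj(x) @ x)) * opnorm(E(adj(y) @ y)) + 1e-10

# inequality (2.3) with n = 1: ||w|| E[x* x] - E[x* w x] is positive semidefinite
gap = opnorm(w) * E(adj(x) @ x) - E(adj(x) @ w @ x)
assert np.linalg.eigvalsh(gap).min() >= -1e-10
```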
If µ and ν are two Borel probability measures on R, then
• the Lévy distance is defined by L(µ, ν) := inf{ε > 0 | ∀t ∈ R : F µ (t − ε) − ε ≤ F ν (t) ≤ F µ (t + ε) + ε};
• the Kolmogorov distance is defined by ∆(µ, ν) := sup t∈R |F µ (t) − F ν (t)|.
The Lévy distance provides a metrization of convergence in distribution and may be bounded in terms of the associated Cauchy transforms of µ and ν as follows: L(µ, ν) ≤ 2ε/π + (1/π) ∫ R |ℑm(G µ (t + iε)) − ℑm(G ν (t + iε))| dt. (2.4) For a proof of (2.4), we refer to Appendix B in [6].

3. PROOF OF THE MAIN RESULTS

In order to prove our main results, Theorems 1.1 and 1.2, we rely on an operator-valued Lindeberg method, which allows replacing the x i 's by the y i 's, one at a time, and controlling the error of such a replacement. We follow the first lines of the proof of Theorem 3.1 in [6], for which we recall the following basic algebraic identity:
Lemma 3.1. Let x and y be invertible in some unital complex algebra A. Then for each m ∈ N, the following identity holds: x −1 − y −1 = Σ m k=1 y −1 ((y − x) y −1 ) k + x −1 ((y − x) y −1 ) m+1 .
Proof. The proof follows by induction on m and the algebraic identity x −1 ((y − x) y −1 ) m+1 − y −1 ((y − x) y −1 ) m+1 = x −1 ((y − x) y −1 ) m+2 .
Proposition 3.2. For i = 0, 1, . . . , N, let z i = Σ i j=1 x j + Σ N j=i+1 y j , and for i = 1, . . . , N, let z 0 i = Σ i−1 j=1 x j + Σ N j=i+1 y j . Then for any b ∈ H + (B), G z N (b) − G z 0 (b) = Σ N i=1 (A i (b) + B i (b) + C i (b)), (3.1) where for each i = 1, . . . , N, A i (b) = G z 0 i (b) x i G z 0 i (b) − G z 0 i (b) y i G z 0 i (b), B i (b) = G z 0 i (b) (x i G z 0 i (b)) 2 − G z 0 i (b) (y i G z 0 i (b)) 2 , C i (b) = G z i (b) (x i G z 0 i (b)) 3 − G z i−1 (b) (y i G z 0 i (b)) 3 .
Proof. We start by writing, for any b ∈ H + (B), the difference as a telescoping sum: G z N (b) − G z 0 (b) = Σ N i=1 (G z i (b) − G z i−1 (b)) = Σ N i=1 [(G z i (b) − G z 0 i (b)) − (G z i−1 (b) − G z 0 i (b))].
Noting that z i − z 0 i = x i and z i−1 − z 0 i = y i , then by applying the algebraic identity in Lemma 3.1 up to order 3, we get G z i (b) − G z 0 i (b) = G z 0 i (b) x i G z 0 i (b) + G z 0 i (b) (x i G z 0 i (b)) 2 + G z i (b) (x i G z 0 i (b)) 3 and G z i−1 (b) − G z 0 i (b) = G z 0 i (b) y i G z 0 i (b) + G z 0 i (b) (y i G z 0 i (b)) 2 + G z i−1 (b) (y i G z 0 i (b)) 3 . Putting the above terms together and summing over i = 1, . . . , N ends the proof.

3.1. Boolean Case. We start with the proof of Theorem 1.1, relative to Boolean independence. While Boolean independence is simpler than the free case, we still need to treat it closely since Boolean independence is not well-behaved with respect to scalars.
Proof of Theorem 1.1. The starting point of the proof is the operator-valued Lindeberg method in Proposition 3.2, where we note that x N = z N and y N = z 0 .
Proof of (1.1). We will prove that E[A i (b)] = 0, E[B i (b)] = 0, and that E[C i (b)] ≤ ℑm(b) −1 4 √(α 2 (x)) (√(α 4 (x)) + √(α 4 (y))). Considering the telescopic sum in (3.1), we fix i ∈ {1, . . . , N} and start by controlling the first order term E[A i (b)] = E[G z 0 i (b) x i G z 0 i (b)] − E[G z 0 i (b) y i G z 0 i (b)]. First, noting that for any a = a * ∈ A and b ∈ H ± (B) such that b −1 < 1/ a , we can write G a (b) as a convergent power series as follows: G a (b) = Σ n≥0 b −1 (ab −1 ) n . (3.2) Hence, for b ∈ H + (B) such that b −1 ≤ 1/ z 0 i , we have E[G z 0 i (b) x i G z 0 i (b)] = Σ n,m≥0 E[b −1 (z 0 i b −1 ) n x i b −1 (z 0 i b −1 ) m ] (3.3) = Σ n,m≥0 E[b −1 (z 0 i b −1 ) n ] E[x i ] E[b −1 (z 0 i b −1 ) m ] = E[G z 0 i (b)] E[x i ] E[G z 0 i (b)] = 0, where the last equality follows from the fact that E[x i ] = 0. In the same way, we prove that E[G z 0 i (b) y i G z 0 i (b)] = 0 and hence E[A i (b)] = 0 for b ∈ H + (B) such that b −1 ≤ 1/ z 0 i . This identity extends by analyticity to all of H + (B). We turn now to the second order term in (3.1): E[B i (b)] = E[G z 0 i (b) (x i G z 0 i (b)) 2 ] − E[G z 0 i (b) (y i G z 0 i (b)) 2 ].
We develop the first term on the right-hand side using (3.2) again. For b ∈ H + (B) such that b −1 ≤ 1/ z 0 i , we write E G z 0 i (b) x i G z 0 i (b) x i G z 0 i (b) = n,m,k≥0 E b −1 (z i 0 b −1 ) n x i b −1 (z i 0 b −1 ) m x i b −1 (z i 0 b −1 ) k = n,k≥0 E b −1 (z i 0 b −1 ) n E x i b −1 x i E b −1 (z i 0 b −1 ) k + n,k≥0 m≥1 E b −1 (z i 0 b −1 ) n E[x i ] E b −1 (z i 0 b −1 ) m E[x i ] E b −1 (z i 0 b −1 ) k = n,k≥0 E b −1 (z i 0 b −1 ) n E x i b −1 x i E b −1 (z i 0 b −1 ) k = E G z 0 i (b) E x i b −1 x i E G z 0 i (b) . (3.4) The second equality follows from the fact that x i and z 0 i are Boolean independent over B while the third one follows from the fact that E[x i ] = 0. Similarly, we prove that E G z 0 i (b) y i G z 0 i (b) 2 = E G z 0 i (b) E y i b −1 y i E G z 0 i (b) . As the second order moments match by Assumption 1, we get that E B i (b) = 0 for b ∈ H + (B) such that b −1 ≤ 1/ z 0 i and hence by analyticity for all b ∈ H + (B). We are left with the third order term E C i (b) , namely E G x N (b) − E G y N (b) = N i=1 E G z i (b) x i G z 0 i (b) 3 − E G z i−1 (b) y i G z 0 i (b) 3 . Noting that G z i (b)x i G z 0 i (b) = G z 0 i (b)x i G z i (b), then (2.2) and (2.3) yield E G z i (b)x i G z 0 i (b)x i G z 0 i (b)x i G z 0 i (b) 2 = E G z 0 i (b)x i G z i (b)x i G z 0 i (b)x i G z 0 i (b) 2 ≤ E G z 0 i (b)x i G z i (b)G z i (b) * x i G z 0 i (b) * · E G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b)x i G z 0 i (b) ≤ G z i (b) 2 · E G z 0 i (b)x 2 i G z 0 i (b) * · E G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b)x i G z 0 i (b) . (3.5) First, we note that G z i (b) ≤ ℑm(b) −1 . We also note that by similar computations as in (3.3) and the positivity of E, we have E G z 0 i (b) x 2 i G z 0 i (b) * = E G z 0 i (b) E[x 2 i ] G z 0 i (b) * ≤ E[x 2 i ] · E G z 0 i (b) 2 ≤ ℑm(b) −1 2 α 2 (x), where α 2 (x) := max 1≤i≤N E[x 2 i ] . 
Again, by similar computations as in (3.4) and the positivity of E, we get E[G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b) x i G z 0 i (b)] = E[G z 0 i (b) * ] · E[x i (b −1 ) * x 2 i b −1 x i ] · E[G z 0 i (b)] ≤ E[x i (b −1 ) * x 2 i b −1 x i ] · E[G z 0 i (b)] 2 ≤ b −1 2 ℑm(b) −1 2 α 4 (x), where α 4 (x) := max 1≤i≤N sup E[x i b * x 2 i bx i ] with the supremum taken over b ∈ B such that b = 1. Putting the above bounds together, we get E[G z i (b) x i G z 0 i (b) x i G z 0 i (b) x i G z 0 i (b)] ≤ b −1 ℑm(b) −1 3 √(α 2 (x) α 4 (x)). Similarly, we prove that E[G z i−1 (b) y i G z 0 i (b) y i G z 0 i (b) y i G z 0 i (b)] ≤ b −1 ℑm(b) −1 3 √(α 2 (y) α 4 (y)). As the second moments match, we have α 2 (x) = α 2 (y), and hence we infer that E[C i (b)] ≤ b −1 ℑm(b) −1 3 √(α 2 (x)) (√(α 4 (x)) + √(α 4 (y))) ≤ ℑm(b) −1 4 √(α 2 (x)) (√(α 4 (x)) + √(α 4 (y))). Finally, summing over i = 1, . . . , N, we get the bound in (1.1).
Remark 3.3. Note that in the last inequality we used the fact that b −1 ≤ ℑm(b) −1 for all b ∈ H + (B). The proof of this inequality can be found in [14], or it can also be easily verified as follows: we write b = x + iy where x = x * and y = y * > 0 ∈ B. Then we observe that b −1 = y −1/2 [w + i] −1 y −1/2 where w = y −1/2 xy −1/2 , which implies that b −1 ≤ y −1/2 2 (w + i) −1 = y −1 (w + i) −1 . Finally, note that w is self-adjoint, and hence (w + i) −1 = 1/d(i, σ(w)) ≤ 1, where σ(w) is the spectrum of w and d(i, σ(w)) = inf{|i − s| | s ∈ σ(w)}.
Proof of (1.2). As the conditional expectation E preserves ϕ, ϕ = ϕ • E, the first steps of the proof follow in the same way as for (1.1). For each z ∈ C + , we have that ϕ[G x N (z)] − ϕ[G y N (z)] = Σ N i=1 ϕ[C i (z)] = Σ N i=1 ϕ[G z i (z) (x i G z 0 i (z)) 3 − G z i−1 (z) (y i G z 0 i (z)) 3 ]. We start by controlling the first term of C i (z).
By similar computations as in (3.5), we get |ϕ[G z i (z) x i G z 0 i (z) x i G z 0 i (z) x i G z 0 i (z)]| 2 ≤ G z i (z) 2 · ϕ[G z 0 i (z) x 2 i G z 0 i (z) * ] · ϕ[G z 0 i (z) * x i G z 0 i (z) * x 2 i G z 0 i (z) x i G z 0 i (z)]. First, we note that G z i (z) ≤ ℑm(z) −1 . As ϕ = ϕ • E, ϕ[G z 0 i (z) x 2 i G z 0 i (z) * ] = ϕ[G z 0 i (z) E[x 2 i ] G z 0 i (z) * ] ≤ E[x 2 i ] · ϕ[G z 0 i (z) G z 0 i (z) * ] ≤ α 2 (x) G z 0 i (z) 2 L 2 (A,ϕ) . Again, by similar computations as in (3.4) and the positivity of E, we get ϕ[G z 0 i (z) * x i G z 0 i (z) * x 2 i G z 0 i (z) x i G z 0 i (z)] = ϕ[G z 0 i (z) * · E[x i z̄ −1 x 2 i z −1 x i ] · G z 0 i (z)] ≤ (1/|z| 2 ) E[x 4 i ] · ϕ[G z 0 i (z) * G z 0 i (z)] ≤ (1/ℑm(z) 2 ) α 4 (x) G z 0 i (z) 2 L 2 (A,ϕ) , where α 4 (x) := max 1≤i≤N E[x 4 i ] . Putting the above terms together, we get for any z ∈ C + , |ϕ[G z i (z) x i G z 0 i (z) x i G z 0 i (z) x i G z 0 i (z)]| ≤ (1/ℑm(z) 2 ) √(α 2 (x) α 4 (x)) G z 0 i (z) 2 L 2 (A,ϕ) . Similarly, we control the second term in ϕ[C i (z)] and get |ϕ[G z i−1 (z) y i G z 0 i (z) y i G z 0 i (z) y i G z 0 i (z)]| ≤ (1/ℑm(z) 2 ) √(α 2 (y) α 4 (y)) G z 0 i (z) 2 L 2 (A,ϕ) . As the second order moments match, α 2 (x) = α 2 (y), and hence |ϕ[G x N (z)] − ϕ[G y N (z)]| ≤ (1/ℑm(z) 2 ) √(α 2 (x)) (√(α 4 (x)) + √(α 4 (y))) Σ N i=1 G z 0 i (z) 2 L 2 (A,ϕ) . Now taking z = t + iε and using the fact that for each x = x * ∈ A, ∫ R G x (t + iε) 2 L 2 dt = π/ε, (3.6) we prove that ∫ R |ℑm ϕ[G x N (t + iε)] − ℑm ϕ[G y N (t + iε)]| dt ≤ (π/ε 3 ) √(α 2 (x)) (√(α 4 (x)) + √(α 4 (y))) N.

3.2. Monotone Case. We prove now Theorem 1.2, relative to monotone independence. The first lines of the proof are the same as in the Boolean case and rely on the operator-valued Lindeberg method in Proposition 3.2. However, as the order of the index set matters in the monotone case, we will need to further expand the terms z 0 i in the power series expansions of the relevant resolvents before factorizing with respect to monotone independence. For this aim, we need the following lemma:
Lemma 3.4.
Let x 1 , x 2 , y 1 , y 2 and w be self-adjoint elements in A that are such that {x 1 , x 2 } ≺ w ≺ {y 1 , y 2 } over B. Then, for any n, m ≥ 0, E (x 1 + y 1 ) n w(x 2 + y 2 ) m ) = n k=0 m ℓ=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k p 0 ,...,p ℓ ≥0 p 0 +···+p ℓ =m−ℓ E E[y q 0 1 ]x 1 E[y q 1 1 ] . . . E[y q k−1 1 ]x 1 E[y q k 1 ] · E[w] · E[y p 0 2 ]x 2 E[y p 1 2 ] . . . E[y p ℓ−1 2 ]x 2 E[y p ℓ 2 ] . Proof. For any x, y ∈ A and r ≥ 0, we have by the noncommutative binomial expansion (x + y) r = r k=0 q 0 ,...,q k ≥0 q 0 +···+q k =r−k y q 0 xy q 1 . . . y q k−1 xy q k . Hence, we write E (x 1 + y 1 ) n w(x 2 + y 2 ) m ) = n k=0 m ℓ=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k p 0 ,...,p ℓ ≥0 p 0 +···+p ℓ =m−ℓ E y q 0 1 x 1 y q 1 1 . . . y q k−1 1 x 1 y q k 1 · w · y p 0 2 x 2 y p 1 2 . . . y p ℓ−1 2 x 2 y p ℓ 2 . Now as {x 1 , x 2 } ≺ w ≺ {y 1 , y 2 } over B, from the definition of monotone independence with amalgamation over B, we get E y q 0 1 x 1 y q 1 1 . . . y q k−1 1 x 1 y q k 1 · w · y p 0 2 x 2 y p 1 2 . . . y p ℓ−1 2 x 2 y p ℓ 2 = E E[y q 0 1 ]x 1 E[y q 1 1 ] . . . E[y q k−1 1 ]x 1 E[y q k 1 ] · w · E[y p 0 2 ]x 2 E[y p 1 2 ] . . . E[y p ℓ−1 2 ]x 2 E[y p ℓ 2 ] = E E[y q 0 1 ]x 1 E[y q 1 1 ] . . . E[y q k−1 1 ]x 1 E[y q k 1 ] · E[w] · E[y p 0 2 ]x 2 E[y p 1 2 ] . . . E[y p ℓ−1 2 ]x 2 E[y p ℓ 2 ] , which ends the proof. Having the above factorization lemma in hand, we prove the following bounds that will be used at several occasions in the proof later on. Lemma 3.5. Let x, y and w be self-adjoint elements in A such that x ≺ w ≺ y over B, then for any b 1 , b 2 ∈ H ± (B), (i) E G x+y (b 1 ) w G x+y (b 2 ) = E G x E[G y (b 1 )] −1 · E[w] · G x E[G y (b 2 )] −1 . Moreover, if W ≥ 0, then for any b 1 , b 2 ∈ H ± (B), (ii) E[G x+y (b 1 ) w G x+y (b 2 )] ≤ E[w] · ℑm(E[G y (b 1 )] −1 ) −1 · ℑm(E[G y (b 2 )] −1 ) −1 , and for any z ∈ C + , (iii) ϕ[G x+y (z) w G x+y (z) * ] ≤ E[w] · G x E[G y (z)] −1 2 L 2 (A,ϕ) . Proof. 
(i) Writing the resolvent as a power series as in (3.2), we have for b 1 and b 2 that are such that max{ b −1 1 , b −1 2 } < 1/ x + y , E G x+y (b 1 ) w G x+y (b 2 ) = n,m≥0 b −1 1 E (xb −1 1 + yb −1 1 ) n wb −1 2 (xb −1 2 + yb −1 2 ) m . Now, we get by Lemma 3.4 after summing over n, m ≥ 0, E G x+y (b 1 ) w G x+y (b 2 ) = k,ℓ≥0 q 0 ,...,q k ≥0 p 0 ,...,p ℓ ≥0 E b −1 1 E (yb −1 1 ) q 0 xb −1 1 E (yb −1 1 ) q 1 · · · xb −1 1 E (yb −1 1 ) q k · E[w] · b −1 2 E (yb −1 2 ) p 0 xb −1 2 E (yb −1 2 ) p 1 · · · xb −1 2 E (yb −1 2 ) p ℓ = k,ℓ≥0 E E q 0 ≥0 b −1 1 (yb −1 1 ) q 0 · x · E q 1 ≥0 b −1 1 (yb −1 1 ) q 1 · · · x · E q k ≥0 b −1 1 (yb −1 1 ) q k · E[w] · E p 0 ≥0 b −1 2 (yb −1 2 ) p 0 · x · E p 1 ≥0 b −1 2 (yb −1 2 ) p 1 · · · x · E p ℓ ≥0 b −1 2 (yb −1 2 ) p ℓ = k,ℓ≥0 E E G y (b 1 ) x · E G y (b 1 ) k · E[w] · E G y (b 2 ) x · E G y (b 2 ) ℓ . For i ∈ {1, 2}, we set b i = E[G y (b i )] −1 and write E G x+y (b 1 ) w G x+y (b 2 ) = E k≥0 b −1 1 (x b −1 1 ) k · E[w] · ℓ≥0 b −1 2 (x b −1 2 ) ℓ = E G x E[G y (b 1 )] −1 · E[w] · G x E[G y (b 2 )] −1 , which proves the desired equality for b 1 , b 2 ∈ H ± (B) such that max( b −1 1 , b −1 2 ) < 1/||x + y||. The equality can be extended to H ± (B) since both of its sides are analytical. (ii) The second part of the lemma follows directly from the first: E[G x+y (b) w G x+y (b) * ] = E[G x+y (b) w G x+y (b * )] = E G x E[G y (b)] −1 · E[w] · G x E[G y (b * )] −1 ≤ G x E[G y (b)] −1 · G x E[G y (b * )] −1 · E[w] ≤ ℑm E[G y (b)] −1 −1 · ℑm E[G y (b * )] −1 −1 · E[w] ≤ ℑm(b) −1 2 · E[w] , where the last inequality follows from the fact that ℑm E[G y (b)] −1 ≥ ℑm(b) for any b ∈ H + (B), see Remark 2.5 in [7]. (iii) The last estimate also follows from (i): for any z ∈ C + , ϕ[G x+y (z) w G x+y (z) * ] = ϕ G x E[G y (z)] −1 · E[w] · G x E[G y (z)] −1 * ≤ E[w] · ϕ G x E[G y (z)] −1 G x E[G y (z)] −1 * = E[w] · G x E[G y (z)] −1 2 L 2 (A,ϕ) , where the inequality follows from the positivity of ϕ. 
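The power series expansion (3.2) of the resolvent, on which the proof above rests, is easy to sanity-check in a matrix model; a minimal sketch (the matrix size, the random entries, and the choice b = 10i are arbitrary, picked so that ||b⁻¹|| < 1/||x||):

```python
import numpy as np

rng = np.random.default_rng(0)

# self-adjoint x and an invertible b with ||b^{-1}|| < 1/||x||, so that
# the Neumann series G_x(b) = sum_n b^{-1} (x b^{-1})^n converges
x = rng.standard_normal((4, 4))
x = (x + x.T) / 2
b = 10j * np.eye(4)                # ||b^{-1}|| = 0.1, well below 1/||x||

b_inv = np.linalg.inv(b)
partial = np.zeros((4, 4), dtype=complex)
term = b_inv.copy()                # term_n = b^{-1} (x b^{-1})^n
for _ in range(60):
    partial += term
    term = term @ x @ b_inv

resolvent = np.linalg.inv(b - x)   # G_x(b) = (b - x)^{-1}
assert np.allclose(partial, resolvent)
print(np.max(np.abs(partial - resolvent)))
```

As in the proof, the series converges only for ||b⁻¹|| small; the identities are then extended analytically to all of H±(B).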
Having the above lemmas in hand, we are now ready to prove Theorem 1.2 relative to the monotone case. Proof of Theorem 1.2. Without loss of generality, we assume that A x 1 ,y 1 ≺ A x 2 ,y 2 ≺ · · · ≺ A x N ,y N over B. Indeed, one can simply take copies of the families {x 1 , . . . x N } and {y 1 , . . . y N } where this holds, for instance, we could choose A x 1 ≺ A x 2 ≺ · · · ≺ A x N ≺ A y 1 ≺ A y 2 · · · ≺ A y N . The starting point is the operator-valued Lindeberg method in Proposition 3.2 where we note that x N = z N and y N = z 0 . Proof of (1.3). Again, we will prove that E A i (b) = 0, E B i (b) = 0 and that E[C i (b)] ≤ ℑm(b) −1 4 α 2 (x) α 4 (x) + α 4 (y) . As above, all the identities here will be proven for b ∈ H + (B) such that b −1 is small enough and then get extended analytically to H + (B). Considering the telescoping sum in (3.1), we fix i ∈ {1, . . . , N} and start by controlling the first order term E A i (b) = E G z 0 i (b) x i G z 0 i (b) − E G z 0 i (b) y i G z 0 i (b) . We set u i = i−1 j=1 x j and v i = N j=i+1 y j , and note that u i ≺ x i ≺ v i over B. Then taking into account that E[x i ] = 0, we get by Lemma 3.5, E G z 0 i (b) x i G z 0 i (b) = E G u i +v i (b) x i G u i +v i (b) = E G u i E[G v i (b)] −1 · E[x i ] · G u i E[G v i (b)] −1 = 0. Similarly, we prove that E G z 0 i (b) y i G z 0 i (b) = 0 and infer that E[A i (b)] = 0. For the second order term in (3.1): E B i (b) = E G z 0 i (b) x i G z 0 i (b) 2 − E G z 0 i (b) y i G z 0 i (b) 2 , we develop the first term using (3.2): E G z 0 i (b) x i G z 0 i (b) 2 = E G u i +v i (b)x i G u i +v i (b)x i G u i +v i (b) = n,m,r≥0 E b −1 (u i + v i )b −1 n x i b −1 (u i + v i )b −1 m x i b −1 (u i + v i )b −1 r = n,m,r≥0 b −1 E u i + v i n x i u i + v i m x i u i + v i r , where u i = u i b −1 , v i = v i b −1 and x i = x i b −1 . 
Now using the noncommutative binomial expansion, we write E u i + v i n x i u i + v i m x i u i + v i r = n k=0 m ℓ=0 r s=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k p 0 ,...,p ℓ ≥0 p 0 +···+p ℓ =m−ℓ t 0 ,...,ts≥0 t 0 +···+ts=r−s E v q 0 i u i v q 1 i . . . u i v q k i · x i · v p 0 i u i v p 1 i . . . u i v p ℓ i · x i · v t 0 i u i v t 1 i . . . u i v ts i . Noting that u i ≺ x i ≺ v i over B and taking into account that E[ x i ] = E[x i ]b −1 = 0, we get whenever m = 0 E v q 0 i u i v q 1 i . . . u i v q k i · x i · v p 0 i u i v p 1 i . . . u i v p ℓ i · x i · v t 0 i u i v t 1 i . . . u i v ts i = E E[ v q 0 i ] u i E[ v q 1 i ] . . . u i E[ v q k i ] · E[ x i ] · E[ v p 0 i ] u i E[ v p 1 i ] . . . u i E[ v p ℓ i ] · E[ x i ] · E[ v t 0 i ] u i E[ v t 1 i ] . . . u i E[ v ts i ] = 0. (3.7) This yields that E G z 0 i (b)x i G z 0 i (b)x i G z 0 i (b) = n,r≥0 n k=0 r s=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k t 0 ,...,ts≥0 t 0 +···+ts=r−s b −1 E E[ v q 0 i ] u i E[ v q 1 i ] . . . u i E[ v q k i ] · E x i b −1 x i · b −1 E[ v t 0 i ] u i E[ v t 1 i ] . . . u i E[ v ts i ] . Similarly, we prove that E G z 0 i (b)y i G z 0 i (b)y i G z 0 i (b) = n,r≥0 n k=0 r s=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k t 0 ,...,ts≥0 t 0 +···+ts=r−s b −1 E E[ v q 0 i ] u i E[ v q 1 i ] . . . u i E[ v q k i ] · E y i b −1 y i · b −1 E[ v t 0 i ] u i E[ v t 1 i ] . . . u i E[ v ts i ] . Subtracting the two terms and recalling that E[x i bx i ] = E[y i by i ] for any b ∈ B, we infer that E[B i (b)] = 0. It remains to control the third order term E[C i (b)] in (3.1); namely, E G x N (b) − E G y N (b) = N i=1 E G z i (b) x i G z 0 i (b) 3 − E G z i−1 (b) y i G z 0 i (b) 3 . (3.8) Following the lines in (3.5), we get by (2.2) and (2.3) E G z i (b)x i G z 0 i (b)x i G z 0 i (b)x i G z 0 i (b) 2 ≤ G z i (b) 2 · E G z 0 i (b)x 2 i G z 0 i (b) * · E G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b)x i G z 0 i (b) . First, we note that G z i (b) ≤ ℑm(b) −1 .
Then applying Lemma 3.5 (ii) with x = u i , y = v i and W = x 2 i , and for b 1 = b * 2 = b, we get E G z 0 i (b)x 2 i G z 0 i (b) * = E G u i +v i (b)x 2 i G u i +v i (b * ) ≤ E[x 2 i ] · ℑm(b) −1 2 ≤ ℑm(b) −1 2 α 2 (x), where α 2 (x) = max 1≤i≤N E[x 2 i ] . As for the last inequality, we expand it using (3.2) and write E G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b)x i G z 0 i (b) = E G u i +v i (b * )x i G u i +v i (b * )x 2 i G u i +v i (b)x i G u i +v i (b) = n,m≥0 r,u≥0 E (b * ) −1 (u i + v i )(b * ) −1 n x i (b * ) −1 (u i + v i )(b * ) −1 m x 2 i b −1 (u i + v i )b −1 r x i b −1 (u i + v i )b −1 u = n,m≥0 r,u≥0 (b * ) −1 E u i + v i n x i u i + v i m x 2 i b −1 u i + v i r x i u i + v i u , where we have adopted the same notation as above in addition to u i = u i (b * ) −1 , v i = v i (b * ) −1 and x i = x i (b * ) −1 for i = 1, 2. Similarly, we develop using the noncommutative binomial expansion and write E u i + v i n x i u i + v i m x 2 i b −1 u i + v i r x i u i + v i u = n k=0 m ℓ=0 r s=0 u v=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k p 0 ,...,p ℓ ≥0 p 0 +···+p ℓ =m−ℓ t 0 ,...,ts≥0 t 0 +···+ts=r−s h 0 ,...,hv ≥0 h 0 +···+hv=u−v E v q 0 i u i v q 1 i . . . u i v q k i · x i · v p 0 i u i v p 1 i . . . u i v p ℓ i · x 2 i b −1 · v t 0 i u i v t 1 i . . . u i v ts i · x i · v h 0 i u i v h 1 i . . . u i v hv i . Noting that u i ≺ x i ≺ v i over B and taking into account that E[ x i ] = E[ x i ] = 0, then factorizing as in (3.7), we prove that the above term is zero whenever m = 0 or r = 0. Hence, we get E G z 0 i (b) * x i G z 0 i (b) * x 2 i G z 0 i (b)x i G z 0 i (b) = n≥0 u≥0 n k=0 u v=0 q 0 ,...,q k ≥0 q 0 +···+q k =n−k h 0 ,...,hv≥0 h 0 +···+hv=u−v (b * ) −1 E E[ v q 0 i ] u i E[ v q 1 i ] . . . u i E[ v q k i ] · E[x i (b * ) −1 x 2 i b −1 x i ] · b −1 E[ v h 0 i ] u i E[ v h 1 i ] . . . u i E[ v hv i ] = k≥0 v≥0 q 0 ,...,q k ≥0 h 0 ,...,hv≥0 (b * ) −1 E v q 0 i u i v q 1 i . . . u i v q k i · x i (b * ) −1 x 2 i b −1 x i · b −1 v h 0 i u i v h 1 i . . 
. u i v hv i = E G u i +v i (b * ) · x i (b * ) −1 x 2 i b −1 x i · G u i +v i (b) . (3.9) Finally, applying Lemma 3.5 with x = u i , y = v i and W = x i (b * ) −1 x 2 i b −1 x i for b 1 = b * = b 2 , yields E G z 0 i (b * )x i G z 0 i (b * )x 2 i G z 0 i (b)x i G z 0 i (b) = E G u i +v i (b * ) · x i (b * ) −1 x 2 i b −1 x i · G u i +v i (b) ≤ ℑm(b) −1 2 · E[x i (b * ) −1 x 2 i b −1 x i ] ≤ b −1 2 ℑm(b) −1 2 α 4 (x), where α 4 (x) = max 1≤i≤N sup E[x i bx 2 i bx i ] where the supremum is taken over b ∈ B such that b = 1. Putting the above bounds together, we get E G z i (b)x i G z 0 i (b)x i G z 0 i (b)x i G z 0 i (b) ≤ b −1 ℑm(b) −1 3 α 2 (x) α 4 (x) . Similarly, we prove that E G z i−1 (b)y i G z 0 i (b)y i G z 0 i (b)y i G z 0 i (b) ≤ b −1 ℑm(b) −1 3 α 2 (y) α 4 (y) . As the second moments match, we have α 2 (x) = α 2 (y), and hence we infer that E[C i (b)] ≤ b −1 ℑm(b) −1 3 α 2 (x) α 4 (x) + α 4 (y) ≤ ℑm(b) −1 4 α 2 (x) α 4 (x) + α 4 (y) . Finally, summing over i = 1, . . . , N we get the bound in (1.3). Proof of (1.4). Starting from (3.8), we have for any z ∈ C + , ϕ G x N (z) − ϕ G y N (z) = N i=1 ϕ G z i (z) x i G z 0 i (z) 3 − ϕ G z i−1 (z) y i G z 0 i (z) 3 . As before, noting that G z i (z)x i G z 0 i (z) = G z 0 i (z)x i G z i (z), then (2.2) and (2.3) and the positivity of ϕ yield ϕ G z i (z)x i G z 0 i (z)x i G z 0 i (z)x i G z 0 i (z) 2 ≤ ϕ G z 0 i (z)x i G z i (z)G z i (z) * x i G z 0 i (z) * · ϕ G z 0 i (z) * x i G z 0 i (z) * x 2 i G z 0 i (z)x i G z 0 i (z) ≤ G z i (z) 2 · ϕ G z 0 i (z)x 2 i G z 0 i (z) * · ϕ G z 0 i (z) * x i G z 0 i (z) * x 2 i G z 0 i (z)x i G z 0 i (z) . Applying Lemma 3.5 (iii) with x = u i , y = v i and W = x 2 i , ϕ G z 0 i (z)x 2 i G z 0 i (z) * ≤ G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) · ϕ[x 2 i ] . 
By the same arguments as in (3.9), we get ϕ G z 0 i (z) * x i G z 0 i (z) * x 2 i G z 0 i (z)x i G z 0 i (z) = 1 |z| 2 ϕ G u i +v i (z) * · ϕ[x 4 i ] · G u i +v i (z) ≤ 1 |z| 2 G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) · ϕ[x 4 i ] , where the last inequality follows again by Lemma 3.5 (iii) with x = u i , y = v i and W = x 4 i . Putting the above bounds together, we get ϕ G z i (z)x i G z 0 i (z)x i G z 0 i (z)x i G z 0 i (z) ≤ G z i (z) |z| G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) |ϕ[x 4 i ]||ϕ[x 2 i ]| ≤ 1 ℑm(z) 2 G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) α 4 (x)α 2 (x), where α 4 (x) := max 1≤i≤N sup |ϕ[x 4 i ]| and α 2 (x) := max 1≤i≤N sup |ϕ[x 2 i ]|. Similarly, we prove that ϕ G z i−1 (z)y i G z 0 i (z)y i G z 0 i (z)y i G z 0 i (z) ≤ 1 ℑm(z) 2 G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) α 4 (y)α 2 (y), As the second moments match, we have α 2 (x) = α 2 (y), and hence we infer that ϕ G x N (z) − ϕ G y N (z) ≤ 1 ℑm(z) 2 N i=1 G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) α 2 (x) α 4 (x) + α 4 (y) . Note that u i ≺ v i implies that F u i (F v i (z)) = F u i +v i (z) for all z ∈ C + where F x is the reciprocal of the Cauchy transform G x for any x = x * ∈ A. Thus, we have G u i (F v i (z)) = G u i +v i (z). Therefore, for all z ∈ C + , G u i ϕ[G v i (z)] −1 2 L 2 (A,ϕ) = − ℑmG u i (F v i (z)) ℑmF v i (z) = − ℑmG u i +v i (z) ℑmF v i (z) (3.10) ≤ − ℑmG u i +v i (z) ℑm(z) = G u i +v i (z) 2 L 2 (A,ϕ) . Hence by (3.6) and the above inequality, we infer that R ℑm ϕ[G x N (t + iε)] − ℑm ϕ[G y N (t + iε)] dt ≤ π ε 3 α 2 (x) α 4 (x) + α 4 (y) N. Infinitesimal case. The main idea of the proof is to bring the problem from the infinitesimal setting to the operator-valued framework where we can use already existing results. More precisely, Proposition 2.12 allows us to obtain the desired estimates in the infinitesimal free, Boolean, and monotone settings by passing to the operator-valued setting and applying respectively the results in [6][Theorem 3.1], Section 3.1, and Section 3.2. 
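The identity (3.6) invoked above reduces, in the simplest scalar case where x = a ∈ R is a point mass, to the classical Lorentzian integral ∫ dt/((t−a)² + ε²) = π/ε; a quick numerical check (the values a = 1.3 and ε = 0.25 are arbitrary choices):

```python
import math
from scipy.integrate import quad

a, eps = 1.3, 0.25

# For the Dirac mass at a, the Cauchy transform is G_a(z) = 1/(z - a),
# so ||G_a(t + i*eps)||^2_{L2} integrated over t is the Lorentzian integral
val, _ = quad(lambda t: 1.0 / ((t - a) ** 2 + eps ** 2),
              -math.inf, math.inf)
assert abs(val - math.pi / eps) < 1e-6
print(val, math.pi / eps)
```

The general case of (3.6) follows from the spectral theorem by integrating this scalar identity against the distribution of x.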
Let (A, E, E ′ , B) be an OV C * -infinitesimal probability space and recall the notation of Section 2.2. The following proposition demonstrates precisely how we can, with the help of Proposition 2.12, pass to the operator-valued setting and still use the Lindeberg method to control the operator-valued infinitesimal Cauchy transform. Proposition 3.6. Let x = {x 1 , . . . , x N } and y = {y 1 , . . . , y N } be two infinitesimally independent families of self-adjoint infinitesimally freely/Boolean/monotone independent elements satisfying Assumption 2. Then for any given b ∈ H + (B), we have E ′ G x N (b) − E ′ G y N (b) = N i=1 E ′ G z i (b) x i G z 0 i (b) 3 − E ′ G z i−1 (b) y i G z 0 i (b) 3 . (3.11) Here x and y are infinitesimally monotone independent in the sense that A x 1 ≺≺ ≺ A x 2 ≺≺ ≺ · · · ≺≺ ≺ A x N ≺≺ ≺ A y 1 ≺≺ ≺ A y 2 ≺≺ ≺ · · · ≺≺ ≺ A y N . Proof. Consider x and y to be two self-adjoint families in A. Also note that if x and y are infinitesimally free, then by Proposition 2.12, it follows that A x 1 , A x 2 , . . . , A x N , A y 1 , A y 2 , . . . , A y N are free over B. In addition, we observe that under Assumption 2, for each j = 1, . . . , N, E[ x j ] = E x j 0 0 x j = E[x j ] E ′ [x j ] 0 E[x j ] = 0 0 0 0 . Moreover, for any B = b b ′ 0 b ∈ B and for each j = 1, . . . , N, we have E[ x j B x j ] = E[x j bx j ] E ′ [x j bx j ] + E[x j b ′ x j ] 0 E[x j bx j ] = E[y j by j ] E ′ [y j by j ] + E[y j b ′ y j ] 0 E[y j by j ] = E[ y j B y j ]. Therefore, we conclude that x and y satisfy the assumption in [6, Theorem 3.1], which implies that for all B = b b ′ 0 b ∈ B with b ∈ H + (B) and b ′ ∈ B, E G x N (B) − E G y N (B) = N i=1 E G z i (B) x i G z 0 i (B) 3 − E G z i−1 (B) y i G z 0 i (B) 3 . (3.12) Here we note that all the resolvents above are well-defined; indeed, G x N (B) = (b − x N ) −1 −(b − x N ) −1 b ′ (b − x N ) −1 0 (b − x N ) −1 .
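The 2×2 upper-triangular resolvent formula displayed above is a purely algebraic block-matrix identity, and it can be verified numerically with matrices standing in for b, b′ and x_N (the sizes and random entries below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

xN = rng.standard_normal((n, n))
xN = (xN + xN.T) / 2                       # self-adjoint x_N
R = rng.standard_normal((n, n))
b = (R + R.T) / 2 + 2j * np.eye(n)         # b with Im(b) = 2*I > 0
bp = rng.standard_normal((n, n))           # b'

# B = [[b, b'], [0, b]] and tilde{x}_N = diag(x_N, x_N)
Z = np.zeros((n, n))
B = np.block([[b, bp], [Z, b]])
xt = np.block([[xN, Z], [Z, xN]])

G = np.linalg.inv(B - xt)

r = np.linalg.inv(b - xN)                  # (b - x_N)^{-1}
expected = np.block([[r, -r @ bp @ r], [Z, r]])
assert np.allclose(G, expected)
```

In particular, the (1,2)-entry −(b − x_N)⁻¹ b′ (b − x_N)⁻¹ is exactly the term whose conditional expectation carries the infinitesimal information.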
In particular, for a given b ∈ H + (B), if we let B = b 0 0 b , then the left hand side of (3.12) is E[G x N (b)] − E[G y N (b)] E ′ [G x N (b)] − E ′ [G y N (b)] 0 E[G x N (b)] − E[G y N (b)] . On the other hand, we observe that E G z i (B)( x i G z 0 i (B)) 3 = E G z i (b) x i G z 0 i (b) 3 E ′ G z i (b) x i G z 0 i (b) 3 0 E G z i (b) x i G z 0 i (b) 3 , and E G z i−1 (B) y i G z 0 i (B) 3 = E G z i−1 (b) y i G z 0 i (b) 3 E ′ G z i−1 (b) y i G z 0 i (b) 3 0 E G z i−1 (b) y i G z 0 i (b) 3 for each i = 1, . . . , N. Therefore, the (1, 2)-entry of the right hand side of (3.12) is nothing but N i=1 E ′ G z i (b) x i G z 0 i (b) 3 − E ′ G z i−1 (b) y i G z 0 i (b) 3 . By comparing both sides of (3.12), we conclude that (3.11) holds. Similarly, if x and y are infinitesimally Boolean (respectively monotone) independent that satisfy Assumption 2, then x and y are Boolean (respectively monotone) independent that satisfy Assumption 1 with respect to E. Therefore, combining the estimates in Section 3.1 and Section 3.2, (3.11) also holds whenever x and y are either infinitesimally Boolean or monotone independent. Remark 3.7. Note that the fact that E ′ is completely bounded and self-adjoint implies that E ′ = E 1 − E 2 for some completely positive maps E 1 and E 2 . Therefore, for all a ∈ A, we have E ′ [a] = E 1 [a] − E 2 [a] ≤ E 1 [a] + E 2 [a] ≤ 2 a . Suppose ( A, E, B) is an upper triangular probability space that is induced by (A, E, E ′ , B), then for a given A = a a ′ 0 a ∈ A, E [A] = E[a] + E ′ [a] + E[a ′ ] ≤ 3( a + a ′ ) = 3 A . Proof of (1.5) Following Proposition 3.6 and Remark 3.7, E ′ G x N (b) − E ′ G y N (b) ≤ N i=1 E ′ G z i (b) x i G z 0 i (b) 3 + E ′ G z i−1 (b) y i G z 0 i (b) 3 ≤ 2 N i=1 G z i (b) x i G z 0 i (b) 3 + G z i−1 (b) y i G z 0 i (b) 3 . Note that for each 1 ≤ i ≤ N, G z i (b) x i G z 0 i (b) 3 ≤ G z i (b) · x i 3 · G z 0 i (b) 3 ≤ ℑm(b) −1 4 x i 3 . Similarly, we prove that G z i−1 (b) y i G z 0 i (b) 3 ≤ ℑm(b) −1 4 y i 3 . 
Thus, we obtain the desired result. 4. OPERATOR-VALUED CENTRAL LIMIT THEOREMS The aim of this section is to provide an application of our main results in Theorems 1.1 and 1.2 to the operator-valued central limit theorems for Boolean and monotone independence respectively. We will show how, with our quantitative bounds in terms of the fourth and second operator-valued moments, we can furthermore obtain results on the fourth moment theorem for infinitely divisible measures in Section 4.3. Finally, we provide in Section 4.4 the first quantitative estimates in the infinitesimal setting. Consider an operator-valued C * -probability space (A, E, B). All along this section, we let x := {x 1 , . . . , x n } be a family of self-adjoint elements in A that are centered with respect to E and set X n := 1 √ n n j=1 x j . 4.1. Boolean CLT. The Boolean Central Limit Theorem was first proved in the scalar-valued setting in the original paper by Speicher and Woroudi [35]. Quantitative versions were provided in [3] and [32] in terms of the Lévy distance. In the operator-valued setting, the Boolean CLT was then proved in [8], where its quantitative extension may be found in [18] or in the notes [17]. We improve on the above-mentioned quantitative results by providing quantitative estimates in terms of the moments instead of the operator norm that can be pushed to obtaining quantitative bounds on the Lévy distance. We start by letting B n be a centered B-valued Bernoulli element whose variance is given by the completely positive map η n : B → B, b → η n (b) = 1 n n j=1 E[x j bx j ]. We provide in the following theorem quantitative results on the level of the operator-valued Cauchy transforms as well as the Lévy distance. Theorem 4.1. For any b ∈ H + (B), E[G Xn (b)] − E[G Bn (b)] ≤ 1 √ n ℑm(b) −1 4 α 2 (x) α 4 (x) + α 2 (x) 2 . Furthermore, there is a universal positive constant c such that L(µ Xn , µ Bn ) ≤ c α 2 (x)( α 4 (x) + α 2 (x) 2 ) 1/14 n −1/14 . Proof.
The proof follows by a direct application of Theorem 1.1, where we choose y = {y 1 , . . . , y n } to be a Boolean independent family of B-valued Bernoulli elements that are such that E[y j ] = 0 and E[y j by j ] = E[x j bx j ] for any j ∈ [n] and b ∈ B. Since the y j 's are B-valued Bernoulli elements, we have E[y j b * y 2 j by j ] = E[y j b * y j ]E[y j by j ] = E[x j b * x j ]E[x j bx j ], and hence we get that α 4 (y) = α 2 (x) 2 . To end the proof of the first part of the theorem, it remains to notice that B n = 1 √ n n j=1 y j is a centered B-valued Bernoulli element whose variance is given by η n . To prove the bound on the Lévy distance, we apply similarly (1.2) which yields for any ε > 0 R ℑm ϕ[G Xn (t + iε)] − ℑm ϕ[G Bn (t + iε)] dt ≤ 1 √ n π ε 3 α 2 (x) α 4 (x) + α 2 (x) 2 . Then using the bound in (2.4), we get for any ε > 0 L(µ Xn , µ Bn ) ≤ 2 ε π + 1 √ n 1 ε 3 α 2 (x) α 4 (x) + α 2 (x) 2 . Finally, optimizing over ε > 0, we get the desired bounds on the Lévy distance. Theorem 4.2. Let B 0 and B 1 be two centered B-valued Bernoulli elements whose variances are given by the completely positive maps η 0 and η 1 respectively. Then, for any k ∈ N and b ∈ H + (M k (B)), G M k (B) 1 k ⊗B 1 (b) − G M k (B) 1 k ⊗B 0 (b) ≤ k ℑm(b) −1 3 η 1 − η 0 . (4.1) Moreover, if (A, ϕ) is a W * -probability space with ϕ = ϕ • E, then the scalar-valued Cauchy transforms of B 1 and B 0 satisfy 1 π R |G B 1 (t + iε) − G B 0 (t + iε)| dt ≤ 1 ε 2 η 1 − η 0 (4.2) for each ε > 0 and, with the universal positive constant c = 5( 1 4π ) 2/5 < 1.817, we have that L(µ B 1 , µ B 0 ) ≤ c η 1 − η 0 1/5 . (4.3) While the proof is similar to that of the free case in [6, Theorem 3.5], we write it again for the convenience of the reader as its arguments will be used repeatedly in the coming sections. Proof. Fix m ∈ N and let x = {x 1 , . . . , x m } and y = {y 1 , . . . , y m } be two Boolean independent families consisting of Boolean independent copies of the given Bernoulli elements 1 √ m B 0 and 1 √ m B 1 respectively. Adopting the notation in Proposition 3.2, we note that z m and z 0 have the same distributions as B 0 and B 1 respectively. Hence following (3.1), we obtain E[G B 0 (b)] − E[G B 1 (b)] = 1 m m i=1 E G z 0 i (b) x i G z 0 i (b) 2 − E G z 0 i (b) y i G z 0 i (b) 2 + 1 m √ m m i=1 E G z i (b) x i G z 0 i (b) 3 − E G z i−1 (b) y i G z 0 i (b) 3 . Then for each i ∈ [m], we follow the steps of (3.4) to obtain E G z 0 i (b)x i G z 0 i (b)x i G z 0 i (b) = E[G z 0 i (b)]E[x i b −1 x i ]E[G z 0 i (b)]. Hence, E G z 0 i (b) x i G z 0 i (b) 2 − E G z 0 i (b) y i G z 0 i (b) 2 ≤ E[G z 0 i (b)] (η 0 − η 1 )(b −1 ) E[G z 0 i (b)] ≤ b −1 ℑm(b) −1 2 η 0 − η 1 .
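In the scalar case B = C, the mechanism behind these Boolean limit theorems is that Boolean convolution linearizes the self-energy K_µ(z) = z − F_µ(z), where F_µ = 1/G_µ is the reciprocal Cauchy transform. As an illustrative numerical sketch (not part of the proof; the semicircular summands and the test point z = 2i are arbitrary choices), one can watch the n-fold Boolean convolution of rescaled semicircular laws approach the symmetric Bernoulli law, whose reciprocal Cauchy transform is z − 1/z:

```python
import cmath

def F_semicircle(z):
    # reciprocal Cauchy transform F = 1/G of the standard semicircle law:
    # F(z) = (z + sqrt(z^2 - 4)) / 2; the principal square root is the
    # correct branch on the upper imaginary axis, where we evaluate below
    return (z + cmath.sqrt(z * z - 4)) / 2

def K_semicircle(z):
    # self-energy K(z) = z - F(z), which Boolean convolution adds up
    return z - F_semicircle(z)

def F_boolean_sum(z, n):
    # F-transform of (s_1 + ... + s_n)/sqrt(n) for Boolean independent
    # standard semicirculars: F_n(z) = z - sqrt(n) * K(sqrt(n) * z)
    return z - cmath.sqrt(n) * K_semicircle(cmath.sqrt(n) * z)

z = 2j
target = z - 1 / z          # F-transform of the symmetric Bernoulli law
approx = F_boolean_sum(z, 10000)
assert abs(approx - target) < 1e-3
print(approx, target)
```

The observed error decays like 1/n at a fixed point of the upper half-plane, consistent with the n^{-1/2} Cauchy-transform bounds above after the ε-optimization.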
As for the third order term, we get for each i ∈ [m], E G z i (b) x i G z 0 i (b) 3 + E G z i−1 (b) y i G z 0 i (b) 3 ≤ 2 ℑm(b) −1 4 x i 3 + y i 3 . Therefore, we obtain E[G B 0 (b)] − E[G B 1 (b)] ≤ b −1 ℑm(b) −1 2 η 0 − η 1 + 2 √ m ℑm(b) −1 4 ( max 1≤i≤m x i 3 + max 1≤i≤m y i 3 ). (4.4) Finally, noting that (4.4) holds for any m, we let m → ∞ to obtain the desired bound for k = 1. The assertion for general k ∈ N can be easily proved by noting that Boolean independence is preserved under matrix amplification and then applying (4.4) to 1 k ⊗B 1 and 1 k ⊗B 0 in the operator-valued C * -probability space (M k (A), id k ⊗ E, M k (B)). Hence, we directly obtain for all b ∈ H + (M k (B)), G M k (B) 1 k ⊗B 0 (b) − G M k (B) 1 k ⊗B 1 (b) ≤ ℑm(b) −1 3 id k ⊗η 0 − id k ⊗η 1 . Using the fact that id k ⊗η 0 − id k ⊗η 1 ≤ k η 0 − η 1 , which follows from [26, Exercise 3.10], we arrive at the desired bound for general k. Finally, to prove (4.2) and (4.3), we note that for each i ∈ [m] and z ∈ C + , ϕ G z 0 i (z) (η 0 − η 1 )(z −1 )ϕ G z 0 i (z) ≤ 1 |z| η 0 − η 1 ϕ G z 0 i (z) 2 ≤ η 0 − η 1 ℑm(z) G z 0 i (z) 2 L 2 . Then, the remainder of the proof follows the analogous argument in [6, Theorem 3.5]. 4.2. Monotone CLT. This section is devoted to studying the operator-valued monotone CLT. The scalar-valued case was first proved by [23] before getting extended to the operator-valued setting, see [31] and [16]. Moreover, quantitative versions can also be found in [4] in the scalar case and in [18] and [17] in the operator-valued case. As in the Boolean case in the previous subsection, our main improvement is providing quantitative estimates in terms of moments instead of the operator norm, which allows us to obtain quantitative bounds on the Lévy distance. For any 1 ≤ j ≤ n, we denote by ν η j the B-valued arcsine distribution with variance given by the completely positive map η j : B → B, b → η j (b) = E[x j bx j ].
We denote by A n the B-valued generalized arcsine element of A whose distribution is given by dil n −1/2 (ν η 1 ⊲ · · · ⊲ ν ηn ). For more details on generalized arcsine distributions, see [17, Chapter 7]. Theorem 4.3. For any b ∈ H + (B), E[G Xn (b)] − E[G An (b)] ≤ 1 √ n ℑm(b) −1 4 α 2 (x) α 4 (x) + 3 2 α 2 (x) 2 . Furthermore, in the scalar case where B = C, there is a universal positive constant c such that L(µ Xn , µ An ) ≤ c α 2 (x) α 4 (x) + 3 2 α 2 (x) 2 1/14 n −1/14 . Proof. The proof follows by a direct application of Theorem 1.2 where we choose y = {y 1 , . . . , y n } to be a family of B-valued arcsine elements that are such that A x 1 ≺ A x 2 ≺ · · · ≺ A x N ≺ A y 1 ≺ A y 2 ≺ · · · ≺ A y N over B, E[y j ] = 0 and E[y j by j ] = E[x j bx j ] for any j ∈ [n] and b ∈ B. Now note that since y j is a B-valued arcsine element, then E[y j b * y 2 j by j ] = E[y j b * y j ]E[y j by j ]+ 1 2 E y j b * E[y 2 j ]by j = E[x j b * x j ]E[x j bx j ]+ 1 2 E x j b * E[x 2 j ]bx j , where we have used the fact that the moments of the second order of y j and x j match. Using the fact that E is positive and that sup E[x j bx j ] = E[x 2 j ] where the supremum is taken over b ∈ B with b = 1, we conclude that α 4 (y) ≤ 3 2 α 2 (x) 2 . Setting A n = 1 √ n n j=1 y j , it remains to notice that A n is a B-valued generalized arcsine element whose distribution is given by dil n −1/2 (ν η 1 ⊲ · · · ⊲ ν ηn ) where ν η j is the arcsine distribution of variance η j . To prove the bound on the Lévy distance, we apply similarly (1.4) which yields for any ε > 0 R ℑm ϕ[G Xn (t +iε)] −ℑm ϕ[G An (t +iε)] dt ≤ 1 √ n π ε 3 α 2 (x) α 4 (x) + 3 2 α 2 (x) 2 . Then using the bound in (2.4), we get for any ε > 0, L(µ Xn , µ An ) ≤ 2 ε π + 1 √ n 1 ε 3 α 2 (x) α 4 (x) + 3 2 α 2 (x) 2 . Finally, optimizing over ε > 0, we get the desired bounds on the Lévy distance. Theorem 4.4. Let A 0 and A 1 be two centered B-valued arcsine elements whose variances are given by the completely positive maps η 0 and η 1 respectively. Then, for any k ∈ N and b ∈ H + (M k (B)), G M k (B) 1 k ⊗A 1 (b) − G M k (B) 1 k ⊗A 0 (b) ≤ k ℑm(b) −1 3 η 1 − η 0 .
(4.5) Moreover, if (A, ϕ) is a W * -probability space, then the scalar-valued Cauchy transforms of A 1 and A 0 satisfy 1 π R |G A 1 (t + iε) − G A 0 (t + iε)| dt ≤ 1 ε 2 η 1 − η 0 (4.6) for each ε > 0 and, with the universal positive constant c = 5( 1 4π ) 2/5 < 1.817, we have that L(µ A 1 , µ A 0 ) ≤ c η 1 − η 0 1/5 . (4.7) Proof. Fix m ∈ N and let x = {x 1 , . . . , x m } and y = {y 1 , . . . , y m } be two families whose elements are arcsine elements with variances 1 m η 0 and 1 m η 1 respectively and that are such that A x 1 ≺ · · · ≺ A xm ≺ A y 1 ≺ · · · ≺ A ym . The estimate in (4.5) follows by similar arguments as for proving (4.1). We shall still illustrate in the following how to control the second-order term where the main difference lies. Following the lines of the proof of (1.3) and using (i) of Lemma 3.5, we have E G z 0 i (b) x i G z 0 i (b) 2 = E G u i +v i (b)x i b −1 x i G u i +v i (b) = E G u i (E[G v i (b)] −1 ) E[x i b −1 x i ]E G u i (E[G v i (b)] −1 ) where u i = i−1 j=1 x j and v i = m j=i+1 y j . Similarly, E G z 0 i (b) y i G z 0 i (b) 2 = E G u i (E[G v i (b)] −1 ) E[y i b −1 y i ]E G u i (E[G v i (b)] −1 ) , Hence, E G z 0 i (b) x i G z 0 i (b) 2 −E G z 0 i (b) y i G z 0 i (b) 2 = E G u i (E[G v i (b)] −1 ) (η 0 − η 1 )(b −1 )E[G u i (E G v i (b)] −1 ) ≤ ℑm(E[G v i (b)] −1 ) −1 2 · η 0 − η 1 b −1 ≤ b −1 ℑm(b) −1 2 η 0 − η 1 ,(4.8) where we have used again the fact that ℑm E[G y (b)] −1 ≥ ℑm(b) for any b ∈ H + (B). This proves the assertion in (4.5) for the case k = 1. Noting that, by Proposition 2.6, monotone independence is preserved under amplification with the identity matrix, we prove the assertion for general k ∈ N, by applying (4.8) for 1 k ⊗ A 1 and 1 k ⊗ A 0 in the framework of the operator-valued C * -probability space (M k (A), id k ⊗ E, M k (B)). Hence, we get for all b ∈ H + (M k (B)), G M k (B) 1 k ⊗A 0 (b) − G M k (B) 1 k ⊗A 1 (b) ≤ ℑm(b) −1 3 id k ⊗η 0 − id k ⊗η 1 . 
Using the fact that id k ⊗η 0 − id k ⊗η 1 ≤ k η 0 − η 1 , which follows from [26, Exercise 3.10], we arrive at the desired bound for general k. Now to prove (4.6) and (4.7), we observe that for z ∈ C + , ϕ G z 0 i (z) (η 0 − η 1 )(z −1 )ϕ G z 0 i (z) ≤ 1 ℑm(z) η 0 − η 1 ϕ G u i (ϕ[G v i (z)] −1 ) 2 ≤ 1 ℑm(z) η 0 − η 1 G z 0 i (z) 2 L 2 (A,ϕ) . Note that the last inequality holds due to the Cauchy–Schwarz inequality and to (3.10): G u i (ϕ[G v i (z)] −1 ) L 2 (A,ϕ) ≤ G u i +v i (z) L 2 (A,ϕ) . Then, the remaining argument follows the analogous proof in [6, Theorem 3.5]. 4.3. Fourth moment theorem for monotone infinitely divisible measures. Fourth moment theorems refer to a simplification of the moment method to prove the weak convergence of a sequence of random variables y n to a given random variable y. Such theorems state that if the fourth moment of y n , ϕ[y 4 n ], approaches the fourth moment of a random variable y, ϕ[y 4 ], then y n → y weakly. In certain cases, one can even quantify such convergence in terms of the difference |ϕ[y 4 n ]−ϕ[y 4 ]|. Here we are concerned with the class of infinitely divisible measures with respect to the monotone convolution, which we denote by ID(⊲). This completes the main theorems from [2] where the authors give a quantitative version of the results in [1], in the cases of free and tensor independence. A fourth moment theorem was given in [3] for the Boolean case, leaving the monotone case open. Our results complete the picture by including the monotone case and proving quantitative bounds for the fourth moment theorem in this setting. The main observation is that a Berry–Esseen estimate may be used to prove such a theorem. Indeed, using the fact that ϕ[x 4 ] ≥ ϕ[x 2 ] 2 , we deduce by Theorem 4.3 that if the x i 's are monotonically independent and identically distributed variables with mean 0, variance 1 and finite fourth moment ϕ[x 4 i ], then L(µ Xn , µ An ) ≤ K ϕ[x 4 i ] n 1/14 , (4.9) for some constant K > 0.
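The scalar monotone CLT behind the bound (4.9) admits a direct numerical illustration: monotone convolution composes reciprocal Cauchy transforms, F_{µ⊲ν} = F_µ ∘ F_ν, so the n-fold monotone convolution of symmetric Bernoulli laws at ±1/√n is computed by iterating F(z) = z − 1/(nz). As n grows this approaches F(z) = √(z²−2), the reciprocal Cauchy transform of the standard arcsine law on [−√2, √2]; a quick sketch (the test point z = 2i and the value of n are arbitrary choices):

```python
import cmath

def F_monotone_clt(z, n):
    # n-fold composition of the F-transform of the Bernoulli law at
    # +/- 1/sqrt(n): F_mu(z) = z - 1/(n z); composing F-transforms
    # realizes the monotone convolution mu |> ... |> mu (n times)
    for _ in range(n):
        z = z - 1 / (n * z)
    return z

z0 = 2j
approx = F_monotone_clt(z0, 100000)
# F-transform of the standard arcsine law: F(z) = sqrt(z^2 - 2),
# which equals i*sqrt(6) at z0 = 2i
target = cmath.sqrt(z0 * z0 - 2)
assert abs(approx - target) < 1e-3
print(approx, target)
```

Each iteration decreases z² by roughly 2/n, which is exactly the discrete version of the flow z² → z² − 2 defining the arcsine F-transform.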
In order to connect the above bound to a fourth moment theorem we will use the monotone cumulants. It was shown by Hasebe and Saigo [15] that the sequence of monotone cumulants {h n } n≥1 satisfies the property that for any m ∈ N: h n ( m i=1 x i , . . . , m i=1 x i ) = mh n (x 1 , . . . , x 1 ), whenever {x i } m i=1 is a collection of identically distributed monotone independent variables. The first four cumulants are written in terms of the first four moments as follows: h 1 (x) = ϕ[x], h 2 (x, x) = ϕ[x 2 ] − ϕ[x] 2 , h 3 (x, x, x) = ϕ[x 3 ] − 5 2 ϕ[x 2 ]ϕ[x] + 3 2 ϕ[x] 3 , h 4 (x, x, x, x) = ϕ[x 4 ] − 3 2 ϕ[x 2 ] 2 − 3ϕ[x 3 ]ϕ[x] + 37 6 ϕ[x 2 ]ϕ[x] 2 − 8 3 ϕ[x] 4 . (4.10) Note that if we normalize so that ϕ[x] = 0 and ϕ[x 2 ] = 1, then h 2 (x, x) = ϕ[x 2 ] = 1 and h 4 (x, x, x, x) = ϕ[x 4 ] − 3 2 ϕ[x 2 ] 2 = ϕ[x 4 ] − 1.5. Having this in hand, we now state our fourth moment theorem for the scalar-valued monotone case, which is completely new. The key point is that now we have a bound in terms of the second and fourth moment, which was not the case in previous works. Theorem 4.5. Let Y ∈ ID(⊲) be such that ϕ[Y] = 0 and ϕ[Y 2 ] = 1, and let A be a standard arcsine element. Then there is a universal positive constant K such that L(µ A , µ Y ) ≤ K(ϕ[Y 4 ] − 1.5) 1/14 . Proof. Since Y is infinitely divisible, for each n ∈ N we may write Y as a sum of n identically distributed monotone independent random variables Y = y 1 + · · · + y n = 1 √ n ( √ ny 1 + · · · + √ ny n ). Noting that ϕ[( √ ny i ) 2 ] = h 2 ( √ ny i , √ ny i ) = nh 2 (y i , y i ) = h 2 (Y, Y) = 1 and h 4 ( √ ny i ) = n 2 h 4 (y i , y i , y i , y i ) = nh 4 (Y, Y, Y, Y), then ϕ[( √ ny i ) 4 ] = h 4 ( √ ny i , √ ny i , √ ny i , √ ny i ) + 1.5 = nh 4 (Y, Y, Y, Y) + 1.5 . For a standard arcsine element A, we may assume, without loss of generality, that A = A n = 1 √ n (x 1 +· · ·+x n ) where the x i 's are also standard arcsine variables. With this observation, Theorem 4.3 holds and hence the bound (4.9) yields that L(µ Y , µ A ) = L(µ Y , µ An ) ≤ K ϕ[( √ ny i ) 4 ] n 1/14 = K h 4 ( √ ny i , √ ny i , √ ny i , √ ny i ) + 1.5 n 1/14 = K h 4 (Y, Y, Y, Y) + 1.5 n 1/14 .
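The moment-cumulant formulas (4.10), and the normalization h₄ = ϕ[x⁴] − 1.5 used above, can be spot-checked with exact rational arithmetic; the moment lists below (standardized arcsine, symmetric Bernoulli, and semicircle) are standard examples, not taken from the text:

```python
from fractions import Fraction as Fr

def monotone_cumulants(m1, m2, m3, m4):
    # the formulas (4.10) for the first four monotone cumulants
    h1 = m1
    h2 = m2 - m1 ** 2
    h3 = m3 - Fr(5, 2) * m2 * m1 + Fr(3, 2) * m1 ** 3
    h4 = (m4 - Fr(3, 2) * m2 ** 2 - 3 * m3 * m1
          + Fr(37, 6) * m2 * m1 ** 2 - Fr(8, 3) * m1 ** 4)
    return h1, h2, h3, h4

# standardized arcsine on [-sqrt(2), sqrt(2)]: moments 0, 1, 0, 3/2,
# so h4 vanishes (the arcsine law plays the role of the monotone Gaussian)
assert monotone_cumulants(Fr(0), Fr(1), Fr(0), Fr(3, 2)) == (0, 1, 0, 0)
# symmetric Bernoulli at +/-1: moments 0, 1, 0, 1 -> h4 = 1 - 3/2
assert monotone_cumulants(Fr(0), Fr(1), Fr(0), Fr(1)) == (0, 1, 0, Fr(-1, 2))
# standard semicircle: moments 0, 1, 0, 2 -> h4 = 2 - 3/2
assert monotone_cumulants(Fr(0), Fr(1), Fr(0), Fr(2)) == (0, 1, 0, Fr(1, 2))
```

In particular, the vanishing of h₄ exactly at the arcsine moments is the scalar fact that makes ϕ[Y⁴] − 1.5 the natural quantity for the fourth moment theorem.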
Since n is arbitrary, we let it go to infinity to obtain as desired L(µ A , µ Y ) ≤ K(h 4 (Y, Y, Y, Y)) 1/14 = K(ϕ[Y 4 ] − 1.5) 1/14 . In the case of operator-valued monotone independent variables, we can give a bound for the difference of the Cauchy transforms in terms of the fourth B-valued cumulants. For this aim, we need analogous operator-valued formulas to those in (4.10). As before, when we restrict to the centered case, i.e. E[x] = 0, the formulas simplify and we get that h B 1 (x) = E[x] = 0, h B 2 (xb, x) = E[xbx] = η(b), h B 3 (xb 1 , xb 2 , x) = E[xb 1 xb 2 x]. Theorem 4.6. Let Y be a monotone infinitely divisible element over B with E[Y] = 0 and variance map η, and let A be a centered B-valued arcsine element with the same variance. Then, for any k ∈ N and b ∈ H + (M k (B)), E[G M k (B) 1 k ⊗Y (b)] − E[G M k (B) 1 k ⊗A (b)] ≤ k ℑm(b) −1 4 η(1) sup h B 4 (Yb, Y, Yb ′ , Y) , where the supremum is taken over all b, b ′ ∈ B with b = b ′ = 1. As the above bound holds on the level of the fully matricial extensions of Cauchy transforms, it is sufficient to capture convergence in distribution over B which yields the following operator-valued monotone fourth moment theorem: Corollary 4.7. Let (A, E, B) be a C * -probability space and (Y N ) N∈N a family of monotone n-divisible elements over B such that E[Y N ] = 0, E[Y N bY N ] = η(b) and sup N∈N Y N < ∞. Provided that sup h B 4 (Y N b, Y N , Y N b ′ , Y N ) → 0 as N → ∞ where the supremum is taken over all b, b ′ ∈ B with b = b ′ = 1 , then Y N converges in distribution over B to a B-valued centered arcsine element A with variance η. If, in addition, (A, ϕ) is a C * -probability space and ϕ = ϕ • E, then the distribution of Y N with respect to ϕ converges to that of A. Proof of Theorem 4.6. Let Y be monotone n-divisible over B. We write Y as a sum of n identically distributed monotone independent random variables Y = y 1 + · · · + y n = 1 √ n ( √ ny 1 + · · · + √ ny n ). For a B-valued arcsine element A, we may assume that A = A n = 1 √ n (x 1 + · · · + x n ) where the x i 's are themselves B-valued arcsine variables with mean zero and variance map η.
With this observation, Theorem 4.3 holds and yields following bound: for any n ∈ N and any b ∈ H + (B), E[G Y (b)] − E[G A (b)] ≤ 1 √ n ℑm(b) −1 4 B n (y), where B n (y) = α 2 ( √ ny 1 ) α 4 ( √ ny 1 ) + 3 2 α 2 ( √ ny 1 ) 2 . To control the above estimate, we note that α 2 ( √ ny 1 ) = E[ √ ny 1 1 √ ny 1 ] = h B 2 ( √ ny 1 1, √ ny 1 ) = h B 2 (Y1, Y) = η(1) . Moreover, we observe that E[ √ ny 1 b * ( √ ny 1 ) 2 b √ ny 1 ] = h B 4 ( √ ny 1 b * , √ ny 1 , √ ny 1 b, √ ny 1 ) + η(b * )η(b) + 1 2 η(b * η(1)b) = nh B 4 (Yb * , Y, Yb, Y) + η(b * )η(b) + 1 2 η(b * η(1)b), which yields that α 4 ( √ ny 1 ) = sup b∈B, b =1 E[ √ ny 1 b * ( √ ny 1 ) 2 b √ ny 1 ] ≤ sup b∈B, b =1 n h B 4 (Yb * , Y, Yb, Y) + η(1) 2 + 1 2 η(1) . Hence, we for any n ∈ N 1 √ n B n (y) ≤ η(1) sup b∈B, b =1 h B 4 (Yb * , Y, Yb, Y) + η(1) 2 n + η(1) 2n + 3 η(1) 2 2n . Finally letting n → ∞ and putting the above bounds together, we conclude that E[G Y (b)] − E[G A (b)] ≤ ℑm(b) −1 4 η(1) sup b∈B, b =1 h B 4 (Yb * , Y, Yb, Y) . (4.11) This proves the assertion for k = 1. To prove it for general k ∈ N, we note first that, by Proposition 2.6, monotone independence is preserved under amplification with the identity matrix, we prove the assertion for general k ∈ N, by applying (4.11) for 1 k ⊗Y and 1 k ⊗A in the framework of the operator-valued C * -probability space (M k (A), id k ⊗ E, M k (B)). we first note that by [6,Lemma 5.4], we have that (id k ⊗ η)(1) ≤ η(1) . Moreover, for any B ∈ M k (B), one can easily check that h M k (B) 4 (1 k ⊗ Y)B * , (1 k ⊗ Y), (1 k ⊗ Y)B, (1 k ⊗ Y) = k i,j,ℓ=1 E ij ⊗ h B 4 (YB *(1 k ⊗ Y)B * , (1 k ⊗ Y), (1 k ⊗ Y)B, (1 k ⊗ Y) ≤ k i,j=1 k ℓ=1 h B 4 (YB * iℓ , Y, YB ℓj , Y) 2 1/2 ≤ k 2 B 2 sup b∈B, b =1 b ′ ∈B, b ′ =1 h B 4 (Yb, Y, Yb ′ , Y) . From this, we infer that sup B∈M k (B), B =1 h M k (B) 4 (1 k ⊗ Y)B * , (1 k ⊗ Y), (1 k ⊗ Y)B, (1 k ⊗ Y) ≤ k i,j=1 k ℓ=1 h B 4 (YB * iℓ , Y, YB ℓj , Y) 2 1/2 ≤ k 2 sup b∈B, b =1 b ′ ∈B, b ′ =1 h B 4 (Yb, Y, Yb ′ , Y) . 
Putting the above terms together, we prove the assertion for any k ∈ N. Given the setting we consider, we will denote by S n a B-valued infinitesimal semicircle, Bernoulli or generalized arcsine element (see [37,Section 3.3]) with the infinitesimal variance (η n , η ′ n ) given by η n (b) = 1 n n j=1 E[x j bx j ] and η ′ n (b) = 1 n n j=1 E ′ [x j bx j ]. We assume that our OVI probability space is rich enough to contain a family y := {y 1 , . . . , y n } of independent B-valued infinitesimal semicircular/Bernoulli/arcsine elements, that is itself independent of x and is such that E[y i by i ] = E[x i bx i ] and E ′ [y i by i ] = E ′ [x i bx i ] for any i = 1, . . . , n. In the monotone case, the independence of y from x is assumed to hold in the sense that A x 1 ≺≺ ≺ · · · ≺≺ ≺ A xn ≺≺ ≺ A y 1 ≺≺ ≺ · · · ≺≺ ≺ A yn over B. We state now the first quantitative estimates in the infinitesimal setting that follow directly from Theorem 1.3 and that provide analogue estimates to those in Section 4. E ′ G Xn (b) − E ′ G Sn (b) ≤ 2 √ n ℑm(b) −1 4 max 1≤i≤N x i 3 + 8 max 1≤i≤N E[x 2 i ] 3 2 , where S n is a B-valued infinitesimal semicircular/Bernoulli/generalized arcsine element with the infinitesimal variance (η n , η ′ n ). The bounds can be slightly improved in the Boolean case by removing the factor 8 on the right-hand side. Proof. To obtain the above estimates, we choose the family y = {y 1 , . . . , y n } as above and then apply Theorem 1.3. We give a complete proof for the free case whereas we only indicate what choices of the family y we do in the Boolean and monotone cases. For the free case, we choose y = {y 1 , . . . , y n } to be a family of B-valued infinitesimal semicircular elements that are infinitesimally free and that are such that E[y j ] = E ′ [y j ] = 0, E[x j bx j ] = E[y j by j ], and E ′ [x j bx j ] = E ′ [y j by j ] for any b ∈ B and j ∈ [n]. 
Finally, we set S n = 1 √ n n j=1 y j , and note that by Lemma 2.19, S n is a B-valued infinitesimal semicircular element with infinitesimal variance (η n , η ′ n ). Hence, Theorem 1.3 yields E ′ G Xn (b) − E ′ G Sn (b) ≤ 2 √ n ℑm(b) −1 4 max 1≤i≤N x i 3 + max 1≤i≤N y i 3 . Finally, we note that y i ≤ 2 η y i (1) 1/2 = 2 E[x 2 i ] 1/2 for each i = 1, . . . , n (see [17, Proposition 6.2.1]) which completes the proof of the free case. For the Boolean case, we choose y = {y 1 , . . . , y n } to be a family of B-valued infinitesimal Bernoulli elements that are infinitesimally Boolean independent over B with matching moments as described above. Finally, we note that for any j ∈ [n], y j = η y j (1) 1/2 which is the reason why the factor 8 can be improved to 1, see [17,Proposition 6.2.4]). Finally, for the monotone case, we choose y = {y 1 , . . . , y n } to be a family of B-valued infinitesimal arcsine elements that are infinitesimally monotone independent in the sense that A x 1 ≺≺ ≺ · · · ≺≺ ≺ A xn ≺≺ ≺ A y 1 ≺≺ ≺ · · · ≺≺ ≺ A yn over B with matching moments as described above. Finally, we note that for any j ∈ [n], y j ≤ 2 η y j (1) 1/2 , see [17,Proposition 6.2.6]. Conjecture: Let (A 1 , E 1 , E ′ 1 ), . . . , (A k , E k , E ′ k ) be OVI C * -probability spaces over B and let (A, E) be their OV C * -free/Boolean/monotone product. Then there exists a linear B-bimodule self-adjoint completely bounded map E ′ : A → B with E ′ (1) = 0 such that E ′ | A j = E ′ j for 1 ≤ j ≤ k and such that A 1 , . . . , A k are infinitesimally free/Boolean/monotone independent over B. Remark 4.9. We give an algebraic construction of the OVI free/Boolean/monotone products in Appendix A. The mapping E ′ that we construct, satisfies all the properties in the above conjecture except for complete boundedness which is still open. b ∈ H + (B), E ′ [G c 0 (b)] − E ′ [G c 1 (b)] ≤ 9 ℑm(b) −1 3 ( η 0 − η 1 + η ′ 0 − η ′ 1 ). Proof. 
We shall prove only the free case and comment at the end on the Boolean and monotone cases. Let s 0 and s 1 be infinitesimal semicircular elements with respective infinitesimal variances (η 0 , η ′ 0 ) and (η 1 , η ′ 1 ). Given m ∈ N, we let x = {x 1 , . . . , x m } and y = {y 1 , . . . , y m } be two infinitesimally free families whose elements are themselves infinitesimally free copies of 1 The proof relies as before on the operator-valued Lindeberg method with the only difference that it is now applied with respect to E instead of E. With this aim we set, for each i ∈ [m], x i = x i 0 0 x i and y i = y i 0 0 y i , and note that the x i 's and y i 's are centered with respect to E but do not have matching moments of second order. Thus, we have for any B = b b ′ 0 b ∈ B with b ∈ H + (B), E[G S 0 (B)] − E[G S 1 (B)] = 1 m m i=1 E[G z 0 i (B)( x i G z 0 i (B)) 2 ] − E[G z 0 i (B)( y i G z 0 i (B)) 2 ] + 1 m √ m m i=1 E[G z i (B)( x i G z 0 i (B)) 3 ] − E[G z i−1 (B)( y i G z 0 i (B)) 3 ] where z i = i j=1 x j + m j=i+1 y j and z 0 i = i−1 j=1 x j + m j=i+1 y j . Hence, E[G S 0 (B)] − E[G S 1 (B)] ≤ 1 m m i=1 E[G z 0 i (B)( x i G z 0 i (B)) 2 ] − E[G z 0 i (B)( y i G z 0 i (B)) 2 ] + 1 m √ m m i=1 E[G z i−1 (B)( x i G z 0 i (B)) 3 ] + E[G z i−1 (B)( y i G z 0 i (B)) 3 ] . To control the above terms, we observe first that G z 0 i (B) = (b − z 0 i ) −1 −(b − z 0 i ) −1 b ′ (b − z 0 i ) −1 0 (b − z 0 i ) −1 ≤ ℑm(b) −1 (1 + b ′ ℑm(b) −1 ), and that if we let η = η 0 − η 1 , η = η 0 − η 1 , and η ′ = η ′ 0 − η ′ 1 , then η ≤ η + η ′ . Indeed, we have η(B) ≤ η(b) + η(b ′ ) + η ′ (b) ≤ η b + η b ′ + η ′ b ≤ ( η + η ′ )( b + b ′ ) = ( η + η ′ ) B , By Remark 3.7, the third order terms are controlled as follows: for each i ∈ [m], E G z i (B)( x i G z 0 i (B)) 3 ≤ 3 G z 0 i (B) 4 x i 3 ≤ 3 ℑm(b) −1 4 (1 + b ′ ℑm(b) −1 ) 4 x i 3 , and E G z i−1 (B)( y i G z 0 i (B)) 3 ≤ 3 ℑm(b) −1 4 (1 + b ′ ℑm(b) −1 ) 4 y i 3 . 
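The estimates above repeatedly use the explicit inverse of the upper-triangular 2 × 2 block matrix with diagonal blocks b − z⁰ᵢ and upper-right block b′. As a standalone numerical sanity check of that block-inverse formula (with random matrices as hypothetical stand-ins for b − z⁰ᵢ and b′):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
bz = rng.standard_normal((k, k)) + 5 * np.eye(k)  # stand-in for b - z, kept well-conditioned
bp = rng.standard_normal((k, k))                  # stand-in for b'

# Upper-triangular 2x2 block matrix [[b - z, b'], [0, b - z]]
M = np.block([[bz, bp], [np.zeros((k, k)), bz]])

ibz = np.linalg.inv(bz)
# Claimed inverse: [[(b-z)^{-1}, -(b-z)^{-1} b' (b-z)^{-1}], [0, (b-z)^{-1}]]
claimed = np.block([[ibz, -ibz @ bp @ ibz], [np.zeros((k, k)), ibz]])

print(np.allclose(np.linalg.inv(M), claimed))
```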
Following the lines of the proof of [6, Theorem 3.1] for the free case and Remark 3.7, we then obtain for each i ∈ [m], E[G z i (B)( x i G z 0 i (B)) 2 ] − E[G z i (B)( y i G z 0 i (B)) 2 ] = E G z 0 i (B) ( η 0 − η 1 ) ( E[G z 0 i (B)])G z 0 i (B) ≤ 3 G z 0 i (B) 2 η 0 − η 1 E[G z 0 i (B)] ≤ 9 G z 0 i (B) 3 η 0 − η 1 . Now, choosing B = b 0 0 b (i.e., b ′ = 0), we get E ′ [G s 0 (b)] − E ′ [G s 1 (b)] ≤ E[G s 0 (b)] − E[G s 0 (b)] E ′ [G s 0 (b)] − E ′ [G s 1 (b)] 0 E[G s 0 (b)] − E[G s 0 (b)] = E[G S 0 (B)] − E[G S 1 (B)] . Therefore, E ′ [G s 0 (b)] − E ′ [G s 1 (b)] ≤ 9 ℑm(b) −1 3 η 0 − η 1 + 3 √ m ℑm(b) −1 4 max 1≤i≤m x i 3 + max 1≤i≤m y i 3 . (4.12) Finally, since (4.12) holds for any m ∈ N, we let m → ∞ to complete the proof. For the Boolean and monotone cases, we apply the proofs of Theorems 1.1 and 1.3 respectively, using the estimate E G z i (B)( x i G z 0 i (B)) 2 − E G z i (B)( y i G z 0 i (B)) 2 ≤ 9 G z 0 i (B) 2 B −1 η 0 − η 1 and noting that B −1 ≤ b −1 (1 + b ′ b −1 ) and G z 0 i (B) ≤ ℑm(b) −1 (1 + b ′ ℑm(b) −1 ) . The remaining arguments follow analogously to the free case. MATRICES WITH BOOLEAN ENTRIES Let (A, ϕ) be a W * -probability space and denote by M n (A) the algebra of n × n matrices whose entries are elements of A, i.e. M n (A) = M n (C) ⊗ A. The aim of this section is to study the distribution of self-adjoint matrices whose entries are Boolean independent with a variance profile. In Section 5.1, we will first illustrate some properties of B-valued Bernoulli elements and then study distributions of matrices whose entries are in particular Boolean independent η-circular elements. Finally, in Section 5.2, we extend the study to the general Boolean Wigner case. Now applying ϕ on both sides, we obtain (5.1) and end the proof. Having Lemma 5.1 in hand, we obtain immediately the following result. We give a proof for the convenience of the reader.
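In the scalar case B = C, Lemma 5.1 reduces to the elementary fact that a Bernoulli element has the symmetric two-point law (δ_{√a} + δ_{−√a})/2 with a = η(1), so its even moments are aⁿ and its odd moments vanish. A one-line check, with an arbitrary illustrative value of a:

```python
# Scalar sanity check of Lemma 5.1: for B = C, a Bernoulli element has the
# two-point distribution (delta_{sqrt(a)} + delta_{-sqrt(a)})/2 with a = eta(1).
a = 2.7  # arbitrary illustrative value of eta(1)
atoms, weights = [a ** 0.5, -(a ** 0.5)], [0.5, 0.5]

def moment(n):
    return sum(w * x ** n for x, w in zip(atoms, weights))

even_ok = all(abs(moment(2 * n) - a ** n) < 1e-9 for n in range(6))
odd_ok = all(abs(moment(2 * n + 1)) < 1e-9 for n in range(6))
print(even_ok and odd_ok)
```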
In order to study matrices with Boolean entries, we consider the quadruple (M n (A), tr n ⊗ϕ, id n ⊗ ϕ, M n (C)) where (A, ϕ) is a given W * -probability space. For this aim, we call an element x ∈ A an η-circular element if x is η-diagonal (see [9, Definition 2.6]) and its Boolean cumulants satisfy in addition the property that β n (x, x * , · · · , x, x * ) = β n (x * , x, · · · , x * , x) = 0 unless n = 2. Remark 5.3 (Norm of η-circular operators). We note that one can construct η-circular elements such that β 2 (A, A * ) = α, β 2 (A * , A) = ᾱ and ||A|| = max( √ α, √ ᾱ). Indeed, in [9, Remark 5.3], a construction is given for η-diagonal elements. Their construction is of the form A = V(X ⊗ Y), where V is the flip operator, X = T 1 ⊕ T 2 and Y is a partial isometry. Since ||V(X ⊗ Y)|| = ||X ⊗ Y|| and ||X ⊗ Y|| = ||X|| ||Y||, then ||V(X ⊗ Y)|| = ||X||, because Y is a partial isometry. In the specific case of the η-circular elements with β 2 (A, A * ) = α and β 2 (A * , A) = ᾱ, we choose T 1 and T 2 to be the Bernoulli variables given by T 1 = 0 √ α √ α 0 and T 2 = 0 √ ᾱ √ ᾱ 0 . Hence, ||X|| = ||T 1 ⊕ T 2 || = max{||T 1 ||, ||T 2 ||} = max( √ α, √ ᾱ). Let b = {b ij | 1 ≤ j ≤ i ≤ n} be a family of Boolean independent Bernoulli and η-circular elements as specified below, and consider the matrix B n = 1≤j≤i≤n e ij ⊗ b ij + e * ij ⊗ b * ij ,(5.2) which we will prove to be an M n (C)-valued Bernoulli element in the space of matrices over A, see Proposition 5.5 for the precise statement. To do this, we need the following lemma whose proof follows similarly to the free case, see [20,Section 9.3]. With Lemma 5.4 in hand, we are ready to describe the OV-distribution of B n . Proposition 5.5. In the quadruple (M n (A), tr n ⊗ϕ, id n ⊗ ϕ, M n (C)), the matrix B n is an operator-valued Bernoulli element over D n , the subalgebra of diagonal matrices with variance map η n : D n → D n , D → η n (D) where for any D = (d ij ) n i,j=1 ∈ D n , η n (D) ij = δ ij ( i−1 k=1 d kk α ik + d ii σ i + n k=i+1 d kk ᾱ ik ). In particular, the distribution of B n with respect to tr n ⊗ ϕ is given by µ Bn = 1 2n n i=1 (δ √ λ i + δ − √ λ i ).
(5.3) Proof. The fact that B n is an operator-valued Bernoulli element follows directly from Lemma 5.4. Indeed, as the diagonal entries are Bernoulli elements and the off-diagonal entries are ηcircular elements that are all Boolean independent up to symmetry, it follows that for any m ∈ N and C, C 1 , . . . , C m−1 ∈ M n (C), we have that β As any M n (C)-valued joint Boolean cumulants of order larger than 2 is equal to 0, we deduce that B n is an operator-valued Bernoulli element over M n (C). It suffices to remark that the covariance map η n sends the subalgebra of diagonal matrices D n ⊂ M n (C) into itself, to see that B n is in fact an operator-valued Bernoulli element over D n . We omit the proof of this statement in the Boolean case, which can be proven in the same way as its free analogue in [25,Theorem 3.1]. To prove the last part of the proposition, we note that by Lemma 5.1, we have Let a = {a ij | 1 ≤ j ≤ i ≤ n} be a family of elements of A that are Boolean independent, centered with respect to ϕ and are such that a ii = a * ii , ϕ[a 2 ii ] := σ i for all i and ϕ[a 2 ij ] = 0, ϕ[a ij a * ij ] := α ij and ϕ[a * ij a ij ] :=α ij for all j < i. Note that the entries of A n are assumed to be, up to symmetry, Boolean independent but do not need to be identically distributed. We will give quantitative estimates on the distribution of such matrices in terms of Cauchy transforms. Our results generalize the ones in Popa and Hao [30] in the case of C*-algebras, since we do not impose a condition on the convergence of moments of order larger than 4 for the entries and more important we allow having a variance profile in the matrix. With this aim, we apply Theorem 1.1 in the quadruple (M n (A), tr n ⊗ϕ, id n ⊗ ϕ, M n (C)) associated with (A, ϕ). Theorem 5.7. Let A n be an operator-valued Wigner matrix described above and B n an operatorvalued Bernoulli B n as in (5.2). 
Then for any z ∈ C + , (tr n ⊗ϕ)[G An (z)] − (tr n ⊗ϕ)[G Bn (z)] ≤ 16 ℑm(z) 4 max j≤i a ij 3 1 √ n . (5.5) Proof. Let b = {b ij | 1 ≤ j ≤ i ≤ n} be a family of Boolean independent elements such that:
• for any 1 ≤ i ≤ n, b ii is a Bernoulli element with ϕ[b ii ] = 0 and ϕ[b 2 ii ] = ϕ[a 2 ii ],
• for any 1 ≤ j < i ≤ n, b ij is an η-circular element with ϕ[b ij ] = 0 and ϕ[b ij b * ij ] = ϕ[a ij a * ij ] := α ij and ϕ[b * ij b ij ] = ϕ[a * ij a ij ] := ᾱ ij .
Without loss of generality, b can be assumed to be Boolean independent from a. Now set B n to be the n × n operator-valued Wigner matrix whose entries are given by the family b, i.e. B n = 1≤j≤i≤n e ij ⊗ b ij + e * ij ⊗ b * ij . Finally, set x ij = e ij ⊗ a ij + e * ij ⊗ a * ij and y ij = e ij ⊗ b ij + e * ij ⊗ b * ij for all 1 ≤ j ≤ i ≤ n. By an application of Lemma 5.4, one sees that, when lifted to matrices, Boolean independence is preserved, in the sense that Boolean independence of the entries {a ij , b ij | 1 ≤ j ≤ i ≤ n} with respect to ϕ implies Boolean independence with amalgamation over M n (C) of {x ij , y ij | 1 ≤ j ≤ i ≤ n} with respect to id n ⊗ϕ. Moreover, the variables are centered and the moments of second order match: for any b ∈ M n (C), (id n ⊗ϕ)[x ij ] = (id n ⊗ϕ)[y ij ] = 0 and (id n ⊗ϕ) x ij bx ij = (id n ⊗ϕ) y ij b y ij . Having this in hand and setting N = n(n + 1)/2, we follow the same steps as in the proof of Theorem 1.1. By adopting its notation, we get for any z ∈ C + a bound in terms of the error terms W ij and W̃ ij , where W ij (z) = G z ij (z)x ij G z 0 ij (z)x ij G z 0 ij (z)x ij G z 0 ij (z), W̃ ij (z) = G z i−1,j (z) y ij G z 0 ij (z)y ij G z 0 ij (z)y ij G z 0 ij (z).
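The norms of the entries entering the estimate below can be illustrated concretely (cf. Remark 5.3): the block-diagonal model X = T₁ ⊕ T₂ built from the two 2 × 2 off-diagonal Bernoulli matrices has operator norm max(√α, √ᾱ). A small numerical illustration, with arbitrary values standing in for α and ᾱ:

```python
import numpy as np

alpha, alpha_bar = 2.0, 0.5  # arbitrary illustrative variances
T1 = np.array([[0.0, alpha ** 0.5], [alpha ** 0.5, 0.0]])
T2 = np.array([[0.0, alpha_bar ** 0.5], [alpha_bar ** 0.5, 0.0]])

# X = T1 (+) T2 realized as a block-diagonal matrix
X = np.block([[T1, np.zeros((2, 2))], [np.zeros((2, 2)), T2]])

op_norm = np.linalg.norm(X, 2)  # operator norm = largest singular value
print(np.isclose(op_norm, max(alpha ** 0.5, alpha_bar ** 0.5)))
```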
We start by controlling the term: (tr n ⊗ϕ) G z ij (z)(e ij ⊗ a ij )G z 0 ij (z)(e ij ⊗ a ij )G z 0 ij (z)(e ij ⊗ a ij )G z 0 ij (z) ≤ G z ij (z) G z 0 ij (z) (tr n ⊗ ϕ) (e ij ⊗ a ij )G z 0 ij (z)(e ij ⊗ a ij )G z 0 ij (z)(e ij ⊗ a ij ) ≤ 1 n √ n G z ij (z) G z 0 ij (z) (tr n ⊗ ϕ) E ij ⊗ a ij [G z 0 ij (z)] ji a ij [G z 0 ij (z)] ji a ij ≤ 1 n 2 √ n G z ij (z) G z 0 ij (z) 3 a ij 3 ≤ 1 n 2 √ n 1 ℑm(z) 4 a ij 3 . As the other terms are treated in the same way, summing over 1 ≤ j ≤ i ≤ n we finally obtain the stated bound for any z ∈ C + . As the b ij 's are η-circular elements with the same variances as the a ij 's, by Remark 5.3 we have that ||b ij || = max{ √ α ij , √ ᾱ ij }. To obtain (5.5) it remains to notice that ||a ij || 2 = ||a ij a * ij || ≥ ϕ[a ij a * ij ] = α ij and ||a ij || 2 = ||a * ij a ij || ≥ ϕ[a * ij a ij ] = ᾱ ij . Combining Theorem 5.7 with Theorem 4.2, we obtain a limit theorem for Wigner matrices with Boolean entries. Proposition 5.8. Let A n be as in (5.7) and suppose that ||a ij || ≤ C, for some constant C independent of n. Furthermore, let η n (b) = (id n ⊗ ϕ)[A n bA n ] and suppose that η n (1) has distribution ρ n with respect to tr ⊗ ϕ. If ρ n converges to ρ in distribution, then the distribution of A n with respect to tr ⊗ ϕ converges to the unique symmetric measure µ such that µ (2) = ρ. Proof. Recall that convergence in distribution is equivalent to uniform convergence of the Cauchy transform on compact sets. Let µ An be the distribution of A n with respect to tr ⊗ ϕ. We will prove that for any ε > 0 and any compact set K ⊂ C + , there is an N such that, for any n > N, |G µ An (z) − G µ (z)| ≤ ε, for all z ∈ K. To do this, let B n be an operator-valued Bernoulli element as in Theorem 5.7 whose variance map η n satisfies that η n (1) has distribution ρ n , and let B be an operator-valued Bernoulli random variable with variance map η such that η(1) has distribution ρ. Since ρ n converges weakly to ρ, then by Corollary 5.2, µ Bn converges to µ B , weakly.
Hence, there exists an integer N 1 > 1 such that, for all n > N 1 , |G µ Bn (z) − G µ B (z)| ≤ ε/2, for all z ∈ K. On the other hand, equation (5.5) yields that there exists N 2 > 1 such that for all n > N 2 , |G µ An (z) − G µ Bn (z)| ≤ ε/2, for all z ∈ K. (5.6) The result follows by taking N = max(N 1 , N 2 ). Let us finish by considering the case when the {a ij } are identically distributed, which corresponds to Theorem 5.1 of Popa and Hao [30]. Example 5.9. Let {a ij } n≥i>j≥0 be a family of Boolean independent identically distributed random variables in some C * -probability space (A, ϕ) and let A n be the n × n matrix defined by A n = 1≤j≤i≤n e ij ⊗ a ij + e * ij ⊗ a * ij . (5.7) If we denote ϕ(a ij a * ij ) = α and ϕ(a * ij a ij ) = ᾱ, then η n (b) := (id n ⊗ ϕ)[A n bA n ] = (id n ⊗ ϕ)[B n bB n ], where B n is the matrix-valued Bernoulli element as in part (i) of Example 5.6 with σ = 0, and ρ n is the distribution of η n (1). Thus ρ n converges to the uniform distribution Uniform(ᾱ, α), and hence we conclude by Theorem 5.7 that the limiting distribution of A n with respect to tr n ⊗ϕ is µ(dt) = |t|/(α − ᾱ) dt, |t| ∈ ( √ ᾱ, √ α). In the case α = ᾱ, ρ n converges to δ α and µ An → 1 2 δ √ α + 1 2 δ − √ α . This appendix is dedicated to the study of operator-valued infinitesimal products of operator-valued infinitesimal probability spaces. Note that the OV free product was introduced in [34], while the OV Boolean and monotone cases have been studied in [31] and [28], respectively. We also refer the reader to [17, Chapter 7] for a nice summary on the subject. However, this is less studied in the infinitesimal setting where only the (scalar-valued) infinitesimal free product was constructed in [12]. In this appendix, we give an algebraic construction of the operator-valued infinitesimal product in the free, Boolean, and monotone settings. First of all, let us recall the following result (see [17, Theorem 4.3.1]): suppose that H 1 , . . .
, H n are Hilbert-B-bimodules and ξ j is a unit vector in H j for each j = 1, . . . , n. In addition, for each j, we define the map E j [b] = ⟨ξ j , bξ j ⟩ for all b ∈ B(H j ). Then there exists a Hilbert B-bimodule H and a unit vector ξ in H, and an injective * -homomorphism ρ j : B(H j ) → B(H) such that E[ρ j (b)] = E j [b] for each b ∈ B(H j ). Moreover, ρ 1 (B(H 1 )), ρ 2 (B(H 2 )), . . . , ρ n (B(H n )) are freely/Boolean/monotone independent with respect to E. Suppose that for each j, there exists a self-adjoint B-bimodule linear map E ′ j : B(H j ) → B that is completely bounded with E ′ j [1] = 0. Here we let A be the unital C * -algebra generated by ρ 1 (B(H 1 )), . . . , ρ n (B(H n )). For each notion of independence, our aim is to construct a self-adjoint B-bimodule linear map E ′ on A that vanishes on B and for which ρ 1 (B(H 1 )), ρ 2 (B(H 2 )), . . . , ρ n (B(H n )) are infinitesimally independent in the OVI probability space (A, E, E ′ , B). Note that for each j, we shall define E ′ on ρ j (B(H j )) as follows: E ′ [ρ j (b)] := E ′ j [b] for all b ∈ B(H j ). To describe the construction of E ′ on A, it is sufficient to show how E ′ is defined on terms of the following form ρ j 1 (B(H j 1 ))ρ j 2 (B(H j 2 )) . . . ρ j k (B(H j k )) where j 1 , . . . , j k ∈ [n] and j 1 ≠ j 2 ≠ . . . ≠ j k . The Infinitesimally Free Case: E ′ [ρ j 1 (b 1 ) . . . ρ j k (b k )] := k s=1 E[ρ j 1 (b 1 ) . . . ρ j s−1 (b s−1 )E ′ [ρ js (b s )]ρ j s+1 (b s+1 ) . . . ρ j k (b k )] where b 1 ∈ B(H j 1 ), . . . , b k ∈ B(H j k ) for some j 1 ≠ j 2 ≠ . . . ≠ j k such that E j 1 (b 1 ) = · · · = E j k (b k ) = 0. The Infinitesimally Boolean Case: E ′ [ρ j 1 (b 1 ) . . . ρ j k (b k )] := k s=1 E[ρ j 1 (b 1 )] . . . E[ρ j s−1 (b s−1 )]E ′ [ρ js (b s )]E[ρ j s+1 (b s+1 )] . . . E[ρ j k (b k )] where b 1 ∈ B(H j 1 ), . . . , b k ∈ B(H j k ) for some j 1 ≠ j 2 ≠ . . . ≠ j k . The Infinitesimally Monotone Case: E ′ [ρ j 1 (b 1 ) . . . ρ j k (b k )] := E ′ [ρ j 1 (b 1 ) . . .
ρ j s−1 (b s−1 )E[ρ js (b s )]ρ j s+1 (b s+1 ) . . . ρ j k (b k )] +E[ρ j 1 (b 1 ) . . . ρ j s−1 (b s−1 )E ′ [ρ js (b s )]ρ j s+1 (b s+1 ) . . . ρ j k (b k )] whenever b 1 ∈ H j 1 , . . . , b k ∈ H j k and j s−1 < j s > j s+1 . Note that the fact that E ′ is well-defined on A in the Boolean case is obvious. For the free case, we observe that A can be written as follows: where ρ i j (B(H i j )) • is the set of centered elements of ρ i j (B(H i j )) • . Due to the direct sum decomposition, we only need to define it on B1 and each of ρ i 1 (B(H i 1 )) • ρ i 2 (B(H i 2 )) • . . . ρ in (B(H in )) • . Therefore, for the free case, E ′ is well-defined on A. In addition, E ′ is also well-defined on A for the monotone case by recursively computing the mixed moments. A = B1 ⊕ Since E j and E ′ j are both self-adjoint for each j, we can easily see that E ′ is self-adjoint for each case. Moreover, following the definition of E ′ , it is clear that ρ 1 (B(H 1 )), ρ 2 (B(H 2 )), . . . , ρ n (B(H n )) are infinitesimally freely/ Boolean/monotone independent with respect to (E, E ′ ). However, it is not clear whether we have any continuity property for E ′ . Hence, we only have the operator-valued infinitesimal products in the algebraic sense. Assumption 1 . 1The two families {x 1 , . . . , x N } and {y 1 , . . . , y N } consist of self-adjoint elements in A such that for any j = 1, . . . , N, • E[x j ] = E[y j ] = 0, • E x j b x j = E y j b y j , for all b ∈ B. Theorem 1. 1 ( 1Boolean Case). Let (A, E, B) be an operator-valued C * -probability space. Let N ∈ N and consider two families x = {x 1 , . . . , x N } and y = {y 1 , . . . , y N } of self-adjoint elements in A that are Boolean independent and satisfy Assumption 1. Then for any b ∈ H + (B),E[G x N (b)] − E[G y N (b)] ≤ ℑm(b) −1 4 α 2 (x)α 4 (x) + α 4 (y) N.(1.1) Assumption 2 . 2The two families x and y consist of self-adjoint elements in A such that for any j = 1, . . . 
, N,• E[x j ] = E[y j ] = E ′ [x j ] = E ′ [y j ] = 0,• E x j b x j = E y j b y j and E ′ x j b x j = E ′ y j b y j , for all b ∈ B. Theorem 1. 3 ( 3Infinitesimal Case). Let (A, E, E ′ , B) be an OV C * -infinitesimal probability space. Let N ∈ N and consider two infinitesimally independent families x = {x 1 , . . . , x N } and y = {y 1 , . . . , y N } of self-adjoint elements in A that are infinitesimally freely/Boolean/monotone independent satisfy Assumption 2. Then for each b ∈ H + (B), Definition 2 . 1 . 21Given an OV C * -probability space (A, E, B). Remark 2 . 8 . 28Note that x is a B-valued semicircular (respectively Bernoulli, arcsine) element with variance η if and only if the OV free (respectively Boolean, monotone) cumulants κ B n are given by κ B n (xb 1 , xb 2 , . . . , xb n−1 , x) = η(b 1 ) if n = 2, 0 if n = 2. (2.1) For a given OV C * -probability space (A, E, B), if we have an additional self-adjoint linear Bbimodule map E ′ : A → B that is completely bounded with E ′ (1) = 0, then (A, E, E ′ , B) is called an OV C * -infinitesimal probability space. Definition 2.10. Given an OV C * -infinitesimal probability space (A, E, E ′ , B). µ Xn i 1 ,...,i k (b 1 , . . . , b k−1 ) − µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) −→ 0, and ∂µ Xn i 1 ,...,i k (b 1 , . . . , b k−1 ) − ∂µ X i 1 ,...,i k (b 1 , . . . , b k−1 ) −→ 0 as n → ∞. Proposition 2 . 12 . 212Let (A, E, E ′ , B) be an OV C * -infinitesimal probability space and ( A, E, B) be its upper triangular probability space. Then subalgebras (A i ) i∈I that contain B are infinitesimally free/Boolean/monotone over B if and only if ( A i ) i∈I are free/Boolean/monotone over B respectively, where for each i, A i and B i are as defined above. Proposition 3 . 2 . 32Let (A, E, B) be an operator-valued C * -probability space. Let N ∈ N and consider the families x = {x 1 , . . . , x N } and y = {y 1 , . . . , y N } of self-adjoint elements in A. For any i = 1, . . . 
, N, set then by similar computations as in (3.3) and the positivity of E, we have Let ( A, E, B) be the corresponding upper triangular space of (A, E, E ′ , B) and set x = { x 1 , . . . , x N } and y = { y 1 , . . . , y N } where each j = 1, . . . , N. Finally, for each i = 1, . . . , N := x N and z 0 = y N 0 0 y N := y N . Theorem 4. 1 . 1Let x := {x 1 , . . . , x n } be a family of self-adjoint elements in A that are Boolean independent with amalgamation over B and that are such that E[x j ] = 0. Then, for any b ∈ H + (B), j is a centered B-valued Bernoulli element with variance η n , (See [17, Lemma 6.2.5]) Theorem 4. 2 . 2Let (A, E, B) be an operator-valued C * -probability space. Consider two operatorvalued Bernoulli elements B 0 , B 1 with respective covariance maps η 0 , η 1 : B → B. Then, for every k ∈ N and each b ∈ H + (M k (B)), we have G M k (B) Theorem 4. 3 . 3Let x := {x 1 , . . . , x n } be a family of self-adjoint elements in A that are monotone independent with amalgamation over B, i.e. A x 1 ≺ A x 2 ≺ · · · ≺ A x N over B, and that are such that E[x j ] = 0. Then, for any b ∈ H + (B), Theorem 4. 4 . 4Let (A, E, B) be an operator-valued C * -probability space and consider two Bvalued arcsine elements A 0 , A 1 with respective covariance maps η 0 , η 1 : B → B. Then, for every k ∈ N and each b ∈ H + (M k (B)), we have G M k (B) Theorem 4 . 5 . 45Let Y be a monotone infinitely divisible random variable with mean 0 and variance 1 and A be a standard arcsine element. Then L(µ Y , µ A ) ≤ K(ϕ[Y 4 ] − 1.5) 1/14 , for some K > 0. Theorem 4. 6 . 6Let (A, E, B) be a C * -probability space and Y be monotone n-divisible element over B with E[Y] = 0 and E[YbY] = η(b). Then, for any any k ∈ N and b ∈ H + where A is the B-valued arcsine element with E[A] = 0 and E[AbA] = η(b) and the supremum is taken over all 4. 4 . 4Infinitesimal CLTs. 
This section is devoted to studying operator-valued infinitesimal central limit theorems, that were introduced in[27], see Theorem 2.17. We provide in this paper the first quantitative results in this setting that are on the level of operator-valued infinitesimal Cauchy transforms.Suppose that (A, E, E ′ , B) is an OV C * -infinitesimal probability space. Let x := {x 1 , . . . , x n } be a family of self-adjoint elements in A such that E[x j ] = E ′ [x j ] = 0 for all j and set Theorem 4 . 8 . 48Let x := {x 1 , . . . , x n } be a family of self-adjoint elements in A that are infinitesimally free/Boolean/monotone independent with amalgamation over B and that are such that E[x j ] = E ′ [x j ] = 0. Then, for any b ∈ H + (B), Theorem 4. 10 . 10Let (A, E, E ′ , B) be an OV C * -infinitesimal probability space, and let c 0 and c 1 be infinitesimal semicircular/Bernoulli/arcsine elements with respective infinitesimal variances (η 0 , η ′ 0 ) and (η 1 , η ′ 1 ). Provided that the above conjecture holds, then for any 1 respectively. We consider the upper triangular probability space ( A, E, B) that is induced by (A, E, E ′ , B), set define η 0 (B) := E[S 0 BS 0 ] and η 1 (B) := E[S 1 BS 1 ] for any B ∈ B. 5. 1 . 1Matrix Valued Bernoulli. Let (A, E, B) be an OV C * -probability space, and (A, ϕ) be a C * -probability space such that ϕ = ϕ • E. Let B be a B-valued Bernoulli element with variance map η(b) = E[BbB]. We illustrate in the following lemma what the scalar-valued distribution of B is. For a measure µ, let us denote by µ (2) the push forward of the measure µ along the map t → t 2 . Lemma 5 . 1 . 51Let B be a B-valued Bernoulli element with η(b) = E[BbB] as variance map, then for any n ∈ N ϕ[B 2n ] = ϕ[η(1) n ] and ϕ[B 2n+1 ] = 0. (5.1) In particular, if µ B and µ η(1) are the distributions of B and η(1) with respect to ϕ, respectively, then µ B is the unique symmetric measure such that µ (2) B = µ η(1) . Proof. 
To compute the moments of a B-valued Bernoulli variable B, we apply the Boolean moment-cumulant formula and use the fact that the only non-vanishing cumulant of B is β 2 (Bb, B) = η(b). Since there is only one pair interval partition on [2n], namely π = {{1, 2}, . . . , {2n − 1, 2n}}, we get E[B 2n ] = η(1) n and E[B 2n+1 ] = 0. Corollary 5.2. Let B and {B n } n≥0 be B-valued Bernoulli elements with variance maps η and {η n } n≥0 , respectively. Then µ Bn → µ B weakly, if and only if µ ηn(1) → µ η(1) weakly. Proof. To relate µ ηn(1) and µ η(1) , we first note that µ Bn ([−t, t]) = µ ηn(1) ([0, t 2 ]), and similarly, µ B ([−t, t]) = µ η(1) ([0, t 2 ]). If µ Bn → µ B , then µ Bn ([−t, t]) → µ B ([−t, t]) and then µ ηn(1) ([0, t 2 ]) → µ η(1) ([0, t 2 ]), which implies that µ ηn(1) → µ η(1) , since by definition µ ηn(1) and µ η(1) are measures supported on R + . The converse follows along the same lines from the observation that symmetric measures are determined by their values on the intervals of the form [−t, t]. be a family of Boolean independent elements such that:
• for any 1 ≤ i ≤ n, b ii is a Bernoulli element with ϕ(b ii ) = 0 and ϕ(b 2 ii ) := σ i ,
• for any 1 ≤ j < i ≤ n, b ij is an η-circular element with ϕ(b ij ) = 0 and ϕ(b ij b * ij ) := α ij and ϕ(b * ij b ij ) := ᾱ ij .
Let us set e ii = 1 2 √ n E ii and e ij = 1 √ n E ij for j < i, where (E ij ) 1≤i,j≤n denote the standard matrix units in M n (C). Our main interest in this section is the matrix
Then the operator-valued cumulants of the family {A (k) } k are given in terms of the cumulants of their entries as follows: ( B n C, B n ) := η n (C) is given for any i, j ∈ [n] tr n ⊗ϕ[B 2k n ] = tr n ⊗ϕ[η(1) On the other hand, denoting by δ λ the Dirac measure at λ, we can easily check that the m-th moment of the probability measure µ = 1 2n n i=1 (δ √ λ i + δ − √ λ i ) is equal to tr n ⊗ϕ[B m n ], which proves (5.3). ( tr n ⊗ϕ)[G An (z)] − (tr n ⊗ϕ)[G Bn (z)] ≤ 1≤j≤i≤n (tr n ⊗ ϕ)[W ij (z)] + (tr n ⊗ ϕ)[ W ij (z)] , ( tr n ⊗ϕ)[G An (z)] − (tr n ⊗ϕ)[G Bn (z)] ACKNOWLEGMENTS OA was supported by CONACYT Grant CB-2017-2018-A1-S-9764 and by the SFB-TRR 195 Symbolic Tools in Mathematics and their Application of the German Research Foundation (DFG). This project started when OA and MB were visiting Saarland University. They would like to thank Roland Speicher for his hospitality. APPENDIX A. OPERATOR-VALUED INFINITESIMAL PRODUCTS n≥0 i 1 1=··· =in ρ i 1 (B(H i 1 )) • ρ i 2 (B(H i 2 )) • . . . ρ in (B(H in )) • Unital subalgebras A 1 , . . . , A n that contain B are free over B if and only if for n ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal and for x 1 ∈ A i 1 , . . . , x s ∈ A is , we have • B-bimodule subalgebras A 1 , . . . , A n are Boolean independent over B if and only if for n ≥ 2 and i 1 , . . . , i s ∈ [n] which are not all equal and for x 1r B s (x 1 , . . . , x s ) = 0. 1.1 where we choose y = {y 1 , . . . , y n } to be a family of B-valued Bernoulli elements that are Boolean independent over B and are such that E[y j ] = 0 and E[y j by j ] = E[x j bx j ] for any j ∈ [n] and b ∈ B. Moreover, we note that in this case, we have that E[y j b * y 2 j by j ] = E[y j b * y j ]E[y j by j ]. As E is a completely positive map, we note that sup b∈B, b =1 E[y j b * y j ] = E[y 2 Example 5.6. 
We show in the following examples how we can explicitly compute the spectral distribution of B n using Proposition 5.5 and find their limiting distribution, as n tends to infinity. We consider the cases when the entries are identically distributed and when they have a variance profile.(i) Identically distributed entries: For any 1 ≤ j ≤ i ≤ n, let σ i = σ, α ij = α/n and α ij =α/n, and assume without loss of generality that 0 <α ≤ α. In this case,Now as λ i+1 − λ i = (α −α)/n, we notice that the λ i 's are equally spaced with λ 1 = σ + n−1 nα and λ n = σ + n−1 n α. Thus, in the limit as n → ∞,One can also see this by computing the cumulative distribution function of X n for any ω ∈ R,From this, one can easily show that µ Bn :Moreover, in the case whereα = α, B n is a scalar Bernoulli element with distributionn α , which converges to 1 2 δ √ σ+α + 1 2 δ − √ σ+α . (ii) Variance profile: Assume for any 1 ≤ j ≤ i ≤ n, α ij =α ij = |i − j|/n 2 and σ i = 0. Then in this case, it is easy to see thatThis means that for tThus if ρ is the limiting distribution of 1 n i δ λ i , it satisfies that ρ((1/4, (t 2 + 1)/4)) = t, for t ∈ (0, 1). From where the the density of ρ is given by ρ(dt) = 4 √ 4t−1 dt for t ∈ (1/4, 1/2) and thus where we recall that e ii = 1 2 √ n E ii and e ij = 1 √ n E ij for j < i with (E ij ) 1≤i,j≤n denoting the standard matrix units in M n (C). O Arizmendi, Convergence of the fourth moment and infinite divisibility. Probability and Mathematical Statistics. 33O. Arizmendi. Convergence of the fourth moment and infinite divisibility. Probability and Mathematical Statis- tics, 33, 2013. Convergence of the fourth moment and infinite divisibility: quantitative estimates. O Arizmendi, A Jaramillo, Electronic Communications in Probability. 19O. Arizmendi and A. Jaramillo. Convergence of the fourth moment and infinite divisibility: quantitative esti- mates. Electronic Communications in Probability, 19:1-12, 2014. 
A Berry-Esseen type limit theorem for Boolean convolution. O Arizmendi, M Salazar, Arch. Math. (Basel). 1111O. Arizmendi and M. Salazar. A Berry-Esseen type limit theorem for Boolean convolution. Arch. Math. (Basel), 111(1):101-111, 2018. Berry-esseen type estimate and return sequence for parabolic iteration in the upper half-plane. O Arizmendi, M Salazar, J.-C Wang, International Mathematics Research Notices. 23O. Arizmendi, M. Salazar, and J.-C. Wang. Berry-esseen type estimate and return sequence for parabolic iteration in the upper half-plane. International Mathematics Research Notices, 2021(23):18037-18056, 2021. Operator-valued matrices with free or exchangeable entries. M Banna, G Cébron, arXiv:1811.05373To appear in Annales de l'IHP, Probabilités et statistiques. M. Banna and G. Cébron. Operator-valued matrices with free or exchangeable entries. To appear in Annales de l'IHP, Probabilités et statistiques. arXiv:1811.05373, 2018. Berry-Esseen bounds for the multivariate B-free CLT and operator-valued matrices. M Banna, T Mai, arXiv:2105.02044Trans. Am. Math. Soc. To appear inM. Banna and T. Mai. Berry-Esseen bounds for the multivariate B-free CLT and operator-valued matrices. To appear in Trans. Am. Math. Soc. arXiv:2105.02044, 2021. Infinite divisibility and a non-commutative Boolean-to-free Bercovici-Pata bijection. S T Belinschi, M Popa, V Vinnikov, J. Funct. Anal. 2621S. T. Belinschi, M. Popa, and V. Vinnikov. Infinite divisibility and a non-commutative Boolean-to-free Bercovici- Pata bijection. J. Funct. Anal., 262(1):94-123, 2012. On the operator-valued analogues of the semicircle, arcsine and Bernoulli laws. S T Belinschi, M Popa, V Vinnikov, J. Operator Theory. 701S. T. Belinschi, M. Popa, and V. Vinnikov. On the operator-valued analogues of the semicircle, arcsine and Bernoulli laws. J. Operator Theory, 70(1):239-258, 2013. Eta-diagonal distributions and infinite divisibility for R-diagonals. 
H Bercovici, A Nica, M Noyes, K Szpojankowski, Annales de l'Institut Henri Poincaré, Probabilités et Statistiques. 54Institut Henri PoincaréH. Bercovici, A. Nica, M. Noyes, and K. Szpojankowski. Eta-diagonal distributions and infinite divisibility for R-diagonals. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 54, pages 907-937. Institut Henri Poincaré, 2018. A generalization of the lindeberg principle. S Chatterjee, The Annals of Probability. 346S. Chatterjee. A generalization of the lindeberg principle. The Annals of Probability, 34(6):2061-2076, 2006. Asymptotic infinitesimal freeness with amalgamation for haar quantum unitary random matrices. S Curran, R Speicher, Communications in mathematical physics. 3013S. Curran and R. Speicher. Asymptotic infinitesimal freeness with amalgamation for haar quantum unitary ran- dom matrices. Communications in mathematical physics, 301(3):627-659, 2011. Infinitesimal non-crossing cumulants and free probability of type B. M Février, A Nica, Journal of Functional Analysis. 2589M. Février and A. Nica. Infinitesimal non-crossing cumulants and free probability of type B. Journal of Func- tional Analysis, 258(9):2983-3023, 2010. Non-commutative notions of stochastic independence. A B Ghorbal, M Schürmann, Mathematical Proceedings of the Cambridge Philosophical Society. Cambridge University Press133A. B. Ghorbal and M. Schürmann. Non-commutative notions of stochastic independence. In Mathematical Pro- ceedings of the Cambridge Philosophical Society, volume 133, pages 531-561. Cambridge University Press, 2002. A new application of random matrices: Ext(C * red (F 2 )) is not a group. U Haagerup, S Thorbjørnsen, Ann. of Math. 1622U. Haagerup and S. Thorbjørnsen. A new application of random matrices: Ext(C * red (F 2 )) is not a group. Ann. of Math. (2), 162(2):711-775, 2005. The monotone cumulants. T Hasebe, H Saigo, Annales de l'IHP Probabilités et statistiques. 47T. Hasebe and H. Saigo. 
The monotone cumulants. Annales de l'IHP Probabilités et statistiques, 47(4):1160- 1170, 2011. On operator-valued monotone independence. T Hasebe, H Saigo, Nagoya Mathematical Journal. 215T. Hasebe and H. Saigo. On operator-valued monotone independence. Nagoya Mathematical Journal, 215:151- 167, 2014. Operator-valued non-commutative probability. D , Lecture Notes. D. Jekel. Operator-valued non-commutative probability. Lecture Notes, 2019. An operad of non-commutative independences defined by trees. D Jekel, W Liu, Dissertationes Mathematicae. 553D. Jekel and W. Liu. An operad of non-commutative independences defined by trees. Dissertationes Mathemati- cae, 553:1-100, 2020. Eine neue herleitung des exponentialgesetzes in der wahrscheinlichkeitsrechnung. J W Lindeberg, Mathematische Zeitschrift. 151J. W. Lindeberg. Eine neue herleitung des exponentialgesetzes in der wahrscheinlichkeitsrechnung. Mathematis- che Zeitschrift, 15(1):211-225, 1922. Free probability and random matrices. J A Mingo, R Speicher, Springer35J. A. Mingo and R. Speicher. Free probability and random matrices, volume 35. Springer, 2017. Noncommutative Brownian motion in monotone Fock space. N Muraki, Comm. Math. Phys. 1833N. Muraki. Noncommutative Brownian motion in monotone Fock space. Comm. Math. Phys., 183(3):557-570, 1997. Monotonic convolution and monotonic Lévy-Hincin formula. N Muraki, preprintN. Muraki. Monotonic convolution and monotonic Lévy-Hincin formula. preprint, 2000. Monotonic independence, monotonic central limit theorem and monotonic law of small numbers. Infinite Dimensional Analysis, Quantum Probability and Related Topics. N Muraki, 4N. Muraki. Monotonic independence, monotonic central limit theorem and monotonic law of small numbers. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 4(01):39-58, 2001. N Muraki, The five independences as natural products. Infinite Dimensional Analysis, Quantum Probability and Related Topics. 6N. Muraki. 
The five independences as natural products. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 6(03):337-371, 2003. Operator-valued distributions. I. Characterizations of freeness. A Nica, D Shlyakhtenko, R Speicher, Int. Math. Res. Not. 29A. Nica, D. Shlyakhtenko, and R. Speicher. Operator-valued distributions. I. Characterizations of freeness. Int. Math. Res. Not., (29):1509-1538, 2002. Completely bounded maps and operator algebras. V Paulsen, Cambridge Studies in Advanced Mathematics. 78Cambridge University PressV. Paulsen. Completely bounded maps and operator algebras, volume 78 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2002. Infinite Dimensional Analysis, Quantum Probability and Related Topics. D Perales, P.-L Tseng, 242150019On operator-valued infinitesimal Boolean and monotone independenceD. Perales and P.-L. Tseng. On operator-valued infinitesimal Boolean and monotone independence. Infinite Di- mensional Analysis, Quantum Probability and Related Topics, 24(03):2150019, 2021. A combinatorial approach to monotonic independence over a C * -algebra. M Popa, Pacific Journal of Mathematics. 2372M. Popa. A combinatorial approach to monotonic independence over a C * -algebra. Pacific Journal of Mathemat- ics, 237(2):299-325, 2008. A new proof for the multiplicative property of the Boolean cumulants with applications to the operatorvalued case. M Popa, Colloquium Mathematicum. 1M. Popa. A new proof for the multiplicative property of the Boolean cumulants with applications to the operator- valued case. In Colloquium Mathematicum, volume 1, pages 81-93, 2009. An asymptotic property of large matrices with identically distributed boolean independent entries. Infinite Dimensional Analysis, Quantum Probability and Related Topics. M Popa, Z Hao, 221950024M. Popa and Z. Hao. An asymptotic property of large matrices with identically distributed boolean independent entries. 
Infinite Dimensional Analysis, Quantum Probability and Related Topics, 22(04):1950024, 2019. Non-commutative functions and the non-commutative free Lévy-Hinčin formula. M Popa, V Vinnikov, Advances in Mathematics. 236M. Popa and V. Vinnikov. Non-commutative functions and the non-commutative free Lévy-Hinčin formula. Advances in Mathematics, 236:131-157, 2013. On a Berry-Esseen type limit theorem for Boolean convolution. M Salazar, Electronic Communications in Probability. 27M. Salazar. On a Berry-Esseen type limit theorem for Boolean convolution. Electronic Communications in Prob- ability, 27:1-10, 2022. Independence and product systems. M Skeide, Recent developments in stochastic analysis and related topics. World ScientificM. Skeide. Independence and product systems. In Recent developments in stochastic analysis and related topics, pages 420-438. World Scientific, 2004. Combinatorial theory of the free product with amalgamation and operator-valued free probability theory. R Speicher, American Mathematical Soc627R. Speicher. Combinatorial theory of the free product with amalgamation and operator-valued free probability theory, volume 627. American Mathematical Soc., 1998. . R Speicher, R Wourodi, Boolean convolution. Fields Inst Commun. 12R. Speicher and R. Wourodi. Boolean convolution. Fields Inst Commun., 12:267-279, 1997. Theory of operator algebras II. M Takesaki, Springer125M. Takesaki et al. Theory of operator algebras II, volume 125. Springer, 2003. Infinitesimal Probability Theory with Amalgamation. P.-L Tseng, Canada. 2021Queen's UniversityPhD thesisP.-L. Tseng. Infinitesimal Probability Theory with Amalgamation. PhD thesis, Queen's University, Canada, 2021. A unified approach to infinitesimal freeness with amalgamation. P.-L Tseng, arXiv:1904.11646arXiv preprintP.-L. Tseng. A unified approach to infinitesimal freeness with amalgamation. arXiv preprint arXiv:1904.11646, 2022. Conditional expectation in an operator algebra. 
H Umegaki, Tohoku Mathematical Journal, Second Series. 62-3H. Umegaki. Conditional expectation in an operator algebra. Tohoku Mathematical Journal, Second Series, 6(2- 3):177-181, 1954. Operations on certain non-commutative operator-valued random variables. D Voiculescu, Astérisque. 2321D. Voiculescu. Operations on certain non-commutative operator-valued random variables. Astérisque, 232(1):243-275, 1995. . Guanajuato Cimat, address: [email protected], GUANAJUATO, MEXICO Email address: [email protected]
[]
Physical States and Gauge Independence of the Energy-Momentum Tensor in Quantum Electrodynamics

Taro Kashiwa and Naoki Tanimura
Department of Physics, Kyushu University, Fukuoka 812-81, Japan

arXiv: hep-th/9605207; DOI: 10.1002/prop.2190450503

Abstract: Discussions are made on the relationship between physical states and gauge independence in QED. As the first candidate, take the LSZ asymptotic states in a covariant canonical formalism to investigate gauge independence of the (Belinfante) symmetric energy-momentum tensor. It is shown that expectation values of the energy-momentum tensor in terms of those asymptotic states are gauge independent to all orders. Second, consider gauge invariant operators of electron or photon, such as Dirac's electron or Steinmann's covariant approach, expecting a gauge invariant result without any restriction. It is, however, demonstrated that singling out gauge invariant quantities is merely synonymous with a gauge fixing, resulting again in use of the asymptotic condition when proving gauge independence. Nevertheless, it is commented that these invariant approaches are helpful for understanding the mechanism of the LSZ mapping and, furthermore, of quark confinement in QCD. As the final candidate, it is shown that gauge transformations are freely performed under the functional representation or the path integral expression, on account of the fact that the functional space is equivalent to a collection of infinitely many inequivalent Fock spaces. The covariant LSZ formalism is shortly reviewed and the basic facts on the energy-momentum tensor are also illustrated.
May 29, 1996

1 Introduction

Symmetries play an important role in physics. In the usual situation, an invariance, once accepted, should be maintained throughout the story.
However, gauge symmetry in quantum field theory (QFT) follows a rather different scenario: classically the electric and magnetic fields are gauge invariant, but quantization has to be carried out in terms of gauge potentials after fixing a gauge according to a standard recipe such as Dirac's [1]: start with a classical Lagrangian, follow the canonical procedure until the Dirac bracket is set up, then invoke the correspondence principle to obtain the quantum theory. Accordingly, quantization in each gauge is carried out in a different Fock space, so what do we mean by "gauge invariance" of the S-matrix or of expectation values of observables after quantization? Moreover, in order to obtain a representation in QFT it is unavoidable to introduce asymptotic fields, which satisfy linear hyperbolic field equations and are gauge invariant. How, then, do we bridge between the asymptotic fields and the Heisenberg fields (the LSZ mapping), the latter satisfying nonlinear equations in a fixed gauge? A standard way of proving gauge independence of, for instance, the S-matrix has been to show it in perturbation theory with the use of a gauge dependent photon propagator [2]. There the notion of physical states takes part, such as the on-shell condition for the electron or the photon polarization condition. An ambitious attempt is to introduce gauge invariant fields for the electron and the photon [3, 4, 5, 6], expecting a fully gauge invariant result without any conditions. The approach furthermore leads us toward an understanding of the issue of quark confinement [7]. Apart from these, the most widely adopted method of proving gauge independence is that of functional integration [8]: start from some gauge by inserting a delta function into the path integral and move to another gauge by a change of variables. However, to the authors' knowledge, there seems to be almost no clarification of this fact.
In this paper, we first study gauge independence of the energy-momentum tensor in a covariant canonical formalism. The reasons for taking up this issue are as follows:

• (a) There have been attempts to prove gauge independence of the S-matrix [2], but seemingly very few to check that of the energy-momentum or of the energy-momentum tensor. Needless to say, the energy-momentum must be gauge independent, and so must be the energy-momentum tensor coupled to gravity. Furthermore, the energy-momentum tensor is a good object for checking an invariance: since it is a composite operator, serious problems which cannot be seen in a formal discussion sometimes emerge once higher order corrections are taken into account [9]; the self-stress problem of the electron [10] or the trace anomaly [11] is well known. Freedman et al. [12] studied tadpole contributions to the energy-momentum tensor in scalar QED, but they made no explicit calculation for the other parts.

• (b) It is preferable to utilize a covariant formulation in the perturbation theory, so that the covariant LSZ formalism [13] is suitable. It is then necessary to impose physical state conditions and to check whether those give a gauge independent result.

To make the situation clearer, consider the canonical energy-momentum tensor:
$$ T_{\mu\nu} \equiv \sum_a \frac{\partial \mathcal{L}}{\partial(\partial^\mu \phi_a)}\,\partial_\nu \phi_a - g_{\mu\nu}\,\mathcal{L}. \qquad (1.1) $$
In terms of the (classical) Lagrangian
$$ \mathcal{L}_c = \bar\psi\Big(\frac{i}{2}\overleftrightarrow{\not\!D} - m\Big)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad (1.2) $$
with
$$ \not\!D \equiv \gamma^\mu D_\mu, \qquad D_\mu \equiv \partial_\mu - ieA_\mu, \qquad \Phi^*\overleftrightarrow{D_\mu}\Psi \equiv \Phi^* D_\mu\Psi - (D_\mu\Phi)^*\Psi, \qquad (1.3) $$
it reads
$$ T^c_{\mu\nu} \equiv \bar\psi\,\frac{i}{2}\gamma_\mu\overleftrightarrow{\partial_\nu}\,\psi - F_{\mu\rho}\,\partial_\nu A^\rho - g_{\mu\nu}\,\mathcal{L}_c, \qquad (1.4) $$
which is apparently gauge variant, contrary to Belinfante's symmetric energy-momentum tensor,
$$ \Theta^c_{\mu\nu} \equiv \frac{i}{4}\,\bar\psi\big(\gamma_\mu\overleftrightarrow{D_\nu} + \gamma_\nu\overleftrightarrow{D_\mu}\big)\psi - F_{\mu\rho}F_\nu{}^{\rho} - g_{\mu\nu}\,\mathcal{L}_c, \qquad (1.5) $$
considered as the source of gravity, except for the scalar case, in which an improvement is necessary [9].
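The contrast between (1.4) and (1.5) can be made explicit with a small symbolic computation. In the sketch below (our own check, not part of the paper; the component functions A0,…,A3 and chi are placeholders for arbitrary fields), the field strength F_{μν} is verified to be gauge invariant, while the term F_{μρ}∂_νA^ρ of the canonical tensor is seen to shift under A_μ → A_μ + ∂_μχ:

```python
import sympy as sp

# Coordinates and an arbitrary gauge function chi(x)
x = sp.symbols('x0 x1 x2 x3')
chi = sp.Function('chi')(*x)

# Arbitrary gauge potential A_mu(x) (hypothetical component functions)
A = [sp.Function(f'A{mu}')(*x) for mu in range(4)]
# Gauge-transformed potential A'_mu = A_mu + d_mu chi
Ap = [A[mu] + sp.diff(chi, x[mu]) for mu in range(4)]

def F(Afield, mu, nu):
    # Field strength F_{mu nu} = d_mu A_nu - d_nu A_mu
    return sp.diff(Afield[nu], x[mu]) - sp.diff(Afield[mu], x[nu])

# F_{mu nu} is gauge invariant: the chi-dependence cancels identically
for mu in range(4):
    for nu in range(4):
        assert sp.simplify(F(Ap, mu, nu) - F(A, mu, nu)) == 0

g = sp.diag(1, -1, -1, -1)   # metric, signature (+,-,-,-)

def term(Afield, mu, nu):
    # The gauge-variant piece of (1.4): F_{mu rho} d_nu A^rho
    return sum(F(Afield, mu, r) * g[r, s] * sp.diff(Afield[s], x[nu])
               for r in range(4) for s in range(4))

# Under the gauge transformation this term picks up F_{mu rho} d_nu d^rho chi
delta = sp.simplify(term(Ap, 0, 1) - term(A, 0, 1))
print("F invariant; canonical term shifts:", sp.simplify(delta) != 0)
```

The leftover `delta` is exactly the total-derivative mismatch that the Belinfante improvement removes.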
However, there is no problem for the energy-momentum itself: as is well known from the process of construction, the difference is given by a total derivative, $T^c_{\mu\nu} = \Theta^c_{\mu\nu} + \partial^\rho X_{[\rho,\mu]\nu}$ (with $\rho$, $\mu$ antisymmetric), so that
$$ P_\mu = \int d^3x\; T^c_{0\mu} = \int d^3x\; \Theta^c_{0\mu}. \qquad (1.6) $$
Therefore the energy-momentum itself as well as the Belinfante tensor is gauge invariant and can be considered as an observable classically. The quantum Lagrangian, due to Nakanishi and Lautrup [14], is
$$ \mathcal{L} \equiv \mathcal{L}_c + \mathcal{L}_{GF} = \bar\psi\Big(\frac{i}{2}\overleftrightarrow{\not\!D} - m\Big)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - A^\mu\partial_\mu B + \frac{\alpha}{2}B^2, \qquad (1.7) $$
with
$$ \mathcal{L}_{GF} \equiv -A^\mu\partial_\mu B + \frac{\alpha}{2}B^2, \qquad (1.8) $$
where $B$ is an auxiliary field, called the Nakanishi-Lautrup field, and $\alpha$ is the gauge parameter. Although the gauge has been fixed, we are left with the BRS symmetry [15, 16]:
$$ \delta_B A_\mu = \partial_\mu c, \quad \delta_B c = 0, \quad \delta_B B = 0, \quad \delta_B \bar c = -iB, \quad \delta_B \psi = iec\psi, \quad \delta_B \bar\psi = -iec\bar\psi, \qquad (1.9) $$
with $c$ ($\bar c$) being the Faddeev-Popov ghost (anti-ghost). This keeps the following Lagrangian intact:
$$ \mathcal{L} + \mathcal{L}_{FP} = \mathcal{L}_c + \mathcal{L}_{GF} + \mathcal{L}_{FP}, \qquad (1.10) $$
with
$$ \mathcal{L}_{FP} \equiv i\,\partial_\mu\bar c\,\partial^\mu c. \qquad (1.11) $$
Note that
$$ \mathcal{L}_{GF} + \mathcal{L}_{FP} = \delta_B\Big( i\frac{\alpha}{2}\bar c B - i\,\partial^\mu\bar c\,A_\mu \Big). \qquad (1.12) $$
Thus the gauge symmetry has been taken over by the BRS symmetry in quantum theory. The generator is called the BRS charge $Q_B$:
$$ [A_\mu, Q_B] = i\delta_B A_\mu, \qquad \{\psi, Q_B\} = i\delta_B\psi, \qquad \ldots \qquad (1.13) $$
which gives a physical state condition:
$$ Q_B\,|{\rm phys}\rangle_B = 0. \qquad (1.14) $$
(Here $\{a,b\}$ ($[a,b]$) is the anticommutator (commutator).) Since the canonical energy-momentum tensor is not observable even classically, we concentrate only on Belinfante's one, given as
$$ \Theta_{\mu\nu} = \Theta^c_{\mu\nu} - (A_\mu\partial_\nu B + A_\nu\partial_\mu B) + i\,\partial_\mu\bar c\,\partial_\nu c + i\,\partial_\nu\bar c\,\partial_\mu c - g_{\mu\nu}(\mathcal{L}_{GF} + \mathcal{L}_{FP}) $$
$$ = \Theta^c_{\mu\nu} + \delta_B\Big( -i\,\partial_\mu\bar c\,A_\nu - i\,\partial_\nu\bar c\,A_\mu - g_{\mu\nu}\big( i\frac{\alpha}{2}\bar c B - i\,\partial^\rho\bar c\,A_\rho \big) \Big), \qquad (1.15) $$
which is, of course, BRS invariant:
$$ [Q_B, \Theta_{\mu\nu}] = 0. \qquad (1.16) $$
In view of (1.13) and (1.14),
$$ {}_B\langle {\rm phys}'|\,\Theta_{\mu\nu}\,|{\rm phys}\rangle_B = {}_B\langle {\rm phys}'|\,\Theta^c_{\mu\nu}\,|{\rm phys}\rangle_B. \qquad (1.17) $$
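Since the argument turns on BRS invariance, it may help to record why $\delta_B$ of (1.9) is nilpotent. The following short check, spelled out here for convenience, uses only the rules (1.9) and the Grassmann-odd nature of $c$:

```latex
\delta_B^2 A_\mu = \partial_\mu(\delta_B c) = 0, \qquad
\delta_B^2 \bar c = -i\,\delta_B B = 0, \qquad
\delta_B^2 B = \delta_B^2 c = 0,
```
$$ \delta_B^2\psi = ie\,\delta_B(c\psi) = ie\big[(\delta_B c)\psi - c\,\delta_B\psi\big]
   = ie\big[0 - c\,(iec\psi)\big] = e^2 c^2 \psi = 0, $$
since $c^2 = 0$ for the Grassmann-odd ghost; the same computation applies to $\bar\psi$. Nilpotency is what guarantees that the $\delta_B$-exact pieces in (1.12) and (1.15) drop out between states obeying (1.14).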
So when sandwiched between properly chosen physical states obeying (1.14), the expectation value of the symmetric energy-momentum tensor becomes gauge independent, provided that higher orders produce no harmful effects. In order to check this, we employ the perturbative method in the covariant formalism to calculate expectation values of the energy-momentum tensor in the loop expansion in §2, and convince ourselves of gauge independence (BRS invariance) to all orders. The main machinery is the Ward-Takahashi relation, with the aid of dimensional regularization, which breaks neither the Poincaré nor the gauge invariance and is handier than the Pauli-Villars regularization. We then discuss gauge invariant operators, expecting that the result would be gauge (BRS) invariant unconditionally. However, from the discussion of §3 we recognize that picking up gauge invariant operators for the basic fields, that is, for electrons and photons, is merely synonymous with gauge fixing, so that we again need physical state conditions to prove gauge independence. We also discuss the LSZ mapping in terms of these invariant fields in §3. There is another physical state condition frequently adopted: the Gauss law constraint in the $A_0 = 0$ gauge,
$$ \Phi(x)\,|{\rm phys}\rangle \equiv \Big( \sum_{k=1}^{3}\partial_k E_k(x) + J_0(x) \Big)|{\rm phys}\rangle = 0, \qquad (1.18) $$
where $E_k$ ($J_0$) is the electric field (the charge density). According to common sense in QFT [18], (1.18) implies $\Phi(x) = 0$, but there have been a number of discussions in terms of this method. In §4 we clarify the reason by means of the functional representation. On account of that, we can also build up the path integral formula starting from the Coulomb gauge and prove gauge independence more rigorously than in the previous work [8].
In Appendix A, we review the covariant LSZ formalism, and in Appendix B we study the violation of the Ward-Takahashi identity for the energy-momentum tensor and then perform an explicit renormalization for the energy-momentum tensor to illustrate that the usual procedure does indeed work well. LSZ and the Energy-Momentum Tensor In this section in order to examine gauge independence of the expectation value we apply the covariant LSZ formalism to the energy-momentum tensor and demonstrate that asymptotic states can indeed be interpreted as physical states and there is no harmful contribution from higher orders. Physical States in the LSZ formalism Start with a discussion on physical states in the LSZ formalism. Details must be seen in Appendix A. In view of the BRS invariant Lagrangian, L + L F P = ψ i 2 ↔ D −m ψ − 1 4 F µν F µν − A µ ∂ µ B + α 2 B 2 + i∂ µ c∂ µ c,(2.1) the ghosts are free, c = c = 0, all the time so that we can reduce physical state |phys B (1.14) to a simpler form: first decompose the total space such that [16] V ⊗ |0 F P (2.2) where |0 F P is the vacuum of the FP ghost sector and V is the remainder. Q B |phys B = 0, (1.14), then implies Q B |phys B = Q B |phys ⊗ |0 F P = i d 3 qB q |phys ⊗ c † q |0 F P = 0 , (2.3) where the BRS charge has been given by Q B = i d 3 q c † q B q − B † q c q (2.4) with c † q (c q ) and B † q (B q ) being the creation (annihilation) operator of the ghost and the B field respectively. From (2.3) physical state in this reduced space reads B q |phys = 0 , giving [14] B (+) (x) |phys = 0 . (2.5) Thus throwing away the ghosts in (2.1) we begin with L = ψ i 2 ↔ D −m ψ − 1 4 F µν F µν − A µ ∂ µ B + α 2 B 2 . (2.6) Since B(x) is free, B(x) = 0, it goes to the asymptotic field itself:B(x) −→ Z −1/2 3 B as (x) , where "as" designates the asymptotic field (in or out), so that the physical state reads B as(+) (x) |phys; as = 0 . 
The commutation relations with respect to $B$,
$$ [A_\mu(x), B(y)] = -i\,\partial^x_\mu D(x-y), \qquad [B(x), B(y)] = 0, \qquad (2.8) $$
trivially become
$$ [A^{\rm as}_\mu(x), B^{\rm as}(y)] = -i\,\partial^x_\mu D(x-y), \qquad [B^{\rm as}(x), B^{\rm as}(y)] = 0, \qquad (2.9) $$
since the interaction should fade away in the asymptotic region. If we admit (2.10) and (2.11), the asymptotic states of the electron are all physical, that is, $\psi^{\rm as}(y)$ is BRS invariant. As for those of the photon, the $B$-state
$$ |q; {\rm as}\rangle \equiv B^{{\rm as}\,\dagger}_q|0\rangle, \qquad (2.12) $$
is physical owing to the second relation in (2.9), but
$$ |q\sigma; {\rm as}\rangle \equiv A^{{\rm as}\,\dagger}_{q\sigma}|0\rangle, \qquad (2.13) $$
with $A^{{\rm as}\,\dagger}_{q\sigma}$ ($A^{\rm as}_{q\sigma}$) being the creation (annihilation) operator of the photon, needs an additional constraint to be physical. Introduce the photon wave functions $h^{q\sigma}_\mu(x)$, $f^{q\sigma}_\mu(x)$, defined through
$$ \Box h^{q\sigma}_\mu(x) = f^{q\sigma}_\mu(x); \qquad \Box f^{q\sigma}_\mu(x) = 0; \qquad (2.14) $$
which are related to each other by
$$ h^{q\sigma}_\mu(x) = \frac{1}{2}\,(\nabla^2)^{-1}\Big[ \Big( x^0\partial_0 - \frac{3}{2} \Big) f^{q\sigma}_\mu(x) + g_{\mu 0}\, f^{q\sigma}_0(x) \Big]. \qquad (2.15) $$
Write the Fourier transformation as
$$ f^{q\sigma}_\mu(x) = \int\frac{d^3p}{(2\pi)^3\,2p^0}\;\xi^\sigma_\mu(p)\,\varphi_q(p)\,e^{-ipx}; \qquad p^0 = |\mathbf{p}|, \qquad (2.16) $$
where the $\varphi_q(p)$'s form an orthonormal set and $\xi^\sigma_\mu(p)$ is the photon polarization vector satisfying
$$ \xi^\sigma_\mu(p)\,\xi^{\tau\,\mu}(p) = {\rm diag}(1,-1,-1,-1) \equiv \eta^{\sigma\tau}, \qquad (2.17) $$
where repeated indices imply a summation. The physical state condition for (2.13) then gives
$$ B^{{\rm as}(+)}(x)\,|q\sigma; {\rm as}\rangle = [B^{{\rm as}(+)}(x), A^{{\rm as}\,\dagger}_{q\sigma}]\,|0\rangle = \partial^\mu f^{q\sigma}_\mu(x)\,|0\rangle \equiv f^{q\sigma}(x)\,|0\rangle \longrightarrow 0, \qquad (2.18) $$
where use has been made of the first relation in (2.9). Thus
$$ f^{q\sigma}(x) = \partial^\mu f^{q\sigma}_\mu(x) = 0. \qquad (2.19) $$
Note that the transversal condition of the photon,
$$ f^{q\sigma}_0(x) = 0, \qquad \sum_{l=1}^{3}\nabla_l f^{q\sigma}_l(x) = 0, \qquad (2.20) $$
belongs to (2.19). In terms of the momentum representation, (2.19) turns into
$$ p^\mu\,\xi^\sigma_\mu(p) = 0. \qquad (2.21) $$
The LSZ reduction formula of some operator $\mathcal{O}$ for the photon reads, for example,
$$ \langle q\sigma; {\rm out}|\,T(A_\nu(y)\mathcal{O})\,|0\rangle = \int d^4x\,\Big[ f^{q\sigma\,*}_\mu(x)\,\Box_x\,\langle 0|\,T(A^\mu(x)A_\nu(y)\mathcal{O})\,|0\rangle $$
$$ \qquad - (1-\alpha)\big( h^*_{q\sigma}(x)\,\Box_x + f^{q\sigma\,*}_\mu(x)\,\partial^\mu_x \big)\,\langle 0|\,T(B(x)A_\nu(y)\mathcal{O})\,|0\rangle \Big], \qquad (2.22) $$
where $h^*_{q\sigma}(x) \equiv \partial^\mu h^{q\sigma\,*}_\mu(x)$.
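The normalization (2.17) and the physical condition (2.21) are easy to exhibit with a concrete basis. The snippet below (our own illustration; the choice of p along the third axis and the unit-vector polarization basis are assumptions of the example) checks both:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])     # metric g_{mu nu}
p = np.array([1.0, 0.0, 0.0, 1.0])       # lightlike photon momentum, p0 = |p|

# A concrete polarization basis for p along the z-axis (illustrative choice)
xi = np.eye(4)                           # xi[sigma] = unit vector e_sigma

# Normalization (2.17): xi^sigma . xi^tau = eta^{sigma tau} = diag(1,-1,-1,-1)
eta = np.array([[xi[s] @ g @ xi[t] for t in range(4)] for s in range(4)])
assert np.allclose(eta, g)

# Physical (transverse) polarizations sigma = 1, 2 obey p^mu xi_mu = 0  (2.21)
for s in (1, 2):
    assert abs(p @ g @ xi[s]) < 1e-12
print("polarization checks passed")
```

The scalar (σ = 0) and longitudinal (σ = 3) members of the basis violate (2.21) individually; it is the combination selected by the condition (2.19) that survives in physical amplitudes.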
Imposing the physical state condition (2.19) to (2.22) by the help of (2.15) and (2.20) we find that the B-containing term in the right hand side is dropped, giving a naïve amplitude consisting solely of A µ 's. The reduction for electrons can be obtained in a usual manner. Accordingly the task is to calculate the vacuum expectation value of the energy-momentum tensor, Θ µν ≡ Θ µν g + Θ µν m , (2.23) where Θ µν g ≡ −F µρ F ν ρ − (A µ ∂ ν B + A ν ∂ µ B) − g µν − 1 4 F µν F µν − A µ ∂ µ B + α 2 B 2 , (2.24) is the photon part and Θ µν m ≡ 1 4 ψi(γ µ ↔ D ν +γ ν ↔ D µ )ψ − g µν ψ i 2 ↔ D −m ψ , (2.25) is the (gauge invariant) electron part. Remove external legs from the vacuum expectation value and multiply photon wave functions, f qσ λ (x)'s, or the free spinors, u(k ′ ), u(k), to obtain the desired quantity; energy-momentum tensor in the physical state. It is convenient to introduce the generating functional such that exp iW [J µ , J, η, η, τ m µν , τ g µν ] ≡ exp iW [J , η, τ ] ≡ 0| T * exp i d 4 x J µ A µ + JB + ηψ + ψη + τ m µν Θ µν m + τ g µν Θ µν g |0 (2.26) from which we obtain the connected Green's functions, G µν;λ 1 ···λm a(=g or m) (0; x 1 , · · · , x n ; y 1 , · · · y m , z 1 , · · · , z m ) ≡ δ δτ a µν (0) δ n iδJ λ 1 (x 1 ) · · · iδJ λn (x n ) δ 2m W [J, η, τ ] δη(y 1 ) · · · δη(y m )δη(z 1 ) · · · δη(z m ) , = 0| T * Θ µν a (0)A λ 1 (x 1 ) · · · A λn (x n )ψ(y 1 ) · · · ψ(y m )ψ(z 1 ) · · · ψ(z m ) |0 conn ≡ n j=1 d 4 q j (2π) 4 m l=1 d 4 k l (2π) 4 d 4 p l (2π) 4 exp   −i n j=1 q j x j − i m l=1 (k l y l − p l z l )   ×G µνλ 1 ···λn a (q 1 , · · · , q n ; k 1 , · · · , k m , p 1 , · · · , p m ) , (2.27) where T * designates a covariant T -product. Calculations are performed by means of the loop expansion. Our ingredients are summarized as follows: • (i) The physical state conditions: for photon q µ ξ σ µ (q) = 0 . (2.28) For electron (p − m) u(p, s) = 0 , u(k, s) (k − m) = 0 . 
(2.29) • (ii) Dimensional regularization; which preserves both gauge symmetry and the Poincaré symmetry. (Note that using a naïve cutoff breaks the situation; see Appendix B.) • (iii) The notion of finiteness of the energy-momentum tensor [9,12]; under which we can proceed only with the unrenormalized form. Divergences can be subtracted in a gauge invariant way in anytime. (A short discussion on renormalization is seen also in Appendix B.) Tree Calculation The tree graphs give us basic vertices of the Feynman rule: q q ′ λ κ Q : G µν;λκ g (q, q ′ ) (0) ≡ −i q 2 X µνλκ (q, q ′ ) −i q ′2 , Q k p : G µν m (k, p) (0) ≡ i k − m Y µν (k + p) i p − m , Q k p q λ : G µν;λ m (q, k, p) (0) 1PI ≡ i k − m −ie q 2 Z µνλ (q) i p − m , (2.30) where the cross, "×" , and "1PI" designate the insertion of Θ µν and the one-particle-irreducible part respectively. X µνλκ (q, q ′ ) in (2.30) is explicitly given as X µνλκ (q, q ′ ) ≡ X µνλκ (q, q ′ ) − q λ X µνκ (q, q ′ ) + q ′ κ X µνλ (q, q ′ ) + αq λ q ′ κ g µν , (2.31) where X µνλκ (q, q ′ ) ≡ (δ µ ρ δ ν σ + δ µ σ δ ν ρ − 1 2 g µν g ρσ )(q ρ g λα − q α g λρ )(q ′ σ δ κ α − q ′ α g κσ ) ,(2.32) comes from F F term in (2.24) thus is gauge invariant but the remaining terms are from AB and BB and then gauge variant: X µνκ (q, q ′ ) ≡ (δ µ ρ δ ν σ + δ µ σ δ ν ρ − g µν g ρσ )q ρ d κσ (q ′ ) , (2.33) where d µν (q) ≡ g µν − (1 − α) q µ q ν q 2 , (2.34) is the numerator of the photon propagator, D µν (q) ≡ −i d µν (q) q 2 . (2.35) Note that the transverseness of X µνλκ (q, q ′ ), q λ X µνλκ (q, q ′ ) = q ′ κ X µνλκ (q, q ′ ) = 0 , (2.36) and the structure of gauge dependent terms: those depend on the external photon momentum as well as the index, i.e. , on q λ , and/or q ′κ , leaving no effect owing to the physical photon condition (2.28). G µν;λκ g (q, q ′ ) (0) is therefore gauge independent. 
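The transverseness (2.36) can be checked mechanically. The following sympy snippet (an illustrative verification, not from the paper; the index bookkeeping and the symbol r standing for q′ are ours) builds the F F part (2.32) with the α-contraction carried out explicitly, and confirms that contracting with either q_λ or q′_κ gives zero:

```python
import sympy as sp

g = sp.diag(1, -1, -1, -1)      # metric; for diag(1,-1,-1,-1), g^{..} = g_{..}
qu = sp.symbols('q0:4')         # contravariant components q^mu
ru = sp.symbols('r0:4')         # contravariant components q'^mu (written r)

def lo(v, mu):                  # lower an index: v_mu = g_{mu nu} v^nu (diagonal metric)
    return g[mu, mu] * v[mu]

qdotr = sum(lo(qu, m) * ru[m] for m in range(4))

def X(mu, nu, lam, kap):
    # Eq. (2.32) with the alpha-contraction carried out:
    # (q^rho g^{lam alpha} - q^alpha g^{lam rho})(q'^sig delta^kap_alpha - q'_alpha g^{kap sig})
    tot = 0
    for rho in range(4):
        for sig in range(4):
            P = (sp.KroneckerDelta(mu, rho) * sp.KroneckerDelta(nu, sig)
                 + sp.KroneckerDelta(mu, sig) * sp.KroneckerDelta(nu, rho)
                 - sp.Rational(1, 2) * g[mu, nu] * g[rho, sig])
            B = (qu[rho] * ru[sig] * g[lam, kap] - qu[rho] * ru[lam] * g[kap, sig]
                 - qu[kap] * ru[sig] * g[lam, rho] + qdotr * g[lam, rho] * g[kap, sig])
            tot += P * B
    return sp.expand(tot)

# Transverseness (2.36): q_lam X^{mu nu lam kap} = 0 = q'_kap X^{mu nu lam kap}
for mu in range(4):
    for nu in range(4):
        for kap in range(4):
            assert sum(lo(qu, l) * X(mu, nu, l, kap) for l in range(4)).expand() == 0
            assert sum(lo(ru, l) * X(mu, nu, kap, l) for l in range(4)).expand() == 0
print("transverseness (2.36) verified")
```

The cancellation is already visible in (2.32): the factor $(q^\rho g^{\lambda\alpha} - q^\alpha g^{\lambda\rho})$ is antisymmetric under the exchange needed for the $q_\lambda$ contraction, and similarly for $q'_\kappa$.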
Next check G µν m (k, p) (0) : Y µν (q) in (2.30) is given by Y µν (q) ≡ 1 2 Γ µνλ q λ + mg µν , (2.37) with Γ µνλ ≡ 1 2 (γ µ g νλ + γ ν g µλ ) − γ λ g µν . (2.38) According to our assumption, (2.10) to (2.11), there is no gauge dependence in the electron sector: Y µν is gauge invariant so is G µν m (k, p) (0) . The 1PI part of G µν;λ m (q; k, p) (0) , Z µνλ (q) in (2.30) , is expressed, in terms of (2.38), by Z µνλ (q) ≡ Γ µνρ d λ ρ (q) . (2.39) As is seen from Fig.2 reducible graphs take part in k p q k p q k p q Q Q Q Fig.2 λ λ λ giving totally k − m i G µν;λ m (q; k, p) (0) p − m i = ie q 2 Y µν (k+p−q) 1 p − q − m γ λ + γ λ 1 k + q − m Y µν (k+p+q) − Γ µνλ − ie(1 − α)q λ (q 2 ) 2 Y µν (k+p−q) 1 p − q − m (p − m) − (k − m) 1 k + q − m Y µν (k+p+q) , (2.40) whose α-dependent terms vanish due to the factor q λ or (k − m) as well as (p − m). Finally G µν;λ g (q; k, p) (0) is also gauge invariant: k − m i G µν;λ g (q; k, p) (0) p − m i = −ie q 2 (k − p) 2 γ ρ X µνλ ρ (q, k−p) + ie q 2 (k − p) 2 q λ γ ρ X µν ρ (q, k−p) + (k − p ) X µνλ (k−p, q) − αq λ g µν ; (2.41) since the second term in the right hand side again vanishes because of q λ and (k − m) as well as (p − m). (Recall that X µνλρ (q, q ′ ) is gauge invariant.) All the tree graphs coupled to physical states are thus gauge independent. One-loop Calculation G µν;λκ g (q, q ′ ) (1) : contributions are from the graphs, p 1 and p 2 . q ′ Q q q ′ Q q q ′ Q q q ′ Q q q ′ Q q q ′ Q q Fig.3 (p 1 ) (p 2 ) (p 3 ) (p 5 ) (p 4 ) (p 6 ) λ κ λ λ λ λ λ κ κ κ κ κ G µν;λκ g (q, q ′ ) (1) = −i q 2 X µνλρ (q, q ′ ) −i q ′2 Π ρσ (q ′ ) −i q ′2 d σκ (q ′ ) + −i q 2 d λρ (q)Π ρσ (q) −i q 2 X µνσκ (q, q ′ ) −i q ′2 ,(2.42) where Π ρσ (q) is the vacuum polarization, Π ρσ (q) ≡ −e 2 d n l (2π) n tr γ ρ 1 l + q − m γ σ 1 l − m ; Π ρσ (q) = (q 2 g ρσ − q ρ q σ )Π(q 2 ) , Π(q) ≡ −ie 2 2 tr1 (4π) 2 Γ (2 − n 2 ) 1 0 dx x(1 − x) m 2 − x(1 − x)q 2 4π n 2 −2 ; (2.43) obeying the transversal condition: q ρ Π ρσ (q) = 0 . 
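The on-shell condition (2.29), which kills every α-dependent term above, can be made concrete with explicit Dirac matrices. A minimal numerical illustration (our own construction; the Dirac representation and the momentum p = (E, 0, 0, p_z) are choices made for the example):

```python
import numpy as np

# Dirac-representation gamma matrices
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
g0 = np.block([[I2, Z2], [Z2, -I2]])
gk = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

m, pz = 1.0, 0.7
E = np.hypot(m, pz)                      # on-shell energy, E = sqrt(m^2 + pz^2)
pslash = E * g0 - pz * gk[2]             # slash(p) = gamma^0 p^0 - gamma^3 p^3

# Positive-energy spinor, spin up, momentum along z
chi = np.array([1.0, 0.0])
u = np.sqrt(E + m) * np.concatenate([chi, (pz / (E + m)) * sz @ chi])

# On-shell condition (2.29): (slash(p) - m) u(p, s) = 0
assert np.allclose((pslash - m * np.eye(4)) @ u, 0)
print("(pslash - m) u = 0 verified")
```

Any gauge-dependent term above carries a factor $(\not\!k - m)$ or $(\not\!p - m)$ standing next to such a spinor, so it drops out of physical matrix elements.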
(2.44) As was mentioned before, gauge dependent parts, in X µνλκ (2.31) and d λκ (2.34), are proportional to q λ and/or q ′κ , then they vanish on account of the physical photon condition (2.28) when the momentum is external or of the transverseness of the vacuum polarization (2.44) when it is internal. G µν;λκ m (q, q ′ ) (1) : graphs are given in p 3 ∼ p 6 . G µν;λκ m (q, q ′ ) (1) ≡ −i q 2 d λ ρ (q)Π µν;ρσ (q, q ′ )d σ κ (q ′ ) −i q ′2 , (2.45) where Π µν;λκ (q, q ′ ) ≡ ie 2 d n l (2π) n tr Γ µνλ 1 l − q ′ /2 − m γ κ 1 l + q ′ /2 − m +γ λ 1 l + q /2 − m Γ µνκ 1 l − q /2 − m −γ λ 1 l + q /2 + q ′ /2 − m Y µν (2l) 1 l − q /2 − q ′ /2 − m γ κ 1 l − q /2 + q ′ /2 − m −γ λ 1 l + q /2 − q ′ /2 − m γ κ 1 l + q /2 + q ′ /2 − m Y µν (2l) 1 l − q /2 − q ′ /2 − m , (2.46) which is free from gauge dependence, reflecting the gauge independence of Θ µν m . The only dangerous part is therefore the gauge term in the propagators, d λ ρ (q), d σ κ (q ′ ). However, Π µν;λκ (q, q ′ ) has a remarkable property, q λ Π µν;λκ (q, q ′ ) = q ′ κ Π µν;λκ (q, q ′ ) = 0 ,(2.q = (l + q /2 − m) − (l − q /2 − m) or the Ward-Takahashi relation discussed below. G µν m (k, p) (1) : graphs are f 1 ∼ f 5 . k p Q k p Q k p Q k p Q k p Q k p Q (f 1 ) (f 3 ) (f 5 ) (f 2 ) (f 4 ) (f 6 ) Fig.4 k − m i G µν m (k, p) (1) p − m i = −ie 2 d n l (2π) n 1 l 2 − Γ µνρ 1 p − l − m γ ρ − γ ρ 1 k − l − m Γ µνρ +Y µν (k + p) 1 p − m γ ρ 1 p − l − m γ ρ + γ ρ 1 k − l − m γ ρ 1 p − l − m Y µν (k + p) +γ ρ 1 k − l − m Y µν (k + p − 2l) 1 p − l − m γ ρ − (1 − α) (l 2 ) 2 (k − m) 1 k − l − m Y µν (k + p − 2l) 1 p − l − m (p − m) − Y µν (k + p) . (2.48) The gauge dependence appears only in the last line whose first term, however, vanishes by the on-shell condition of electron (2.29) so does the second term owing to the property of the dimensional regularization: d n l (2π) n 1 (l 2 ) N = 0 ; N : integer. (2.49) G µν g (k, p) (1) : there contributes only one graph, f 6 . 
k − m i G µν g (k, p) (1) p − m i = ie 2 d n l (2π) n 1 L 2 (L − Q) 2 X µνλκ (Q − L, L)γ λ 1 p + L − m γ κ = ie 2 d n l (2π) n 1 L 2 (L − Q) 2 X µνλκ (Q − L, L)γ λ 1 p + L − m γ κ −X µνκ (Q − L, L)(k − m) 1 p + L − m γ κ +X µνλ (L, Q − L)γ λ 1 p + L − m (p − m) +αg µν k + p 2 − m − (k − m) 1 p + L − m (p − m) , (2.50) where L ≡ l + Q 2 , Q ≡ k − p . (2.51) In the final expression, terms from the second to the last line are gauge dependent but the on-shell condition of electron wipes them out. Note that the important relations for obtaining the gauge independent results are (2.44) and (2.47). We must discuss G µν;λ a (q; k, p) (1) to complete the one loop calculation. The scenario, however, can be realized and furthermore generalized to any order with the aid of the Ward-Takahashi relation. General Proof for Gauge Independence We here show that gauge independence of the energy-momentum tensor holds in any order of the loop expansion. However as is seen in the following, all photon lines can be treated as external and there needs only for considering the tree and the one loop graphs of electron. To grasp this we should note that • (a) All gauge dependent terms, in Θ µν g (, in view of X µνλκ (q, q ′ ) (2.31),) and the propagator (2.34), possess a momentum contractible with a vertex or a photon wave function. The latter vanishes trivially on account of the physical photon condition (2.28) so that we concentrate on the former. To check gauge independence is therefore to check the consequence of the photon momentum contraction to the vertex. The procedure is exactly the same as studying gauge independence of the S-matrix; in other words, checking the cancellation of gauge terms in the photon propagator [17,2]. • (b) On the other hand, the insertion of Θ µν m forms new types of graphs. However Θ µν m itself is gauge invariant so that gauge dependence lies in the photon propagator. 
Checking gauge independence is again realized by the same manipulation of momentum as in (a) to those new graphs. • (c) Because of the fact (a) any internal photon line can be cut out. Graphs that must be taken into account are therefore the tree and the one loop graphs of electron. The program is carried out by means of the Ward-Takahashi relation (WT) derived by applying the BRS transformation to the generating functional W (2.26) [16]: 0| Q B , T * exp i d 4 x J µ A µ + JB + ηψ + ψη + τ m µν Θ µν m + τ g µν Θ µν g |0 = 0 , (2.52) which becomes in view of (1.13) and (1.9) e η δ δη − η δ δη + δ iδJ +(δ µ ρ δ ν σ +δ µ σ δ ν ρ −g µν g ρσ )∂ ρ τ g µν ∂ σ δ iδJ W [J, η, τ ]+ i∂ ρ J ρ = 0 . (2.53) In order to simplify a discussion we further introduce the generating functional of photon amputated Green's functions, Γ [A; η, τ ] ≡ W [J , η, τ ] − d 4 xJ µ (x)A µ (x) − d 4 xJ(x)B(x) , (2.54) where δW [J , η, τ ] δJ µ ≡ A µ , δW [J , η, τ ] δJ ≡ B . (2.55) Γ also obeys WT obtaining from (2.53): e η δ δη − η δ δη − i∂ ρ δ δA ρ Γ [A; η, τ ] − i B −i(δ µ ρ δ ν σ + δ µ σ δ ν ρ − g µν g ρσ )∂ ρ τ g µν ∂ σ B = 0 . (2.56) Owing to the above discussion, we need only the tree Γ (0) and the one loop Γ (1) . Θ µν g -inserted part :a typical subdiagram is seen in Fig.5. Fig.5 From (2.56), e η δ δη − η δ δη − i∂ ρ δ δA ρ Γ [A; η, τ ] B=τ =0 = 0 . (2.57) Write the tree Green's functions of electron as i(−e) n+1 I ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) ≡ d 4 yd 4 z n j=1 d 4 x j exp[iky − ipz + i n j=1 q j x j ] × δ 2 δη(y)δη(z) iδ n+1 Γ (0) [A; η, τ ] δA ρ (0)δA ρ 1 (x 1 ) · · · δA ρn (x n ) A=η=τ =0 ≡ permutation {( q ρ ) , ( q 1 ρ 1 ) ,···, ( qn ρn )} k p q q 1 q n ρ ρ 1 ρ n ,(2.58) with q = p − k − q j . Therefore WT reads q ρ I ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) = −I ρ 1 ···ρn (q 1 , · · · , q n ; k + q, p) + I ρ 1 ···ρn (q 1 , · · · , q n ; k, p − q) (2.59) whose left hand side stands for the contribution from the gauge dependent part in the photon propagator. 
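The statement that the gauge dependent part of the photon propagator drops out against physical sources can be made concrete in a small numerical check (an added illustration, not from the original text): contracting the numerator d_{μν}(q) of (2.34) with any current obeying q_μ J^μ = 0 removes the (1 − α) q_μ q_ν/q² piece, so the result is the same for every α. The momentum value below is arbitrary.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)
q = np.array([1.0, 0.3, -0.4, 0.2])           # off-shell photon momentum, q^2 = 0.71
q_lo = g @ q
q2 = q @ q_lo

# a "conserved current" J^mu: project a random vector transverse to q, q_mu J^mu = 0
rng = np.random.default_rng(1)
v = rng.normal(size=4)
J = v - q * (q_lo @ v) / q2
assert abs(q_lo @ J) < 1e-12

def d_num(alpha):
    """propagator numerator d_{mu nu}(q) of (2.34), both indices lowered"""
    return g - (1.0 - alpha) * np.outer(q_lo, q_lo) / q2

# J^mu d_{mu nu}(q) J^nu is independent of the gauge parameter alpha
vals = [J @ d_num(a) @ J for a in (0.0, 1.0, 3.7)]
print(vals)
assert np.allclose(vals, vals[0])
```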
The LSZ amplitude is obtained by sandwiching (2.59) with u(k, s)(k − m) and (p − m)u(p, s ′ ). The gauge dependent part is therefore becomes finally u(k, s)(k − m)q ρ I ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p)(p − m)u(p, s ′ ) = 0 ; (2.60) since the right hand side of (2.59) cannot escape the cancellation because of the momentum shift k + q or p − q. For the one loop subgraphs define a Green's function, (−e) n+1 Π ρρ 1 ···ρn (q, q 1 , · · · , q n ) ≡ n j=1 d 4 x j exp[i n j=1 q j x j ] iδ n+1 Γ (1) [A; η, τ ] δA ρ (0)δA ρ 1 (x 1 ) · · · δA ρn (x n ) A=η=τ =0 ≡ permutation {( q 1 ρ 1 ) ,···, ( qn ρn )} q q 1 q 2 q k q n ρ ρ 1 ρ 2 ρ k ρ n , (2.61) where q = − q j . WT reads q ρ Π ρρ 1 ···ρn (q, q 1 , · · · , q n ) = 0 (2.62) which tells us that gauge dependence also disappears. Θ µν m -inserted part : there are two type of typical subdiagrams with a Θ µν m insertion in Fig.8. (−e) n+1 I µν;ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) ≡ d 4 yd 4 zd 4 x n j=1 d 4 x j exp[iky − ipz + iqx + i n j=1 q j x j ] × δ 2 δη(y)δη(z) δ δτ m µν (0) δ n+1 Γ (0) [A; η, τ ] δA ρ (x)δA ρ 1 (x 1 ) · · · δA ρn (x n ) A=η=τ =0 ≡ permutation {( q ρ ) , ( q 1 ρ 1 ) ,···, ( qn ρn )} all possible insertions of Θ µν m k p q q 1 q n ρ ρ 1 ρ n ,(2.64) and in the one loop: −i(−e) n+1 Π µν;ρρ 1 ···ρn (q, q 1 , · · · , q n ) ≡ d 4 x n j=1 d 4 x j exp[iqx + i n j=1 q j x j ] δ δτ m µν (0) δ n+1 Γ (1) [A; η, τ ] δA ρ (x)δA ρ 1 (x 1 ) · · · δA ρn (x n ) A=η=τ =0 ≡ permutation {( q 1 ρ 1 ) ,···, ( qn ρn )} all possible insertions of Θ µν m q q 1 q 2 q k q n ρ ρ 1 ρ 2 ρ k ρ n . (2.65) WT for I µν 's and Π µν 's are given as the same as (2.59) and (2.62); q ρ I µν;ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) = −I µν;ρ 1 ···ρn (q 1 , · · · , q n ; k + q, p) + I µν;ρ 1 ···ρn (q 1 , · · · , q n ; k, p − q) , (2.66) q ρ Π µν;ρρ 1 ···ρn (q, q 1 , · · · , q n ) = 0 . The asymptotic electron field is therefore considered as gauge invariant but the relationship to the interpolating field is still unclear. 
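The momentum-shift manipulations behind the Ward-Takahashi relations above, such as writing q̸ = (k̸ + q̸ − m) − (k̸ − m), rest on the Clifford algebra {γ^μ, γ^ν} = 2g^{μν}. A quick numerical confirmation in the Dirac representation (an added illustration):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma matrices in the Dirac representation
g0 = np.block([[I2, Z], [Z, -I2]])
gam = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra {gamma^mu, gamma^nu} = 2 g^{mu nu}
for m in range(4):
    for n in range(4):
        anti = gam[m] @ gam[n] + gam[n] @ gam[m]
        assert np.allclose(anti, 2 * g[m, n] * np.eye(4))

# slash identity qslash qslash = q^2 * 1, the algebra behind steps like (2.47)
rng = np.random.default_rng(5)
q = rng.normal(size=4)
q_lo = g @ q
qs = sum(q_lo[m] * gam[m] for m in range(4))   # qslash = q_mu gamma^mu
assert np.allclose(qs @ qs, (q @ q_lo) * np.eye(4))
print("Clifford algebra and slash identity verified")
```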
To investigate that let us recall the gauge invariant quantities in (classical) electrodynamics: the minimal coupling term, 1) and the field strength tensor F µν (x). The gauge transformation is expressed as ψ(x)iγ µ (∂ µ − ieA µ (x))ψ(x) ,(3.A µ (x) → A µ (x) + ∂ µ χ(x) , ψ(x) → e ieχ(x) ψ(x) , ψ(x) → ψ(x)e −ieχ(x) . (3.2) In terms of the components, the photon part of (3.2) reads as A 0 (x) → A 0 (x) +χ(x) , A(x) → A(x) − ∇χ(x) . (3.3) Now we decompose the vector potential A(x) into A(x) = A T (x) + A L (x) , (3.4) where A T (x)(A L (x)) denotes the transverse (longitudinal) component with respect to the derivative ∇; thus ∇ · A T (x) = 0 , ∇ × A L (x) = 0 . (3.5) In view of (3.2), we then obtain the transformation rule: A T (x) → A T (x) , A L (x) → A L (x) − ∇χ(x) . (3.6) From this we recognize that the transverse part, A T (x),is gauge invariant [5]. The vector ∇ sets up a reference axis along which gauge invariant quantities can be constructed. In order to find other invariant quantities, let us go back to (3.1). First it should be noticed that ψ C inv (x) ≡ exp ie x dz · A L (x 0 , z) ψ(x) , ψ C inv (x) ≡ ψ(x) exp − ie x dz · A L (x 0 , z) ,(3.7) are gauge invariant under (3.3) and (3.6), path-independent owing to (3.5), (hence the beginning point of the integral can be arbitrary), and in fact local. This is essentially the Dirac's physical electron [3,7]; since A L (x) = ∇ ∇ · A ∇ 2 (x) , (3.8) where f ∇ 2 (x) ≡ − 1 4π ∞ −∞ d 3 y f (x 0 , y) |x − y| ,(3.9) so that (3.7) becomes ψ C inv (x) = exp ie ∇ · A ∇ 2 (x) ψ(x) , ψ C inv (x) = ψ(x) exp −ie ∇ · A ∇ 2 (x) ,(3.10) which is the Dirac's electron. The minimal coupling term (3.1) becomes ψ C inv (x)i γ 0 ∂ 0 − ie A 0 (x) + x dz ·Ȧ L (x 0 , z) + γ · ∇ + ieA T (x) ψ C inv (x) , (3.11) leading to the gauge invariant potential, A C µ (x) ≡ A C 0 (x), −A C (x) , (3.12) with A C (x) ≡ A T (x), and A C 0 (x) ≡ A 0 (x) + x dz ·Ȧ L (x 0 , z) . (3.13) Apparently ∇ · A C (x) = 0 . 
(3.14) In view of (3.14) this is nothing but the Coulomb gauge. From this lesson, it should be recognized that to set up gauge invariant operators is nothing but to fix the gauge, whose result also can be seen in a covariant form by Steinmann [4]. We should use the terminology "BRS invariance" here instead of gauge invariance since we now move into quantum theory. The starting Lagrangian is the Nakanishi-Lautrup one (1.7) and the BRS transformation is given by (1.9). (The Faddeev-Popov ghosts are irrelevant all the time in QED.) The BRS invariant fermion fields are thus defined by Ψ (x) ≡ ψ φ inv (x) ≡ exp −ie d 4 y φ µ (x − y)A µ (y) ψ(x) , Ψ (x) ≡ ψ φ inv (x) ≡ ψ(x) exp ie d 4 y φ µ (x − y)A µ (y) ,(3.15) with a real distribution φ µ (x) satisfying ∂ µ φ µ (x) = δ 4 (x) . (3.16) The minimal coupling term becomes Ψ (x) i∂ − m + eγ ν d 4 y φ µ (x − y)F µν (y) Ψ (x) (3.17) so that the BRS invariant potential in the Steinmann's approach is A φ µ (x) ≡ − d 4 y φ ν (x − y)F µν (y) = A µ (x) − ∂ x µ d 4 y φ ν (x − y)A ν (y) = d 4 q (2π) 4 e −iqx (δ λ µ + iq µ φ λ (q)) d 4 y e iqy A λ (y) ,(3.18) where use has been made of the Fourier transformation φ µ (x) = d 4 q (2π) 4 φ µ (q)e −iqx . (3.19) According to the second expression in (3.18), it can be regarded that A µ has been decomposed into the gauge invariant part, A φ µ , and the variant part, A µ (x); A µ (x) = A φ µ (x) + A µ (x) ; A µ (x) ≡ ∂ x µ d 4 y φ ν (x − y)A ν (y) ,(3.20) which corresponds to (3.4). In this case, the reference vector is of course φ µ . (3.16) implies q µ φ µ (q) = i ,(3.21) yielding to q µ δ µ ν + iq ν φ µ (q) = 0 ; φ µ (−q) = −φ µ (q) ,(3.22) which furthermore leads to a projection property: 0| T * A φ λ 1 (x 1 ) · · · A φ λn (x n )Ψ (y 1 ) · · · Ψ (y m )Ψ (z 1 ) · · · Ψ (z m ) |0 , (3.25) in terms of perturbation, imparts a meaning to those BRS invariant operators. 
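The transverse/longitudinal split (3.4)-(3.6) underlying the Coulomb-gauge construction above is diagonal in Fourier space, where A_T is picked out by the projector δ_ij − k_i k_j/k². A hedged numerical sketch (a periodic box stands in for R³; the grid size and variable names are choices made for this illustration) confirming ∇·A_T = 0 and ∇×A_L = 0:

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
A = rng.normal(size=(3, N, N, N))                  # random real vector field on a periodic grid

k1 = 2 * np.pi * np.fft.fftfreq(N)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
k = np.stack([kx, ky, kz])                         # wave vector at each grid point
k2 = (k**2).sum(axis=0)
k2[0, 0, 0] = 1.0                                  # avoid 0/0 at the zero mode

Ak = np.fft.fftn(A, axes=(1, 2, 3))
kdotA = np.einsum('i...,i...->...', k, Ak)

AL = k * kdotA / k2                                # longitudinal part: parallel to k
AT = Ak - AL                                       # transverse part

# divergence of A_T vanishes: k . A_T = 0
div_T = np.einsum('i...,i...->...', k, AT)
# curl of A_L vanishes: k x A_L = 0, since A_L is parallel to k
curl_L = np.cross(k, AL, axisa=0, axisb=0, axisc=0)

print(np.abs(div_T).max(), np.abs(curl_L).max())   # both at rounding-error level
```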
(3.25) can, however, be more simplified, in view of (3.18), such that 0| T * A λ 1 (x 1 ) · · · A λn (x n )Ψ (y 1 ) · · · Ψ (y m )Ψ (z 1 ) · · · Ψ (z m ) |0 = 0| T * A λ 1 (x 1 ) · · · A λn (x n )ψ(y 1 ) · · · ψ(y m )ψ(z 1 ) · · · ψ(z m ) δ µ ν + iq ν φ µ (q) δ ν λ + iq λ φ ν (q) = δ µ λ + iq λ φ µ (q) . (3.23) If φ µ (x) = 0, x 4π|x| 3 δ(x 0 ) ,(3.× exp −ie d 4 s φ ρ (y 1 − s)A ρ (s) · · · exp ie d 4 s φ ρ (z m − s)A ρ (s) |0 . (3.26) Therefore the effect of the physical electron is solely found in the additional vertex, φ µ -vertex as in Fig.11. k k+q q p-q p q -ieφ(-q) ieφ(-q) Fig.11 Furthermore by noting that Ψ simply turns out to be ψ under the loops, tree graphs of electron are only relevant. Especially for the two-point function, 0| T * Ψ (y)Ψ (z) |0 = + + + + + + = d 4 k (2π) 4 e −ik(y−z) i q − m + d 4 l (2π) 4 i q − m ieγ ρ i q − l − m ieγ σ i q − m × −i l 2 g ρσ + il ρ φ σ (l) + iφ ρ (l)l σ − l ρ l σ φ(l)φ(l) + O(e 4 ) . (3.27) Note that the photon propagator has been replaced such that −i l 2 d ρσ (l) −→ −i l 2 g ρσ + il ρ φ σ (l) + iφ ρ (l)l σ − l ρ l σ φ(l)φ(l) . (3.28) This is our statement: the α-dependent propagator turns into a φ-dependent one. Similarly in a multi-point function (3.26), take a single fermion line to which n+1 vertices are attaching. With regard to a special vertex ρ out of which the momentum q flows, there arise two new contributions from φ ρ (q). 
In terms of the notations in the previous section 2.4 it gives i(−e) n+1 I ρρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) +ieφ ρ (q)i(−e) n I ρ 1 ···ρn (q 1 , · · · , q n ; k + q, p) +i(−e) n I ρ 1 ···ρn (q 1 , · · · , q n ; k, p − q)ieφ ρ (q) = i(−e) n+1 δ ρ λ + iφ ρ (q)q λ I λρ 1 ···ρn (q, q 1 , · · · , q n ; k, p) = permutation {( q ρ ) , ( q 1 ρ 1 ) ,···, ( qn ρn )} k p q q 1 q n ρ ρ 1 ρ n + permutation {( q 1 ρ 1 ) ,···, ( qn ρn )}          q n ρ n q 1 ρ 1 q p k q 1 ρ 1 q n ρ n q p k +          , (3.29) where use has been made of WT about I's (2.59) to the second expression. By applying the same manipulation to each vertex, the n + 1-th photon amputated part of (3.26) becomes to (3.26) −→ i(−e) n+1 δ ρ λ + iφ ρ (q)q λ δ ρ 1 λ 1 + iφ ρ 1 (q 1 )q 1 λ 1 · · · δ ρn λn + iφ ρn (q n )q n λn ×I λλ 1 ···λn (q, q 1 , · · · , q n ; k, p) . (3.30) Accordingly we find that each photon index, I ρ··· , is modified to δ ρ κ +iφ ρ (q)q κ I κ··· as the result of adopting the gauge invariant electron Ψ , which, combined with the photon part (3.18), leads us to the result that the photon propagator is modified to D φ µν (q) ≡ δ ρ µ + iφ ρ (q)q µ −i q 2 d ρσ (q) δ σ ν + iφ σ (q)q ν = −i q 2 g µν + iq µ φ ν (q) + iφ µ (q)q ν − q µ q ν φ(q) · φ(q) , (3.31) where use has been made of the projection property (3.22). As expected, gauge (α) dependence has been taken over by φ µ . The covariant Landau gauge is realized by choosing φ µ (l) as φ µ (l) = il µ l 2 , (3.32) giving D L µν (q) ≡ −i q 2 g µν − q µ q ν q 2 . (3.33) Also the Coulomb gauge propagator is, in view of (3.24), given, by choosing φ µ (l) = 0, −il l 2 , (3.34) as D C µν (q) ≡ −i l 2 g µν + j g νj l µ l j + g µj l j l ν − l µ l ν l 2 . (3.35) In this way, we have recognized that building up gauge (BRS) invariant electron and photon is merely synonymous to fixing the gauge. 
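Two consistency checks of this section can be run numerically (added illustrations, not from the original text): (i) the projection property (3.23) holds for any φ^μ obeying the constraint (3.21); (ii) inserting the covariant choice (3.32) into the modified numerator of (3.31) collapses it to the Landau form (3.33). The particular momentum and φ below are only examples.

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)
d = np.eye(4)

q = np.array([1.0, 0.3, -0.4, 0.2])           # q^mu, q^2 = 0.71
q_lo = g @ q
q2 = q @ q_lo

# (i) projection property (3.23): any phi^mu with q_mu phi^mu = i makes
# M^mu_nu = delta^mu_nu + i phi^mu q_nu idempotent; take i q^mu/q^2 plus a transverse part
w = rng.normal(size=4)
w = w - q * (q_lo @ w) / q2                   # q_mu w^mu = 0
phi_up = 1j * q / q2 + w
assert abs(q_lo @ phi_up - 1j) < 1e-12        # constraint (3.21)
M = d + 1j * np.outer(phi_up, q_lo)
assert np.abs(M @ M - M).max() < 1e-9         # (3.23)

# (ii) the choice (3.32): phi^mu(q) = i q^mu / q^2
phi_up = 1j * q / q2
phi_lo = g @ phi_up
phi2 = phi_up @ phi_lo                        # phi . phi = -1/q^2
num = (g + 1j * np.outer(q_lo, phi_lo) + 1j * np.outer(phi_lo, q_lo)
       - np.outer(q_lo, q_lo) * phi2)         # numerator of (3.31)
landau = g - np.outer(q_lo, q_lo) / q2        # numerator of (3.33)
assert np.abs(num - landau).max() < 1e-9
print("projection property and Landau reduction verified")
```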
Although all the operators in the expectation value,

⟨0| T Θ^{μν}(x) A^φ_{λ1}(x_1) ··· A^φ_{λn}(x_n) Ψ(y_1) ··· Ψ(y_m) Ψ̄(z_1) ··· Ψ̄(z_m) |0⟩ ,  (3.36)

with Θ^{μν}(x) given in (2.23), are BRS invariant, the value itself thus depends on φ. Even in this invariant approach the physical state conditions (2.28) and (2.29) are still needed in order to prove the φ-independence of Θ^{μν}(x). Although the BRS invariant electron and photon are thus not so useful for the proof of gauge independence, they do serve as a probe into the structure of the theory. For example, the LSZ-mapping is easily clarified with the aid of Ψ:

Ψ(x) → Z_2^{1/2} ψ^{as}(x) + higher orders ,  (3.37)

together with the photon sector,

A_μ(x) → Z_3^{1/2} A^{as}_μ(x) + higher orders ,  B(x) → Z_3^{−1/2} B^{as}(x) .

Note that the relation (3.37) could be established with the aid of the invariant approach, since from (2.11) the asymptotic electron has been confirmed to be BRS invariant. As was stressed before, if the support of φ_μ is space-like, as in the case of the Dirac's electron, (3.37) can hold as a (weak) operator relation. Strictly speaking, therefore, we can declare that the LSZ-mapping is confirmed only in the case of Dirac's electron. This fact also tells us that in QED the electron can behave as an observable; this statement could then be generalized to QCD as a trial for illustrating the dynamical mechanism of quark confinement [7].

Physical States in Functional Representation

In this section we discuss other physical states in terms of the functional representation. With its aid we also build up the path integral formula in the Coulomb gauge and make an explicit transformation to the covariant gauge. We clarify the reason for this ability.
Other Physical States Consider A 0 = 0 gauge in the conventional treatment [20]: all three components A are assumed dynamical and obey the commutation relations, [ j (x),Ê k (y)] = iδ jk δ(x − y), [ j (x), k (x)] = 0 = [Ê j (x),Ê k (x)]; (j, k = 1, 2, 3) . (4.1) Here and hereafter the caret designates operators. The physical state condition (1.18) is then Φ(x)|phys ≡ 3 k=1 (∂ kÊk (x)) + J 0 (x) |phys = 0 , (4.2) where J µ (x) is supposed as a c-number current. First this should be read such that there is no gauge transformation in the physical space. As was mentioned in the introduction, the representation of the physical state cannot be obtained within the usual Fock space sinceΦ(x) is a local operator to result inΦ(x) = 0 [18], but can be in the functional (Schrödinger) representation [21]: A(x)|{A} = A(x)|{A} , E(x)|{E} = E(x)|{E} , {A}| E(x) = −i δ δA(x) {A}| , . . . (4.3) To see the reason take the states, |{A} , |{E} , which can be constructed in terms of the Fock states. The creation and annihilation operators are given by Now recall the quantum mechanical case [22]: A(x) = d 3 k (2π) 3/2 2|k| (a(k)e ik·x + a † (k)e −ik·x ) [a i (k), a † j (k ′ )] = δ ij δ(k − k ′ ), [a i (k), a j (k ′ )] = 0 ,q|q = q|q ,p|p = p|p , q = 1 √ 2 a + a † ,p = 1 √ 2i a − a † ; a|0 = 0 ,(4.6) then |q = 1 π 1/4 exp − q 2 2 + √ 2qa † − (a † ) 2 2 |0 , |p = 1 π 1/4 exp − p 2 2 + √ 2ipa † + (a † ) 2 2 |0 . (4.7) These bring us to |{A} ∼ exp − 1 2 d 3 x d 3 y A(x)K(x − y)A(y) + d 3 x d 3 k 2|k| (2π) 3 A(x)·a † (k)e −ik·x − 1 2 d 3 k a † (k)·a † (−k) |0 ,(4.8) where K(x) ≡ d 3 k (2π) 3 |k|e ik·x ,(4.9) which is apparently divergent so we must introduce some cut-off. The physical state in the functional representation is thus found as {A}|Φ(x)|phys = −i∇ δ δA(x) − J 0 (x) Ψ phys [A] = 0 , (4.10) where Ψ phys [A] ≡ {A}|phys . (4.11) Now we can see the reason for having the physical state in this case. 
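The state |q⟩ of (4.7) can be probed in a truncated Fock space (an added numerical illustration; the truncation dimension D is an arbitrary choice). Since the exponent contains only creation operators, the coefficients below the truncation edge are exact, and the eigenvalue relation q̂|q⟩ = q|q⟩ can be checked there:

```python
import numpy as np

D = 80                                            # truncation of the Fock space (illustrative)
ad = np.diag(np.sqrt(np.arange(1, D)), -1)        # creation operator a^dagger
a = ad.T                                          # annihilation operator
qop = (a + ad) / np.sqrt(2)                       # position operator q = (a + a^dagger)/sqrt(2)

q0 = 0.7                                          # eigenvalue to test
E = np.sqrt(2) * q0 * ad - ad @ ad / 2            # exponent of (4.7), normalization dropped

# |q> ~ exp(E)|0>: E is strictly lower triangular, so the series terminates
psi = np.zeros(D); psi[0] = 1.0
term = psi.copy()
for k in range(1, D):
    term = E @ term / k
    psi = psi + term

# q|q> = q0 |q> holds exactly on components unaffected by the truncation
res = np.abs((qop @ psi - q0 * psi)[:D - 2]).max()
print(res)   # rounding-error level
```

Components at the very top of the truncated space feel the cutoff, which is why the comparison stops two entries short.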
Within a single Fock state the physical state condition (4.2) merely impliesΦ(x) = 0 but the functional representation consists of infinitely many collections of inequivalent Fock spaces: since the inner product of |{A} (4.8), to the Fock vacuum is found to be {A}|0 ∼ exp − 1 2 d 3 x d 3 y A(x) K(x − y)A(y) = exp − 1 2 d 3 k (2π) 3 |k| A(k)A(−k) −→ 0 ; ∀ {A} ,(4.12) when the cut-off becomes infinity. This happens in any value of A(x). Therefore (apart from the mathematical rigorousness of that) any local first class constraint can be realized by means of the functional representation. Furthermore the fact that the functional representation contains an infinite set of the Fock states enables us to perform an explicit gauge transformation and prove gauge independence without recourse to any physical state conditions. Proof of Gauge Independence by Path Integral Recall that the path integral formula can be obtained with the aid of the functional representation. It then might be easily convinced that we can move freely from one gauge to another in the path integral [8,23]. Take the Coulomb case. The Hamiltonian is given by H = d 3 x 1 2 E T 2 + 1 2 (∇ i A T ) 2 + 1 2 J 0 1 ∇ 2 J 0 − J · A ,(4.13) where the third term in the right hand side is the nonlocal Coulomb energy term with J µ being assumed as c-number sources. The equal time commutation relations are [ i (x),Ê j (y)] = i δ ij − ∇ i ∇ j ∇ 2 (x, y) ≡ iP ij (x, y) , [ i , j ] = [Ê i ,Ê j ] = 0 ,(4.14) where P ij (x, y) = d 3 k e ik·(x−y) δ ij − k i k j k 2 , (4.15) which can be diagonalized [24] by means of S such that SP S T =   1 1 0   ,(4.16) where T designates the transpose. Explicitly S 1i = n i , S 2i = ∇ |∇| × n i , S 3i = ∇ i |∇| , (4.17) where n is some vector perpendicular to ∇; ∇ · n = 0. Owing to S two components can be picked out so that we can write (omitting the caret) (SA) a = A a , (SE) a = E a , (a = 1, 2) ,(4.18) and [ A a , E b ] = iδ ab , (a, b = 1, 2) . 
(4.19) Also by noting that E 2 T = E i P ij E j = ( E a ) 2 ,(4.20) we obtain H(t) = d 3 x 1 2 ( E a ) 2 + 1 2 (∇ i A a ) 2 + 1 2 J 0 1 ∇ 2 J 0 − J a A a ,(4.21) where J a ≡ (SJ ) a and to specify the explicit time dependence through the Coulomb energy term we have written the Hamiltonian as H(t). The summation convention for the repeated indices must be implied. The starting point of the path integral is [23], Z(T ) ≡ lim N →∞ (I − i∆tH N ) (I − i∆tH N −1 ) · · · (I − i∆tH 1 ) ,(4.22) where ∆t ≡ T /N and H j ≡ H(j∆t). (Usually the Euclidean technique, T → −iT , must be used [23]. Here in order to illustrate the way to get the path integral expression we keep i in the trace formula. Also to make a whole discussion well-defined it is necessary to discretize space x but in the following the continuum expression is employed only for the notational simplicity.) The essential ingredients 1 are the functional (Schrödinger) representation (4.3) together with D A a (x)|{ A} { A}| = I , D E a (x)|{ E} { E}| = I ; (4.23) { E}|{ A} = x 1 2π exp −i d 3 x E a (x) A a (x) , (4.24) whose (infinite) constant will be absorbed in the functional measure in the following. Inserting (4.23) into (4.22) successively and using (4.24), we obtain the path integral expression, Z(T ) = lim N →∞ N j=1 D A a (x; j)D E a (x; j) exp i∆t d 3 x × E a (x; j) A a (x; j) − A a (x; j − 1) ∆t − 1 2 E a (x; j) 2 + 1 2 ∇ i A a (x; j) 2 + 1 2 d 3 y J 0 (x; j) 1 ∇ 2 (x, y)J 0 (y; j) − J a (x; j) A a (x; j) = D A a (x)D E a (x) exp i d 4 x E a˙ A a − 1 2 E a 2 + 1 2 ∇ i A a 2 + 1 2 J 0 1 ∇ 2 J 0 − J a A a ,1 = D A 3 D E 3 δ( A 3 )δ( E 3 ) (4.26) into (4.25) and changing the variables to the original A and E in view of (4.17) and (4.18), we obtain Z(T ) = DE DA (det ∇ 2 )δ(∇·E)δ(∇·A) exp i d 4 x E ·Ȧ −( 1 2 E 2 + 1 4 (F ij ) 2 + 1 2 J 0 1 ∇ 2 J 0 − J · A) . 
(4.27) With the use of Fourier transformation of the delta function, δ(∇·E) = Dβ exp i d 4 x β ∇·E ,(4.28) the integration with respect to E can be performed to obtain Z(T ) = DA Dβ (det ∇ 2 )δ(∇·A) × exp i d 4 x 1 2 (Ȧ − ∇β) 2 − 1 4 (F ij ) 2 − 1 2 J 0 1 ∇ 2 J 0 + J · A . (4.29) Here by reviving A 0 in the form of A 0 = β + J 0 ∇ 2 (4.30) the nonlocal (as well as instantaneous) Coulomb interaction is eliminated to leave the final form; Z(T ) = DA µ (det ∇ 2 ) δ(∇·A) exp −i d 4 x 1 4 F µν F µν + J µ A µ . (4.31) Here using the relation, δ(∇·A) ∼ lim α→0 exp − i 2α d 4 x (∇·A) 2 , (4.32) to (4.31) then integrating with respect to the gauge fields we obtain Z(T ) = exp i d 4 x d 4 y 1 2 J µ (x)D µν (x − y)J ν (y) ,(4.33) where D µν (x) is the Fourier transformation of the propagator (3.35). It is now a simple task to go to another gauge [23]. Suppose the new gauge condition is given by ∂ µ A ′ µ (x) = f (x) ,(4.34) where f (x) is an arbitrary function. The gauge transformation is A ′ µ (x) = A µ (x) + ∂ µ χ(x) . (4.35) In order to find such χ(x), first we rewrite (4.34) aṡ A ′ 0 (x) = f (x) − ∇ · A ′ (x) ,(4.36) and substituting (4.35) into this to find χ(x) = f (x) −Ȧ 0 (x) . (4.37) Thus the Jacobian is read as det δA ′ µ δA ν = det δ ν µ − δ ν 0 ∂ µ ∂ 0 = det(−∇ 2 −1 ) ,(4.38) giving DA µ (det ∇ 2 )δ(∇·A) = DA ′ µ (det )δ(∂ µ A ′ µ − f ) ,(4.39) where the minus sign in the determinant is irrelevant so we have dropped it. Therefore Z(T ) = DA µ (det )δ(∂ µ A µ − f ) exp − i d 4 x 1 4 F µν F µν + J µ A µ . (4.40) In this way, the gauge transformation in the path integral expression can be performed straightforwardly according to the fact that the functional representation has infinitely many inequivalent representations. Discussion In this paper, we recognize that the Belinfante's energy-momentum tensor is gauge independent for all orders under the LSZ asymptotic conditions in §2. 
Meanwhile, we have seen that picking out a gauge invariant electron or photon is merely synonymous with fixing the gauge: the transverse part A_T, with ∇·A_T = 0, is gauge invariant, which is equivalent to the Coulomb gauge; in this case ψ itself becomes gauge invariant. It has sometimes been argued that gauge symmetry is not a symmetry but rather a redundancy [26]. Only two components are needed, but in order to recover rotational as well as Lorentz invariance two spurious components have been added. These spurious components move around under gauge transformations, leaving the physical components unchanged. Our observation in §3 is therefore natural: picking out the gauge invariant quantities spoils manifest Lorentz or rotational invariance. (If Lorentz invariance is kept manifest, the negative metric must be introduced, so that operators themselves lose their significance without recourse to physical state conditions [27].) The situation reminds us of lattice gauge theory [28], in which gauge invariance is maintained at the sacrifice of Lorentz (Euclidean) as well as rotational invariance. The method provides a nonperturbative treatment in a gauge invariant way, leading to confinement in terms of an area law [28] with the help of an analogy with critical phenomena in statistical mechanics. However, more physical and concrete views are necessary in order to understand the confinement problem thoroughly: for example, the existence of Dirac's physical electron assures us that there is no confinement in QED. Therefore, if proof could be given in QCD that physical quark fields cannot be built up, the issue would be resolved. The Gribov ambiguity [19] would be the cornerstone of such a proof: the canonical commutation relations (as a result of gauge fixing) between gauge fields can hold only within some small region around a point where the coupling constant is small enough.
Enlarging the region, another gauge configuration comes into play, leading us to the impossibility of observing quarks [7]. According to the discussion in §4, the path integral formulation would be most suitable for treating gauge transformations, since the functional representation, which is the basis of the path integral formula, is composed of infinitely many Hilbert spaces. The issue is then how to patch those small regions together so as to cover the whole functional space. Hints might be found in the recent observations on quantum mechanics on nontrivial manifolds [29] and on the path integral formula for a generic constraint [30].

Acknowledgments

T. K. received profound questions on gauge invariance from Y. Takahashi, which were the starting point of this work, and is grateful to N. Nakanishi for his guidance to the covariant canonical LSZ formalism. Discussions with D. McMullan were beneficial. The authors also thank H. Yonezawa for his calculation of the energy-momentum tensor in connection with the violation of the Ward-Takahashi relation.

A The Covariant LSZ-Formalism in QED

In this appendix we review, to keep the paper self-contained, the asymptotic behavior of photon fields and the LSZ reduction formula given by Nakanishi [13].

A.1 Asymptotic Photon Field

In order to know the behavior of the asymptotic fields it is necessary to investigate the Heisenberg fields. Start with the Nakanishi-Lautrup Lagrangian (1.7) in a renormalized form,

L = −(1/4) Z_3 F^{μν} F_{μν} + ψ̄( (i/2) γ^μ ∂↔_μ + e Z_3^{1/2} A̸ − m )ψ − A^μ ∂_μ B + (α/2) B² ,  (A.1)

where all quantities are assumed renormalized except ψ, m, and e, and Z_3 is the wave function renormalization constant of the photon. The equations of motion are

□A_μ − (1 − α) ∂_μ B = j̃_μ ,  ∂^μ A_μ + αB = 0 ,  □B = 0 ,  (A.2)

where

j̃_μ = Z_3^{−1/2} j_μ − (1 − Z_3^{−1}) ∂_μ B ,  (A.3)

and j_μ ≡ −e ψ̄ γ_μ ψ .
(A.4) The four-dimensional commutation relations among A µ 's and B are found as [ A µ (x) , B(y) ] = −i ∂ x µ D(x − y) , [ B(x) , B(y) ] = 0 , [ B(x) , j µ (y) ] = 0 , (A.5) where D(x) is the invariant delta function, D(x) ≡ d 4 p (2π) 3 i ǫ(p 0 )δ(p 2 ) e −ipx . (A.6) In order to obtain those for A µ 's compute first 0| j µ (x)j ν (y) |0 and then utilize (A.2), (A.5), as well as [ A k (x 0 , x) , . A l (x 0 , y) ] = − i Z 3 g kl δ 3 (x − y) , (A.7) to find [31] 0| [ A µ (x) , A ν (y) ] |0 = −i g µν − K ∂ µ ∂ ν D(x − y) + i (1 − α) ∂ µ ∂ ν E(x − y) − i Z 3 ∞ +0 ds σ(s) g µν + ∂ µ ∂ ν s ∆(x − y; s) , (A.8) where K ≡ 1 Z 3 ∞ +0 ds σ(s) s , (A.9) Z 3 ≡ 1 − ∞ +0 dsσ(s) , (A.10) with σ(s) being the spectral function, and ∆(x; s) and E(x) are expressed as ∆(x; s) ≡ d 4 p (2π) 3 i ǫ(p 0 )δ(p 2 − s) e −ipx , E(x) ≡ d 4 p (2π) 3 i ǫ(p 0 )δ ′ (p 2 ) e −ipx ; δ ′ (a) ≡ d da δ(a) . (A.11) Once the four-dimensional commutation relations is obtained, so can be those for the asymptotic fields, A as µ (x), B as (x), by simply throwing away the continuous spectrum part in (A.5) and (A.8): [ A as µ (x) , A as ν (y) ] = −i g µν − K ∂ µ ∂ ν D(x − y) + i (1 − α) ∂ µ ∂ ν E(x − y) , [ A as µ (x) , B as (y) ] = −i ∂ x µ D(x − y) , [ B as (x) , B as (y) ] = 0 . (A.12) From this the equations of motion for the asymptotic fields reads A as µ − (1 − α) ∂ µ B as = 0 , ∂ µ A as µ + αB as = 0 , B as = 0 . (A.13) It should be noted that the canonical structure (A.12) differs from that of the free theory in which the equations of motion is the same as (A.13) but the commutation relations is given without K! The Lagrangian leading to (A.12) as well as (A.13) is found as L as = − 1 4 F asµν F as µν − A asµ ∂ µ B as + K 2 ∂ µ B as ∂ µ B as + α 2 (B as ) 2 . (A.14) Note the existence of the kinetic term of B. A.2 Wave Functions To obtain the LSZ formula, there needs to construct wave functions. 
When α = 1 three sets of positive frequency functions {h kσ µ (x)}, {f kσ µ (x)} and {g k (x)} must be prepared to meet the equations, 2 A µ (x) = 0 , B(x) = 0 . (A.15) Those then must obey h kσ 16) where k and σ denote the momentum and the polarization of photon respectively. An explicit representation is obtained by an orthonormal set, {ϕ k (p)}, µ (x) = f kσ µ (x) , f kσ µ (x) = 0 , g k (x) = 0 , (A.k ϕ k (p) ϕ k * (q) = δ(p − q) , d 3 p ϕ k * (p) ϕ l (p) = δ kl , (A.17) and by a polarization vector ξ σ µ (p), 3 σ=0 3 τ =0 ξ σ µ (p)η στ ξ τ ν (p) = g µν , 3 µ=0 ξ σ µ (p) ξ τ µ (p) = η στ , (A. 18) where diag(η στ ) = (1, −1, −1, −1); diag(g µν ) = (1, −1, −1, −1). With these we have B Invariant Regularization and Finiteness for the Energy-Momentum Tensor In this appendix, we show, although might be well-known as a "common" sense, that naïvely introduced cut-off, 1 p 2 −→ lim Λ→∞ 1 p 2 −Λ 2 p 2 − Λ 2 N , (B.1) where N is some suitable number to make the whole integral finite, breaks the Lorentz invariance but the dimensional regularization does not; since there seems very few examples for demonstrating this "common" sense explicitly. In the subsequent section, finiteness for the energy-momentum tensor is argued. B.1 A Need for Invariant Regularization To simplify the discussion, consider the single scalar model described by L = 1 2 ∂ µ φ∂ µ φ − µ 2 2 φ 2 − λ 4! φ 4 + (Z − 1) 2 ∂ µ φ∂ µ φ − µ 2 φ 2 − µ 2 2 Z(Z µ − 1)φ 2 − λ 4! (Z λ Z 2 − 1)φ 4 , (B.2) where all quantities are renormalized and φ bare = Z 1 2 φ, µ 2 bare = Z µ µ 2 , λ bare = Z λ λ . (B. 3) The energy-momentum tensor is, Θ µν = Z∂ µ φ∂ ν φ − g µν Z 2 ∂ λ φ∂ λ − µ 2 2 ZZ µ φ 2 − λ 4! Z 2 Z λ φ 4 . (B.4) (For brevity's sake we do not consider an improvement; which accuires importance in the case of the trace identity [11].) 
By following the standard procedure [9], the Ward-Takahashi relation (WT) for the amputated Green's function, Γ , is found to be ∂ x µ Γ µν (x; y, z) + i∂ ν x δ 4 (x − y) Γ (x, z) + i∂ ν x δ 4 (x − z) Γ (x, y) = 0 , Up to the one-loop, in view of Fig.3, it gives G µν;λκ (q, q ′ ) = −i q 2 X µνλκ (q, q ′ ) −i q ′2 + −i q 2 X µνλρ (q, q ′ ) −i q ′2 Π ρσ (q ′ ) −i q ′2 d σκ (q ′ ) + −i q 2 d λρ (q)Π ρσ (q) −i q 2 X µνσκ (q, q ′ ) −i q ′2 + −i q 2 d λ ρ (q)Π µνρσ (q, q ′ )d σ κ (q ′ ) −i q ′2 , (B.15) whose expressions from the first to the final line come from (2.30), (2.42), and (2.45) respectively. The Lagrangian is now the Nakanishi-Lautrup one in the fully renormalized form, L = − 1 4 F µν F µν − A µ ∂ µ B + α 2 B 2 + ψ( i 2 ↔ ∂ −m)ψ + eψA ψ −(Z 3 − 1) 1 4 F µν F µν + (Z 2 − 1)ψ( i 2 ↔ ∂ −m)ψ −(Z m − 1)Z 2 mψψ + (Z 1 − 1)eψA ψ , (B.16) with the relation between the bare and the renormalized quantities: A µ bare = Z The energy-momentum tensor is therefore found as Θ µν = −Z 3 F µρ F ν ρ − g µν 4 F ρσ F ρσ −(A µ ∂ ν B + A ν ∂ µ B) − g µν α 2 B 2 − A ρ ∂ ρ B +Z 2 i 4 ψ(γ µ ↔ ∂ ν +γ ν ↔ ∂ µ )ψ − g µν ψ i 2 ↔ ∂ −Z m m ψ +eZ 1 1 2 ψ(γ µ A ν + γ ν A µ )ψ − g µν ψA ψ . (B.18) In view of (B.15) divergences lies in the vacuum polarization, Π µν (q), and Π µν;ρσ (q, q ′ ). As usual we remove the divergence of Π µν (q): the photon propagator up to the one loop reads d 4 xe iqx 0| T A µ (x)A ν (0) |0 = −i q 2 d λκ (q) + −i q 2 (g λκ q 2 − q λ q κ ){Π(q) − i(Z 3 − 1)} −i q 2 , (B.19) where Π(q) has been given in (2.43), Π(q) = −ie 2 2 tr1 (4π) 2 Γ (2 − Z 3 is chosen to cancel out the divergent part of Π(q). While the superficial degree of divergence for Π µν;λκ (q, q ′ ) is two so that the Taylor expansion in the expression in (2.46) gives Π µν;λκ (q, q ′ ) = iΠ(0) X µνλκ (q, q ′ ) + O((q, q ′ ) 3 ) , (B.21) where O((q, q ′ ) 3 ) is finite. 
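The divergent factor absorbed into Z_3 is Γ(2 − n/2) in (2.43)/(B.20); writing n = 4 − 2ε this is Γ(ε), a simple pole of residue 1, accompanied by the Feynman-parameter weight x(1 − x), whose integral is 1/6. A small symbolic check (added here as an illustration):

```python
from sympy import symbols, gamma, limit, integrate, Rational

eps, x = symbols('epsilon x', positive=True)

# with n = 4 - 2*epsilon, Gamma(2 - n/2) = Gamma(epsilon): a simple pole with residue 1
assert limit(eps * gamma(eps), eps, 0) == 1

# the Feynman-parameter weight of (2.43)/(B.20) at its q^2-independent leading order
assert integrate(x * (1 - x), (x, 0, 1)) == Rational(1, 6)
print("pole residue 1; Feynman-parameter integral = 1/6")
```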
Now recall that (B.15) is rewritten in terms of the renormalized form such as

    G^{µν;λκ}(q, q′) = (−i/q²) Z₃ X^{µνλκ}(q, q′) (−i/q′²)
      + (−i/q²) { α q^λ q′^κ g^{µν} − q^λ X^{µνκ}(q, q′) − q′^κ X^{µνλ}(q′, q) } (−i/q′²)
      + (−i/q²) X^{µνλρ}(q, q′) (−i/q′²) ( q′² g_{ρσ} − q′_ρ q′_σ ) { Π(q′) − i(Z₃ − 1) } d^{σκ}(q′) (−i/q′²)
      + (−i/q²) d^{λρ}(q) ( q² g_{ρσ} − q_ρ q_σ ) { Π(q) − i(Z₃ − 1) } (−i/q²) X^{µνσκ}(q, q′) (−i/q′²)
      + (−i/q²) Π^{µν;λκ}(q, q′) (−i/q′²) ,  (B.22)

where use has been made of the transversal condition (2.47) of Π^{µν;λκ}(q, q′) in the final line. Note that it has a finite combination Π(q) − iZ₃, since

    the first term + the last term = (−i/q²) i( Π(0) − iZ₃ ) X^{µνλκ}(q, q′) (−i/q′²) + O((q, q′)³) .  (B.23)

Therefore there are no divergences in G^{µν;λκ}(q, q′).

The commutators

    [ B(x), ψ(y) ] = e ψ(y) D(x − y) ,  [ B(x), ψ̄(y) ] = −e ψ̄(y) D(x − y) ,  (2.10)

would become

    [ B_as(x), ψ_as(y) ] = 0 ,  [ B_as(x), ψ̄_as(y) ] = 0 .  (2.11)

Note that there remain only two components out of the 16 ξ^σ_µ(p)'s, since there are 10 orthonormal conditions, (2.17), together with 4 physical state conditions, (2.21), leaving us two components. (Notations and further details should be consulted in Appendix A.)

In this case (Fig. 8) WT reads

    ( e η δ/δη − e η̄ δ/δη̄ − i ∂_ρ δ/δA_ρ ) δΓ[A; η, η̄] / δτ^{µν}_m = 0 .  (2.63)

Define the Θ^{µν}_m-inserted Green's functions in the tree order:

(3.15) becomes the Dirac electron (3.10), and (3.18) becomes (3.12). Likewise, if the support of φ_µ lies in a space-like region, the BRS invariant operators, (3.15) and (3.18), are well-defined by all means. On the contrary, if φ_µ's support includes a time-like region, the expressions for the Ψ's lose their meaning by themselves because of the non-commutativity of the A's and ψ. However, even in this case perturbation can lay down the definition: for example, considering the quantity,

we have again employed a continuous expression in the final line.
(The periodic boundary condition A_a(x; T) = A_a(x; 0) is now irrelevant, so we have not specified it.) Now inserting, one finds

    (k + p)_µ Γ^{µν}(k, p) + i k^ν Γ(−p) + i p^ν Γ(k) = 0 ,  (B.6)

with

    Γ^{µν}(x; y, z) = ∫ d⁴k d⁴p / (2π)⁸ e^{−iky} e^{−ipz} e^{i(k+p)x} Γ^{µν}(k, p) ,  Γ(x, y) = ∫ d⁴k / (2π)⁴ e^{−ik(x−y)} Γ(k) .  (B.7)

With the use of the Feynman cut-off (B.1), Γ(k) is obtained as

    Γ(k) = −i(k² − µ²) − Σ(k²) + i(Z − 1)(k² − µ²) − i µ² Z (Z_µ − 1) ,  (B.8)

(2.47), with which there remains no gauge dependence in (2.45). (Or the physical state condition for photon, (2.28), also wipes out the gauge dependence in this case.) The relation (2.47) is easily recognized by means of a simple and straightforward manipulation such that

Gauge Invariant Approaches

According to the discussions in the foregoing sections, the LSZ asymptotic states are gauge invariant with the use of the physical state conditions (2.28) and (2.29). However, it is preferable to introduce gauge invariant operators for photon and electron, which would guarantee gauge independence more directly.

Path integral expressions for a relativistic field, for instance φ(x), obtained via the holomorphic (canonical coherent) representation [25] in terms of the creation and annihilation operators a(k)†, a(k), must suffer from nonlocality whenever going back to the φ(x)-representation, due to the mixing of particle and anti-particle. In order to get rid of this difficulty, we should start with the field-diagonal φ(x)-representation, which is nothing but the functional Schrödinger one. (A.19)

Now it is almost straightforward to see that the following relations hold:

where E^{(+)}(x − y) and D^{(+)}(x − y) are the positive-frequency parts of E(x − y) and D(x − y) respectively, and

(There needs some regularization to define E^{(+)}(x − y) because of its logarithmic divergence. However, this divergence drops out when differentiated with respect to x^µ.)
There hold additional relations:

A.3  Fock State and the LSZ Formula

Before constructing the LSZ formula, we define the creation and annihilation operators,

The asymptotic one-photon states are given by

Those satisfy, by means of (A.12),

Armed with these we can construct the LSZ formula: to begin with, we rewrite (A.23) as

where use has been made of the equations of motion (A.13). Under the asymptotic condition

Taking the vacuum expectation values of (A.28), we obtain the LSZ formulas:

For the case of the Feynman gauge (α = 1), the ⟨0| T(O B(x)) |0⟩ terms in (A.30) disappear, so that we have much simpler formulas.

with

Put Z = 1 and choose Z_µ such that Σ(0) − iµ²(Z_µ − 1) be finite. Therefore

As for Γ^{µν}(k, p), since Z_λ = 1 and Z = 1,

where q = k + p. The integral in the first line is just the same as Σ(0); then WT (B.6) reads

Therefore WT (B.6) cannot be met by the Feynman cut-off scheme (B.1), which implies that translational invariance is broken in this regularization. On the contrary, adopting the dimensional regularization we have, instead of (B.13),

because of the fact that we are free to make a shift of the loop momentum in the dimensional regularization.

B.2  Cancellation of the Divergence in G^{µν;λκ}(q, q′)

Finiteness of the energy-momentum tensor has already been proven in [9], and QED is indeed the case. To see this, we here show the cancellation of divergences in

    G^{µν;λκ}(q, q′) ≡ G^{µν;λκ}_g(q, q′) + G^{µν;λκ}_m(q, q′) .  (B.14)

References

P. A. M. Dirac, "Lectures on Quantum Mechanics," Belfer Graduate School of Science, Yeshiva University, New York, 1964.
I. Bialynicki-Birula, Phys. Rev. D2 (1970) 2877. Also B. W. Lee and J. Zinn-Justin, Phys. Rev. D5 (1972) 3121, 3137; G. 't Hooft and M. Veltman, Nucl. Phys. B50 (1972) 318.
P. A. M. Dirac, "The Principles of Quantum Mechanics,"
p. 302, Oxford University Press, Oxford, 1958.
O. Steinmann, Ann. Phys. 157 (1984) 232.
T. Kashiwa and Y. Takahashi, "Gauge Invariance in Quantum Electrodynamics" (KYUSHU-HET-14, January 1994), unpublished.
M. Lavelle and D. McMullan, Phys. Rev. Lett. 71 (1993) 3758; Phys. Lett. 312B (1993) 211.
M. Lavelle and D. McMullan, Phys. Lett. 329B (1994) 68. Also see "Constituent Quarks From QCD" (Plymouth Preprint MS-95-06).
See for example E. S. Abers and B. W. Lee, Phys. Rep. 9C (1973) 1.
C. G. Callan, S. Coleman, and R. Jackiw, Ann. Phys. 59 (1970) 42.
J. M. Jauch and F. Rohrlich, "The Theory of Photons and Electrons," pp. 410-415, Springer-Verlag, New York, 1976.
T. Kashiwa, Lettere al Nuovo Cimento 16 (1976) 283, and Prog. Theor. Phys. (Kyoto) 62 (1979) 250.
See for example S. Coleman and R. Jackiw, Ann. Phys. 67 (1971) 552.
D. Z. Freedman, I. V. Muzinich and E. J. Weinberg, Ann. Phys. 87 (1974) 95; D. Z. Freedman and E. J. Weinberg, Ann. Phys. 87 (1974) 354.
N. Nakanishi, Prog. Theor. Phys. 52 (1974) 1929.
N. Nakanishi, Prog. Theor. Phys.
35 (1966) 1111; 49 (1973) 640. B. Lautrup, Mat. Fys. Medd. Dan. Vid. Selsk. 35 (1967) 29.
C. Becchi, A. Rouet and R. Stora, Ann. Phys. 98 (1976) 287; T. Kugo and I. Ojima, Phys. Lett. 73B (1978) 459.
T. Kugo and I. Ojima, Prog. Theor. Phys. Suppl. No. 66 (1979) 1.
J. D. Bjorken and S. D. Drell, "Relativistic Quantum Fields," p. 197, McGraw-Hill, Inc., 1965.
P. G. Federbush and K. A. Johnson, Phys. Rev. 120 (1960) 1926.
P. Roman, "Introduction to Quantum Field Theory," p. 381, John Wiley & Sons, Inc., 1969.
H. D. I. Abarbanel and J. Bartels, Nucl. Phys. B136 (1978) 237; V. N. Gribov, Nucl. Phys. B139 (1978) 1.
J. L. Gervais and B. Sakita, Phys. Rev. D18 (1978) 453; N. H. Christ and T. D. Lee, Phys. Rev. D22 (1980) 939.
R. Floreanini and R. Jackiw, Phys. Rev. D37 (1988) 2206.
T. Kashiwa, Prog. Theor. Phys. 70 (1983) 1124.
T. Kashiwa and M. Sakamoto, Prog. Theor. Phys. 67 (1982) 1927. Also see T. Kashiwa, Prog. Theor. Phys. 66 (1981) 1858.
Y. Takahashi, Physica 31 (1965) 205.
L. D. Faddeev and A. A. Slavnov, "Gauge Fields," chap. 3, Benjamin, Inc., 1980.
P. Nelson and L. Alvarez-Gaumé, Comm. Math. Phys. 99 (1985) 103.
J.
M. Jauch and F. Rohrlich, "The Theory of Photons and Electrons," p. 485, Springer-Verlag, New York, 1976.
K. G. Wilson, Phys. Rev. D10 (1974) 2445.
M. Creutz, "Quarks, Gluons and Lattices," Cambridge University Press, 1983.
Y. Ohnuki and S. Kitakado, J. Math. Phys. 34 (1993) 2827.
D. McMullan and I. Tsutsui, Ann. Phys. 237 (1995) 269.
T. Kashiwa, Prog. Theor. Phys. 95 (1996) 421.
N. Nakanishi, Prog. Theor. Phys. Suppl. 51 (1972) 1.
S. N. Gupta, Proc. Phys. Soc. A63 (1950) 681.
K. Bleuler, Helv. Phys. Acta 23 (1950) 567.
Improving Neural Cross-Lingual Abstractive Summarization via Employing Optimal Transport Distance for Knowledge Distillation

Thong Nguyen (VinAI Research, Vietnam); Anh Tuan Luu (Nanyang Technological University, Singapore)

arXiv:2112.03473
Abstract: Current state-of-the-art cross-lingual summarization models employ a multi-task learning paradigm, which works on a shared vocabulary module and relies on the self-attention mechanism to attend among tokens in two languages. However, the correlation learned by self-attention is often loose and implicit, and is inefficient in capturing crucial cross-lingual representations between languages. The matter worsens when operating on languages with separate morphological or structural features, which makes the cross-lingual alignment more challenging and results in a performance drop. To overcome this problem, we propose a novel Knowledge-Distillation-based framework for Cross-Lingual Summarization, seeking to explicitly construct cross-lingual correlation by distilling the knowledge of a monolingual summarization teacher into a cross-lingual summarization student. Since the representations of the teacher and the student lie on two different vector spaces, we further propose a Knowledge Distillation loss using Sinkhorn Divergence, an Optimal-Transport distance, to estimate the discrepancy between those teacher and student representations. Due to the intuitively geometric nature of Sinkhorn Divergence, the student model can productively learn to align its produced cross-lingual hidden states with monolingual hidden states, leading to a strong correlation between distant languages. Experiments on cross-lingual summarization datasets in pairs of distant languages demonstrate that our method outperforms state-of-the-art models under both high and low-resourced settings.
Introduction

Cross-Lingual Summarization (CLS) is the task of condensing a document in one language into a shorter form in a target language. Most contemporary works can be classified into two categories, i.e., low-resourced and high-resourced CLS approaches. In high-resourced scenarios, models are provided with an enormous number of document/summary pairs on which they can be trained (Zhu et al. 2019; Cao, Liu, and Wan 2020; Zhu et al. 2020). In low-resourced settings, on the other hand, those document/summary pairs are scarce, which restrains the amount of information that a model can learn. While high-resourced settings are preferred, in reality it is difficult to attain a sufficient amount of data, especially for less prevalent languages.

[Figure 1: Examples of Chinese-English cross-lingual summarization. Here we present the English output generated by the NCLS model of (Zhu et al. 2019).]

Most previous works resolving the issue of little training data concentrate on a multi-task learning framework, utilizing the relationship of Cross-Lingual Summarization (CLS) with Monolingual Summarization (MLS) or Neural Machine Translation (NMT). This approach can be further divided into two groups. The first group equips the model with two independent decoders, one of which targets the auxiliary task (MLS or NMT). Nevertheless, since the two decoders do not share their parameters, this approach undermines the model's ability to align the two tasks (Bai, Gao, and Huang 2021), making the auxiliary and the main task less reliant on each other. Hence, the trained model might produce output that does not match the topic, or that misses important spans of text. In Figure 1, we list two samples of documents with their gold summaries and the summaries generated by the NCLS model of (Zhu et al. 2019). As can be seen, the cross-lingual outputs do not include key spans from the summary, e.g., "underground" and "parking lot" in sample 1, and "consumers" in sample 2.
In both samples, the content of the cross-lingual summary diverges significantly from that of the monolingual gold summary. The second group employs a single decoder for both the CLS and MLS tasks. To this end, the method concatenates the monolingual summary with the cross-lingual one and trains the model to sequentially generate the monolingual summary and then the cross-lingual one. Unfortunately, despite reducing the computational overhead during training by using only one decoder, this method is not effective at capturing the connection between the two languages in the output, and consequently produces representations that do not take language relationships into account (Luo et al. 2021). In that case, the correlation of cross-lingual representations is strongly affected by the structural and morphological similarity of the languages involved (Bjerva et al. 2019). As a result, when summarizing a document from one language into another with distinct morphological and structural properties, such as from Chinese to English, the decoder is prone to underperformance, due to the lack of language correlation between the two sets of hidden representations in the bilingual vector space (Luo et al. 2021). To solve this problem, we propose a novel Knowledge-Distillation framework for the Cross-Lingual Summarization task. In particular, our framework consists of a teacher model targeting Monolingual Summarization and a student model for Cross-Lingual Summarization. We initiate our procedure by finetuning the teacher model on monolingual document/summary pairs. Subsequently, we distill the summarization knowledge of the trained teacher into the student model. Because the hidden vectors of the teacher and the student lie in two disparate spaces, monolingual and cross-lingual respectively, we propose a Sinkhorn-Divergence-based Knowledge Distillation loss for the distillation process.
Whereas distances such as Cosine Distance or Euclidean Distance require the two sets to share the same sample size and are sensitive to outliers (Zimek, Schubert, and Kriegel 2012), Sinkhorn divergence places no requirement on the number of samples and is also robust to noise. Furthermore, compared with other divergences such as KL divergence, the computation of Sinkhorn divergence does not require the two distributions to lie on the same probability space. This is important because two languages might possess distinct features that cannot be projected one-to-one, such as the vocabulary set; employing divergences other than Sinkhorn would therefore require additional constraints on the distillation loss. Lastly, Sinkhorn divergence captures the geometric structure of the representations (Feydy et al. 2019), which has been shown to benefit many cross-lingual and multilingual representation learning settings. We empirically demonstrate the superiority of Sinkhorn divergence in the Experiments section. Since the proposed module retains the single-decoder design, our framework is able to explicitly correlate representations from the two languages, thus resolving the issue of distant languages without demanding any additional computational overhead. To evaluate the efficacy of our framework, we conduct experiments on datasets containing document/summary pairs of distant language pairs, for example, English-to-Chinese, English-to-Arabic, and Japanese-to-English. The empirical results demonstrate that our model outperforms previous state-of-the-art Cross-Lingual Summarization approaches.
In sum, our contributions are three-fold:

• We propose a Knowledge Distillation framework for the Cross-Lingual Summarization task, which seeks to enhance summarization performance on distant languages by aligning cross-lingual with monolingual summarization, through distilling the knowledge of a monolingual teacher into a cross-lingual student model.

• We propose a novel Knowledge Distillation loss using an Optimal-Transport distance, i.e., Sinkhorn Divergence, in order to cope with the spatial discrepancy between the hidden representations produced by the teacher and student models.

• We conducted extensive experiments in both high and low-resourced settings on multiple Cross-Lingual Summarization datasets over pairs of morphologically and structurally distant languages, and found that our method significantly outperforms other baselines in both automatic metrics and human evaluation.

Related Work

Neural Cross-Lingual Summarization

Due to the advent of the Transformer architecture with its self-attention mechanism, Text Generation has received ample attention from researchers (Tuan, Shah, and Barzilay 2020; Lyu et al. 2021; Zhang et al. 2021), especially Document Summarization. In addition to Monolingual Summarization, Neural Cross-Lingual Summarization has been receiving a tremendous amount of interest, likely due to the burgeoning need for cross-lingual information processing. Conventional approaches designate a pipeline in one of two manners. The first is translate-then-summarize, which copes with the task by initially translating the document into the target language and then performing the summarization (Wan, Li, and Xiao 2010; Ouyang, Song, and McKeown 2019; Wan 2011; Zhang, Zhou, and Zong 2016). The second is summarize-then-translate, which first summarizes the document and then creates its translated version in the target language (Lim, Kang, and Lee 2004; Orǎsan and Chiorean 2008; Wan, Li, and Xiao 2010).
Nonetheless, both of these approaches are vulnerable to error propagation caused by undertaking multiple steps (Zhu et al. 2019). Recent works apply a general architecture combined with large-scale training to conduct Cross-Lingual Summarization. The main approach is the multi-task framework, in which the CLS task benefits from other tasks such as Monolingual Summarization or Machine Translation (Zhu et al. 2019). Further approaches design ancillary mechanisms such as the pointer-generator to exploit the translation scheme in the cross-lingual summary (Zhu et al. 2020). Other work uses a pair of encoders and decoders to combine cross-lingual alignment with summarization (Cao, Liu, and Wan 2020).

Optimal Transport in Natural Language Processing

Introduced in the 19th century as a method for finding the optimal solution to transport a mass from one place to another, Optimal Transport has found use in a wide variety of scientific fields, such as computational fluid mechanics (Benamou and Brenier 2000), economics (Carlier, Oberman, and Oudet 2015), physics (Cole et al. 2021), and notably machine learning (Peyré, Cuturi et al. 2019; Cuturi 2013; Courty et al. 2016; Danila et al. 2006). Recently, besides the Contrastive Learning framework (Pan et al. 2021a,b), Optimal Transport has been widely employed in Natural Language Processing through Optimal-Transport distances, for instance Word Mover's Distance (Werner and Laber 2019), to estimate the degree of alignment. Its applications include text classification (Kusner et al. 2015), capturing spatial alignment in word embeddings (Alvarez-Melis and Jaakkola 2018), machine translation, and abstractive summarization. Nevertheless, the adaptation of Optimal-Transport distances, especially Sinkhorn divergence, for the Neural Cross-Lingual Summarization task has attracted a limited amount of research effort.
Background

Neural Cross-Lingual Summarization

Given a document X^{L1} = {x_1, x_2, ..., x_N}, a monolingual summarization model's task is to create a summary Y^{L1} = {y^{L1}_1, y^{L1}_2, ..., y^{L1}_{M1}}, where both X^{L1} and Y^{L1} are in language L1. By contrast, a cross-lingual summarization model produces a cross-lingual summary Y^{L2} = {y^{L2}_1, y^{L2}_2, ..., y^{L2}_{M2}} in language L2. It is worth noting here that M1 < N and M2 < N.

Analogous to monolingual summarization, current state-of-the-art cross-lingual summarization methods employ a Transformer-based architecture. Relying mainly on the self-attention mechanism, the Transformer-based architecture consists of an encoder and a decoder. The bidirectional self-attention in the encoder extracts contextualized representations of the input, which are fed to the decoder to generate the output. Due to its generative nature, the decoder uses unidirectional self-attention to learn the context of previously generated tokens. During training, the whole framework is updated based upon the cross-entropy loss:

    L_CLS = − Σ_{t=1}^{M2} log P(y^{L2}_t | y^{L2}_{<t}, X^{L1})    (1)

Knowledge Distillation (KD)

Proposed by (Hinton, Vinyals, and Dean 2015), knowledge distillation is a method to train a model, called the student, by leveraging valuable information provided by soft targets output by another model, called the teacher. In particular, the framework initially trains the teacher model on one designated task to extract useful features. Subsequently, given a dataset D = {(X_1, Y_1), (X_2, Y_2), ..., (X_{|D|}, Y_{|D|})}, where |D| is the size of the dataset, the teacher model generates the output H^T_i = {h^T_1, h^T_2, ..., h^T_{L_T}} for each input X_i. Depending on the researchers' decision, the output might be hidden representations or final logits.
As a consequence, in order to train the student model, the framework uses a KD loss that discriminates the output of the student model H^S_i = {h^S_1, h^S_2, ..., h^S_{L_S}} given input X_i from the teacher output H^T_i. Eventually, the KD loss for input X_i takes the form

    L_KD = dist(H^T_i, H^S_i)    (2)

where dist is a distance function that estimates the discrepancy between the teacher and student outputs. This Knowledge Distillation framework has shown its efficiency in a tremendous number of tasks, such as Neural Machine Translation (Tan et al. 2019; Wang et al. 2021; Li and Li 2021), Question Answering (Hu et al. 2018; Arora, Khapra, and Ramaswamy 2019; Yang et al. 2020b), and Image Classification (Yang et al. 2020a; Chen, Chang, and Lee 2018; Fu et al. 2020). Nonetheless, its application to Neural Cross-Lingual Summarization has received little interest.

Methodology

To resolve the issue of distant languages, the output representations from the two vector spaces denoting the two languages should be indistinguishable, or easily transported from one space to the other. To accomplish that goal, we seek to relate the cross-lingual output of the student model to the monolingual output of the teacher model, via a Knowledge Distillation framework and a Sinkhorn Divergence calculation. The complete framework is illustrated in Figure 2.

Knowledge Distillation Framework for Cross-Lingual Summarization

We inherit the architecture of the Transformer model for our module. In particular, both the teacher and the student model use the encoder-decoder paradigm combined with two fundamental mechanisms. Firstly, the self-attention mechanism learns the context of the tokens by attending tokens to each other in the input and output documents. Secondly, a cross-attention mechanism correlates the contextualized representations of the output tokens with those of the input tokens.
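The two attention operations just described can be made concrete with a small NumPy sketch (our illustration, not the authors' code; the learned query/key/value projections and multi-head splitting of the actual Transformer are omitted for brevity):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 source tokens, hidden size 8
D = rng.normal(size=(3, 8))   # 3 decoder tokens

# Self-attention: queries, keys and values all come from the same sequence.
out_self, w_self = attention(X, X, X)
# Cross-attention: decoder queries attend over the encoder's representations.
out_cross, w_cross = attention(D, X, X)
```

Each row of the attention weights sums to one, so every output vector is a convex combination of value vectors; the unidirectional (causal) self-attention of the decoder would additionally mask future positions before the softmax.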
In our KD framework, we initiate the process by training the teacher model on the monolingual summarization task. In detail, given an input X^{L1} = {x_1, x_2, ..., x_N}, the teacher model aims to generate its monolingual summary Y^{L1} = {y^{L1}_1, y^{L1}_2, ..., y^{L1}_{M1}}.

[Figure 2: The monolingual teacher model and the cross-lingual student model, each a Transformer encoder-decoder built from self-attention, fully-connected, and add-norm layers.]

Similar to previous monolingual summarization schemes, our model is trained by maximizing the likelihood of the ground-truth tokens, which takes the cross-entropy form

    L_MLS = − Σ_{t=1}^{M1} log P(y^{L1}_t | y^{L1}_{<t}, X^{L1})    (3)

After finetuning the teacher model, we proceed to train the student model, which also employs the Transformer architecture. In contrast to the teacher, the student model's task is to generate the cross-lingual output Y^{L2} = {y^{L2}_1, y^{L2}_2, ..., y^{L2}_{M2}} in language L2, given the input document X^{L1} in language L1. We update the parameters of the student model by minimizing the objective

    L_CLS = − Σ_{t=1}^{M2} log P(y^{L2}_t | y^{L2}_{<t}, X^{L1})    (4)

With a view to pulling the cross-lingual and monolingual representations nearer, we implement a KD loss that penalizes a large distance between the two vector spaces. In particular, let H^T = {h^T_1, h^T_2, ..., h^T_{L_T}} denote the contextualized representations produced by the decoder of the teacher model, and H^S = {h^S_1, h^S_2, ..., h^S_{L_S}} those from the decoder of the student model; we define our KD loss as

    L_KD = dist(H^T, H^S)    (5)

where dist is the Optimal-Transport distance used to evaluate the difference between the two sets of representations, which we delineate in the following section.
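Eqs. (3) and (4) are both standard token-level negative log-likelihoods; only the language of the reference summary differs. A minimal sketch (toy numbers of our own, not the authors' implementation), where each row of `probs` holds the model's predicted distribution over a toy vocabulary at one decoding step:

```python
import numpy as np

def sequence_nll(probs, target_ids):
    """L = -sum_t log P(y_t | y_<t, X), given one vocabulary distribution per step."""
    steps = np.arange(len(target_ids))
    return -np.sum(np.log(probs[steps, target_ids]))

# Toy vocabulary of size 4, reference summary of length 3.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.6, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
target = np.array([0, 1, 3])          # gold token ids y_1, y_2, y_3
loss = sequence_nll(probs, target)    # -(log 0.7 + log 0.6 + log 0.7)
```

The same function covers both L_MLS and L_CLS: only the reference `target` changes, a monolingual summary for the teacher versus a cross-lingual one for the student.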
Sinkhorn Divergence for Knowledge Distillation Loss

Because the hidden representations of the teacher and the student model lie on two disparate vector spaces (as they represent two different languages), we consider the distance between the two spaces as the distance between two probability measures. In particular, we adapt Sinkhorn divergence, a variant of the Optimal Transport distance, to calculate the aforementioned spatial discrepancy. Let H^T, H^S denote the representations of the teacher decoder and the student decoder; we encode their sample measures as

    α = Σ_{i=1}^{L_T} α_i δ_{h^T_i} ,  β = Σ_{j=1}^{L_S} β_j δ_{h^S_j}    (6)

where α and β are probability distributions, and define

    dist(H^T, H^S) = OT(α, β) − (1/2) OT(α, α) − (1/2) OT(β, β)    (7)

where

    OT(α, β) = Σ_{i=1}^{L_T} α_i f_i + Σ_{j=1}^{L_S} β_j g_j    (8)

in which f_i, g_j are estimated by the Sinkhorn loop. We thoroughly delineate the loop in Algorithm 1.

Algorithm 1: Sinkhorn loop
Input: probability distributions α, β; regularization hyperparameter ε; number of iterations N_I; log-sum-exponential function LSE_{k=1}^{N}(z_k) = log Σ_{k=1}^{N} exp(z_k); distance function C(x, y) = ||x − y||₂
1: for i = 1 to N_I do
2:   Compute f_i = ε · LSE_{k=1}^{L_S} [ log(β_k) + (1/ε) g_k − (1/ε) C(h^T_i, h^S_k) ]
3:   Compute g_j = ε · LSE_{k=1}^{L_T} [ log(α_k) + (1/ε) f_k − (1/ε) C(h^T_k, h^S_j) ]
4: end for

Training Objective

We amalgamate the Cross-Lingual Summarization and the Knowledge Distillation objectives to obtain the ultimate objective function. Mathematically, for each input, our training loss is computed as

    L = L_CLS + λ · L_KD    (9)

where λ is a hyperparameter that controls the influence of the cross-lingual alignment of the two vector spaces.

Experiments

Datasets

We evaluate the effectiveness of our method on the En2Zh and Zh2En datasets processed by (Bai, Gao, and Huang 2021). We also inherit their minimum, medium, and maximum settings in order to verify the effectiveness of our method under limited-resourced settings.
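Algorithm 1 together with Eqs. (6)-(8) can be sketched in NumPy as follows. This is our illustrative reimplementation, not the authors' code: we use the standard log-domain Sinkhorn updates (which carry a minus sign in front of the log-sum-exp, following Feydy et al. 2019), a squared Euclidean cost, and uniform weights; the hidden sizes, ε, and the iteration count in the example are made-up toy values.

```python
import numpy as np

def lse(z, axis):
    """Numerically stable log-sum-exp along one axis."""
    m = z.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(z - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn_cost(x, a, y, b, eps, n_iter):
    """Entropic OT value of Eq. (8): sum_i a_i f_i + sum_j b_j g_j."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
    f, g = np.zeros(len(a)), np.zeros(len(b))
    for _ in range(n_iter):  # alternating dual updates (Algorithm 1)
        f = -eps * lse(np.log(b)[None, :] + (g[None, :] - C) / eps, axis=1)
        g = -eps * lse(np.log(a)[:, None] + (f[:, None] - C) / eps, axis=0)
    return a @ f + b @ g

def sinkhorn_divergence(x, a, y, b, eps=0.5, n_iter=100):
    """Debiased divergence of Eq. (7): OT(a,b) - OT(a,a)/2 - OT(b,b)/2."""
    return (sinkhorn_cost(x, a, y, b, eps, n_iter)
            - 0.5 * sinkhorn_cost(x, a, x, a, eps, n_iter)
            - 0.5 * sinkhorn_cost(y, b, y, b, eps, n_iter))

rng = np.random.default_rng(0)
h_teacher = rng.normal(size=(6, 4))        # L_T = 6 teacher hidden states
h_student = rng.normal(size=(9, 4)) + 3.0  # L_S = 9 student states, shifted away
a = np.full(6, 1 / 6)                      # uniform weights alpha_i
b = np.full(9, 1 / 9)                      # uniform weights beta_j

d_cross = sinkhorn_divergence(h_teacher, a, h_student, b)  # large: spaces differ
d_self = sinkhorn_divergence(h_teacher, a, h_teacher, a)   # zero by construction
```

Note that L_T ≠ L_S poses no problem here, which is precisely the property that motivates Sinkhorn divergence over Euclidean or cosine distances; minimizing `d_cross` as L_KD pulls the student's cross-lingual states toward the teacher's monolingual ones.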
The sample size under each setting is depicted in Table 2. Furthermore, to evaluate the performance of our method on various languages, we also preprocess the Wikilingua datasets (Ladhak et al. 2020) so that every sample is converted into a triple of document, MLS summary, and CLS summary. We choose four variants of Wikilingua for our evaluation, i.e., English-to-Arabic (En2Ar), English-to-Japanese (En2Ja), Japanese-to-English (Ja2En), and English-to-Vietnamese (En2Vi). It should be noted that (En, Ja), (En, Ar), (En, Zh), and (En, Vi) are all pairs of languages that are distant in terms of structure or morphology. The statistics of the datasets are reported in Table 1.

Implementation Details

We initialize the encoder with multilingual BERT (Devlin et al. 2018) and the decoder with Xavier initialization (Glorot and Bengio 2010). The dimensions of our encoder and decoder hidden states are both 768. We use two separate Adam optimizers for the encoder and the decoder, with learning rates of 0.002 and 0.2, respectively. The model is trained with a warmup phase of 25000 steps. We train the model on one Nvidia A100 GPU, accumulating gradients every 5 steps. Moreover, we apply a dropout probability of 0.1 to all fully-connected layers in the model. The teacher and the student model share the architecture and the scale of parameters in our Knowledge Distillation framework. To estimate the Sinkhorn divergence, we employ an entropic regularization rate ε of 0.0025 and an iteration length N_I of 14. The weight λ of the KD loss in Equation 9 is set to 1.

Baselines

We compare our proposed architecture against previous state-of-the-art baselines, including NCLS and NCLS+MS (Zhu et al. 2019) and MCLAS (Bai, Gao, and Huang 2021).

Automatic Evaluation

Full-dataset Scenario

The experimental results under the full-dataset scenario are given in Tables 3, 4, 5, 6, 7, and 8. For the Zh2En dataset, our method outperforms the MCLAS model by 1.3 points in ROUGE-1, 4.0 points in ROUGE-2, 0.4 point in ROUGE-3, and 1.2 points in ROUGE-L.
Our model also improves over the NCLS model on the En2Zh dataset, by 0.6 point in ROUGE-1, 1.5 points in ROUGE-2, 0.1 point in ROUGE-3, and 0.8 point in ROUGE-L. For Arabic, our model improves over the NCLS model by 0.1 point in ROUGE-1, 2.9 points in ROUGE-2, 1.6 points in ROUGE-3, and 5.1 points in ROUGE-L. On the En2Ja dataset, we outperform the previous best method, MCLAS, by 0.6 point in ROUGE-1, 0.2 point in ROUGE-2, 0.2 point in ROUGE-3, and 0.5 point in ROUGE-L. Additionally, on the reverse dataset Ja2En, our method achieves significantly higher performance, with improvements of 1.0 point in ROUGE-1, 0.5 point in ROUGE-2, 0.4 point in ROUGE-3, and 0.4 point in ROUGE-L over the MCLAS model. These results substantiate our hypothesis that our framework enhances the ability to apprehend and summarize a document into a summary in another, distant language, since the English alphabet has no characters in common with its Japanese, Arabic, or Chinese counterparts.

For the En2Vi dataset, our method also obtains notable improvements over the other state-of-the-art methods. As shown in Table 8, our model outperforms the MCLAS model by roughly 1.1 points in ROUGE-1, 0.3 point in ROUGE-2, 0.3 point in ROUGE-3, and 0.4 point in ROUGE-L. This demonstrates that our method is also capable of strengthening the model in situations where the two languages are somewhat morphologically or structurally similar, since Vietnamese and English do share a number of characters in their alphabets.

Low-resource Scenario

We report the results of the experiments conducted under the minimum, medium, and maximum scenarios in Tables 9, 10, and 11. In the minimum setting, our model improves over previous methods. In particular, we outperform the MCLAS model by 1.3 points of ROUGE-1, 0.5 point of ROUGE-2, 0.2 point of ROUGE-3, and 0.3 point of ROUGE-L on the Zh2En dataset; on the En2Zh dataset, we obtain an increase of 3.6 points in ROUGE-1, 0.6 point in ROUGE-2, 0.3 point in ROUGE-3, and 1.4 points in ROUGE-L. Under the medium setting, the performance of our method is also higher than that of the MCLAS model, by 0.1 point in ROUGE-1, 1.1 points in ROUGE-2, 0.5 point in ROUGE-3, and 3.0 points in ROUGE-L.
The improvement is more pronounced on the En2Zh dataset, with an increase of 3.0 in ROUGE-1, 1.9 in ROUGE-2, 0.6 in ROUGE-3, and 0.5 in ROUGE-L. Last but not least, in the maximum scenario, our gains over the MCLAS model on the Zh2En dataset are 0.4 point in ROUGE-1, 0.4 point in ROUGE-2, 0.5 point in ROUGE-3, and 0.7 point in ROUGE-L. On the En2Zh dataset, our improvements are 2.9 points in ROUGE-1, 0.3 point in ROUGE-2, 0.4 point in ROUGE-3, and 0.7 point in ROUGE-L. These results show that our method is also capable of elevating Cross-Lingual Summarization performance when the available training data is scarce.

Models   | Zh2En                 | En2Zh
NCLS     | 20.93/5.88/2.47/17.58 | 34.14/12.45/4.38/21.20
NCLS+MS  | 20.50/5.45/2.22/17.25 | 33.96/12.38/4.36

Human Evaluation

Because automatic metrics do not completely reflect the quality of the methods, we conduct a further human evaluation for a more precise assessment. We design two tests in order to elicit human judgements in two different manners. In the first experiment, we present the summaries generated by NCLS, MCLAS, our model, and the gold summary, and ask seven professional English speakers to indicate the best and worst summaries in terms of informativeness, faithfulness, topic coherence, and fluency. We randomly sampled 50 summaries from the En2Vi dataset and 50 others from the Ja2En dataset. The score of a model is estimated as the percentage of times it was marked best minus the percentage of times it was marked worst. For the second experiment, we adapt the Question Answering (QA) paradigm to our framework. For each sample, we create two independent questions that underscore the key information in the input document. Participants read and answer each question as well as they can. The score of a system equals the proportion of questions that the participants answer correctly. Fleiss' Kappa scores for our experiments are shown in Table 12.
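As a small illustration (our own code, not part of the paper), the two scoring rules just described can be written down directly:

```python
def preference_score(labels):
    # Best-worst scaling: share of times a system was marked "best"
    # minus the share of times it was marked "worst".
    n = len(labels)
    best = sum(1 for x in labels if x == "best")
    worst = sum(1 for x in labels if x == "worst")
    return (best - worst) / n

def qa_score(answers, gold):
    # Proportion of questions the participants answered correctly.
    return sum(a == g for a, g in zip(answers, gold)) / len(gold)
```

For example, a system marked best twice and worst once over four judgements receives a preference score of 0.25.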
The scores indicate strong inter-annotator agreement among the participants. The experimental results in Table 13 show that our model generates summaries that are preferred by human judges and that are more likely to preserve the important content of the original documents than the summaries of the other systems.

Case Study

Figure 1 shows case studies of the summarization results of NCLS, MCLAS, and our method. It is noticeable that the summaries generated by NCLS and MCLAS miss important terms that appear in the monolingual summaries, such as "consumers", "cost", "underground", and "parking lot". Consequently, those summaries fail to convey the main idea of the document, in particular the comparison of parking prices in sample 1 and the announcement of "sinopec" in sample 2. In contrast, our outputs cover almost all of these terms, keeping the cross-lingual summary closely related to the monolingual one. This shows that attracting the CLS and MLS representations towards each other through the Sinkhorn divergence helps the CLS model grasp key information, which is an advantage of our proposed method.

Impact of Sinkhorn Divergence on the Geometric Distance of Cross-Lingual Representations

We propose to adapt the Sinkhorn Divergence to align the cross-lingual decoder hidden states of the student model with the monolingual decoder hidden states of the teacher model. Whether this actually brings the two sets of representations geometrically closer, however, remains an open question. To further verify the benefit of leveraging the Sinkhorn Divergence, we estimate the distances between those hidden vectors using other metrics, namely Cosine Similarity and Mean Squared Error. In particular, for each input, after the decoder generates the hidden vectors of the output tokens, we take the average of those vectors and measure the distance between the mean vector produced by the CLS model (NCLS, MCLAS, or our model) and the mean vector produced by the MLS model.
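The comparison just described, mean-pooling each model's decoder states and measuring Cosine Similarity or Mean Squared Error between the pooled vectors, can be sketched as follows (an illustration; the function name is ours):

```python
import numpy as np

def pooled_distances(H_cls, H_mls):
    # Mean-pool two sets of decoder hidden states (rows are token vectors)
    # and compare the pooled vectors with Cosine Similarity and MSE.
    u = np.asarray(H_cls).mean(axis=0)
    v = np.asarray(H_mls).mean(axis=0)
    cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    mse = float(((u - v) ** 2).mean())
    return cos, mse
```

A higher cosine similarity (or a lower MSE) between the pooled CLS and MLS vectors indicates that the two sets of representations lie closer together geometrically.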
We report the expected value and standard deviation for each method in Table 15. As can be seen, employing the Sinkhorn Divergence indeed pulls the vectors in the cross-lingual spaces towards one another.

Conclusion

In this paper, we propose a novel Knowledge Distillation framework to tackle Neural Cross-Lingual Summarization for morphologically or structurally distant languages. Our framework first trains a monolingual teacher model and then finetunes a cross-lingual student model that distills knowledge from this teacher. Since the hidden representations of the teacher and the student model lie in two different lingual spaces, we further propose to adapt the Sinkhorn Divergence to efficiently estimate the cross-lingual discrepancy. Extensive experiments show that our method significantly outperforms other approaches under both low-resource and full-dataset settings.

Figure 2: Diagram of the Knowledge Distillation Framework for Cross-Lingual Summarization.
Dataset | l_Input | l_CLS | l_MLS
Zh2En   | 105     | 19    | 19
En2Zh   | 912     | 97    | 69
En2Ar   | 1589    | 227   | 133
En2Ja   | 1463    | 212   | 133
Ja2En   | 2103    | 133   | 212
En2Vi   | 1657    | 175   | 135

Table 1: Statistics of the Cross-Lingual Summarization datasets.

Scenarios | Minimum | Medium | Maximum | Full-dataset
Zh2En     | 5,000   | 25,000 | 50,000  | 1,693,713
En2Zh     | 1,500   | 7,500  | 15,000  | 364,687

Table 2: Dataset sizes of the low-resource scenarios for the CLS datasets.

Table 3: Full-dataset Cross-Lingual Summarization results on the Zh2En dataset.

Table 4: Full-dataset Cross-Lingual Summarization results on the En2Zh dataset.

Model     | R1    | R2    | R3    | RL
TLTran    | 30.20 | 12.20 | 11.79 | 27.02
NCLS      | 44.16 | 24.28 | 17.13 | 30.23
NCLS+MS   | 42.68 | 23.51 | 15.62 | 29.24
MCLAS     | 42.27 | 24.60 | 16.07 | 30.09
Our Model | 44.75 | 25.76 | 17.20 | 31.05

Table 5: Full-dataset Cross-Lingual Summarization results on the En2Ar dataset.

Model     | R1    | R2    | R3    | RL
NCLS      | 36.80 | 17.36 | 10.79 | 27.25
NCLS+MS   | 35.53 | 17.01 | 10.33 | 26.36
MCLAS     | 36.28 | 17.27 | 10.81 | 27.56
Our Model | 36.89 | 20.28 | 12.40 | 32.38

Table 6: Full-dataset Cross-Lingual Summarization results on the En2Ja dataset.

Model     | R1    | R2    | R3    | RL
NCLS      | 29.55 | 15.99 | 10.25 | 23.03
NCLS+MS   | 29.42 | 15.83 | 10.12 | 23.00
MCLAS     | 29.60 | 16.08 | 10.14 | 33.20
Our Model | 30.21 | 16.27 | 10.46 | 23.90

Table 7: Full-dataset Cross-Lingual Summarization results on the Ja2En dataset.

Model     | R1    | R2    | R3    | RL
NCLS      | 32.78 | 12.66 | 6.33 | 26.43
NCLS+MS   | 32.50 | 12.02 | 6.15 | 26.41
MCLAS     | 33.20 | 12.57 | 6.33 | 27.27
Our Model | 34.21 | 13.08 | 6.70 | 27.63

Table 8: Full-dataset Cross-Lingual Summarization results on the En2Vi dataset.

Model     | R1    | R2    | R3   | RL
NCLS      | 36.75 | 16.37 | 8.04 | 28.69
NCLS+MS   | 36.28 | 16.14 | 8.03 | 28.61
MCLAS     | 36.31 | 15.91 | 7.75 | 28.62
Our Model | 37.38 | 16.20 | 8.09 | 28.97

Table 9: Minimum Cross-Lingual Summarization results.

Table 10: Medium Cross-Lingual Summarization results.

Models    | Zh2En                  | En2Zh
NCLS      | 26.42/8.90/4.49/22.05  | 35.98/15.88/8.97/23.79
NCLS+MS   | 26.86/9.06/4.58/22.47  | 38.95/18.09/9.73/25.39
MCLAS     | 27.84/10.41/4.91/24.12 | 37.28/18.10/9.48/25.26
Our Model | 27.97/11.51/5.37/27.16 | 40.30/20.01/10.05/25.79

Models    | En2Zh                  | Zh2En
NCLS      | 29.05/10.88/6.56/24.32 | 40.18/19.86/10.33/26.52
NCLS+MS   | 28.63/10.63/6.24/24.00 | 39.86/19.87/10.23/26.64
MCLAS     | 30.73/12.26/6.98/26.51 | 38.35/19.75/10.64/26.41
Our Model | 31.08/12.70/7.45/27.16 | 41.24/20.01/11.00/27.06

Table 11: Maximum Cross-Lingual Summarization results.

Table 12: Fleiss' Kappa and overall agreement percentage for each human evaluation test. Higher scores indicate better agreement.

Models       | Preference Score | QA Score
NCLS         | -0.123           | 51.11
MCLAS        | 0.169            | 59.26
Our Model    | 0.498            | 71.85
Gold Summary | 0.642            | 95.52

Table 13: Human evaluation results.

Analysis on Distance Methods

We compare our implemented Sinkhorn Divergence with other distance methods. In particular, we perform mean- or max-pooling of the teacher and student hidden representations, and then evaluate the teacher-student discrepancy via the Cosine Similarity (CS) or Mean Squared Error (MSE) of the two pooled vectors. We show the numerical results in Table 14. The results demonstrate the superiority of the Sinkhorn Divergence over the other approaches. We hypothesize that those approaches do not effectively capture the geometric nature of the cross-lingual output representations.

Distance Methods | R-1   | R-2   | R-3   | R-L
Mean-CS          | 44.20 | 24.54 | 16.96 | 30.27
Mean-MSE         | 44.14 | 24.27 | 16.22 | 30.19
Max-CS           | 44.29 | 25.65 | 17.07 | 30.82
Max-MSE          | 44.23 | 24.61 | 16.44 | 30.21
Our Method       | 44.75 | 25.76 | 17.20 | 31.05

Table 14: Results when applying different distance methods on the En2Zh dataset under the full-dataset setting.

Table 15: Results when applying different distance methods on the Zh2En dataset under the full-dataset setting.

References

Alvarez-Melis, D.; and Jaakkola, T. S. 2018. Gromov-Wasserstein alignment of word embedding spaces. arXiv preprint arXiv:1809.00013.
Arora, S.; Khapra, M. M.; and Ramaswamy, H. G. 2019. On knowledge distillation from complex networks for response prediction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 3813-3822.

Bai, Y.; Gao, Y.; and Huang, H. 2021. Cross-Lingual Abstractive Summarization with Limited Parallel Resources. arXiv preprint arXiv:2105.13648.

Benamou, J.-D.; and Brenier, Y. 2000. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3): 375-393.

Bjerva, J.; Östling, R.; Veiga, M. H.; Tiedemann, J.; and Augenstein, I. 2019. What do language representations really represent? Computational Linguistics, 45(2): 381-389.

Cao, Y.; Liu, H.; and Wan, X. 2020. Jointly Learning to Align and Summarize for Neural Cross-Lingual Summarization.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 6220-6231.

Carlier, G.; Oberman, A.; and Oudet, E. 2015. Numerical methods for matching for teams and Wasserstein barycenters. ESAIM: Mathematical Modelling and Numerical Analysis, 49(6): 1621-1642.

Chen, L.; Zhang, Y.; Zhang, R.; Tao, C.; Gan, Z.; Zhang, H.; Li, B.; Shen, D.; Chen, C.; and Carin, L. 2019. Improving sequence-to-sequence learning via optimal transport. arXiv preprint arXiv:1901.06283.

Chen, W.-C.; Chang, C.-C.; and Lee, C.-R. 2018. Knowledge distillation with feature maps for image classification. In Asian Conference on Computer Vision, 200-215. Springer.

Cole, S.; Eckstein, M.; Friedland, S.; and Życzkowski, K. 2021. Quantum Optimal Transport. arXiv preprint arXiv:2105.06922.

Courty, N.; Flamary, R.; Tuia, D.; and Rakotomamonjy, A. 2016. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9): 1853-1865.

Cuturi, M. 2013. Sinkhorn distances: Lightspeed computation of optimal transport.
Advances in Neural Information Processing Systems, 26: 2292-2300.

Danila, B.; Yu, Y.; Marsh, J. A.; and Bassler, K. E. 2006. Optimal transport on complex networks. Physical Review E, 74(4): 046106.

Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Feydy, J.; Séjourné, T.; Vialard, F.-X.; Amari, S.-i.; Trouvé, A.; and Peyré, G. 2019. Interpolating between optimal transport and MMD using Sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics, 2681-2690. PMLR.

Fu, S.; Li, Z.; Xu, J.; Cheng, M.-M.; Liu, Z.; and Yang, X. 2020. Interactive knowledge distillation. arXiv preprint arXiv:2007.01476.

Glorot, X.; and Bengio, Y. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256. JMLR Workshop and Conference Proceedings.
Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.

Hu, M.; Peng, Y.; Wei, F.; Huang, Z.; Li, D.; Yang, N.; and Zhou, M. 2018. Attention-guided answer distillation for machine reading comprehension. arXiv preprint arXiv:1808.07644.

Huang, K.-H.; Ahmad, W. U.; Peng, N.; and Chang, K.-W. 2021. Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training. arXiv preprint arXiv:2104.08645.

Kusner, M.; Sun, Y.; Kolkin, N.; and Weinberger, K. 2015. From word embeddings to document distances. In International Conference on Machine Learning, 957-966. PMLR.

Ladhak, F.; Durmus, E.; Cardie, C.; and McKeown, K. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. arXiv preprint arXiv:2010.03093.

Li, Y.; and Li, W. 2021. Data Distillation for Text Classification. arXiv preprint arXiv:2104.08448.

Lim, J.-M.; Kang, I.-S.; and Lee, J.-H. 2004. Multi-Document Summarization Using Cross-Language Texts. In NTCIR.
Luo, F.; Wang, W.; Liu, J.; Liu, Y.; Bi, B.; Huang, S.; Huang, F.; and Si, L. 2021. VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation.

Lyu, C.; Shang, L.; Graham, Y.; Foster, J.; Jiang, X.; and Liu, Q. 2021. Improving Unsupervised Question Answering via Summarization-Informed Question Generation. arXiv preprint arXiv:2109.07954.

Nguyen, T.; and Luu, A. T. 2021. Contrastive Learning for Neural Topic Model. Advances in Neural Information Processing Systems, 34.

Nguyen, T.; Luu, A. T.; Lu, T.; and Quan, T. 2021. Enriching and controlling global semantics for text summarization. arXiv preprint arXiv:2109.10616.

Orǎsan, C.; and Chiorean, O. A. 2008. Evaluation of a cross-lingual Romanian-English multi-document summariser.

Ouyang, J.; Song, B.; and McKeown, K. 2019. A robust abstractive system for cross-lingual summarization.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2025-2031.

Pan, L.; Hang, C.-W.; Sil, A.; Potdar, S.; and Yu, M. 2021a. Improved Text Classification via Contrastive Adversarial Training. arXiv preprint arXiv:2107.10137.

Pan, X.; Wang, M.; Wu, L.; and Li, L. 2021b. Contrastive learning for many-to-many multilingual neural machine translation. arXiv preprint arXiv:2105.09501.

Peyré, G.; Cuturi, M.; et al. 2019. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6): 355-607.

Séjourné, T.; Feydy, J.; Vialard, F.-X.; Trouvé, A.; and Peyré, G. 2019. Sinkhorn divergences for unbalanced optimal transport. arXiv preprint arXiv:1910.12958.

Sun, H.; Wang, R.; Chen, K.; Utiyama, M.; Sumita, E.; and Zhao, T. 2020. Knowledge distillation for multilingual unsupervised neural machine translation. arXiv preprint arXiv:2004.10171.

Tan, X.; Ren, Y.; He, D.; Qin, T.; Zhao, Z.; and Liu, T.-Y. 2019. Multilingual neural machine translation with knowledge distillation.
arXiv preprint arXiv:1902.10461.

Tuan, L. A.; Shah, D.; and Barzilay, R. 2020. Capturing greater context for question generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 9065-9072.

Wan, X. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 1546-1555.

Wan, X.; Li, H.; and Xiao, J. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 917-926.

Wang, F.; Yan, J.; Meng, F.; and Zhou, J. 2021. Selective Knowledge Distillation for Neural Machine Translation. arXiv preprint arXiv:2105.12967.

Werner, M.; and Laber, E. 2019. Speeding up Word Mover's Distance and its variants via properties of distances between embeddings.
arXiv preprint arXiv:1912.00509.

Yang, J.; Martinez, B.; Bulat, A.; and Tzimiropoulos, G. 2020a. Knowledge distillation via adaptive instance normalization. arXiv preprint arXiv:2003.04289.

Yang, Z.; Shou, L.; Gong, M.; Lin, W.; and Jiang, D. 2020b. Model compression with two-stage multi-teacher knowledge distillation for web question answering system. In Proceedings of the 13th International Conference on Web Search and Data Mining, 690-698.

Zhang, A.; Wu, K.; Wang, L.; Li, Z.; Xiao, X.; Wu, H.; Zhang, M.; and Wang, H. 2021. Data Augmentation with Hierarchical SQL-to-Question Generation for Cross-domain Text-to-SQL Parsing. arXiv preprint arXiv:2103.02227.

Zhang, J.; Zhao, Y.; Saleh, M.; and Liu, P. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, 11328-11339. PMLR.

Zhang, J.; Zhou, Y.; and Zong, C. 2016.
Abstractive cross-language summarization via translation model enhanced predicate argument structure fusing. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(10): 1842-1853.

Zhu, J.; Wang, Q.; Wang, Y.; Zhou, Y.; Zhang, J.; Wang, S.; and Zong, C. 2019. NCLS: Neural cross-lingual summarization. arXiv preprint arXiv:1909.00156.

Zhu, J.; Zhou, Y.; Zhang, J.; and Zong, C. 2020. Attend, translate and summarize: An efficient method for neural cross-lingual summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1309-1321.

Zimek, A.; Schubert, E.; and Kriegel, H.-P. 2012. A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining: The ASA Data Science Journal, 5(5): 363-387.
Automated Reasoning for Physical Quantities, Units, and Measurements in Isabelle/HOL
Simon Foster and Burkhart Wolff (University of York; University Paris-Saclay)
Abstract: Formal verification of cyber-physical and robotic systems requires that we can accurately model physical quantities that exist in the real-world. The use of explicit units in such quantities can allow a higher degree of rigour, since we can ensure compatibility of quantities in calculations. At the same time, improper use of units can be a barrier to safety and therefore it is highly desirable to have automated sanity checking in physical calculations. In this paper, we contribute a mechanisation of the International System of Quantities (ISQ) and the associated SI unit system in Isabelle/HOL. We show how Isabelle can be used to provide a type system for physical quantities, and automated proof support. Quantities are parameterised by dimension types, which correspond to base vectors, and thus only quantities of the same dimension can be equated. Since the underlying "algebra of quantities" induces congruences on quantity and SI types, specific tactic support is developed to capture these. Our construction is validated by a test-set of known equivalences between both quantities and SI units. Moreover, the presented theory can be used for type-safe conversions between the SI system and others, like the British Imperial System (BIS).
DOI: 10.48550/arXiv.2302.07629
https://export.arxiv.org/pdf/2302.07629v1.pdf
arXiv: 2302.07629
I. Introduction

Cyber-physical systems use software to control interactions with the physical world, and so must account for quantifiable properties of physical phenomena. Modern physics uses quantities such as mass, length, time, and current to characterise such physical properties. Such quantities are linked via an algebra to derived concepts such as velocity, force, and energy.
The latter allows for a dimensional analysis of physical equations, which is the backbone of Newtonian physics. In parallel, physicists developed their own research field, called "metrology", for the scientific study of the measurement of physical quantities. The integration of metrology into software engineering is of great importance to ensure that the physical components behave in a safe and predictable way [1], [2], [3].

The international standard for quantities and measurements is distributed by the Bureau International des Poids et des Mesures (BIPM), which also provides the Vocabulaire International de Métrologie (VIM) [4]. The VIM actually defines two systems: the International System of Quantities (ISQ) and the International System of Units (SI, abbreviated from the French "Système international d'unités"). The latter is also documented in the SI Brochure [5], a standard that is updated periodically, most recently in 2019. Finally, the VIM defines concrete reference measurement procedures as well as a terminology for measurement errors.

Conceived as a refinement of the ISQ, the SI comprises a coherent system of units built on seven base units (the metre, kilogram, second, ampere, kelvin, mole, and candela) and a set of twenty-four prefixes to the unit names and unit symbols, such as milli- and kilo-, which may be used when specifying multiples and fractions of the units. The system also specifies names for 22 derived units, such as lumen and watt, for other common physical quantities. While there remains a wealth of measuring systems, such as the British Imperial System (BIS), the SI can be considered the de facto reference behind them all.

The contribution of this paper is a mechanisation of the ISQ and SI in Isabelle/HOL, together with a deep integration into Isabelle's order-sorted polymorphic type system [6].
Our aim is twofold: (1) to provide a coherent mechanisation of the ISQ and its ontology of units; and (2) to support the use of units in formal specifications and models for cyber-physical systems [7]. Our treatment of physical quantities allows the use of Isabelle's type system in checking for correct use of units. Since the algebra of quantities induces congruences on quantity types, specific tactic support is developed to capture these. Our construction is validated by a test-set of known equivalences between both quantities and SI units. Moreover, the presented theory can be used for type-safe conversions between the SI system and others, like the BIS.

Concretely, we introduce a novel parametric type for quantities, N[D, S], where N is the numeric type (e.g. Q, R), D is a dimension type (e.g. L, M, T), and S is the system of units being employed. This is accompanied by a formal ontology of units, which can be used in measurements. We can then write down specific quantities such as 20 *_Q metre :: R[L, SI], which represents a measurement of 20 metres in the SI (with dimension length), and 30 *_Q pound :: R[M, BIS], which represents a measurement of 30 pounds in the BIS (with dimension mass). Only quantities of the same dimension and unit system are comparable, and thus 20 *_Q metre = 30 *_Q pound is a type error. Nevertheless, we can convert between different unit systems, such that metrify(30 *_Q pound) ≈ 13.61 *_Q kilogram.

Our work employs several advanced features of Isabelle/HOL to implement the quantity type system without relying on the complexity of dependent types.
In summary, our contributions are:

1) an embedding of the ISQ into Isabelle/HOL, including dimensions, quantities, units, and conversions;
2) a sound-by-construction quantity type system that supports checking of dimensions and dimension coercions;
3) automated proof support for quantity conjectures;
4) a formal ontology of units from the VIM [4] and SI Brochure [5], for use in specifications and models.

The structure of our paper is as follows. In §II we briefly survey related work to put our contributions into context. In §III we begin our contributions with our account of dimension types, which use a universe construction and a type-class based characterisation. In §IV we use dimensions to implement the quantity type, N[D, S], including automated proof. In §V we implement the SI unit system and an associated ontology of units and equations. In §VI we describe conversions between unit systems, such as the SI, CGS, and BIS. Finally, in §VII we evaluate our work and conclude. Our entire theory development can be found on the Archive of Formal Proofs [8].

II. Related Work

The need for physical quantities and measurement in software and formal specifications is widely acknowledged [9], [1], [2], [3]. Burgueño et al. [1], [10] argue the importance for safety of having physical quantities in robotic software models, and extend UML with types for quantities, dimensions, and units. Flater [3] argues for the extension of the SI standard with dimensions and units to support software metrology.

Quantity types are implemented in several mainstream numerical computation systems, such as MATLAB and Mathematica, usually to support conversion between units, checking for unit consistency, and simplification of dimensions. Hall [2] describes a recent library for Python that implements quantities and facilitates dimensional analysis. Our work can serve as a baseline for verified implementations of the ISQ, particularly through the Isabelle code generator [11].
There have also been numerous direct implementations of the ISQ and SI for programming languages. Dimension types have been presented by Kennedy [12], [13] for F#, as a way of parametrising data, and a more recent account along this line is by Garrigue and Ly [14]. These works directly implement a type system for dimensions and units in an ML-like language, while our approach formally derives such type inference inside the framework of parametric polymorphism and HOL. Thus, in contrast to direct implementations, our approach assures correctness by construction.

Hayes et al. [9] develop an extension of the Z specification language to incorporate units, dimensions, and quantities. Their main innovation is the addition of an operation M D, which is effectively a type constructor for a quantity of numeric type M and dimension type D. The latter has served as inspiration for our approach. However, their work lacks a supporting implementation, whereas we effectively provide a type system for quantities embedded into Isabelle/HOL, which is beyond the expressive power of Z. Moreover, unlike [9], our quantity types convey semantic information about the underlying dimensions, through our dimension universe construction, which can be used in reasoning.

Aragon [15] explores the algebraic structure of dimensions and quantities. He formalises quantities as ordered pairs called q-numbers, consisting of a complex number and a label denoting the unit, and then explores the algebraic properties of unit labels and quantities. There is no explicit characterisation of dimensions, only units. Nevertheless, the resulting properties have served as a benchmark for our work, for example showing that dimensions form an additive group.

Our work provides an implementation of the ISQ that is foundational, in that we precisely implement the quantity calculus, but also applicable, because it permits automatic checking of dimensions, efficient proof support, and code generation.
We also provide a verified ontology of measurement units, which can be used in formal specifications and models [7]. We are not aware of a comparable implementation of the ISQ in a proof assistant to date.

III. Dimensions

In this section we mechanise dimensions, which will be used in the following sections to parametrise physical quantities. Dimensions are used to differentiate quantities of different kinds. For example, quantities of 10 kg and 10 m have the same magnitude, but are incomparable since they have the dimensions of mass and length, respectively. In the ISQ there are seven base quantities, including length, mass, and time, corresponding to seven base dimensions, which we will consider later in this article. The base dimensions are each denoted by a symbol, such as L, M, and T respectively, and a dimension is then a product of such symbols, each raised to an integer power. For example, the area quantity is represented by the dimension L², and the velocity quantity by L · T⁻¹. Since we wish to support different unit systems, we here support a generic dimension system based on vectors.

In a type-theoretical context, dimensions can be seen as a parameter for physical quantities. Specifically, we can conceptually parametrise a quantity by its dimension, and since the equality and order relations are typically homogeneous, we can only compare quantities with the same dimension. However, to achieve this in a theorem prover like Isabelle/HOL, which lacks dependent types, we need to characterise dimension types using a different mechanism. In our case, we use type classes, which effectively allow us to isolate a given subset of type constructors that can be used to define dimensions. We first need to characterise a universe that these type constructors will be closed under. Consequently, we begin by defining a universe of dimensions, and then later use type classes to effectively define a homomorphism between dimension types and this universe.
This overall approach is illustrated in Figure 1. We define (1) the dimension universe; (2) a type class (dim-type) that syntactically characterises a class of types denoting dimensions; and (3) a set of unitary types and type constructors that instantiate dim-type and can be used to parametrise quantities. Effectively, this achieves an inductively defined family of types over the dimension arithmetic operators. Our approach is generic, and can be applied to different measurement systems, though our focus is on the ISQ for the moment.

A. Universe of Dimensions

We begin by defining the dimension universe, the core operators for constructing dimensions, and their properties. If we assume there are n ∈ N base quantities, then a dimension has the form d₁^x₁ · d₂^x₂ · ... · dₙ^xₙ, a product of dimension symbols (dᵢ), each raised to a power drawn from the vector x. The dimension vector x is encoded in Isabelle using the typedef command, which introduces a new type characterised by a non-empty subset of an existing type. In this case, we introduce a new type dimvec with two parameters, N and I, choosing the set UNIV of all total functions from I to N as the characteristic set. In future type definitions, we will use typedef to further restrict the subset.

Conceptually, a dimension vector is simply a total function from an enumerable index type I, of the possible dimensions, to a numeric type N, which should minimally form a ring (e.g. Z). The enumerable (enum) sort constraint (I :: enum) requires that I is isomorphic to a list of values, and is thus also a finite type. In the ISQ we have I = {L, M, T, I, Θ, N, J}, for example.

Next, we define the core dimension constructors. For simplicity, we present these as functions using λ-terms, but they are technically defined using the lifting package [16].

Definition 2 (Dimension Constructors).

  1     ≜ (λ i. 0)
  b(i)  ≜ 1(i → 1)
  x · y ≜ (λ i. x(i) + y(i))
  x⁻¹   ≜ (λ i. − x(i))

Here, 1 denotes a null dimension, which does not map to any physical quantity. It can characterise dimensionless quantities, such as mathematical constants (π, e, etc.) and functions. The function b(i), for i ∈ I, constructs a base dimension from the base quantity i by updating the mapping for i in 1 to have the power 1. A base dimension thus has exactly one entry in the vector mapping to 1, with the others all 0. We also define a predicate is-BaseDim :: (N, I) dimvec ⇒ B, which determines whether a dimension vector corresponds to a base dimension. A product of two dimensions (x · y) simply sums together all of the powers pointwise, and an inverse (x⁻¹) negates each of the powers. We can also now obtain division using the usual definition: x/y ≜ x · y⁻¹. With these definitions, we can prove the following group theorem:

Theorem 1. If (N, +, 0, −) forms an abelian group, then ((N, I) dimvec, ·, 1, ⁻¹) also forms an abelian group.

The abelian group laws can therefore be used to equationally rewrite dimension expressions, which is automated using Isabelle's simplifier. Another avenue to efficient proof for dimensions is provided by the Isabelle code generator [11]. Since the set of base quantities I is enumerable, we can always convert a dimension vector to a list of N, and vice-versa. We achieve this using a function mk-dimvec, which converts a list of N with length |I| to a dimension vector in N.

Definition 3 (Converting Lists to Dimensions).

  mk-dimvec(ds) ≜ if length(ds) = |I| then (λ d. ds(enum-ind(d))) else 1

Since I is enumerable, every dimension can be assigned a natural number, which also denotes its position in the underlying list. The function enum-ind :: (I :: enum) ⇒ N extracts this positional index of a value in an enumerable type. For the ISQ, we have enum-ind(L) = 0 and enum-ind(T) = 2, for example. We can then construct a dimension from a list ds simply by looking up the value at the enumeration index.
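The mechanisation itself is in Isabelle/HOL, but the algebra of Definitions 2 and 3 is easy to illustrate executably. The following Python sketch (all names are hypothetical, chosen for this illustration only) represents a dimension vector as a list of seven integer powers, as in the list conversion of Definition 3:

```python
# ISQ dimension vectors: 7 integer powers, indexed in the order
# (Length, Mass, Time, Current, Temperature, Amount, Intensity).
NUM_BASE = 7

def one():
    # The null dimension 1: every power is zero.
    return [0] * NUM_BASE

def base(i):
    # b(i): the base dimension for quantity index i.
    d = one()
    d[i] = 1
    return d

def times(x, y):
    # x · y: pointwise sum of the powers.
    return [a + b for a, b in zip(x, y)]

def inverse(x):
    # x^-1: negate every power.
    return [-a for a in x]

def divide(x, y):
    # x / y = x · y^-1.
    return times(x, inverse(y))

L, M, T = base(0), base(1), base(2)

# The abelian group laws of Theorem 1 hold pointwise on the lists.
assert times(L, inverse(L)) == one()
assert times(L, T) == times(T, L)

# Velocity is L · T^-1, and (L · T^-1) · T = L.
velocity = divide(L, T)
assert times(velocity, T) == L
```

This also mirrors the list-based code equations used for the code generator, where dimension multiplication becomes pairwise addition over equal-length lists.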
Every possible dimension can be constructed using mk-dimvec, and so we can use it as a so-called "code datatype" for the Isabelle code generator. Dimensions are then encoded in SML or Haskell as an algebraic datatype with a single constructor corresponding to mk-dimvec, for example:

  datatype ('a, 'b) dimvec = Mk_dimvec of 'a list

We then prove code equation theorems for the group operators, which are homomorphism laws and enable efficient execution:

Theorem 2. For a dimension vector space (N, I) dimvec, with |xs| = |ys| = |I|, the following code equations hold:

  1 = mk-dimvec (replicate |I| 0)
  mk-dimvec(xs) · mk-dimvec(ys) = mk-dimvec (map (λ(x, y). x + y) (zip xs ys))
  (mk-dimvec(xs))^n = mk-dimvec (map (λ x. n · x) xs)
  (mk-dimvec(xs))⁻¹ = mk-dimvec (map (λ x. − x) xs)

These theorems give concrete definitional equations for the executable functions on the datatype. The null dimension is a list of zero powers of length |I|. Multiplication of two equal-length lists xs and ys is pairwise addition of their elements. Raising to the nth power multiplies each list element by n, and taking the inverse negates each element. With these equations we can perform efficient arithmetic on dimensions constructed from lists.

Dimensions in the ISQ are represented using the concrete dimension index type sdim:

Definition 4 (ISQ Base Quantities).

  datatype sdim = Length | Mass | Time | Current | Temperature | Amount | Intensity
  type-synonym Dimension = (Z, sdim) dimvec

It suffices to show that sdim is enumerable, using a type class instantiation, and then we can create a specific type synonym, Dimension, for dimension vectors in the ISQ. For convenience, we then define dimension vectors for each of the base quantities, for example L ≜ b(Length).

B. Dimension Types

Having defined our dimension universe, the next step is to characterise the family of dimension types. These dimension types will be used to parametrise our quantities, and ensure that only quantities of the same dimension may be compared.
We avoid the need for dependent types by first introducing a type class for dimension types.

Definition 5 (Dimension Type Classes).

  class dim-type = unitary +
    fixes dim-ty-sem :: D itself ⇒ Dimension

  class basedim-type = dim-type +
    assumes is-BaseDim : is-BaseDim(QD(D))

A type class characterises a family of types that each implement a given function signature with certain properties, such as algebraic structures like monoids and groups. The class command introduces a type class with a given name, potentially extending existing classes. The fixes subcommand declares a new typed symbol in the signature, and assumes introduces a property of the symbols in the signature.

The dim-type class characterises a unitary type D (i.e. a type with cardinality 1) and associates it with a particular dimension. The type D itself represents a type as a value in Isabelle/HOL. Thus, dim-ty-sem can be seen as a function from types inhabiting the dim-type class to particular dimensions, as shown in Figure 1. We can use the syntactic constructor TYPE(α) to obtain a value of type α itself, for a particular type α. This effectively introduces an isomorphism between dimensions at the value level and at the type level. For convenience, we introduce the notation QD(D) ≜ dim-ty-sem TYPE(D), which obtains the dimension of a given dimension type. The class basedim-type further specialises dim-type by requiring that the mapped dimension is a base dimension.

We use these classes to capture the set of type constructors for dimension types. First, we construct types to denote the base dimensions, as unitary types. For example, we define the type Length as below:

  typedef Length = (UNIV :: unit set)

which exploits the fact that a type definition generates a fresh type name from a set (in this case, the set that contains the only element of the unit type). Though there is a seeming clash with the Length constructor introduced in Definition 4, these names inhabit different name spaces.
Length here is a "tag type" whose members do not convey information, but which represents a dimension type syntactically. We define seven such types, one for each of the ISQ base quantities, and also a further special type called 1, which corresponds to a dimensionless quantity. Each of the base dimension types instantiates the basedim-type class by mapping to the corresponding dimension symbol introduced in the previous section, such that, for example, QD(Length) = Length.

Next, we introduce the arithmetic operators for dimensions at the type level. The product and inverse type constructors, DimTimes and DimInv, are similarly tag types, but their parameters must inhabit the dim-type class. This ensures that the dimension types are closed under products and inverses. Using these type constructors and the base dimension types, we can inductively define algebraic dimensions at the type level.

We assign the type constructors implementations of dim-ty-sem that link them to the underlying dimension operators: the semantics of a DimTimes type calculates the underlying value-level dimension of each parameter D₁ and D₂ and multiplies them together, while the DimInv type similarly calculates the dimension and then takes the inverse. We give these type constructors the usual mathematical syntax, so that we can write dimension types like M · L and T⁻¹. We also define a type synonym for division, namely (D₁, D₂) DimDiv ≜ D₁ · D₂⁻¹, and give it the usual syntax. Moreover, we define a fixed number of powers and inverse powers at the type level, such as D⁻³ = (D · D · D)⁻¹.

We can now also create the set of derived dimensions specified in the ISQ using type synonyms. For example, we define Velocity ≜ L · T⁻¹ and Pressure ≜ L⁻¹ · M · T⁻², which provides a terminology of dimensions for use in formal specifications.
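The role of dim-ty-sem can be caricatured executably: each tag type carries its value-level dimension, and the type constructors combine these dimensions. The Python sketch below is a loose, dynamic analogue only (Isabelle enforces this statically through type classes; all names here are hypothetical), with dimension vectors as 7-tuples of powers in the order (L, M, T, I, Θ, N, J):

```python
# Tag "types" for base dimensions, each carrying its dimension vector.
class Length:
    dim = (1, 0, 0, 0, 0, 0, 0)

class Time:
    dim = (0, 0, 1, 0, 0, 0, 0)

class DimTimes:
    # Product constructor: its dimension is the product (pointwise sum
    # of powers) of its two parameters' dimensions.
    def __init__(self, d1, d2):
        self.dim = tuple(a + b for a, b in zip(d1.dim, d2.dim))

class DimInv:
    # Inverse constructor: negate every power.
    def __init__(self, d):
        self.dim = tuple(-a for a in d.dim)

def QD(d):
    # An analogue of QD(D): the value-level dimension of a dimension type.
    return d.dim

# Velocity = L · T^-1
Velocity = DimTimes(Length, DimInv(Time))
assert QD(Velocity) == (1, 0, -1, 0, 0, 0, 0)

# L · T^-1 · T has the same value-level dimension as L, even though the
# two "type" expressions are syntactically different.
assert QD(DimTimes(Velocity, Time)) == QD(Length)
```

The last assertion previews the normalisation problem addressed next: semantically equal dimensions can have distinct syntactic forms.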
We show further examples in Figure 2, which also demonstrates the mathematical syntax for dimensions implemented in Isabelle/HOL.

C. Dimension Normalisation

Unlike dimensions at the value level, dimension types with different syntactic forms are incomparable, because they are distinct type expressions. For example, it is intuitively a fact that L · T⁻¹ · T = L, which can be proved using the group laws. However, at the type level, L · T⁻¹ · T and L are different type expressions, and no built-in normalisation is available in Isabelle. As a result, we need to implement our own normalisation function, normalise(D), for ISQ dimensions in Isabelle/ML, so that quantities over dimensions with distinct syntactic forms can be related.

Our normalisation function evaluates the dimension vector of a dimension expression, and then uses this to produce a normal form. We implement dimension type evaluation using the ML function typ-to-dim :: typ ⇒ int list. It converts types formed of the base dimensions and dimension arithmetic operators into a dimension vector list, using the representation given in Definition 3. For example, typ-to-dim(L) = [1, 0, 0, 0, 0, 0, 0] and typ-to-dim(D⁻¹) = map (λ x. − x) (typ-to-dim(D)).

Having evaluated the dimension expression, we can use it to construct the normal form. This is an ordered dimension expression of the form L^x1 · M^x2 · T^x3 · ... · J^x7, except that we omit terms where xᵢ = 0. If every such term is 0, then the function produces the dimensionless quantity 1. As an example, normalise(T⁴ · L⁻² · M⁻¹ · I² · M) yields the dimension type L⁻² · T⁴ · I². This normalisation function is used later in this paper to facilitate coercion between quantities with distinct dimension expressions.

IV. Physical Quantities and Measurement

In this section we turn our attention to quantities themselves. As for dimensions, we will model quantities at both the value and the type level.
We also introduce the concept of a measurement system, which is used to specify the units being used for the different dimensions, such as metres for L and seconds for T.

A. Quantity Universe and Measurement Systems

We specify our quantity universe as a record with fields for the magnitude and dimension of the quantity. The Quantity type is parametric over a numeric type N (e.g. Q, R), which should form a field, and the dimension index type I. The magnitude is then a number in N, and the dimension a dimension vector over I. We can now specify the core arithmetic operators on quantities. For convenience of presentation, we use tuple syntax (x, D), though in Isabelle the record fields are used.

Definition 8 (Quantity Arithmetic Operators).

  0 ≜ (0, 1)
  1 ≜ (1, 1)
  (x, D₁) · (y, D₂) = (x · y, D₁ · D₂)
  (x, D)⁻¹ = (x⁻¹, D⁻¹)
  (x, D₁) / (y, D₂) = (x / y, D₁ / D₂)
  (x, D) + (y, D) = (x + y, D)
  (x, D) − (y, D) = (x − y, D)
  (x, D₁) ≤ (y, D₂) ⇔ (x ≤ y ∧ D₁ = D₂)

The arithmetic operators are overloaded in Isabelle/HOL, which is why they can validly appear on both sides of these equations. The "0" and "1" quantities are dimensionless quantities with magnitude 0 and 1, respectively. Multiplication, inverse, and division are total operations that simply distribute through the pair. When multiplying two quantities, we multiply both the magnitudes and the dimensions. For example, (7, L · T⁻¹) · (2, T) = (14, L). In contrast, addition and subtraction are partial operators that may be applied only when the two quantities have the same dimension; in Isabelle/HOL, the value of an addition or subtraction of quantities of different dimensions is unspecified. Finally, the order on quantities is simply the order on the magnitudes, with the additional requirement that the two dimensions are equal.

Quantities as formalised so far specify the form of dimension, but not the system of units being employed.
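As an aside, the value-level operators of Definition 8 can be illustrated with an executable Python sketch (names are hypothetical; dimensions are 7-tuples of integer powers in the order L, M, T, I, Θ, N, J):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    mag: float
    dim: tuple  # 7 integer powers (L, M, T, I, Θ, N, J)

    def __mul__(self, other):
        # Multiplication is total: multiply magnitudes, sum the powers.
        return Quantity(self.mag * other.mag,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

    def __add__(self, other):
        # Addition is partial: only defined for equal dimensions.
        if self.dim != other.dim:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.mag + other.mag, self.dim)

L_dim   = (1, 0, 0, 0, 0, 0, 0)
T_dim   = (0, 0, 1, 0, 0, 0, 0)
vel_dim = (1, 0, -1, 0, 0, 0, 0)

# (7, L · T^-1) · (2, T) = (14, L), as in the text.
assert Quantity(7, vel_dim) * Quantity(2, T_dim) == Quantity(14, L_dim)
```

The partiality of addition is modelled here by raising an exception; in the Isabelle mechanisation the value is simply left unspecified.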
For this, we extend the Quantity record with an additional field, unit-sys, to create "measurement systems". A measurement system is a quantity that specifies the system of units being used via an additional type parameter S, which must inhabit the type class unit-system. A unit system type is a unitary type that effectively allows us to tag quantities. This allows us to distinguish quantities using different systems of units, and so prevent improper mixing. For example, the presence of the SI tag means that a quantity of length is fundamentally measured in metres, whereas the presence of a tag such as BIS may indicate that length is measured in yards. Later, we will use these tags to facilitate type-safe conversions between different unit systems. All the arithmetic operators can be straightforwardly lifted to measurement systems. Since all such functions are monomorphic (e.g. of type α ⇒ α ⇒ α), mixing of systems is avoided by construction.

B. Dimension Typed Quantities

Having defined our universe for quantities, we next enrich this representation with type-level dimensions. For expediency, we assume that all such quantities also have a measurement system attached. Moreover, we focus on quantities with dimensions from the ISQ.

Lifting of the arithmetic operators x + y and x − y is straightforward for typed quantities, since they are monomorphic and only defined when the dimensions of x and y agree. We also define a scalar multiplication scaleQ :: N ⇒ N[D, S] ⇒ N[D, S], with notation n *_Q x, which scales a quantity by a given number without changing the dimension. We can then show that typed quantities form an additive abelian group, and a real vector space with ( *_Q ) as the scalar multiplication operator.

Things are more involved when dealing with general multiplication and division, since these need to perform dimension arithmetic at the type level.
For example, if we have quantities x :: R[I, SI] and y :: R[T, SI], then multiplication of x and y is well-defined, and should have the type R[I · T, SI]. As a result, we introduce bespoke functions qtimes, qinverse, and qdivide. The first function multiplies two quantities with the same measurement system, and "multiplies" the dimension types using the type constructors introduced in §III-B. Technically, no multiplication computation takes place; rather, a type constructor denoting multiplication is inserted. Similarly, qinverse represents the inverse of the parametrised dimension, and qdivide stands for a division. What is achieved here is analogous to dependent types, though we require additional machinery for normalising dimension types (cf. §III-C).

The definitions of qtimes and qinverse are obtained simply by lifting the corresponding functions on quantities in Definition 8, which is technically achieved using the lifting package [16]. In order to do this, we need to prove that the invariant of the QuantT type is satisfied, which involves showing that the family of typed quantities is closed under the two functions. For qtimes, we need to prove that dim(x · y) = QD(D₁ · D₂) whenever dim(x) = QD(D₁) and dim(y) = QD(D₂), which follows simply from Definitions 6 and 8. For convenience, we give these functions the usual notation x • y, x⁻¹, and x/y, but in Isabelle we embolden the operators to syntactically distinguish them. With qtimes and qinverse, we can also define positive and negative powers, such as x⁻² = (x • x)⁻¹.

Equality (x = y) in HOL is a homogeneous function of type α ⇒ α ⇒ B; therefore, it cannot be used to compare objects of different types. Consequently, it cannot be used to compare quantities whose dimension types have different syntactic forms (e.g. L · T⁻¹ · T and L).
This motivates the definition of heterogeneous (in)equality relations for quantities, defined in terms of (=) and (≤) on the underlying quantities. They ignore the dimension types, but the underlying dimensions must nevertheless be the same, as per the definitions in §IV-A. We give these relations the notation x ≅ y and x ≲ y, respectively. The relation (≅) forms an equivalence relation, and (≲) forms a preorder. Moreover, (≅) is a congruence relation for (•), (⁻¹), and ( *_Q ).

C. Proof Support

We implement an interpretation-based proof strategy for typed quantity (in)equalities, which allows us to split a conjecture into two parts: (1) equality of the magnitudes; and (2) equivalence of the dimensions. This is supported by a function magQ :: N[D, S] ⇒ N, with syntax ⟦−⟧_Q, which extracts the magnitude from a typed quantity. We can calculate magnitudes using interpretation laws like the ones below:

  ⟦x + y⟧_Q = ⟦x⟧_Q + ⟦y⟧_Q
  ⟦x • y⟧_Q = ⟦x⟧_Q · ⟦y⟧_Q

Such laws derive directly from the definitions of the quantity operators in Definition 8. The equation for addition implicitly makes use of the fact that x and y have the same dimension, so that addition is well-defined in the quantity universe. We then have the following transfer theorems, for the case of two quantities with the same (syntactic) dimensions:

Theorem 3 (Quantity Transfer Laws).

  x = y ⇔ ⟦x⟧_Q = ⟦y⟧_Q
  x ≤ y ⇔ ⟦x⟧_Q ≤ ⟦y⟧_Q

In both cases, we need not check the equivalence of the dimensions: by construction, x and y have the same type, and so also the same dimensions. It is sufficient simply to check that the relation holds of the underlying magnitudes. For the heterogeneous (in)equality relations, we have analogous transfer theorems that additionally check that the underlying dimensions agree. We can then prove heterogeneous equalities by calculation of the underlying magnitudes and dimensions, and use of the numeric and dimension laws. We supply a proof method called si-simp, which uses the simplifier to perform transfer and interpretation, and additionally invokes field simplification laws.
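The transfer strategy, splitting a conjecture into a magnitude part and a dimension part, can be illustrated with a small Python sketch (names hypothetical; quantities as magnitude/dimension pairs, with dimensions as 7-tuples of powers):

```python
def mag(q):
    # Analogue of magQ: extract the magnitude of a quantity.
    return q[0]

def dim(q):
    # Extract the value-level dimension of a quantity.
    return q[1]

def heq(x, y):
    # Heterogeneous equality: the magnitudes agree and the underlying
    # value-level dimensions agree (the dimension *types* are ignored).
    return mag(x) == mag(y) and dim(x) == dim(y)

L = (1, 0, 0, 0, 0, 0, 0)
T = (0, 0, 1, 0, 0, 0, 0)

x = (5.0, L)
y = (5.0, L)
z = (5.0, T)  # same magnitude, but dimension T

assert heq(x, y)       # equal magnitude and dimension
assert not heq(x, z)   # magnitudes agree but dimensions differ
```

For quantities of the same (syntactic) dimension, the dimension check is redundant, which is exactly what the transfer laws of Theorem 3 exploit.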
An additional method called si-calc also compiles dimension vectors (cf. Definition 3) using the code generator, and can thus efficiently prove dimension equalities. We can, for example, prove the following algebraic laws automatically:

Theorem 5 (Quantity Algebraic Laws).

  a *_Q (x + y) = (a *_Q x) + (a *_Q y)
  x • y ≅ y • x
  (x • y)⁻¹ ≅ x⁻¹ • y⁻¹

D. Coercion and Dimension Normalisation

The need for heterogeneous quantity relations (≅, ≲) can be avoided by the use of coercions to convert between two syntactic representations of the same dimension. Moreover, we can use Isabelle's sophisticated syntax and checking pipeline to normalise dimensions, and so automatically coerce quantities to a normal form. This improves the usability of the library, since the usual relations (=) and (≤) can be used directly.

We implement a function dnorm :: N[D₁, S] ⇒ N[D₂, S], which can convert between quantities with different dimension forms. In order to use it effectively, it is necessary to know the target dimension D₂ in advance. It is defined below:

  dnorm(x) ≜ (if QD(D₁) = QD(D₂) then toQ(fromQ(x)) else 0)

The function checks whether the source and target dimensions (D₁ and D₂) are the same. If they agree, then it performs the coercion by erasing the types with fromQ and reinstating the new dimension type with toQ. Otherwise, it returns a valid quantity of the target dimension, but with magnitude 0. For example, if we have x :: R[L · T⁻¹ · T, SI], then we can use dnorm(x) :: R[L, SI] to obtain a quantity with an equivalent dimension, since QD(L · T⁻¹ · T) = QD(L). In general, for two equivalent quantities x ≅ y, we have that dnorm(x) = y.

Next, we extend Isabelle's checking pipeline to allow dimension normalisation, so that D₂ can be automatically calculated. We do this by implementing an SML function, check-quant, which takes a term and enriches it with dimension information.
Whenever it encounters an instance of dnorm(t), it extracts the type of t, which should be N[D, S]. This being the case, we enrich the instance of dnorm to have the type N[D, S] ⇒ N[normalise(D), S]. We then insert check-quant into Isabelle's term checking pipeline. Technically, this is achieved using the Isabelle/ML API function Syntax_Phases.term_check, which allows us to add a new phase into the term checking process. In this case, we add it after type inference has occurred, so that we can use the unnormalised dimension type expression as the input to check-quant. The soundness of this transformation does not depend on the correctness of normalise, since if an incorrect dimension is calculated, dnorm will return 0. Nevertheless, the effect is to achieve something akin to dependent types, but in a first-order polymorphic type system.

V. Unit Systems and the SI

In this section we implement units generally, and in particular the SI unit system. We then implement a formal ontology of derived units and prefixes, drawn from the VIM standard [4] and SI Brochure [5]. Our ontology consists of the 7 base units, 32 derived and accepted units, and 24 unit prefixes.

An SI unit is simply a quantity in the ISQ with magnitude 1, which is typically combined with a magnitude to describe a measurement. A base unit for a particular unit system S is a quantity whose dimension is one of the base dimensions. Base units are described by the predicate is-base-unit :: N[D, S] ⇒ B, defined as

  is-base-unit(x) ≜ (mag(x) = 1 ∧ is-BaseDim(dim(x)))

We introduce the constructor BUNIT(D, S), which constructs a base unit from the base dimension type D in the system S, with ⟦BUNIT(D, S)⟧_Q = 1. For the SI, we create a unitary type SI, and instantiate the unit-system class. We then define the 7 base units of the SI. Since the second is very often used as the unit of time, we characterise it as a polymorphic base unit, so that it can effectively exist in several systems.
For convenience we create type synonyms, which allow us to specify units at the type level, for example N metre N[Length, SI], which is a quantity of dimension length in the SI. We now at last have the facilities to write quantities with SI units. At the basic level, we can write quantities like 20 * Q metre, which is the metre unit scaled by 20, and has the inferred type of R[L, SI]. We can also write compound units, such as 10 * Q (metre • second −1 ), which has inferred type R[L · T −1 ]. We can also prove unit equations like (metre • second −1 ) • second metre using the si-calc proof strategy, as shown below: Similarly we can use coercions to prove conjectures such as dnorm((5 * Q (metre/second)) • (10 * Q second)) = 50 * Q metre. We can now turn our attention to constructing a formal ontology of derived SI units in Isabelle/HOL taken from the VIM and SI Brochure [5, page 137]: Definition 12 (Core Derived Units). Isabelle can infer the dimension type of each such unit, for example watt has the dimension M · L 2 · T −3 . Radians and steradians have the dimensions L · L −1 and L 2 · L −2 , which are distinct dimension types, but both semantically equal to the dimensionless quantity 1. Interestingly, it has been argued elsewhere that there should be a separate angle dimension [17], [3], which would be necessary to formally distinguish them. Nevertheless, we choose to implement the SI as it is defined, though future extension is possible. The SI defines 24 prefixes, which can be used to scale SI units. We give a selection of these below: Definition 13 (SI Prefixes). Prefixes are not quantities, but simply abstract numbers in N, which can be used to scale units. For example, we can write a quantity such as 40 * Q milli * Q metre. The SI also has a notion of "accepted" units [5, page 145], which are quantities often used as units, but not technically SI because they have a magnitude other than 1.
We give a selection of these below: Definition 14 (Accepted Non-SI Units). minute 60 * Q second hour 60 * Q minute day 24 * Q hour degree (π/180) * Q radian litre 1/1000 * Q metre 3 tonne = 10 3 * Q kilogram These quantities can readily be treated as units in our mechanisation, though the type of such a quantity does not reflect the unit. For example, the units day, hour, and year all have the dimension T, as expected, meaning they are comparable. We can therefore prove unit equation theorems such as 1 * Q hour = 3600 * Q second, 1 * Q day = 86400 * Q second, and 1 * Q hectare = 1 * Q (hecto * Q metre) 2 using the si-simp method, which can act as the basis for unit conversions. Similarly, we can use prefixes to express relations between derived quantities, such as 25 * Q metre/second = 90 * Q (kilo * Q metre)/hour. The SI units are defined in terms of exact values for 7 physical constants [5, page 127]. We define these in Isabelle: k is the Boltzmann constant. N A is the Avogadro constant. K cd is the luminous efficacy of monochromatic radiation of frequency 540 · 10 12 Hz. These physical constants serve to ground measurements using a particular SI unit. With these constants, we can arrange their definitional equations to verify defining theorems for each unit, as shown below: Theorem 6 (Foundational Equalities). newton kilogram • metre • second −2 pascal kilogram • metre −1 • second −2 volt kilogram • metre 2 • second −3 • ampere −1 farad kilogram −1 • metre −2 • second 4 • ampere 2 ohm kilogram • metre 2 • second −3 • ampere −2 Also, temperature in the SI is defined in kelvin, but it is more usual to express temperature in terms of degrees Celsius. We therefore define T • C (T + 273.15) * Q kelvin, where 273.15 is the freezing point of water.
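The unit equations above can be checked numerically outside the prover. The following plain-Python fragment is an illustrative sketch (the variable names are ours, and it replaces the si-simp proofs with floating-point arithmetic over the SI base units):

```python
# Numeric consistency check (illustrative only, not the Isabelle proofs):
# represent each unit by its magnitude over the SI base units.
second, metre, kelvin = 1.0, 1.0, 1.0
minute = 60 * second
hour = 60 * minute
day = 24 * hour
kilo = 10 ** 3

def celsius(T):
    """Temperature in kelvin, following T°C = (T + 273.15) kelvin."""
    return (T + 273.15) * kelvin

assert 1 * hour == 3600 * second
assert 1 * day == 86400 * second
assert 25 * metre / second == 90 * (kilo * metre) / hour
assert celsius(0) == 273.15 * kelvin
print("unit equations hold numerically")
```

Of course, such a check carries none of the type-level guarantees of the mechanisation; it only confirms that the quoted magnitudes are mutually consistent.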
We can prove the corresponding unit equations, which show equivalences between SI units: astronomical-unit 149597870700 * Q metre parsec 648000/π * Q astronomical-unit The light year, astronomical unit, and parsec are all quantities of dimension L. The light year is the distance travelled by light in one Julian year. We define it by multiplying c by julian-year and normalising the result. The astronomical unit is the approximate distance between the earth and the sun. The parsec is the distance at which 1 astronomical unit subtends an angle of one arcsecond. We can give the parsec an exact mathematical value using Isabelle's Cauchy real characterisation of π. VI. Unit Conversions and Non-SI Systems In this section we describe unit conversion schemas, which can be used to convert quantities between different unit systems. Aside from the SI, other unit systems remain in widespread use today, notably other metric systems such as CGS (centimetre-gram-second), and imperial systems, including the United States Customary system (USC) and the British Imperial System (BIS). Interoperability with these systems therefore remains important. With our present system of quantities, we can already describe imperial units, in terms of the SI units, as shown below: Definition 18 (Imperial Units in the SI). Here, we define the international yard and pound, units of length and mass, which were given exact metric definitions in 1959. From these, we derive units like the mile and stone. The pint follows the imperial definition standardised in the UK in 1995, and similarly for the gallon. Such units can then be used to construct quantities in the usual way. However, this masks an inherent problem with units like yards, pounds, and pints: they have several definitions depending on the context. Whilst the international yard is 0.9144 metres, the BIS yard has a slightly different definition of around 0.9143992 metres.
This definition is based on a measurement of the imperial standard yard, a physical measure that was manufactured in 1845 and, after rigorous testing, made the official standard in 1855 by Act of Parliament [18], [19]. The standard yard was then measured in 1895 against the metric standard, and found to have a length of 0.9143992 metres 3 . On the other hand, the USC has a slightly different definition again of around 0.9144018 metres, standardised by the 1866 Metric Law [20]. Moreover, the volume unit "gallon" in the BIS and USC has quite different definitions of 277.421 and 231 cubic inches (the 1707 "wine gallon" [19]), respectively, and similarly for derived units, such as the pint. These inconsistencies are particularly a problem with historical measurements, which are more likely to use one of the older definitions. Consequently, when precise measurements are crucial, it is necessary to characterise explicitly the unit system being employed, and define conversion factors between different systems. Even for metric systems, it is sometimes desirable to use different units, such as in the CGS system, where centimetres and grams are used as base units. In this case, we would also like the type system to enforce compatibility between measurements. We therefore formalise both unit systems and conversion schemas. A conversion schema is a 7-tuple of rational numbers each greater than zero. Each rational number encodes a conversion factor for each of the dimensions of the ISQ. We define a type for conversion schemata, S 1 ⇒ U S 2 , which can be used to convert quantities between unit systems S 1 and S 2 . Technically, we implement conversion schemata using a record type and type definition in Isabelle/HOL, whose definition is omitted for space reasons. We define the identity conversion schema id C :: S ⇒ U S, which has 1 for each of the factors.
We can compose two conversion schemas C 1 :: S 1 ⇒ U S 2 and C 2 :: S 2 ⇒ U S 3 using the operator C 2 • C 1 :: S 1 ⇒ U S 3 , which pairwise multiplies each of the conversion factors in C 1 and C 2 respectively. Similarly we can invert C 1 using inv C (C 1 ) :: S 2 ⇒ U S 1 , which takes the reciprocal of each conversion factor. These operators induce a simple category of conversion schemas. We use conversion schemas to define a quantity conversion function qconv :: (S 1 ⇒ U S 2 ) ⇒ N[D, S 1 ] ⇒ N[D, S 2 ] , whose definition is below: Definition 19 (Quantity Conversion). qconv C (m, d) = ((∏ 1≤i≤7 C i d i ) · m, d) Given a quantity (m, d), and a conversion schema C, the qconv function calculates the conversion factor for the magnitude m by raising each element of C to the corresponding dimension element d i . For example, if we wish to convert cubic (international) yards to cubic metres, then we first need the conversion factor from yards to metres, which is 0.9144. Then, we take this value and raise it to the power of 3, and so the overall conversion factor is 0.764555. The dimension itself is unchanged by this operation, as expected. The BIS is a non-metric standard for weights and measures in the UK, that was passed by an act of the UK parliament in 1824. It specifies the standard units for length and mass as the yard and pound, respectively. We model the BIS by creation of a unit system with the type BIS, and define yard BUNIT(L, BIS) and pound BUNIT(M, BIS). Moreover, we can create derived units such as foot 1/3 * Q yard, inch 1/12 * Q foot, and gallon 277.421 * Q inch 3 . Then, we can formally specify that certain quantities are measured according to the BIS. We can convert quantities between the SI and BIS by the creation of a suitable conversion schema BSI :: BIS ⇒ U SI. The factors for length and mass required for this conversion are 0.9143993 and 0.453592338, respectively.
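The qconv computation of Definition 19 is easy to sketch concretely. The following Python fragment is our own rendering (not the Isabelle definition); the 0.9144 yard-to-metre factor and the cubic-yard example are the ones quoted in the text:

```python
# Sketch of qconv: scale the magnitude by the product of each schema
# factor raised to the matching dimension exponent; the dimension vector
# itself is unchanged.
def qconv(schema, mag, dim):
    factor = 1.0
    for c, d in zip(schema, dim):
        factor *= c ** d
    return factor * mag, dim

# Length factor 0.9144 (international yard -> metre), 1 for the other six
# ISQ dimensions; convert 1 cubic yard (dimension L^3) to cubic metres.
yd_to_m = (0.9144, 1, 1, 1, 1, 1, 1)
vol, dim = qconv(yd_to_m, 1.0, (3, 0, 0, 0, 0, 0, 0))
print(round(vol, 6))  # 0.764555, matching the factor quoted in the text
```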
Since time is measured in seconds, and the other dimensions have no interpretation, we set them to 1 in the conversion schema. We can then, for example, convert a BIS quantity of 1 ounce to grams using the conversion qconv BSI (1 * Q ounce) ≈ 37.8 * Q gram 4 . We also create unit systems for the USC and CGS systems, with suitable conversion factors. Whilst we can use quantity conversions between systems directly, it is often more convenient to use the SI as a frame of reference for different unit systems. Indeed, this is a key application of the SI for resolving mismatches between unit systems. We therefore create a type class to represent metrification: Definition 20 (Metrifiable Unit Systems). class metrifiable = unit-system+ fixes convschema :: S itself ⇒ (S ⇒ U SI) A unit system S is metrifiable if there is a conversion schema from S to SI. Consequently, the BIS, USC, CGS, and the SI itself are all metrifiable. Hence, for any pair of metrifiable systems, S 1 and S 2 , we define a generic conversion function QMC S 1 →S 2 :: N[D, S 1 ] ⇒ N[D, S 2 ], which performs conversion via metrification. This function first uses the conversion schema for S 1 to convert to the SI system, and then uses the inverse schema for S 2 to convert from SI to S 2 . For example, we can show that QMC CGS→BIS (12 * Q centimetre) ≈ 4.724 * Q inch. We can therefore use the Isabelle type system to precisely specify what system a measurement is made in, and seamlessly convert between a variety of other systems. VII. Conclusions In this paper, we have presented a comprehensive mechanisation of the ISQ in Isabelle/HOL. Our mechanisation allows us to precisely define the dimension, unit, and unit system that are employed by a particular system. Moreover, we can use the Isabelle type system to ensure that only measurements of the same dimension and unit system can be combined in a calculation.
We have presented a substantial theory development of about 2500 lines of definitions and proofs that captures the ISQ and SI as defined in the international standard of the VIM [4]. The theories available on the Isabelle/HOL Archive of Formal Proofs provide a type system for physical quantities and measurements that is by construction sound and complete. Given the fact that Isabelle's type-system is far from being trivial, we believe that this is both significant and useful for applications in the hybrid system domain. We provided a validation of our theory by checking the mandatory definitions and described corollaries in the VIM and the SI Brochure [5]. An earlier version of our implementation was also applied in an industrial case study on a formal model for an autonomous underwater vehicle [7], which provides further validation. There are a number of directions for future work. The current approach to handling dimension mismatches using coercions could be better automated by using the coercive subtyping mechanism [21]. This effectively extends the type inference algorithm so that type mismatches can be automatically resolved by insertion of registered coercion functions. At the same time, our approach to characterising dimension types, illustrated in Figure 1, is not specific to the ISQ, and could be generalised to other problems that are typically solved with dependent types. For example, we could normalise type expressions containing arithmetic operators to relate vectors parametrised by the length. Therefore, in future work we will investigate a generic approach using universe constructions to justify type-level functions and coercions. Fig. 1. Mapping dimension types into the dimension universe Definition 1 (Dimension Vectors). typedef (N, I) dimvec = (UNIV :: (I :: enum) ⇒ N) typedef (D 1 :: dim-ty, D 2 :: dim-ty) DimTimes = (UNIV :: unitset) typedef (D :: dim-ty) DimInv = (UNIV :: unitset) Definition 6 (Semantic Interpretation of Dimension Types).
dim-ty-sem(d :: (D 1 , D 2 ) DimTimes) = QD(D 1 ) · QD(D 2 ) dim-ty-sem(d :: (D) DimInv) = QD(D) −1 Fig. 2. Derived dimension type expressions in Isabelle/HOL Definition 7 (Quantity Universe). record (N, I :: enum) Quantity = mag :: N dim :: (int, I) dimvec Definition 9 (Measurement Systems). record (N, I :: enum, S :: unit-system) Measurement-System = (N, I) Quantity + unit-sys :: S Definition 10 (Quantity Type). typedef (N, D :: dim-type, S :: unit-system) QuantT = {x :: (N, sdim, S) Measurement-System. dim(x) = QD(D)} The (N, D, S) QuantT type represents a quantity with numeric type N, dimension type D, and unit system S. The type definition introduces an invariant that requires that the dimension of the underlying quantity x agrees with the one specified in the dimension type. At this level, we use sdim as the concrete interpretation of dimensions, as this is required by the dim-type class. For convenience, we introduce the type syntax N[D, S] to stand for (N, D, S) QuantT. Our typedef also induces two functions for converting between typed and untyped quantities: fromQ :: N[D, S] ⇒ (N, sdim, S) Measurement-System and toQ in the opposite direction. qtimes :: N[D 1 , S] ⇒ N[D 2 , S] ⇒ N[D 1 · D 2 , S] qinverse :: N[D, S] ⇒ N[D −1 , S] qdivide :: N[D 1 , S] ⇒ N[D 2 , S] ⇒ N[D 1 /D 2 , S] qequiv :: N[D 1 , S] ⇒ N[D 2 , S] ⇒ B qless-eq :: N[D 1 , S] ⇒ N[D 2 , S] ⇒ B These functions are defined simply by lifting the functions (=) Theorem 4 (Heterogeneous Transfer Laws). Given quantities x :: N[D 1 , S] and y :: N[D 2 , S], we have x y ⇔ x Q = y Q ∧ QD(D 1 ) = QD(D 2 ) . 2 • metre −2 joule kilogram • metre 2 • second −2 watt kilogram • metre 2 • second −3 coulomb ampere • second lumen candela • steradian hecto 10 2 kilo 10 3 mega 10 6 giga 10 9 deci 10 −1 centi 10 −2 milli 10 −3 micro 10 −6 Definition 15 (Defining Constants of the SI).
∆v Cs = 9192631770 * Q hertz c = 299792458 * Q (metre • second −1 ) h = (6.62607015 · 10 −34 ) * Q (joule • second) e = (1.602176634 · 10 −19 ) * Q coulomb k = (1.380649 · 10 −23 ) * Q (joule/kelvin) N A = 6.02214076 · 10 23 * Q (mole −1 ) K cd = 683 * Q (lumen/watt) ∆v Cs is the hyperfine transition frequency of the caesium 133 atom. Constant c is the speed of light in a vacuum, and h is the Planck constant. Constant e is the elementary charge. second (9192631770 * Q 1)/∆v Cs metre (c/(299792458 * Q 1)) • second kilogram (h/(6.62607015 · 10 −34 ) * Q 1) • metre −2 • second The second is equal to the duration of 9192631770 periods of the radiation of the 133 Cs atom. The metre is the length travelled by light in a period of 1/299792458 seconds. For kilogram, the equation effectively defines the unit kg m s −1 , and then applies the unit m −2 s to obtain a quantity of dimension M. Each equation is proved using si-calc, which serves to validate our implementation of the SI. Finally, we complete our ontology of derived units [5, page 137]: Definition 16 (Further Derived Units (selection)). Theorem 7 (Derived Unit Equivalences). joule newton • metre watt joule/second volt = watt/ampere farad coulomb/volt The remaining derived units from the standard are all mechanised in Isabelle. Finally, as an application of our approach, we implement a selection of astronomical units: Definition 17 (Astronomical Units). julian-year 365.25 * Q day light-year dnorm(c • julian-year) yard 0.9144 * Q metre mile 1760 * Q yard pound 0.4535937 * Q kilogram stone 14 * Q pound pint 0.56826125 * Q litre gallon 8 * Q pint
1 https://uk.mathworks.com/discovery/dimensional-analysis.html
2 https://reference.wolfram.com/language/ref/Quantity.html
3 Likely this discrepancy is due to the fact that the yard standard was slowly shrinking over time, a critical fact that was discovered later.
4 We use exact rational arithmetic for this in Isabelle/HOL, but we present an approximate decimal expansion for ease of comprehension.

References

[1] L. Burgueño, T. Mayerhofer, M. Wimmer, and A. Vallecillo, "Using physical quantities in robot software models," in Proceedings of the 1st International Workshop on Robotics Software Engineering, ser. RoSE '18. New York, NY, USA: Association for Computing Machinery, 2018, pp. 23-28.
[2] B. D. Hall, "Software for calculation with physical quantities," in 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, 2020, pp. 458-463.
[3] D. Flater, "A system of quantities from software metrology," Measurement, vol. 168, 2021.
[4] Bureau International des Poids et Mesures and Joint Committee for Guides in Metrology, "Basic and general concepts and associated terms (VIM) (3rd ed.)," BIPM, JCGM, Tech. Rep., 2012, version 2008 with minor corrections.
[5] ——, "The International System of Units (SI)," BIPM, JCGM, Tech. Rep., 2019, 9th edition.
[6] T. Nipkow and G. Snelting, "Type classes and overloading resolution via order-sorted unification," in Functional Programming Languages and Computer Architecture, 5th ACM Conference, Cambridge, MA, USA, August 26-30, 1991, Proceedings, ser. LNCS, vol. 523. Springer, 1991, pp. 1-14.
[7] S. Foster, Y. Nemouchi, C. O'Halloran, N. Tudor, and K. Stephenson, "Formal model-based assurance cases in Isabelle/SACM: An autonomous underwater vehicle case study," in Formal Methods in Software Engineering (FormaliSE 2020): Proceedings of the 8th International Conference. ACM, 2020.
[8] S. Foster and B. Wolff, "A sound type system for physical quantities, units, and measurements," Archive of Formal Proofs, October 2020, https://isa-afp.org/entries/Physical_Quantities.html, formal proof development.
[9] I. J. Hayes and B. P. Mahony, "Using units of measurement in formal specifications," Formal Aspects of Computing, vol. 7, no. 3, pp. 329-347, 1995. https://doi.org/10.1007/BF01211077
[10] L. Burgueño, T. Mayerhofer, M. Wimmer, and A. Vallecillo, "Specifying quantities in software models," Information and Software Technology, vol. 113, pp. 82-97, 2019.
[11] F. Haftmann and T. Nipkow, "Code generation via higher-order rewrite systems," in 10th Intl. Symp. on Functional and Logic Programming (FLOPS), ser. LNCS, vol. 6009. Springer, 2010, pp. 103-117.
[12] A. Kennedy, "Dimension types," in Programming Languages and Systems - ESOP '94, D. Sannella, Ed. Berlin, Heidelberg: Springer, 1994, pp. 348-362.
[13] ——, "Types for units-of-measure: Theory and practice," in Central European Functional Programming School - Third Summer School (CEFP 2009), ser. LNCS, Z. Horváth, R. Plasmeijer, and V. Zsók, Eds., vol. 6299. Springer, 2009, pp. 268-305.
[14] J. Garrigue and D. Ly, "Des unités dans le typeur," in 28ièmes Journées Francophones des Langages Applicatifs, Gourette, France, Jan. 2017. https://hal.archives-ouvertes.fr/hal-01503084
[15] S. Aragon, "The algebraic structure of physical quantities," Journal of Mathematical Chemistry, vol. 31, no. 1, May 2004.
[16] B. Huffman and O. Kuncar, "Lifting and transfer: A modular design for quotients in Isabelle/HOL," in CPP 2013, ser. LNCS, vol. 8307. Springer, 2013, pp. 131-146.
[17] M. I. Kalinin, "On the status of plane and solid angles in the International System of Units (SI)," Metrologia, vol. 56, no. 6, November 2019.
[18] P. H. Bigg and P. Anderton, "The United Kingdom standards of the yard in terms of the metre," Brit. J. Appl. Phys., vol. 15, 1964.
[19] R. E. Zupko, Revolution in Measurement: Western European Weights and Measures since the Age of Science. American Philosophical Society, 1990.
[20] A. Thompson and B. N. Taylor, "Guide for the use of the international system of units," National Institute of Standards and Technology (NIST), Tech. Rep. 811, 2008.
[21] D. Traytel, S. Berghofer, and T. Nipkow, "Extending Hindley-Milner type inference with coercive structural subtyping," in APLAS 2011, ser. LNCS, vol. 7078. Springer, 2011, pp. 89-104.
[]
[ "arXiv:dg-ga/9509001v1 6 Sep 1995 Affine connections on involutive G-structures" ]
[ "Sergey A Merkulov \nDepartment of Pure Mathematics\nGlasgow University\n\n\nUniversity Gardens\nG12 8QWGlasgowUK\n" ]
[ "Department of Pure Mathematics\nGlasgow University\n", "University Gardens\nG12 8QWGlasgowUK" ]
[]
0. Introduction. An affine connection is one of the central objects in differential geometry. One of its most informative characteristics is the (restricted) holonomy group which is defined, up to a conjugation, as a subgroup of GL(T t M) consisting of all automorphisms of the tangent space T t M at a point t ∈ M induced by parallel translations along the t-based contractible loops in M. The list of groups which can be holonomies of affine connections is disappointingly dull -according to Hano and Ozeki [H-O], any closed subgroup of a general linear group can be realized in this way. The situation, however, is very different in the subclass of affine connections with zero torsion. Long ago, Berger [Be] presented a very restricted list of possible irreducibly acting holonomies of torsion-free affine connections. His list was complete in the part of metric connections (and later much work has been done to refine this "metric" part of his list, see, e.g., [Br1] and references cited therein), while the situation with holonomies of non-metric torsion-free affine connections was and remains very unclear. One of the results that will be discussed in this paper asserts that any torsion-free holomorphic affine connection with irreducibly acting holonomy group can, in principle, be constructed by twistor methods. Another result reveals a new natural subclass of affine connections with very little torsion which shares with the class of torsion-free affine connections two basic properties -the list of irreducibly acting holonomy groups of affine connections in this subclass is very restricted and the links with the twistor theory are again very strong. The purpose of this paper is to explain the key elements of the above-mentioned twistor constructions without indulging in rather lengthy proofs.
We work throughout in the category of complex manifolds, holomorphic affine connections, etc., though many results can be easily adapted to the real analytic case along the lines explained in [M].
10.1201/9781003072393-21
[ "https://export.arxiv.org/pdf/dg-ga/9509001v1.pdf" ]
16,492,164
dg-ga/9509001
617a0e26b3e4488abb85f059a4537f6e535867c9
arXiv:dg-ga/9509001v1 6 Sep 1995

Affine connections on involutive G-structures

Sergey A. Merkulov
Department of Pure Mathematics, Glasgow University, University Gardens, G12 8QW, Glasgow, UK

0. Introduction. An affine connection is one of the central objects in differential geometry. One of its most informative characteristics is the (restricted) holonomy group which is defined, up to a conjugation, as a subgroup of GL(T t M) consisting of all automorphisms of the tangent space T t M at a point t ∈ M induced by parallel translations along the t-based contractible loops in M. The list of groups which can be holonomies of affine connections is disappointingly dull -according to Hano and Ozeki [H-O], any closed subgroup of a general linear group can be realized in this way. The situation, however, is very different in the subclass of affine connections with zero torsion. Long ago, Berger [Be] presented a very restricted list of possible irreducibly acting holonomies of torsion-free affine connections. His list was complete in the part of metric connections (and later much work has been done to refine this "metric" part of his list, see, e.g., [Br1] and references cited therein), while the situation with holonomies of non-metric torsion-free affine connections was and remains very unclear. One of the results that will be discussed in this paper asserts that any torsion-free holomorphic affine connection with irreducibly acting holonomy group can, in principle, be constructed by twistor methods.
Another result reveals a new natural subclass of affine connections with very little torsion which shares with the class of torsion-free affine connections two basic properties -the list of irreducibly acting holonomy groups of affine connections in this subclass is very restricted and the links with the twistor theory are again very strong. The purpose of this paper is to explain the key elements of the above-mentioned twistor constructions without indulging in rather lengthy proofs. We work throughout in the category of complex manifolds, holomorphic affine connections, etc., though many results can be easily adapted to the real analytic case along the lines explained in [M]. 1. Irreducible G-structures. When studying an affine connection ∇ with the irreducibly acting holonomy group G, it is suitable to work with the associated G-structure. In this section we recall some notions of the theory of G-structures. Let M be an m-dimensional complex manifold and L * M the holomorphic coframe bundle π : L * M → M whose fibres π −1 (t) consist of all C-linear isomorphisms e : C m → Ω 1 t M, where Ω 1 t M is the cotangent space at t ∈ M. The space L * M is a principal right GL(m, C)-bundle with the right action given by R g (e) = e • g. If G is a closed subgroup of GL(m, C), then a (holomorphic) G-structure on M is a principal subbundle G of L * M with the group G. It is clear that there is a one-to-one correspondence between the set of G-structures on M and the set of holomorphic sections σ of the quotient bundle π : L * M/G → M whose typical fibre is isomorphic to GL(m, C)/G. A G-structure on M is called locally flat if there exists a coordinate patch in the neighbourhood of each point t ∈ M such that in the associated canonical trivialization of L * M/G over this patch the section σ is represented by a constant GL(m, C)/G-valued function.
A G-structure is called k-flat if, for each t ∈ M, the k-jet of the associated section σ of L * M/G at t is isomorphic to the k-jet of some locally flat section of L * M/G. It is not difficult to show that a G-structure admits a torsion-free affine connection if and only if it is 1-flat (cf. [Br1]). A G-structure on M is called irreducible if the action of G on C m leaves no non-zero invariant subspaces. Given an affine connection ∇ on a connected simply connected complex manifold M with the irreducibly acting holonomy group G, the associated irreducible G-structure G ∇ ⊂ L * M can be constructed as follows. Define two points u and v of L * M to be equivalent, u ∼ v, if there is a holomorphic path γ in M from π(u) to π(v) such that u = P γ (v), where P γ : Ω 1 π(v) M → Ω 1 π(u) M is the parallel transport along γ. Then G ∇ can be defined, up to an isomorphism, as {u ∈ L * M | u ∼ v} for some coframe v. The G-structure G ∇ is the smallest subbundle of L * M which is invariant under ∇-parallel translations. It will be shown later that for any holomorphic irreducible G-structure G → M there is associated an analytic family of compact isotropic submanifolds {X t ֒→ Y | t ∈ M} of a certain complex contact manifold Y which encodes much information about G. To explain this correspondence in more detail, we first digress in the next two sections to the Kodaira [K] deformation theory of compact complex submanifolds and to its particular generalization studied in [Me1]. 2. Kodaira relative deformation theory. Let Y and M be complex manifolds and let π 1 : Y × M −→ Y and π 2 : Y × M −→ M be natural projections. An analytic family of compact submanifolds of the complex manifold Y with the parameter space M is a complex submanifold F ֒→ Y × M such that the restriction of the projection π 2 on F is a proper regular map (regularity means that the rank of the differential of ν := π 2 | F : F −→ M is equal to dim M at every point). 
The parameter space M is called a Kodaira moduli space. Thus the family F has a double fibration structure Y µ ←− F ν −→ M where µ := π 1 | F . For each t ∈ M we say that the compact complex submanifold X t = µ • ν −1 (t) ֒→ Y belongs to the family F . Sometimes we use a more explicit notation {X t ֒→ Y | t ∈ M} to denote an analytic family F of compact submanifolds. If F ֒→ Y × M is an analytic family of compact submanifolds, then, for any t ∈ M, there is a natural linear map [K], k t : T t M −→ H 0 (ν −1 (t), N ν −1 (t)|F ) µ * −→ H 0 (X t , N Xt|Y ), which is a composition of the natural lift of a tangent vector at t to a global section of the normal bundle of the submanifold ν −1 (t) ֒→ F with the Jacobian of µ (here the symbol N A|B stands for the normal bundle of a submanifold A ֒→ B). An analytic family F ֒→ Y × M of compact submanifolds is called complete if the map k t is an isomorphism for all t ∈ M which in particular implies that dim M = h 0 (X t , N Xt|Y ). In 1962 Kodaira [K] proved the following existence theorem: if X ֒→ Y is a compact complex submanifold with normal bundle N such that H 1 (X, N) = 0, then X belongs to a complete analytic family F ֒→ Y × M of compact submanifolds of Y . 3. Deformations of compact complex Legendre submanifolds of complex contact manifolds. In this section we shall be interested in the following specialisation (which will eventually turn out to be a generalisation) of the Kodaira relative deformation problem: the initial data is a pair X ֒→ Y consisting of a compact complex Legendre submanifold X of a complex contact manifold Y and the object of study is the set, M, of all holomorphic deformations of X inside Y which remain Legendre. First, we recall some standard notions, then give a better formulation of the problem, and finally present its solution. Let Y be a complex (2n + 1)-dimensional manifold. 
A complex contact structure on Y is a rank 2n holomorphic subbundle D ⊂ T Y of the holomorphic tangent bundle to Y such that the Frobenius form Φ : D × D −→ T Y /D (v, w) −→ [v, w] mod D is non-degenerate. Define the contact line bundle L by the exact sequence 0 −→ D −→ T Y θ −→ L −→ 0. One can easily verify that maximal non-degeneracy of the distribution D is equivalent to the fact that the above defined "twisted" 1-form θ ∈ H 0 (Y, L⊗Ω 1 M) satisfies the condition θ ∧ (dθ) n = 0. A complex submanifold X ֒→ Y is called isotropic if T X ⊂ D. An isotropic submanifold of maximal possible dimension n is called Legendre. In this paper we shall be primarily interested in compact Legendre submanifolds. The normal bundle N X|Y of any Legendre submanifold X ֒→ Y is isomorphic to J 1 L X [L2], where L X = L| X . Therefore, N X|Y fits into the exact sequence 0 −→ Ω 1 X ⊗ L X −→ N X|Y pr −→ L X −→ 0. Let Y be a complex contact manifold. An analytic family F ֒→ Y × M of compact submanifolds of Y is called an analytic family of compact Legendre submanifolds if, for any point t ∈ M, the corresponding subset X t := µ • ν −1 (t) ֒→ Y is a Legendre submanifold. The parameter space M is called a Legendre moduli space. Let F ֒→ Y × M be an analytic family of compact Legendre submanifolds. According to Kodaira [K], there is a natural linear map k t : T t M −→ H 0 (X t , N Xt|Y ). We say that the family F is complete at a point t ∈ M if the composition s t : T t M kt −→ H 0 (X t , N Xt|Y ) pr −→ H 0 (X t , L Xt ) provides an isomorphism between the tangent space to M at the point t and the vector space of global sections of the contact line bundle over X t . 
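For orientation, it may be worth recalling the classical local picture (a standard Darboux-type fact, not part of the text): every contact structure, together with its Legendre submanifolds, looks locally as follows.

```latex
% Local normal form: coordinates (z, q^1,\dots,q^n, p_1,\dots,p_n) on Y with
\theta \;=\; dz - \sum_{i=1}^{n} p_i\, dq^i ,
\qquad
\theta \wedge (d\theta)^n
  \;=\; n!\; dz \wedge dq^1 \wedge dp_1 \wedge \cdots \wedge dq^n \wedge dp_n \;\neq\; 0 .
% Legendre submanifolds are locally graphs of 1-jets of functions,
X_f \;=\; \bigl\{\, z = f(q),\ \ p_i = \partial f / \partial q^i \,\bigr\} ,
% which matches the "flat" model X \hookrightarrow J^1 L_X appearing later in the text.
```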
One of the motivations behind this definition is the fact [Me1] that an analytic family of compact Legendre submanifolds {X t ֒→ Y | t ∈ M} which is complete at a point t 0 ∈ M is also maximal at t 0 in the sense that, for any other analytic family of compact Legendre submanifolds {Xt ֒→ Y |t ∈M } such that X t 0 = Xt 0 for a pointt 0 ∈M , there exists a neighbourhoodŨ ⊂M oft 0 and a holomorphic map f :Ũ −→ M such that f (t 0 ) = t 0 and X f (t ′ ) = Xt′ for eacht ′ ∈Ũ . An analytic family F ֒→ Y × M is called complete if it is complete at each point of the Legendre moduli space M. In this case M is also called complete. The following result [Me1] reveals a simple condition for the existence of complete Legendre moduli spaces. Theorem 1 Let X be a compact complex Legendre submanifold of a complex contact manifold (Y, L). If H 1 (X, L X ) = 0, then there exists a complete analytic family of compact Legendre submanifolds F ֒→ Y × M containing X. This family is maximal and dim M = h 0 (X, L X ). Let X be a complex manifold and L X a line bundle on X. There is a natural "evaluation" map H 0 (X, L X ) ⊗ O X −→ J 1 L X whose dualization gives rise to the canonical map L X ⊗ S k+1 (J 1 L X ) * −→ L X ⊗ S k (J 1 L X ) * ⊗ H 0 (X, L X ) * which in turn gives rise to the map of cohomology groups H 1 X, L X ⊗ S k+1 (J 1 L X ) * φ −→ H 1 X, L X ⊗ S k (J 1 L X ) * ⊗ H 0 (X, L X ) * . For future reference, we define a vector subspacẽ H 1 X, L X ⊗ S k+1 (J 1 L X ) * := ker φ ⊂ H 1 X, L X ⊗ S k+1 (J 1 L X ) * . 4. G-structures induced on Legendre moduli spaces of generalized flag varieties. Recall that a generalised flag variety X is a compact simply connected homogeneous Kähler manifold [B-E]. Any such a manifold is of the form X = G/P , where G is a complex semisimple Lie group and P ⊂ G a fixed parabolic subgroup. Assume that such an X is embedded as a Legendre submanifold into a complex contact manifold (Y, L) with contact line bundle L such that L X := L| X is very ample. 
Then the Bott-Borel-Weil theorem and the fact that any holomorphic line bundle on X is homogeneous imply that H 1 (X, L X ) = 0. Therefore, by Theorem 1, there exists a complete analytic family of compact Legendre submanifolds {X t ֒→ Y | t ∈ M}, i.e. the initial data "X ֒→ Y " give rise to a new complex manifold M which, as the following result shows, comes equipped with a rich geometric structure. Theorem 2 [Me1] Let X be a generalised flag variety embedded as a Legendre submanifold into a complex contact manifold Y with contact line bundle L such that L X is very ample on X. Then (i) There exists a complete analytic family F ֒→ Y × M of compact Legendre submanifolds with moduli space M being an h 0 (X, L X )-dimensional complex manifold. For each t ∈ M, the associated Legendre submanifold X t is isomorphic to X. (ii) The Legendre moduli space M comes equipped with an induced irreducible G-structure, G ind → M, with G isomorphic to the connected component of the identity of the group of all global biholomorphisms φ : L X → L X which commute with the projection π : L X → X. The Lie algebra of G is isomorphic to H 0 (X, L X ⊗ (J 1 L X ) * ). (iii) If G ind is k-flat, k ≥ 0, then the obstruction for G ind to be (k + 1)-flat is given by a tensor field on M whose value at each t ∈ M is represented by a cohomology class ρ [k+1] t ∈H 1 X t , L Xt ⊗ S k+2 (J 1 L Xt ) * . (iv) If G ind is 1-flat, then the bundle of all torsion-free connections in G ind has as the typical fiber an affine space modeled on H 0 (X, L X ⊗ S 2 (J 1 L X ) * ). Remark. Theorem 2 is actually valid for a larger class of compact complex manifolds X than the class of generalized flag varieties -the only vital assumptions are [Me1] that X is rigid and the cohomology groups H 1 (X, O X ) and H 1 (X, L X ) vanish. 
The geometric meaning of the cohomology classes ρ [k+1] t ∈ H̃ 1 (X t , L Xt ⊗ S k+2 (J 1 L Xt ) * ) of Theorem 2(iii) is very simple - they compare to (k + 2)th order the germ of the Legendre embedding X t ֒→ Y with the "flat" model, X t ֒→ J 1 L Xt , where the ambient contact manifold is just the total space of the vector bundle J 1 L Xt together with its canonical contact structure and the Legendre submanifold X t is realised as the zero section of J 1 L Xt → X t . Therefore, the cohomology class ρ [k] t can be called the kth Legendre jet of X t in Y . Then it is natural to call a Legendre submanifold X t ֒→ Y k-flat if ρ [k] t = 0. In this terminology, the item (iii) of Theorem 2 acquires a rather symmetric form: the induced G-structure on the moduli space M of a complete analytic family of compact Legendre submanifolds is k-flat if and only if the family consists of k-flat Legendre submanifolds. This general construction can be illustrated by three well-known examples which were among the motivations behind the present work (in fact the list of examples can be made much larger - Theorem 2 has been checked for all "classical" torsion-free geometries as well as for a large class of locally symmetric structures). The first example is a "generic" GL(m, C)-structure on an m-dimensional manifold M. The associated twistorial data X ֒→ Y is easy to describe: the complex contact manifold Y is the projectivized cotangent bundle P(Ω 1 M) with its natural contact structure, while X = CP m−1 is just a fibre of the projection P(Ω 1 M) → M. The corresponding complete family {X t ֒→ Y | t ∈ M} is the set of all fibres of this fibration. Since L X = O(1) and J 1 L X = C m ⊗ O X , we have H 1 (X, L X ⊗ S k+2 ((J 1 L X ) * )) = 0 for all k ≥ 0, which confirms the well-known fact that any GL(m, C)-structure on an m-dimensional manifold is locally flat. The second example [L1] is a pair X ֒→ Y consisting of an n-quadric Q n embedded into a (2n + 1)-dimensional contact manifold (Y, L) with L| X ≃ i * O CP n+1 (1), i : Q n → CP n+1 being a standard projective realisation of Q n .
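Both claims in the first example, and the dimension count implicit in the second, reduce to standard line-bundle cohomology on projective space; the following checks are elementary facts of that kind, not computations taken from the text.

```latex
% Example 1 (X = CP^{m-1}, L_X = O(1), J^1 L_X = C^m \otimes O_X):
% the Lie algebra of Theorem 2(ii) is
H^0\!\bigl(X,\, L_X \otimes (J^1 L_X)^*\bigr)
  \;=\; (\mathbb{C}^m)^* \otimes H^0(X, \mathcal{O}(1))
  \;\cong\; \mathfrak{gl}(m,\mathbb{C}),
% and the obstruction groups vanish because
H^1\!\bigl(X,\, L_X \otimes S^{k+2}(J^1 L_X)^*\bigr)
  \;=\; S^{k+2}(\mathbb{C}^m)^* \otimes H^1\!\bigl(\mathbb{CP}^{m-1}, \mathcal{O}(1)\bigr)
  \;=\; 0 .
% Example 2 (X = Q^n \subset CP^{n+1}, L_X = i^*O(1)): twisting the ideal
% sequence 0 \to O(-2) \to O \to O_{Q^n} \to 0 by O(1) gives
\dim M \;=\; h^0(Q^n, L_X)
  \;=\; h^0\!\bigl(\mathbb{CP}^{n+1}, \mathcal{O}(1)\bigr) \;=\; n+2 ,
% since H^0(O(-1)) = H^1(O(-1)) = 0 on CP^{n+1}.
```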
It is easy to check that in this case H 0 (X, L X ⊗ (J 1 L X ) * ) is precisely the conformal algebra implying that the associated (n + 2)-dimensional Legendre moduli space M comes equipped canonically with a conformal structure. Since H 1 (X, L X ⊗ S 2 (J 1 L X ) * ) = 0, the induced conformal structure must be torsion-free in agreement with the classical result of differential geometry. Easy calculations show that the vector space H 1 (X, L X ⊗ S 3 (J 1 L X ) * ) is exactly the subspace of T M ⊗ Ω 1 M ⊗ Ω 2 M consisting of tensors with Weyl curvature symmetries. Thus Theorem 2(iii) implies the well-known Schouten conformal flatness criterion. Since H 0 (X, L X ⊗ S 2 (J 1 L X ) * ) is isomorphic to the typical fibre of Ω 1 M, the set of all torsion-free affine connections preserving the induced conformal structure is the affine space modeled on H 0 (M, Ω 1 M), again in agreement with the classical result. The third example is Bryant's [Br2] relative deformation problem X ֒→ Y with X being a rational Legendre curve CP 1 in a complex contact 3-fold (Y, L) with L X = O(3). Calculating H 0 (X, L X ⊗(J 1 L X ) * ), one easily concludes that the induced G-structure on the associated 4-dimensional Legendre moduli space is exactly an exotic G 3 -structure which has been studied by Bryant in his search for irreducibly acting holonomy groups of torsionfree affine connections which are missing in the Berger list [Be] (the missing holonomies are called exotic). Since H 1 (X, L X ⊗ S 2 (J 1 L X ) * ) = 0, Theorem 2(iii) says that the induced G 3 -structure is torsion-free in accordance with [Br2]. Since H 0 (X, L X ⊗ S 2 (J 1 L X ) * ) = 0, G ind admits a unique torsion-free affine connection ∇. The cohomology class ρ [2] t ∈ H 1 (X, L X ⊗ S 3 (J 1 L X ) * ) from Theorem 2(iii) is exactly the curvature tensor of ∇. How large is the family of G-structures which can be constructed by twistor methods of Theorem 2? 
As the following result [Me1] shows, in the category of irreducible 1-flat G-structures this class as large as one could wish. Theorem 3 (i) Let H be one of the following representations: (a) Spin(2n + 1, C) acting on C 2 n , n ≥ 3; (b) Sp(2n, C) acting on C 2n , n ≥ 2; (c) G 2 acting on C 7 . Suppose that G ⊂ GL(m, C) is a connected semisimple Lie subgroup whose decomposition into a locally direct product of simple groups contains H. If G is any irreducible 1-flat G·C * -structure on an m-dimensional manifold M, then there exists a complex contact manifold (Y, L) and a generalised flag variety X embedded into Y as a Legendre submanifold with L X being very ample, such that, at least locally, M is canonically isomorphic to the associated Legendre moduli space and G ⊂ G ind . In particular, when G = H one has in the case (a) X = SO(2n + 2, C)/U(n + 1) and G ind is a Spin(2n + 2, C) · C * -structure; in the case (b) X = CP 2n−1 and G ind is a GL(2n, C)structure; and in the case (c) X = Q 5 and G ind is a CO(7, C)-structure. (ii) Let G ⊂ GL(m, C) be an arbitrary connected semisimple Lie subgroup whose decomposition into a locally direct product of simple groups does not contain any of the groups H considered in (i). If G is any irreducible 1-flat G · C * -structure on an mdimensional manifold M, then there exists a complex contact manifold (Y, L) and a Legendre submanifold X ֒→ Y with X = G/P for some parabolic subgroup P ⊂ G and with L X being very ample, such that, at least locally, M is canonically isomorphic to the associated Legendre moduli space and G = G ind . The conclusion is that there are very few irreducible G-structures which can not be constructed by twistor methods discussed in this paper. It is also worth pointing out that Theorem 2(iii) gives rise to a new and rather effective machinery to search for exotic holonomies. 
The new results in this direction will be discussed elsewhere - here we only note that the claimed efficiency of the twistor technique is largely due to the simple observation that the key cohomology groups H 1 (X t , L Xt ⊗ S 2 (J 1 L Xt ) * ) and H 1 (X t , L Xt ⊗ S 3 (J 1 L Xt ) * ), which provide us with the full information about torsion and curvature tensors, can be computed by a combination of representation theory methods (such as the Bott-Borel-Weil theorem) and the methods of complex analysis. In some important cases it is even enough to use the complex analysis methods only. 5. Torsion-free affine connections. Let F ֒→ Y × M be a complete analytic family of compact Legendre submanifolds. Any point t in M is thus represented by a compact complex Legendre submanifold X t . The first floors of the two towers of infinitesimal neighbourhoods of the analytic spaces t ֒→ M and X t ֒→ Y are related to each other via the isomorphism T t M = H 0 (X t , L Xt ). What happens at the second floors of these two towers? If J t ⊂ O M is the ideal of holomorphic functions which vanish at t ∈ M, then the tangent space T t M is isomorphic to (J t /J 2 t ) * . Define a second order tangent bundle, T [2] t M, at the point t as (J t /J 3 t ) * . Then, evidently, T [2] t M fits into an exact sequence of complex vector spaces

0 −→ T t M −→ T [2] t M −→ S 2 (T t M) −→ 0.   (1)

For each t ∈ M there exists a holomorphic vector bundle, ∆ [2] Xt , on the associated Legendre submanifold X t ֒→ Y such that there is an exact sequence of locally free sheaves (whose first arrow will be denoted by α)

0 −→ L Xt −→ ∆ [2] Xt −→ S 2 (J 1 L Xt ) −→ 0   (2)

and a commutative diagram

0 −→ T t M −→ T [2] t M −→ S 2 (T t M) −→ 0
        ↓               ↓                 ↓
0 −→ H 0 (X t , L Xt ) −→ H 0 (X t , ∆ [2] Xt ) −→ H 0 (X t , S 2 (J 1 L Xt )) −→ 0   (3)

which extends the canonical isomorphism T t M → H 0 (X t , L Xt ) to second order infinitesimal neighbourhoods of t ֒→ M and X t ֒→ Y . For the details of the construction of ∆ [2] Xt we refer the interested reader to [Me1]. In this paper we need only to know that this bundle exists and has the stated properties.
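The equivalence between splittings of the sequence (1) and torsion-free affine connections at t can be seen in coordinates; the following sketch is a standard jet-theoretic computation (sign conventions may differ), not a quotation from the text.

```latex
% Second-order tangent vectors at t, i.e. elements of (J_t/J_t^3)^*, are
% spanned by the values of \partial_i and \partial_i\partial_j at t.
% A right inverse of T^{[2]}_t M \to S^2(T_t M) must therefore have the form
s \,:\; \partial_i \odot \partial_j
  \;\longmapsto\; \partial_i \partial_j \;-\; \Gamma^k_{ij}\, \partial_k ,
\qquad \Gamma^k_{ij} \;=\; \Gamma^k_{ji} ,
% for some coefficients \Gamma^k_{ij}; under coordinate changes these
% transform exactly like Christoffel symbols, and the symmetry in (i,j)
% forced by S^2(T_t M) is precisely the vanishing of torsion.
```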
The extension (2) defines a cohomology class ρ [1] t ∈ Ext 1 O X t S 2 (J 1 L Xt ), L Xt = H 1 (X t , L Xt ⊗ S 2 (J 1 L Xt ) * ). This is exactly the class of Theorem 2(iii) which is the obstruction to 1-flatness of X t in Y . Therefore, if X t is 1-flat, then extension (2) splits, i.e. there exists a morphism β : ∆ [2] Xt → L Xt such that β • α = id. Any such a morphism induces via the commutative diagram (3) an associated splitting of the exact sequence (1) which is equivalent to a torsion-free affine connection at t ∈ M. A torsion-free connection on the Legendre moduli space M which arises at each t ∈ M from a splitting of the extension (2) is called an induced connection. Now we can formulate the main theorem about torsion-free affine connections. Theorem 4 [Me1] Let ∇ be a holomorphic torsion-free affine connection on a complex manifold M with irreducibly acting reductive holonomy group G. Then there exists a complex contact manifold (Y, L) and a 1-flat Legendre submanifold X ֒→ Y with X = G s /P for some parabolic subgroup P of the semisimple factor G s of G and with L X being very ample, such that, at least locally, M is canonically isomorphic to the associated Legendre moduli space and ∇ is an induced torsion-free affine connection in G ind . The conclusion is that any holomorphic torsion-free affine connection with irreducibly acting holonomy group can, in principle, be constructed by twistor methods. 6. From Kodaira to Legendre moduli spaces and back. In this subsection we first show that any complete Kodaira moduli space can be interpreted as a complete Legendre moduli space and then use this fact to prove a proposition about canonically induced geometric structures on Kodaira moduli spaces. If X ֒→ Y is a complex submanifold, there is an exact sequence of vector bundles 0 −→ N * X|Y −→ Ω 1 Y X −→ Ω 1 X −→ 0, which induces a natural embedding, P(N * X|Y ) ֒→ P(Ω 1 Y ), of total spaces of the associated projectivised bundles. 
The manifoldŶ = P(Ω 1 Y ) carries a natural contact structure such that the constructed embeddingX = P(N * X|Y ) ֒→Ŷ is a Legendre one [Ar]. Indeed, the contact distribution D ⊂ TŶ at each pointŷ ∈Ŷ consists of those tangent vectors Vŷ ∈ TŷŶ which satisfy the equation <ŷ, τ * (Vŷ) >= 0, where τ :Ŷ −→ Y is a natural projection and the angular brackets denote the pairing of 1-forms and vectors at τ (ŷ) ∈ Y . Since the submanifoldX ⊂Ŷ consists precisely of those projective classes of 1-forms in Ω 1 Y | X which vanish when restricted on T X, we conclude that TX ⊂ D|X . One may check that this association Kodaira moduli space −→ Legendre moduli space {X t ֒→ Y | t ∈ M} −→ X t := P(N * Xt|Y ) ֒→Ŷ := P(Ω 1 Y ) | t ∈ M preserves completeness while changing its meaning, i.e. a complete Kodaira family of compact complex submanifolds is mapped into a complete family of compact complex Legendre submanifolds (which is usually not complete in the Kodaira sense). The contact line bundle L onŶ is just the dual of the tautological line bundle OŶ (−1). Simplifying the notations, N := N X|Y andN := NX |Ŷ , we write down the following commutative diagram which explains howN is related to ρ * (N) and L 0 0 ↓ ↓ ρ * (Ω 1 X) ⊗ LX = ρ * (Ω 1 X) ⊗ LX ↓ ↓ 0 −→ Ω 1X ⊗ LX −→N −→ LX −→ 0 ↓ ↓ || 0 −→ Ω 1 ρ ⊗ LX −→ ρ * (N) −→ LX −→ 0 ↓ ↓ 0 0 Here LX = L|X , ρ is a natural projectionX → X, and Ω 1 ρ is the bundle of ρ-vertical 1-forms, i.e. the dual of T ρ = ker : TX → T X. Using this diagram it is not hard to show that there is a long exact sequence of cohomology groups 0 → H 0 (X, N ⊗ S 2 (N * )) → H 0 X , LX ⊗ S 2 (N * ) → H 0 (X, N * ⊗ T X) → → H 1 (X, N ⊗ S 2 (N * )) → H 1 X , LX ⊗ S 2 (N * ) → H 1 (X, N * ⊗ T X) → . . . Proposition 5 Let X ֒→ Y be a compact complex rigid submanifold with rigid normal bundle N such that H 1 (X, N) = 0 and let M be the associated Kodaira moduli space. 
If H 1 X, N ⊗ S 2 (N * ) = H 1 (X, N * ⊗ T X) = 0,(4) then the associated Kodaira moduli space M comes equipped with an induced 1-flat Gstructure with the Lie algebra g of G being characterized by the following exact sequence of Lie algebras 0 −→ H 0 (X, N ⊗ N * ) −→ g −→ H 0 (X, T X) −→ 0 Proof. By Kodaira theorem [K], there is a complete family, {X t ֒→ Y | t ∈ M}, of compact complex submanifolds and hence the associated complete family, {X t ֒→Ŷ | t ∈ M}, of compact complex Legendre submanifolds withX t = P(N * t ) andŶ = P(Ω 1 M). Equations (4) together with the above long exact sequence of cohomology groups imply the vanishing of H 1 X , LX ⊗ S 2 (N * ) . This together with Theorem 2(iii) and the subsequent Remark implies in turn that the induced G-structure on M is 1-flat. The final statement about the Lie algebra of G follows from Theorem 2(ii), the exact sequence 0 −→ LX ⊗ ρ * (N * ) −→ LX ⊗N * −→ ρ * (T X) −→ 0 and the fact that H 0 (X, LX ⊗ ρ * (N * )) = H 0 (X, N ⊗ N * ). ✷ If, for example, X is the projective line CP 1 embedded into a complex manifold Y with normal bundle N = C 2k ⊗ O(1), k ≥ 1, then by Proposition 5 the associated Kodaira moduli space M, which is a 4k-dimensional complex manifold, comes equipped canonically with a complexified quaternionic structure, in accordance with [Pe, P-P]. For other results on geometric structures induced on Kodaira moduli spaces we refer to [Me2,. 7. Involutive G-structures. Let M be a complex m-dimensional manifold and let G ⊂ L * M be an irreducible holomorphic G-structure with reductive G (hence G must be isomorphic to G s or G s × C * for some semisimple Lie group G s ⊂ GL(m, C)). Since G is irreducible, there is a naturally associated subbundleF ⊂ Ω 1 M whose typical fiber is the cone in C m defined as the G s -orbit of the line spanned by a highest weight vector. DenoteF =F \ 0F , where 0F is the "zero" section ofp :F → M whose value at each t ∈ M is the vertex of the conep −1 (t). 
The quotient bundle ν : F =F /C * −→ M is then a subbundle of the projectivized cotangent bundle P(Ω 1 M) whose fibres X t are isomorphic to the generalised flag variety G s /P , where P is the parabolic subgroup of G s which preserves the highest weight vector in C m up to a scale. The total space of the cotangent bundle Ω 1 M has a canonical holomorphic symplectic 2-form ω which makes the sheaf of holomorphic functions on Ω 1 M into a sheaf of Lie algebras via the Poisson bracket {f, g} = ω −1 (df, dg). Definition 6 An irreducible G-structure G → M is called involutive ifF is a coisotropic submanifold of the symplectic manifold Ω 1 M \ 0 Ω 1 M . The first motivation behind this definition is the following Lemma 7 Every irreducible 1-flat G-structure is involutive. Proof. It is well known that a submanifold of a symplectic manifold is isotropic if and only if the associated ideal sheaf is the sheaf of Lie algebras relative to the Poisson bracket. This condition obviously holds for a locally flat G-structure. Since the Poisson bracket involves only a 1st order differential operator, this condition must also be satisfied for a 1-flat G-structure. ✷ The pullback, i * ω, of the symplectic form ω from Ω 1 M \ 0 Ω 1 M to its submanifold i :F −→ Ω 1 M \ 0 Ω 1 M defines a distribution D ⊂ TF as the kernel of the natural "lowering of indices" map TF ω −→ Ω 1F , i.e. D e = V ∈ T eF : V i * ω = 0 at each point e ∈F . Using the fact that d(i * ω) = i * dω = 0, one can show that this distribution is integrable and thus defines a foliation ofF by holomorphic leaves. We shall assume from now on that the space of leaves,Ỹ , is a complex manifold. This assumption imposes no restrictions on the local structure of M. The fact that the Lie derivative, L V i * ω = V i * dω + d(V i * ω) = 0, vanishes for any vector field V tangent to the leaves implies that i * ω is the pullback relative to the canonical projectionμ :F →Ỹ of a closed 2-formω onỸ . 
It is easy to check that ω̃ is non-degenerate, which means that (Ỹ, ω̃) is a symplectic manifold. There is a natural action of C * on F̃ which leaves D invariant and thus induces an action of C * on Ỹ. The quotient Y := Ỹ/C * is an odd-dimensional complex manifold which has a double fibration structure Y µ ←− F = F̃/C * ν −→ M and thus contains an analytic family of compact submanifolds {X t = µ • ν −1 (t) ֒→ Y | t ∈ M} with X t = X̃ t /C * ≃ G s /P . Next, inverting a well-known procedure of symplectivisation of a contact manifold [Ar], it is not hard to show that Y has a complex contact structure such that all the submanifolds X t ֒→ Y are isotropic. The contact line bundle L on Y is just the quotient L = F̃ × C/C * relative to the natural multiplication map F̃ × C −→ F̃ × C, (p, c) → (λp, λc), where λ ∈ C * . This can be summarized as follows. Proposition 8 Given an irreducible G-structure G → M with reductive G, there is canonically associated a complex contact manifold (Y, L) containing a dim M-parameter family {X t ֒→ Y | t ∈ M} of isotropically embedded generalized flag varieties X t = G s /P , where G s is the semisimple part of G and P is the parabolic subgroup of G s leaving invariant, up to a scale, a highest weight vector in the typical fibre of Ω 1 M → M. Let e be any point of F̃ ⊂ Ω 1 M \ 0 Ω 1 M . Restricting the "lowering of indices" map T e (Ω 1 M) ω −→ Ω 1 e (Ω 1 M) to the subspace D e , one obtains an injective map D e → N * e , where N * e is the fibre of the conormal bundle of F̃ ֒→ Ω 1 M \ 0 Ω 1 M . Therefore, the rank of the distribution D is at most rank N * = dim M − dim X t − 1. It is easy to check that rank D is maximal possible if and only if G is involutive. In this case the contact manifold Y associated to G has dimension dim Y = dim Ỹ − 1 = dim F̃ − rank D − 1 = (dim M + dim X t + 1) − (dim M − dim X t − 1) − 1 = 2 dim X t + 1, which means that the associated complete family {X t ֒→ Y | t ∈ M} is an analytic family of compact Legendre submanifolds.
This argument partly explains the following result Theorem 9 Let G ⊂ GL(m, C) be an arbitrary connected semisimple Lie subgroup whose decomposition into a locally direct product of simple groups does not contain any of the groups H considered in Theorem 3(ii). If G is any involutive G × C * -structure on an m-dimensional manifold M, then there exists a complex contact manifold (Y, L) and a Legendre submanifold X ֒→ Y with X = G/P for some parabolic subgroup P ⊂ G and with L X being very ample, such that, at least locally, M is canonically isomorphic to the associated Legendre moduli space and G = G ind . In the case of involutive G-structures G → M with G as in Theorem 3(i) one can still canonically identify the base manifold M with a Legendre moduli space, but now G is properly contained in G ind . In conclusion, the earlier posed question -how large is the family of G-structures which can be constructed by twistor methods of Theorem 2? -has the following answer: this family consists of involutive G-structures. 8. Affine connections with "very little torsion". With any irreducible G-structure G on a complex manifold M one can associate the torsion number defined as follows l = 1 2 (dim M − dim G/P − rankD − 1) . Here P is the parabolic subgroup of G leaving invariant up to a scale a highest weight vector in the typical fibre of T M, and D is the distribution associated to G as explained in section 7. We see that the torsion number l is composed of two very different parts: the first part, dim M − dim G/P , encodes only "linear" information about the particular irreducible representation of G, while the second part, rankD − 1, measures how this particular representation is "attached" to the base manifold M. It is not difficult to prove that l is always a non-negative integer. This fact alone shows that the proposed combination l of four natural numbers does give some insight into the structure of G. 
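The non-negativity of l claimed above follows directly from the rank bound on D established in section 7; spelled out (this step is implicit in the text):

```latex
% Since rank D <= rank N^* = dim M - dim X_t - 1, with equality if and
% only if the G-structure is involutive (and dim G/P = dim X_t),
l \;=\; \tfrac{1}{2}\bigl(\dim M - \dim X_t - \operatorname{rank} D - 1\bigr)
  \;\ge\; \tfrac{1}{2}\bigl(\dim M - \dim X_t - (\dim M - \dim X_t - 1) - 1\bigr)
  \;=\; 0 ,
% so l >= 0 always, with l = 0 exactly for involutive G-structures.
```

Integrality of l is a separate parity statement and is not addressed by this bound.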
This impression can be further strengthened by the fact that l has a nice geometric interpretation. Remember that, by Proposition 8, the G-structure G gives rise to a complex contact manifold (Y, L) and a family {X t ֒→ Y | t ∈ M} of isotropic submanifolds parameterised by M. Then, in these terms, l = 1 2 (dim Y − 1) − dim X t , i.e. l measures how much X t lacks to be a Legendre submanifold. Why is l called a torsion number? It is not difficult to show that the torsion number of any 1-flat G-structure is zero. Therefore, a G-structure on M may have non-vanishing l only if it has a non-vanishing invariant torsion (but not vise versa, as we shall see in a moment). Moreover, the larger l is, the less integrable is the distribution D and, in this sense, the "larger" is the invariant torsion. Definition 10 The torsion number of an affine connection ∇ is the torsion number of the associated holonomy bundle G ∇ . Definition 11 An affine connection with torsion is said to have very little torsion if its torsion number is zero. The class of affine connections with very little torsion is a sibling of the class torsionfree affine connections in the sense that both these classes (and only these two classes) have involutive holonomy bundles (it is not difficult to show that a G-structure has l = 0 if and only if it is involutive). This means in particular that all connections of both types can be constructed by twistor methods on appropriate Legendre moduli spaces. Another conclusion is that the class of affine connections with very little torsion is non emptyone can construct plenty of them using Legendre deformations problems "X ֒→ (Y, L)" such that H 1 (X, L X ⊗ S 2 (J 1 L X ) * ) = 0. This does not mean, however, that this class is enormously large -on the contrary, as the table below shows, the list of irreducibly acting holonomies of affine connections with very little torsion must be very restricted. 
In conclusion we note that the problem of classifying all irreducibly acting reductive holonomies of affine connections with zero or very little torsion has a strong purely symplectic flavour - it is nearly equivalent to the problem of classifying all generalized flag varieties X which can be realized as complex Legendre submanifolds of non-trivial contact manifolds Y (non-trivial in the sense that the germ of Y at X is not isomorphic to the germ of the total space of the jet bundle J 1 L X , for some line bundle L X → X, at its zero section).

References

[Ar] V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag, New York, 1978 [Russian: Nauka, Moscow, 1974].
[B-E] R. J. Baston and M. G. Eastwood, The Penrose Transform, its Interaction with Representation Theory, Oxford University Press, 1989.
[Be] M. Berger, Sur les groupes d'holonomie des variétés à connexion affine et des variétés riemanniennes, Bull. Soc. Math. France 83 (1955), 279-330.
[Br1] R. Bryant, Metrics with exceptional holonomy, Ann. of Math. (2) 126 (1987), 525-576.
[Br2] R. Bryant, Two exotic holonomies in dimension four, path geometries, and twistor theory, Proc. Symposia in Pure Mathematics 53 (1991), 33-88.
[H-O] J. Hano and H. Ozeki, On the holonomy groups of linear connections, Nagoya Math. J. 10 (1956), 97-100.
[K] K. Kodaira, A theorem of completeness of characteristic systems for analytic families of compact submanifolds of complex manifolds, Ann. Math. 75 (1962), 146-162.
[L1] C. R. LeBrun, Spaces of complex null geodesics in complex-Riemannian geometry, Trans. Amer. Math. Soc. 278 (1983), 209-231.
[L2] C. R. LeBrun, Thickenings and conformal gravity, Commun. Math. Phys. 139 (1991), 1-43.
[M] Yu. I. Manin, Gauge Field Theory and Complex Geometry, Springer-Verlag, 1988 [Russian: Nauka, Moscow, 1984].
[Me1] S. A. Merkulov, Existence and geometry of Legendre moduli spaces, preprint (1994); Moduli of compact complex Legendre submanifolds of complex contact manifolds, Math. Research Lett. 1 (1994), 717-727.
[Me2] S. A. Merkulov, Geometry of relative deformations I, in Twistor Theory (ed. S. Huggett), Marcel Dekker, 1994.
[Me-P] S. A. Merkulov and H. Pedersen, Geometry of relative deformations II, in Twistor Theory (ed. S. Huggett), Marcel Dekker, 1994.
[P-P] H. Pedersen and Y. S. Poon, Twistorial construction of quaternionic manifolds, in Proc. VIth Int. Coll. on Diff. Geom., Cursos y Congresos, Univ. Santiago de Compostela 61 (1989), 207-218.
[Pe] R. Penrose, Non-linear gravitons and curved twistor theory, Gen. Rel. Grav. 7 (1976), 31-52.
[]
[ "Efficiency analysis of double perturbed pairwise comparison matrices" ]
[ "Kristóf Ábele-Nagy ", "Sándor Bozóki ", "Örs Rebák " ]
[]
[]
Efficiency is a core concept of multi-objective optimization problems and multi-attribute decision making. In the case of pairwise comparison matrices a weight vector is called efficient if the approximations of the elements of the pairwise comparison matrix made by the ratios of the weights cannot be improved in any position without making it worse in some other position. A pairwise comparison matrix is called double perturbed if it can be made consistent by altering two elements and their reciprocals. The most frequently used weighting method, the eigenvector method is analyzed in the paper, and it is shown that it produces an efficient weight vector for double perturbed pairwise comparison matrices. *
10.1080/01605682.2017.1409408
[ "https://arxiv.org/pdf/1602.07137v2.pdf" ]
45,629,060
1602.07137
ced4a85c698ccd0229f532fd7fedbbfac70b5956
Efficiency analysis of double perturbed pairwise comparison matrices

Kristóf Ábele-Nagy, Sándor Bozóki, Örs Rebák

Introduction

Ranking alternatives, or picking the best alternative, is a commonly investigated problem. The case of a single cardinal objective function to be maximized or minimized has long been studied by various operations research disciplines. Reducing a decision problem to such a single objective is, however, often not feasible. Alternatives can be ranked by assigning a cardinal utility to them, or by setting up ordinal preference relations among them. In the case of a single criterion and a single decision maker, modelling the preferences is often possible through standard methods. If there are multiple, often contradicting criteria, this becomes significantly harder. A dominant alternative, which is the best with respect to all criteria, very rarely exists. Thus, when a decision making method is used to aid the decision of a decision maker, some form of compromise is needed. Modelling the preferences of the decision maker by ranking or weighting the criteria can accomplish such a compromise.
It allows the "best" alternative to be chosen (or the possible alternatives to be ranked) with respect to the subjective preferences of the decision maker. Examples of multi-criteria decision problems range from "Which house to buy?" or "What should the company invest in?" to public tenders.

When weighting criteria, giving the weights directly is almost never feasible. Instead, a common method is to apply pairwise comparisons. Answers to the questions "How many times is Criterion A more important than Criterion B?" and so on (which are explicit cardinal ratios) can be arranged in a matrix, called a pairwise comparison matrix (PCM). Formally, a PCM is a square matrix $A = [a_{ij}]_{i,j=1,\dots,n}$ with the properties $a_{ij} > 0$ and $a_{ij} = 1/a_{ji}$ (which implies $a_{ii} = 1$). If the cardinal transitivity property $a_{ik} a_{kj} = a_{ij}$, $\forall i,j,k = 1,\dots,n$, also holds for a PCM, it is called consistent, otherwise it is called inconsistent [20]. Let $\mathrm{PCM}_n$ denote the set of PCMs of size $n \times n$.

The next step is to extract the weights of criteria from the PCM. Several methods exist for this task [3, 9, 12, 17], but we will only focus on the most commonly used one, the eigenvector method (EM). The eigenvector method gives the weight vector $w^{EM} = (w_1, \dots, w_n)^T$ as the right Perron eigenvector of $A \in \mathrm{PCM}_n$, thus $A w^{EM} = \lambda_{\max} w^{EM}$ holds, where $\lambda_{\max}$ is the principal eigenvalue of $A$. Here $\lambda_{\max} \geq n$, and $\lambda_{\max} = n$ if and only if $A$ is consistent [20]. A consistent PCM can be written as
$$A = \begin{pmatrix}
1 & x_1 & x_2 & \dots & x_{n-1} \\
1/x_1 & 1 & x_2/x_1 & \dots & x_{n-1}/x_1 \\
1/x_2 & x_1/x_2 & 1 & \dots & x_{n-1}/x_2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1/x_{n-1} & x_1/x_{n-1} & x_2/x_{n-1} & \dots & 1
\end{pmatrix} \in \mathrm{PCM}_n,$$
where $x_1, \dots, x_{n-1} > 0$. The elements of a PCM approximate the ratios of the weights, therefore the ratios of the elements of the weight vector should be as close as possible to the corresponding matrix elements.
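As a quick numerical illustration of the eigenvector method and of the consistency condition above, the following sketch (assuming NumPy; the helper names `eigenvector_method` and `is_consistent` are ours, not from the paper) builds a consistent PCM from $x_1, x_2, x_3$ and recovers $\lambda_{\max} = n$:

```python
import numpy as np

def eigenvector_method(A):
    """Return (lambda_max, w): the principal eigenvalue and the right
    Perron eigenvector of A, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # the Perron root has the largest real part
    w = np.abs(eigvecs[:, k].real)       # the Perron vector can be chosen positive
    return eigvals[k].real, w / w.sum()

def is_consistent(A, tol=1e-9):
    """Cardinal transitivity: a_ik * a_kj == a_ij for all i, j, k."""
    n = A.shape[0]
    return all(abs(A[i, k] * A[k, j] - A[i, j]) < tol
               for i in range(n) for j in range(n) for k in range(n))

# Consistent PCM with first row (1, x1, x2, x3): a_ij = u_j / u_i.
u = np.array([1.0, 2.0, 3.0, 5.0])
A = u[None, :] / u[:, None]

lam, w = eigenvector_method(A)
```

For a consistent matrix the recovered weights satisfy $w_i \propto 1/u_i$, i.e., the products $w_i u_i$ are all equal.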
If a weight vector cannot be trivially improved in this regard (there is no other weight vector which gives an at least as good approximation everywhere, and a strictly better one in at least one position), it is called Pareto optimal or efficient. It has been proved that the eigenvector method does not always produce an efficient solution. However, in some special cases the eigenvector method always gives an efficient weight vector. If the PCM is simple perturbed, i.e., it differs from a consistent PCM in only one element and its reciprocal, the principal right eigenvector is efficient [1]. In this paper that result will be extended to double perturbed PCMs, which differ from a consistent PCM in only two elements and their reciprocals. These special types of PCMs are not just theoretically important, but also occur in real decision problems. Poesz [19] gathered a handful of empirical PCMs that were analyzed in [7]; out of 90 matrices of size at most 6 × 6, 53 were either consistent, simple perturbed or double perturbed (see [7, Table 1]).

In Section 2 we introduce the key definitions and tools used in the paper, together with an example. In Section 3 the main results of the paper are proved: through explicit formulas for the principal right eigenvector and a series of lemmas, efficiency of the principal right eigenvector is shown for the case of double perturbed PCMs. The proofs of the lemmas, except for one, are given in detail in the Appendix. In Section 4 conclusions follow.

Efficiency and perturbed pairwise comparison matrices

Pareto optimality or efficiency [13, Chapter 2][22, Chapter 6] is a basic concept of multi-objective optimization and of multi-attribute decision making, too. The definition for weight vectors of PCMs is as follows. Let $A = [a_{ij}]_{i,j=1,\dots,n} \in \mathrm{PCM}_n$ and $w = (w_1, w_2, \dots, w_n)^T$ be a positive weight vector.

Definition 1. A positive weight vector $w$ is called efficient if no other positive weight vector $w' = (w'_1, w'_2, \dots, w'_n)^T$ exists such that
$$\left| a_{ij} - \frac{w'_i}{w'_j} \right| \leq \left| a_{ij} - \frac{w_i}{w_j} \right| \quad \text{for all } 1 \leq i, j \leq n, \tag{1}$$
$$\left| a_{k\ell} - \frac{w'_k}{w'_\ell} \right| < \left| a_{k\ell} - \frac{w_k}{w_\ell} \right| \quad \text{for some } 1 \leq k, \ell \leq n. \tag{2}$$
A weight vector $w$ is called inefficient if it is not efficient.

For a consistent PCM $a_{ij} = w^{EM}_i / w^{EM}_j$ [20], which implies the following remark:

Remark 1. The principal right eigenvector $w^{EM}$ is efficient for every consistent PCM.

For inconsistent PCMs, however, the principal right eigenvector can be inefficient, as found by Blanquero, Carrizosa and Conde [4, Section 3]. This result was also reinforced by Bajwa, Choo and Wedley [3], by Conde and Pérez [10] and by Fedrizzi [16]. Blanquero, Carrizosa and Conde [4] developed LP models to test whether a weight vector is efficient. Bozóki and Fülöp [6] further developed the models and provided algorithms to improve an inefficient weight vector. Anholcer and Fülöp [2] devised a new algorithm to derive an efficient solution from an inconsistent PCM. Furthermore, Bozóki [5] showed that the principal right eigenvector of a whole class of matrices, namely a parametric PCM $A(p,q) \in \mathrm{PCM}_n$ whose entries are built from $p$, $1/p$, $q$ and $1/q$, where $n \geq 4$, $p > 0$ and $1 \neq q > 0$, is inefficient.

Several necessary and sufficient conditions were examined by Blanquero, Carrizosa and Conde [4], one of which is of crucial importance to us. It uses a directed graph representation:

Definition 2. Let $A = [a_{ij}]_{i,j=1,\dots,n} \in \mathrm{PCM}_n$ and $w = (w_1, w_2, \dots, w_n)^T$ be a positive weight vector. A directed graph $G = (V, \vec{E})_{A,w}$ is defined as follows: $V = \{1, 2, \dots, n\}$ and $\vec{E} = \left\{ \operatorname{arc}(i \to j) \;\middle|\; \frac{w_i}{w_j} \geq a_{ij},\; i \neq j \right\}.$

It follows from Definition 2 that if $w_i/w_j = a_{ij}$, then there is a bidirected arc between nodes $i$ and $j$. The result of Blanquero, Carrizosa and Conde using this representation is as follows:

Theorem 1 ([4, Corollary 10]). Let $A \in \mathrm{PCM}_n$.
A weight vector $w$ is efficient if and only if $G = (V, \vec{E})_{A,w}$ is a strongly connected digraph, that is, there exist directed paths from $i$ to $j$ and from $j$ to $i$ for all pairs of nodes $i, j$.

The following numerical example provides an illustration of Theorem 1.

Example 1. Let $A \in \mathrm{PCM}_4$ be defined as follows:
$$A = \begin{pmatrix} 1 & 1/2 & 4 & 2 \\ 2 & 1 & 5 & 7 \\ 1/4 & 1/5 & 1 & 2 \\ 1/2 & 1/7 & 1/2 & 1 \end{pmatrix}.$$
The principal right eigenvector $w^{EM}$ corresponding to the PCM $A$, and the consistent approximation of $A$ generated by $w^{EM}$, are as follows (truncated at 8 and 4 correct digits, respectively):
$$w^{EM} = \begin{pmatrix} 0.27471631 \\ 0.53204485 \\ 0.10869376 \\ 0.08454506 \end{pmatrix}, \qquad
\left[\frac{w^{EM}_i}{w^{EM}_j}\right] = \begin{pmatrix}
1 & 0.5163 & 2.5274 & 3.2493 \\
1.9367 & 1 & 4.8949 & 6.2930 \\
0.3956 & 0.2042 & 1 & 1.2856 \\
0.3077 & 0.1589 & 0.7778 & 1
\end{pmatrix}.$$
Applying Definition 2, the directed graph $G = (V, \vec{E})_{A, w^{EM}}$ corresponding to $A$ and $w^{EM}$ is drawn in Figure 1. By Theorem 1, $w^{EM}$ is not efficient, because the corresponding digraph is not strongly connected: no arc leaves node 2.

It can be seen in a more direct way why the principal right eigenvector $w^{EM}$ is inefficient. Since no arc leaves node 2, $w^{EM}_2 / w^{EM}_j < a_{2j}$ for every $j \neq 2$. If the second coordinate is increased to a suitable $w'_2$ while the other coordinates are kept fixed, the approximation in every entry of the second row and second column becomes strictly better ((2) holds in Definition 1), while for all other entries the approximation remains the same ((1) holds with equality in Definition 1).

As can be seen from Example 1 above, Theorem 1 is a powerful and applicable characterization of efficiency. In the rest of the paper, special types of PCMs are considered. A simple perturbed PCM differs from a consistent PCM in only one element and its reciprocal, or in other words, it can be made consistent by altering only one element (and its reciprocal). Thus, without loss of generality, a simple perturbed PCM can be written as
$$A_\delta = \begin{pmatrix}
1 & \delta x_1 & x_2 & \dots & x_{n-1} \\
1/(\delta x_1) & 1 & x_2/x_1 & \dots & x_{n-1}/x_1 \\
1/x_2 & x_1/x_2 & 1 & \dots & x_{n-1}/x_2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1/x_{n-1} & x_1/x_{n-1} & x_2/x_{n-1} & \dots & 1
\end{pmatrix} \in \mathrm{PCM}_n,$$
where $x_1, \dots, x_{n-1} > 0$ and $0 < \delta \neq 1$.

Theorem 2 ([1]). The principal right eigenvector of a simple perturbed PCM is efficient.
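Theorem 1 turns the efficiency question into a reachability check, which can be performed mechanically. The following sketch (assuming NumPy; the helper names are ours, not from the paper) reproduces Example 1: it builds the digraph of Definition 2 for $w^{EM}$ and confirms that no arc leaves node 2, so the digraph is not strongly connected and $w^{EM}$ is inefficient.

```python
import numpy as np

def em_weight_vector(A):
    """Right Perron eigenvector of A (positive, unnormalized)."""
    eigvals, eigvecs = np.linalg.eig(A)
    return np.abs(eigvecs[:, np.argmax(eigvals.real)].real)

def efficiency_digraph(A, w):
    """Arcs of Definition 2: i -> j iff w_i / w_j >= a_ij (i != j)."""
    n = len(w)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and w[i] / w[j] >= A[i, j]}

def strongly_connected(n, arcs):
    """Efficiency test of Theorem 1: every node must reach every other node."""
    adj = {i: [j for (s, j) in arcs if s == i] for i in range(n)}
    def reachable(s):
        seen, stack = {s}, [s]
        while stack:
            for j in adj[stack.pop()]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen
    return all(len(reachable(s)) == n for s in range(n))

A = np.array([[1,   1/2, 4,   2],
              [2,   1,   5,   7],
              [1/4, 1/5, 1,   2],
              [1/2, 1/7, 1/2, 1]])
w = em_weight_vector(A)
arcs = efficiency_digraph(A, w)
```

Node 2 of the paper corresponds to index 1 in the 0-indexed code.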
Similarly, a double perturbed PCM differs from a consistent PCM in two elements and their reciprocals, or in other words, it can be made consistent by altering two elements (and their reciprocals). We have to differentiate between three cases of double perturbed PCMs; without loss of generality, every double perturbed PCM is equivalent to one of them. Also, we can suppose without loss of generality that from now on $n \geq 4$, because a PCM with $n = 3$ is either simple perturbed or consistent. In Case 1, the perturbed elements are in the same row, and they are multiplied by $0 < \delta \neq 1$ and $0 < \gamma \neq 1$, respectively. In Case 2, they are in different rows, but this case needs to be further divided into two subcases (2A and 2B) due to algebraic issues. In Case 2A the matrix size is 4 × 4, while in Case 2B the matrix size is at least 5 × 5. Thus, these matrices take the following form:

Case 1:
$$P_{\gamma,\delta} = \begin{pmatrix}
1 & \delta x_1 & \gamma x_2 & x_3 & \dots & x_{n-1} \\
1/(\delta x_1) & 1 & x_2/x_1 & x_3/x_1 & \dots & x_{n-1}/x_1 \\
1/(\gamma x_2) & x_1/x_2 & 1 & x_3/x_2 & \dots & x_{n-1}/x_2 \\
1/x_3 & x_1/x_3 & x_2/x_3 & 1 & \dots & x_{n-1}/x_3 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1/x_{n-1} & x_1/x_{n-1} & x_2/x_{n-1} & x_3/x_{n-1} & \dots & 1
\end{pmatrix}, \tag{3}$$

Case 2A:
$$Q_{\gamma,\delta} = \begin{pmatrix}
1 & \delta x_1 & x_2 & x_3 \\
1/(\delta x_1) & 1 & x_2/x_1 & x_3/x_1 \\
1/x_2 & x_1/x_2 & 1 & \gamma x_3/x_2 \\
1/x_3 & x_1/x_3 & x_2/(\gamma x_3) & 1
\end{pmatrix}, \tag{4}$$

Case 2B:
$$R_{\gamma,\delta} = \begin{pmatrix}
1 & \delta x_1 & x_2 & x_3 & x_4 & \dots & x_{n-1} \\
1/(\delta x_1) & 1 & x_2/x_1 & x_3/x_1 & x_4/x_1 & \dots & x_{n-1}/x_1 \\
1/x_2 & x_1/x_2 & 1 & \gamma x_3/x_2 & x_4/x_2 & \dots & x_{n-1}/x_2 \\
1/x_3 & x_1/x_3 & x_2/(\gamma x_3) & 1 & x_4/x_3 & \dots & x_{n-1}/x_3 \\
1/x_4 & x_1/x_4 & x_2/x_4 & x_3/x_4 & 1 & \dots & x_{n-1}/x_4 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1/x_{n-1} & x_1/x_{n-1} & x_2/x_{n-1} & x_3/x_{n-1} & x_4/x_{n-1} & \dots & 1
\end{pmatrix}. \tag{5}$$

Once again, $x_1, \dots, x_{n-1} > 0$ and $0 < \delta, \gamma \neq 1$.

Remark 2. If either $\delta = 1$ or $\gamma = 1$, then the PCM is simple perturbed. If $\delta = \gamma = 1$, then the PCM is consistent.

Remark 3.
If $n = 4$ and $\delta = \gamma$, then the PCM $P_{\delta,\delta}$ in Case 1 is simple perturbed (multiply the single element $x_3$ in position $(1,4)$ by $\delta$ to obtain a consistent PCM).

Bozóki, Fülöp and Poesz examined PCMs that can be made consistent by modifying at most 3 elements [7]. Each of the three cases above corresponds to a graph: Case 1 corresponds to [7, Fig. 6(b)], while Case 2 corresponds to [7, Fig. 6(a)]. Cook and Kress [11] and Brunelli and Fedrizzi [8] also examined the similar idea of comparing two PCMs that differ in only one element.

Main results

The main result of the paper will be the extension of Theorem 2 to double perturbed PCMs. A method to acquire the explicit form of the principal right eigenvector of a PCM when the perturbed elements are in the same row or column has been developed by Farkas, Rózsa and Stubnya [15]. Farkas [14] writes the explicit formula for the simple perturbed case. Our first goal is to extend the method to the double perturbed case. Similarly to [14], the characteristic polynomial is needed first. Let $D = \operatorname{diag}(1, 1/x_1, \dots, 1/x_{n-1})$, and let $e = (1, \dots, 1)^T$. For $A \in \mathrm{PCM}_n$,
$$ee^T - U_i V_i^T = D^{-1} A D \tag{6}$$
holds for $i = 1, 2$, where
$$U_1 = \begin{pmatrix}
0 & 1 \\
1 - 1/\delta & 0 \\
1 - 1/\gamma & 0 \\
0 & 0 \\
\vdots & \vdots \\
0 & 0
\end{pmatrix} \in \mathbb{R}^{n \times 2}, \qquad
V_1 = \begin{pmatrix}
1 & 0 \\
0 & 1 - \delta \\
0 & 1 - \gamma \\
0 & 0 \\
\vdots & \vdots \\
0 & 0
\end{pmatrix} \in \mathbb{R}^{n \times 2}, \tag{7}$$
$$U_2 = \begin{pmatrix}
0 & 1 & 0 & 0 \\
1 - 1/\delta & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 - 1/\gamma & 0 \\
0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0
\end{pmatrix} \in \mathbb{R}^{n \times 4}, \qquad
V_2 = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 - \delta & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 - \gamma \\
0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0
\end{pmatrix} \in \mathbb{R}^{n \times 4}. \tag{8}$$
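The rank-two structure of identity (6) with the matrices in (7) is easy to sanity-check numerically for Case 1. In the sketch below (assuming NumPy; the concrete values of $n$, $x_i$, $\delta$, $\gamma$ are arbitrary test data), the perturbed matrix $P_{\gamma,\delta}$ is built from a consistent base PCM:

```python
import numpy as np

n, delta, gamma = 5, 1.4, 0.6
x = np.array([2.0, 3.0, 5.0, 7.0])          # x_1, ..., x_{n-1}
u = np.concatenate(([1.0], x))
P = u[None, :] / u[:, None]                 # consistent base PCM
P[0, 1] *= delta; P[1, 0] /= delta          # perturb a_12 and its reciprocal
P[0, 2] *= gamma; P[2, 0] /= gamma          # perturb a_13 and its reciprocal (Case 1)

D = np.diag(np.concatenate(([1.0], 1.0 / x)))
U1 = np.zeros((n, 2)); V1 = np.zeros((n, 2))
U1[1, 0] = 1 - 1 / delta
U1[2, 0] = 1 - 1 / gamma
U1[0, 1] = 1.0
V1[0, 0] = 1.0
V1[1, 1] = 1 - delta
V1[2, 1] = 1 - gamma

lhs = np.ones((n, n)) - U1 @ V1.T           # e e^T - U_1 V_1^T
rhs = np.linalg.inv(D) @ P @ D              # D^{-1} P D
```

The two sides agree entrywise, since the conjugation by $D$ removes the $x_i$ scaling and only the $\delta$, $\gamma$ perturbations survive in the rank-two correction.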
Let A ∈ R n×n , u, v ∈ R n . If A is invertible and 1 + v T A −1 u = 0, then A + uv T −1 exists, and A + uv T −1 = A −1 − 1 1 + v T A −1 u A −1 uv T A −1 . Let A ∈ PCM n be a double perturbed PCM and U i , V i be as in (6). Let the matrix K A (λ) ∈ R n×n be defined as follows: K A (λ) = λI + U i V T i − ee T = λI − D −1 AD, where I denotes I n , e = (1, . . . , 1) T ∈ R n , i = 1 in Case 1, i = 2 in Case 2, and the the second equation follows from (6). Lemma 3. The characteristic polynomial of the double perturbed PCM A ∈ PCM n is p A (λ) = (−1) n det(K A (λ)). Proof. As before, i = 1 in Case 1 and i = 2 in Case 2. p A (λ) = det(A − λI) = (−1) n det(λI − A) = (−1) n det λI + D U i V T i − ee T D −1 = (−1) n det D λI + U i V T i − ee T D −1 = (−1) n det(D) det λI + U i V T i − ee T det D −1 = (−1) n det(K A (λ)). Lemma 4. det(λI − ee T ) = λ n − nλ n−1 . Proof. If λ = 0, then both sides of the equation are 0. If λ = 0, apply Lemma 1 with m = 1, A = λI, U = −e, V = e: det(λI − ee T ) = (1 − e T (λI) −1 e) det(λI) = λ n − nλ n−1 . Lemma 5. If λ = 0 and λ = n, then λI − ee T −1 exists, and λI − ee T −1 = 1 λ (λ − n) ee T + 1 λ I. Proof. Apply the Sherman-Morrison formula (Lemma 2) with A = λI, u = −e, v = e. Lemma 6. Let U, V ∈ R n×m be arbitrary matrices. If λ = 0 and λ = n, then det λI n + UV T − ee T = λ n − nλ n−1 det I m + 1 λ(λ − n) V T ee T U + 1 λ V T U . Proof. Apply Lemma 1 with A = λI n − ee T . According to Lemma 5, A is invertible. Utilizing Lemmas 1, 4 and 5 the following equations hold: det λI n − ee T + UV T = det I m + V T λI n − ee T −1 U det λI n − ee T = det I m + 1 λ(λ − n) V T ee T U + 1 λ V T U λ n − nλ n−1 . We can write the characteristic polynomial of double perturbed PCMs in explicit form. Theorem 3. Let n ≥ 4. The characteristic polynomial of a double perturbed PCM in form (3) (Case 1) is p P (λ) = (−1) n λ n−3 λ 3 − nλ 2 − γ δ + δ γ − (n − 3) γ + δ + 1 γ + 1 δ + 4n − 10 . Proof. 
Lemma 3 implies that
$$p_P(\lambda) = (-1)^n \det(K_P(\lambda)) = (-1)^n \det\left(\lambda I + U_1 V_1^T - ee^T\right),$$
where $U_1$ and $V_1$ are defined by (7). Suppose that $\lambda \neq n$ and $\lambda \neq 0$. According to Lemma 6,
$$p_P(\lambda) = (-1)^n \left(\lambda^n - n\lambda^{n-1}\right) \det\left(I_2 + \frac{1}{\lambda(\lambda - n)} V_1^T ee^T U_1 + \frac{1}{\lambda} V_1^T U_1\right) = (-1)^n \left(\lambda^n - n\lambda^{n-1}\right) \det(S) = (-1)^n \lambda^{n-3} \left[ \lambda^3 - n\lambda^2 - \left(\frac{\gamma}{\delta} + \frac{\delta}{\gamma}\right) - (n-3)\left(\gamma + \delta + \frac{1}{\gamma} + \frac{1}{\delta}\right) + 4n - 10 \right],$$
where
$$S = \begin{pmatrix}
1 + \dfrac{2 - 1/\delta - 1/\gamma}{\lambda(\lambda - n)} & \dfrac{1}{\lambda(\lambda - n)} + \dfrac{1}{\lambda} \\[2ex]
\dfrac{(2 - \delta - \gamma)(1 - 1/\delta) + (2 - \delta - \gamma)(1 - 1/\gamma)}{\lambda(\lambda - n)} + \dfrac{(1 - \delta)(1 - 1/\delta)}{\lambda} + \dfrac{(1 - \gamma)(1 - 1/\gamma)}{\lambda} & 1 + \dfrac{2 - \delta - \gamma}{\lambda(\lambda - n)}
\end{pmatrix}.$$
A polynomial of degree $n$ is uniquely determined by $n + 1$ points, and we have calculated $p_P(\lambda)$ in all but two points, which completes the proof.

Theorem 4. Let $n \geq 5$. The characteristic polynomial of a double perturbed PCM in form (5) (Case 2B) is
$$p_R(\lambda) = (-1)^n \lambda^{n-5} \left[ \lambda^5 - n\lambda^4 - (n-2)\left(\gamma + \delta + \frac{1}{\gamma} + \frac{1}{\delta} - 4\right)\lambda^2 - c\lambda - (n-4)c \right], \quad \text{where } c = \frac{(\gamma - 1)^2 (\delta - 1)^2}{\gamma\delta}.$$
Furthermore, the characteristic polynomial of a double perturbed PCM in form (4) (Case 2A), $p_Q(\lambda)$, is a special case of $p_R(\lambda)$ with $n = 4$. Namely,
$$p_Q(\lambda) = \lambda^4 - 4\lambda^3 - 2\left(\gamma + \delta + \frac{1}{\gamma} + \frac{1}{\delta} - 4\right)\lambda - \frac{(\gamma - 1)^2 (\delta - 1)^2}{\gamma\delta}.$$

Proof. Lemma 3 implies that
$$p_R(\lambda) = (-1)^n \det(K_R(\lambda)) = (-1)^n \det\left(\lambda I + U_2 V_2^T - ee^T\right),$$
where $U_2$ and $V_2$ are defined by (8). Suppose that $\lambda \neq n$ and $\lambda \neq 0$. According to Lemma 6,
$$p_R(\lambda) = (-1)^n \left(\lambda^n - n\lambda^{n-1}\right) \det\left(I_4 + \frac{1}{\lambda(\lambda - n)} V_2^T ee^T U_2 + \frac{1}{\lambda} V_2^T U_2\right) = (-1)^n \left(\lambda^n - n\lambda^{n-1}\right) \det(T) = (-1)^n \lambda^{n-5} \left[ \lambda^5 - n\lambda^4 - (n-2)\left(\gamma + \delta + \frac{1}{\gamma} + \frac{1}{\delta} - 4\right)\lambda^2 - c\lambda - (n-4)c \right],$$
where
$$T = \begin{pmatrix}
1 + \frac{1 - 1/\delta}{\lambda(\lambda - n)} & \frac{1}{\lambda(\lambda - n)} + \frac{1}{\lambda} & \frac{1 - 1/\gamma}{\lambda(\lambda - n)} & \frac{1}{\lambda(\lambda - n)} \\[1ex]
\frac{(1-\delta)(1-1/\delta)}{\lambda(\lambda - n)} + \frac{(1-\delta)(1-1/\delta)}{\lambda} & 1 + \frac{1-\delta}{\lambda(\lambda - n)} & \frac{(1-\delta)(1-1/\gamma)}{\lambda(\lambda - n)} & \frac{1-\delta}{\lambda(\lambda - n)} \\[1ex]
\frac{1 - 1/\delta}{\lambda(\lambda - n)} & \frac{1}{\lambda(\lambda - n)} & 1 + \frac{1 - 1/\gamma}{\lambda(\lambda - n)} & \frac{1}{\lambda(\lambda - n)} + \frac{1}{\lambda} \\[1ex]
\frac{(1-\gamma)(1-1/\delta)}{\lambda(\lambda - n)} & \frac{1-\gamma}{\lambda(\lambda - n)} & \frac{(1-\gamma)(1-1/\gamma)}{\lambda(\lambda - n)} + \frac{(1-\gamma)(1-1/\gamma)}{\lambda} & 1 + \frac{1-\gamma}{\lambda(\lambda - n)}
\end{pmatrix}$$
and
$$c = \frac{(\gamma - 1)^2 (\delta - 1)^2}{\gamma\delta}.$$
Again, a polynomial of degree $n$ is uniquely determined by $n + 1$ points, and we have calculated $p_R(\lambda)$ in all but two points, which completes the proof. The case $n = 4$ is analogous, and
$$p_Q(\lambda) = \lambda^4 - 4\lambda^3 - 2\left(\gamma + \delta + \frac{1}{\gamma} + \frac{1}{\delta} - 4\right)\lambda - \frac{(\gamma - 1)^2 (\delta - 1)^2}{\gamma\delta}$$
results.

Theorem 5.
The principal right eigenvector of a double perturbed PCM can be written in explicit form. In Case 1 ($\gamma$ and $\delta$ are in the same row), the formulas for the principal right eigenvector are the following:
$$w^{EM} = \begin{pmatrix}
\delta\gamma\lambda(\lambda - n + 1) \\
\frac{1}{x_1}\left[\gamma\lambda - (n-2)\gamma + \delta + (n-3)\delta\gamma\right] \\
\frac{1}{x_2}\left[\delta\lambda - (n-2)\delta + \gamma + (n-3)\delta\gamma\right] \\
\frac{1}{x_3}\left[\gamma + \delta + \delta\gamma\lambda - 2\delta\gamma\right] \\
\vdots \\
\frac{1}{x_{i-1}}\left[\gamma + \delta + \delta\gamma\lambda - 2\delta\gamma\right] \\
\vdots \\
\frac{1}{x_{n-1}}\left[\gamma + \delta + \delta\gamma\lambda - 2\delta\gamma\right]
\end{pmatrix}, \tag{9}$$
$$w^{EM} = c_{11} \begin{pmatrix}
x_1\gamma\lambda\left[\delta\lambda - (n-2)\delta + \gamma + n - 3\right] \\
\gamma\lambda^3 - (n-1)\gamma\lambda^2 - (n-3)(\gamma^2 - 2\gamma + 1) \\
\frac{x_1}{x_2}\left[\gamma\lambda^2 - \gamma\lambda + \delta\lambda + (n-3)(\delta\gamma - \delta - \gamma + 1)\right] \\
\frac{x_1}{x_3}\left[\gamma\lambda^2 - \gamma\lambda - \gamma + \delta + \delta\gamma\lambda - \delta\gamma + \gamma^2\right] \\
\vdots \\
\frac{x_1}{x_{i-1}}\left[\gamma\lambda^2 - \gamma\lambda - \gamma + \delta + \delta\gamma\lambda - \delta\gamma + \gamma^2\right] \\
\vdots \\
\frac{x_1}{x_{n-1}}\left[\gamma\lambda^2 - \gamma\lambda - \gamma + \delta + \delta\gamma\lambda - \delta\gamma + \gamma^2\right]
\end{pmatrix}, \tag{10}$$
$$w^{EM} = c_{12} \begin{pmatrix}
x_2\delta\lambda\left[\delta + \gamma\lambda - (n-2)\gamma + n - 3\right] \\
\frac{x_2}{x_1}\left[\delta\lambda^2 - \delta\lambda + \gamma\lambda + (n-3)(\delta\gamma - \delta - \gamma + 1)\right] \\
\delta\lambda^3 - (n-1)\delta\lambda^2 - (n-3)(\delta^2 - 2\delta + 1) \\
\frac{x_2}{x_3}\left[\delta\lambda^2 - \delta\lambda + \gamma - \delta + \delta^2 + \delta\gamma\lambda - \delta\gamma\right] \\
\vdots \\
\frac{x_2}{x_{i-1}}\left[\delta\lambda^2 - \delta\lambda + \gamma - \delta + \delta^2 + \delta\gamma\lambda - \delta\gamma\right] \\
\vdots \\
\frac{x_2}{x_{n-1}}\left[\delta\lambda^2 - \delta\lambda + \gamma - \delta + \delta^2 + \delta\gamma\lambda - \delta\gamma\right]
\end{pmatrix}, \tag{11}$$
$$w^{EM} = c_{13} \begin{pmatrix}
x_3\delta\gamma\lambda(\delta + \gamma + \lambda - 2) \\
\frac{x_3}{x_1}\left[\delta\gamma\lambda^2 - \delta\gamma\lambda + \gamma^2 + \gamma\lambda - \gamma - \delta\gamma + \delta\right] \\
\frac{x_3}{x_2}\left[\delta\gamma\lambda^2 - \delta\gamma\lambda - \delta\gamma + \gamma + \delta^2 + \delta\lambda - \delta\right] \\
\delta\gamma\lambda^2 - 4\delta\gamma + \gamma + \delta + \delta^2\gamma + \gamma^2\delta \\
\frac{x_3}{x_4}\left[\delta\gamma\lambda^2 - 4\delta\gamma + \gamma + \delta + \delta^2\gamma + \gamma^2\delta\right] \\
\vdots \\
\frac{x_3}{x_{n-1}}\left[\delta\gamma\lambda^2 - 4\delta\gamma + \gamma + \delta + \delta^2\gamma + \gamma^2\delta\right]
\end{pmatrix}, \tag{12}$$
where $c_{11}, c_{12}, c_{13} \in \mathbb{R}$.
In Case 2A ($\gamma$ and $\delta$ are in different rows, and the matrix size is 4 × 4) the formulas take the following form:
$$w^{EM} = \begin{pmatrix}
\delta(\lambda^3\gamma - 3\lambda^2\gamma - 1 + 2\gamma - \gamma^2) \\
\frac{1}{x_1}\left[\lambda^2\gamma - 2\lambda\gamma + \delta + 2\lambda\delta\gamma - 2\delta\gamma + \delta\gamma^2\right] \\
\frac{\gamma}{x_2}\left[\gamma + \lambda - 1 + \delta\lambda^2 - 2\lambda\delta + \delta + \lambda\delta\gamma - \delta\gamma\right] \\
\frac{1}{x_3}\left[1 + \lambda\gamma - \gamma + \lambda\delta - \delta + \delta\gamma\lambda^2 - 2\lambda\delta\gamma + \delta\gamma\right]
\end{pmatrix}, \tag{13}$$
$$w^{EM} = c_{21} \begin{pmatrix}
x_1\left[\delta\gamma\lambda^2 - 2\lambda\delta\gamma + 1 + 2\lambda\gamma - 2\gamma + \gamma^2\right] \\
\lambda^3\gamma - 3\lambda^2\gamma - 1 + 2\gamma - \gamma^2 \\
\frac{x_1\gamma}{x_2}\left[\lambda\gamma + \lambda^2 - 2\lambda - \gamma + 1 + \lambda\delta - \delta + \delta\gamma\right] \\
\frac{x_1}{x_3}\left[\lambda + \lambda^2\gamma - 2\lambda\gamma - 1 + \gamma + \delta + \lambda\delta\gamma - \delta\gamma\right]
\end{pmatrix}, \tag{14}$$
$$w^{EM} = c_{22} \begin{pmatrix}
x_2\delta(1 + \lambda\gamma - \gamma)(\delta + \lambda - 1) \\
\frac{x_2}{x_1}\left[1 + \lambda\gamma - \gamma + \lambda\delta - \delta + \delta\gamma\lambda^2 - 2\lambda\delta\gamma + \delta\gamma\right] \\
\gamma(\delta\lambda^3 - 3\delta\lambda^2 - 1 + 2\delta - \delta^2) \\
\frac{x_2}{x_3}\left[2\lambda\delta\gamma + \delta\lambda^2 - 2\lambda\delta - 2\delta\gamma + \gamma + \delta^2\gamma\right]
\end{pmatrix}, \tag{15}$$
$$w^{EM} = c_{23} \begin{pmatrix}
x_3\delta(\lambda\gamma + \lambda^2 - 2\lambda - \gamma + 1 + \lambda\delta - \delta + \delta\gamma) \\
\frac{x_3}{x_1}\left[\gamma + \lambda - 1 + \delta\lambda^2 - 2\lambda\delta + \delta + \lambda\delta\gamma - \delta\gamma\right] \\
\frac{x_3}{x_2}\left[2\lambda\delta + \delta\gamma\lambda^2 - 2\lambda\delta\gamma - 2\delta + 1 + \delta^2\right] \\
\delta\lambda^3 - 3\delta\lambda^2 - 1 + 2\delta - \delta^2
\end{pmatrix}, \tag{16}$$
where $c_{21}, c_{22}, c_{23} \in \mathbb{R}$.

In Case 2B ($\gamma$ and $\delta$ are in different rows, and the matrix size is at least 5 × 5) the formulas are the following:
$$w^{EM} = \begin{pmatrix}
\delta\lambda\left[\lambda^3\gamma - (n-1)\lambda^2\gamma - (n-3)(\gamma^2 - 2\gamma + 1)\right] \\
\frac{1}{x_1}\left\{\lambda^3\gamma - (n-2)\lambda^2\gamma + (n-2)\delta\gamma\lambda^2 + \left[\lambda\delta + (n-4)(\delta - 1)\right](\gamma^2 - 2\gamma + 1)\right\} \\
\frac{\gamma\lambda}{x_2}\left[\gamma + \lambda - 1 + \delta\lambda^2 - 2\lambda\delta + \delta + \lambda\delta\gamma - \delta\gamma\right] \\
\frac{\lambda}{x_3}\left[1 + \lambda\gamma - \gamma + \lambda\delta - \delta + \delta\gamma\lambda^2 - 2\lambda\delta\gamma + \delta\gamma\right] \\
\frac{1}{x_4}\left[\gamma^2 - 2\gamma + \lambda^2\gamma + 1 + \lambda\delta - \delta\gamma\lambda^2 - 2\lambda\delta\gamma + \lambda\gamma^2\delta + \lambda^3\delta\gamma - \delta + 2\delta\gamma - \delta\gamma^2\right] \\
\vdots \\
\frac{1}{x_{n-1}}\left[\gamma^2 - 2\gamma + \lambda^2\gamma + 1 + \lambda\delta - \delta\gamma\lambda^2 - 2\lambda\delta\gamma + \lambda\gamma^2\delta + \lambda^3\delta\gamma - \delta + 2\delta\gamma - \delta\gamma^2\right]
\end{pmatrix}, \tag{17}$$
$$w^{EM} = c_{31} \begin{pmatrix}
x_1\left[\lambda^3\delta\gamma - (n-2)\delta\gamma\lambda^2 - (n-4)\delta(\gamma-1)^2 + \lambda + (n-2)\lambda^2\gamma - 2\lambda\gamma + \lambda\gamma^2 + (n-4)(\gamma-1)^2\right] \\
\lambda\left[\lambda^3\gamma - (n-1)\lambda^2\gamma - (n-3)(\gamma-1)^2\right] \\
\frac{x_1\gamma\lambda}{x_2}\left(\lambda\gamma + \lambda^2 - 2\lambda - \gamma + 1 + \delta\lambda - \delta + \delta\gamma\right) \\
\frac{x_1\lambda}{x_3}\left(\lambda + \lambda^2\gamma - 2\lambda\gamma - 1 + \gamma + \delta + \lambda\delta\gamma - \delta\gamma\right) \\
\frac{x_1}{x_4}\left(\lambda\gamma^2 - 2\lambda\gamma + \lambda^3\gamma + \lambda - \gamma^2 + 2\gamma - \lambda^2\gamma - 1 + \delta - 2\delta\gamma + \delta\gamma^2 + \delta\gamma\lambda^2\right) \\
\vdots \\
\frac{x_1}{x_{n-1}}\left(\lambda\gamma^2 - 2\lambda\gamma + \lambda^3\gamma + \lambda - \gamma^2 + 2\gamma - \lambda^2\gamma - 1 + \delta - 2\delta\gamma + \delta\gamma^2 + \delta\gamma\lambda^2\right)
\end{pmatrix}, \tag{18}$$
$$w^{EM} = c_{32} \begin{pmatrix}
x_2\delta\lambda(1 + \lambda\gamma - \gamma)(\delta + \lambda - 1) \\
\frac{x_2\lambda}{x_1}(1 + \lambda\gamma - \gamma)(1 + \delta\lambda - \delta) \\
\gamma\lambda\left[\lambda^3\delta - (n-1)\delta\lambda^2 - (n-3)(\delta-1)^2\right] \\
\frac{x_2}{x_3}\left[\delta\lambda^3 - (n-2)\delta\lambda^2(1-\gamma) - 2\lambda\delta\gamma + 2(n-4)\delta(1-\gamma) + \lambda\gamma + \delta^2\lambda\gamma + (n-4)(-1 + \gamma - \delta^2 + \delta^2\gamma)\right] \\
\frac{x_2}{x_4}(1 + \lambda\gamma - \gamma)(\delta\lambda^2 + 1 - 2\delta + \delta^2) \\
\vdots \\
\frac{x_2}{x_{n-1}}(1 + \lambda\gamma - \gamma)(\delta\lambda^2 + 1 - 2\delta + \delta^2)
\end{pmatrix}, \tag{19}$$
$$w^{EM} = c_{33} \begin{pmatrix}
x_3\delta\lambda(\lambda\gamma + \lambda^2 - 2\lambda - \gamma + 1 + \delta\lambda - \delta + \delta\gamma) \\
\frac{x_3\lambda}{x_1}(\gamma + \lambda - 1)(1 + \delta\lambda - \delta) \\
\frac{x_3}{x_2}\left[\lambda^3\delta\gamma - (n-2)\delta\lambda^2(\gamma-1) - 2\delta\lambda + 2(n-4)\delta(\gamma-1) + \lambda + \delta^2\lambda + (n-4)(1 - \gamma + \delta^2 - \delta^2\gamma)\right] \\
\lambda\left[\delta\lambda^3 - (n-1)\delta\lambda^2 - (n-3)(\delta-1)^2\right] \\
\frac{x_3}{x_4}\left(\delta\gamma\lambda^2 + \lambda^3\delta - \delta\lambda^2 - 2\delta\lambda - 2\delta\gamma + 2\delta - 1 + \gamma + \lambda + \delta^2\lambda - \delta^2 + \delta^2\gamma\right) \\
\vdots \\
\frac{x_3}{x_{n-1}}\left(\delta\gamma\lambda^2 + \lambda^3\delta - \delta\lambda^2 - 2\delta\lambda - 2\delta\gamma + 2\delta - 1 + \gamma + \lambda + \delta^2\lambda - \delta^2 + \delta^2\gamma\right)
\end{pmatrix}, \tag{20}$$
$$w^{EM} = c_{34} \begin{pmatrix}
x_4\delta\lambda(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(\delta + \lambda - 1) \\
\frac{x_4\lambda}{x_1}(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(1 + \delta\lambda - \delta) \\
\frac{x_4\gamma\lambda}{x_2}\left(\delta\gamma\lambda^2 + \lambda^3\delta - \delta\lambda^2 - 2\delta\lambda - 2\delta\gamma + 2\delta - 1 + \gamma + \lambda + \delta^2\lambda - \delta^2 + \delta^2\gamma\right) \\
\frac{x_4\lambda}{x_3}\left(\delta\lambda^2 + \lambda^3\delta\gamma - \delta\gamma\lambda^2 - 2\lambda\delta\gamma - 2\delta + 2\delta\gamma - \gamma + 1 + \lambda\gamma + \delta^2 + \delta^2\lambda\gamma - \delta^2\gamma\right) \\
(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(\delta\lambda^2 + 1 - 2\delta + \delta^2) \\
\frac{x_4}{x_5}(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(\delta\lambda^2 + 1 - 2\delta + \delta^2) \\
\vdots \\
\frac{x_4}{x_{n-1}}(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(\delta\lambda^2 + 1 - 2\delta + \delta^2)
\end{pmatrix}, \tag{21}$$
where $c_{31}, c_{32}, c_{33}, c_{34} \in \mathbb{R}$.

Proof. The proof is similar to that of the eigenvector formulas (24)-(26) in [14]. Let us consider Case 1. Let $D = \operatorname{diag}(1, 1/x_1, \dots, 1/x_{n-1})$, and let $K_P(\lambda) = \lambda I + U_1 V_1^T - ee^T$, with $U_1$ and $V_1$ as defined by (7). Since $D$ is invertible, every column of the rank-one matrix $D \operatorname{adj}(K_P(\lambda_{\max})) D^{-1}$ is a Perron eigenvector of $P$. For Case 2, replace $U_1$ by $U_2$ and $V_1$ by $V_2$ as defined by (8).

The formulas for the Perron eigenvector have been written in different forms in Theorem 5. Although it is not at all apparent, these forms are equal up to a constant multiplier. Using these formulas, the paper's main result can be obtained through a series of lemmas.
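Both the characteristic polynomial of Theorem 3 and formula (9) of Theorem 5 can be cross-checked against a numerical eigendecomposition. A sketch (assuming NumPy; the values of $n$, $x_i$, $\delta$, $\gamma$ are arbitrary Case 1 test data):

```python
import numpy as np

n, delta, gamma = 5, 1.3, 0.7
x = np.array([2.0, 3.0, 5.0, 7.0])
u = np.concatenate(([1.0], x))
P = u[None, :] / u[:, None]                 # consistent base
P[0, 1] *= delta; P[1, 0] /= delta          # Case 1 perturbations
P[0, 2] *= gamma; P[2, 0] /= gamma

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                        # lambda_max
w_num = np.abs(eigvecs[:, k].real)           # numerical Perron vector

# Theorem 3: lambda_max is a root of the cubic factor of p_P.
d, g = delta, gamma
cubic = (lam**3 - n * lam**2 - (g / d + d / g)
         - (n - 3) * (g + d + 1 / g + 1 / d) + 4 * n - 10)

# Theorem 5, formula (9): explicit Perron eigenvector for Case 1.
w = np.empty(n)
w[0] = d * g * lam * (lam - n + 1)
w[1] = (g * lam - (n - 2) * g + d + (n - 3) * d * g) / x[0]
w[2] = (d * lam - (n - 2) * d + g + (n - 3) * d * g) / x[1]
w[3:] = (g + d + d * g * lam - 2 * d * g) / x[2:]
```

Up to normalization, the explicit vector `w` agrees with the numerical Perron vector, and the cubic evaluates to zero at `lam` within floating-point error.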
Each of these lemmas corresponds to a directed edge in a digraph. Using these results, the direction of certain arcs can be determined. Thus, it will be shown that the directed graphs of Cases 1, 2A and 2B are strongly connected; by Theorem 1, efficiency of the principal right eigenvector is implied. It follows from the positivity of $w^{EM}$ (see Remark 4 in the Appendix) that both sides of the starting inequalities of each lemma can be multiplied by the respective $w^{EM}_i$ without further discussion. Since there are 28 lemmas, only one (Lemma 3f) has its proof in the main text, as an illustrative example. The rest of the proofs are in the Appendix.

The first group of lemmas corresponds to Case 1 ($\gamma$ and $\delta$ are in the same row), i.e., the double perturbed PCM is written in form (3).

Lemma 1a. $\delta > 1$ and $\delta \geq \gamma$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_2 < \delta x_1$.
Lemma 1b. $\delta < 1$ and $\delta \leq \gamma$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_2 > \delta x_1$.
Lemma 1c. $\gamma > 1$ and $\gamma \geq \delta$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_3 < \gamma x_2$.
Lemma 1d. $\gamma < 1$ and $\gamma \leq \delta$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_3 > \gamma x_2$.
Lemma 1e. $\gamma, \delta > 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_i > x_{i-1}$, $i = 4, \dots, n$.
Lemma 1f. $\gamma, \delta < 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_i < x_{i-1}$, $i = 4, \dots, n$.
Lemma 1g. $\delta \lessgtr \gamma$ $\Leftrightarrow$ $w^{EM}_2 / w^{EM}_3 \gtrless x_2 / x_1$.
Lemma 1h. $\delta \gtrless 1$ $\Leftrightarrow$ $w^{EM}_2 / w^{EM}_i \lessgtr x_{i-1} / x_1$, $i = 4, \dots, n$.
Lemma 1i. $\gamma \gtrless 1$ $\Leftrightarrow$ $w^{EM}_3 / w^{EM}_i \lessgtr x_{i-1} / x_2$, $i = 4, \dots, n$.
Lemma 1j. $w^{EM}_i / w^{EM}_j = x_{j-1} / x_{i-1}$, $i, j = 4, \dots, n$.

The second group of lemmas corresponds to Case 2A ($\gamma$ and $\delta$ are in different rows, and the matrix size is 4 × 4), i.e., the double perturbed PCM is written in form (4).

Lemma 2a. $\delta \gtrless 1$ $\Leftrightarrow$ $w^{EM}_1 / w^{EM}_2 \lessgtr \delta x_1$.
Lemma 2b. $\delta > 1$, $\gamma < 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_3 > x_2$.
Lemma 2c. $\delta < 1$, $\gamma > 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_3 < x_2$.
Lemma 2d. $\delta, \gamma < 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_4 < x_3$.
Lemma 2e. $\delta, \gamma > 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_4 > x_3$.
Lemma 2f. $\delta, \gamma < 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_3 > x_2 / x_1$.
Lemma 2g. $\delta, \gamma > 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_3 < x_2 / x_1$.
Lemma 2h. $\delta < 1$, $\gamma > 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_4 > x_3 / x_1$.
Lemma 2i. $\delta > 1$, $\gamma < 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_4 < x_3 / x_1$.
Lemma 2j. $\gamma \gtrless 1$ $\Leftrightarrow$ $w^{EM}_3 / w^{EM}_4 \lessgtr \gamma x_3 / x_2$.
The last group of lemmas corresponds to Case 2B, when $\gamma$ and $\delta$ are in different rows and the matrix size is at least 5 × 5, i.e., the double perturbed PCM is written in form (5).

Lemma 3a. $\gamma \gtrless 1$ $\Leftrightarrow$ $w^{EM}_3 / w^{EM}_4 \lessgtr \gamma x_3 / x_2$.
Lemma 3b. $\delta \gtrless 1$ $\Leftrightarrow$ $w^{EM}_1 / w^{EM}_2 \lessgtr \delta x_1$.
Lemma 3c. $\delta \gtrless 1$ $\Leftrightarrow$ $w^{EM}_1 / w^{EM}_i \gtrless x_{i-1}$, $i = 5, \dots, n$.
Lemma 3d. $\delta \gtrless 1$ $\Leftrightarrow$ $w^{EM}_2 / w^{EM}_i \lessgtr x_{i-1} / x_1$, $i = 5, \dots, n$.
Lemma 3e. $\gamma \gtrless 1$ $\Leftrightarrow$ $w^{EM}_3 / w^{EM}_i \gtrless x_{i-1} / x_2$, $i = 5, \dots, n$.

Lemma 3f. $\gamma > 1$, $\delta < 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_4 > x_3 / x_1$; and $\gamma < 1$, $\delta > 1$ $\Rightarrow$ $w^{EM}_2 / w^{EM}_4 < x_3 / x_1$.

Proof. Instead of the statement of the lemma, we will prove the following stronger statement: $\gamma \gtrless \delta \Leftrightarrow w^{EM}_2 / w^{EM}_4 \gtrless x_3 / x_1$. Formula (21) is used in this proof. The statement is equivalent to
$$\frac{x_4\lambda}{x_1}(\gamma^2 - 2\gamma + \lambda^2\gamma + 1)(1 + \delta\lambda - \delta) \gtrless \frac{x_3}{x_1} \cdot \frac{x_4\lambda}{x_3}\left(\delta\lambda^2 + \lambda^3\delta\gamma - \delta\gamma\lambda^2 - 2\lambda\delta\gamma - 2\delta + 2\delta\gamma - \gamma + 1 + \lambda\gamma + \delta^2 + \delta^2\lambda\gamma - \delta^2\gamma\right).$$
This is further equivalent to
$$\gamma^2 + \lambda\gamma^2\delta - \gamma^2\delta - 2\gamma - 2\lambda\gamma\delta + 2\gamma\delta + \lambda^2\gamma + \lambda^3\gamma\delta - \lambda^2\gamma\delta + 1 + \lambda\delta - \delta \gtrless \lambda^2\delta + \lambda^3\gamma\delta - \lambda^2\gamma\delta - 2\lambda\gamma\delta - 2\delta + 2\gamma\delta - \gamma + 1 + \lambda\gamma + \delta^2 + \lambda\gamma\delta^2 - \gamma\delta^2.$$
Further equivalent transformations yield
$$\lambda^2\gamma - \lambda^2\delta + \lambda\delta - \lambda\gamma + \lambda\gamma^2\delta - \lambda\gamma\delta^2 + \gamma^2 - \delta^2 + \gamma\delta^2 - \gamma^2\delta + 2\delta - 2\gamma + \gamma - \delta \gtrless 0,$$
$$\lambda^2(\gamma - \delta) + \lambda(\delta - \gamma) + \lambda\gamma\delta(\gamma - \delta) + (\gamma + \delta)(\gamma - \delta) + \gamma\delta(\delta - \gamma) + 2(\delta - \gamma) + (\gamma - \delta) \gtrless 0,$$
$$(\gamma - \delta)\left(\lambda^2 - \lambda + \lambda\gamma\delta + \gamma + \delta - \gamma\delta - 1\right) \gtrless 0,$$
$$(\gamma - \delta)\left(\lambda^2 - 2\lambda + \lambda\gamma\delta - \gamma\delta + \lambda - 1 + \gamma + \delta\right) \gtrless 0,$$
$$(\gamma - \delta)\left(\lambda(\lambda - 2) + \gamma\delta(\lambda - 1) + (\lambda - 1) + \gamma + \delta\right) \gtrless 0.$$
The second factor on the left hand side is always positive because $\lambda > n \geq 5$ and $\gamma, \delta > 0$.

Lemma 3g. $\gamma, \delta > 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_4 > x_3$; and $\gamma, \delta < 1$ $\Rightarrow$ $w^{EM}_1 / w^{EM}_4 < x_3$.
Lemma 3h. $w^{EM}_i / w^{EM}_j = x_{j-1} / x_{i-1}$, $i, j = 5, \dots, n$.

Cases of $\delta = 1$ and $\gamma = 1$ are not covered by Lemmas 1a-3h due to Remark 2. Utilizing these lemmas, the main result of the paper can be proved:

Theorem 6. The principal right eigenvector of a double perturbed PCM is efficient.

Proof. As per Theorem 1, the strong connectedness of the digraph in Definition 2 needs to be shown.
All possible digraphs are shown in Figures 2-4. The direction of each arc (where applicable) is determined by the corresponding lemma using Definition 2, which is labeled on the arc itself. In the cases where there is a node named $i$, it represents the complete subgraph of the rest of the nodes (consisting of $n - 3$ nodes in Case 1 and $n - 4$ nodes in Case 2B). In these subgraphs there are bidirected arcs between any two nodes, due to Lemmas 1j and 3h. This is a strongly connected subgraph, and for any fixed $j \leq 3$ the direction of the arc between nodes $i$ and $j$ is the same for every $i \geq 4$ in Case 1 (see Lemmas 1e, 1f, 1h, 1i). Similarly, for any fixed $j \leq 4$ the direction of the arc between nodes $i$ and $j$ is the same for every $i \geq 5$ in Case 2B (see Lemmas 3c, 3d, 3e). Hence, it can be contracted into a single node when analyzing strong connectedness. Figures 2, 3 and 4 correspond to Cases 1, 2A and 2B, respectively. For the strong connectedness of each digraph, it is sufficient to find a directed cycle. Unchecked arcs are denoted by dashed lines in Figures 2-4. Directed cycles are as follows:

Case 1 (Figure 2):
δ > 1, γ > δ: 1 → i → 2 → 3 → 1,
γ > 1, γ < δ: 1 → i → 3 → 2 → 1,
δ > 1, γ < 1: 1 → 3 → i → 2 → 1,
δ < 1, γ < δ: 1 → 3 → 2 → i → 1,
γ < 1, γ > δ: 1 → 2 → 3 → i → 1,
δ < 1, γ > 1: 1 → 2 → i → 3 → 1.

Case 2A (Figure 3):
δ > 1, γ > 1: 1 → 4 → 3 → 2 → 1,
δ > 1, γ < 1: 1 → 3 → 4 → 2 → 1,
δ < 1, γ < 1: 1 → 2 → 3 → 4 → 1,
δ < 1, γ > 1: 1 → 2 → 4 → 3 → 1.

Case 2B (Figure 4):
δ > 1, γ > 1: 1 → 4 → 3 → i → 2 → 1,
δ > 1, γ < 1: 1 → i → 3 → 4 → 2 → 1,
δ < 1, γ < 1: 1 → 2 → i → 3 → 4 → 1,
δ < 1, γ > 1: 1 → 2 → 4 → 3 → i → 1.

The presence of a directed cycle implies strong connectedness for all of the digraphs, which implies efficiency in all cases by Theorem 1.

Conclusions

In the paper we used linear algebraic methods to derive explicit formulas for the principal eigenvector of double perturbed PCMs.
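The main theorem can also be probed numerically: for randomly generated double perturbed PCMs, the digraph of Definition 2 built from the Perron eigenvector should always come out strongly connected. A sketch (assuming NumPy; the sampling ranges are arbitrary, and a small relative tolerance stands in for the exact bidirected arcs of Lemmas 1j and 3h):

```python
import numpy as np

def em_digraph_strongly_connected(A, rel_tol=1e-9):
    """Theorem 1 check for w_EM: build the digraph of Definition 2
    (with a tolerance so equality arcs survive rounding) and test
    strong connectedness by reachability from every node."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    adj = {i: [j for j in range(n)
               if j != i and w[i] / w[j] >= A[i, j] * (1 - rel_tol)]
           for i in range(n)}
    def reachable(s):
        seen, stack = {s}, [s]
        while stack:
            for j in adj[stack.pop()]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen
    return all(len(reachable(s)) == n for s in range(n))

rng = np.random.default_rng(1)
all_efficient = True
for _ in range(100):
    n = int(rng.integers(4, 7))
    x = rng.uniform(0.2, 5.0, size=n - 1)
    u = np.concatenate(([1.0], x))
    A = u[None, :] / u[:, None]                  # consistent base
    delta, gamma = rng.uniform(0.2, 5.0, size=2)
    A[0, 1] *= delta; A[1, 0] /= delta           # first perturbed entry
    i, j = (0, 2) if rng.random() < 0.5 else (2, 3)
    A[i, j] *= gamma; A[j, i] /= gamma           # second entry: Case 1 or Case 2
    all_efficient &= em_digraph_strongly_connected(A)
```

Every sampled instance yields a strongly connected digraph, in line with Theorem 6.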
We also used a necessary and sufficient condition for efficiency, developed by Blanquero, Carrizosa and Conde [4], which uses a directed graph representation (the weight vector is efficient if and only if this graph is strongly connected). Double perturbed PCMs had to be divided into three cases in order to get explicit formulas for every case. In all three cases the digraph has been studied arc by arc; however, not all arcs had to be studied in order to determine strong connectedness. Utilizing all these tools, we have shown in the paper that the often used eigenvector method produces an efficient weight vector in the case of double perturbed PCMs. This is an extension of our earlier result for simple perturbed PCMs [1].

Extension to triple perturbed PCMs and further is not possible, since all PCMs of at least 4 × 4 size which are not (at most) double perturbed are triple perturbed, and there are examples of inefficiency at size 4 × 4. Thus, while in some cases (e.g., when all perturbed elements are in different rows/columns) it may be possible to show efficiency, for all triple perturbed PCMs this is impossible. Furthermore, a triple perturbed PCM can be equivalent to five separate basic cases (see [7, Fig. 7]), which may need to be further divided into more subcases, making the efficiency analysis of triple perturbed PCMs difficult. A full characterization of the efficiency of the principal right eigenvector is still an open question, and a possible subject of future research.

Appendix

Remark 4. All coordinates of the eigenvector formulas (9)-(21) in Theorem 5 are positive. In each case below, the coordinate in question is rewritten in a form whose positivity is apparent; recall that $\lambda = \lambda_{\max} > n$.

Formula (9): $w^{EM}_1 = \delta\gamma\lambda(\lambda - n + 1) > 0$, since $\lambda > n - 1$.

Formula (10): $w^{EM}_1 = x_1\gamma\lambda\left[\delta\lambda - (n-2)\delta + \gamma + n - 3\right] = x_1\gamma\lambda\left[\delta(\lambda - n + 2) + \gamma + (n-3)\right]$.

Formula (11): $w^{EM}_1 = x_2\delta\lambda\left[\delta + \gamma\lambda - (n-2)\gamma + n - 3\right] = x_2\delta\lambda\left[\delta + \gamma(\lambda - n + 2) + (n-3)\right]$.

Formula (12): positivity is apparent for $w^{EM}_1$.

Formula (13): $w^{EM}_2 = \frac{1}{x_1}\left[\lambda^2\gamma - 2\lambda\gamma + \delta + 2\lambda\delta\gamma - 2\delta\gamma + \delta\gamma^2\right] = \frac{1}{x_1}\left[\lambda\gamma(\lambda - 2) + \delta + 2\delta\gamma(\lambda - 1) + \delta\gamma^2\right]$.

Formula (14): $w^{EM}_1 = x_1\left[\delta\gamma\lambda^2 - 2\lambda\delta\gamma + 1 + 2\lambda\gamma - 2\gamma + \gamma^2\right] = x_1\left[\delta\gamma\lambda(\lambda - 2) + 1 + 2\gamma(\lambda - 1) + \gamma^2\right]$.
Formula (15): w^EM_1 = x_2 δ(1 + λγ − γ)(δ + λ − 1) = x_2 δ[1 + γ(λ − 1)][δ + (λ − 1)].

Formula (16): w^EM_3 = (x_3/x_2) [2λδ + δγλ^2 − 2λδγ − 2δ + 1 + δ^2] = (x_3/x_2) [2δ(λ − 1) + δγλ(λ − 2) + 1 + δ^2].

From here on in the proof, n ≥ 5.

Formulas (17) and (18): the relevant components coincide, up to a factor of λ, with those of formulas (13) and (14), respectively, which are already proven to be positive.

Formula (19): w^EM_1 = x_2 δλ(1 + λγ − γ)(δ + λ − 1) = x_2 δλ[1 + γ(λ − 1)][δ + (λ − 1)].

Formula (20): w^EM_2 = (x_3/x_1) λ(γ + λ − 1)(1 + δλ − δ) = (x_3/x_1) λ[γ + (λ − 1)][1 + δ(λ − 1)].

Formula (21): w^EM_1 = x_4 δλ(γ^2 − 2γ + λ^2 γ + 1)(δ + λ − 1) = x_4 δλ[γ^2 + γ(λ^2 − 2) + 1][δ + (λ − 1)].

Lemma 1a (Case 1). δ > 1 and δ ≥ γ ⇒ w^EM_1 / w^EM_2 < δ x_1.

Proof. Using formula (10),

w^EM_1 / w^EM_2 = x_1 γλ (δλ − (n − 2)δ + γ + n − 3) / [γλ^3 − (n − 1)γλ^2 − (n − 3)(γ^2 − 2γ + 1)].

Substitute λ = λ_max in the characteristic polynomial p_P(λ) by Theorem 3:

(−1)^n λ^(n−3) [λ^3 − nλ^2 − (γ/δ + δ/γ) − (n − 3)(γ + δ + 1/γ + 1/δ) + 4n − 10] = 0,

which can be transformed to

γδλ^3 − γδnλ^2 = γ^2 + δ^2 + (n − 3)(γ^2 δ + γδ^2 + δ + γ) − γδ(4n − 10).    (22)

The statement to be proven is equivalent to

γλ (δλ + γ − (n − 2)δ + n − 3) < δ [γλ^3 − (n − 1)γλ^2 − (n − 3)(γ − 1)^2].

Using (22) this is further equivalent to

(γδλ^3 − γδnλ^2) + λ [γδ(n − 2) − γ^2 − γn + 3γ] − δ(n − 3)(γ − 1)^2 > 0.

Now apply further equivalent transformations:

γλ (δ(n − 2) − γ − n + 3) + γ^2 + δ^2 − (n − 3)(δγ^2 − 2δγ + δ) + (n − 3)(δ^2 γ + δγ^2 + δ + γ) − δγ(4n − 10) > 0
γλ ((δ − 1)(n − 3) + δ − γ) + γ^2 + δ^2 + (n − 3)(δ^2 γ + 2δγ + γ) − 4δγ(n − 3) − 2δγ > 0
γλ ((δ − 1)(n − 3) + (δ − γ)) + γ(n − 3)(δ − 1)^2 + (δ − γ)^2 > 0.

This holds, since δ > 1 and δ ≥ γ.

Lemma 1b (Case 1). δ < 1 and δ ≤ γ ⇒ w^EM_1 / w^EM_2 > δ x_1.

Proof. According to formula (10),

w^EM_1 / w^EM_2 = x_1 γλ (δλ − (n − 2)δ + γ + n − 3) / [γλ^3 − (n − 1)γλ^2 − (n − 3)(γ^2 − 2γ + 1)].

Transforming (22) similarly to Lemma 1a,

γλ ((δ − 1)(n − 3) + δ − γ) + γ(n − 3)(δ − 1)^2 + (δ − γ)^2 < 0.
Transforming this further yields

γ(δ − 1)(n − 3)(λ + δ − 1) + γλ(δ − γ) + (δ − γ)^2 < 0
γ(δ − 1)(n − 3)(λ + δ − 1) + (δ − γ)(γ(λ − 1) + δ) < 0.

This holds, since δ < 1 and δ ≤ γ.

Lemma 1c (Case 1). γ > 1 and γ ≥ δ ⇒ w^EM_1 / w^EM_3 < γ x_2.
Proof. The proof follows from switching the role of δ and γ in the proof of Lemma 1a.

Lemma 1d (Case 1). γ < 1 and γ ≤ δ ⇒ w^EM_1 / w^EM_3 > γ x_2.
Proof. The proof follows from switching the role of δ and γ in the proof of Lemma 1b.

Lemma 1e (Case 1). γ, δ > 1 ⇒ w^EM_1 / w^EM_i > x_{i−1}, i = 4, ..., n.
Proof. According to formula (9),

w^EM_1 / w^EM_i = x_{i−1} γδλ(λ − n + 1) / (γ + δ + γδλ − 2γδ),

which means the statement to be proven is equivalent to

γδλ(λ − n + 1) > γ + δ + γδλ − 2γδ.

Further equivalent transformations yield

γδλ(λ − n) + (2γδ − γ − δ) > 0
γδλ(λ − n) + (δ − 1)(γ − 1) + (δγ − 1) > 0.

This holds, since λ > n and γ, δ > 1.

Lemma 1f (Case 1). γ, δ < 1 ⇒ w^EM_1 / w^EM_i < x_{i−1}, i = 4, ..., n.
Proof. According to formula (12),

w^EM_1 / w^EM_i = x_{i−1} γδλ(δ + γ + λ − 2) / (γδλ^2 − 4γδ + γ + δ + δ^2 γ + γ^2 δ).

Applying further equivalent transformations:

γδλ(δ + γ + λ − 2) / (γδλ^2 − 4γδ + γ + δ + δ^2 γ + γ^2 δ) < 1
γδλ(δ + γ + λ − 2) < γδ(λ^2 − 4) + γ + δ + δ^2 γ + γ^2 δ
λ(δ + γ + λ − 2) < λ^2 − 4 + 1/δ + 1/γ + δ + γ
0 < λ^2 − 4 + 1/δ + 1/γ + δ + γ − λδ − λγ − λ^2 + 2λ
0 < 2(λ − 2) + (1 − λ)(δ + γ) + 1/δ + 1/γ
0 < 2(λ − 1) − 2 + (1 − λ)(δ + γ) + 1/δ + 1/γ
0 < (λ − 1)(2 − δ − γ) + 1/δ + 1/γ − 2.

This holds, since γ, δ < 1 and λ > 1.

Lemma 1g (Case 1). δ ≷ γ ⇔ w^EM_2 / w^EM_3 ≶ x_2 / x_1.
Proof. According to formula (9), we need to consider

w^EM_2 / w^EM_3 = (x_2/x_1) · (γλ − (n − 2)γ + δ + (n − 3)γδ) / (δλ − (n − 2)δ + γ + (n − 3)γδ) ≷ x_2/x_1.

Applying further equivalent transformations:

γλ − (n − 2)γ + δ + (n − 3)γδ ≷ δλ − (n − 2)δ + γ + (n − 3)γδ
λ(γ − δ) − (n − 2)(γ − δ) + δ − γ ≷ 0
(γ − δ)(λ − n + 1) ≷ 0.

The third factor is positive because λ > n.

Lemma 1h (Case 1). δ ≷ 1 ⇔ w^EM_2 / w^EM_i ≶ x_{i−1} / x_1, i = 4, ..., n.
Proof. According to formula (9),

w^EM_2 / w^EM_i = (x_{i−1}/x_1) · (γλ − (n − 2)γ + δ + (n − 3)γδ) / (γ + δ + γδλ − 2γδ).
Equivalent transformations yield

γλ − (n − 2)γ + δ + (n − 3)γδ < γ + δ + γδλ − 2γδ
0 < γ(δ − 1)(λ − n + 1).

The third factor is positive because λ > n.

Lemma 1i (Case 1). γ ≷ 1 ⇔ w^EM_3 / w^EM_i ≶ x_{i−1} / x_2, i = 4, ..., n.
Proof. The proof follows from switching the role of δ and γ in the proof of Lemma 1h.

Lemma 1j (Case 1). w^EM_i / w^EM_j = x_{j−1} / x_{i−1}, i, j = 4, ..., n.
Proof. It follows from each of formulas (9)-(12).

Lemma 2a (Case 2A). δ ≷ 1 ⇔ w^EM_1 / w^EM_2 ≶ δ x_1.
Proof. Formula (16) is used for this proof. Multiplying both sides by w^EM_2, the statement to be proven can be written as:

x_3 δ(λγ + λ^2 − 2λ − γ + 1 + λδ − δ + δγ) ≶ δ x_1 (x_3/x_1)(γ + λ − 1 + δλ^2 − 2λδ + δ + λδγ − δγ).    (23)

Further equivalent transformations yield:

0 ≶ λ^2 δ − λ^2 + 3λ − λγ − 3λδ + λδγ − 2δγ + 2γ + 2δ − 2
0 ≶ λ^2 (δ − 1) + λγ(δ − 1) + 3λ(1 − δ) + 2γ(1 − δ) + 2(δ − 1)
0 ≶ (δ − 1)(λ(λ − 3) + γ(λ − 2) + 2).

The second factor on the right hand side is always positive because λ > n = 4 and γ, δ > 0.

Lemma 2b (Case 2A). δ > 1, γ < 1 ⇒ w^EM_1 / w^EM_3 > x_2.
Proof. Formula (14) is used in this proof. Multiplying both sides by w^EM_3, the statement of the lemma is equivalent to:

x_1 (δγλ^2 − 2λδγ + 1 + 2λγ − 2γ + γ^2) < x_2 (x_1/x_2) γ(λγ + λ^2 − 2λ − γ + 1 + λδ − δ + δγ).

Further equivalent transformations yield:

0 < λ^2 γ − λ^2 γδ − 4λγ + λγ^2 + 3λδγ − 2γ^2 + 3γ + δγ^2 − δγ − 1
0 < λ^2 γ(1 − δ) + λγ(γ − 1) + 3λγ(δ − 1) + (γ − 1) + 2γ(1 − γ) + δγ(γ − 1)
0 < (1 − δ)λγ(λ − 3) + (γ − 1)(γ(λ − 2) + δγ + 1).    (24)

The second factor on the right hand side is always positive because λ > n = 4 and γ, δ > 0.

Lemma 2c (Case 2A). δ < 1, γ > 1 ⇒ w^EM_1 / w^EM_3 < x_2.
Proof. The proof follows from the right hand side of (24) being positive in the case of δ < 1, γ > 1.

Lemma 2d (Case 2A). δ, γ < 1 ⇒ w^EM_1 / w^EM_4 < x_3.
Proof. Again, formula (14) is used for this proof.
Multiplying both sides by w^EM_4, the statement to be proven is equivalent to:

x_1 (δγλ^2 − 2λδγ + 1 + 2λγ − 2γ + γ^2) < x_3 (x_1/x_3)(λ + λ^2 γ − 2λγ − 1 + γ + δ + λδγ − δγ).

Further equivalent transformations yield:

λ^2 γδ − λ^2 γ + 4λγ − 3λδγ − λ + γ^2 − 3γ + δγ − δ + 2 < 0
(δ − 1)(λ^2 γ − 3λγ) + (γ − 1)(λ + γ − 2 + δ) < 0
(δ − 1)λγ(λ − 3) + (γ − 1)((λ − 2) + γ + δ) < 0.    (25)

The left hand side is negative if γ, δ < 1, because λ > n = 4.

Lemma 2e (Case 2A). δ, γ > 1 ⇒ w^EM_1 / w^EM_4 > x_3.
Proof. The proof follows from the left hand side of (25) being positive if γ, δ > 1.

Lemma 2f (Case 2A). δ, γ < 1 ⇒ w^EM_2 / w^EM_3 > x_2 / x_1.
Proof. Formula (13) is used in this proof. Multiplying both sides by w^EM_3, the statement of the lemma can be written as:

(1/x_1)(λ^2 γ − 2λγ + δ + 2λδγ − 2δγ + δγ^2) > (x_2/x_1)(1/x_2) γ(γ + λ − 1 + δλ^2 − 2λδ + δ + λδγ − δγ).

Further equivalent transformations yield:

0 > λ^2 γδ − λ^2 γ − 4λδγ + 3λγ + λδγ^2 + γ^2 − 2δγ^2 + 3δγ − γ − δ
0 > (δ − 1)(λ^2 γ − 3λγ) + (γ − 1)(λδγ − 2δγ + δ + γ)
0 > (δ − 1)λγ(λ − 3) + (γ − 1)(δγ(λ − 2) + δ + γ).    (26)

The right hand side is negative if δ, γ < 1, because λ > n = 4.

Lemma 2g (Case 2A). δ, γ > 1 ⇒ w^EM_2 / w^EM_3 < x_2 / x_1.
Proof. The proof follows from the right hand side of (26) being positive if δ, γ > 1.

Lemma 2h (Case 2A). δ < 1, γ > 1 ⇒ w^EM_2 / w^EM_4 > x_3 / x_1.
Proof. Again, formula (13) is used in this proof. Multiplying both sides by w^EM_4, the statement to be proven is equivalent to:

(1/x_1)(λ^2 γ − 2λγ + δ + 2λδγ − 2δγ + δγ^2) > (x_3/x_1)(1/x_3)(1 + λγ − γ + λδ − δ + δγλ^2 − 2λδγ + δγ).

Further equivalent transformations yield:

0 > λ^2 γδ − λ^2 γ − 4λδγ + 3λγ + λδ + 3δγ − δγ^2 − 2δ − γ + 1
0 > (δ − 1)(λ^2 γ − 3λγ) + (1 − γ)(λδ − 2δ + δγ + 1)
0 > (δ − 1)λγ(λ − 3) + (1 − γ)(δ(λ − 2) + δγ + 1).    (27)

The right hand side of (27) is negative if δ < 1, γ > 1, because λ > n = 4.

Lemma 2i (Case 2A). δ > 1, γ < 1 ⇒ w^EM_2 / w^EM_4 < x_3 / x_1.
Proof.
The proof follows from the right hand side of (27) being positive if δ > 1, γ < 1.

Lemma 2j (Case 2A). γ ≷ 1 ⇔ w^EM_3 / w^EM_4 ≶ γ x_3 / x_2.
Proof. Once again, formula (13) is used for the proof. Multiplying both sides by w^EM_4, the first statement (for γ > 1) becomes equivalent to:

(1/x_2) γ(γ + λ − 1 + δλ^2 − 2λδ + δ + λδγ − δγ) ≶ γ (x_3/x_2)(1/x_3)(1 + λγ − γ + λδ − δ + δγλ^2 − 2λδγ + δγ).    (28)

Applying further equivalent transformations:

0 ≶ λ^2 δγ − λ^2 δ − 3λδγ + 3λδ + λγ − λ + 2δγ − 2δ − 2γ + 2
0 ≶ (γ − 1)(λ^2 δ − 3λδ + λ + 2δ − 2)
0 ≶ (γ − 1)(λδ(λ − 3) + (λ − 2) + 2δ).    (29)

The second factor on the right hand side of (29) is positive because λ > n = 4 and γ, δ > 0.

Lemma 3c (Case 2B). δ ≷ 1 ⇔ w^EM_1 / w^EM_i ≷ x_{i−1}, i = 5, ..., n.
Proof. Formula (19) is used for this proof.

x_2 δλ(1 + λγ − γ)(δ + λ − 1) ≷ x_{i−1} (x_2/x_{i−1})(1 + λγ − γ)(δλ^2 + 1 − 2δ + δ^2)
λδ^2 + λ^2 δ − λδ ≷ δλ^2 + 1 − 2δ + δ^2
λδ(δ − 1) + δ(1 − δ) + (δ − 1) ≷ 0
(δ − 1)(δ(λ − 1) + 1) ≷ 0.

The second factor on the left hand side is always positive because λ > n ≥ 5 and δ > 0.

Lemma 3d (Case 2B). δ ≷ 1 ⇔ w^EM_2 / w^EM_i ≶ x_{i−1} / x_1, i = 5, ..., n.
Proof. Again, formula (19) is used in the proof.

(x_2/x_1) λ(1 + λγ − γ)(1 + δλ − δ) ≶ (x_{i−1}/x_1)(x_2/x_{i−1})(1 + λγ − γ)(δλ^2 + 1 − 2δ + δ^2)
λ + λ^2 δ − δλ ≶ δλ^2 + 1 − 2δ + δ^2
0 ≶ λδ − λ + δ^2 − 2δ + 1
0 ≶ λ(δ − 1) + (δ − 1)^2
0 ≶ (δ − 1)((λ − 1) + δ).

The second factor on the right hand side is always positive because λ > n ≥ 5 and δ > 0.

Lemma 3e (Case 2B). γ ≷ 1 ⇔ w^EM_3 / w^EM_i ≷ x_{i−1} / x_2, i = 5, ..., n.
Proof. Formula (18) is used in this proof.

(x_1/x_2) γλ(λγ + λ^2 − 2λ − γ + 1 + δλ − δ + δγ) ≷ (x_4/x_2)(x_1/x_4)(λγ^2 − 2λγ + λ^3 γ + λ − γ^2 + 2γ − λ^2 γ − 1 + δ − 2δγ + δγ^2 + δγλ^2).
Further equivalent transformations yield:

λ^2 γ^2 − λ^2 γ − 2λγ^2 + 3λγ + λγ^2 δ − λδγ − λ + γ^2 − 2γ + 1 − δ + 2δγ − δγ^2 ≷ 0
(γ − 1)(λ^2 γ − 2λγ + λ + λδγ + (γ − 1) − δ(γ − 1)) ≷ 0
(γ − 1)(λγ(λ − 2) + (λ − 1) + δγ(λ − 1) + γ + δ) ≷ 0.

The second factor on the left hand side is always positive because λ > n ≥ 5 and γ, δ > 0.

Lemma 3g (Case 2B). γ, δ > 1 ⇒ w^EM_1 / w^EM_4 > x_3; γ, δ < 1 ⇒ w^EM_1 / w^EM_4 < x_3.
Proof. Instead of the above statement, we will prove the following stronger statement: γδ ≷ 1 ⇔ w^EM_1 / w^EM_4 ≷ x_3. Formula (21) is used in this proof.

x_4 δλ(γ^2 − 2γ + λ^2 γ + 1)(δ + λ − 1) ≷ x_3 (x_4/x_3) λ(δλ^2 + λ^3 δγ − δγλ^2 − 2λδγ − 2δ + 2δγ − γ + 1 + λγ + δ^2 + δ^2 λγ − δ^2 γ).

Further equivalent transformations yield:

λ^2 δ^2 γ − λ^2 δ + λγ^2 δ − λγδ^2 + λδ − λγ + γ^2 δ^2 − γδ^2 − δγ^2 + δ + γ − 1 ≷ 0
(δγ − 1)(λ^2 δ + λγ − λδ + δγ + 1 − δ − γ) ≷ 0
(δγ − 1)(λδ(λ − 2) + γ(λ − 1) + δ(λ − 1) + δγ + 1) ≷ 0.

The second factor is always positive because λ > n ≥ 5 and γ, δ > 0. The first factor is positive exactly if γδ > 1, and negative exactly if γδ < 1.

Lemma 3h (Case 2B). w^EM_i / w^EM_j = x_{j−1} / x_{i−1}, i, j = 5, ..., n.
Proof. It follows from each of formulas (17)-(21).

Figure 1: The principal right eigenvector in Example 1 is inefficient, because the corresponding digraph is not strongly connected: no arc leaves node 2.

Theorem 2 ([1, Theorem 3.1]). The principal right eigenvector of a simple perturbed pairwise comparison matrix is efficient.

Theorem 4. Let n ≥ 4.
The characteristic polynomial of a double perturbed PCM in form (5) (Case 2B) is

Figure 2: The digraph of the principal right eigenvector in Case 1 is strongly connected, independently of the orientation of dashed arcs that have not been analyzed.

Figure 3: The digraph of the principal right eigenvector in Case 2A is strongly connected, independently of the orientation of dashed arcs that have not been analyzed.

Figure 4: The digraph of the principal right eigenvector in Case 2B is strongly connected, independently of the orientation of dashed arcs that have not been analyzed.

Formula (17): w^EM_3 in formula (17) is the same as λ w^EM_3 in formula (13), which is already proven to be positive.

Formula (18): w^EM_3 in formula (18) is the same as λ w^EM_3 in formula (14), which is already proven to be positive.

Lemma 3a (Case 2B). γ ≷ 1 ⇔ w^EM_3 / w^EM_4 ≶ γ x_3 / x_2.
Proof. Using formula (17), the proof is similar to the proof of Lemma 2j; the only difference is in (28), where both sides are multiplied by λ, which immediately cancel each other.

Lemma 3b (Case 2B). δ ≷ 1 ⇔ w^EM_1 / w^EM_2 ≶ δ x_1.
Proof. Using formula (21), the proof is similar to the proof of Lemma 2a; the only difference is in (23), where both sides (the formulas for w^EM_1 and w^EM_2) are multiplied by λ, which immediately cancel each other. This may not be apparent about w^EM_2, but (γ + λ − 1)(1 + δλ − δ) = γ + λ − 1 + λγδ + λ^2 δ − λδ − γδ − λδ + δ, which, after reduction, gives the same formula.

References

[1] K. Ábele-Nagy and S. Bozóki. Efficiency analysis of simple perturbed pairwise comparison matrices. Fundamenta Informaticae, accepted, 2016. http://arxiv.org/abs/1505.06849.
[2] M. Anholcer and J. Fülöp. Deriving priorities from inconsistent PCM using the network algorithms. Manuscript, 2015. http://arxiv.org/pdf/1510.04315.
[3] G. Bajwa, E. U. Choo, and W. C. Wedley. Effectiveness analysis of deriving priority vectors from reciprocal pairwise comparison matrices. Asia-Pacific Journal of Operational Research, 25(3):279-299, 2008.
[4] R. Blanquero, E. Carrizosa, and E. Conde. Inferring efficient weights from pairwise comparison matrices. Mathematical Methods of Operations Research, 64(2):271-284, 2006.
[5] S. Bozóki. Inefficient weights from pairwise comparison matrices with arbitrarily small inconsistency. Optimization, 63(12):1893-1901, 2014.
[6] S. Bozóki and J. Fülöp. Efficient weight vectors from pairwise comparison matrices. Manuscript, 2015. http://arxiv.org/abs/1602.03311.
[7] S. Bozóki, J. Fülöp, and A. Poesz. On pairwise comparison matrices that can be made consistent by the modification of a few elements. Central European Journal of Operations Research, 19(2):157-175, 2011.
[8] M. Brunelli and M. Fedrizzi. Axiomatic properties of inconsistency indices for pairwise comparisons. Journal of the Operational Research Society, 66:1-15, 2014.
[9] E. U. Choo and W. C. Wedley. A common framework for deriving preference values from pairwise comparison matrices. Computers & Operations Research, 31(6):893-908, 2004.
[10] E. Conde and M. d. l. P. R. Pérez. A linear optimization problem to derive relative weights using an interval judgement matrix. European Journal of Operational Research, 201(2):537-544, 2010.
[11] W. D. Cook and M. Kress. Deriving weights from pairwise comparison ratio matrices: An axiomatic approach. European Journal of Operational Research, 37(3):355-362, 1988.
[12] T. K. Dijkstra. On the extraction of weights from pairwise comparison matrices. Central European Journal of Operations Research, 21(1):103-123, 2013.
[13] M. Ehrgott. Multicriteria Optimization. Volume 491 of Lecture Notes in Economics and Mathematical Systems. Springer Verlag, Berlin, 2000.
[14] A. Farkas. The analysis of the principal eigenvector of pairwise comparison matrices. Acta Polytechnica Hungarica, 4(2):99-115, 2007.
[15] A. Farkas, P. Rózsa, and E. Stubnya. Transitive matrices and their applications. Linear Algebra and its Applications, 302-303:423-433, 1999.
[16] M. Fedrizzi. Obtaining non-dominated weights from preference relations through norm-induced distances. XXXVII Meeting of the Italian Association for Mathematics Applied to Economic and Social Sciences (AMASES), September 5-7, 2013, Stresa, Italy.
[17] B. Golany and M. Kress. A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices. European Journal of Operational Research, 69(2):210-220, 1993.
[18] D. A. Harville. Matrix Algebra From a Statistician's Perspective. Springer, 2008.
[19] A. Poesz. Empirical pairwise comparison matrices (EPCM): an on-line collection from real decisions, version EPCM-October-2009, 2009.
[20] T. L. Saaty. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15(3):234-281, 1977.
[21] J. Sherman and W. J. Morrison. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Annals of Mathematical Statistics, 21(1):124-127, 1950.
[22] R. E. Steuer. Multiple Criteria Optimization: Theory, Computation, and Application. Wiley Series in Probability and Mathematical Statistics. Wiley, 1986.
Appendix: Proofs of Lemmas 1a-3h

Proof. It is sufficient to prove the positivity of any arbitrary element of each formula, because the Perron-Frobenius theorem then guarantees the positivity for the vectors as well. The conclusions of the proofs generally follow from x_i > 0 for all i = 1, ..., n, γ, δ > 0, and λ > n ≥ 4 (or n ≥ 5 in Case 2B). The proof for each formula follows:

Formula (9): Positivity is apparent for w^EM.
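The efficiency criterion used throughout the paper (a weight vector is efficient if and only if a certain digraph built from it is strongly connected, Theorem 1) can also be checked numerically. The sketch below is illustrative rather than the paper's construction: the arc rule a_ij · w_j ≥ w_i follows the usual formulation of the Blanquero-Carrizosa-Conde condition in [4], and the perturbation positions and sample values of δ and γ are arbitrary choices, not the paper's case notation.

```python
import numpy as np

def principal_eigenvector(A, iters=10_000):
    """Power iteration for the Perron (principal right) eigenvector."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def efficiency_digraph(A, w):
    """Arc i -> j iff a_ij * w_j >= w_i (Blanquero-Carrizosa-Conde condition)."""
    n = len(w)
    return [[i != j and A[i, j] * w[j] >= w[i] for j in range(n)] for i in range(n)]

def strongly_connected(adj):
    """Strong connectedness: node 0 reaches everyone and everyone reaches node 0."""
    n = len(adj)
    def reach(start, edges):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in range(n):
                if edges[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    rev = [[adj[j][i] for j in range(n)] for i in range(n)]
    return len(reach(0, adj)) == n and len(reach(0, rev)) == n

# Consistent base matrix a_ij = x_i / x_j, then two reciprocal perturbations;
# this makes A a (hypothetical) double perturbed PCM.
x = np.array([1.0, 2.0, 3.0, 4.0])
A = np.outer(x, 1.0 / x)
delta, gamma = 2.5, 0.4
A[0, 1] *= delta; A[1, 0] /= delta
A[0, 2] *= gamma; A[2, 0] /= gamma

w = principal_eigenvector(A)
print(strongly_connected(efficiency_digraph(A, w)))  # expect True, by the main result
```

For a consistent matrix the eigenvector is proportional to x and every arc is present, so the digraph is trivially strongly connected, matching the known fact that consistent PCMs yield efficient weight vectors.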
PALEY-WIENER THEOREM FOR LINE BUNDLES OVER COMPACT SYMMETRIC SPACES
Vivian M. Ho and Gestur Ólafsson
Paley-Wiener type theorems describe the image of a given space of functions, often compactly supported functions, under an integral transform, usually a Fourier transform on a group or homogeneous space. Several authors have studied Paley-Wiener type theorems for Euclidean spaces, Riemannian symmetric spaces of compact or non-compact type as well as affine Riemannian symmetric spaces. In this article we prove a Paley-Wiener theorem for homogeneous line bundles over a compact symmetric space U/K. The Paley-Wiener theorem characterizes f with sufficiently small support in terms of holomorphic extendability and exponential growth of their Fourier transforms. An important tool is a generalization of Opdam's estimate for the hypergeometric functions for multiplicity functions that are not necessarily positive. This is done in an appendix.
DOI: 10.31390/gradschool_dissertations.2195
https://arxiv.org/pdf/1407.1489v2.pdf
arXiv:1407.1489
6 Jul 2014

Introduction

One of the fundamental questions in Fourier analysis and abstract harmonic analysis is to describe the image of a given space of functions under the Fourier transform. The classical Paley-Wiener theorem identifies the space of smooth compactly supported functions on R^n with certain classes of holomorphic functions on C^n of exponential growth via the usual Fourier transform on R^n. The exponent is determined by the size of the support, see [19, Thm. 7.3.1]. There are several generalizations of this theorem to settings where R^n is replaced by a Lie group or a homogeneous space. Of all of those generalizations, the Riemannian symmetric spaces are best understood, in particular those of the noncompact type due to the work of Gangolli [10] and Helgason [17, 18] for smooth functions, and by Eguchi [9] and Dadok [7] for distributions.
The case of semisimple Lie groups was due to Arthur [1], and the case of pseudo-Riemannian reductive symmetric spaces was done by van den Ban and Schlichtkrull [2]. More recently the Paley-Wiener type theorems have been extended to the case of the Heckman-Opdam hypergeometric Fourier transform and the compact settings [3, 4, 11, 22, 23, 21, 25, 26, 27], and even infinite dimensional Lie groups [28]. We refer to [8] for an overview and further discussion.

In the compact case every smooth function has compact support, so the Paley-Wiener theorem is a local statement and only valid for functions supported in sufficiently small balls around the base point. As an example, consider the torus T = {z ∈ C | |z| = 1} ≃ R/2πZ. Suppose f ∈ C^∞_r(T), i.e. f has support in exp(i[−r, r]) with 0 < r < π. View f as a periodic function on R by t ↦ f(e^{it}) with supp(f) ⊆ [−r, r] + 2πZ. The Fourier transform of f is n ↦ f̂(n) on Z, where

f̂(n) = (1/2π) ∫_{−π}^{π} f(e^{it}) e^{−int} dt = ∫_T f(z) z^{−n} dµ(z),

where µ is the normalized invariant measure on T. As 0 < r < π it follows that z^{−λ} (λ ∈ C) is well defined on the support of f, and hence f̂ has a holomorphic extension to C, which is easily seen to be of exponential growth r. On the other hand, if F is a holomorphic function of exponential type r with 0 < r < π, then, by the classical Paley-Wiener theorem, there exists a smooth function f_1, supported in [−r, r], such that its Fourier transform is F. Defining f(t) = ∑_{n∈Z} f_1(t + 2πn), we get a function on the torus, supported in exp(i[−r, r]), such that f̂ = F.

The articles [3, 4, 11, 25, 26, 27] generalize this to compact groups, compact symmetric spaces, and the Jacobi transform related to even multiplicity functions. Our work is based on the article [25], which deals with functions on compact symmetric spaces. Our aim is to generalize the results of [25] to line bundles over Riemannian symmetric spaces of the compact type. Let us recall the basic facts.
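The torus computation above can be tested numerically. The sketch below is an illustration, not part of the paper: the bump profile, the radius r, and the quadrature resolution are arbitrary choices. It evaluates f̂ at complex arguments by direct quadrature and checks the exponential bound |f̂(iy)| ≤ C e^{r|y|}.

```python
import numpy as np

r = 1.0  # support radius, 0 < r < pi

def f(t):
    """Smooth bump supported in [-r, r] (a standard mollifier profile)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < r
    out[inside] = np.exp(-1.0 / (1.0 - (t[inside] / r) ** 2))
    return out

def fhat(lam, N=20_000):
    """(1/2pi) * integral over [-pi, pi] of f(t) e^{-i lam t} dt, by a Riemann sum."""
    t = np.linspace(-np.pi, np.pi, N)
    dt = t[1] - t[0]
    return np.sum(f(t) * np.exp(-1j * lam * t)) * dt / (2 * np.pi)

# Growth along the imaginary axis: |fhat(iy)| should stay below C * e^{r|y|},
# since |e^{yt}| <= e^{r|y|} on the support of f.
C = 3 * abs(fhat(0))  # a crude constant for the check
for y in [1.0, 5.0, 10.0, 20.0]:
    assert abs(fhat(1j * y)) <= C * np.exp(r * y)
print("exponential type at most r =", r)
```

The bound holds with room to spare: the exact estimate |f̂(iy)| ≤ f̂(0) e^{r|y|} follows directly from the support condition, so the factor 3 only absorbs quadrature error.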
Let U be a simply connected compact simple Lie group and θ : U → U a nontrivial involutive homomorphism. Then K = U^θ := {u ∈ U | θ(u) = u} is connected and Y = U/K is simply connected. We fix the base point x_o = eK and write a · (bK) = (ab)K for the action of U on Y. It is not necessary to assume that Y is simply connected, but it makes several arguments simpler. In particular, the classification of the χ-spherical representations is simpler. Note that the spherical harmonic analysis on a general compact symmetric space can be reduced to the simply connected case (see [24, p. 4860] and [25]).

The existence of a homogeneous line bundle over Y is equivalent to the existence of a nontrivial character χ : K → T, which in turn is equivalent to dim Z(K) = 1, where Z(K) is the center of K. The line bundle is then given by the fiber product U ×_χ C =: L_χ, and the space of smooth sections is isomorphic to the space C^∞(Y; L_χ) of all smooth functions f : U → C such that

(0.1) f(uk) = χ(k)^{−1} f(u) for all k ∈ K and all u ∈ U.

Similarly, one defines the space of L^2-sections, L^2(Y; L_χ). There is a natural unitary representation of U on L^2(Y; L_χ) given by λ(g)f(u) = f(g^{−1}u). The representations that occur in the decomposition of this representation into irreducible representations of U are the χ-spherical representations (π_µ, V_µ); i.e., π_µ is irreducible and there exists a nonzero vector e_µ ∈ V_µ such that

(0.2) π_µ(k)e_µ = χ(k)e_µ for all k ∈ K.

Here µ denotes the highest weight of π_µ. Each one of those representations occurs with multiplicity one [30]. The replacement for the K-biinvariant functions studied in [25] are the χ left and right covariant functions

(0.3) f(k_1 u k_2) = χ(k_1 k_2)^{−1} f(u) for all k_1, k_2 ∈ K and all u ∈ U.

This space is denoted by C^∞(U//K; L_χ).
The replacement for the spherical functions on Y are the χ-spherical functions ψ_µ (also called spherical functions of type χ), given by

(0.4) ψ_µ(u) = ⟨e_µ, π_µ(u)e_µ⟩,

and the χ-spherical Fourier transform is given by

(0.5) f̂(µ) = ⟨f, ψ_µ⟩ = ∫_U f(u) ψ_µ(u^{−1}) du.

Let k = {X ∈ u | θ(X) = X} and q = {X ∈ u | θ(X) = −X}. Then u = q ⊕ k, k is the Lie algebra of K, and q can be identified with the tangent space of Y at x_o by

D_X f(x_o) = (d/dt)|_{t=0} f(Exp(tX)),

where Exp(Y) = exp(Y) · x_o. Then T Y = q ×_K U. Any positive K-invariant bilinear form on q defines a Riemannian metric on Y. As an example we can take the restriction of the negative of the Killing form. Denote by B_r(0) ⊂ q the closed ball with center 0 and radius r > 0. Let B_r(x_o) = Exp B_r(0). If r is small enough then Exp : B_r(0) → B_r(x_o) is a homeomorphism. Let C^∞_r(U//K; L_χ) be the space of χ-bicovariant sections supported in B_r(x_o). As in [25] we will characterize the χ-bicovariant functions f with small support in terms of holomorphic extendability and exponential growth of their χ-spherical Fourier transforms, with the exponent linked to the size of the support of f; see Theorem 4.1.

Let X = G/K_o be the noncompact dual symmetric space of Y, where K_o is the connected component of K containing the identity element (note that K is connected if U is assumed to be simply connected). Our proof relies on the fact that the spherical functions of type χ on U are connected to the spherical functions of type χ on G by holomorphic continuation. This is one of the main steps needed to analytically continue the χ-spherical Fourier transform. Notice also that the spherical functions of type χ on G are linked to the hypergeometric functions, whose multiplicity parameters, however, are not necessarily positive. We thus need to generalize Opdam's estimate (see [21]) for the hypergeometric functions to meet our situation. This is done in Appendix A.
In Section 1 we introduce basic notation and structure theory for Riemannian symmetric spaces. In Sections 2 and 3 we discuss harmonic analysis related to line bundles over compact symmetric spaces, including the theory of highest weights for χ-spherical representations, elementary spherical functions of type χ, and the χ-spherical Fourier transform. In Section 4 we define the relevant Paley-Wiener space and state the main theorem (Theorem 4.1), to prove which we need some tools of differential operators (Section 3.2) and hypergeometric functions (Appendix A). Sections 4 and 5 contain the main body of the proof.

1. Notation and Preliminaries

The material in this section is standard. We refer to [14] for references. We will often need [15, 16] too. We use the notation from the introduction mostly without reference.

1.1. Symmetric spaces. We recall some standard notations and facts related to symmetric spaces. A Riemannian symmetric space of the compact type can be realized as U/K, where U is a connected semisimple compact Lie group and K ⊆ U a closed symmetric subgroup. Thus, there exists a nontrivial involution θ : U → U such that

U^θ_o ⊆ K ⊆ U^θ = {u ∈ U | θ(u) = u}.

As mentioned in the introduction, we assume U is simply connected. Let u be the Lie algebra of U. Then θ induces an involution on u, also denoted by θ. Decompose u into ±1-eigenspaces of θ, u = k ⊕ q, where k = u^θ is the Lie algebra of K. We identify q, respectively q ×_K U, with T_{x_o}(Y), respectively T Y, as in the introduction. Let ⟨·, ·⟩ be the inner product on u defined by

⟨X, Y⟩ = −Tr(ad(X) ad(Y)).

This inner product is K-invariant and defines a Riemannian metric on Y. The inner product on u gives an inner product on the dual space u* in a canonical way, and by hermitian extension they induce U-invariant inner products on u_C = u ⊗_R C and the complex dual space u*_C, all denoted by the same symbol. We write ‖λ‖ = ⟨λ, λ⟩^{1/2} for the corresponding norm.
Similar notation will be used for other Lie algebras and vector spaces. The C-bilinear extension to u*_C will be denoted by λ, µ → (λ, µ). A maximal abelian subspace of q is called a Cartan subspace for Y (or (U, K)). All Cartan subspaces are K-conjugate, and their common dimension is called the rank of Y. From now on we fix a Cartan subspace b. Let n = dim b. We fix a Cartan subalgebra h of u containing b. Then h is θ-stable and

(1.1) h = (h ∩ k) ⊕ b.

Let B = exp(b) be the analytic subgroup of U with Lie algebra b. The subspace B · x_o ≃ B/(B ∩ K) is a Cartan subspace of Y. Note that B ∩ K is finite.

Since U is compact, it admits a finite dimensional faithful unitary representation. Thus U ⊂ U(p) ⊂ GL(p, C) for some p. As u ⊂ u(p) it follows that u ∩ iu = {0}, and hence u_C ≃ u ⊕ iu. Let U_C denote the analytic subgroup of GL(p, C) with Lie algebra u_C. Note that U_C is simply connected. Let s = iq and let g = k ⊕ s. Note that g is a Lie algebra. Denote by G the analytic subgroup of GL(p, C) (and U_C) with Lie algebra g. Then G = K exp s, and X = G/K is a Riemannian symmetric space of the noncompact type. It is called the Riemannian dual of Y. The Riemannian structure on X is again determined by the inner product ⟨X, Y⟩ = Tr(ad(X) ad(Y)) on s.

We will from now on view X and Y as real forms of the complex homogeneous space U_C/K_C, where K_C = exp k_C ⊂ U_C. Again x_o = eK_C is the common base point. The involution θ extends to a holomorphic involution on U_C, also denoted by θ. We also write θ for the restriction to G, and note that θ is a Cartan involution on G. Again, a maximal abelian subspace of s is called a Cartan subspace of X. The Cartan subspaces of X are conjugate under K. We fix from now on the Cartan subspace a = ib, where b is the fixed Cartan subspace for Y. Then A ≃ a is simply connected. We have a_C = b_C = a ⊕ b and A_C = exp(a_C) = AB. We denote by log : A_C → a_C the multivalued inverse of exp|_{a_C}. It is a single valued isomorphism log : A → a.

1.2.
Line bundles on hermitian symmetric spaces. We will from now on assume that U is simple and that there exist nontrivial line bundles over Y, although some of our statements are true in general. As mentioned in the introduction this happens, as we are assuming that U is simple, if and only if dim Z(K) = 1. Let z denote the center of k and k_1 = [k, k]. Then dim z = 1 and k = z ⊕ k_1. Let K_1 denote the analytic subgroup of K (and U) with Lie algebra k_1. Then K_1 is closed and K = Z(K)_o K_1. The spaces X and Y are hermitian symmetric spaces of the noncompact and compact type respectively. They are complex homogeneous spaces, where the complex structure is given by the adjoint action of a central element in k with eigenvalues 0 and ±i. Up to coverings, the irreducible spaces with dim z = 1 are given in the following table (cf. [14, p. 516, 518]). Here n = dim b = dim a is the rank of X and Y, and d = dim_R Y = dim_R X. The conditions listed in the last column are given to prevent coincidence between different classes due to lower dimensional isomorphisms.

The Hermitian symmetric spaces

class | G | U | K | n | d | conditions
1 AIII | SU(p, q) | SU(p + q) | S(U(p) × U(q)) | p | 2pq | q ≥ p ≥ 1
2 BDI | SO_o(p, q) | SO(p + q) | SO(p) × SO(q) | p | pq | p = 2, q ≥ 5
3 DIII | SO*(2j) | SO(2j) | U(j) | [j/2] | j(j − 1) | j ≥ 5
4 CI | Sp(j, R) | Sp(j) | U(j) | j | j(j + 1) | j ≥ 2
5 EIII | e_6(−14) | e_6(−78) | so(10) + R | 2 | 32 |
6 EVII | e_7(−25) | e_7(−133) | e_6 + R | 3 | 54 |

In Case 1, Y = SU(p + q)/S(U(p) × U(q)) is the complex Grassmann manifold of p-dimensional subspaces in C^{p+q}. In Case 2, Y = SO(p + q)/(SO(p) × SO(q)) is a covering of SO(p + q)/S(O(p) × O(q)), the real Grassmann manifold of p-dimensional subspaces in R^{p+q}. In Case 5, Y is the Grassmann manifold over the octonions. Let χ : K → T be a character. Then the homogeneous line bundle L_χ → Y is defined as L_χ = U ×_χ C_χ = (U × C_χ)/K, where C_χ denotes the complex numbers with the action k · z = χ(k)^{−1}z, and K acts on U × C_χ by (u, z) · k = (uk, k · z). We recall the parametrization of the group of characters given in [30].
Fix Z ∈ z as in [30, p. 283, (3.1)]. Thus exp(tZ) ∈ Z(K) for all t ∈ R, and exp(tZ) ∈ K_1 if and only if t ∈ 2πZ.

Proposition 1.2 (H. Schlichtkrull). Let l ∈ Z. Define χ_l : K → T by χ_l(exp(tZ)k) = e^{ilt}, t ∈ R and k ∈ K_1. Then χ_l is a well defined character on K. If χ is a character on K, then there is a unique l ∈ Z such that χ = χ_l.

Proof. See Proposition 3.4 in [30] and the comment following it.

Since all one dimensional representations χ of K have this form, we parametrize χ = χ_l, l ∈ Z, from now on. If l = 0, then χ_0 is trivial.

1.3. Root structures and the Weyl group. For α ∈ h*_C let u_{C,α} = {X ∈ u_C | (∀H ∈ h_C) [H, X] = α(H)X}. If α ≠ 0 and u_{C,α} ≠ {0}, then α is said to be a root. We write ∆ = ∆(u, h) for the set of roots. Similarly we define u_{C,β} for β ∈ b*_C and write Σ = Σ(u, b) for the set of (restricted) roots. Note that ∆ ⊂ ih* and Σ ⊂ ib*. We have Σ = ∆|_b \ {0}, and for β ∈ Σ, m_β = dim_C u_{C,β} = #{α ∈ ∆ | α|_b = β}. The numbers m_β are called multiplicities. Also note that u_{C,α} ∩ u = {0} for all α ∈ ∆ ∪ Σ. Similarly, we can define the roots of a in g, and we have Σ = Σ(g, a). We have u_{C,β} = g_β ⊕ ig_β and m_β = dim_R g_β for all β ∈ Σ. Working with roots it is therefore more convenient to use g and a rather than the pair u and b. An element X ∈ a is called regular if α(X) ≠ 0 for all α ∈ Σ. The subset a_reg ⊂ a is dense and is a finite union of open cones called Weyl chambers. We fix a Weyl chamber a^+ and let Σ^+ := {α ∈ Σ | (∀H ∈ a^+) α(H) > 0}. Then we have a^+ = {H ∈ a | (∀α ∈ Σ^+) α(H) > 0}. Let b^+ = ia^+ and A^+ = exp a^+. We choose a positive system ∆^+ in ∆ such that if α ∈ ∆^+ and α|_a ≠ 0, then α|_a ∈ Σ^+. Let ∆_0 = {α ∈ ∆ | α|_a = 0} and ∆^+_0 = ∆_0 ∩ ∆^+. Let

(1.2) ρ = (1/2) Σ_{α∈Σ^+} m_α α and ρ_h = (1/2) Σ_{β∈∆^+} β.

Let ρ_0 = Σ_{α∈∆^+_0} α. Then ρ_0 ∈ i(h ∩ k)* and

(1.3) ρ_h|_a = ρ and ρ_h = ρ + ρ_0.

If α ∈ Σ, then it can happen that either α/2 ∈ Σ or 2α ∈ Σ, but not both.
A root α ∈ Σ is said to be unmultipliable if 2α ∉ Σ, and indivisible if α/2 ∉ Σ. Denote by

(1.4) Σ_* = {α ∈ Σ | 2α ∉ Σ} and Σ_i = {α ∈ Σ | α/2 ∉ Σ}.

Both Σ_* and Σ_i are reduced root systems. Set Σ^+_* = Σ_* ∩ Σ^+. Note that Y is irreducible if and only if Σ_i is irreducible. Let Π = {β_j}_{j=1}^n be the fundamental system of simple roots in Σ^+_*. For any λ ∈ a*_C and α ∈ a* with α ≠ 0, define λ_α := ⟨λ, α⟩/⟨α, α⟩. We will use similar notation for ih* without comment. Note that 2λ_α = λ_{α/2}. Define ω_j ∈ a*, j = 1, ..., n, by

(1.5) (ω_j)_{β_i} = δ_{i,j}, 1 ≤ i, j ≤ n.

The weights ω_j are the class 1 fundamental weights for (u, k) and (g, k). We let

(1.6) Λ^+_0 = {λ ∈ a* | (∀α ∈ Σ^+) λ_α ∈ Z^+} = Σ_{j=1}^n Z^+ ω_j = {k_1ω_1 + ... + k_nω_n | k_j ∈ Z, k_j ≥ 0}.

For α ∈ Σ define the reflection r_α : a* → a* by r_α(λ) := λ − 2λ_α α for all λ ∈ a*. The group W = W(Σ) generated by the r_α, α ∈ Σ, is finite and is called the Weyl group associated to Σ. Note that W = W(Σ_*) = W(Σ_i) with the obvious notation. Furthermore W ≅ N_K(a)/Z_K(a), where N_K(a) is the normalizer of a in K and Z_K(a) is the centralizer of a in K. The W-action extends to a by duality, then to a_C and a*_C by C-linearity, and to A and A_C by w · exp(H) = exp(w(H)). This action can be written as w · b = kbk^{−1}, where k ∈ N_K(a) is such that Ad(k)|_a = w. The group W then acts on functions f on any of these spaces by (w · f)(x) := f(w^{−1} · x), w ∈ W. We recall that W · a^+ = a_reg. We now describe the root structures for the special case of irreducible Hermitian symmetric spaces in more detail. Fix an orthogonal basis {ε_1, ..., ε_n} for a* = ib*. Then we have the following description of the root system Σ, see Moore [20, Theorem 5.2], [14, p. 528, 532]:

Theorem 1.3. There are two possibilities for the root system Σ^+:
Case I: Σ^+ = {ε_j ± ε_i (1 ≤ i < j ≤ n), 2ε_j (1 ≤ j ≤ n)};
Case II: Σ^+ = {ε_j (1 ≤ j ≤ n), ε_j ± ε_i (1 ≤ i < j ≤ n), 2ε_j (1 ≤ j ≤ n)}.
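To make the two cases and the notation (1.4) concrete, here is the bookkeeping for Case II; this is a routine check of ours, not taken from the text:

```latex
\Sigma^{+} = \{\varepsilon_j\} \cup \{\varepsilon_j \pm \varepsilon_i : i<j\} \cup \{2\varepsilon_j\}
  \quad (\text{type } BC_n),\\
\Sigma_{*} \cap \Sigma^{+} = \{\varepsilon_j \pm \varepsilon_i\} \cup \{2\varepsilon_j\}
  \quad (\text{each } \varepsilon_j \text{ is removed, since } 2\varepsilon_j \in \Sigma;\ \text{type } C_n),\\
\Sigma_{i} \cap \Sigma^{+} = \{\varepsilon_j\} \cup \{\varepsilon_j \pm \varepsilon_i\}
  \quad (\text{each } 2\varepsilon_j \text{ is removed, since } \varepsilon_j \in \Sigma;\ \text{type } B_n).
```

In both cases W = W(Σ_*) = W(Σ_i) acts by permutations and sign changes of ε_1, ..., ε_n.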
In particular Σ = O_s ∪ O_m ∪ O_l is a disjoint union of two, respectively three, orbits, where O_s = ∅ in Case I. Let O^+_s = O_s ∩ Σ^+. We write m = (m_s, m_m, m_l), where m_s = m_{ε_j}, m_m = m_{ε_j ± ε_i} and m_l = m_{2ε_j}.

Hermitian symmetric spaces U/K: Root structures and multiplicities

U | K | Σ | (m_s, m_m, m_l)
1 SU(p + q) | S(U_p × U_q) | case I if p = q; case II if q > p | (0, 2, 1); (2(q − p), 2, 1)
2 SO(2 + q) | SO(2) × SO(q) | case I | (0, q − 2, 1)
3 SO(2j) | U(j) | case I if j is even; case II if j is odd | (0, 4, 1); (4, 4, 1)
4 Sp(j) | U(j) | case I | (0, 1, 1)
5 e_6(−78) | so(10) + R | case II | (8, 6, 1)
6 e_7(−133) | e_6 + R | case I | (0, 8, 1)

1.4. Basic structure theory. Let n = ⊕_{α∈Σ^+} g_α. Then n is a nilpotent Lie algebra and g = k ⊕ a ⊕ n. Let N = exp n be the analytic subgroup of G with Lie algebra n. Then N is nilpotent, simply connected and closed. The group G is analytically diffeomorphic to K × A × N via the multiplication map K × A × N ∋ (k, a, n) → kan ∈ G. The inverse is denoted by x → (κ(x), a(x), n(x)). We write H(x) = log(a(x)). This is the Iwasawa decomposition of G. Furthermore K_C A_C N_C is open and dense in G_C. For the compact group U we have the Cartan decomposition U = KBK, and the corresponding integral formula for the Lie group U:

Lemma 1.4. Let δ(exp H) = Π_{α∈Σ^+} |sin((1/i)α(H))|^{m_α} for H ∈ b. Then there exists a constant c > 0 such that for all f ∈ L¹(Y),

∫_Y f(y) dy = c ∫_K ∫_B f(ka · x_o)δ(a) da dk.

2. Fourier Analysis on Y

In this section we recall the classification of χ_l-spherical representations of U (and G) due to H. Schlichtkrull [30]. We then recall the Plancherel formula for L²(Y; L_l), where L_l = L_{χ_l}. Next, we discuss the χ_l-spherical functions and the decomposition of L²(Y; L_l).

2.1. The χ_l-spherical representations. Let Λ^+(U) ⊂ ih* be the semi-lattice of highest weights of irreducible representations of U. As we are assuming that U is simply connected we have [15, p.
535 and p. 538]

Λ^+(U) = {λ ∈ h*_C | (∀α ∈ ∆^+) 2λ_α ∈ Z^+}.

We have that if λ ∈ Λ^+_0(U), then λ|_{h∩k} = 0, and the set Λ^+_0 is exactly the set introduced in (1.6). For λ ∈ Λ^+(U) choose an irreducible unitary representation (π_λ, V_λ) of U. For l ∈ Z let

(2.1) V^{χ_l}_λ = V^l_λ = {v ∈ V_λ | (∀k ∈ K) π_λ(k)v = χ_l(k)v}.

Note that V^0_λ = V^K_λ = {v ∈ V_λ | (∀k ∈ K) π_λ(k)v = v} is the space of K-fixed vectors. The representation (π_λ, V_λ) is said to be χ_l-spherical if V^l_λ ≠ {0}, and spherical if it is χ_0-spherical. If (π_λ, V_λ) is χ_l-spherical, then dim V^l_λ = 1. Denote by Λ^+_l(U) the set of highest weights of χ_l-spherical representations of U. Let

(2.2) Λ^+_l := {µ ∈ a* | µ = λ|_a, λ ∈ Λ^+_l(U)}.

According to [30], we can decompose h ∩ k as h ∩ k = (h ∩ k_1) ⊕ RX, where X is defined as in [30, p. 285, (4.4)] so that (1) e^{tX} ∈ K_1 if and only if t ∈ 2πiZ, and (2) Z − X ∈ k_1, where Z is the same as in Proposition 1.2 (see Lemma 4.3 in [30]). Note that X = 0 in Case I. For λ ∈ ih* we write accordingly λ = (µ, λ_1, µ_0), where µ = λ|_a, λ_1 = λ|_{k_1∩h} and µ_0 = λ(iX). If X = 0, then we write µ_0 = 0. When χ_l is fixed for some l ∈ Z, µ_0 is then fixed; note that µ_0 is independent of λ. The condition on λ for one-dimensional K-types to occur was given in [30]: (1) λ|_{h∩k_1} = 0, and (2) λ has to satisfy a certain integrality condition (see below). Hence λ is uniquely determined by its restriction µ. Thus there is a bijective correspondence Λ^+_l(U) ≅ Λ^+_l via λ = (µ, 0, µ_0) → µ. For convenience, we sometimes write (µ, 0, µ_0) = µ + µ_0. We identify a*_C with C^n by µ = (µ_1, ..., µ_n) with µ_j = µ_{ε_j}. Recall that a* = ib*.

Theorem 2.1 (H. Schlichtkrull). Let U be a compact simply connected semisimple Lie group and K the fixed point group of an involution of U. Let l ∈ Z.
The set Λ^+_l of highest restricted weights of irreducible χ_l-spherical representations of U is given by

(2.3) Λ^+_l = {µ ∈ a* | µ_j − µ_i ∈ 2Z^+ (1 ≤ i < j ≤ n); µ_0 = 0 (Case I), µ_0 = l (Case II); µ_1 ∈ |l| + 2Z^+}.

Proof. See Proposition 7.1 and Theorem 7.2 in [30].

Remark 2.2. It follows from the description of Λ^+_l that in Case I, for some l ∈ Z, a χ_l-spherical representation π_µ is also spherical if l is even. In fact it is shown in [30, Thm. 7.2] that if l is even, then π_µ must also contain the character χ_0.

A simpler description of the set Λ^+_l is given by the following proposition. For that let

(2.4) ρ_s = (1/2) Σ_{α∈O^+_s} α = { 0 in Case I; (1/2)(1, ..., 1) in Case II }.

Proposition 2.3. For l ∈ Z we have Λ^+_l = Λ^+_0 + 2|l|ρ_s.

Proof. Let µ ∈ Λ^+_l. We want to show that (µ − 2|l|ρ_s)_α = µ_α − 2|l|(ρ_s)_α ∈ Z^+ for all α ∈ Σ^+. Let r = 0 in Case I and r = |l| in Case II. We have

(2.5) µ_{ε_j} − 2r(ρ_s)_{ε_j} = µ_j − r ∈ 2Z^+ and µ_{ε_j±ε_i} − 2r(ρ_s)_{ε_j±ε_i} = (1/2)(µ_j ± µ_i) ∈ Z^+

according to (2.3). Finally, again by (2.3), we have

(2.6) µ_{2ε_j} − 2r(ρ_s)_{2ε_j} = (1/2)(µ_j − r) ∈ Z^+.

Thus µ − 2|l|ρ_s ∈ Λ^+_0. On the other hand, if µ′ ∈ Λ^+_0, define µ = µ′ + 2|l|ρ_s. Then (2.5) and (2.6) together with (2.3) show that µ ∈ Λ^+_l.

Recall that the fundamental spherical weights ω_j are defined by (1.5). Then Λ^+_0 = Z^+ω_1 ⊕ ... ⊕ Z^+ω_n. Hence

(2.7) (Z^+)^n ≃ Λ^+_l, (k_1, ..., k_n) → k_1ω_1 + ... + k_nω_n + 2|l|ρ_s.

2.2. The Fourier transform. In this section we recall the basic facts about Fourier analysis on L²(Y; L_l). Let λ = (µ, 0, µ_0) ∈ Λ^+_l(U). As l, and hence µ_0, will be fixed most of the time, the only variable is the first coordinate µ (and sometimes (µ, l)); we therefore simply write µ instead of λ. Let L²(Y; L_l) be the space of L²-sections of the line bundle L_l, and L²(U//K; L_l) the space of elements f in L²(Y; L_l) such that f(k_1uk_2) = χ_l(k_1k_2)^{−1}f(u) for all k_1, k_2 ∈ K and u ∈ U.
Finally, C^∞(U//K; L_l) is the space of smooth elements in L²(U//K; L_l). Let d(µ) := dim V_µ. Then µ → d(µ) is a polynomial map. Fix e_{µ,l} ∈ V^l_µ of length one. We will mostly write e_µ for e_{µ,l}, as l will be fixed. We normalize the invariant measure on all compact groups so that the total measure is one. Define

(2.8) P_{µ,l}(v) := ∫_K χ_l(k)^{−1}π_µ(k)v dk.

Then P_{µ,l} is the orthogonal projection V_µ → V^l_µ. In particular, P_{µ,l}(v) = ⟨v, e_µ⟩e_µ. If f ∈ L²(Y; L_l), then π_µ(f) = π_µ(f)P_{µ,l}, where, as usual, π_µ(f) = ∫_U f(u)π_µ(u) du. It is therefore natural to define the vector valued Fourier transform of f to be

(2.9) f̂(µ, l) = f̂(µ) = π_µ(f)e_µ.

Note that Tr(π_µ(f)) = ⟨π_µ(f)e_µ, e_µ⟩ = ∫_U f(u)⟨π_µ(u)e_µ, e_µ⟩ du = ⟨f, ψ_{µ,l}⟩. The function

(2.10) ψ_{µ,l}(u) = ⟨e_µ, π_µ(u)e_µ⟩, u ∈ U,

is the (µ, l)-, or χ_l-, spherical function on U, which we will discuss in more detail in the next section. Furthermore, if f ∈ L²(U//K; L_l), then π_µ(f)e_µ is again a scalar multiple of e_µ, and so π_µ(f)e_µ = ⟨π_µ(f)e_µ, e_µ⟩e_µ = ⟨f, ψ_{µ,l}⟩e_µ. The χ_l-spherical function is the unique element in C^∞(U//K; L_l) such that ψ_{µ,l}(e) = 1. Furthermore, {d(µ)^{1/2}ψ_{µ,l}} is an orthogonal basis for L²(U//K; L_l). Note that

\overline{ψ_{µ,l}(u)} = ⟨π_µ(u)e_µ, e_µ⟩ = ⟨e_µ, π_µ(u^{−1})e_µ⟩ = ψ_{µ,l}(u^{−1}).

Let

(2.11) ℓ²_d(Λ^+_l) = {(a(µ))_{µ∈Λ^+_l} | Σ_{µ∈Λ^+_l} d(µ)|a(µ)|² < ∞}.

Then ℓ²_d(Λ^+_l) is a Hilbert space with inner product (a, b) = Σ d(µ)a(µ)\overline{b(µ)}. The χ_l-spherical Fourier transform S_l : L²(U//K; L_l) → ℓ²_d(Λ^+_l) defined by

(2.12) S_l(f)(µ) = ∫_U f(u)ψ_{µ,l}(u^{−1}) du = ⟨f, ψ_{µ,l}⟩

is a unitary isomorphism. We collect the main facts in the following theorem:

Theorem 2.4 (The Plancherel Theorem). Assume that Y is simply connected. For µ ∈ Λ^+_l(U) and v ∈ V_µ let f_{µ,v}(x) = ⟨v, π_µ(x)e_µ⟩, v ∈ V_µ, and L²_µ(Y; L_l) = {f_{µ,v} | v ∈ V_µ}.
Then the following holds true:

(1) If f ∈ L²(Y; L_l), then ‖f‖² = Σ_{µ∈Λ^+_l} d(µ)‖f̂(µ)‖²_{V_µ} and f(x) = Σ_{µ∈Λ^+_l} d(µ)⟨f̂(µ), π_µ(x)e_µ⟩, where the convergence is in the L²-norm topology. The convergence is uniform if f is smooth.
(2) L²(Y; L_l) ≃ ⊕_{µ∈Λ^+_l} L²_µ(Y; L_l).
(3) If f ∈ L²(U//K; L_l), then ‖f‖² = Σ_{µ∈Λ^+_l} d(µ)|S_l(f)(µ)|² and f = Σ_{µ∈Λ^+_l} d(µ)S_l(f)(µ)ψ_{µ,l}, where the sum is understood in the L²-norm sense, and uniformly if f is smooth.
(4) L²(U//K; L_l) ≃ ℓ²_d(Λ^+_l).

3. The χ_l-Spherical Functions

The χ_l-spherical functions were already introduced in the last section. We now discuss them in more detail and present the results needed for the proof of the Paley-Wiener theorem in Section 4. The standard reference for the material in this section is [13]; see also [31]. Our assumptions are the same as in the last section. In particular, Y = U/K is an irreducible Hermitian symmetric space with U simple and simply connected.

3.1. The χ_l-spherical functions on G. Let us start by recalling the definition of a χ_l-spherical function.

Definition 3.1. Let H be a Lie group and L ⊆ H a compact subgroup. A nonzero continuous function ϕ on H is called a spherical function of type χ_l if

(3.1) ∫_L ϕ(akb)χ_l(k) dk = ϕ(a)ϕ(b), a, b ∈ H.

We will mostly say that ϕ is a spherical function of type χ_l, or a χ_l-spherical function.

Lemma 3.2. Let ϕ be a spherical function of type χ_l. Then ϕ(am) = ϕ(ma) = χ_l(m)^{−1}ϕ(a) for all a ∈ H and m ∈ L. Furthermore, ϕ(e) = 1.

Proof. Let b ∈ H be such that ϕ(b) ≠ 0. Let a ∈ H and m ∈ L. Then

ϕ(am) = (1/ϕ(b)) ∫_L ϕ(amkb)χ_l(k) dk = (1/ϕ(b)) ∫_L ϕ(akb)χ_l(m^{−1}k) dk = χ_l(m)^{−1}ϕ(a).

One can show that ϕ(ma) = χ_l(m)^{−1}ϕ(a) in the same way, by applying (3.1) to (1/ϕ(b)) ∫_L ϕ(bka)χ_l(km^{−1}) dk. That ϕ(e) = 1 follows from (3.1) by taking b = e.

As G_C = U_C is simply connected, it follows that π_µ extends to an irreducible holomorphic representation of G_C, which we also denote by π_µ. Thus χ_l extends to a homomorphism of K_C, also denoted by χ_l.

Lemma 3.3. Let µ ∈ Λ^+_l. Then ψ_{µ,l} is a spherical function of type χ_l. It extends to a holomorphic function on U_C. The extension, denoted ψ̃_{µ,l}, is given by ψ̃_{µ,l}(g) = ⟨π_µ(g^{−1})e_µ, e_µ⟩, g ∈ U_C.
Furthermore, the holomorphic extension satisfies ψ̃_{µ,l}(k_1gk_2) = χ_l(k_1k_2)^{−1}ψ̃_{µ,l}(g) for all k_1, k_2 ∈ K_C and g ∈ U_C.

Proof. This is standard, but let us show that ψ_{µ,l} satisfies (3.1). For that we note that ∫_K χ_l(k)^{−1}π_µ(k)π_µ(b)e_µ dk = ⟨π_µ(b)e_µ, e_µ⟩e_µ for b ∈ U. Hence, for a, b ∈ U,

∫_K ψ_{µ,l}(akb)χ_l(k) dk = ∫_K ⟨π_µ(a^{−1})e_µ, χ_l(k)^{−1}π_µ(k)π_µ(b)e_µ⟩ dk = ⟨e_µ, π_µ(a)e_µ⟩⟨π_µ(b)e_µ, e_µ⟩ = ψ_{µ,l}(a)ψ_{µ,l}(b).

For λ ∈ a*_C define ϕ_{λ,l} : G → C by

(3.2) ϕ_{λ,l}(g) = ∫_K a(g^{−1}k)^{λ−ρ}χ_l(κ(g^{−1}k)k^{−1}) dk.

When l = 0, ϕ_λ(g) = ϕ_{λ,0}(g) is the Harish-Chandra spherical function on G. For the following theorem see [

Theorem 3.5. The function ϕ_{λ,l} is a spherical function of type χ_l on G. If ψ is a spherical function of type χ_l, then there exists λ ∈ a*_C such that ψ = ϕ_{λ,l}. Furthermore the following holds true:

(1) ϕ_{λ,l}(g) is real analytic in g ∈ G and holomorphic in λ ∈ b*_C.
(2) ϕ_{λ,l} = ϕ_{µ,l} if and only if there is a w ∈ W such that λ = wµ.
(3) ϕ_{λ,l}(a) = ϕ_{λ,−l}(a) = ϕ_{−λ,−l}(a) = ϕ_{−λ,l}(a) for all a ∈ A.
(4) ϕ_{λ,l}(g) = ϕ_{−λ,−l}(g^{−1}) = ϕ_{λ,−l}(g^{−1}) = ϕ_{−λ,l}(g) for all g ∈ G.

The following lemma is the basis for the holomorphic extension of S_l(f) for f with sufficiently small support.

Lemma 3.6. Let µ ∈ Λ^+_l. Then ϕ_{µ+ρ,l} extends to a holomorphic function on G_C, denoted again by ϕ_{µ+ρ,l}, and ψ̃_{µ,l} = ϕ_{µ+ρ,l}.

Proof. As G is totally real in G_C, it is enough to show that ψ̃_{µ,l}|_G = ϕ_{µ+ρ,l}. Let u_µ ∈ V_µ be a nonzero highest weight vector. Then V_µ is generated by π_µ(K_C)u_µ. As K_C A_C N_C is dense, it follows that ⟨u_µ, e_µ⟩ ≠ 0. Choose u_µ so that ⟨u_µ, e_µ⟩ = 1. Then P_{µ,l}u_µ = e_µ. Thus for g ∈ G:

ψ̃_{µ,l}(g) = ⟨π_µ(g^{−1})e_µ, e_µ⟩ = ∫_K χ_l(k)^{−1}⟨π_µ(g^{−1}k)u_µ, e_µ⟩ dk = ∫_K a(g^{−1}k)^µ χ_l(κ(g^{−1}k)k^{−1}) dk = ϕ_{µ+ρ,l}(g).

3.2. Holomorphic extension and estimates for ϕ_{λ,l}. We refer to Chapter 5 in [13] and Section 2 in [31] for a detailed discussion of invariant differential operators on L_l.
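In the rank one case n = 1, with W = {±1}, this algebra of invariant differential operators is as small as possible. The following sketch is ours and only uses the notation recalled in the next paragraph; it is meant as orientation, not as a statement from the text:

```latex
% Rank one sketch (ours): W = \{\pm 1\} acts on a^*_{\mathbb{C}} \cong \mathbb{C}
% by \lambda \mapsto -\lambda, so the invariant polynomials are the even ones:
S(\mathfrak{a})^{W} = \mathbb{C}[\lambda^{2}].
% Hence the algebra of invariant differential operators on L_l is a polynomial
% algebra in a single second order generator D_0, which may be taken so that
\gamma_{l}(D_{0})(\lambda) = (\lambda,\lambda) - (\rho(l),\rho(l)),
% matching the eigenvalue of the radial Laplace operator recalled below.
```

In higher rank, S(a)^W has n algebraically independent generators, and the picture is analogous.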
Here we just recall what we need. Let D_l(X) ≃ D_l(Y) be the algebra of invariant differential operators D : C^∞(X; L_l) → C^∞(X; L_l). Let U(g)^K be the Ad(K)-invariant elements in the universal enveloping algebra of g_C = u_C. Then there exists a surjective map u → D_u of U(g)^K onto D_l(X). We denote by γ_l : U(g)^K → S(a)^W the Harish-Chandra homomorphism. Here S(a)^W is the commutative algebra of W-invariant polynomials on a*_C. Then γ_l induces an algebra isomorphism D_l(X) ≃ S(a)^W, see [13, Thm. 5]. Define a homomorphism ζ_{λ,l} : D_l(X) → C by ζ_{λ,l}(D) = γ_l(D)(λ). We also write ζ_l(D; λ) for ζ_{λ,l}(D). We then have:

Lemma 3.7 (Theorem 3.2, [31]). Let D ∈ D_l(X) and λ ∈ a*_C. Then

(3.3) Dϕ_{λ,l} = ζ_l(D; λ)ϕ_{λ,l}.

If ϕ ∈ C^∞(U//K; L_l) is a solution of the system of differential equations (3.3) and ϕ(e) = 1, then ϕ = ϕ_{λ,l}.

We will also write m(l) for m^+(l). Note that the radial part of the Laplace-Beltrami operator on X (acting on χ_l-bicovariant functions) is exactly the operator

L(l) = L(m(l)) := Σ_{j=1}^n ∂²_{ξ_j} + Σ_{α∈Σ^+} m(l)_α [(1 + e^{−2α})/(1 − e^{−2α})] ∂_α

associated with the root system Σ and the multiplicity m(l).¹ This operator is actually defined on A^reg_C = exp(a^reg_C). We write ρ(l) = ρ(m(l)). It was shown that ζ_l(L(l); λ) = (λ, λ) − (ρ(l), ρ(l)).

Theorem 3.8. Let Ω = {X ∈ b | (∀α ∈ Σ) |α(X)| < π}. The function A × a*_C → C, (a, λ) → ϕ_{λ,l}(a), extends to a holomorphic function (b, λ) → ϕ_{λ,l}(b) on A exp(Ω) × a*_C. The extension satisfies the symmetry conditions in Theorem 3.5. Furthermore, there exists a constant C > 0 such that for X ∈ Ω we have |ϕ_{λ,l}(exp X)| ≤ Ce^{‖X‖‖Re(λ)‖}.

Proof. This is done in the Appendix. The estimate for ϕ_{λ,l} follows from Remark A.4 and Proposition A.6.

Let ε > 0 and Ω_ε = {X ∈ b | (∀α ∈ Σ) |α(X)| ≤ π − ε}. Then Ω = ∪_{ε>0} Ω_ε. Let Y_j ∈ a be such that ε_i(Y_j) = δ_{i,j}. For Y ∈ a_C write Y = Σ_{j=1}^n y_j Y_j = Σ_{j=1}^n ε_j(Y)Y_j.

¹ Our multiplicity notation is different from the one used by Heckman and Opdam.
The root system R they use is related to our 2Σ, and their multiplicity function is k_{2α} = (1/2)m_α. We have

a + Ω_ε = {Y ∈ a_C | |Im y_j| ≤ (1/2)(π − ε)}.

4. The Paley-Wiener Theorem

Let ‖·‖ be the norm on u with respect to the inner product ⟨·, ·⟩ defined earlier using the Cartan-Killing form. For r > 0, let B_r(0) = {X ∈ q : ‖X‖ < r} be the open ball in q centered at 0 with radius r. Let B_r(x_o) = Exp(B_r(0)). We will fix R > 0 such that R is smaller than the injectivity radius and B_R(0) ∩ b ⊂ Ω. In particular, Exp : B_r(0) → B_r(x_o) is a diffeomorphism and ϕ_{λ,l} is well defined on the closure of B_r(x_o) for all 0 < r < R. We therefore let B̄_r(0) be the closed ball in q with radius r and B̄_r(x_o) = Exp(B̄_r(0)) the closure of B_r(x_o). Finally, we let C^∞_r(U//K; L_l) be the space of functions in C^∞(U//K; L_l) with support in B̄_r(x_o). As R is smaller than the injectivity radius, it follows that for 0 < r < R we have

(4.1) C^∞_r(B)^W ≅ C^∞_r(B · x_o)^W and C^∞_r(U//K; L_l) ≅ η_l · C^∞_r(B)^W.

For r > 0 denote by PW_r(b*_C) the space of holomorphic functions on b*_C of exponential type r. Thus a holomorphic function F on b*_C is in PW_r(b*_C) if and only if for every k ∈ N there is a constant C_k such that

|F(λ)| ≤ C_k(1 + ‖λ‖)^{−k}e^{r‖Re λ‖} for all λ ∈ b*_C.

There are two natural actions of the Weyl group. The first one is the usual conjugation of the variable, and the second is the ρ-shifted affine action R(w)F(λ) = F(w^{−1}(λ + ρ) − ρ). Let

PW_r(b*_C)^{R(W)} = {F ∈ PW_r(b*_C) | (∀w ∈ W) R(w)F = F}.

We note that this is the same Paley-Wiener space as in [25]. Similarly one defines the space PW_r(b*_C)^W, where we now use the standard action of the Weyl group. We note that those spaces are isomorphic via the map F → Ψ(F) : λ → F(λ − ρ), with inverse G → Ψ^{−1}(G) : λ → G(λ + ρ). To see that Ψ(F) is W-invariant, a simple calculation gives:

Ψ(F)(wλ) = F(wλ − ρ) = F(w(λ − ρ + ρ) − ρ) = F(λ − ρ) = Ψ(F)(λ).
Similarly for Ψ^{−1}(G). We will use this isomorphism in the following to connect with results from [28] on restriction of Paley-Wiener spaces to subspaces, without further comment that one sometimes has to use the above isomorphism.

Theorem 4.1 (The Paley-Wiener theorem). The χ_l-spherical Fourier transform is a linear isomorphism

(4.2) S_l : C^∞_r(U//K; L_l) ≅ PW_r(b*_C)^{R(W)}

for each 0 < r < R. Precisely,

(1) if f ∈ C^∞_r(U//K; L_l), then S_l(f) : Λ^+_l → C extends to a function in PW_r(b*_C)^{R(W)};
(2) if ϕ ∈ PW_r(b*_C)^{R(W)}, then there exists a unique f ∈ C^∞_r(U//K; L_l) such that S_l(f)(µ) = ϕ(µ) for all µ ∈ Λ^+_l;
(3) the functions in PW_r(b*_C)^{R(W)} are uniquely determined by their values on Λ^+_l.

Corollary 4.2. Let l, k ∈ Z and 0 < r < R. Then S^{−1}_k ∘ S_l : C^∞_r(U//K; L_l) ≅ C^∞_r(U//K; L_k) is a linear isomorphism.

Proof. This follows from Theorem 4.1 applied to l and k.

Remark 4.3. 1) Note that a priori one obtains different constants R in (1), (2) and (3), and then takes the minimum of those constants for the map (4.2) to be a bijection. 2) In [25] the authors used for Ω the domain where |α(X)| ≤ π/2. This is because [25] used the Opdam estimates [21], which were shown for this domain. 3) We note that C^∞_r(Y)^K = C^∞_r(U//K; L_0), so this case is also covered in the corollary.

The hard part of Theorem 4.1 is (2), so we start with (1) and (3) and leave (2) for the next section. The proof follows [25] closely.

Proof. (Part (1)) Let µ ∈ Λ^+_l and f ∈ C^∞_r(U//K; L_l). It is easy to see that the function u → f(u)ψ_{µ,l}(u^{−1}) is K-biinvariant. Using Lemma 1.4 and using that −1 ∈ W, we get

S_l(f)(µ) = (1/|W|) ∫_B f(b)δ(b)ψ_{µ,l}(b^{−1}) db = (1/|W|) ∫_B [f(b)δ(b)]ψ_{µ,l}(b) db = (1/|W|) ∫_B [f(b)δ(b)]ϕ_{µ+ρ,l}(b) db.

As supp(f|_{B·x_o}) ⊆ exp(Ω) · x_o we can, using Theorem 3.8, define the holomorphic extension of S_l(f) to a*_C by

λ → S_l(f)(λ) = (1/|W|) ∫_B [f(b)δ(b)]ϕ_{λ+ρ,l}(b) db.

Then, by the W-invariance of ϕ_{λ,l}, it follows that S_l(f)(w(λ + ρ) − ρ) = S_l(f)(λ). As b → f(b)δ(b) is in C^∞_r(B)^W, it follows again from Theorem 3.8 that |S_l(f)(λ)| ≤ Ce^{r‖Re λ‖}.
The polynomial estimate follows by applying D ∈ D_l(Y) to ϕ_{λ,l} and noticing that

ζ_l(D; λ)S_l(f)(λ) = ∫_Y f(y)Dϕ_{λ,l}(y^{−1}) dy = ∫_Y D*f(y)ϕ_{λ,l}(y^{−1}) dy = S_l(D*f)(λ),

where D* is the adjoint of D.

Proof. (Part (3)) This follows from the generalization of Carleson's theorem to higher dimensions, see [25, Lem. 7.1]. Here we use the fundamental weights ω_1, ..., ω_n and (2.7) to view S_l(f) as a function on C^n. Denote by ‖λ‖_0 the standard norm on C^n. Then there exists C > 0 such that ‖λ‖ ≤ C‖λ‖_0. It follows that there exists C_1 > 0 such that

|S_l f(Σ_j λ_jω_j + 2|l|ρ_s)| ≤ C_1 e^{(rC)‖λ‖_0}.

Hence, if rC < π and S_l f(Σ_j k_jω_j + 2|l|ρ_s) = 0 for all k_j ∈ Z^+, then Carleson's theorem implies that S_l f = 0, and hence the extension is unique.

5. The Surjectivity

In this section we prove the surjectivity. The tool is the Paley-Wiener theorem for central functions on compact Lie groups. We recall the Paley-Wiener theorem for the group case first, and then use it to prove the surjectivity of the χ_l-spherical Fourier transform.

5.1. The Paley-Wiener theorem for central functions on U. The tool to prove the surjectivity of the χ_l-spherical Fourier transform is the Paley-Wiener theorem for central functions on compact Lie groups, originally proved by F. Gonzalez in [11]. This is a special case of the Paley-Wiener theorem for compact symmetric spaces, as U ≃ U × U/K, where K is the diagonal in U × U, i.e., K = {(u, u) | u ∈ U} ≃ U. The corresponding involution is τ(a, b) = (b, a), and the action of U × U on U is (a, b) · u = aub^{−1}. In particular we have

C^∞(U)^U = {f ∈ C^∞(U) | (∀u, k ∈ U) f(kuk^{−1}) = f(u)}.

The spherical functions on U = U × U/K are the normalized trace functions

ξ_µ(u) = (1/d(µ)) Tr(π_µ(u^{−1})).

The non-compact dual is G/K = U_C/K.
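For example (our illustration, not from the text), for U = SU(2) the irreducible representation π_n of dimension d = n + 1 has character given by the Weyl character formula, so the normalized trace function is

```latex
\xi_{n}(u) = \frac{1}{n+1}\,\frac{\sin\bigl((n+1)\theta\bigr)}{\sin\theta},
\qquad u \ \text{conjugate to}\ \operatorname{diag}(e^{i\theta},\, e^{-i\theta}),
% a central function with \xi_n(e) = \lim_{\theta \to 0} \xi_n = 1,
% as expected of a spherical function on U \times U / \operatorname{diag}(U).
```

This makes the normalization 1/d(µ) in the definition of ξ_µ concrete.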
The role of a is played by the Cartan subalgebra ih, that of W by W_h, the Weyl group associated to ∆, and that of Λ^+_0 by Λ^+(U), the semi-lattice of all highest weights of irreducible representations of U. Finally, the spherical functions on U_C/K are given by, see [15, Thm. 5.7, Chapter IV]:

ϕ_λ(a) = (π(ρ_h)/π(λ)) · [Σ_{w∈W} (det w)a^{wλ}] / [Σ_{w∈W} (det w)a^{wρ_h}],

where π(µ) = Π_{α∈∆^+} ⟨α, µ⟩. The first one to consider this case was F. Gonzalez in [11]. The result was used in [25] to prove the surjectivity part of the Paley-Wiener theorem for K-invariant functions on U/K. The simple reformulation corresponding to the results of [25], i.e., using the normalized trace function ξ_µ instead of the character Tr ∘ π_µ, was done in [28, Lem. 5.4]. The formulation of Gonzalez' theorem, in the form we need it, is then:

Theorem 5.1. Let PW_r(h*_C)^{R(W_h)} be the space of holomorphic functions on h*_C of exponential growth r and such that F(w(λ + ρ_h) − ρ_h) = F(λ) for all λ ∈ h*_C and w ∈ W_h. Then there exists an R > 0 such that for all 0 < r < R the spherical Fourier transform

f̂(λ) = ∫_U f(u)ϕ_{λ+ρ_h}(u^{−1}) du

is a surjective linear map C^∞_r(U)^U → PW_r(h*_C)^{R(W_h)}.

5.2. The Surjectivity of the χ_l-Spherical Fourier Transform. It is easy to see that if F ∈ PW_r(b*_C)^{R(W)}, then the function f(u) = Σ_{µ∈Λ^+_l} d(µ)F(µ)ψ_{µ,l}(u) is smooth and S_l(f) = F. The hard part is to see that supp(f) ⊆ B̄_r(x_o). To use Theorem 5.1 we define

Q_l(f)(u) = ∫_K f(uk)χ_l(k) dk = ∫_K χ_l(k)f(ku) dk, f ∈ C^∞_r(U)^U.

Lemma 5.2. Assume that f ∈ C^∞_r(U)^U; then Q_l(f) ∈ C^∞_r(U//K; L_l). Furthermore, if F ∈ PW_r(h*_C)^{R(W_h)} and f = Σ_{µ∈Λ^+(U)} d(µ)²F(µ)ξ_µ, then

(5.1) Q_l(f)(u) = Σ_{µ∈Λ^+_l} d((µ, 0, µ_0))F((µ, 0, µ_0))ψ_{µ,l}(u).

Proof. If f = Σ_{µ∈Λ^+(U)} d(µ)²F(µ)ξ_µ, then f is the inverse Fourier transform of F. Because of the rapid decay of F, it follows that Q_l(f) = Σ_{µ∈Λ^+(U)} d(µ)²F(µ)Q_l(ξ_µ).
Note that the square in d(µ)² comes from the fact that the representation we are in fact using, on V_µ ⊗ V*_µ, has dimension d(µ)². Recall that P_{µ,l}v = ∫_K χ_l(k)^{−1}π_µ(k)v dk is the orthogonal projection V_µ → V^l_µ. In particular, if π_µ is not χ_l-spherical, then P_{µ,l}(V_µ) = 0. Fix an orthonormal basis of V_µ, say v_1, ..., v_{d(µ)}. In case V_µ is χ_l-spherical, we assume that v_1 = e_{µ,l}. Then

d(µ) ∫_K ξ_µ(uk)χ_l(k) dk = Σ_{j=1}^{d(µ)} ⟨v_j, ∫_K χ_l(k^{−1})π_µ(u)π_µ(k)v_j dk⟩ = Σ_{j=1}^{d(µ)} ⟨v_j, π_µ(u)P_{µ,l}v_j⟩ = { 0 if π_µ is not χ_l-spherical; ψ_{µ_b,l}(u) if π_µ is χ_l-spherical },

where µ_b is the projection of µ ∈ ih* onto ib*. The claim (5.1) now follows from our description of Λ^+_l(U) = {(µ, 0, µ_0) | µ ∈ Λ^+_l}. The claim that Q_l(f) is supported in a ball of radius r follows as in Lemma 9.3 of [25].

We still need a few lemmas and observations for the proof of the surjectivity of S_l. Recall that ρ_0 = ρ_h|_{a^⊥} and ρ_h = ρ + ρ_0. The map

Ψ_h : PW_r(h*_C)^{R(W_h)} → PW_r(h*_C)^{W_h}, F → Ψ_h(F), Ψ_h(F)(λ) = F(λ − ρ_h),

is an isomorphism with inverse Ψ^{−1}_h(F)(λ) = F(λ + ρ_h). Recall that we write (0, 0, µ_0) as µ_0. The map

F → Φ(F), Φ(F)(λ) = F(λ + µ_0 + ρ_0),

is an isomorphism of PW_r(h*_C) onto itself. Let W̃ = {w ∈ W_h | w(a_C) = a_C}. We have W̃ ⊆ W_h. Recall that ∆_0 = {α ∈ ∆ | α|_a = 0}. Let W_0 = {w ∈ W_h | w|_a = Id}. Then W_0 ⊆ W̃.

Lemma 5.3. If w ∈ W̃, then wµ_0 = µ_0.

Proof. Since µ_0 = λ(iX) (see the discussion in Section 2.1), it remains to show that if w ∈ W̃ then wX = X. Recall [30, (4.4)] for the definition of X. Note that W̃|_a = W, and elements of W are permutations and sign changes. The permutations of the ε_j's are given by products of reflections r_{(ε_i−ε_j)/2}. Let β ∈ ∆ be given by β = β_+ + β_−, where β_− = (1/2)(ε_i − ε_j) ∈ a* and β_+ ∈ (a^⊥)*. Let β̄ = −β_+ + β_−. Then ⟨β, β̄⟩ = 0, r_β r_{β̄} ∈ W̃, and r_β r_{β̄}|_a = r_{(ε_i−ε_j)/2}. We have (r_β r_{β̄})X = X.
It is easy to see that any w ∈ W̃ with w|_a = r_{(ε_i−ε_j)/2} also satisfies wX = X. On the other hand, the sign changes of the ε_j's are given by products of the r_{ε_j}'s. But then r_{ε_j}X = X, since X ⊥ a. The same holds for any element of W̃ whose restriction to a is a sign change. Therefore wX = X for every w ∈ W̃.

Lemma 5.4. Let G ∈ PW_r(h*_C)^{W_h}. Then Φ(G)|_{a*_C} = G(· + µ_0 + ρ_0) is W-invariant, i.e., if w ∈ W and µ ∈ a*_C, then G(wµ + µ_0 + ρ_0) = G(µ + µ_0 + ρ_0).

Proof. Let w ∈ W, and let w̃ ∈ W̃ be such that w̃|_a = w. Since G is W_h-invariant, it is in particular W̃-invariant. In view of Lemma 5.3 we get

G(wµ + µ_0 + ρ_0) = G(µ + w̃^{−1}µ_0 + w̃^{−1}ρ_0) = G(µ + µ_0 + w̃^{−1}ρ_0).

Note that w̃^{−1}∆^+_0 is again a positive system in ∆_0, and we can choose w_0 ∈ W_0 such that w_0(w̃^{−1}∆^+_0) = ∆^+_0; in particular, choose w_0 such that w_0 w̃^{−1}(ρ_0) = ρ_0. Moreover, w_0µ = µ. It follows that

G(wµ + µ_0 + ρ_0) = G(w_0(µ + µ_0 + w̃^{−1}ρ_0)) = G(µ + µ_0 + ρ_0).

Let G ∈ PW_r(h*_C) and let k = |W_h|. Let P_1, ..., P_k be a basis for S(h) over S(h)^{W_h}. Here S(h) is the symmetric algebra of C-valued polynomials on h*_C, and S(h)^{W_h} consists of the W_h-invariant elements in S(h). According to Rais [29] (a published proof, due to L. Clozel and P. Delorme, is in [5]), there exist G_1, ..., G_k ∈ PW_r(h*_C)^{W_h} such that G = P_1G_1 + ... + P_kG_k. We are now ready to prove that the χ_l-spherical Fourier transform S_l (4.2) is surjective.

Proof. (Part (2) of Theorem 4.1) Let F ∈ PW_r(b*_C)^{R(W)}. Then Ψ(F) ∈ PW_r(b*_C)^W. It follows from Cowling [6] that there exists E ∈ PW_r(h*_C) such that Φ(E)|_{a*_C} = Ψ(F)(· + µ_0), i.e.,

E(µ + µ_0 + ρ_0) = Φ(E)|_{a*_C}(µ) = Ψ(F)(µ + µ_0), µ ∈ a*_C.

By the above result of Rais, there exist polynomials P_j ∈ S(h) and G_j ∈ PW_r(h*_C)^{W_h} such that E = Σ_{j=1}^k P_jG_j. Hence

(5.2) Ψ(F)(· + µ_0) = Σ_{j=1}^k Φ(P_j)|_{a*_C} Φ(G_j)|_{a*_C}.
Taking the average of (5.2) over W gives

Ψ(F)(µ + µ_0) = Σ_{j=1}^k (1/|W|) Σ_{w∈W} Φ(P_j)|_{a*_C}(wµ) Φ(G_j)|_{a*_C}(wµ) = Σ_{j=1}^k q_j(µ) Φ(G_j)|_{a*_C}(µ),

where q_j(µ) := (1/|W|) Σ_{w∈W} Φ(P_j)|_{a*_C}(wµ), since by Lemma 5.4

Φ(G_j)|_{a*_C}(wµ) = G_j(wµ + µ_0 + ρ_0) = G_j(µ + µ_0 + ρ_0) = Φ(G_j)|_{a*_C}(µ), w ∈ W.

Note that q_j ∈ S(a)^W. Let D_j ∈ D_l(Y) be such that q_j(λ) = ζ_l(D*_j, λ), λ ∈ a*_C; see the discussion at the beginning of Section 3.2. By the Paley-Wiener theorem for C^∞_r(U)^U, there exists ϕ_j ∈ C^∞_r(U)^U with spherical Fourier transform Ψ^{−1}_h G_j. Then f_j = Q_l(ϕ_j) ∈ C^∞_r(U//K; L_l) has the χ_l-spherical Fourier transform S_l(f_j)(µ + µ_0) = Φ(G_j)|_{a*_C}(µ + ρ). It follows that F is the χ_l-spherical Fourier transform of f := D_1f_1 + ... + D_kf_k. The surjectivity now follows from the fact that differentiation does not increase supports, and hence f ∈ C^∞_r(U//K; L_l).

6. Rank One Case

The rank one case corresponds to n = 1, that is, b is one-dimensional. From Table 1.2 we see that the only rank one hermitian symmetric space Y is SU(q + 1)/S(U(1) × U(q)), q ≥ 1. This is the space of complex lines in C^{q+1}, known as the Grassmann manifold of one-dimensional subspaces of C^{q+1}. The root system Σ = {±α, ±2α} is of type BC_1, and we fix Σ^+ = {α, 2α}. Set k_1 = m_α and k_2 = m_{2α}. From Table 1.3 we have k_1 = 2(q − 1), q ≥ 1, and k_2 = 1. We identify b and ib* with iR, and b_C and b*_C with C. So B = exp b ≅ T. The Weyl group W = {±1} acts on iR and C by multiplication. The χ_l-spherical Fourier transform becomes, up to a constant,

S_l(f)(λ) = ∫_T f(x)η_l(x)F(λ + ρ, m(l); x)δ(x) dx,

where dx is the invariant measure on the torus T, and F is as defined in Definition A.1. Let s = (1/2)(x + x^{−1}). The weight measure δ(x) dx becomes 2^q(1 − s)^{q−1} ds, where ds is the invariant measure on R.
Since f|_B is W-invariant and f has compact support in exp Ω/2, Sl(f)(λ) reduces to an integral with respect to s over −1 ≤ s ≤ 1. We give an alternative method to obtain the exponential growth of Sl(f). In our case, the multiplicity parameter m(l) in F(λ + ρ, m(l); ·) might be negative. With the help of shift operators (cf. Chapter 3 in [13]) we can shift certain multiplicities up or down as needed. Applying a suitable shift operator to F, we can move multiplicities from negative to positive values, so that we are free to use the estimate given by [21]. In the rank one case the shift operator we use has a simple form, derived from [13, (3.3.5)]:

E− = (s − 1) d/ds + C, C = k1 + k2 − 1.

This is a first order differential operator. Integration by parts, followed by Proposition 6.1 in [21], gives the desired exponential growth of Sl(f).

Remark 6.1. Let u be a rank n (n > 1) Lie algebra associated with the multiplicity (ms, mm, 1) with mm even. Let uj be rank one Lie algebras with multiplicity (ms, 0, 1), j = 1, …, n. Then b = b1 ⊕ ⋯ ⊕ bn, where bj ⊂ qj is maximal abelian and qj is the −1-eigenspace of uj. So B = B1 × B2 × ⋯ × Bn with Bj = exp bj. Let f ∈ C∞_r(U//K, Ll). Using a shift operator to move mm down to 0, we can then write Sl(f)(λ) as an n-fold iterated integral of rank one cases, which we have already handled. This gives a different proof of the exponential growth of Sl(f), but only in the case where mm is even.

Appendix A. Estimates for the Heckman-Opdam Hypergeometric Functions

We refer to [13, Part I, Chapter 4] and [31, Section 3.2] for an introduction to the Harish-Chandra asymptotic expansion for the χl-spherical functions ϕλ,l. The Harish-Chandra expansion was the basic tool in Heckman and Opdam's theory of (generalized) hypergeometric functions for arbitrary multiplicity functions. The χl-spherical functions ϕλ,l were identified as hypergeometric functions in [13].
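As a concrete check of the integration-by-parts step for the rank one shift operator E− = (s − 1) d/ds + C from Section 6: against a weight that vanishes at s = ±1, the formal adjoint of E− is g ↦ −d/ds((s − 1)g) + Cg, with no boundary terms. The sketch below (our own check; the values k1 = 4, k2 = 1 correspond to q = 3 and are chosen only for illustration) verifies this in exact rational arithmetic with polynomials.

```python
from fractions import Fraction as Fr

# polynomials in s as coefficient lists [a0, a1, ...] = a0 + a1*s + ...
def pmul(p, q):
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fr(a) * Fr(b)
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [Fr(p[i] if i < len(p) else 0) + Fr(q[i] if i < len(q) else 0)
            for i in range(n)]

def pdiff(p):
    return [Fr(i) * Fr(a) for i, a in enumerate(p)][1:] or [Fr(0)]

def pint(p):
    # exact integral over [-1, 1]: sum of a_i * (1 - (-1)^(i+1)) / (i+1)
    return sum(Fr(a, i + 1) * (1 - (-1)**(i + 1)) for i, a in enumerate(p))

k1, k2 = 4, 1                      # assumed sample multiplicities (q = 3)
C = k1 + k2 - 1
sm1 = [Fr(-1), Fr(1)]              # s - 1

def Eminus(F):                     # E_- F = (s-1) F' + C F
    return padd(pmul(sm1, pdiff(F)), [Fr(C) * a for a in F])

def Eminus_adj(g):                 # formal adjoint: -d/ds((s-1) g) + C g
    return padd([-a for a in pdiff(pmul(sm1, g))], [Fr(C) * a for a in g])

F = [Fr(0), Fr(2), Fr(0), Fr(1)]   # F(s) = s^3 + 2s
g = pmul([Fr(1), Fr(0), Fr(-1)], [Fr(1), Fr(0), Fr(-1)])  # (1-s^2)^2, zero at ±1
# boundary term (s-1) F g vanishes at s = 1 (factor s-1) and s = -1 (g = 0)
assert pint(pmul(Eminus(F), g)) == pint(pmul(F, Eminus_adj(g)))
print("E_- and its formal adjoint agree under the integral")
```

This is only the mechanism; in the text the pairing involves the actual weight (1 − s)^{q−1}, for which the same boundary argument applies when q > 1.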
In the following we give a uniform estimate for the hypergeometric functions, generalizing the result of Opdam [21] to some negative multiplicities. We do not assume that Σ corresponds to a Hermitian symmetric space. We refer to [13] for notation and basic definitions. We recall that a multiplicity function m : Σ → C is a W-invariant function. It is said to be positive if m(α) ≥ 0 for all α. The set of multiplicity functions is denoted by M, and the subset of positive multiplicity functions by M+. The Harish-Chandra series corresponding to a multiplicity function m is denoted by Φ(λ, m; a) and the c-function by c(λ, m).

Definition A.1. The function F(λ, m; a) = Σ_{w∈W} c(wλ, m) Φ(wλ, m; a) is called the hypergeometric function on A associated with the triple (a, Σ, m).

We collect now the information that we need about the hypergeometric functions.

Theorem A.2. Let M≥ = {m ∈ M | (∀α ∈ Σ*) mα + mα/2 ≥ 0, mα ≥ 0}. Then the following holds:
(1) There exists an open set Mreg containing M≥ and an open set W ⊂ A_C containing A such that F(λ, m; a) is holomorphic on a*_C × Mreg × W.
(2) One can take W in (1) to be exp(a + Ω).

Proposition A.3. Let the notation be as above. Let l ∈ Z and let η±l be as in (3.4) and (3.5). Then the following holds:
(1) m±(l) ∈ M≥.
(2) ϕλ,l|_A = η±l F(λ, m±(l); ·), where λ ∈ a*_C, and the ± sign indicates that both possibilities are valid.

Proof. (1) follows from the definition of m±(l) (cf. (3.6) and (3.7)), and (2) is [13, p. 76, Theorem 5.2.2].

Remark A.4. For X ∈ Ω and λ ∈ a*_C we have ϕλ,l(exp X) = ηl(exp X) F(λ, m(l); exp X). Since α(b) ⊂ iR for α ∈ Σ,

0 < |ηl(exp X)| = Π_{α∈O+_s} |cos Im α(X)|^{|l|} ≤ 1.

Therefore, ηl is holomorphic on A(exp Ω) and bounded on exp Ω.

In the main part, showing that the χl-spherical Fourier transform maps the space C∞_r(U//K; Ll) into the Paley-Wiener space required good control over the growth of the hypergeometric functions. Proposition 6.1 in [21] gives an estimate, uniform both in λ ∈ a*_C and in Z ∈ a + Ω/2 (recall the difference in the notation), in the case where all multiplicities are positive. But we need similar estimates where some multiplicities are allowed to be negative. In the following we generalize Opdam's results to multiplicities in M≥. We also point out that Opdam's estimates hold for Y ∈ ½Ω, while our estimates, with possibly a different constant in the exponential growth, hold for Y ∈ Ω. Our proof is based on ideas from [21] but uses a different regrouping of terms, as we will point out later.
where Z = X + i Y with X, Y ∈ a and |α (Y )| ≤ π − ε for all α ∈ Σ. Proof. Let φ w (exp Z) = G(λ, m, w −1 Z) where G is the nonsymmetric hypergeometric function defined as in [21,Theorem 3.15], so that F (λ, m; exp Z) = |W | −1 w∈W G(λ, m, w −1 Z) . In the following we will often write φ w instead of φ w (exp Z). By Definition 3.1 and Lemma 3.2 in [21] we have ∂ ξ φ w = − 1 2 α∈Σ + m α α(ξ) 1 + e −2α(Z) 1 − e −2α(Z) (φ w − φ rαw ) − sgn(w −1 α)φ rαw + (wλ, ξ)φ w . Here, sgn (α) = 1 if α ∈ Σ + , and sgn (α) = −1 if α ∈ −Σ + . We get by taking complex conjugates, ∂ ξ φ w = − 1 2 α∈Σ + m α α(ξ) 1 + e −2α(Z) 1 − e −2α(Z) (φ w − φ rαw ) − sgn(w −1 α)φ rαw + (wλ, ξ)φ w . It follows that ∂ ξ w |φ w | 2 = w [(∂ ξ φ w )φ w + φ w (∂ ξ φ w )] = − 1 2 α∈Σ + ,w [m α α(ξ) 1 + e −2α (Z) 1 − e −2α (Z) (φ w − φ rαw )φ w − sgn(w −1 α)φ rα w φ w +m α α(ξ) 1 + e −2α(Z) 1 − e −2α(Z) (φ w − φ rαw )φ w − sgn(w −1 α)φ rα w φ w ] +2 w Re(wλ(ξ))|φ w | 2 . For fixed α, we add the terms with index w and r α w. Then 1 − e −2α(Z) |φ w − φ rαw | 2 + α∈Σ + ,w m α sgn(w −1 α)Im(α(ξ))Im(φ w φ rαw ) + 2 w Re(wλ(ξ))|φ w | 2 . Observe that Let ε > 0 and write Z = X + iY with X, Y ∈ a and |α(Y )| ≤ π − ε, for all α ∈ Σ. Let α(X) = t ∈ R and α(Y ) = s ∈ R. Then α(Z) = t + is. Write α(ξ) = a + ib with a = Reα(ξ) and b = Imα(ξ). We then have (1 + e −2α(Z) )(1 − e −2α(Z) ) = 1 − e −4t − 2ie −2t sin(2s) . Similarly, (1 + e −2α(Z) )(1 − e −2α(Z) ) = 1 − e −4t + 2ie −2t sin(2s) . A simple calculation the shows that α(ξ)(1 + e −2α(Z) )(1 − e −2α(Z) ) + α(ξ)(1 + e −2α(Z) )(1 − e −2α(Z) ) = 2Re(α(ξ))(1 − e −4α(X) ) + 4Im(α(ξ))e −2α(X) sin(2α(Y )). Hence, ∂ ξ w |φ w | 2 (A.1) = − 1 2 α∈Σ + ,w m α Re(α(ξ))(1 − e −4α(X) ) + 2Im(α(ξ)) sin(2α(Y )) e 2α(X) |1 − e −2α(Z) | 2 |φ w − φ rαw | 2 + α∈Σ + ,w m α sgn(w −1 α)Im(α(ξ))Im(φ w φ rαw ) + 2 w Re(wλ(ξ))|φ w | 2 . We first take X, ξ ∈ a reg such that they are in the same Weyl chamber. Let µ ∈ {wReλ} w∈W be such that µ(ξ) = max w Re(wλ)(ξ). Then (wReλ − µ)(ξ) ≤ 0. 
Here we used the fact that G(λ, m, 0) = 1 (cf. [21, Theorem 3.15]). Next, we take Y ∈ a_reg such that |α(Y)| ≤ π − ε for all α ∈ Σ, and η ∈ a_reg belonging to the same Weyl chamber, and let ξ = iη. Then Re(wλ(ξ)) = −Im(wλ(η)) and Im(α(ξ)) = Re(α(η)). Take µ ∈ {w Im λ}_{w∈W} such that −Im(wλ(η)) ≤ −µ(η) for all w ∈ W; that is, µ = min_w Im(wλ). Observe that

Σ_{α∈Σ+} |mα| |α(η)| ≤ max_w Σ_{α∈Σ+} |mα| α(wη) = 2 max_w (wρ, η).

We have

Σ_{α∈Σ+, w} mα sgn(w^{-1}α) Im(α(ξ)) Im(φw φ_{rαw}) ≤ Σ_{α∈Σ+, w} |mα| |α(η)| |φw| |φ_{rαw}| ≤ 2 max_w (wρ, η) Σ_w |φw|².

Choose ν ∈ {wρ}_{w∈W} such that (ν, η) = max_w (wρ, η). Let C > 2 be a constant to be determined, and let

H(iY) = e^{2µ(Y)} e^{−Cν(Y)} Σ_w |φw(exp(iY))|².

Definition 3.1. Let H be a locally compact Hausdorff group and L ⊂ H a compact subgroup. Let χl : L → T be a continuous homomorphism. A continuous function ϕ : H → C is an elementary spherical function of type χl if ϕ is not identically zero and

∫_L ϕ(akb) χl(k) dk = ϕ(a)ϕ(b), for all a, b ∈ H.

Lemma 3.2. Let ϕ be a spherical function of type χl. Then

ϕ(k1 h k2) = χl(k1 k2)^{-1} ϕ(h), for all h ∈ H and k1, k2 ∈ L.

Remark 3.4. Notice that the formula (3.2) differs from the one in [13, p. 82, (5.4.1)] by an inverse sign. The definition (3.2) of ϕλ,l is equivalent to ϕλ,l(g) = ∫_K a(gk)^{−λ−ρ} χl(κ(gk)^{−1} k) dk.

(3.3) Dϕλ,l = ζl(D; λ) ϕλ,l.

We write ηl for η+l. Both functions are holomorphic and W-invariant on A_C. Recall that a multiplicity function m is a W-invariant function m : Σ → C. We write mα = m(α) and fix m to be the multiplicity function given by mα = dim_R gα as before. In our case it can take only three values, m = (ms, mm, ml), where ms = m_{εj}, mm = m_{εj±εi} (i ≠ j), and ml = m_{2εj}. It is possible that one or more of these numbers is zero. Define

(3.6) m+(l) = (ms − 2|l|, mm, ml + 2|l|),
(3.7) m−(l) = (ms + 2|l|, mm, ml − 2|l|).

Theorem 4.1 (Paley-Wiener Theorem).
The (extended) χ l -spherical Fourier transform S l gives a linear bijection Remark 4.3. 1) As remarked in [25, Remark 4.3] one can use different R in Σ → C is a W -invariant function. It is said to be positive if m(α) ≥ 0 for all α. The set of multiplicity functions is denoted by M and the subset set of positive multiplicity function is denoted by M + . The Harish-Chandra series corresponding to a multiplicity function m is denoted by Φ(λ, m; a) and the c-function is denoted by c(λ, m). Definition A.1. The function F (λ, m; a) = w∈W c(wλ, m)Φ(wλ, m; a) . 5 . 5Let m ∈ M ≥ . Let F be the hypergeometric function associated with Σ and m. Let ε > 0. Then there is a constant C = C ε > 0 depending on ε such that |F (λ, m; exp Z)| |1 − e −2α(Z) | 2 = (1 − e −2α(Z) )1 − e −2α(Z) = (1 − e −2α(Z) )(1 − e −2α(Z) ξ)(1 + e −2α(Z) )(1 − e −2α(Z) ) + α(ξ)(1 + e −2α(Z) )(1 − e −2α(Z) ) |1 − e −2α(Z) | 2 . e −4α(X) ) |1 − e −2α(Z) | 2 |φ w − φ rαw | 2 e −2µ(X) − µ)(ξ)|φ w | 2 e −2µ(X) , (A.3)Observe that the term (A.3) is clearly nonpositive. In the term (A.2), the factor |φ w − φ rαw | 2 e −2µ(X) ≥ 0. We let m α/2 = 0 if α/2 is not a root. Considerα∈Σ + , w m α α(ξ)(1 − e −4α(X) ) |1 − e −2α(Z) | 2 |φ w − φ rαw | 2 e −2µ(X) e −2α(X) ) |1 − e −α(Z) | 2 |φ w − φ rαw | 2 e −2µ(X) α/2 |φ w − φ rαw | 2 e −2µ(X) . (A.4) Since X, ξ are in the same Weyl chamber, α(ξ)(1 − e −2α(X) ) ≥ 0 for all α ∈ Σ + . Since m α/2 ≥ −m α and m α ≥ 0 for all α ∈ Σ + * , then m α 1 + e −2α(X) |1 + e −α(Z) e −2t ) − |1 + e −t e −is | 2 2|1 + e −t e −is | 2 ≥ 0if and only if the numerator is nonnegative, which is clearly as2(1 + e −2t ) − |1 + e −t e −is | 2 = 1 + e −2t − 2e −t cos(s/2) ≥ 1 + e −2t − 2e −t = (1 − e −t ) 2 ≥ 0.It follows that (A.4) is nonnegative. 
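The elementary inequality invoked at the end of this computation, 1 + e^{−2t} − 2e^{−t} cos(·) ≥ (1 − e^{−t})² ≥ 0, holds for every real t because the difference of the two sides is 2e^{−t}(1 − cos(·)) ≥ 0. A quick numerical spot-check (our own, not from the paper):

```python
import math

for t in (-2.0, -0.3, 0.0, 0.5, 3.0):
    for s in (0.0, 0.7, 1.6, 3.1):
        c = math.cos(s)
        lhs = 1 + math.exp(-2*t) - 2*math.exp(-t)*c
        rhs = (1 - math.exp(-t))**2
        # lhs - rhs = 2 e^{-t} (1 - cos s) >= 0, so the chain of
        # inequalities in the text holds for all real t and s
        assert lhs >= rhs - 1e-12 and rhs >= 0
print("nonnegativity chain verified on the sample grid")
```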
Thus the term (A.2) is nonpositive, and hence

∂ξ ( e^{−2µ(X)} Σ_{w∈W} |φw(exp Z)|² ) ≤ 0.

This implies

e^{−2 max_w Re(wλ(X))} Σ_w |φw(exp Z)|² ≤ e^{−2 max_w Re(wλ(0))} Σ_w |φw(exp(0 + iY))|² = Σ_w |φw(exp(iY))|²

if X ∈ a_reg, and by continuity this holds for all X ∈ a. Note that |φe(exp Z)| = |G(λ, m, e^{−1}Z)| = |G(λ, m, Z)| and |φe(exp Z)|² ≤ Σ_w |φw(exp Z)|², which implies |φe(exp Z)| ≤ (Σ_w |φw(exp Z)|²)^{1/2}. Hence we have

(A.5) |G(λ, m, X + iY)| ≤ e^{max_w Re(wλ(X))} ( Σ_w |φw(exp(iY))|² )^{1/2},

and in particular, since φw(exp 0) = G(λ, m, 0) = 1,

|G(λ, m, X)| ≤ |W|^{1/2} e^{max_w Re(wλ(X))}.

The individual cases from Table 1.2 are listed in the following table.

Proof of Theorem A.2. For (1), there exists an open set Mreg containing M≥ and an open set W ⊂ A_C containing A such that F(λ, m; a) is holomorphic on a*_C × Mreg × W; see [13, Theorem 4.4.2 and Remark 4.4.3], and also [21, Theorem 3.15] and [31, Prop. 2.2]. For (2), one can take W in (1) to be exp(a + Ω); see Remark 3.17 in [3]. (Part (2) of Proposition A.3 is [13, p. 76, Theorem 5.2.2].)

Now using the formula (A.1), we obtain a decomposition into four terms (I)-(IV), where (III) and (IV) are clearly nonpositive. In the original proof in [21], α(η) sin 2α(Y) ≥ 0 and mα ≥ 0 for all α ∈ Σ+; hence (I) is nonpositive. The author then takes C = 2 so that (II) vanishes; in fact any C ≥ 2 would be good enough. Our assumptions allow π ≤ |2α(Y)| ≤ 2π − 2ε, which would imply that α(η) sin 2α(Y) ≤ 0 and so (I) ≥ 0. Similar problems arise since mα might be nonpositive for some α ∈ Σ+. Thus the rest of the proof differs from the original proof in [21]: we group (I) and (II) together and then choose the constant C so that the sum becomes nonpositive.

As earlier, we set mα/2 = 0 if α/2 is not a root. For (I), fix a root α ∈ Σ+* and let sα = s := α(Y). Then |1 − e^{−2α(iY)}|² = 4 sin²(s). Using that mα + mα/2 ≥ 0 and mα ≥ 0 for all α ∈ Σ+*, we get an estimate in which we use the formulas sin(2s) = 2 sin(s) cos(s) and cos(2s) = cos²(s) − sin²(s). Since η and Y belong to the same chamber, we see that α(η) tan(s/2) ≥ 0.
Since |s| = |α(Y)| ≤ π − ε, there is a constant C1 = C1(ε) > 0, depending on ε, bounding the resulting expression. It follows that (I)+(II) ≤ 0 if we take C ≥ 2 + 2C1 (C thus depends on ε). Together with (A.5), we get an estimate of exponential type e^{max_w Re(wλ(X))}. But note that |G(λ, m, Z)| = |F(λ, m; exp Z)|; we have therefore proved the desired estimate for F. Since ρ(·) (as ρ is independent of λ) and |W| are constants, we restate Proposition A.5 with the estimate holding for all X ∈ Ωε, Y ∈ b, and λ ∈ a*_C.

References

[1] J. Arthur, A Paley-Wiener theorem for real reductive groups, Acta Math. 150 (1983), 1-89.
[2] E. van den Ban and H. Schlichtkrull, A Paley-Wiener theorem for reductive symmetric spaces, Ann. Math. 164 (2006), 879-909.
[3] T. Branson, G. Ólafsson, and A. Pasquale, The Paley-Wiener theorem and the local Huygens' principle for compact symmetric spaces: the even multiplicity case, Indag. Math. (N.S.) 16 (3-4) (2005), 393-428.
[4] T. Branson, G. Ólafsson, and A. Pasquale, The Paley-Wiener theorem for the Jacobi transform and the local Huygens' principle for root systems with even multiplicities, Indag. Math. (N.S.) 16 (2005), 429-442.
[5] L. Clozel and P. Delorme, Le théorème de Paley-Wiener invariant pour les groupes de Lie réductifs II, Ann. Sci. École Norm. Sup. 23 (1990), 193-228.
[6] M. Cowling, On the Paley-Wiener theorem, Invent. Math. 83 (1986), 403-404.
[7] J. Dadok, Paley-Wiener theorem for singular support of K-finite distributions on symmetric spaces, J. Funct. Anal. 31 (1979), 341-354.
[8] S. Dann and G. Ólafsson, Paley-Wiener theorems with respect to the spectral parameter, in: New developments in Lie theory and its applications, 55-83, Contemp. Math. 544, Amer. Math. Soc., Providence, RI, 2011.
[9] M. Eguchi, M. Hashizume, and K. Okamoto, The Paley-Wiener theorem for distributions on symmetric spaces, Hiroshima Math. J. 3 (1973), 109-120.
[10] R. Gangolli, On the Plancherel formula and the Paley-Wiener theorem for spherical functions on semisimple Lie groups, Ann. Math. 93 (1971), 150-165.
[11] F. B. Gonzalez, A Paley-Wiener theorem for central functions on compact Lie groups, Contemporary Math. 278 (2001), 131-136.
[12] Harish-Chandra, Spherical functions on a semisimple Lie group, I-II, Amer. J. Math. 80 (1958), 241-310, 553-613.
[13] G. Heckman and H. Schlichtkrull, Harmonic Analysis and Special Functions on Symmetric Spaces, Perspectives in Mathematics 16, Academic Press, 1994.
[14] S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, 1978.
[15] S. Helgason, Groups and Geometric Analysis, Mathematical Surveys and Monographs 83, AMS, 2000.
[16] S. Helgason, Geometric Analysis on Symmetric Spaces, Mathematical Surveys and Monographs 39, AMS, 2008.
[17] S. Helgason, An analogue of the Paley-Wiener theorem for the Fourier transform on certain symmetric spaces, Math. Ann. 165 (1966), 297-308.
[18] S. Helgason, Paley-Wiener theorems and surjectivity of invariant differential operators on symmetric spaces and Lie groups, Bull. Amer. Math. Soc. 79 (1973), 129-132.
[19] L. Hörmander, The Analysis of Linear Partial Differential Operators, I. Distribution Theory and Fourier Analysis, Springer, Berlin, 1990.
[20] C. C. Moore, Compactifications of symmetric spaces II: the Cartan domains, Amer. J. Math. 86 (1964), 358-378.
[21] E. M. Opdam, Harmonic analysis for certain representations of graded Hecke algebras, Acta Math. 175 (1995), 75-121.
[22] G. Ólafsson and A. Pasquale, A Paley-Wiener theorem for the Θ-hypergeometric transform: the even multiplicity case, J. Math. Pures Appl. 83 (2004), 869-927.
[23] G. Ólafsson and A. Pasquale, Paley-Wiener theorems for the Θ-spherical transform: an overview, Acta Appl. Math. 81 (2004), 275-309.
[24] G. Ólafsson and A. Pasquale, Ramanujan's master theorem for Riemannian symmetric spaces, J. Funct. Anal. 262 (2012), no. 11, 4851-4890.
[25] G. Ólafsson and H. Schlichtkrull, A local Paley-Wiener theorem for compact symmetric spaces, Advances in Mathematics 218 (2008), 202-215.
[26] G. Ólafsson and H. Schlichtkrull, Fourier series on compact symmetric spaces: K-finite functions of small support, J. Fourier Anal. Appl. 16 (2010), 609-628.
[27] G. Ólafsson and H. Schlichtkrull, Fourier transforms of spherical distributions on compact symmetric spaces, Math. Scand. 109 (2011), 93-113.
[28] G. Ólafsson and J. A. Wolf, The Paley-Wiener theorem and limits of symmetric spaces, J. Geom. Anal. 24 (2014), 1-31.
[29] M. Rais, Groupes linéaires compacts et fonctions C∞ covariantes, Bull. Sci. Math. 107 (1983), 93-111.
[30] H. Schlichtkrull, One-dimensional K-types in finite dimensional representations of semisimple Lie groups: a generalization of Helgason's theorem, Math. Scand. 54 (1984), 279-294.
[31] N. Shimeno, The Plancherel formula for spherical functions with one dimensional K-type on a simply connected simple Lie group of hermitian type, J. Funct. Anal. 121 (1994), 330-388.
THE FOUR OPERATIONS ON PERVERSE MOTIVES

Florian Ivorra and Sophie Morel

22 Dec 2022 · arXiv:1901.02096 · https://export.arxiv.org/pdf/1901.02096v2.pdf

Abstract. Let k be a field of characteristic zero with a fixed embedding σ : k ֒→ C into the field of complex numbers. Given a k-variety X, we use the triangulated category of étale motives with rational coefficients on X to construct an abelian category M(X) of perverse mixed motives. We show that over Spec(k) the category obtained is canonically equivalent to the usual category of Nori motives, and that the derived categories D^b(M(X)) are equipped with the four operations of Grothendieck (for morphisms of quasi-projective k-varieties) as well as nearby and vanishing cycles functors. In particular, as an application, we show that many classical constructions done with perverse sheaves, such as intersection cohomology groups or Leray spectral sequences, are motivic and therefore compatible with Hodge theory. This recovers and strengthens work by Zucker, Saito, Arapura and de Cataldo-Migliorini and provides an arithmetic proof of the pureness of intersection cohomology with coefficients in a geometric variation of Hodge structures.
Over the base field the category obtained still coincides with the usual category of Nori motives but now, as we show, it is possible to equip the derived categories of these abelian categories with the four operations of Grothendieck as well as nearby and vanishing cycles functors. However, we leave the construction of the tensor product and internal Hom operations on these categories to a later paper. In particular, as an application, we show that many classical constructions done with perverse sheaves, such as intersection cohomology groups or Leray spectral sequences, are motivic and therefore compatible with Hodge theory. This recovers and strengthens works by Zucker [78], Saito [68], Arapura [4] and de Cataldo-Migliorini [28]. Moreover it provides an arithmetic proof via reduction to positive characteristic and the Weil conjectures of the pureness of the Hodge structure on intersection cohomology with coefficients in a geometric variation of Hodge structures. Conjectural picture and some earlier works Before going into more detail about the content of this paper, let us discuss perverse motives from the perspective of perverse sheaves and recall parts of the conjectural picture and related earlier works. For someone interested in perverse sheaves, perverse motives can be thought of as perverse sheaves of geometric origin. However, the classical definition of these perverse sheaves as a full subcategory of the category of all perverse sheaves is not entirely satisfactory. Indeed, this category contains too many morphisms and consequently, as we take kernels and cokernels of morphisms which shouldn't be considered, too many objects. For example, perverse sheaves of geometric origin should define mixed Hodge modules and therefore any morphism between them should also be a morphism of mixed Hodge modules. 
Therefore, one expects the category of perverse motives/perverse sheaves of geometric origin to be an abelian category endowed with a faithful -but not full -exact functor into the category of perverse sheaves. According to Grothendieck, there should exist a Q-linear abelian category MM(k) whose objects are called mixed motives. Given an embedding σ : k ֒→ C, the category MM(k) should come with a faithful exact functor MM(k) → MHS to the category of (polarizable) mixed Q-Hodge structures MHS, called the realization functor. The mixed Hodge structure on the i-th Betti cohomology group H i (X) of a given k-variety X should come via the realization functor from a mixed motive H i M (X). The appealing beauty of this picture lies in the expected properties of this category, in particular, the conjectural relations between extension groups and algebraic cycles (see e.g. [50]), or the relation with period rings and motivic Galois groups (see e.g. for a survey [12]). As part of Grothendieck's more general cohomological program, the category MM(k) should underlie a system of coefficients. For any k-variety X, there should exist an abelian category MM(X) of mixed motives along with a realization functor into the category of mixed Hodge modules (or simply of sheaves of Q-vector spaces) on the associated analytic space X an , and their derived categories should satisfy a formalism of (adjoint) triangulated functors D b (MM(X)) f M * G G D b (MM(Y )) f * M o o f ! M G G D b (MM(X)), f M ! o o a formalism which has been at the heart of Grothendieck's approach to every cohomology theory. Then, for a k-variety a : X → Spec k, the motive H i M (X) would be given as the i-th cohomology of the image under a M * of a complex of mixed motives Q M X that should realize to the standard constant sheaf Q X on X an . 
Grothendieck was looking for abelian categories modeled after the categories of constructible sheaves, but as pointed out by Beȋlinson and Deligne one could/should also look for categories modeled after perverse sheaves (see e.g. [31]). Many attempts have been made to carry out at least partially but unconditionally Grothendieck's program. The most successful attempt in constructing the triangulated category of mixed motives (that is, conjecturally, the derived category of MM(X)) stems from Morel-Voevodsky's stable homotopy theory of schemes. The best candidate so far is the triangulated category DA ct (X) of constructible étale motivic sheaves (with rational coefficients) extensively studied by Ayoub in [6,7,9]. The theory developed in [6,7] provides these categories with the Grothendieck four operations and, as shown by Voevodsky in [76], Chow groups of smooth algebraic k-varieties can be computed as extension groups in the category DA ct (k). On the abelian side, Nori has constructed a candidate for the abelian category of mixed motives over k. The construction of Nori's abelian category HM(k) is tannakian in essence and, since it is a category of comodules over some Hopf algebra, it comes with a built-in motivic Galois group. Moreover any Nori motive has a canonical weight filtration and Arapura has shown in [5, Theorem 6.4.1] that the full subcategory of pure motives coincides with the semi-simple abelian category defined by André in [3] using motivated algebraic cycles (see also [45, Theorem 10.2.5]). More generally, attempts have been made to define Nori motives over k-varieties. Arapura has defined a constructible variant in [5] and the first author a perverse variant in [48]. However, the Grothendieck four operations have not been constructed (at least in their full extent) in those contexts. For example in [5], the direct image functor is only available for structural morphisms or projective morphisms and no extraordinary inverse image is defined. 
Note that the two different attempts should not be unrelated. One expects the triangulated category DA ct (X) to possess a special t-structure (called the motivic t-structure) whose heart should be the abelian category of mixed motives. This is a very deep conjecture, even for X = Spec k, which implies for example the Lefschetz and Künneth type standard conjectures (see [16]). As of now, the extension groups in Nori's abelian category of mixed motives are known to be related with algebraic cycles only very poorly. However, striking unconditional relations between the two different approaches have still been obtained. In particular, in [24], Gallauer-Choudhury have shown that the motivic Galois group constructed by Ayoub in [10, 11] using the triangulated category of étale motives is isomorphic to the motivic Galois group obtained by Nori's construction. Content of this paper Let us now describe more precisely the content of our paper. Given a k-variety X, consider the bounded derived category D b c (X, Q) of sheaves of Q-vector spaces with algebraically constructible cohomology on the analytic space X an associated with the base change of X along σ and the category of perverse sheaves P(X) which is the heart of the self-dual perverse t-structure on D b c (X, Q) introduced in [19]. Let DA ct (X) be the triangulated category of constructible étale motivic sheaves (with rational coefficients) which is a full triangulated subcategory of the Qlinear counterpart of the stable homotopy category of schemes SH(X) introduced by Morel and Voevodsky (see [51,60,75]). This category has been extensively studied by Ayoub in [6,7,9] and comes with a realization functor Bti * X : DA ct (X) → D b c (X, Q) (see [8]) and thus, by composing with the perverse cohomology functor, with a homological functor p H 0 P with values in P(X). 
The category of perverse motives considered in the present paper is defined (see Section 2) as the universal factorization

DA_ct(X) --pH^0_M--> M(X) --rat^M_X--> P(X)

of pH^0_P, where M(X) is an abelian category, pH^0_M is a homological functor, and rat^M_X is a faithful exact functor. This kind of universal construction goes back to Freyd and is recalled in Section 1. As we see in Section 6, ℓ-adic perverse sheaves can also be used to define the category of perverse motives (see Definition 6.3 and Proposition 6.11).

Given a morphism of k-varieties f : X → Y, the four functors

(1)  f^P_*, f^P_! : D^b_c(X, Q) → D^b_c(Y, Q),   f^*_P, f^!_P : D^b_c(Y, Q) → D^b_c(X, Q)

were developed by Verdier [73] (see also [53]) on the model of the theory developed by Grothendieck for étale and ℓ-adic sheaves [2]. The nearby and vanishing cycles functors

Ψ_g : D^b_c(X_η, Q) → D^b_c(X_σ, Q),   Φ_g : D^b_c(X, Q) → D^b_c(X_σ, Q)

associated with a morphism g : X → A^1_k were constructed by Grothendieck in [1] (here X_η denotes the generic fiber and X_σ the special fiber). By a theorem of Gabber, the functors ψ_g := Ψ_g[−1] and φ_g := Φ_g[−1] are t-exact for the perverse t-structures and thus induce exact functors

(2)  ψ_g : P(X_η) → P(X_σ),   φ_g : P(X) → P(X_σ).
A right S-module F is said to be of finite presentation if there exist objects s, t in S and an exact sequence S(−, s) → S(−, t) → F → 0 in Mod(S).

Definition 1.1. Let S be an additive category. We denote by R(S) the full subcategory of Mod(S) consisting of right S-modules of finite presentation.

Note also that the category A_ad(S) is canonically equivalent to R(L(S)). This construction can be used to provide an alternative description of Nori's category (see [15]). Let Q be a quiver, A be an abelian category and T : Q → A be a representation. Let P(Q) be the path category and P(Q)^⊕ be its additive completion obtained by adding finite direct sums. Then, up to natural isomorphisms, we have a commutative diagram

  Q → P(Q)^⊕ --̺_T--> A_ad(P(Q)^⊕) =: A_qv(Q) --ρ_T--> A,

where ̺_T is an additive functor, ρ_T is an exact functor and the composite is T. The kernel of ρ_T is a thick subcategory of A_qv(Q) and we define the abelian category A_qv(Q, T) to be the quotient of A_qv(Q) by this kernel. By construction, the functor ρ_T has a canonical factorization

  A_qv(Q) --π_T--> A_qv(Q, T) --r_T--> A

where π_T is an exact functor and r_T is a faithful exact functor. If we denote by T̄ the composition of the representation Q → A_qv(Q) and the functor π_T : A_qv(Q) → A_qv(Q, T), it provides a canonical factorization of T

  Q --T̄--> A_qv(Q, T) --r_T--> A

where T̄ is a representation and r_T is a faithful exact functor. It is easy to see that the above factorization is universal among all factorizations of T of the form

  Q --R--> B --s--> A

where B is an abelian category, R is a representation and s is a faithful exact functor. In particular, whenever Nori's construction is available, e.g. if A is Noetherian, Artinian and has finite dimensional Hom-groups over Q (see [48]), the category A_qv(Q, T) is equivalent to Nori's abelian category associated with the quiver representation T.
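In diagrammatic form, the universal property just stated can be drawn as follows (a LaTeX/tikz-cd sketch; the dashed arrow is the induced exact functor, unique up to natural isomorphism):

```latex
\documentclass{standalone}
\usepackage{tikz-cd}
\begin{document}
% Universal property of Q -> A_qv(Q,T) -> A: any factorization T = s . R
% with B abelian, R a representation and s faithful exact, factors through
% A_qv(Q,T) by an exact functor (dashed), compatibly with r_T and s.
\begin{tikzcd}[column sep=large]
  Q \arrow[r, "\overline{T}"] \arrow[d, "R"'] &
  \mathcal{A}^{\mathrm{qv}}(Q,T) \arrow[r, "r_T"] \arrow[ld, dashed] &
  \mathcal{A} \\
  \mathcal{B} \arrow[rru, "s"'] & &
\end{tikzcd}
\end{document}
```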
Let us consider the case when Q is an additive category and T is an additive functor. Then, up to natural isomorphisms, we have a commutative diagram

  Q → A_ad(Q) --T_*--> A,   with composite T,

where T_* is an exact functor. The kernel of T_* is a thick subcategory of A_ad(Q) and we define the abelian category A_ad(Q, T) to be the quotient of A_ad(Q) by this kernel. The induced factorization

  Q → A_ad(Q, T) → A

satisfies the universal property that defines A_qv(Q, T). Indeed, consider a factorization of the representation T of the quiver Q

  Q --R--> B --s--> A

where B is an abelian category, R is a representation and s is a faithful exact functor. Since s is faithful, R must be an additive functor. Therefore, R extends to an exact functor A_ad(Q) → B compatible, up to natural isomorphisms, with T_*. The exactness and the faithfulness of s imply that this functor kills the kernel of T_* and hence factors through an exact functor A_ad(Q, T) → B, again up to natural isomorphisms. This shows the desired universal property.

Let us finally consider the special case when S is a triangulated category. In that case the additive category S has pseudo-kernels and pseudo-cokernels; in particular, the category A_tr(S) := R(S) is an abelian category.¹ The Yoneda embedding h_S : S → A_tr(S) is a homological functor and is universal for this property (see [63, Theorem 5.1.18]). In particular, if A is an abelian category, any homological functor H : S → A admits a canonical factorization

  S --h_S--> A_tr(S) --ρ_H--> A

where ρ_H is an exact functor. This factorization of H is universal among all such factorizations. The kernel of ρ_H is a thick subcategory of A_tr(S) and we define the abelian category A_tr(S, H) to be the quotient of A_tr(S) by this kernel.
By construction, the functor ρ_H has a canonical factorization

  A_tr(S) --π_H--> A_tr(S, H) --r_H--> A

where π_H is an exact functor and r_H is a faithful exact functor. Setting H_S := π_H ∘ h_S, it provides a canonical factorization of H

  S --H_S--> A_tr(S, H) --r_H--> A

where H_S is a homological functor and r_H a faithful exact functor. It is easy to see that the above factorization is universal among all factorizations of H of the form

  S --L--> B --s--> A

where L is a homological functor and s is a faithful exact functor. We can also see the triangulated category S simply as a quiver (resp. an additive category) and the homological functor H : S → A simply as a representation (resp. an additive functor). In particular, we have at our disposal the universal factorizations of the representation H

  S → A_qv(S, H) → A   and   S → A_ad(S, H) → A

where the arrows on the right are faithful exact functors. Consider a factorization

  S --R--> B --s--> A

where B is an abelian category, R is an additive functor and s is a faithful exact functor. Since s is faithful, R must be homological. Therefore, R extends to an exact functor A_tr(S) → B compatible, up to natural isomorphisms, with ρ_H, and the exactness and the faithfulness of s imply that this functor factors through an exact functor A_tr(S, H) → B. This shows that the factorization S → A_tr(S, H) → A also satisfies the universal properties defining the two factorizations above, which gives the desired universal property.

2 Perverse motives

We fix a field k that admits an embedding σ : k → C. Unless otherwise specified, we will only consider quasi-projective k-varieties in this article.

2.1 Definition

Let X be a quasi-projective k-variety.
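Summarizing (a sketch using only the universal properties above): regarding S as a quiver, an additive category, or a triangulated category yields three factorizations of H, related by canonical comparison functors.

```latex
% Sketch: the three universal factorizations of H : S -> A and their
% comparison functors, all provided by the universal properties above.
\begin{align*}
  S &\longrightarrow \mathcal{A}^{\mathrm{qv}}(S,H) \longrightarrow \mathcal{A}
     && (S \text{ viewed as a quiver}),\\
  S &\longrightarrow \mathcal{A}^{\mathrm{ad}}(S,H) \longrightarrow \mathcal{A}
     && (S \text{ viewed as an additive category}),\\
  S &\longrightarrow \mathcal{A}^{\mathrm{tr}}(S,H) \longrightarrow \mathcal{A}
     && (S \text{ viewed as a triangulated category}),
\end{align*}
% with canonical exact comparison functors
\begin{equation*}
  \mathcal{A}^{\mathrm{qv}}(S,H) \longrightarrow
  \mathcal{A}^{\mathrm{ad}}(S,H) \longrightarrow
  \mathcal{A}^{\mathrm{tr}}(S,H),
\end{equation*}
% which the universal-property argument above shows to be equivalences.
```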
We denote by X^an the complex analytic space associated with the base change of X along σ, by D^b_c(X, Q) the category of complexes of sheaves of Q-vector spaces on X^an with bounded algebraically constructible cohomology, by P(X) the heart of the perverse t-structure on D^b_c(X, Q) introduced in [19, Section 2] for the self-dual perversity, and by DA_ct(X) the triangulated category of constructible étale motivic sheaves with rational coefficients (see for example [9, Section 3]). By [25, Theorem 16.2.18], this last category is equivalent to the category of constructible Beĭlinson motives studied in Cisinski and Déglise's book [25], and the equivalence commutes with the operations we will consider later (direct and inverse images and tensor product). So we will use references to Ayoub's articles or to the book [25], as convenient.

To construct the abelian category of perverse motives M(X) used in the present work, we take S to be the triangulated category DA_ct(X) and H to be the homological functor ^pH^0_P obtained by composing the Betti realization Bti*_X : DA_ct(X) → D^b_c(X, Q) with the perverse cohomology functor, and we set

  M(X) := A_tr(S, H) = A_tr(DA_ct(X), ^pH^0_P).

By construction the functor ^pH^0_P has a factorization

  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)

where rat^M_X is a faithful exact functor and ^pH^0_M is a homological functor.
Let us recall the two consequences (denoted by P1 and P2 below) of the universal property of the factorization

  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)

of ^pH^0_P.

(P1) Given a square of functors

  DA_ct(X) --^pH^0_P--> P(X)
     |F                   |G
  DA_ct(Y) --^pH^0_P--> P(Y)

where F is a triangulated functor, G is an exact functor and α : G ∘ ^pH^0_P → ^pH^0_P ∘ F is an invertible natural transformation, there exists a commutative diagram

  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)
     |F                   |E                 |G
  DA_ct(Y) --^pH^0_M--> M(Y) --rat^M_Y--> P(Y)

where E is an exact functor and β, γ are invertible natural transformations in the left-hand and right-hand squares, such that the pasting of β and γ recovers α through the factorization of ^pH^0_P.

(P2) The lifts provided by P1 are compatible with the composition of such squares: given two vertically composable squares as in P1, together with invertible connection 2-isomorphisms relating their composite to a third square, the induced exact functors E and natural transformations satisfy the corresponding compatibility, up to canonical invertible 2-isomorphisms.

2.2 Lifting of 2-functors

As in [6, §1.1], in this work, we only consider strict 2-categories. However, as in loc. cit., 2-functors are not necessarily strict (see also [29]). Let (Sch/k) be the category of quasi-projective k-varieties and C be a subcategory of (Sch/k). The properties P1 and P2 can be used to lift (covariant or contravariant) 2-functors. Indeed, let F : C → TR be a 2-functor (say covariant, to fix the notation), where TR is the 2-category of triangulated categories, such that F(X) = DA_ct(X) for every k-variety X in C. Similarly, let Ab be the 2-category of abelian categories, and let G : C → Ab be a 2-functor such that G(X) = P(X) for every k-variety X in C and that G(f) is exact for every morphism f in C. Assume that (Θ, α) : F → G is a 1-morphism of 2-functors such that Θ_X = ^pH^0_P for every X ∈ C and that α_f is invertible for every morphism f in C. Let f : X → Y be a morphism in C.
By applying P1 to the square relating F(f) and G(f) through α_f, we get a commutative diagram

  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)
     |F(f)                |E(f)              |G(f)
  DA_ct(Y) --^pH^0_M--> M(Y) --rat^M_Y--> P(Y)

where E(f) is an exact functor and β_f, γ_f are invertible natural transformations compatible with α_f. Let X --f--> Y --g--> Z be morphisms in C. By applying P2 to the commutative diagram formed by the squares of f, g and g ∘ f, together with the connection 2-isomorphisms c_F(f, g) and c_G(f, g), we obtain connection 2-isomorphisms for the lifted functors E(f), E(g) and E(g ∘ f), so that E extends to a 2-functor.

2.3 Betti realization of étale motives

Let f : X → Y be a morphism of quasi-projective k-varieties. Recall that the category D^b_c(X, Q) is equivalent to the derived category of the abelian category of perverse sheaves on X via the realization functor constructed in [19, 3.1.9] (it is known to be an equivalence by [18, Theorem 1.3]). In particular, the four (adjoint) functors

  f^P_*, f^P_! : D^b_c(X, Q) → D^b_c(Y, Q),   f^*_P, f^!_P : D^b_c(Y, Q) → D^b_c(X, Q)

can be seen as functors between the derived categories of perverse sheaves (for their construction in terms of perverse sheaves see [18]). Let Bti*_X : DA_ct(X) → D^b_c(X, Q) be the realization functor of [8]. If f : X → Y is a morphism of quasi-projective k-varieties, by construction, there is an invertible natural transformation θ_f : f^*_P ∘ Bti*_Y → Bti*_X ∘ f^* (see [8, Proposition 2.4]).
Let θ be the collection of these natural transformations; then (Bti*, θ) is a morphism of stable homotopical 2-functors in the sense of [8, Definition 3.1]. Following the notation in [8], we denote by

  γ_f : Bti*_Y ∘ f_* → f^P_* ∘ Bti*_X;   ρ_f : f^P_! ∘ Bti*_X → Bti*_Y ∘ f_!;   ξ_f : Bti*_X ∘ f^! → f^!_P ∘ Bti*_Y

the induced natural transformations. By [8, Théorème 3.19] these transformations are invertible.

2.4 Direct images under affine and quasi-finite morphisms

Let QAff(Sch/k) be the subcategory of (Sch/k) with the same objects but in which we only retain the morphisms that are quasi-finite and affine. By [19, Corollaire 4.1.3], for such a morphism f : X → Y, the functors f^P_*, f^P_! : D^b_c(X, Q) → D^b_c(Y, Q) are t-exact for the perverse t-structures. In particular, they induce exact functors between categories of perverse sheaves and, by applying the property P1 to the canonical transformation γ_f : Bti*_Y ∘ f_* → f^P_* ∘ Bti*_X, we get a commutative diagram

  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)
     |f_*                 |f^M_*             |f^P_*
  DA_ct(Y) --^pH^0_M--> M(Y) --rat^M_Y--> P(Y)

where f^M_* is an exact functor and the natural transformations γ^DA_f, γ^M_f in the two squares are invertible and compatible with γ_f. Moreover, since the natural transformations γ_f are compatible with the composition of morphisms (that is, with the connection 2-isomorphisms), Subsection 2.2 provides a 2-functor QAff H^M_* : QAff(Sch/k) → TR with QAff H^M_*(X) = D^b(M(X)), and such that (^pH^0_M, γ^DA) and (rat^M, γ^M) are 1-morphisms of 2-functors. For every affine and quasi-finite morphism f : X → Y we have a natural transformation γ^M_f : rat^M_Y ∘ f^M_* → f^P_* ∘ rat^M_X compatible with the composition of morphisms.

2.5 Inverse image by a smooth morphism

Let f : X → Y be a smooth morphism of k-varieties. Then Ω_f is a locally free O_X-module of finite rank. Let d_f be its rank (which is constant on each connected component of X). Then d_f is the relative dimension of f (see [40, (17.10.2)]) and, if g : Y → Z is a smooth morphism, then d_{g∘f} = d_g + d_f with the obvious abuse of notation (see [40, (17.10.3)]).
By [19, 4.2.4], the functor f^*_P[d_f] : D^b_c(Y, Q) → D^b_c(X, Q) is t-exact for the perverse t-structures. In particular, it induces an exact functor between the categories of perverse sheaves and, by applying the property P1 to the canonical transformation θ_f : f^*_P ∘ Bti*_Y → Bti*_X ∘ f^*, we get a commutative diagram

  DA_ct(Y) --^pH^0_M--> M(Y) --rat^M_Y--> P(Y)
     |f^*[d_f]            |f^*_M[d_f]        |f^*_P[d_f]
  DA_ct(X) --^pH^0_M--> M(X) --rat^M_X--> P(X)

where the functor in the middle, f^*_M[d_f], is an exact functor and θ^DA_f, θ^M_f are invertible natural transformations in the two squares, compatible with θ_f.

Remark 2.2. Note that f^*_M A, for A in M(Y), is not yet defined; we set f^*_M := (f^*_M[d_f])[−d_f].

Let Liss(Sch/k) be the subcategory of (Sch/k) with the same objects but having as morphisms only the smooth morphisms of k-varieties. Since the natural transformations θ_f are compatible with the composition of morphisms (that is, with the connection 2-isomorphisms), Subsection 2.2 provides, for every smooth morphism f, a natural transformation θ^M_f : f^*_P ∘ rat^M_Y → rat^M_X ∘ f^*_M compatible with the composition of morphisms.

2.6 Exchange structure

Let us denote by Imm H^M_* : Imm(Sch/k) → TR the restriction of the 2-functor obtained in Subsection 2.4 to the subcategory Imm(Sch/k) of (Sch/k) with the same objects but having as morphisms only the closed immersions of k-varieties. Exchange structures are defined in Définition 1.2.1 of [6].

Proof. As for Proposition 2.3, the proof is a simple application of property P2. The details are left to the reader.

2.7 Duality

The result in this subsection will be used in the proof of Proposition 5.3.
Let D^P_X be the duality functor for perverse sheaves and ε^P_X : Id → D^P_X ∘ D^P_X be the canonical 2-isomorphism. Recall that, given a smooth morphism f : X → Y of relative dimension d, there is a canonical 2-isomorphism ε^P_f : D^P_X ∘ f^*_P(−)(d)[d] → f^*_P(−)[d] ∘ D^P_Y, stemming from the purity isomorphism f^! ≃ f^*(d)[2d] (see [25, A.5.2]). The last crucial observation is that Verdier duality on D^b_c(X, Q) restricts to an exact contravariant functor on the subcategory of perverse sheaves (see for example the beginning of Section 4 of [19]).

Proposition 2.6. Let X, Y be k-varieties and f : X → Y be a smooth morphism of relative dimension d. (1) There exist a contravariant exact functor D^M_X : M(X) → M(X), a 2-isomorphism ν^M_X : D^P_X ∘ rat^M_X → rat^M_X ∘ D^M_X and a 2-isomorphism ε^M_X : Id → D^M_X ∘ D^M_X.

2.8 Perverse motives as a stack

Let S be a quasi-projective k-variety. Let us denote by AffEt/S the category of affine étale schemes over S endowed with the étale topology. As in [71, Tag 02XU], the 2-functor

  AffEt/S → Ab,   U ↦ M(U),   u ↦ u^*_M,

can be turned into a fibered category M → AffEt/S such that the fiber over an object U of AffEt/S is the category M(U).

Proposition. The fibered category M → AffEt/S is a stack for the étale topology.

Proof. Let U be a k-variety, I be a finite set and U = (u_i : U_i → U)_{i∈I} be a covering of U by affine and étale morphisms. If J ⊆ I is a nonempty subset of I, we denote by U_J the fiber product of the U_j, j ∈ J, over U and by u_J : U_J → U the induced morphism. Given an object A ∈ M(U) and k ∈ Z, we set

  C^k(A, U) := 0 if k < 0;   C^k(A, U) := ⊕_{J⊆I, |J|=k+1} (u_J)^M_* (u_J)^*_M A if k ≥ 0.

We make C^•(A, U) into a complex using the alternating sum of the maps obtained from the unit of the adjunction in Proposition 2.5. The unit of this adjunction also provides a canonical morphism A → C^•(A, U) in C^b(M(U)). This morphism induces a quasi-isomorphism on the underlying complex of perverse sheaves and so is a quasi-isomorphism itself, since the forgetful functor to the derived category of perverse sheaves is conservative.
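For instance (unwinding the definition for a covering with two elements, I = {1, 2}; abbreviations as in the text), the complex C^•(A, U) has only two nonzero terms.

```latex
% Worked example (illustration): a covering with two elements, I = {1,2}.
% Abbreviating (u_i)^*_M, (u_i)^M_* to u_i^*, u_{i*}, the complex C^*(A, U) is
\begin{equation*}
  0 \longrightarrow
  u_{1*}u_1^*A \,\oplus\, u_{2*}u_2^*A
  \xrightarrow{\;d^0\;}
  u_{12*}u_{12}^*A
  \longrightarrow 0,
\end{equation*}
% where d^0 is the difference of the two maps obtained from the unit of the
% adjunction (restriction from U_1, resp. U_2, to U_{12}), and the augmentation
\begin{equation*}
  A \longrightarrow u_{1*}u_1^*A \oplus u_{2*}u_2^*A
\end{equation*}
% is given by the units. On underlying perverse sheaves this is the usual
% Cech complex of the covering, whence the quasi-isomorphism asserted above.
```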
By [71, Tag 0268], to prove the proposition we have to show the following:

(1) if U is an object in AffEt/S and A, B are objects in M(U), then the presheaf (V --v--> U) ↦ Hom_{M(V)}(v^*_M A, v^*_M B) on AffEt/U is a sheaf for the étale topology;

(2) for any covering U = (u_i : U_i → U)_{i∈I} of the site AffEt/S, any descent datum is effective.

We already know that the similar assertions are true for perverse sheaves by [19]. Let K and L be the perverse sheaves underlying A and B, and consider the commutative diagram

  Hom(A, B) → ∏_{i∈I} Hom((u_i)^*_M A, (u_i)^*_M B) ⇉ ∏_{i,j∈I} Hom((u_{ij})^*_M A, (u_{ij})^*_M B)
  Hom(K, L) → ∏_{i∈I} Hom((u_i)^*_P K, (u_i)^*_P L) ⇉ ∏_{i,j∈I} Hom((u_{ij})^*_P K, (u_{ij})^*_P L).

The lower row is exact and the vertical arrows are injective. We only have to check that the upper row is exact at the middle term. Let c be an element in ∏_{i∈I} Hom((u_i)^*_M A, (u_i)^*_M B) which belongs to the equalizer of the two maps on the right-hand side. Then it defines (by adjunction) morphisms c_0 and c_1 such that the square

  C^0(A, U) --d^0--> C^1(A, U)
     |c_0                |c_1
  C^0(B, U) --d^0--> C^1(B, U)   (3)

is commutative. Since A → C^•(A, U) and B → C^•(B, U) are quasi-isomorphisms, A is the kernel of the upper map in (3) and B is the kernel of the lower map. Hence, c_0 and c_1 induce a morphism A → B in M(U) which maps to c.

Now we prove (2). Consider a descent datum, that is, for every i ∈ I an object A_i in M(U_i) and, for every i, j ∈ I, an isomorphism φ_{ij} : (p_{ij})^*_M A_i → (p_{ji})^*_M A_j in M(U_{ij}) satisfying the usual cocycle condition. Let A be the kernel of the map

  ⊕_{i∈I} (u_i)^M_* A_i → ⊕_{i,j∈I} (u_i)^M_* (p_{ij})^M_* (p_{ij})^*_M A_i = ⊕_{i,j∈I} (u_{ij})^M_* (p_{ij})^*_M A_i

given on (u_i)^M_* A_i by the difference of the maps obtained by composing the morphism induced by adjunction (u_i)^M_* A_i → (u_i)^M_* (p_{ij})^M_* (p_{ij})^*_M A_i with either the identity or the isomorphism φ_{ij}.
Using the fact that descent data on perverse sheaves are effective, it is easy to see that A makes the given descent datum effective.

2.9 A simpler generating quiver

Let X be a k-variety. Consider the quiver Pairs^eff_X defined as follows. A vertex in Pairs^eff_X is a triple (a : Y → X, Z, n) where a : Y → X is a morphism of k-varieties, Z is a closed subscheme of Y and n ∈ Z is an integer.

• Let (Y_1, Z_1, i) and (Y_2, Z_2, i) be vertices in Pairs^eff_X. Then every morphism of X-schemes f : Y_1 → Y_2 such that f(Z_1) ⊆ Z_2 defines an edge

  f : (Y_1, Z_1, i) → (Y_2, Z_2, i).   (4)

• For every vertex (a : Y → X, Z, i) in Pairs^eff_X and every closed subscheme W ⊆ Z, we have an edge

  ∂ : (a : Y → X, Z, i) → (az : Z → X, W, i − 1)   (5)

where z : Z ↪ Y is the closed immersion.

The quiver Pairs^eff_X admits a natural representation in D^b_c(X, Q). If c = (a : Y → X, Z, i) is a vertex in the quiver Pairs^eff_X and u : U ↪ Y is the inclusion of the complement of Z in Y, then we set

  B(c) := a^P_! u^P_* K_U[−i]

where K_U is the dualizing complex of U. (In loc. cit. the relative dualizing complex u^!_P a^!_P Q_X is used instead of the absolute dualizing complex K_U; if X is smooth, the two different choices lead to equivalent categories.)

On edges the representation B is defined as follows. Let c_1 := (a_1 : Y_1 → X, Z_1, i), c_2 := (a_2 : Y_2 → X, Z_2, i) be vertices in Pairs^eff_X and f : c_1 → c_2 be an edge of type (4). The morphism f maps Z_1 to Z_2 and therefore U := f^{−1}(U_2) is contained in U_1. Let u : U ↪ U_1 be the open immersion. Then we have a morphism

  f^P_! u^P_{1*} K_{U_1} --adj--> f^P_! (u_1 u)^P_* K_U → u^P_{2*} f^P_! K_U = u^P_{2*} f^P_! f^!_P K_{U_2} --adj--> u^P_{2*} K_{U_2}

where the arrow in the middle is given by the exchange morphism. By taking the image of this morphism under a^P_{2!}[−i], we get a morphism

  B(f) : B(c_1) := a^P_{1!} u^P_{1*} K_{U_1}[−i] → B(c_2) := a^P_{2!} u^P_{2*} K_{U_2}[−i].
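As a sanity check on the representation B (a standard computation, not spelled out in the text), one can identify the Betti side of a vertex over the point: for X = Spec k, B(c) is Verdier dual to a complex computing the relative cohomology of the pair (Y, Z).

```latex
% Sanity check (standard, not spelled out in the text). Take X = Spec k and
% c = (a : Y -> Spec k, Z, i), with u : U -> Y the complement of Z. Using
% D a_! = a_* D,  D u_* = u_! D  and  D K_U = Q_U, Verdier duality gives
\begin{align*}
  \mathbf{D}\,B(c)
  &= \mathbf{D}\big(a^P_! u^P_* K_U[-i]\big)
   \;\simeq\; a^P_* u^P_! \,\mathbf{D}(K_U)[i]
   \;=\; a^P_* u^P_! \,\mathbf{Q}_U[i],
\end{align*}
% whose n-th hypercohomology is the relative cohomology
\begin{equation*}
  \mathrm{H}^{n}\big(\mathbf{D}\,B(c)\big)
  \;\simeq\; \mathrm{H}^{\,n+i}(Y, Z; \mathbf{Q}).
\end{equation*}
% So the vertices (Y, Z, i) of Pairs^eff_X encode (duals of) relative
% cohomology groups, as in Nori's original construction.
```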
Let c = (Y --a--> X, Z, i) be a vertex in Pairs^eff_X, and W ⊆ Z be a closed subset. Consider the commutative diagram

  U := Y \ Z --j--> Y \ W --v_Y--> Y --a--> X
  V := Z \ W --v--> Z --z--> Y,   z_V : V ↪ Y \ W,   b := az : Z → X,

where v, v_Y, j are the open immersions, z and z_V the closed immersions and a, b the structural morphisms. The localization triangle (z_V)^P_! (z_V)^!_P → id → j^P_* j^*_P --+1-->, applied to K_{Y\W}, provides a morphism j^P_* K_U → (z_V)^P_! K_V[1]. As z and z_V are closed immersions, applying (v_Y)_* yields a morphism u^P_* K_U → z^P_! v^P_* K_V[1]. Applying a_![−i], we obtain a morphism

  B(∂) : B(c) := a^P_! u^P_* K_U[−i] → B(az : Z → X, W, i − 1) := b^P_! v^P_* K_V[1 − i].

The category of perverse Nori motives considered in [48] is defined as follows.

Definition 2.9. Let X be a k-variety. The category of effective perverse Nori motives is the abelian category N^eff(X) := A_qv(Pairs^eff_X, ^pH^0 ∘ B).

Recall that the category M(X) can also be obtained by considering DA_ct(X) simply as a quiver, that is, it is canonically equivalent to the abelian category A_qv(DA_ct(X), ^pH^0 ∘ Bti*_X) (see Lemma 1.5). The Grothendieck six operations formalism constructed in [6, 7] and its compatibility with its topological counterpart on the triangulated categories D^b_c(X, Q) shown in [8] imply that the quiver representation B can be lifted via the realization functor Bti*_X to a quiver representation B̃ : Pairs^eff_X → DA_ct(X). In particular, since the diagram

  Pairs^eff_X --B̃--> DA_ct(X) --Bti*_X--> D^b_c(X, Q),   with B = Bti*_X ∘ B̃,

is commutative (up to natural isomorphisms), there exists a canonical faithful exact functor

  N^eff(X) → M(X).   (6)

Let us explain now how Tate twists can be defined in the categories N^eff(X) and M(X). In the category DA_ct(X), the Tate twist (−)(1) is defined to be the endofunctor Th(O_X)(−)[−2] where Th(O_X) is the Thom equivalence associated with the trivial locally free sheaf O_X (see [6, §1.5.3]).
This construction, being compatible with the usual Tate twist via the Betti realization, induces an exact functor (−)(1) on the category M(X). Note that this functor is an equivalence by construction. In the category N^eff(X), Tate twists can be defined using the following observation: if S is a k-variety, q : G_{m,S} → S is the structural morphism and v : V ↪ G_{m,S} is the complement of the unit section, then

  q_! v_* v^* q^! K = K(1)[1]

for every K ∈ D^b_c(S, Q). In particular, if Q : Pairs^eff_X → Pairs^eff_X is the morphism of quivers which maps (Y, Z, n) to (G_{m,Y}, G_{m,Z} ∪ Y, n + 1) (here Y is embedded in G_{m,Y} via the unit section), then one has a natural isomorphism between B(Q(Y, Z, n)) and B(Y, Z, n)(1). As a consequence, the Tate twist on the category of effective perverse Nori motives can be defined as the exact functor induced by the morphism of quivers Q (and the usual Tate twist). This last construction does not yield an equivalence and one defines the category N(X) to be the category obtained from N^eff(X) by inverting the Tate twists (see [48, 7.6] for details). By construction, the category of Nori motives HM(k) of [33] coincides with N(k).

Lemma 2.10. The functor (6) extends to a faithful exact functor

  N(X) → M(X).   (7)

Proof. To prove the lemma it is enough to observe that there is a natural isomorphism in DA_ct(X) between B̃(Q(Y, Z, n)) and B̃(Y, Z, n)(1).

Proposition 2.11. The category M(k) is canonically equivalent to the abelian category of Nori motives HM(k). More precisely, the functor (7) is an equivalence when X = Spec(k).

Proof. (See also [14, Proposition 4.12].) Consider the triangulated functor R_{N,s} : DA_ct(k) → D^b(HM(k)) constructed in [24, Proposition 7.12]. Up to a natural isomorphism, the composition

  DA_ct(k) --R_{N,s}--> D^b(HM(k)) --forgetful--> D^b(Q)

coincides with Bti*_k.
In particular, it provides a factorization of the cohomological functor H^0 ∘ Bti*_k

  DA_ct(k) --H^0 ∘ R_{N,s}--> HM(k) --forgetful--> vec(Q).

This implies the existence of a canonical faithful exact functor M(k) → HM(k) such that the composition

  DA_ct(k) --^pH^0_M--> M(k) → HM(k)

is isomorphic to H^0 ∘ R_{N,s}. Using the universal properties, it is easy to see that it is a quasi-inverse to (7).

The following conjecture seems reasonable and reachable via our current technology.

Conjecture 2.12. Let X be a smooth k-variety. Let N(X) be the category of perverse motives constructed in [48] and RL^N_X : DA_ct(X) → D^b(N(X)) be the triangulated functor constructed in [47]. Then the Betti realization Bti*_X is isomorphic to the composition

  DA_ct(X) --RL^N_X--> D^b(N(X)) --forgetful--> D^b(P(X)) --real--> D^b_c(X, Q).

If Conjecture 2.12 holds, then the same proof as the one of Proposition 2.11 implies the following.

Conjecture 2.13. Let X be a smooth k-variety. Then the functor (7) is an equivalence.

3 Unipotent nearby and vanishing cycles

In [17], Beĭlinson has given an alternative construction of the unipotent vanishing cycles functor for perverse sheaves and has used it to explain a gluing procedure for perverse sheaves (see [17, Proposition 3.1]). In this section, our main goal is to obtain similar results for perverse Nori motives. Later on, the vanishing cycles functors for perverse Nori motives will play a crucial role in the construction of the inverse image functor (see Section 4). Given the way the abelian categories of perverse Nori motives are constructed from the triangulated categories of étale motives, our first step is to carry out Beĭlinson's constructions for perverse sheaves within the categories of étale motives or analytic motives (the latter categories being equivalent to the classical unbounded derived categories of sheaves of Q-vector spaces on the associated analytic spaces). This is done in Subsection 3.2 and Subsection 3.4.
Our starting point is the logarithmic specialization system constructed by Ayoub in [7]. However, by working in triangulated categories instead of abelian categories as Beĭlinson did, one has to face the classical functoriality issues, one of the major drawbacks of triangulated categories. To avoid these problems and ensure that all our constructions are functorial, we will rely heavily on the fact that the triangulated categories of motives underlie a triangulated derivator. Only then, using the compatibility with the Betti realization, will we be able to obtain in Subsection 3.5 the desired functors for perverse Nori motives.

3.1 Reminder on derivators

Let us recall some features of triangulated (a.k.a. stable) derivators D needed in the construction of the motivic unipotent vanishing cycles functor and the related exact triangles. For the general theory, originally introduced by Grothendieck [41], we refer to [6, 7, 26, 38, 39, 59]. We will assume that our derivator D is defined over all small categories. In our applications, the derivators considered will be of the form D := DA(S, −) for some k-variety S. Given a functor ρ : A → B, we denote by ρ^* : D(B) → D(A) the structural functor and by ρ_*, ρ_♯ : D(A) → D(B) its right and left adjoints. Note that in the literature on derivators, the notation ρ_! is often used instead of ρ_♯. We follow here the notation used in [6, 7].

Notation. We let e be the punctual category reduced to one object and one morphism. Given a small category A, we denote by p_A : A → e the projection functor and, if a is an object in A, we denote by a : e → A the functor that maps the unique object of e to a. Given n ∈ N, we let n be the category n ← ··· ← 1 ← 0. If one thinks of functors in Hom(A^op, D(e)) as diagrams, then an object in D(A) can be thought of as a "coherent diagram".
Indeed, every object M in D(A) has an underlying diagram, called its A-skeleton, defined to be the functor A^op → D(e) which maps an object a in A to the object a^* M of D(e). This construction gives the A-skeleton functor D(A) → Hom(A^op, D(e)), which is not an equivalence in general (coherent diagrams are richer than plain diagrams). We say that M ∈ D(A) is a coherent lifting of a given diagram of shape A if its A-skeleton is isomorphic to the given diagram. We will not give here the definition of a stable derivator (see e.g. [39]), but instead recall a few properties which will be constantly used.

(1) Let ρ : A → B be a functor and b be an object in B. Denote by j_{A/b} : A/b → A and j_{b\A} : b\A → A the canonical functors, where A/b and b\A are respectively the slice and coslice categories. The exchange 2-morphisms (given by adjunction)

  b^* ρ_* → (p_{A/b})_* j^*_{A/b};   (p_{b\A})_♯ j^*_{b\A} → b^* ρ_♯

are invertible (see [6]).

(2) If a small category A admits an initial object o (resp. a final object o), then the 2-morphism o^* → (p_A)_♯ (resp. the 2-morphism (p_A)_* → o^*) is invertible too (see [6, Corollaire 2.1.40]).

(3) If ρ : A → B is fully faithful, then the functors ρ_* and ρ_♯ are fully faithful as well, i.e. the 2-morphisms ρ^* ρ_* → id and id → ρ^* ρ_♯ are invertible.

Let □ be the category

  (1, 1) ← (0, 1)
     ↑         ↑
  (1, 0) ← (0, 0).   (8)

We denote by ⌜ the full subcategory of □ that does not contain the object (0, 0) and by i_⌜ : ⌜ → □ the inclusion functor. We denote by (−, 1) : 1 → □ the fully faithful functor which maps 0 to (0, 1) and 1 to (1, 1). Similarly, we denote by ⌟ the full subcategory of □ that does not contain the object (1, 1) and by i_⌟ : ⌟ → □ the inclusion functor. We denote by (0, −) : 1 → □ the fully faithful functor that maps 0 and 1 respectively to (0, 0) and (0, 1). An object M in D(□) is said to be cocartesian (resp. cartesian) if and only if the canonical morphism (i_⌜)_♯ (i_⌜)^* M → M (resp. M → (i_⌟)_* (i_⌟)^* M) is an isomorphism. Since D is stable, a square M in D(□) is cartesian if and only if it is cocartesian. Let ⊟ be the category

  (2, 1) ← (1, 1) ← (0, 1)
     ↑         ↑         ↑
  (2, 0) ← (1, 0) ← (0, 0).   (9)
There are three natural ways to embed □ in ⊟, and an object M ∈ D(⊟) is said to be cocartesian if the squares in D(□) obtained by pullback along those embeddings are cocartesian. A coherent triangle is a cocartesian object M ∈ D(⊟) such that (0, 1)^* M and (2, 0)^* M are zero. For such an object, we have a canonical isomorphism (0, 0)^* M ≃ (2, 1)^* M[1] and the induced sequence

  (2, 1)^* M → (1, 1)^* M → (1, 0)^* M → (2, 1)^* M[1]   (10)

is an exact triangle in D(e). One of the main advantages of working in a stable derivator is the possibility of functorially associating a coherent triangle with a coherent morphism M ∈ D(1). Let us briefly recall the construction of this triangle. Let U be the full subcategory of ⊟ that does not contain (0, 0) and (1, 0). Denote by v : 1 → U the functor that maps 0 and 1 respectively to (1, 1) and (2, 1), and by u : U → ⊟ the inclusion functor. The image under the functor u_♯ v_* : D(A × 1) → D(A × ⊟) of a coherent morphism M in D(A × 1) is a coherent triangle. Using the properties (1)–(2) recalled above, we see that (10) provides an exact triangle

  1^* M → 0^* M → Cof(M) → 1^* M[1]   (11)

where the cofiber functor Cof is defined by

  Cof := (1, 0)^* u_♯ v_* : D(1) → D(e).   (12)

Using the properties (1)–(3) recalled above, it is easy to see that this functor is also given by Cof = (0, 0)^* (i_⌜)_♯ (−, 1)_*. In the exact triangle (11), the canonical morphism 0^* M → Cof(M) is the 1-skeleton of the coherent morphism (1, −)^* u_♯ v_* M, where (1, −) : 1 → ⊟ is the fully faithful functor that maps 0 and 1 respectively to (1, 0) and (1, 1). Note that we have an isomorphism of functors (1, −)^* u_♯ v_* ≃ (0, −)^* (i_⌟)^* (i_⌜)_♯ (−, 1)_* : D(1) → D(1). Similarly, the boundary morphism Cof(M) → 1^* M[1] is the 1-skeleton of the coherent morphism (−, 0)^* u_♯ v_* M, where (−, 0) : 1 → ⊟ is the fully faithful functor that maps 0 and 1 respectively to (0, 0) and (1, 0).
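For orientation (a standard special case, not part of the text): in the derivator of complexes of modules over a ring, Cof is the usual cone, made functorial.

```latex
% Orientation (standard special case, not part of the text). Let D be the
% derivator associated with complexes of R-modules over a ring R, so that
% D(A) is the derived category of coherent A-shaped diagrams. A coherent
% morphism M in D(1) has 1-skeleton f : 1^*M -> 0^*M, and the triangle (11)
% becomes the usual cone triangle
\begin{equation*}
  1^*M \xrightarrow{\;f\;} 0^*M \longrightarrow \mathrm{Cone}(f)
  \longrightarrow 1^*M[1],
\end{equation*}
% i.e. Cof(M) computes Cone(f). The point of the derivator formalism is that
% M |-> Cof(M) is functorial in the coherent morphism M, which the cone in a
% bare triangulated category is not.
```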
The construction of the cofiber functor Cof and the cofiber triangle (11) can be dualized to get a fiber functor Fib and a fiber triangle. Let us recall the following lemma (see e.g. [39, Proposition 15.1.10] for a proof).

Lemma 3.1. For every M in D(□) there is a morphism of exact triangles

  Fib((−, 1)^* M) → (1, 1)^* M → (0, 1)^* M --+1-->
  Fib((−, 0)^* M) → (1, 0)^* M → (0, 0)^* M --+1-->

which is functorial in M. Furthermore, M is cartesian if and only if the canonical morphism Fib((−, 1)^* M) → Fib((−, 0)^* M) is an isomorphism.

There is also a functorial version of the octahedron axiom in D (see e.g. [39, Proof of Theorem 9.44]); that is, there is a functor D(2) → D(O) which associates with a coherent sequence of morphisms a coherent octahedron diagram. Here the category O ⊆ 4 × 2 is the full subcategory that does not contain the objects (4, 0) and (0, 2). In other words, O is the category

  (4, 2) ← (3, 2) ← (2, 2) ← (1, 2)
     ↑         ↑         ↑         ↑
  (4, 1) ← (3, 1) ← (2, 1) ← (1, 1) ← (0, 1)
                ↑         ↑         ↑         ↑
              (3, 0) ← (2, 0) ← (1, 0) ← (0, 0).

Let W be the full subcategory of O that does not contain the objects (1, 1), (2, 1), (3, 1), (0, 0), (1, 0) and (2, 0). Denote by ω : 2 → W the fully faithful functor which maps 0, 1 and 2 respectively to (2, 2), (3, 2) and (4, 2), and by w : W → O the inclusion functor. The octahedron diagram functor is defined to be the functor w_♯ ω_* : D(2) → D(O). Denote by sm : 1 → 2 the fully faithful functor that maps 0 and 1 respectively to 0 and 1, by fm : 1 → 2 the fully faithful functor that maps 0 and 1 respectively to 1 and 2, and by cm : 1 → 2 the functor which maps 0 and 1 respectively to 0 and 2. Consider the fully faithful functor fsq : □ → O which maps the square (8) to the square

  (4, 2) ← (3, 2)
     ↑         ↑
  (4, 1) ← (3, 1).

Similarly, we denote by ssq : □ → O (resp. csq : □ → O) the fully faithful functor which maps the square (8) to the square

  (3, 2) ← (2, 2)              (4, 2) ← (2, 2)
     ↑         ↑       (resp.     ↑         ↑
  (3, 0) ← (2, 0)              (4, 1) ← (2, 1)).
We have the following lemma.

Lemma 3.2. We have canonical isomorphisms fsq* w♯ ω* ≃ (i⌜)♯ (−,1)* fm*, ssq* w♯ ω* ≃ (i⌜)♯ (−,1)* sm* and csq* w♯ ω* ≃ (i⌜)♯ (−,1)* cm*.

Proof. Let i : ⌜ → W be the fully faithful functor that maps (0,1), (1,0) and (1,1) respectively to (3,2), (4,1) and (4,2). Since ω ∘ fm = i ∘ (−,1), we get a natural transformation i* ω* → (−,1)* fm*. Using the properties (Der1–3), it is easy to see that this natural transformation is invertible. Similarly, since w ∘ i = fsq ∘ i⌜, there is a natural transformation (i⌜)♯ i* → fsq* w♯. Again, using the properties (Der1–3), we see that it is invertible. This provides invertible natural transformations

(i⌜)♯ i* ω* → (i⌜)♯ (−,1)* fm*   and   (i⌜)♯ i* ω* → fsq* w♯ ω*.

The other invertible natural transformations are constructed similarly.

In particular, it follows from Lemma 3.2 that (3,1)* w♯ ω* is isomorphic to Cof ∘ fm*, (2,0)* w♯ ω* is isomorphic to Cof ∘ sm*, and (2,1)* w♯ ω* is isomorphic to Cof ∘ cm*. Since the inverse image of w♯ ω* along the fully faithful functor □ → O that maps the square (8) to the square with vertices (3,1), (2,1), (3,0) and (2,0) is a cocartesian square, by Lemma 3.2 and [6, Définition 2.1.34] we get a natural exact triangle

Cof(fm*(−)) → Cof(cm*(−)) → Cof(sm*(−)) → Cof(fm*(−))[1].    (13)

[A lemma recalled from loc. cit. asserts that, for a morphism α : M → N and C(−) the cone of the unit morphism to j* j*(−), the square with rows C(M) → M[1] and C(N) → N[1] (and vertical maps C(α) and α[1]) is commutative, and that, moreover, the whole diagram with rows M → j* j* M → C(M) → M[1] and N → j* j* N → C(N) → N[1] (and vertical maps α, j* j* α, C(α) and α[1]) is commutative.] Note that in loc. cit. the lemma is stated only in the case I = e; however, its proof works in the more general situation considered here. We will need the following technical lemma.

Lemma 3.4. [There is a functor ∆*f such that] the 1-skeleton of ∆*f(M) is M → f* f* M.

Proof. Consider the diagram of k-varieties (F, 1 × I) : 1 × I → (Sch/k) that maps (0, i) to Y and (1, i) to X, and the canonical morphisms of diagrams of k-varieties α : (F, 1 × I) → (X, 1 × I) and β : (F, 1 × I) → (X, I).
The functor ∆*f := β* α* satisfies the desired property.

Remark 3.5. [Assume I = e. Given M in DA(X), we have an exact triangle]

M → j* j* M → Cof(∆*j(M)) → M[1],

functorial in M. It follows from [6, Lemme 1.4.8] that the functor i! i!(−)[1] is isomorphic to Cof ∘ ∆*j(−). Similarly, we will need the following lemma; its proof is completely similar to the one of Lemma 3.4 and will be omitted.

Lemma 3.6. [There is a functor ∆!f such that] the 1-skeleton of ∆!f(A) is f♯ f* A → A.

Remark 3.7. Assume I = e. Given M in DA(X), as in Remark 3.5, we have an exact triangle

j! j! M → M → Cof(∆!j(M)) → j! j! M[1],

functorial in M. [It follows that] i* i*(−) is isomorphic to Cof ∘ ∆!j(−).

Motivic unipotent vanishing cycles functor

Let f : X → A1_k be a morphism of k-varieties. We consider the diagram of k-varieties with top row Xη → X ← Xσ, vertical morphisms fη, f, fσ, and bottom row Gm,k --j--> A1_k <--i-- Spec(k), where i denotes the zero section of A1_k and j the open immersion of the complement. We denote also by i the closed immersion of the special fiber Xσ in X and by j the open immersion of the generic fiber Xη in X. Let Log_f be the logarithmic specialization system constructed in [7, 3.6] (see also [9, p. 103–109]). It is defined by

Log_f := χ_f((−) ⊗ fη* Log∨) =: i* j*((−) ⊗ fη* Log∨),

where Log∨ is the commutative associative unitary algebra in DA(Gm,k) constructed in [7, Définition 3.6.29] (see also [9, Définition 11.6]). The monodromy triangle

Q(0) → Log∨ --N--> Log∨(−1) → Q(0)[1]    (14)

(see [7, Corollaire 3.6.21] or [9, (116)]) in the triangulated category DA(Gm,k) induces an exact triangle

χ_f(−) → Log_f(−) → Log_f(−)(−1) → χ_f(−)[1].

To construct the motivic unipotent vanishing cycles functor, we shall use the fact that the 1-skeleton functor DA(A1_k, 1) → Hom(1^op, DA(A1_k)) is full and essentially surjective. This allows us to choose an object L in DA(A1_k, 1) that lifts the morphism Q(0) → j* Log∨ obtained as the composition of the adjunction morphism Q(0) → j* Q(0) and the image under j* of the unit Q(0) → Log∨ of the commutative associative unitary algebra Log∨.
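To spell out how (14) induces the triangle on χ_f and Log_f (an elementary verification, included for convenience): pulling (14) back along f_η, tensoring with an object A over the generic fiber, and applying i^* j_* preserves exact triangles, so

```latex
% Tensor the monodromy triangle (14), pulled back along f_\eta, with
% A over X_\eta, then apply the triangulated functor i^* j_*:
A \to A \otimes f_\eta^*\mathscr{L}og^\vee
  \to A \otimes f_\eta^*\mathscr{L}og^\vee(-1) \xrightarrow{+1}
\quad\leadsto\quad
\chi_f(A) \to \operatorname{Log}_f(A) \to \operatorname{Log}_f(A)(-1)
  \xrightarrow{+1}.
% Here \chi_f = i^* j_* and
% \operatorname{Log}_f = \chi_f\bigl((-)\otimes f_\eta^*\mathscr{L}og^\vee\bigr).
```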
Moreover, using the monodromy triangle (14), we can fix an isomorphism between Log∨(−1) and the cofiber of j* L such that the diagram with rows

Q(0) → Log∨ → Log∨(−1) → Q(0)[1]   and   Q(0) → Log∨ → Cof(j* L) → Q(0)[1]

is commutative. Consider the object Q□ := ∆*j(L) in DA(A1_k, □) obtained by applying the functor ∆*j of Lemma 3.4. Its □-skeleton is the commutative square with rows

Q(0) → j* Log∨   and   j* Q(0) → j* Log∨.

Let ⌞ be the full subcategory of □ that does not contain (0,1). Denote by i⌞ : ⌞ → □ the inclusion and by p□,⌞ : □ → ⌞ the unique functor which is the identity on ⌞ and maps (0,1) to (0,0). Consider the functor Θ_f(−) : DA(X) → DA(X, □) [obtained from Q□ by means of the functors i⌞ and p□,⌞]. By construction, Θ_f(−) is a coherent lifting of the commutative square with rows

Id(−) → j*(j*(−) ⊗ fη* Log∨)   and   j* j*(−) → j*(j*(−) ⊗ fη* Log∨).

By pulling back along the closed immersion i : Xσ ↪ X, we get the functor i* Θ_f(−) : DA(X) → DA(Xσ, □), which is a coherent lifting of the commutative square with rows

i*(−) → Log_f(j*(−))   and   χ_f(j*(−)) → Log_f(j*(−)).

Let (−,1) : 1 → □ be the fully faithful functor that maps 0 and 1 respectively to (0,1) and (1,1). In particular, the 1-skeleton of (−,1)* i* Θ_f(−) is the morphism i*(−) → Log_f(j*(−)).

Definition 3.8. The motivic unipotent vanishing cycles functor Φ_f : DA(X) → DA(Xσ) is defined as the composition of (−,1)* i* Θ_f(−) and the cofiber functor:

Φ_f := Cof ∘ (−,1)* i* Θ_f(−).

By construction, we get a natural transformation can : Log_f(−) ∘ j* → Φ_f(−) and an exact triangle

i* → Log_f(−) ∘ j* --can--> Φ_f → i*[1].    (15)

We also get a natural transformation var : Φ_f(−) → Log_f(j*(−))(−1) such that var ∘ can = N. Indeed, let (−,0) : 1 → □ be the fully faithful functor that maps 0 and 1 respectively to (0,0) and (1,0).
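In classical notation, Definition 3.8 and (15) produce exactly the expected (can, var, N) package (a restatement of what precedes, not a new construction; the comparison with the topological functors is the object of Lemma 3.13 and Corollary 3.14 below):

```latex
i^* A \longrightarrow \operatorname{Log}_f(j^* A)
      \xrightarrow{\ \mathrm{can}\ } \Phi_f(A) \xrightarrow{+1},
\qquad
\mathrm{var} : \Phi_f(A) \to \operatorname{Log}_f(j^* A)(-1),
\qquad
\mathrm{var}\circ\mathrm{can} = N .
```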
The chosen isomorphism between Log∨(−1) and the cofiber of j* L induces an isomorphism between Log_f(j*(−))(−1) and the cofiber of (−,0)* i* Θ_f(−) such that the diagram with rows

χ_f(j*(−)) → Log_f(j*(−)) --N--> Log_f(j*(−))(−1) → [1]   and   χ_f(j*(−)) → Log_f(j*(−)) → Cof((−,0)* i* Θ_f(−)) → [1]

is commutative. On the other hand, the canonical morphism (−,1)* Θ_f(−) → (−,0)* Θ_f(−) in DA(X, 1) induces a commutative diagram with rows

χ_f(j*(−)) → Log_f(j*(−)) → Cof((−,0)* i* Θ_f(−)) → [1]   and   i*(−) → Log_f(j*(−)) --can--> Φ_f(−) → [1].

By applying the coherent triangle functor u♯ v* to the object i* Θ_f(−) of the category DA(Xσ, □) = DA(Xσ, 1 × 1), we get a functor DA(X) → DA(Xσ, 1 × ▭) which is a coherent lifting of a commutative diagram whose two rows are the exact triangles

χ_f(j*(−)) → Log_f(j*(−)) → Log_f(j*(−))(−1) → χ_f(j*(−))[1]   and   i*(−) → Log_f(j*(−)) --can--> Φ_f(−) --var--> i*(−)[1],

connected by the morphisms can, var and N. The category 1 × ▭ has objects the triples (ε, i, j) with ε ∈ {0,1}, 0 ≤ i ≤ 2 and 0 ≤ j ≤ 1. In the next subsection, we will mainly be focusing on the functor sq* u♯ v* i* Θ_f : DA(X) → DA(Xσ, □), which is a coherent lifting of the commutative square with rows

Φ_f(−) → i*(−)[1]   and   Log_f(j*(−))(−1) → χ_f(j*(−))[1]

(the left vertical map being var).

Maximal extension functor

Let us now construct Beȋlinson's maximal extension functor Ξ_f (see [17]) and the related exact triangles in the triangulated categories of étale motives. This will be essential for the proof of Theorem 3.15 and for gluing perverse motives. By applying the coherent triangle functor u♯ v* to the object Θ_f(−) in DA(X, □) = DA(X, 1 × 1), we get a functor u♯ v* Θ_f : DA(X) → DA(X, 1 × ▭) which is a coherent lifting of a commutative diagram with top row

j* j*(−) → j*(j*(−) ⊗ fη* Log∨) → j*(j*(−) ⊗ fη* Log∨(−1)) → j* j*(−)[1].
The bottom row of this diagram is Id(−) → j*(j*(−) ⊗ fη* Log∨) → • → Id(−)[1]. (Here • is some motive which we do not need to specify.) The category 1 × ▭ is displayed in (16). We denote by α : 1 × ▭ → ▭ × 1 the evident reindexing functor [its effect on objects is given by the displayed grid]. Then α* u♯ v* Θ_f(−) : DA(X) → DA(X, 1 × □) is a coherent lifting of the commutative diagram

[with entries 0, j*(j*(−) ⊗ fη* Log∨), Id(−)[1], j*(j*(−) ⊗ fη* Log∨(−1)) and j* j*(−)[1], the nontrivial diagonal morphism being induced by N]

and (0 × Id□)* α* u♯ v* Θ_f(−) = (i⌞)* sq* u♯ v* Θ_f(−). Let β : 1 × □ → 1 × 1 × □ be [the evident inclusion]. The 1 × □-skeleton of the functor

Σ_f(−) := β* ∘ ∆!j ∘ α* u♯ v* Θ_f(−) : DA(X) → DA(X, 1 × □)

is now the commutative diagram

[with entries 0, j! j*(−) ⊗ fη* Log∨, Id(−)[1], j* j*(−) ⊗ fη* Log∨(−1) and j* j*(−)[1]]

where the non-zero diagonal morphism is obtained via the canonical morphism j! → j* and the monodromy operator. Note that we have

(0 × Id□)* Σ_f = (0 × Id□)* α* u♯ v* Θ_f(−) = (i⌞)* sq* u♯ v* Θ_f(−).

In particular, we have canonical isomorphisms (0,0,0)* Σ_f(−) = j* j*(−)[1] and (0,0,1)* Σ_f(−) = Id(−)[1]. We define Ξ_f : DA(X) → DA(X) to be the functor Ξ_f(−) := (1,0)* Cof(Σ_f(−)), and we also define Ω_f : DA(X) → DA(X) to be the functor Ω_f(−) := (1,1)* (i⌞)* Cof(Σ_f(−)).
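The functors Ξ_f and Ω_f are controlled by four exact triangles, established in Propositions 3.11 and 3.12 below; we display them together for orientation (i and j denote the inclusions of the special and generic fibers):

```latex
% Triangles (19)-(20) for the maximal extension \Xi_f:
i_*\operatorname{Log}_f(j^*(-)) \to \Xi_f \to j_*j^*(-)[1] \xrightarrow{+1},
\qquad
j_!j^*(-)[1] \to \Xi_f \to i_*\operatorname{Log}_f(j^*(-))(-1) \xrightarrow{+1};
% Triangles (21)-(22) for \Omega_f:
i_*\operatorname{Log}_f(j^*(-)) \to \Omega_f \to \operatorname{Id}(-)[1] \xrightarrow{+1},
\qquad
j_!j^*(-)[1] \to \Omega_f \to i_*\Phi_f(-) \xrightarrow{+1}.
```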
By construction, we have an exact triangle

Ω_f(−) → Ξ_f(−) ⊕ (0,1)* Cof(Σ_f(−)) → (0,0)* Cof(Σ_f(−)) → Ω_f(−)[1].    (17)

Since the canonical morphisms Id(−)[1] = (0,0,1)* Σ_f(−) → (0,1)* Cof(Σ_f(−)) and j* j*(−)[1] = (0,0,0)* Σ_f(−) → (0,0)* Cof(Σ_f(−)) are isomorphisms, the exact triangle (17) can be rewritten as

Ω_f(−) → Ξ_f(−) ⊕ Id(−)[1] → j* j*(−)[1] → Ω_f(−)[1].

On the other hand, we have an exact triangle (1,1,0)* Σ_f(−) → (0,1,0)* Σ_f(−) → Ξ_f(−) → (1,1,0)* Σ_f(−)[1], that is, an exact triangle

j! j*(−) ⊗ fη* Log∨ → j* j*(−) ⊗ fη* Log∨(−1) → Ξ_f(−) → (j! j*(−) ⊗ fη* Log∨)[1].    (18)

Proposition 3.11. There are exact triangles

i* Log_f(j*(−)) → Ξ_f → j* j*(−)[1] → i* Log_f(j*(−))[1]    (19)

and

j! j*(−)[1] → Ξ_f → i* Log_f(j*(−))(−1) → j! j*(−)[2].    (20)

Proof. Let us first construct (19) using the functorial version of the octahedron axiom (see Subsection 3.1). Recall that by definition Ξ_f(−) := (1,0)* Cof(Σ_f(−)) = Cof((−,1,0)* Σ_f(−)). Let us set

Σ′_f(−) := ∆!j ∘ α* u♯ v* Θ_f(−) : DA(X) → DA(X, 1 × 1 × □),

so that Σ_f(−) = β* Σ′_f(−). Now let γ : 2 → 1 × 1 × □ be the fully faithful functor that maps 0, 1 and 2 respectively to (0,0,1,0), (0,1,1,0) and (1,1,1,0). Recall that cm : 1 → 2 is the fully faithful functor that maps 0 and 1 respectively to 0 and 2. Then β ∘ (−,1,0) = γ ∘ cm. In particular, we get that (−,1,0)* Σ_f(−) = cm* γ* Σ′_f(−). Using the exact triangle (13) given by the functorial octahedron axiom, we get an exact triangle

Cof(fm* γ* Σ′_f(−)) → Ξ_f(−) → Cof(sm* γ* Σ′_f(−)) → Cof(fm* γ* Σ′_f(−))[1].

However, by construction, we have an exact triangle

j!(j*(−) ⊗ fη* Log∨) → j*(j*(−) ⊗ fη* Log∨) → Cof(fm* γ* Σ′_f(−)) → j!(j*(−) ⊗ fη* Log∨)[1].

Using Remark 3.7, we see that Cof(fm* γ* Σ′_f(−)) is isomorphic to i* Log_f(j*(−)) := i* i* j*(j*(−) ⊗ fη* Log∨).
On the other hand, sm* γ* Σ′_f(−) = (0,1,−)* u♯ v* Θ_f(−), so that we get an isomorphism Cof(sm* γ* Σ′_f(−)) = (0,0,0)* u♯ v* Θ_f(−) = j* j*(−)[1]. This constructs the exact triangle (19). Consider now the localization triangle

j! j* Ξ_f(−) → Ξ_f(−) → i* i* Ξ_f(−) → j! j* Ξ_f(−)[1].

To obtain (20), it is enough to check that j* Ξ_f(−) is isomorphic to j*(−)[1] and that i* Ξ_f(−) is isomorphic to Log_f(j*(−))(−1). The first isomorphism is obtained by applying j* to (19), and the second isomorphism is obtained by applying i* to (18).

Proposition 3.12. There are exact triangles

i* Log_f(j*(−)) → Ω_f → Id(−)[1] → i* Log_f(j*(−))[1]    (21)

and

j! j*(−)[1] → Ω_f → i* Φ_f(−) → j! j*(−)[2].    (22)

Proof. Using (19), the exact triangle (21) is obtained by applying Lemma 3.1 to the cartesian square (i⌞)* Cof(Σ_f(−)). Since j* i* = 0, (21) provides an isomorphism between j* Ω_f(−) and j*(−)[1]. Now consider the localization triangle j! j* Ω_f → Ω_f → i* i* Ω_f(−) → j! j* Ω_f[1]. To construct (22), it is enough to obtain an isomorphism between i* Ω_f(−) and Φ_f(−). By definition, i* Ω_f(−) = (1,1)* (i⌞)* Cof(i* Σ_f(−)). However, since i* j! = 0, the canonical morphism (0 × Id□)* i* Σ_f(−) → Cof(i* Σ_f(−)) is an isomorphism. Given that (0 × Id□)* Σ_f = (0 × Id□)* α* u♯ v* Θ_f(−) = (i⌞)* sq* u♯ v* Θ_f(−), we get isomorphisms

(1,1)* (i⌞)* (0 × Id□)* i* Σ_f(−) ≃ (1,1)* (i⌞)* Cof(i* Σ_f(−)) = i* Ω_f(−)

and

(1,1)* (i⌞)* (0 × Id□)* i* Σ_f(−) ≃ (1,1)* (i⌞)* (i⌞)* sq* u♯ v* i* Θ_f(−).

By Remark 3.9, the canonical morphism Φ_f(−) = (1,1)* sq* u♯ v* i* Θ_f(−) → (1,1)* (i⌞)* (i⌞)* sq* u♯ v* i* Θ_f(−) is an isomorphism. This concludes the proof.

Betti realization

Let X be a complex algebraic variety. Let AnDA(X) be the triangulated category of analytic motives. This category is obtained as the special case of the category SH^an_M(X) considered in [8] when the stable model category M is taken to be the category of unbounded complexes of Q-vector spaces with its projective model structure.
Recall that the canonical triangulated functor

i*_X : D(X) → AnDA(X)    (23)

is an equivalence of categories (see [8, Théorème 1.8]). Here D(X) denotes the (unbounded) derived category of sheaves of Q-vector spaces on the associated analytic space X^an. The functor An_X : (Sm/X) → (AnSm/X^an), which maps a smooth X-scheme Y to the associated X^an-analytic space Y^an, induces a triangulated functor

An*_X : DA(X) → AnDA(X).    (24)

The Betti realization Bti*_X of [8] is obtained as the composition of (24) and a quasi-inverse to (23). Let Log∨_P be the image under the Betti realization of the motive Log∨, and consider the specialization system it defines:

Log^P_f(−) := i*_P j^P_*((−) ⊗ (fη)*_P Log∨_P) : D(Xη) → D(Xσ).

[Together with the analogous functors Φ^P_f, Ξ^P_f and Ω^P_f, it fits into] the two triangles

i^P_* Log^P_f j*_P → Ξ^P_f → j^P_* j*_P[1] → [1],   j^P_! j*_P[1] → Ξ^P_f → i^P_* Log^P_f(−1) → [1]

and the two triangles

i^P_* Log^P_f j*_P → Ω^P_f → Id[1] → [1],   j^P_! j*_P[1] → Ω^P_f → i^P_* Φ^P_f → [1].

Moreover, we have canonical natural transformations Bti* ∘ Log_f → Log^P_f ∘ Bti*, Bti* ∘ Φ_f → Φ^P_f ∘ Bti*, Bti* ∘ Ξ_f → Ξ^P_f ∘ Bti* and Bti* ∘ Ω_f → Ω^P_f ∘ Bti*, which are isomorphisms when applied to constructible motives (see [8, Théorème 3.9]) and are also compatible with the various exact triangles. As proved in [8, Théorème 4.9], the Betti realization is compatible with the (total) nearby cycles functors for constructible motives. In this subsection, we will need the compatibility of the Betti realization with the unipotent nearby cycles functors.

Lemma 3.13. The functor Log^P_f(−) is isomorphic to the unipotent nearby cycles functor ψ^un_f(−).

Let e : C → C×, z ↦ exp(z), be the universal cover of the punctured complex plane C×. The group of deck transformations is identified with Z by mapping the integer k ∈ Z to the deck transformation z ↦ z + 2iπk. Let E_n be the unipotent rational local system on C× of rank n+1 with (nilpotent) monodromy given by one Jordan block of maximal size.
It underlies a variation of Q-mixed Hodge structures described e.g. in [67, §1.1]. Let us recall the description of this local system and relate it to Ayoub's logarithmic motive Log∨_n. The following description is given in [66, 2.3. Remark]. Let E_n be the subsheaf of e_* Q_C annihilated by (T − Id)^{n+1}, where T is the automorphism of e_* Q_C induced by the deck transformation corresponding to 1 ∈ Z. The restriction of T to E_n is unipotent and we denote by N = log T the associated nilpotent endomorphism. The sheaf E_n is a local system on C× of rank n+1. Let (E_n)_1 be its fiber over 1. We have an inclusion

(E_n)_1 ⊆ (e_* Q_C)_1 = ∏_{k∈Z} (Q_C)_{2iπk} = ∏_{k∈Z} Q.

Note that the automorphism T acts by mapping a sequence (a_k)_{k∈Z} to (a_{k+1})_{k∈Z}. Let τ_n be the element in (E_n)_1 given by τ_n = (k^n/n!)_{k∈Z}. The family (1, τ_1, …, τ_n) is a basis of (E_n)_1 such that T(τ_r) = Σ_{k=0}^{r} τ_k/(r − k)! for every r ∈ [[1, n]]. The matrix of the unipotent endomorphism T of (E_n)_1 with respect to the basis (1, τ_1, …, τ_n) is thus given by Σ_{k=0}^{n} (J_n)^k/k!, where J_n is the nilpotent Jordan block of size n+1, and therefore N is given by the Jordan block J_n in the basis (1, τ_1, …, τ_n). The multiplication e_* Q_C ⊗ e_* Q_C → e_* Q_C induces a morphism of local systems E_k ⊗ E_ℓ → E_{k+ℓ}. In particular, for n ∈ N*, we have a canonical morphism E_1^{⊗n} → E_n, which defines a morphism

Sym^n E_1 → E_n.    (25)

If τ := τ_1, then τ_n = τ^n/n!, and the above description of E_n implies that (25) is an isomorphism. By [8, Théorème 3.19], the Betti realization is compatible with the Kummer transform (for constructible motives). In particular, we have a natural isomorphism Bti* K → E_1, where K ∈ DA(G_m) is the motivic Kummer extension, that is, the cone of the Kummer natural transform for étale motives (see [7, Lemme 3.6.28]). Since the Betti realization Bti* is a symmetric monoidal functor, it induces an isomorphism identifying the realization of the Kummer triangle with the exact triangle

Q(−1)[−1] --e_K--> Q → E_1 → Q(−1).
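The identity T(τ_r) = Σ_{k=0}^{r} τ_k/(r − k)! is just the binomial theorem applied entrywise to (Tτ_r)(k) = (k+1)^r/r!. A quick numerical sanity check of this computation (an illustration only; the helper names `tau` and `fact` are ad hoc):

```python
from fractions import Fraction

def fact(n):
    # n! computed iteratively
    r = 1
    for i in range(2, n + 1):
        r *= i
    return r

def tau(r, k):
    # the k-th entry of tau_r = (k^r / r!)_{k in Z}
    return Fraction(k ** r, fact(r))

# The deck transformation T shifts the index: (T a)(k) = a(k+1).
# Check T(tau_r) = sum_{j=0}^{r} tau_j / (r-j)! entrywise.
for r in range(6):
    for k in range(-5, 6):
        lhs = tau(r, k + 1)
        rhs = sum(tau(j, k) / fact(r - j) for j in range(r + 1))
        assert lhs == rhs
print("binomial identity for T(tau_r) verified")
```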
By [8], we get isomorphisms

Bti* Log∨_n = Bti* Sym^n K ≃ Sym^n Bti* K ≃ Sym^n E_1 ≃ E_n   (the last map being (25))

for every integer n ∈ N. Therefore, we get an isomorphism

Log∨_P := Bti* Log∨ ≃ E    (26)

where E is the ind-local system given by E = colim_{n∈N} E_n. For K ∈ D^b_c(X, Q), the unipotent nearby cycles functor ψ^un_f is given by

ψ^un_f(K) = i*_P j^P_*(K ⊗ (fη)*_P E)

(see [66, (2.3.3)] or [17, 65]). With this description, Lemma 3.13 is an immediate consequence of (26).

Corollary 3.14. The functors pLog^P_f(−) := Log^P_f(−)[−1], pΦ^P_f(−) := Φ^P_f(−)[−1], pΞ^P_f(−) := Ξ^P_f(−)[−1] and pΩ^P_f(−) := Ω^P_f(−)[−1] are t-exact for the perverse t-structure.

Proof. Since the functor ψ^un_f(−)[−1] is t-exact for the perverse t-structure, the corollary is an immediate consequence of Lemma 3.13 and the exact triangles relating the various functors.

Application to perverse motives

Now we can apply the universal property of the categories of perverse motives to obtain four exact functors

pLog^M_f(−) : M(Xη) → M(Xσ),   pΦ^M_f(−) : M(X) → M(Xσ),   pΞ^M_f(−) : M(X) → M(X),   pΩ^M_f(−) : M(X) → M(X).

Moreover, we have four canonical exact sequences obtained from the exact triangles relating the four functors used in the construction: two exact sequences

0 → i^M_* pLog^M_f(j*_M(−)) → pΞ^M_f → j^M_* j*_M(−) → 0   and   0 → j^M_! j*_M(−) → pΞ^M_f → i^M_* pLog^M_f(−)(−1) → 0,

as well as two exact sequences

0 → i^M_* pLog^M_f(j*_M(−)) → pΩ^M_f(−) → Id(−) → 0    (27)

and

0 → j^M_! j*_M(−) → pΩ^M_f(−) → i^M_* pΦ^M_f(−) → 0.

We first consider the case of the immersion of a special fiber.

Lemma 3.16. Let X be a k-variety and f : X → A1_k a morphism. Let i : Xσ ↪ X be the closed immersion of the special fiber in X and Z a closed subscheme of Xσ. Then the exact functor i^M_* : M_Z(Xσ) → M_Z(X) is an equivalence of categories.

Proof. We may assume Z = Xσ. Indeed, let u : X∖Z ↪ X and v : Xσ∖Z ↪ Xσ be the open immersions.
By Proposition 2.3 applied to the cartesian square

Xσ∖Z --i′--> X∖Z   over   Xσ --i--> X   (with vertical morphisms v and u),

we get an isomorphism u*_M i^M_* ≃ i′^M_* v*_M, which reduces us to the case Z = Xσ.

Let us show that the functor pΦ^M_f is a quasi-inverse. Let Xη be the generic fiber and j : Xη ↪ X the open immersion. The exact triangle (15) provides an isomorphism of endofunctors of DA(Xσ) between i* i* and Φ_f[−1] i*. By composing with the isomorphism of functors i* i* → Id, we get an isomorphism of functors between the identity of DA(Xσ) and Φ_f[−1] i*. Similarly, we get an isomorphism between the identity of D(Xσ, Q) and the functor Φ^P_f[−1] i^P_*. Since these isomorphisms are compatible with the Betti realization, property P2 ensures that pΦ^M_f i^M_* is isomorphic to the identity functor of the category M(Xσ). An isomorphism between the identity of M_{Xσ}(X) := Ker j*_M and i^M_* pΦ^M_f is provided by the exact sequences

0 → i^M_* pLog^M_f(j*_M(−)) → pΩ^M_f(−) → Id(−) → 0   and   0 → j^M_! j*_M(−) → pΩ^M_f(−) → i^M_* pΦ^M_f(−) → 0.

[Theorem 3.15 asserts that, for a closed immersion i : Z ↪ X with open complement U, the functor i^M_* : M(Z) → M_Z(X) is an equivalence of categories. For the proof, assuming X affine,] choose f_1, …, f_r ∈ O(X) such that U = D(f_1) ∪ ⋯ ∪ D(f_r). Let Z_{r+1} = X and set Z_k = Z_{k+1} ∖ D(f_k) for k ∈ [[1, r]]. Let i_k : Z_k ↪ Z_{k+1} be the closed immersion. We have Z_1 = Z and i = i_r ∘ i_{r−1} ∘ ⋯ ∘ i_1, so that the functor i^M_* : M(Z) → M_Z(X) is obtained as the composition

M(Z) --(i_1)^M_*--> M_Z(Z_2) --(i_2)^M_*--> M_Z(Z_3) → ⋯ → M_Z(Z_k) --(i_k)^M_*--> M_Z(X).

By Lemma 3.16, all these functors are equivalences. This concludes the proof.
Inverse images

The purpose of this section is to extend the (contravariant) 2-functor Liss H*_M constructed in Subsection 2.5 to a (contravariant) 2-functor

H*_M : (Sch/k) → TR,   X ↦ D^b(M(X)),   f ↦ f*_M.

We have Ext^i_{M(X)}(A, j^M_* B) = Ext^i_{M(V)}(j*_M A, B) by Proposition 2.5 and, if u ∈ Ext^i_{M(V)}(j*_M A, B) and B ↪ B′ is a monomorphism of M(V) such that the image of u in Ext^i_{M(V)}(j*_M A, B′) is 0, then the image in Ext^i_{M(X)}(A, j^M_* B′) of the element of Ext^i_{M(X)}(A, j^M_* B) corresponding to u is also 0. Applying this to an open cover j_1 : U_1 ↪ X, …, j_n : U_n ↪ X of X by affine subsets, and using the fact that the canonical map B → ⊕_{r=1}^{n} (j_r)^M_* (j_r)*_M B given by Proposition 2.5 is a monomorphism for every object B of M(X), we reduce to the case where X is affine. If X is affine then, as in the proof of Theorem 3.15, we write i = i_r ∘ ⋯ ∘ i_1, where Z_1 = Z, Z_{r+1} = X and, for every k ∈ {1, …, r}, i_k : Z_k → Z_{k+1} is the immersion of the complement of an open set of the form D(f), with f ∈ O(Z_{k+1}). It suffices to show that each (i_k)^M_* : D^b(M(Z_k)) → D^b_{Z_k}(M(Z_{k+1})) is an equivalence of categories. So we may assume that there exists f ∈ O(X) such that i is the immersion of the complement of D(f). In that case, we argue as in the proof of Theorem 3.15.

Proof of Proposition 4.2. By Theorem 4.1, it suffices to show that the inclusion functor

D^b_Z(M(X)) → D^b(M(X))    (28)

admits a left adjoint C•. Let j : U ↪ X be the open immersion of the complement of Z in X. Let us first assume that U is affine. In that case, given A in C^b(M(X)), we define C•(A) as the mapping cone of the canonical morphism j^M_! j*_M A → A given by Proposition 2.5. This construction induces a triangulated functor C• : D^b(M(X)) → D^b(M(X)), and there is a canonical exact triangle j^M_! j*_M A → A → C•(A) → j^M_! j*_M A[1], which shows that C• takes its values in the full subcategory D^b_Z(M(X)). Let B ∈ D^b_Z(M(X)).
Using the long exact sequence associated with this triangle and Proposition 2.5, which ensures that

Hom_{D^b(M(X))}(j^M_! j*_M A, B[n]) = Hom_{D^b(M(U))}(j*_M A, j*_M B[n]) = 0,

we get a functorial isomorphism Hom_{D^b(M(X))}(C•(A), B) → Hom_{D^b(M(X))}(A, B), as desired. In the general case, the adjoint C• can be constructed by considering a finite set I and an affine open covering U = (j_i : U_i → U)_{i∈I}. For every J ⊆ I, let j_J be the inclusion ∩_{i∈J} U_i ↪ X. We define an exact functor C• : M(X) → C^b(M(X)) in the following way. Let A be an object of M(X). We set

C^i(A) = 0 if i ≥ 1;   C^0(A) = A;   C^i(A) = ⊕_{J ⊆ I, |J| = −i} (j_J)^M_! (j_J)*_M A if i ≤ −1,

the differential of C•(A) being the alternating sum of the maps induced by the inclusions [the display is lost in the source].

Let Z ⊆ X be a closed subscheme such that the open immersion j : U ↪ X of the complement of Z in X is affine. It follows from the proof of Proposition 4.2 that we have a canonical exact triangle

j^M_! j*_M → Id → i^M_* i*_M → j^M_! j*_M[1].

dg-enhancements

For the general theory of dg categories we refer to [32, 55, 56, 72]. Let A be an abelian category. [For suitable dg quasi-functors F, G, the canonical map]

Hom_{rep(D^b_dg(A), D^b_dg(B))}(F, G) → Hom_{Fct(A,B)}(H^0 F, H^0 G)

is an isomorphism. A triangulated functor D^b(A) → D^b(B) is said to be dg enhanced if it is induced by some dg quasi-functor in rep(D^b_dg(A), D^b_dg(B)). Note that a composition of dg enhanced functors is also dg enhanced. Denote by D^b_{dg,Z}(M(X)) the full subcategory of D^b_dg(M(X)) formed by the complexes that belong to D^b_Z(M(X)). We then have dg-functors

D^b_dg(M(Z)) --i^M_*--> D^b_{dg,Z}(M(X)) <--C•-- D^b_dg(M(X))    (32)

where C• is the dg functor constructed (using the given open covering of U by affine open subsets) in the proof of Proposition 4.2. Since the dg-functor on the left is a quasi-equivalence, the diagram (32) defines a quasi-functor from D^b_dg(M(X)) to D^b_dg(M(Z)) that induces the triangulated functor i*_M.

Gluing of the 2-functors

Let us now start with the construction of the 2-isomorphisms (31).
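The differential of C•(A) is presumably the standard Čech-style alternating sum over the inclusions J∖{j} ⊆ J (an assumption, since the source does not display it). That these signs square to zero can be sanity-checked on the underlying combinatorics — a toy model with subsets as basis vectors and integer matrices, all names ad hoc:

```python
from itertools import combinations

def boundary(n, p):
    # Matrix of the map from the degree -p chain group (basis: p-subsets
    # of {0,...,n-1}) to the degree -(p-1) chain group: each subset J is
    # sent to the alternating sum of the subsets J \ {j}.
    rows = list(combinations(range(n), p - 1))
    cols = list(combinations(range(n), p))
    M = [[0] * len(cols) for _ in rows]
    for c, J in enumerate(cols):
        for pos, j in enumerate(J):
            K = tuple(x for x in J if x != j)
            M[rows.index(K)][c] += (-1) ** pos
    return M

def matmul(A, B):
    # Plain integer matrix multiplication.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# d o d = 0 for every pair of consecutive degrees.
n = 4
for p in range(2, n + 1):
    Z = matmul(boundary(n, p - 1), boundary(n, p))
    assert all(v == 0 for row in Z for v in row)
print("d o d = 0")
```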
Step 1: When the square (30) is cartesian, the 2-isomorphism (31) is obtained by considering the exchange structure M Ex** on the pair (Imm H^M_*, Liss H*_M) obtained in Proposition 2.3 (in this exchange structure, all squares are cartesian). By applying [6, Proposition 1.2.5], we get an exchange structure M Ex** on the pair (Imm H*_M, Liss H*_M) for the class of cartesian squares (30). The uniqueness in loc. cit. implies that this exchange structure lifts the trivial exchange structure on (Imm H*_P, Liss H*_P) given by the connection 2-isomorphisms of the 2-functor H*_P. In particular, the conservativity of the functors rat^M_X : D^b(M(X)) → D^b(P(X)) implies that M Ex** is an iso-exchange.

Step 2: Let us consider a commutative triangle

X --i--> Y,   with g : X → S and f : Y → S,    (33)

in which i is a closed immersion and f, g are smooth morphisms. As preparation for the construction of the 2-isomorphism (31), we first construct a 2-isomorphism

i*_M ∘ f*_M → g*_M.    (34)

To do this, observe that, if d is the relative dimension of g, then the triangulated functors i*_M ∘ f*_M[d] and g*_M[d] are t-exact for the classical t-structures. This is a vanishing statement that can be checked after application of the functor rat^M_X and, for perverse sheaves, it follows from [19, 4.2.4], since g*_P and i*_P ∘ f*_P are isomorphic. Moreover, both functors are dg enhanced by Remark 4.5. By Proposition 4.4, to construct (34) it is enough to construct a 2-isomorphism

i*_M ∘ f*_M[d] → g*_M[d]    (35)

where both functors are exact functors from M(S) to M(X). Therefore, it suffices to prove the following proposition.

Proposition 4.6. Consider the commutative diagram (33). Let A be an object in M(S) and let K be its underlying perverse sheaf.
Then the canonical morphism of perverse sheaves i*_P ∘ f*_P[d](K) → g*_P[d](K) lies in the image of the injective morphism

Hom_{M(X)}(i*_M ∘ f*_M[d](A), g*_M[d](A)) → Hom_{P(X)}(i*_P ∘ f*_P[d](K), g*_P[d](K)).    (36)

Remark 4.7. Note that the map (36) is obtained via the functor rat^M_X, using the invertible natural transformations f*_P ∘ rat^M_S → rat^M_Y ∘ f*_M, g*_P ∘ rat^M_S → rat^M_X ∘ g*_M and i*_P ∘ rat^M_Y → rat^M_X ∘ i*_M, which have been previously constructed.

Proof. Step (a). Consider a commutative diagram

X′ --i′--> Y′   over   X --i--> Y,   with u : Y′ → Y and v : X′ → X, and with f′ : Y′ → S and g′ : X′ → S the structural morphisms,

where i is a closed immersion, f, g are smooth morphisms and u is an étale morphism. By Step 1, we have a natural transformation i′*_M ∘ u*_M → v*_M ∘ i*_M that lifts the corresponding natural transformation in the derived category of perverse sheaves. Assume the proposition is true for the diagram (33). Then the morphism i*_P ∘ f*_P[d](K) → g*_P[d](K) lifts to a morphism i*_M ∘ f*_M[d](A) → g*_M[d](A). By applying v*_M to this lift, we obtain a morphism i′*_M ∘ f′*_M[d](A) → g′*_M[d](A) that lifts the morphism i′*_P ∘ f′*_P[d](K) → g′*_P[d](K). This shows, in particular, that if the proposition is true for the diagram (33), then it is also true for the diagram X′ --i′--> Y′ with f′, g′ over S.

Step (b). Let Y = (Y_α)_{α∈I} be a finite Zariski open covering of Y and consider, for every α ∈ I, the commutative diagram

X_α --i_α--> Y_α   over   X --i--> Y,   with the open immersions v_α : X_α → X and u_α : Y_α → Y, and with f_α, g_α the structural morphisms over S,

where u_α is the open immersion of Y_α in Y. Note that the canonical morphism of perverse sheaves i*_P ∘ f*_P[d](K) → g*_P[d](K) is obtained by gluing the morphisms i*_{α,P} ∘ f*_{α,P}[d](K) → g*_{α,P}[d](K) along the Zariski open covering X = (X_α)_{α∈I} of X.
Hence it follows from step (a) and Proposition 2.7 that the proposition is true for the diagram (33) if and only if it is true for the diagrams X_α --i_α--> Y_α with f_α, g_α over S.

Step (c). By step (b), the problem is local on Y for the Zariski topology. Since both Y and X are smooth over S, we may assume that there exists a cartesian square

X --i--> Y   over   A^d_S → A^{d+c}_S,   with v : X → A^d_S and u : Y → A^{d+c}_S,

where u is an étale morphism. Using step (a) and induction, we are reduced to proving the proposition in the case of the zero section s : A^d_S → A^{d+1}_S, with projections π : A^d_S → S and p : A^{d+1}_S → S. By factoring p through the projection A^{d+1}_S → A^d_S followed by π, and observing that the functors π*_M[d] and π*_P[d] are exact, we may further assume d = 0. [The subsequent diagram]

is commutative for every object A in D^b(M(X)). Since all the entries of the above diagram are dg enhanced and t-exact functors, up to a shift by the relative dimension d of f, by Proposition 4.4 it suffices to check the commutativity of the diagram induced on the hearts. This can be checked on the underlying perverse sheaves.

Lemma 4.10. Consider a commutative diagram

X′ --i′--> Y′   over   X --i--> Y   (with i′, f′, h′, f, h the indicated morphisms),   or   X′′ → X ×_Y Y′′ → Y′′   over   X → Y.

The desired compatibility is now a consequence of Proposition 2.3, Lemma 4.10 and Lemma 4.8.

Main theorem

In Subsection 3.5, we have shown that the unipotent nearby and vanishing cycles functors can be defined at the level of perverse Nori motives. Our goal is to prove that the four operations (1) can be lifted to the derived categories of perverse Nori motives. To obtain these various functors

f*_M : D^b(M(Y)) → D^b(M(X)),   f^M_* : D^b(M(X)) → D^b(M(Y)),   f^!_M : D^b(M(Y)) → D^b(M(X)),   f^M_! : D^b(M(X)) → D^b(M(Y))
(41) (and their compatibility relations) with the least amount of effort, we have chosen to follow Ayoub's approach developed in [6] around the notion of stable homotopical 2-functor, which encompasses in a small package all the ingredients needed to build the rest of the formalism. In particular, we can apply [6, Scholie 1.4.2] to get the functors (41). The next subsection is devoted to the proof of Theorem 5.1, and the reader will find some applications of the main theorem in Subsection 5.4.

Statement of the theorem

Proof of the main theorem (Theorem 5.1)

We start by showing the existence of the direct image functor. The most important step is the proof of the existence of the direct image under the projection of the affine line A^1_Y onto its base Y.

Proposition 5.2. For every morphism f : X → Y in (Sch/k), the functor f*_M : D^b(M(Y)) → D^b(M(X)) admits a right adjoint f^M_*. Moreover:

(1) if i : Z → X is a closed immersion, the counit of the adjunction i*_M i^M_* → Id is invertible;
(2) the natural transformation γ^M_f : rat^M_Y f^M_* → f^P_* rat^M_X, obtained from θ^M_f by adjunction, is invertible;
(3) if p : A^1_X → X is the canonical projection, then the unit morphism Id → p^M_* p*_M is invertible.

Proof. In this proof, all products are fiber products over the base field k, and A^1 is the affine line over k.

Step 1: Suppose first that f is a closed immersion. Then f*_M admits f^M_* as a right adjoint by construction of f*_M; we know point (2) by Lemma 4.3, and point (1) follows from (2) and the conservativity of rat^M_X.

Step 2: Now we consider the case where f is the projection morphism p : X := A^1_Y → Y. As before, if we can prove that p*_M admits a right adjoint satisfying (2), then point (3) will follow automatically.
We consider the following commutative diagram:

[diagram: q_1, q_2 : A^1 × A^1 × Y → A^1 × Y, the open immersion j : U × Y → A^1 × A^1 × Y, the closed immersion i : A^1 × Y → A^1 × A^1 × Y, and the projection p : A^1 × Y → Y]

where q_1 = id_{A^1} × p, q_2 is the product of the projection A^1 → Spec k and of id_{A^1 × Y}, i is the product of the diagonal morphism of A^1 and of id_Y, and j is the complementary open inclusion. We also denote by s : Y → A^1 × Y the zero section of p. By the smooth base change theorem (or a direct calculation), the base change map p*_P p^P_* → q^P_{1*} q*_{2P} is an isomorphism, so we get a functorial isomorphism

p^P_* ≃ s*_P p*_P p^P_* → s*_P q^P_{1*} q*_{2P}.

Let K be a perverse sheaf on Y. Then L := q*_{2P} K[1] is perverse, and we have i*_P L = K[1], so we get an exact sequence of perverse sheaves on A^1 × Y:

0 → i^P_* K → j^P_! j*_P L → L → 0.

Applying the functor q^P_{1*} and using the fact that q_1 ∘ i = id_{A^1×Y}, we get an exact triangle

q^P_{1*} q*_{2P} K → K → q^P_{1*} j^P_! j*_P L → q^P_{1*} q*_{2P} K[1].

We claim that q^P_{1*} j^P_! j*_P L is perverse. Indeed, this complex is concentrated in perverse degrees −1 and 0 by [19, 4.1.1 & 4.2.4], so we just need to prove that M := pH^{−1} q^P_{1*} j^P_! j*_P L is equal to 0. By [19, 4.2.6.2], the adjunction morphism q*_{1P} M[1] → j^P_! j*_P L is injective; we denote its quotient by N. Then, as q_1 ∘ i = id_{A^1×Y} and i*_P j^P_! = 0, [one deduces that M = 0]. Finally, we get an exact sequence of perverse sheaves on A^1 × Y:

0 → pH^0 q^P_{1*} q*_{2P} K → K → q^P_{1*} j^P_! j*_P q*_{2P} K[1] → pH^1 q^P_{1*} q*_{2P} K → 0.

Consider the functors F_P, G_P : P(A^1 × Y) → D^b(P(A^1 × Y)) defined by F_P(K) := K and G_P(K) := q^P_{1*} j^P_! j*_P q*_{2P} K[1]. We have just proved that these functors are t-exact (of course, this is obvious for the first one) and that there is a functorial exact triangle

q^P_{1*} q*_{2P} → F_P → G_P → q^P_{1*} q*_{2P}[1].

The functors F_P and G_P are defined in terms of the four operations.
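The skeleton of the computation above, compressed into one display (same notation; the perverse truncations are taken in P(A^1 × Y)):

```latex
0 \to i^P_* K \to j^P_! j_P^* L \to L \to 0
\;\Longrightarrow\;
q^P_{1*} q_{2P}^* K \to K \to q^P_{1*} j^P_! j_P^* L \xrightarrow{+1},
\qquad L := q_{2P}^* K[1];
% since the third term is perverse, taking perverse cohomology yields
0 \to {}^{p}\!H^0\, q^P_{1*} q_{2P}^* K \to K
  \to q^P_{1*} j^P_! j_P^* q_{2P}^* K[1]
  \to {}^{p}\!H^1\, q^P_{1*} q_{2P}^* K \to 0 .
```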
The existence of these operations in the categories DA ct (−) and the compatibility of the Betti realization with the four operations (see [8, Théorème 3.19]) imply, by the universal property of the categories of perverse motives, that there exist :
• two exact functors F M , G M : M (A 1 × Y ) → M (A 1 × Y ),
• a natural transformation F M → G M , and
• two invertible natural transformations rat M A 1 ×Y • F M → F P • rat M A 1 ×Y and rat M A 1 ×Y • G M → G P • rat M A 1 ×Y such that the diagram

[commutative square comparing rat M A 1 ×Y • F M → rat M A 1 ×Y • G M with F P • rat M A 1 ×Y → G P • rat M A 1 ×Y ]

is commutative. The mapping fiber of F M → G M induces a triangulated functor H M : D b (M (A 1 × Y )) → D b (M (A 1 × Y )), and the Betti realization of H M is isomorphic to q P 1 * q * 2P . We now define a functor p M • := s * M H M (−) : D b (M (A 1 × Y )) → D b (M (Y )). By construction of p M • , we have an invertible natural transformation rat M Y p M • → p P * rat M A 1 ×Y . Note also the following useful fact. We denote by f : A 1 × Y → A 1 the first projection and by a : G m × Y → A 1 × Y the inclusion. Then applying s * M to the connecting map in the exact sequence (27) in Subsection 3.5, we get a natural transformation s * M → p Log M f a * M [1], whose composition with the functor H M is invertible. Indeed, we can check this last statement after applying the functors rat M Y , and then this follows from the exact triangle q P 1 * q * 2P → F P → G P +1 −−→ and the fact that the composition of the natural transformation s * P → p Log P f a * P [1] with the functor q P 1 * q * 2P is invertible. Next we construct δ M . First we define a functor q M 1• : D b (M (A 1 × A 1 × Y )) → D b (M (A 1 × Y )) in the same way as p M • . That is, we consider the commutative diagram

[commutative diagram analogous to the previous one, obtained by applying id A 1 × (−)]

where r 1 = id A 1 × q 1 , r 2 = id A 1 × q 2 , I = id A 1 × i and J = id A 1 × j, and we set t = id A 1 × s : A 1 × Y → A 1 × A 1 × Y .
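Schematically, the construction of the direct image along p just carried out can be summarized as follows (our paraphrase; H denotes the functor induced by the mapping fiber of F → G):

```latex
H_{\mathscr{M}} := \operatorname{fib}\bigl(F_{\mathscr{M}} \to G_{\mathscr{M}}\bigr)
  : D^{b}(\mathscr{M}(\mathbb{A}^{1}\times Y)) \to D^{b}(\mathscr{M}(\mathbb{A}^{1}\times Y)),
\qquad
\operatorname{rat}(H_{\mathscr{M}}) \simeq q^{\mathrm{P}}_{1*}\, q^{*}_{2\mathrm{P}},
\\[4pt]
p^{\mathscr{M}}_{\bullet} := s^{*}_{\mathscr{M}} \circ H_{\mathscr{M}}
  : D^{b}(\mathscr{M}(\mathbb{A}^{1}\times Y)) \to D^{b}(\mathscr{M}(Y)),
\qquad
\operatorname{rat}^{\mathscr{M}}_{Y}\, p^{\mathscr{M}}_{\bullet}
  \simeq p^{\mathrm{P}}_{*}\, \operatorname{rat}^{\mathscr{M}}_{\mathbb{A}^{1}\times Y}.
```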
Then the functors F ′ P , G ′ P from D b (P(A 1 × A 1 × Y )) to itself defined by F ′ P = id and G ′ P = r P 1 * J P ! J * P r * 2P [1] are t-exact and we have a natural transformation F ′ P → G ′ P . As before, we can lift these functors and this transformation to endofunctors F ′ M → G ′ M of D b (M (A 1 × A 1 × Y )). We denote by H ′ M the mapping fiber of F ′ M → G ′ M , and we set q M 1• = t * M • H ′ M . Also, if we denote by f ′ : A 1 × A 1 × Y → A 1 the second projection and by a ′ the injection of A 1 × G m × Y into A 1 × A 1 × Y , we get as above an invertible natural transformation from q M 1• to the mapping cone of the morphism of exact functors p Log M f ′ (a ′ ) * M • F ′ M → p Log M f ′ (a ′ ) * M • G ′ M . Let us show that the base change isomorphism p * P p P * ∼ −→ q P 1 * q * 2P lifts to a morphism p * M p M • → q M 1• q * 2M (which will automatically be an isomorphism). We have invertible natural transformations F ′ P • q * 2P ≃ q * 2P • F P and G ′ P • q * 2P ≃ q * 2P • G P . As all the functors involved are t-exact up to the same shift, these transformations lift to natural transformations F ′ M • q * 2M ≃ q * 2M • F M and G ′ M • q * 2M ≃ q * 2M • G M , and induce an invertible natural transformation H ′ M • q * 2M ≃ q * 2M • H M . Composing on the left with t * M and using the connection isomorphism t * M q * 2M ≃ p * M s * M , we get the desired isomorphism p * M p M • ∼ −→ q M 1• q * 2M . Composing this isomorphism with the unit of the adjunction (i * M , i M * ) and using the connection isomorphism i * M q * 2M ≃ id gives a natural transformation p * M p M • → q M 1• i M * . It remains to show that the isomorphism q P 1 * i P * ≃ id lifts to a natural transformation q M 1• i M * → id. First we note that the functors p Log B f ′ (a ′ ) * P r P 1 * r * 2P i P * [1] (42) and p Log B f ′ (a ′ ) * P r P 1 * J P !
J * P r * 2P i P * [1] are t-exact. We have a chain of natural transformations id D b (P(A 1 ×Y )) ≃ t * P q * 1P q P 1 * i P * (connection isomorphisms) ∼ −→ t * P r P 1 * r * 2P i P * (base change) ∼ −→ p Log B f ′ (a ′ ) * P r P 1 * r * 2P i P * [1] (by Subsection 3.5 as above), and all the maps in it are defined in the categories DA ct (−), so it induces an invertible natural transformation out of id M (A 1 ×Y )

[commutative diagram relating the duality functors D P X , D P Y , the functor f * P (d)[d] and the comparison transformations ε P f , ε M f , ν M Y , (ε P X ) −1 , (ε P Y ) −1 , (ε M Y ) −1 ]

is commutative. The pair (j * M , i * M ) is conservative, since so is the pair (j * P , i * P ). This follows from the existence of the isomorphisms θ M j , θ M i and the fact that rat M X is a conservative functor. To finish the proof of Theorem 5.1, it remains to check that, if s is the zero section of p : A 1 X → X, then p M ♯ s M * is an equivalence of categories. By construction p M ♯ s M * = D M X p M * (−1)[−2]D M A 1 X s M * .

[commutative diagram comparing, via ξ M i , γ M i , γ M j and (θ M j ) −1 , the image under rat M X of the localization triangle i M * i ! M → id → j M * j * M +1 −→ with the localization triangle i P * i ! P → id → j P * j * P +1 −→ ]

which implies that the image of ξ M i under i M * is invertible since all the other morphisms are. Therefore ξ M i is also invertible.

Some consequences

In this subsection, we draw some immediate consequences of the main theorem (Theorem 5.1).

Geometric local systems are motivic

A Q-local system L on a quasi-projective k-variety X will be called geometric if there exists a smooth proper morphism g : Z → X such that L = R i g * Q for some integer i ∈ Z. We will say that L is motivic if there exists an object L in D b (M (X)) such that L and rat M X (L) are isomorphic in the category D b (P(X)). Corollary 5.5.
A geometric Q-local system L on a quasi-projective k-variety X is motivic. Proof. If the local system L is geometric, there exists a smooth proper morphism g : Z → X such that L = R i g * Q Z for some integer i ∈ Z. Then L is the image under the functor rat M X of the perverse motive c H i (g M * Q M Z ), where c H i is the cohomological functor associated with the constructible t-structure (see below). Remark 5.6. In this remark, we denote by H i the standard cohomology functors on D b (M (X)). Let L be a geometric Q-local system on a smooth quasi-projective variety of (pure) dimension d and choose a smooth proper morphism g : Z → X and an integer j ∈ Z such that L = R j g * Q Z . As Z is smooth and g is proper and smooth, the constructible sheaves R i g * Q Z are all Q-local systems on X. Hence the complexes (R i g * Q Z )[d] are perverse sheaves and therefore (R i g * Q Z )[d] = p H d+i g * Q Z for every i ∈ Z. In particular, L [d] = p H j+d g * Q Z and it follows that L [d] is the image under rat M X of the perverse motivic sheaf A := H j+d (g M * Q M Z ).

Intersection cohomology

The four operations formalism allows the definition of a motivic avatar of intersection complexes. In particular, intersection cohomology groups with coefficients in geometric local systems are motivic. More precisely:

Corollary 5.7. Let X be an irreducible quasi-projective k-variety and L be a Q-local system on a smooth dense open subscheme of X. If L is motivic (in particular if L is geometric), then the intersection cohomology group IH i (X, L ), for i ∈ Z, is canonically the Betti realization of a Nori motive over k. Proof. Let d be the dimension of X and L be a Q-local system on a smooth dense open subscheme U of X. Since L is motivic, there exists an object L ∈ D b (M (U )) such that L is isomorphic to rat M U (L). Since L [d] is a perverse sheaf on U and rat M U is conservative, the complex L[d] is a perverse motivic sheaf on U, that is, it belongs to M (U ).
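In symbols, with d = dim X, π : X → Spec k the structural morphism and j the inclusion of U into X, the Nori motive produced by this proof can be written as follows (our display of formulas appearing elsewhere in the text):

```latex
IH^{i}(X,\mathscr{L}) \;\cong\; \text{Betti realization of}\;\;
H^{i-d}\!\bigl(\pi_{\mathscr{M}*}\, j_{\mathscr{M}!*}\, L[d]\bigr),
\qquad
j_{\mathscr{M}!*}\, L[d] :=
\operatorname{Im}\!\bigl(H^{0}(j_{\mathscr{M}!}\, L[d]) \to H^{0}(j_{\mathscr{M}*}\, L[d])\bigr).
```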
Then, with the notation of Definition 6.19, the intersection complex IC X (L ) := Im( p H 0 j P ! L [d] → p H 0 j P * L [d]) is canonically the realization of a perverse motivic sheaf. This shows, in particular, that intersection cohomology groups carry a natural Hodge structure. If X is a smooth projective curve, and L underlies a polarizable variation of Hodge structure, then the Hodge structure on the intersection cohomology groups was constructed by Zucker in [78, (7.2) Theorem, (11.6) Theorem]. In general, it follows from Saito's work on mixed Hodge modules [68], and a different proof has been given in [27]. We consider the weights in the next section (see Theorem 6.28 and Corollary 6.29).

Leray spectral sequences

Let f : X → Y be a morphism of quasi-projective k-varieties and L be a Q-local system on X. Then, we can associate with it two Leray spectral sequences in Betti cohomology: the classical one H r (Y, R s f * L ) =⇒ H r+s (X, L ) and the perverse one H r (Y, p H s f * L ) =⇒ H r+s (X, L ). The main theorem of [4] shows that, if L = Q X is the constant local system on X and the morphism f is projective, then the classical Leray spectral sequence is motivic, that is, it is the realization of a spectral sequence in the abelian category of Nori motives over k (see precisely [4, Theorem 3.1]). This property is still true without the projectivity assumption and also more generally if the local system L is geometric:

Corollary 5.8. If the local system L is motivic (in particular if it is geometric), then the classical Leray spectral sequence and the perverse Leray spectral sequence are spectral sequences of Nori motives over k. Proof. The result follows from the functoriality of the direct image functors. In particular, the Leray spectral sequences are spectral sequences of (polarizable) mixed Hodge structures.
The compatibility of the classical Leray spectral sequence with Hodge theory was already proved by Zucker in [78] when X is a curve and, more generally, for both spectral sequences, by Saito if L underlies an admissible variation of mixed Hodge structures (see [68]). This result has been recovered by de Cataldo and Migliorini with different techniques in [28].

Nearby cycles

The theory developed here also shows that nearby cycles functors applied to perverse motives produce Nori motives.

Corollary 5.9. Let X be a quasi-projective k-variety, f : X → A 1 k a flat morphism with smooth generic fiber X η and L be a Q-local system on X η . If L is motivic (in particular if it is geometric), then, for every point x ∈ X σ (k) and every integer i ∈ Z, the Betti cohomology H i (Ψ f (L ) x ) of the nearby fiber is canonically a Nori motive over k. Proof. The nearby cycles functor ψ f := Ψ f [−1] is t-exact for the perverse t-structure. Since it exists in the triangulated category of constructible étale motives (see [7]) and the Betti realization is compatible with the nearby cycles functor

Exponential motives

The perverse motives introduced in the present paper and their stability under the four operations could be used also in the study of exponential motives as introduced in [35]. Indeed, recall that Kontsevich and Soibelman define an exponential mixed Hodge structure as a mixed Hodge module A on the complex affine line A 1 C such that p * A = 0, where p : A 1 C → Spec(C) is the projection (see [57]). Their definition can be mimicked in the motivic context, and the abelian category of exponential Nori motives can be defined as the full subcategory of M (A 1 k ) formed by the objects which have no global cohomology.

Constructible t-structure

Let us conclude by a possible comparison with Arapura's construction from [5].
Let X be a k-variety and consider the following full subcategories of D b (M (X)): c D ≤0 := {A ∈ D b (M (X)) : H k (x * M A) = 0, ∀ x ∈ X, ∀ k > 0}, c D ≥0 := {A ∈ D b (M (X)) : H k (x * M A) = 0, ∀ x ∈ X, ∀ k < 0}. As in [68, 4.6. Remarks] (see also [5, Theorem C.0.12]), we can check that these categories define a t-structure on D b (M (X)). Let ct M (X) be the heart of this t-structure. Then, the functor rat M X induces a faithful exact functor from ct M (X) into the abelian category of constructible sheaves of Q-vector spaces on X. Then, using the universal property of the category of constructible motives M(X, Q) constructed by Arapura in [5], we get a faithful exact functor M(X, Q) → ct M (X). Is this functor an equivalence? If X = Spec k, then both categories are equivalent to the abelian category of Nori motives, so this functor is an equivalence.

Weights

In this section, we will use results on motives and weight structures from [23, 42]. To apply these references directly in our context, we will make use of the fact that, if S is a Noetherian scheme of finite dimension, then Ayoub's category DA ct (S) is canonically equivalent to the category of constructible Beȋlinson motives studied in Cisinski and Déglise's book [25]. This follows from [25, Theorem 16.2.18] and will henceforth be used without further comment. (Note also that, though the authors of [23, 42] have chosen to use Beȋlinson's motives, étale motives could have been used.)

Continuity of the abelian hull

Remember that, in chapter 5 of Neeman's book [63], there are four constructions of the abelian hull of a triangulated category. The first one gives a lax 2-functor from the 2-category of triangulated categories to that of abelian categories, but the other three constructions give strict 2-functors. If we use the fourth construction, which Neeman calls D(S) (see [63, Definition 5.2.1]), then the following proposition is immediate.
Then the canonical functor 2 − lim −→ i∈I A tr (S i ) → A tr (S) is an equivalence of abelian categories.

Étale realization and ℓ-adic perverse Nori motives

Let S be a Noetherian excellent finite-dimensional scheme, and let ℓ be a prime number invertible over S; we assume that S is a Q-scheme. (By Exposé XVIII-A of [46], the hypotheses above imply Hypothesis 5.1 in Ayoub's paper [9].) Under this hypothesis, Ayoub has constructed an étale ℓ-adic realization functor on DA ct (S), compatible with pullbacks: θ f : f * • R ét Y → R ét X • f * . Using results of Gabber (see [37], and also sections 4 and 5 of Fargues's article [34]), we can construct an abelian category P(S, Q ℓ ) of ℓ-adic perverse sheaves on S, satisfying all the usual properties. In particular, we get a perverse cohomology functor p H 0 ℓ := p H 0 • R ét S : DA ct (S) → P(S, Q ℓ ). Definition 6.3. Let S be as above. The abelian category of ℓ-adic perverse motives on S is the abelian category M (S) ℓ := A tr (DA ct (S), p H 0 ℓ ). By construction, the functor p H 0 ℓ has a factorization DA ct (S) p H 0 M −−−→ M (S) ℓ rat M S,ℓ −−−→ P(S, Q ℓ ) where rat M S,ℓ is a faithful exact functor and p H 0 M is a homological functor. By the universal property of M (S) ℓ , we also get pullback functors between these categories as soon as the pullback functor between the categories of ℓ-adic complexes preserves the category of perverse sheaves. We will use the following important fact: if we fix a base field k of characteristic 0 and only consider schemes that are quasi-projective over k, then the main theorem (stated in Subsection 5.1) stays true for the categories M (S) ℓ . Of course, we have to replace D b ct (S) and the Betti realization functor by D b c (S, Q ℓ ) and the étale realization functor in all the statements. Indeed, the proof of the main theorem, and of the statements that it uses, still works if we use the ℓ-adic étale realization instead of the Betti realization.
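The factorization of the ℓ-adic perverse cohomology functor just described can be displayed as follows (our transcription of the factorization in the text):

```latex
{}^{p}H^{0}_{\ell} \;:\; DA_{ct}(S)
\xrightarrow{\;{}^{p}H^{0}_{\mathscr{M}}\;} \mathscr{M}(S)_{\ell}
\xrightarrow{\;\operatorname{rat}^{\mathscr{M}}_{S,\ell}\;} P(S,\mathbb{Q}_{\ell}),
```

with the first functor homological and the second faithful and exact.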
The only result that requires a slightly different proof is Lemma 3.13: we have to show that the ℓ-adic realization of Ayoub's logarithmic motive L og ∨ n is the local system used in Beilinson's construction of the unipotent nearby cycle functor (see 1.1 and 1.2 of [20] or Definition 5.2.1 of [61]). As in the proof of Lemma 3.13, it suffices to check this for n = 1, and then it follows from Lemma 11.22 of [9].

Mixed horizontal perverse sheaves

Let k be a field and S be a k-scheme of finite type. Suppose that k is finitely generated over its prime field. We also fix a prime number ℓ invertible over S. The category D b m (S, Q ℓ ) of mixed horizontal Q ℓ -complexes and its perverse t-structure with heart P m (S, Q ℓ ) (the category of mixed horizontal ℓ-adic perverse sheaves on S) were constructed in Huber's article [43] (see also [61, section 2]). We recall the definition quickly and refer to [43] and [61] for the details. First we consider the category D b h (S, Q ℓ ) of horizontal complexes on S, which is by definition the 2-colimit of the categories D b c (X , Q ℓ ), where X runs over all flat finite type models of S over regular subalgebras A of k that are of finite type over Z and have k as their fraction field. There is an obvious functor η * : D b h (S, Q ℓ ) → D b c (S, Q ℓ ), which is triangulated and conservative, and a perverse t-structure on D b h (S, Q ℓ ) that is characterized by the fact that η * is t-exact. Also, the functor η * is fully faithful on the heart of this t-structure ([61, Proposition 2.6.2]). We say that an object of D b h (S, Q ℓ ) is mixed if it extends to a complex K on a model X of S as before such that all the (ordinary) cohomology sheaves of K are successive extensions of punctually pure sheaves in the sense of [30]. The category D b m (S, Q ℓ ) of mixed horizontal complexes is the full subcategory of D b h (S, Q ℓ ) whose objects are mixed complexes.
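In display form, the definition of horizontal complexes recalled above reads (our notation for the index of the colimit):

```latex
D^{b}_{h}(S,\mathbb{Q}_{\ell}) \;:=\;
2\text{-}\varinjlim_{(A,\,\mathscr{X})} D^{b}_{c}(\mathscr{X},\mathbb{Q}_{\ell}),
```

where the colimit is taken over the regular subalgebras A of k that are of finite type over Z with fraction field k, and the flat finite-type models 𝒳 of S over A.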
The perverse t-structure on D b h (S, Q ℓ ) restricts to a t-structure on D b m (S, Q ℓ ), whose heart is the category P m (S, Q ℓ ) of mixed horizontal perverse sheaves; this last category is a full subcategory of the heart of the perverse t-structure on D b h (S, Q ℓ ), so η * induces a fully faithful functor P m (S, Q ℓ ) → P(S, Q ℓ ). Now we want to show that the realization functor rat M S,ℓ : M (S) ℓ → P(S, Q ℓ ) factors through the fully faithful functor P m (S, Q ℓ ) → P(S, Q ℓ ). Using the definition of mixed horizontal ℓ-adic complexes, we immediately get the following corollary. Proof. This follows from the facts that DA ct (S) is generated by the Tate twists of motives of smooth S-schemes (see Definition 15.1.1 and Proposition 15.1.4 of [25]) and that mixed horizontal complexes are preserved by direct images and Tate twists (see [43, Proposition 3.2] for direct images; the stability by Tate twists is easy). We will also denote the resulting faithful exact functor M (S) ℓ → P m (S, Q ℓ ) by rat M S,ℓ . Remark 6.8. Suppose that k is not necessarily finitely generated over its prime field. We define D b m (S, Q ℓ ) as the 2-colimit of the categories D b m (S ′ , Q ℓ ), for S ′ a model of S over a finitely generated subfield of k. This category inherits a perverse t-structure from the perverse t-structures on the D b m (S ′ , Q ℓ ), whose heart we denote by P m (S, Q ℓ ). The obvious functor P m (S, Q ℓ ) → P(S, Q ℓ ) is only exact faithful in general (not necessarily fully faithful), but the perverse cohomology functor p H 0 ℓ : DA ct (S) → P(S, Q ℓ ) still factors through this functor as in Corollary 6.6 (by Theorem 6.4), so we get a faithful exact realization functor M (S) ℓ → P m (S, Q ℓ ).

Continuity for perverse Nori motives

Like the triangulated category of motives, the category of perverse Nori motives satisfies a continuity property. Proposition 6.9. Let S be a scheme and ℓ be a prime number satisfying the conditions of Subsection 6.2.
We assume that S = lim ←− i∈I S i , where (S i ) i∈I is a directed projective system of schemes satisfying the same conditions as S, and in which the transition maps are affine. We also assume that the pullback by any transition map S i → S … Proof. This follows from Proposition 6.1 and Theorem 6.4.

Corollary 6.10. Let S and ℓ be as above, and suppose also that S is integral. Then, if η is the generic point of S, the canonical exact functor 2 − lim −→ U M (U ) ℓ → M (η) ℓ , where the limit is taken over all nonempty affine open subschemes of S and where the image of K U ∈ Ob M (U ) ℓ is K U,η [− dim S], is an equivalence of categories. Proof. By Proposition 6.9, it suffices to check that the similar functor 2 − lim −→ U P(U, Q ℓ ) → P(η, Q ℓ ) is faithful. Let K be an object of 2 − lim −→ U P(U, Q ℓ ) whose image in P(η, Q ℓ ) is 0, and let U be a nonempty open affine subscheme of S such that K comes from an object K ′ of P(U, Q ℓ ). After shrinking U (which does not change K), we may assume that K ′ [− dim S] is a local system. Then the condition K ′ η [− dim S] = 0 implies that this local system is zero, hence that K = 0.

Comparison of the different categories of perverse Nori motives

In the next proposition, we compare the ℓ-adic definition of perverse motives with the one used previously and obtained via the Betti realization.

are, which is equivalent to the surjectivity of ρ ℓ (N |U ) → ρ ℓ (M |U ) and ρ ℓ (Φ f N ) → ρ ℓ (Φ f M ). We have a similar statement for ρ σ . As dim(S − U ) < dim(S), we can use the induction hypothesis to reduce to the case S = U . It suffices to check the result on an étale cover of S, so we may assume that S has a rational point x.

We fix a field k of characteristic zero and a quasi-projective scheme S over k. We first define weights via the ℓ-adic realizations. Definition 6.13. Let w ∈ Z. Let K be an object of M (S). We say that K is of weight ≤ w (resp. ≥ w) if rat M S,ℓ (K) ∈ Ob(P m (S, Q ℓ )) is of weight ≤ w (resp.
≥ w) for every prime number ℓ. We say that K is pure of weight w if it is both of weight ≤ w and of weight ≥ w. In Proposition 6.18, we will give a more intrinsic definition of weights that does not use the realization functors. Definition 6.14. A weight filtration on an object K of M (S) is an increasing filtration W • K on K such that W i K = 0 for i small enough, W i K = K for i big enough, and W i K/W i−1 K is pure of weight i for every i ∈ Z. The next result follows immediately from the similar result in the categories of mixed horizontal perverse sheaves (see Proposition 3.4 and Lemma 3.8 of [43]). Proposition 6.15. Let K, L be objects of M (S), and let w ∈ Z. (i) If K is of weight ≤ w (resp. ≥ w), so is every subquotient of K. (ii) If K is of weight ≤ w and L is of weight ≥ w + 1, then Hom M (S) (K, L) = 0. Recall that, if A and B are objects of an abelian category endowed with increasing filtrations (F i A) i∈Z and (F i B) i∈Z , then a morphism u : A → B is called compatible (resp. strictly compatible) with the filtrations if, for every i ∈ Z, we have u(F i A) ⊂ F i B (resp. u(F i A) = u(A) ∩ F i B).

Application of Bondarko's weight structures

Let S be as in the previous subsection. We will now make use of Bondarko's Chow weight structure on DA ct (S). Let Chow(S) be the full subcategory of DA ct (S) whose objects are direct factors of finite direct sums of objects of the form f ! Q X (d)[2d], with f : X → S a proper morphism from a smooth k-scheme X to S and d ∈ Z. Then, as shown in [42, Theorem 3.3] (see also [23, Theorem 2.1]), there exists a unique weight structure on DA ct (S) with heart Chow(S) (see [42, Definition 1.5] or [23, Definition 1.5] for the definition of a weight structure). Let K be an object in M (S). Given an object (L, α : u i (L) → K) in the slice category M (S) w≤i /K, we can consider the subobject Im α of K and define W i K to be the union of all such subobjects in K, that is, we set W i K to be the sum of the images Im α over all such (L, α). Then K has strict support Z.
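In symbols, Definition 6.14 and the candidate filtration just constructed read as follows (our displayed paraphrase of the definitions above):

```latex
W_{i}K \;=\; \sum_{(L,\;\alpha : u_{i}(L)\to K)} \operatorname{Im}\alpha \;\subseteq\; K,
\qquad
W_{i}K = 0 \ (i \ll 0), \quad
W_{i}K = K \ (i \gg 0), \quad
W_{i}K / W_{i-1}K \ \text{pure of weight } i.
```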
Indeed, this follows from the similar result for perverse sheaves, which follows from [19, 4.3.2] (note that the proof of this result does not use the hypothesis that L is irreducible). Proposition 6.23. (Compare with [19, 5.3.8].) Let K be an object of M (S), and suppose that K is pure of some weight. Then we can write K = ⊕ Z K Z , where the sum is over all integral closed subschemes Z of S, each K Z is an object of M (S) with strict support Z, and K Z = 0 for all but finitely many Z.

generated over Q. So it suffices to show the theorem for k finitely generated over Q. But then we can embed k into C, and the conclusion follows from [45, Theorem 10.2.7]. Definition 6.25. Let K be an object of D b M (X) and w ∈ Z. We say that K is of weight ≤ w (resp. of weight ≥ w, resp. pure of weight w) if, for every i ∈ Z, the perverse motive H i K is of weight ≤ w + i (resp. of weight ≥ w + i, resp. pure of weight w + i). Proof. By Lemma 4.5 of [69], this follows from the existence of the weight filtration and the fact that it is strictly compatible with morphisms of M (S), and from the semisimplicity of pure objects of M (S). Proof. (i) We apply part II of Theorem 4.3.2 of [21] to the triangulated category D b M (S) and the full subcategory A of complexes of weight 0. This subcategory is stable by finite coproducts and direct summands, and it generates D b M (S). Indeed, to prove the second statement, it suffices to show that the triangulated subcategory generated by A contains P(S); but every perverse motive is a successive extension of pure perverse motives (by the existence of the weight filtration), and, if K is a pure perverse motive, then some shift of K is in A . By Theorem 4.3.2 of [21], there exists a weight structure on D b M (S) with heart A if and only if, for all objects K, L of A and every integer n > 0, we have Hom D b M (S) (K, L[n]) = 0.
As the functor Hom is cohomological in each variable, we may assume that K and L are concentrated in one degree, so that there exist objects A and B that are pure of respective weights i and j such that

Proof. Let us say that L ∈ D b (M (S)) is pure of weight w if H i L is pure of weight w + i for every i ∈ Z. For such an L, by Corollary 6.12, it follows from the Weil conjectures proved by Deligne in [30] that f M * L is pure of weight w (see the remark after [43, Definition 3.3]). Hence, Proposition 6.20 ensures that f M * j M ! * K is pure of weight w. This gives the conclusion. In particular, this provides (for geometric variations of Hodge structures) an arithmetic proof of Zucker's theorem [78, Theorem p. 416] via reduction to positive characteristic and to the Weil conjectures [30, Théorème 2]. More generally, in higher dimension:

Corollary 6.29. Let k be a field embedded into C. Let X be an irreducible proper k-variety and L be a Q-local system on a smooth dense open subscheme U of X of the form L = R w g * Q V where g : V → U is a smooth proper morphism and w ∈ Z is an integer. Then, the intersection cohomology group IH i (X, L ), for i ∈ Z, is canonically the Betti realization of a Nori motive over k which is pure of weight i + w. In particular, IH i (X, L ) carries a canonical pure Hodge structure of weight i + w. Proof. Let d be the dimension of X, π : X → Spec(k) be the structural morphism and j be the inclusion of U in X. As in Corollary 5.7, IH i (X, L ) is the Betti realization of the Nori motive H i−d (π M * j M ! * H w+d (g M * Q M V )), which is pure of weight w + i by Theorem 6.28.

Lemma 2 provides a contravariant 2-functor Liss H * M : Liss (Sch/k) → TR with Liss H * M (X) = D b (M (X)) and such that ( p H 0 M , θ DA ) and (rat M , θ M ) are 1-morphisms of 2-functors. For every smooth morphism f : X → Y we have a natural transformation

are commutative. With these natural transformations, the functors (f M ! , f * M ) form a pair of adjoint functors.
Proposition 2.7. The fibered category M → AffEt/S is a stack for the étale topology.

Remark 2.8. There is a difference between the representation p H 0 • B used here and the representation used in [48, 7.2-7.4] (see [48, Remark 7.8]) Définition 2.1.34] or the base change axiom Der 3 of [26, Definition 1.11]).

(3) Let A and B be small categories. Given an object a ∈ A we denote by a : B → A × B the functor which maps b ∈ B to the pair (a, b). The A-skeleton of an object M in D(A × B) is defined to be the functor A op → D(B) which maps an object a in A to the object a * M of D(B). This construction gives the A-skeleton functor D(A × B) → Hom(A op , D(B)).

Lemma 3.1. Let M ∈ D( ). Then, we have a morphism of exact triangles

Let us recall [6, Lemma 1.4.8]. Note that the functors j * : DA(X, I) → DA(U, I) and j * : DA(U, I) → DA(X, I) used below are induced by the functoriality of the categories of presheaves on diagrams of schemes (see [7, §4.5] for details).

Lemma 3.3. Let I be a small category and j : U ֒→ X be an open immersion. Assume that we have an exact triangle M → j * j * M → C(M ) +1 −−→ for every given object M ∈ DA(X, I). Then, for every morphism α : M → N in DA(X, I) there exists one and only one morphism C(M ) → C(N ) such that the square
Then 1 × is the category (1, 0, 1) Definition 3. 10 . 10Let Ξ f : DA(X) → DA(X) be the functor defined by Ξ f (−) := (1, 0) * Cof(Σ f (−)). Proposition 3 . 12 . 312There are exact triangles Recall that in Subsection 3.2 we fixed an object L in DA(A 1 k , 1) that lifts the morphism Q(0) → j * L og ∨ obtained as the composition of the adjunction morphism Q(0) → j * Q(0) and the image under j * of the unit Q(0) → L og ∨ of the commutative associative unitary algebra L og ∨ . Let L P the image in D(X, 1) of L . Using this object, we can perform the same constructions as in Subsection 3.2 and Subsection 3.3 using the derivator D(X, −) to obtain functors Ξ P f (−), Ω P f (−) : D(X) → D(X) and Φ P f (−) : D(X) → D(X σ ) and four exact triangles: the two triangles These four functors and the associated exact sequences are compatible with the various functors and exact triangles constructed in Subsection 3.2, Subsection 3.3 and Subsection 3.4.Now we can prove the following theorem. Theorem 3. 15 . 15Let i : Z ֒→ X be a closed immersion of k-varieties. Then, the functor i M * : M (Z) → M (X) is fully faithful and its essential image is the kernel, denoted by M Z (X), of the exact functor j * M : M (X) → M (U ) where j : U ֒→ X is the open immersion of the complement of Z in X. M . Since the functor i ′M * is conservative (it is faithful exact), we see that an object A in M (X σ ) belongs to Ker v * M if and only if i M * A belongs to Ker u * M . Hence, it is enough to show that i M * : M (X σ ) → M Xσ (X) is an equivalence. the first terms vanish for objects in the kernel of j * M ). This concludes the proof. Proof of Theorem 3.15. Using Proposition 2.7, we may assume that X is an affine scheme. Let U be the open complement of Z in X and let f 1 , . . . , f r be elements in Theorem 4. 1 . 1Let i : Z ֒→ X be a closed immersion. 
Then, the functor i M * : D b (M (Z)) → D b (M (X)) is fully faithful and its essential image is is the kernel, denoted by D b Z (M (X)), of the exact functor j * M : D b (M (X)) → D b (M (U )) where j : U ֒→ X is the open immersion of the complement of Z in X.Proof. We know that the essential image of i M *: D b (M (Z)) → D b (M (X)) is contained in D bZ (M (X)) by Theorem 3.15. We now want to prove that the functori M * : D b (M (Z)) → D b Z (M (X))is an equivalence of categories. Note that the obvious t-structure on D b (M (X)) induces a t-structure on D b Z (M (X)), whose heart is the thick abelian subcategory M Z (X) of M (X). ByTheorem 3.15, the functor i M * : M (Z) → M (X) induces an equivalence of categories M (Z) → M Z (X). So, by [18, Lemma 1.4], the functor i M * : D b (M (Z)) → D b Z (M (X)) is an equivalence of categories if and only if, for any A, B in M Z (X) and i 1, and any class u ∈ Ext i M (X) (A, B), there exists a monomorphism B ֒→ B ′ in M Z (X) such that the image of u in Ext i M (X) (A, B ′ ) is 0. Suppose that j : V ֒→ X is an affine open immersion, that A is an object of M (X) and that B is an object of M (V ). Let i 1. Then, we have Lemma 3.16 that the trivial derived functor of the exact functor p Φ M f : M (X) → M (Z) induces a quasi-inverse of i M * : D b (M (Z)) → D b Z (M (X)). Proposition 4.2. Let i : Z ֒→ X be a closed immersion. Then, the functor i M * : D b (M (Z)) → D b (M (X)) admits a left adjoint. an alternating sum of maps given by Proposition 2.5. Then, the left adjoint of D b Z (M (X)) → D b (M (X)) is the functor sending A • to the total complex of C • (A • ). C b dg (A) the dg category of bounded complexes of objects of A and by D b dg (A) the dg quotient of C b dg (A) by the subcategory of acyclic bounded complexes (for a simple construction of the dg quotient see [32, §3.1]). The bounded derived category D b (A) of A is the homotopy category of the dg category D b dg (A). 
We let rep(D^b_dg(A), D^b_dg(B)) denote the category of dg quasi-functors from D^b_dg(A) to D^b_dg(B) (this category is denoted by T(D^b_dg(A), D^b_dg(B)) in [77]). Let us recall the following particular case of [77, Theorem 1].

Proposition 4.4. Let A, B be abelian categories and let F, G ∈ rep(D^b_dg(A), D^b_dg(B)) be dg quasi-functors. Assume that the induced triangulated functors F, G : D^b(A) → D^b(B) are t-exact for the classical t-structures. Then F and G are canonically isomorphic to the functors induced by the exact functors H^0 F : A → B and H^0 G : A → B, and the canonical map

Remark 4.5. Let i : Z ֒→ X be a closed immersion and let f : X → Y be a smooth morphism of quasi-projective k-varieties. By construction, the triangulated functors i_M* and f*_M are dg enhanced. This is also the case for the triangulated functor i*_M : D^b(M(X)) → D^b(M(Z)). Indeed, let j : U ֒→ X be the open immersion of the complement of Z in X and fix a finite open covering of U by affine open subsets. Let D^b_{dg,Z}(M(X)) be the dg full subcategory of D^b

We have i*_P N = M[2]. But i is the complement of an open affine embedding, so i*_P is of perverse cohomological amplitude [−1, 0] by [19, 4.1.10]; hence M = 0.

is commutative. Given a complex M• of perverse motives on X = A^1 × Y, let H_M(M•) be the mapping fiber of the morphism F_M(M•) → G_M(M•) of complexes of perverse motives on X. We get a triangulated functor

These functors are t-exact and the counit of the adjunction (J_P!, J*_P) induces a natural transformation from the second one to the first one. Hence the functor (42) induces an exact endofunctor H″_M of M(A^1 × Y), together with a natural transformation p Log_M (a′)*_M ∘ G′_M ∘ i_M* → H″_M. But we also have an invertible natural transformation of t-exact functors

canonically isomorphic to the image under rat^M_X of the perverse motivic sheaf j_M!* L[d] := Im(H^0(j_M! L[d]) → H^0(j_M* L[d])).
This implies that IH^i(X, L) := H^{i−d}(X, IC_X(L)) is the Betti realization of the Nori motive H^{i−d}(π_M* j_M!* L[d]), where π : X → Spec k is the structural morphism.

Since this is an exact functor by [8, Proposition 4.9], the universal property ensures the existence of an exact functor ψ^M_f : M(X_η) → M(X_σ) and an invertible natural transformation rat^M_{X_σ} ψ^M_f ≃ ψ_f rat^M_{X_η}. Let d be the dimension of the generic fiber X_η. Since L is motivic, there exists an object L in D^b(M(X_η)) such that L and rat^M_{X_η}(L) are isomorphic. As L[d] is perverse and rat^M_{X_η} is conservative, the complex L[d] belongs to M(X_η). So we conclude that H^i(Ψ_f(L)_x) is the Betti realization of the Nori motive H^{i+1−d}(x* ψ^M_f L[d]).

Theorem 6.2 (see [9], Sections 5 and 10). Denote by D^b_c(S, Q_ℓ) the category of constructible ℓ-adic complexes on S. Then we have a triangulated functor R^ét_S : DA_ct(S) → D^b_c(S, Q_ℓ) for every S and, for every morphism f : S → S′, with S′ satisfying the same hypotheses as S, we have an invertible natural transformation

Corollary 6.5. Let S and ℓ be as at the beginning of this subsection. Then the étale realization functor DA_ct(S) → D^b_c(S, Q_ℓ) factors through a functor DA_ct(S) → D^b_h(S, Q_ℓ).

Corollary 6.6. With the notation of the previous corollary, the essential image of the functor DA_ct(S) → D^b_h(S, Q_ℓ) is contained in the full subcategory D^b_m(S, Q_ℓ). In particular, the perverse cohomology functor pH^0_ℓ : DA_ct(S) → P(S, Q_ℓ) factors through the subcategory P_m(S, Q_ℓ).

Corollary 6.7. The essential image of the realization functor rat^M_{S,ℓ} : M(S)_ℓ → P(S, Q_ℓ) is contained in the subcategory P_m(S, Q_ℓ).

Let i : x → S be the obvious inclusion. As ρ_ℓ(N)[−dim S] and ρ_ℓ(M)[−dim S] are locally constant sheaves on S, the morphism ρ_ℓ(N) → ρ_ℓ(M) is surjective if and only if ρ_ℓ(i* N[−dim S]) → ρ_ℓ(i* M[−dim S]) is, and similarly for ρ_σ.
So we are reduced to the result on the scheme x, which we have already treated.

Corollary 6.12. Let k be a field of characteristic 0 and S a quasi-projective scheme over k. We have a canonical Q-linear abelian category of perverse Nori motives M(S), together with a cohomological functor pH^0_M : DA_ct(S) → M(S), an ℓ-adic realization functor rat^M_{S,ℓ} : M(S) → P(S, Q_ℓ) for every prime number ℓ, and a Betti realization functor rat^M_{S,σ} : M(S) → P(S) for every embedding σ : k → C; and it has a formalism of the four operations, duality, and unipotent nearby and vanishing cycles compatible with all these operations.

Corollary 6.16. A weight filtration on an object of M(S) is unique if it exists, and morphisms of M(S) are strictly compatible with weight filtrations. In particular, if an object of M(S) has a weight filtration, then so do all its subquotients.

Definition 6.21. Let Z be a closed integral subscheme of S, and denote the immersion Z → S by i. We say that an object K of M(S) has strict support Z if K_{|S−Z} = 0 and if, for every nonempty open subset j : U → Z, the adjunction morphism K → (ij)_M* (ij)*_M K is injective and induces an isomorphism between K and (ij)_M!* (ij)*_M K.

Remark 6.22. For example, if K_{|S−Z} = 0 and if there exists a smooth dense open subset j : U → Z such that rat^M_U(K_U)[−dim U] (or any rat^M_{U,ℓ}(K_U)[−dim U] for some prime number ℓ) is locally constant and K_Z = j_M!*(K_{|U})

Corollary 6.26. Let K, L be objects of M(S). If K and L are pure of respective weights i and j, then Ext^r_{M(S)}(K, L) = 0 if i < j + r.

Corollary 6.27. (i) There exists a unique weight structure (see [23, Definition 1.5]) on D^b M(S) whose heart is the full subcategory of complexes of weight 0.

(ii) Let K, L be objects of D^b M(S) and w ∈ Z. If K is of weight ≤ w and L is of weight > w, then Hom_{D^b M(S)}(K, L) = 0.
(iii) The weight structure of (i) is transversal to the canonical t-structure on D^b M(S) in the sense of Definition 1.2.2 of [22].

(iv) If K ∈ Ob D^b M(S) is pure of some weight, then K ≃ ⊕_{i∈Z} H^i K[−i].

Proof. (i) Take K = A[−i] and L = B[−j]. Then Hom_{D^b M(S)}(K, L[n]) = Ext^{n+i−j}_{M(S)}(A, B), which is zero by Corollary 6.26.

(ii) We have Hom_{D^b M(S)}(K, L) = Hom_{D^b M(S)}(K[−w], L[−w]). As K[−w] is of weight ≤ 0 and L[−w] is of weight ≥ 1, the statement follows from Proposition 1.3.3(1) of [21].

(iii) This follows immediately from the existence of the weight filtration on objects of M(S).

(iv) Let w be the weight of K. Let i ∈ Z. Then τ_{≤i} K and τ_{>i} K are pure of weight w, so Hom_{D^b M(S)}(τ_{>i} K, τ_{≤i} K[1]) = 0 by (iii), so the exact triangle τ_{≤i} K → K → τ_{>i} K → [+1] splits. This implies the statement.

Theorem 6.28. Let f : X → S be a proper morphism of quasi-projective k-varieties with X irreducible. Let j : U → X be an open immersion, and let K be a perverse motive on U. If K is pure of weight w, then H^i(f_M* j_M!* K) is a motivic perverse sheaf that is pure of weight w + i.

Lemma 1.4. Let Q and A be additive categories. Then, for every additive functor T : Q → A, the categories A_qv(Q, T) and A_ad(Q, T) are canonically equivalent.

Proof. To see this, it suffices to check that the factorization

Proposition 1.5. Let S be a triangulated category, A an abelian category and H : S → A a homological functor. Then the three abelian categories A_qv(S, H), A_ad(S, H) and A_tr(S, H) are canonically equivalent.

Proof. We have seen in Lemma 1.4 that A_qv(S, H) and A_ad(S, H) are canonically equivalent. Let us prove that so are A_ad(S, H) and A_tr(S, H). It suffices to check that the factorization S → A_tr(S, H) → A satisfies the universal property that defines A_ad(S, H). Consider a factorization of the additive functor H

Definition 2.1. Let X be a k-variety.
The abelian category of perverse motives is the abelian category M(X) associated, by the construction of Section 1, with the Betti realization DA_ct(X) → D^b_c(X, Q) constructed by Ayoub in [8] and the perverse cohomology functor pH^0 : D^b_c(X, Q) → P(X).

Given i, j ∈ I, we denote by u_ij : U_ij := U_i ×_U U_j → U the fiber product and by p_ij : U_ij → U_i, p_ji : U_ij → U_j the projections. See [19, Proposition 3.2.2, Théorème 3.2.4]. Let U = (u_i : U_i → U)_{i∈I} be a covering in the site AffEt/S. Let us first prove (1). Let A, B be objects in M(U) and let K, L be their underlying perverse sheaves. Consider the canonical commutative diagram

(See [6, Definition 2.1.34] or [39, Definitions 7.2.8 & 15.1.1].) This functor is conservative. Moreover, if A = 1, it is full and essentially surjective. (See the axioms Der 2 and Der 5 of [26, Definition 1.11].) We denote by □ = 1 × 1 the category

It follows from (the dual statement of) [6, Lemme 1.4.8] that the functor

Remark 3.9. The square sq^* u_♯ v^* i^* Θ_f is cartesian. This can be deduced from the basic properties of cartesian squares (see e.g. [39, Proposition 15.1.6] or [39, Proposition 15.1.10]).

is an isomorphism. Let us consider the Kummer natural transformation e_K : Id(−)(−1)[−1] → Id(−) in Betti cohomology (see [7, Définition 3.6.22]). By [70, 5.1 Lemma], the local system E_1 fits into an exact triangle

To do this, we first use the vanishing cycles functor to show that the (covariant) 2-functor Imm H_M* admits a global left adjoint Imm H*_M (we recall that a global left adjoint is unique up to unique isomorphism; we refer to [6, Définition 1.1.18] for the definition). Then we show that the 2-functors Liss H*_M and Imm H*_M can be glued into a 2-functor H*_M. By [6, Proposition 1.1.17], to show that Imm H_M* admits a global left adjoint Imm H*_M, it suffices to show that for every closed immersion i : Z ֒→ X the functor i_M* admits a left adjoint; this in turn is proved in Proposition 4.2.

4.1. Inverse image by a closed immersion

Proposition 6.1.
Let S be a triangulated category, and suppose that we have an equivalence of triangulated categories S ≃ 2−lim_{i∈I} S_i, where I is a small filtered category.

We have a continuity theorem for the categories of étale motives, proved in [13, Corollaire 1.A.3] and [9, Corollaire 3.22] (see also [25, Proposition 15.1.6]).

Theorem 6.4. Let S be a Noetherian scheme of finite dimension. Suppose that we have S = lim_{i∈I} S_i, where all the S_i are finite-dimensional Noetherian schemes and all the transition maps S_i → S_j are affine. Then the canonical functor 2−lim_{i∈I} DA_ct(S_i) → DA_ct(S) is an equivalence of monoidal triangulated categories.

Suppose that this pullback preserves the category of perverse sheaves, and that there exists a ∈ Z such that, if f_i : S → S_i is the canonical map, then f*_i[a] preserves the category of perverse sheaves for every i ∈ I. Under these hypotheses, the functors f*_i[a] induce a functor 2−lim_{i∈I} M(S_i)_ℓ → M(S)_ℓ, and this functor is full and essentially surjective. If moreover the canonical exact functor 2−lim_{i∈I} P(S_i, Q_ℓ) → P(S, Q_ℓ) induced by the f*_i[a] is faithful, then the canonical functor 2−lim_{i∈I} M(S_i)_ℓ → M(S)_ℓ is an equivalence of abelian categories.

This is the abelian category denoted by A(S) in [63, Chapter V].

Consider the locally free O_X-module Ω_f. As Ω_f has rank d, we get an isomorphism

is commutative.

Acknowledgments. The present paper was partly written while the first author was a Marie-Curie FRIAS COFUND fellow at the Freiburg Institute for Advanced Studies.

Step (d). It remains to prove the proposition in the case of the diagram in which s is the zero section and p is the projection. Let f : A^1_S = A^1 × S → A^1 be the first projection and a : G_m × S → A^1 × S the inclusion. We set q = p ∘ a. Given a motive B ∈ M(A^1_S), consider the connecting morphism B → s_M* obtained from the exact sequence (27). By adjunction, we get a morphism s*_M(B) → p Log^M_f(a*_M B)[1] in the category D^b(M(S)).
Taking B to be the perverse motive B = p*_M[1]A, we get after a shift a morphism. As both objects are concentrated in degree zero, the above morphism is actually a morphism in the abelian category M(S). Moreover, it is an isomorphism, since it is one on the underlying perverse sheaves. Moreover, we know that the square is commutative. Hence, to conclude, it suffices to show that the canonical morphism of perverse sheaves

p Log^P_f(q*_P[1](K)) → K    (37)

lifts to a morphism p Log^M_f(q*_M[1](A)) → A in the abelian category M(S). By construction of the exact functors p Log^M_f and q*_M[1], this is an application of property P2, since (37) is the Betti realization of a natural transformation Log_f(q^*(−)) → Id in the triangulated category of étale motives on S.

Lemma 4.8. Consider a commutative diagram in which i, s are closed immersions and f, g, h are smooth morphisms. Then the diagram

The lemma follows from the analogous statement for perverse sheaves. Indeed, let d be the relative dimension of h. It suffices to show that the diagram is commutative. Since all functors in this diagram are dg enhanced and t-exact for the classical t-structures, by Proposition 4.4 it suffices to check the commutativity of the diagram induced on the hearts. This can be checked on the underlying perverse sheaves.

Step 3: To construct the 2-isomorphisms (31) in the general case, we can decompose the commutative square (30) as follows, where i″, i‴ are closed immersions and f″ is a smooth morphism. Then, using the iso-exchange constructed in Step 1, the 2-isomorphism of Step 2 and the connection 2-isomorphisms of the 2-functor Imm H*_M, we get (31) as the composition

Then the following diagram is commutative, in which i, i′ are closed immersions and all other morphisms are smooth. Then the diagram is commutative.
Since all functors in this diagram are dg enhanced and t-exact for the classical t-structures, by Proposition 4.4 it suffices to check the commutativity of the diagram induced on the hearts. This can be checked on the underlying perverse sheaves.

Consider a commutative diagram (38) in which i, s, i′, s′ are closed immersions and f, f′, f″ are smooth morphisms. We have to prove that the diagram is commutative. Let us decompose (38) in the following two ways. Therefore the desired compatibility is a consequence of Proposition 2.3, Lemma 4.9 and Lemma 4.8.

• Vertical composition of squares. Consider a commutative diagram (40) in which i, i′, i″ are closed immersions and f, g, f′, g′ are smooth morphisms. We have to prove that the diagram is commutative. We can refine (40) into the following commutative diagrams.

As before, (Sch/k) denotes the category of quasi-projective k-varieties. Recall that a contravariant 2-functor H satisfies:

(1) H(∅) = 0 (that is, H(∅) is the trivial triangulated category).

(2) For every morphism f : X → Y in (Sch/k), the functor f^* : H(Y) → H(X) admits a right adjoint. Furthermore, for every immersion i the counit of the adjunction is invertible.

(3) For every smooth morphism f, the functor f^* admits a left adjoint f_♯. Furthermore, for every cartesian square

The main theorem of [6] says that these data can be expanded into a complete formalism of the four operations (see [6]).

This functor is right adjoint to the functor p*_M. Let η_P : id → p_P* p*_P and δ_P : p*_P p_P* → id be the unit and the counit of the adjunction between p*_P and p_P*. It suffices to lift η_P and δ_P to natural transformations

These are isomorphisms and the first one is the identity (see [70, Section 3.1]). Note that the fact that these natural transformations are isomorphisms will follow automatically from the conservativity of the functors rat^M_X.

We first construct η_M. Let us first show that G_M ∘ p*_M = 0. As the functors rat^M_X are conservative, it suffices to prove that the analogous vanishing holds on the underlying perverse sheaves; so, applying π_P*, we get an exact triangle, and this implies the desired result.
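For the record (standard category theory, stated here because the lifting argument above reduces to it): once the unit η_M and counit δ_M are constructed, the adjunction between p*_M and p_M* amounts to the triangle identities, which can be checked after applying the conservative realization functors. A sketch:

```latex
% Triangle identities for an adjunction (p^*_M, p_{M*}) with unit
% \eta_M : \mathrm{id} \to p_{M*} p^*_M and counit
% \delta_M : p^*_M p_{M*} \to \mathrm{id}:
(\delta_M \, p^*_M) \circ (p^*_M \, \eta_M) = \mathrm{id}_{p^*_M},
\qquad
(p_{M*} \, \delta_M) \circ (\eta_M \, p_{M*}) = \mathrm{id}_{p_{M*}}.
% Both identities may be verified on the underlying perverse sheaves,
% since the functors rat^M_X are conservative and faithful exact.
```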
Now that we know that these functors and p*_M[1] are exact and equal to the derived functors of their H^0, and that the natural transformations are also defined by extending their action on the H^0's, it suffices to check that they are equal on these H^0's. But this follows from the analogous result for the category of perverse sheaves.

Step 3: We can now use the Brown Representability Theorem to see that the proposition is true more generally if f is the projection p :

Step 4: By Steps 1 and 3, the proposition is true if f is an affine morphism. Indeed, if f is affine, then we can write f = p ∘ i, where i is a closed immersion and p : E → Y is a vector bundle on Y.

Step 5: We now consider the case of an arbitrary morphism f : X → Y in (Sch/k). By Jouanolou's trick (cf. [52]), there exist a vector bundle E → X and an affine E-torsor p : X̃ → X. As p is affine, we know the proposition for p by Step 3. Moreover, the unit id → p_M* p*_M is an isomorphism; indeed, it suffices to show this after restricting to an open covering of X, so we may assume that the morphism p is isomorphic to the second projection A^n × X → X, and then the result follows from point (3) of the proposition. As the unit of the adjunction (p*_M, p_M*) is an isomorphism, the left adjoint p*_M is fully faithful. Let g = f ∘ p. As X̃ is affine, the morphism g is affine. Also, we show as before that the unit id → p_P* p*_P is an isomorphism, so we get an isomorphism f_P* ≃ f_P* p_P* p*_P ≃ g_P* p*_P. We set f_M* = g_M* p*_M; by the calculation we just did, this satisfies condition (2). It remains to show that f_M* is right adjoint to f*_M. Let K ∈ Ob D^b M(Y) and L ∈ Ob D^b M(X). Then we have isomorphisms

obtained from θ^M_f by adjunction, is invertible; (2) for every cartesian square

Proof. Assertion (2) is an immediate consequence of (1), since the functor rat^M_Y is conservative. We deduce the proposition from Proposition 5.2 using Verdier duality.
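The isomorphisms invoked at the end of Step 5 can be spelled out. Assuming only the full faithfulness of p*_M established above and the adjunction (g*_M, g_M*) for the affine morphism g, the chain is the standard one:

```latex
% For K in D^b M(Y) and L in D^b M(X), with g = f \circ p and
% f_{M*} := g_{M*} \, p^*_M as in Step 5:
\operatorname{Hom}(f^*_M K, L)
  \cong \operatorname{Hom}(p^*_M f^*_M K,\; p^*_M L)  % p^*_M fully faithful
  \cong \operatorname{Hom}(g^*_M K,\; p^*_M L)        % g^*_M = p^*_M \circ f^*_M
  \cong \operatorname{Hom}(K,\; g_{M*}\, p^*_M L)     % adjunction (g^*_M, g_{M*})
  =     \operatorname{Hom}(K,\; f_{M*} L).            % definition of f_{M*}
```

These isomorphisms are natural in K and L, which exhibits f_M* as right adjoint to f*_M.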
Let f : X → Y be a smooth morphism of relative dimension d. Note that f*_P has a left adjoint given by f_P♯.

Let A be an object in D^b(M(X)) and B an object in D^b(M(Y)). Then Proposition 5.2 and Proposition 2.6 provide isomorphisms showing that these form a pair of adjoint functors. Note that the counit Mδ_*♯ of the adjunction is given by the composition (43) and the unit Mη_*♯ by the composition (44), where γ^M_f is the invertible natural transformation of Proposition 5.2. Using the expressions of Mδ_*♯ and Mη_*♯ given in (43) and (44), this follows directly from Proposition 2.6 (2) and Proposition 2.6 (1), which ensure that the diagram commutes.

Note that the isomorphism D^P_{A^1_X} s_P* ≃ s_P* D^P_X exists in the category of (constructible) étale motives. Therefore, the compatibility of the Betti realization with the four operations (see [8, Théorème 3.19]) implies, by the universal property of the categories of perverse motives, that this isomorphism lifts to an isomorphism D^M. As a consequence, we get an isomorphism showing that p_M♯ s_M* is an equivalence of categories; this concludes the proof of Theorem 5.1.

Complement to the main theorem

The following proposition complements Theorem 5.1.

Proposition 5.4. Let f : X → Y be a morphism of quasi-projective k-varieties. Then the natural transformations

Proof. We first treat the case S = Spec k. If k can be embedded in C, then (ii) follows from Huber's construction of mixed realizations in [44], and (i) follows from (ii). In the general case, (i) follows from the case where k can be embedded in C and from Proposition 6.9, applied to the family of subfields of k that can be embedded in C. Now we treat the case of a general k-scheme S. As in the first case, (i) follows from (ii) and from Proposition 6.9. So suppose that we have an embedding σ : k → C. We prove the desired result by induction on the dimension of S.
The case dim S = 0 has already been treated, so we may assume that dim S > 0 and that the result is known for all schemes of lower dimension.

We denote by M ↦ [M] the canonical functor DA_ct(S) → A_tr(DA_ct(S)); as DA_ct(S) is a triangulated category, this is a fully faithful functor. Let X be an object of A_tr(DA_ct(S)). By construction of A_tr(DA_ct(S)), there exists a morphism N → M in DA_ct(S) such that X is the cokernel of

This object is of weight ≤ w (resp. ≥ w + 1) in our sense, and also in the sense of [45] when this applies. In general, using the Chow weight structure of Bondarko, we can find an exact triangle such that W_w K is of weight ≤ w and K/W_w K is of weight ≥ w + 1. This defines a weight filtration on K.

Weights and the related weight filtration have so far been defined and constructed for perverse motives via the ℓ-adic realizations. As we shall see now, we can also define weights more directly. Let DA_ct(S)_{w≤i} be the full subcategory of DA_ct(S) whose objects are direct factors of successive extensions of objects of Chow(S)[w] with w ≤ i, and consider the abelian category M(S)_{w≤i} := A_ad(DA_ct(S)_{w≤i}, pH^0_ℓ) for some prime number ℓ. It follows from Proposition 6.11 that this category, up to equivalence, does not depend on ℓ. Indeed, the universal property provides a commutative diagram (up to isomorphisms of functors) in which I is the inclusion and J, ̺_ℓ are exact functors. As by construction M(S)_{w≤i} := A_ad(DA_ct(S)_{w≤i})/Ker ̺_ℓ, it suffices to show that Ker ̺_ℓ is independent of ℓ. Let A be an object in A_ad(DA_ct(S)_{w≤i}). Since A belongs to Ker ̺_ℓ if and only if J(A) belongs to Ker ρ_ℓ, our claim follows from Proposition 6.11.

The inclusion DA_ct(S)_{w≤i} ⊆ DA_ct(S) induces a faithful exact functor. This construction is functorial in K (and moreover, using the inclusion of DA_ct(S)_{w≤i} in DA_ct(S)_{w≤i+1}, it is easy to see that it defines a filtration on K).
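To fix conventions (a sketch of the standard notion, modeled on mixed ℓ-adic perverse sheaves; the notation W_w K follows the paragraph above): a weight filtration on an object K of M(S) is a finite increasing filtration whose graded pieces are pure of the corresponding weight.

```latex
% Weight filtration of an object K, with a < b integers:
0 = W_{a} K \subseteq W_{a+1} K \subseteq \cdots \subseteq W_{b} K = K,
\qquad
\mathrm{gr}^{W}_{i} K := W_{i} K / W_{i-1} K
\ \text{pure of weight } i.
```

Uniqueness and strictness are then the content of Corollary 6.16: every morphism f : K → L between objects admitting weight filtrations satisfies f(W_i K) = f(K) ∩ W_i L.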
Proof. As observed in the proof of Proposition 6.17, if M belongs to DA_ct(S)_{w≤i}, then pH^0_M(M) is of weight ≤ i. Hence the functor u_i takes its values in the abelian subcategory of M(S) formed by the objects of weight ≤ i. As a consequence, for (L, α) in the slice category M(S)_{w≤i}/K, the subobject Im α of K is of weight ≤ i and is therefore contained in W_i K.

Conversely, there exists an epimorphism e : pH^0_M(M) ։ K, where M belongs to DA_ct(S). By construction, there is an object A of DA_ct(S)_{w≤i} that fits in an exact triangle A → M → B → [+1] such that B is a direct factor of a successive extension of objects of Chow(S)[w] with w ≥ i + 1. Therefore, since the weight filtration on K is the induced filtration (see Corollary 6.16), we obtain the reverse inclusion. This concludes the proof.

The intermediate extension functor

Recall the definition of the intermediate extension functor, which already appeared in the proof of Corollary 5.7.

Definition 6.19. Let j : S → T be a quasi-finite morphism between quasi-projective k-schemes. We define a functor j_M!* : M(S) → M(T) by j_M!*(K) = Im(H^0(j_M! K) → H^0(j_M* K)).

Note that, as j is quasi-finite, the functor j_M! is right exact and the functor j_M* is left exact. In particular, the functor j_M!* preserves injective and surjective morphisms, but it is not exact in general. Also, the functor j_M!* is exact on the full abelian subcategory of objects that are pure of weight w.

Proof. It suffices to show these statements for mixed ℓ-adic perverse sheaves. The first statement follows from [19, 5.3.2] (more precisely, if j is not affine, it follows from [19, 5.1.14 and 5.3.1]). The second statement follows from [61, Corollary 9.4].

Pure objects

Let us start with the definition of objects with strict support on a given closed subscheme.

Proof. We prove the result by Noetherian induction on S. If dim S = 0, there is nothing to prove. Suppose that dim S ≥ 1, and let j : U → S be a nonempty open affine subset of S.
After shrinking U, we may assume that U is smooth and that rat^M_S(K)[−dim U] is a locally constant sheaf on U. Let w be the weight of K. Then [61, Corollary 9.4] implies that j_M!* splits off the relevant summand. The first summand has strict support U by the remark above, and L_{|U} = 0, so the conclusion follows from the induction hypothesis applied to L_{|S−U}.

Theorem 6.24. Let S be as before, and let w ∈ Z. Let M(S)_w be the full abelian subcategory of M(S) whose objects are motives that are pure of weight w. Then M(S)_w is semisimple.

Proof. By Proposition 6.23, we may assume that S is integral, and it suffices to prove the result for the full subcategory M(S)^0_w of objects in M(S)_w with strict support S itself.

Let η be the generic point of S. By Corollary 6.10, we have a full and essentially surjective exact functor (given by the restriction morphisms) 2−lim_U M(U) → M(η), where the limit is over the projective system of nonempty affine open subsets U of S. For such a U, we denote by M(U)^0_w the full subcategory of M(U) whose objects are motives that are pure of weight w and have strict support U. By Proposition 6.17, the functor above induces a full and essentially surjective functor. This functor is an equivalence of categories, because it has a quasi-inverse, given by j_M!*. So we deduce that the restriction functor M(S)^0_w → M(η)_w is full and essentially surjective. But this functor is also faithful, because the analogous functor on categories of ℓ-adic perverse sheaves is faithful. So M(S)^0_w → M(η)_w is an equivalence of categories, which means that we just need to show the theorem in the case S = η, i.e. if S is the spectrum of a field. Now suppose that S = Spec k. Then, by Proposition 6.9, M(k)_w = 2−lim_{k′} M(k′)_w, where the limit is over all the subfields k′ of k that are finitely

References

Groupes de monodromie en géométrie algébrique. II, Lecture Notes in Mathematics, Vol. 340, Springer-Verlag, Berlin–New York, 1973. Séminaire de Géométrie Algébrique du Bois-Marie 1967–1969 (SGA 7 II), dirigé par P. Deligne et N. Katz. MR 0354657
Théorie des topos et cohomologie étale des schémas. Tome 3, Lecture Notes in Mathematics, Vol. 305, Springer-Verlag, Berlin–New York, 1973. Séminaire de Géométrie Algébrique du Bois-Marie 1963–1964 (SGA 4), dirigé par M. Artin, A. Grothendieck et J. L. Verdier, avec la collaboration de P. Deligne et B. Saint-Donat. MR 0354654

Yves André, Pour une théorie inconditionnelle des motifs, Inst. Hautes Études Sci. Publ. Math. (1996), no. 83, 5–49. MR 1423019

Donu Arapura, The Leray spectral sequence is motivic, Invent. Math. 160 (2005), no. 3, 567–589. MR 2178703

Donu Arapura, An abelian category of motivic sheaves, Adv. Math. 233 (2013), 135–195. MR 2995668

Joseph Ayoub, Les six opérations de Grothendieck et le formalisme des cycles évanescents dans le monde motivique. I, Astérisque (2007), no. 314, x+466 pp. (2008).

Joseph Ayoub, Les six opérations de Grothendieck et le formalisme des cycles évanescents dans le monde motivique. II, Astérisque (2007), no. 315, vi+364 pp. (2008).

Joseph Ayoub, Note sur les opérations de Grothendieck et la réalisation de Betti, J. Inst. Math. Jussieu 9 (2010), no. 2, 225–263.

Joseph Ayoub, La réalisation étale et les opérations de Grothendieck, Ann. Sci. École Norm. Sup. 47 (2014), no. 1, 1–141.

Joseph Ayoub, L'algèbre de Hopf et le groupe de Galois motiviques d'un corps de caractéristique nulle, I, J. Reine Angew. Math. 693 (2014), 1–149.

Joseph Ayoub, L'algèbre de Hopf et le groupe de Galois motiviques d'un corps de caractéristique nulle, II, J. Reine Angew. Math. 693 (2014), 151–226.

Joseph Ayoub, Periods and the conjectures of Grothendieck and Kontsevich–Zagier, Eur. Math. Soc. Newsl. (2014), no. 91, 12–18. MR 3202399

Joseph Ayoub, Motifs des variétés analytiques rigides, Mém. Soc. Math. Fr. (N.S.) (2015), no. 140–141, vi+386 pp. MR 3381140

Luca Barbieri-Viale, Annette Huber, and Mike Prest, Tensor structure for Nori motives, https://arxiv.org/abs/1803.00809v1, 2018.

Luca Barbieri-Viale and Mike Prest, Definable categories and T-motives, to appear in Rend. Sem. Mat. Univ. Padova, 2018.

A. Beilinson, Remarks on Grothendieck's standard conjectures, Regulators, Contemp. Math., vol. 571, Amer. Math. Soc., Providence, RI, 2012, pp. 25–32. MR 2953406

A. A. Beilinson, How to glue perverse sheaves, K-theory, arithmetic and geometry (Moscow, 1984–1986), Lecture Notes in Math., vol. 1289, Springer, Berlin, 1987, pp. 42–51. MR 923134

A. A. Beilinson, On the derived category of perverse sheaves, K-theory, arithmetic and geometry (Moscow, 1984–1986), Lecture Notes in Math., vol. 1289, Springer, Berlin, 1987, pp. 27–41.

A. A. Beilinson, J. Bernstein, and P. Deligne, Faisceaux pervers, Analysis and topology on singular spaces, I (Luminy, 1981), Astérisque, vol. 100, Soc. Math. France, Paris, 1982, pp. 5–171.

M. V. Bondarko, Weight structures vs. t-structures; weight filtrations, spectral sequences, and complexes (for motives and in general), J. K-Theory 6 (2010), no. 3, 387–504.

Mikhail V. Bondarko, Weight structures and 'weights' on the hearts of t-structures, Homology Homotopy Appl. 14 (2012), no. 1, 239–261.

Mikhail V. Bondarko, Weights for relative motives: relation with mixed complexes of sheaves, Int. Math. Res. Not. IMRN (2014), no. 17, 4715–4767. MR 3257549

Utsav Choudhury and Martin Gallauer Alves de Souza, An isomorphism of motivic Galois groups, Adv. Math. 313 (2017), 470–536. MR 3649230

Denis-Charles Cisinski and Frédéric Déglise, Triangulated categories of mixed motives, http://deglise.perso.math.cnrs.fr/docs/2012/DM.pdf, 2013.

Denis-Charles Cisinski and Amnon Neeman, Additivity for derivator K-theory, Adv. Math. 217 (2008), no. 4, 1381–1475. MR 2382732

Mark Andrea A. de Cataldo, The perverse filtration and the Lefschetz hyperplane theorem, II, J. Algebraic Geom. 21 (2012), no. 2, 305–345. MR 2877437

Mark Andrea A. de Cataldo and Luca Migliorini, The perverse filtration and the Lefschetz hyperplane theorem, Ann. of Math. (2) 171 (2010), no. 3, 2089–2113. MR 2680404

Pierre Deligne, Voevodsky's lectures on cross functors, Motivic Homotopy Theory Program, IAS, Princeton, Fall 2001.

Pierre Deligne, La conjecture de Weil. II, Inst. Hautes Études Sci. Publ. Math. (1980), no. 52, 137–252.

Pierre Deligne, À quoi servent les motifs?, Motives (Seattle, WA, 1991), Proc. Sympos. Pure Math., vol. 55, Amer. Math. Soc., Providence, RI, 1994, pp. 143–161. MR 1265528

Vladimir Drinfeld, DG quotients of DG categories, J. Algebra 272 (2004), no. 2, 643–691. MR 2028075

Najmuddin Fakhruddin, Notes of Nori's Lectures on Mixed Motives, TIFR, Mumbai, 2000.

Laurent Fargues, Filtration de monodromie et cycles évanescents formels, Invent. Math. 177 (2009), no. 2, 281–305. MR 2511743

Javier Fresán and Peter Jossen, Exponential motives, http://javier.fresan.perso.math.cnrs.fr/expmot.pdf, 2018.

Peter Freyd, Representations in abelian categories, Proc. Conf. Categorical Algebra (La Jolla, Calif., 1965), Springer, New York, 1966, pp. 95–120. MR 0209333

Ofer Gabber, Notes on some t-structures, Geometric aspects of Dwork theory. Vol. I, II, Walter de Gruyter, Berlin, 2004, pp. 711–734. MR 2099084

Moritz Groth, Derivators, pointed derivators and stable derivators, Algebr. Geom. Topol. 13 (2013), no. 1, 313–374. MR 3031644

Moritz Groth, Introduction to the theory of derivators, book project available on the author's webpage, 334 pp., 2018.

A. Grothendieck, Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas, Inst. Hautes Études Sci. Publ. Math. (1967), no. 32, 361 pp. MR 0238860

Alexandre Grothendieck, Les dérivateurs, edited by M. Künzer, J. Malgoire, G. Maltsiniotis, 1991.

David Hébert, Structure de poids à la Bondarko sur les motifs de Beilinson, Compos. Math. 147 (2011), no. 5, 1447–1462. MR 2834728

Annette Huber, Mixed perverse sheaves for schemes over number fields, Compositio Math. 108 (1997), no. 1, 107–121. MR 1458759

Annette Huber, Realization of Voevodsky's motives, J. Algebraic Geom. 9 (2000), no. 4, 755–799. MR 1775312

Annette Huber and Stefan Müller-Stach, Periods and Nori motives, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, vol. 65, Springer, Cham, 2017. With contributions by Benjamin Friedrich and Jonas von Wangenheim. MR 3618276

Luc Illusie, Yves Laszlo, and Fabrice Orgogozo (eds.), Travaux de Gabber sur l'uniformisation locale et la cohomologie étale des schémas quasi-excellents, Société Mathématique de France, Paris, 2014. Séminaire à l'École Polytechnique 2006–2008. MR 3309086
[Seminar of the Polytechnic School 2006-2008], With the collaboration of Frédéric Déglise, Alban Moreau, Vincent Pilloni, Michel Raynaud, Joël Riou, Benoît Stroh, Michael Temkin and Weizhe Zheng, Astérisque No. 363-364 (2014) (2014). MR 3309086 Perverse, Hodge and motivic realizations of étale motives. Florian Ivorra, MR 3723805Math. Res. Lett. 1526Compos. Math.Florian Ivorra, Perverse, Hodge and motivic realizations of étale motives, Compos. Math. 152 (2016), no. 6, 1237-1285. 48. , Perverse Nori motives, Math. Res. Lett. 24 (2017), no. 4, 1097-1131. MR 3723805 Nori motives of curves with modulus and Laumon 1-motives. Florian Ivorra, Takao Yamazaki, MR 3813515Canad. J. Math. 704Florian Ivorra and Takao Yamazaki, Nori motives of curves with modulus and Laumon 1- motives, Canad. J. Math. 70 (2018), no. 4, 868-897. MR 3813515 Uwe Jannsen, Motivic sheaves and filtrations on Chow groups, Motives. Seattle, WA; Providence, RIAmer. Math. Soc55MR 1265533 (95c:14006Uwe Jannsen, Motivic sheaves and filtrations on Chow groups, Motives (Seattle, WA, 1991), Proc. Sympos. Pure Math., vol. 55, Amer. Math. Soc., Providence, RI, 1994, pp. 245-302. MR 1265533 (95c:14006) Motivic symmetric spectra. J F Jardine, Doc. Math. 5electronicJ. F. Jardine, Motivic symmetric spectra, Doc. Math. 5 (2000), 445-553 (electronic). Une suite exacte de Mayer-Vietoris en K-théorie algébrique, Algebr. K-Theory I. J P Jouanolou, Proc. Conf. Battelle Inst. Conf. Battelle Inst341J. P. Jouanolou, Une suite exacte de Mayer-Vietoris en K-théorie algébrique, Algebr. K- Theory I, Proc. Conf. Battelle Inst. 1972, Lect. Notes Math. 341, 293-316 (1973)., 1973. With a chapter in French by Christian Houzel, Corrected reprint of the 1990 original. 54. , Categories and sheaves, Grundlehren der mathematischen Wissenschaften. Masaki Kashiwara, Pierre Schapira, Sheaves on manifolds, Grundlehren der Mathematischen Wissenschaften. 
Berlin; BerlinSpringer-Verlag292Fundamental Principles of Mathematical SciencesMasaki Kashiwara and Pierre Schapira, Sheaves on manifolds, Grundlehren der Mathematis- chen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 292, Springer- Verlag, Berlin, 1994, With a chapter in French by Christian Houzel, Corrected reprint of the 1990 original. 54. , Categories and sheaves, Grundlehren der mathematischen Wissenschaften [Funda- mental Principles of Mathematical Sciences], vol. 332, Springer-Verlag, Berlin, 2006. Deriving DG categories. Bernhard Keller, MR 2275593MR 1258406 56. , On differential graded categories, International Congress of Mathematicians. ZürichIIEur. Math. Soc.Bernhard Keller, Deriving DG categories, Ann. Sci. École Norm. Sup. (4) 27 (1994), no. 1, 63-102. MR 1258406 56. , On differential graded categories, International Congress of Mathematicians. Vol. II, Eur. Math. Soc., Zürich, 2006, pp. 151-190. MR 2275593 Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants. Maxim Kontsevich, Yan Soibelman, MR 2851153Commun. Number Theory Phys. 52Maxim Kontsevich and Yan Soibelman, Cohomological Hall algebra, exponential Hodge struc- tures and motivic Donaldson-Thomas invariants, Commun. Number Theory Phys. 5 (2011), no. 2, 231-352. MR 2851153 Functors on locally finitely presented additive categories. Henning Krause, MR 1487973Colloq. Math. 751Henning Krause, Functors on locally finitely presented additive categories, Colloq. Math. 75 (1998), no. 1, 105-132. MR 1487973 Georges Maltsiniotis, MR 2951712Carrés exacts homotopiques et dérivateurs. 53Georges Maltsiniotis, Carrés exacts homotopiques et dérivateurs, Cah. Topol. Géom. Différ. Catég. 53 (2012), no. 1, 3-63. MR 2951712 A 1 -homotopy theory of schemes. Fabien Morel, Vladimir Voevodsky, Inst. Hautes Études Sci. Publ. Math. 90Fabien Morel and Vladimir Voevodsky, A 1 -homotopy theory of schemes, Inst. Hautes Études Sci. Publ. Math. (1999), no. 
90, 45-143 (2001). Mixed ℓ-adic complexes for schemes over number fields. Sophie Morel, Sophie Morel, Mixed ℓ-adic complexes for schemes over number fields, https://arxiv.org/pdf/1806.03096.pdf , 2018. The Grothendieck duality theorem via Bousfield's techniques and Brown representability. Amnon Neeman, Triangulated categories. Princeton, NJPrinceton University Press9J. Amer. Math. Soc.Amnon Neeman, The Grothendieck duality theorem via Bousfield's techniques and Brown representability, J. Amer. Math. Soc. 9 (1996), no. 1, 205-236. 63. , Triangulated categories, Annals of Mathematics Studies, vol. 148, Princeton Univer- sity Press, Princeton, NJ, 2001. Definable additive categories: purity and model theory. Mike Prest, vi+109. MR 2791358Mem. Amer. Math. Soc. 210987Mike Prest, Definable additive categories: purity and model theory, Mem. Amer. Math. Soc. 210 (2011), no. 987, vi+109. MR 2791358 Notes on Beilinson's "How to glue perverse sheaves. Ryan Reich, MR 2671769J. Singul. 1Ryan Reich, Notes on Beilinson's "How to glue perverse sheaves", J. Singul. 1 (2010), 94-115. MR 2671769 Extension of mixed Hodge modules. Morihiko Saito, Duality for vanishing cycle functors. 25Publ. Res. Inst. Math. Sci.. MR 1047415 (91m:14014Morihiko Saito, Duality for vanishing cycle functors, Publ. Res. Inst. Math. Sci. 25 (1989), no. 6, 889-921. MR 1045997 67. , Extension of mixed Hodge modules, Compositio Math. 74 (1990), no. 2, 209-234. MR 1047741 68. , Mixed Hodge modules, Publ. Res. Inst. Math. Sci. 26 (1990), no. 2, 221-333. MR 1047415 (91m:14014) Hodge conjecture and mixed motives. II, Algebraic geometry. Morihiko Saito, EnglishChicago, IL, USA; Berlin etcProceedings of the US-USSR symposiumMorihiko Saito, Hodge conjecture and mixed motives. II, Algebraic geometry. Proceedings of the US-USSR symposium, held in Chicago, IL, USA, June 20-July 14, 1989, Berlin etc.: Springer-Verlag, 1991, pp. 196-215 (English). Morihiko Saito, arXiv:math/0611597On the formalism of mixed sheaves. 
Morihiko Saito, On the formalism of mixed sheaves, arXiv:math/0611597, 2006. The Stacks Project Authors, Stacks Project. The Stacks Project Authors, Stacks Project, http://stacks.math.columbia.edu, 2018. Bertrand Toën, MR 2762557Lectures on dg-categories, Topics in algebraic and topological K-theory. BerlinSpringerBertrand Toën, Lectures on dg-categories, Topics in algebraic and topological K-theory, Lec- ture Notes in Math., vol. 2008, Springer, Berlin, 2011, pp. 243-302. MR 2762557 Jean-Louis Verdier, Maltsiniotis. MR 1453167Dualité dans la cohomologie des espaces localement compacts. Paris9With a preface by Luc IllusieJean-Louis Verdier, Dualité dans la cohomologie des espaces localement compacts, Séminaire Bourbaki, Vol. 9, Soc. Math. France, Paris, 1995, pp. Exp. No. 300, 337-349. 74. , Des catégories dérivées des catégories abéliennes, Astérisque (1996), no. 239, xii+253 pp. (1997), With a preface by Luc Illusie, Edited and with a note by Georges Maltsiniotis. MR 1453167 (electronic). 76. , Motivic cohomology groups are isomorphic to higher Chow groups in any characteristic. Vladimir Voevodsky, 351-355. MR 1883180Proceedings of the International Congress of Mathematicians. the International Congress of MathematiciansBerlinI14021A 1 -homotopy theoryVladimir Voevodsky, A 1 -homotopy theory, Proceedings of the International Congress of Math- ematicians, Vol. I (Berlin, 1998), no. Extra Vol. I, 1998, pp. 579-604 (electronic). 76. , Motivic cohomology groups are isomorphic to higher Chow groups in any character- istic, Int. Math. Res. Not. (2002), no. 7, 351-355. MR 1883180 (2003c:14021) On the derived DG functors. Vadim Vologodsky, 1155-1170. MR 2729639Math. Res. Lett. 176Vadim Vologodsky, On the derived DG functors, Math. Res. Lett. 17 (2010), no. 6, 1155-1170. MR 2729639 Hodge theory with degenerating coefficients. L 2 cohomology in the Poincaré metric. Steven Zucker, MR 534758Ann. of Math. 2Steven Zucker, Hodge theory with degenerating coefficients. 
L 2 cohomology in the Poincaré metric, Ann. of Math. (2) 109 (1979), no. 3, 415-476. MR 534758 Email address: [email protected] UMPA UMR CNRS 5669, ENS Lyon Site Monod, 46 Allée d'Italie. Rennes cedex (France). 35042Institut de recherche mathématique de Rennes ; UMR 6625 du CNRS, Université de Rennes 1, Campus de Beaulieu69364 Lyon Cedex 07 Email address: [email protected] de recherche mathématique de Rennes, UMR 6625 du CNRS, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes cedex (France), Email address: [email protected] UMPA UMR CNRS 5669, ENS Lyon Site Monod, 46 Allée d'Italie, 69364 Lyon Cedex 07 Email address: [email protected]
Title: Effect of Chiral Symmetry Restoration on Pentaquark Θ+ Mass and Width at Finite Temperature and Density

Authors: Xuguang Huang, Xuewen Hao, Pengfei Zhuang

Affiliations: Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Collisions, Lanzhou 730000, China; Physics Department, Tsinghua University, Beijing 100084, China

Abstract: We investigate the effect of the chiral phase transition on the pentaquark Θ+ mass and width at the one-loop level of the NΘ+K coupling at finite temperature and density. The behavior of the mass, and especially of the width, in the hadronic medium is dominated by the characteristics of chiral symmetry restoration at high temperature and high density. The mass and width shifts of the positive-parity Θ+ are much larger than those of the negative-parity one, which may help to determine the parity of the Θ+ in high energy nuclear collisions.

DOI: 10.1016/j.physletb.2004.12.039
arXiv: nucl-th/0409001
PDF: https://export.arxiv.org/pdf/nucl-th/0409001v2.pdf
Corpus ID: 118948427
21 Jan 2005

PACS numbers: 13.60.Rj, 11.10.Wx, 25.75.-q

I. INTRODUCTION

Recently, an exotic baryon Θ+ was first observed by the LEPS group at SPring-8 in the reaction γn → K+K−n [1], and was subsequently seen by many other groups [2,3,4,5,6,7,8,9,10,11,12]. It carries the quantum numbers of K+n (B = +1, Q = +1, S = +1), and its minimal quark content must be uudds̄. The remarkable features of the Θ+ are its small mass (1540 MeV) and very narrow width (< 25 MeV) [1]. While the isospin of the Θ+ is probably zero [4,6,7,12], its other quantum numbers, including spin and parity, have not yet been measured experimentally. Theoretically, most works treat the Θ+ as a J = 1/2 particle because of its low mass [13,14,15,16]; some predict positive parity [13,14,15,17,18,19,20,21,22] and some suggest negative parity [16,22,23,24,25,26,27,28]. The search for the pentaquark Θ+ has also been extended to relativistic heavy ion collisions, where the extreme conditions needed to form a new state of matter, the quark-gluon plasma (QGP), can be reached.
The STAR collaboration [29] has reported progress on the pentaquark search in p−p, d−Au and Au−Au collisions at √s = 200 GeV, and the PHENIX collaboration [30] has investigated the decay of the anti-pentaquark, Θ̄+ → K−n̄. Theoretically, the baryon-density modification of the Θ+ mass [31] was discussed with a phenomenological density-dependent nucleon propagator [32], and Θ+ production in relativistic heavy ion collisions was studied in the coalescence model [33].

It is generally believed that there are two QCD phase transitions in hot and dense nuclear matter. One is related to the deconfinement process in moving from a hadron gas to the QGP; the other describes the transition from the phase of broken chiral symmetry to the phase in which it is restored. According to QCD lattice simulations [34], the phase transitions are of first order in the high density region and may be only a crossover in the high temperature region. As the order parameter of the chiral phase transition, the dynamical quark mass, or equivalently the nucleon mass, reflects the characteristics of chiral symmetry restoration and will influence the pentaquark decay process Θ+ → KN.

In this letter, we investigate the effect of the chiral phase transition on the pentaquark Θ+ mass and width at finite temperature and density at the one-loop level of the NΘ+K coupling. If the mass and width shifts induced by chiral symmetry restoration turn out to be sensitive to the pentaquark parity, this may help to resolve the puzzle of the Θ+ parity.

We proceed as follows. In Section 2 we calculate the self-energy of the Θ+ at finite temperature and density at the one-loop level of the pseudovector and pseudoscalar NΘ+K couplings, for both positive and negative Θ+ parity, and obtain the Θ+ mass and width shifts in the medium.
The medium dependence of the nucleon mass is determined through the mean-field gap equation of the NJL model [35], one of the models in which one can see directly how the dynamical mechanisms of chiral symmetry breaking and restoration operate. In Section 3 we present the numerical calculations, analyze the contribution of chiral symmetry restoration to the mass and width shifts, and discuss the parity dependence of the results. Finally, we give our summary.

II. FORMULAS

We introduce the effective Lagrangians for the pseudovector and pseudoscalar NΘ+K couplings [32,36],

L_PV = −(g_A⋆/2f_π) Θ̄+ γ^μ γ_5 (∂_μ K+) n ,   L_PS = i g Θ̄+ γ_5 K+ n .   (1)

Here positive parity of the Θ+ is assumed; the effective Lagrangians for negative parity are obtained by removing the iγ_5 in the vertices. The pseudovector and pseudoscalar coupling constants g_A⋆ and g are fixed to reproduce the mass M_Θ = 1540 MeV and the decay width Γ_Θ+ = 15 MeV at zero temperature and zero density [36]. A tree-level calculation gives g_A⋆ = 0.28 and g = 3.8 for positive parity, and g_A⋆ = 0.16 and g = 0.53 for negative parity [36].

We calculate the in-medium Θ+ self-energy perturbatively above the mean field; to lowest order it is given by the diagram in Fig. 1. The nucleon and kaon propagators at the mean-field level read

G_N(p) = i/(p̸′ − M_N) ,   G_K(p) = i/(p² − M_K²) ,   (2)

where the four-momentum p′ is defined as p′ = {p_0 + μ, p}, with baryon chemical potential μ. The mechanism of chiral symmetry restoration at finite temperature and density enters our calculation through the effective nucleon mass M_N. Since the determination of M_N is nonperturbative and hard to carry out directly in QCD, one has to resort to models. While the quantitative result depends on the model used, the qualitative temperature and density behavior is not sensitive to the details of different chiral models [37].
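The tree-level fixing of g_A⋆ and g above rests on the two-body kinematics of the decay Θ+ → KN. As a standalone cross-check of those kinematics (standard relativistic two-body phase space; the momentum formula below is textbook material, not code from the paper), one can compute the center-of-mass decay momentum and see how it grows when the in-medium nucleon mass drops:

```python
import math

def decay_momentum(M, m1, m2):
    """Center-of-mass momentum of a two-body decay M -> m1 + m2 (all in MeV).

    Uses the factorized Kallen triangle function; returns 0.0 below threshold.
    """
    if M <= m1 + m2:
        return 0.0
    s = (M * M - (m1 + m2) ** 2) * (M * M - (m1 - m2) ** 2)
    return math.sqrt(s) / (2.0 * M)

M_THETA, M_N, M_K = 1540.0, 940.0, 494.0        # vacuum masses in MeV

k_vacuum = decay_momentum(M_THETA, M_N, M_K)    # ~269 MeV
k_medium = decay_momentum(M_THETA, 470.0, M_K)  # with a strongly reduced M_N (illustrative)
print(round(k_vacuum, 1), round(k_medium, 1))
```

Halving the nucleon mass, as chiral restoration tends to do at high temperature or density, more than doubles the decay momentum; this opening of phase space is the kinematic side of the broadening found in Section 3.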
A simple model describing chiral symmetry breaking in the vacuum and its restoration in the medium is the NJL model [35]. Within this model one can reproduce the hadronic mass spectrum and the static properties of mesons remarkably well. In particular, one recovers the Goldstone mode and some important low-energy properties of current algebra, such as the Goldberger-Treiman and Gell-Mann-Oakes-Renner relations [35]. In the mean-field approximation of the NJL model, the effective nucleon mass is determined through the gap equation [35,38],

M_N = 3 m_q ,   1 − (N_c N_f G/π²) ∫_0^Λ dp (p²/E_q) [tanh((E_q + μ/3)/2T) + tanh((E_q − μ/3)/2T)] = m_0/m_q ,   (3)

where E_q = √(p² + m_q²) is the constituent quark energy; the current quark mass m_0, the color and flavor degrees of freedom N_c and N_f, the coupling constant G and the momentum cutoff Λ are chosen to fit the nucleon and pion properties in the vacuum [35,38]. The numerical results for the temperature and chemical potential dependence of the nucleon mass are shown in Fig. 2. The effective nucleon mass drops continuously with increasing temperature and finally approaches three times the current quark mass m_0. In contrast to the temperature effect, the nucleon mass jumps down at a critical chemical potential μ_c, signaling a first-order chiral phase transition in the high baryon density region. These different temperature and density effects will be reflected in the Θ+ mass and width shifts. Owing to the near cancellation of the attractive scalar and the repulsive vector potential, the K+ mass increases only slightly with temperature and density [37]; to simplify the calculation we take it as a constant, M_K = 494 MeV, in the following.

The Θ+ self-energy can be separated into a scalar and a vector part,

Σ(p′) = Σ_s(p′) + Σ_μ(p′) γ^μ ,   (4)

which determines the Θ+ propagator, Eq. (5) below, in which m_Θ denotes the Θ+ mass in vacuum.
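Equation (3) is a one-dimensional root-finding problem in m_q and can be solved by simple bracketing. The sketch below uses an illustrative two-flavor parameter set (m_0 = 5.5 MeV, Λ = 631 MeV, G = 5.35×10⁻⁶ MeV⁻², chosen so that the vacuum solution gives M_N = 3m_q ≈ 940 MeV); these numbers are assumptions made for the sketch, not the parameters fitted in the paper:

```python
import math

# Illustrative two-flavor NJL parameters (assumed for this sketch, not the paper's fit)
M0, LAMBDA, G, NC, NF = 5.5, 631.0, 5.35e-6, 3, 2   # MeV, MeV, MeV^-2

def gap_integral(mq, T, mu, n=400):
    """Midpoint evaluation of the momentum integral appearing in Eq. (3)."""
    h = LAMBDA / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * h
        Eq = math.sqrt(p * p + mq * mq)
        total += p * p / Eq * (math.tanh((Eq + mu / 3) / (2 * T))
                               + math.tanh((Eq - mu / 3) / (2 * T)))
    return total * h

def quark_mass(T, mu):
    """Solve m_q [1 - (Nc Nf G / pi^2) I(m_q)] = m_0 by bisection."""
    c = NC * NF * G / math.pi ** 2
    f = lambda m: m * (1.0 - c * gap_integral(m, T, mu)) - M0
    lo, hi = 1.0, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

m_cold = quark_mass(10.0, 0.0)    # near the vacuum value, ~313 MeV
m_hot  = quark_mass(250.0, 0.0)   # chiral symmetry (partially) restored
print(round(3 * m_cold), round(3 * m_hot))  # effective nucleon mass M_N = 3 m_q
```

Scanning T at μ = 0 reproduces the smooth melting of Fig. 2a; at low T and large μ the gap equation develops several roots around μ_c, which is where the first-order jump of Fig. 2b comes from, so the bracket must then be chosen per branch.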
The complex in-medium mass M̃_Θ = M_Θ − iΓ/2 of the Θ+ is determined by the pole equation of the propagator,

G_Θ = i/(p̸′ − m_Θ − Σ) = i [(p′^μ − Σ^μ)γ_μ + (m_Θ + Σ_s)] / [(p′^μ − Σ^μ)(p′_μ − Σ_μ) − (m_Θ + Σ_s)²] ,   (5)

[(p′^μ − Σ^μ)(p′_μ − Σ_μ) − (m_Θ + Σ_s)²] |_{p′_0 = √(M̃_Θ² + p²)} = 0 .   (6)

From this equation one obtains the temperature, chemical potential and momentum dependence of the Θ+ mass and width. In the rest frame of the Θ+, where p = 0 and the spatial part of Σ_μ vanishes, the calculation simplifies. Since the width of the Θ+ is small compared to its mass (we assume this holds both in vacuum and in medium), Eq. (6) separates into two uncoupled equations. The pentaquark mass M_Θ at finite temperature and density follows from the gap equation

M_Θ = m_Θ + Re[Σ_0(M_Θ) + Σ_s(M_Θ)] ,   (7)

and the medium correction to the pentaquark width Γ is determined by

ΔΓ_Θ = −2 Im[Σ_0(M_Θ) + Σ_s(M_Θ)] .   (8)

For the pseudovector coupling with positive parity,

−iΣ^PV(p) = −(g_A⋆/2f_π)² ∫ d⁴k/(2π)⁴ (p̸ − k̸) [(k̸′ − M_N)/(k′² − M_N²)] [1/((p − k)² − M_K²)] (p̸ − k̸)
          = (g_A⋆/2f_π)² ∫ d⁴k/(2π)⁴ [M_N (p − k)² − 2 k′·(p − k) p̸′ + (p′² − k′²) k̸′] / [(k′² − M_N²)((p − k)² − M_K²)] ,   (9)

where k is the loop four-momentum carried by the nucleon. Performing the replacements ∫ dk_0/(2π) → iT Σ_m and k_0 → iω_m of the imaginary-time formalism of finite-temperature field theory, where ω_m = (2m + 1)πT with m = 0, ±1, ±2, … are the fermionic Matsubara frequencies, one obtains the explicit expression of the Θ+ self-energy at finite temperature and density.
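The Matsubara summation indicated above has closed forms; for a single pair of fermionic poles one has the textbook identity T Σ_m 1/(ω_m² + E²) = tanh(E/2T)/(2E), which is where the tanh factors of Eq. (3) and the thermal distribution functions below originate. A quick numerical check of this identity (an illustration, not the paper's code):

```python
import math

def matsubara_sum(E, T, n_max=20000):
    """Truncated fermionic Matsubara sum T * sum_m 1/(w_m^2 + E^2), w_m = (2m+1) pi T.

    The loop runs over m = 0..n_max-1 and doubles each term to include the
    mirror frequencies m' = -(m+1), which give identical contributions.
    """
    s = 0.0
    for m in range(n_max):
        w = (2 * m + 1) * math.pi * T
        s += 2.0 / (w * w + E * E)
    return T * s

E, T = 1000.0, 150.0                      # MeV
exact = math.tanh(E / (2 * T)) / (2 * E)  # closed form of the infinite sum
approx = matsubara_sum(E, T)
print(exact, approx)
```

The truncated sum converges to the closed form like 1/n_max, so a few thousand frequencies already reproduce tanh(E/2T)/(2E) to better than a permille at these scales.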
After the summation over the nucleon frequencies one derives the real and imaginary parts of the in-medium self-energy,

ReΣ^PV_s(M_Θ) = −(1/2)(g_A⋆/2f_π)² ∫ d³k/(2π)³ { (M_N/E_N)[(1 + M_K²/F_-(N,K)) f_f^- + (1 + M_K²/F_+(N,K)) f_f^+] − (M_N M_K²/(E_N E_K))[1/F_+(K,N) + 1/F_-(K,N)] f_b } ,

ReΣ^PV_0(M_Θ) = (1/2)(g_A⋆/2f_π)² ∫ d³k/(2π)³ { [1 + (E_N M_K² + 2M_Θ k²)/(E_N F_-(N,K))] f_f^- − [1 + (E_N M_K² − 2M_Θ k²)/(E_N F_+(N,K))] f_f^+ − [ (M_Θ(E_K² + k²)/(E_K E_N)) ((M_Θ − E_N)/F_-(N,K) − (M_Θ + E_N)/F_+(N,K)) − (E_K M_K²/E_N)(1/F_-(N,K) − 1/F_+(N,K)) ] f_b } ,   (10)

and

ImΣ^PV_s(M_Θ) = (π M_N M_K²/4)(g_A⋆/2f_π)² ∫ d³k/(2π)³ (1/(E_K E_N)) { [δ_-(K,N) − δ_+(K,N)] f_f^- + δ_+(N,K) f_f^+ − [δ_+(K,N) + δ_-(K,N) − δ_+(N,K)] f_b } ,

ImΣ^PV_0(M_Θ) = −(π/4)(g_A⋆/2f_π)² ∫ d³k/(2π)³ (1/(E_N E_K)) { (E_N M_K² + 2M_Θ k²)[δ_-(K,N) − δ_+(K,N)] f_f^- − (E_N M_K² − 2M_Θ k²) δ_+(N,K) f_f^+ − [ (M_Θ(E_K² + k²)(M_Θ − E_N) − E_K² M_K²)/E_K · [δ_-(K,N) − δ_+(K,N)] − (M_Θ(E_K² + k²)(M_Θ + E_N) − E_K² M_K²)/E_K · δ_+(N,K) ] f_b } ,   (11)

with the particle energies

E_N² = k² + M_N² ,   (12)
E_K² = k² + M_K² ,   (13)

the Fermi-Dirac and Bose-Einstein distributions

f_f^± = 1/(e^{(E_N ± μ)/T} + 1) ,   (14)
f_b = 1/(e^{E_K/T} − 1) ,   (15)

and the functions F_±(X,Y) and δ_±(X,Y) defined by

F_±(X,Y) = (M_Θ ± E_X)² − E_Y² ,   δ_±(X,Y) = δ(M_Θ ± E_X − E_Y) .   (16)

For the pseudoscalar coupling with positive parity,

ReΣ^PS_s(M_Θ) = −(g² M_N/2) ∫ d³k/(2π)³ { (1/E_N)[f_f^-/F_-(N,K) + f_f^+/F_+(N,K)] − (1/E_K)[1/F_-(K,N) + 1/F_+(K,N)] f_b } ,

ReΣ^PS_0(M_Θ) = (g²/2) ∫ d³k/(2π)³ { f_f^-/F_-(N,K) − f_f^+/F_+(N,K) − (1/E_K)[(M_Θ − E_K)/F_-(K,N) + (M_Θ + E_K)/F_+(K,N)] f_b } ,   (17)

and

ImΣ^PS_s(M_Θ) = (π g² M_N/4) ∫ d³k/(2π)³ (1/(E_K E_N)) { [δ_-(K,N) − δ_+(K,N)] f_f^- + δ_+(N,K) f_f^+ − [δ_+(K,N) + δ_-(K,N) − δ_+(N,K)] f_b } ,

ImΣ^PS_0(M_Θ) = −(π g²/4) ∫ d³k/(2π)³ (1/E_K) { [δ_-(K,N) − δ_+(K,N)] f_f^- − δ_+(N,K) f_f^+ − [ ((M_Θ − E_K)/E_N)[δ_-(K,N) − δ_+(N,K)] + ((M_Θ + E_K)/E_N) δ_+(K,N) ] f_b } .   (18)

For the couplings with negative Θ+ parity, the only difference is a sign change of the corresponding scalar self-energy, Σ^PV_s → −Σ^PV_s and Σ^PS_s → −Σ^PS_s. We will see below that this sign change leads to a partial cancellation between Σ_0 and Σ_s in the determination of the in-medium Θ+ mass and width, and hence to small mass and width shifts for a negative-parity Θ+.

III. NUMERICAL RESULTS

With the formulas given in the last section, we now calculate numerically the Θ+ mass shift ΔM_Θ(T, μ) = M_Θ(T, μ) − m_Θ and the width shift ΔΓ(T, μ) at finite temperature and density for the pseudovector (PV) and pseudoscalar (PS) couplings with positive and negative Θ+ parity; we denote these four cases by PV+, PV−, PS+ and PS−.

We first consider the mass shift. From its temperature dependence at fixed chemical potentials (Figs. 3a and 3b) and its chemical potential dependence at fixed temperatures (Figs. 3c and 3d), it has the following properties:

1) The Θ+ becomes lighter in the medium, like most hadrons governed by chiral symmetry, such as the nucleon and the ρ and σ mesons. While for the couplings PS−, PV− and PV+ the mass shift from the pure temperature effect is rather small (Fig. 3a), it becomes remarkable at high baryon density combined with high temperature (Figs. 3b-d).

2) The temperature and density dependence of the mass shift is controlled by the chiral properties. For the pure temperature effect (Fig. 3a) and for high temperature combined with high density (Figs. 3b and 3d), the continuous chiral transition, shown in Fig. 2a for μ = 0, results in a smooth mass shift. At high density but low temperature, the mass shift vanishes in the chirally broken phase at μ < μ_c, changes suddenly at the phase transition at μ = μ_c, and decreases rapidly in the restored phase at μ > μ_c; see Fig. 3c.
This behavior reflects the first-order chiral phase transition shown in Fig. 2b.

3) The size of the Θ+ mass change depends strongly on its parity. For all temperatures and densities considered, the mass shift of the positive-parity Θ+ is much larger than that of the negative-parity one. For instance, at T = 200 MeV and μ = 1000 MeV the mass shifts for the couplings PS+, PV+, PV− and PS− are −115, −40, −10 and −2 MeV, respectively.

We now turn to the Θ+ width shift. It is shown as a function of temperature at fixed chemical potentials (Figs. 4a and 4b) and as a function of chemical potential at fixed temperatures (Figs. 4c and 4d) for the four couplings. Related to the properties of the mass shift, the width has the following characteristics:

1) As for most hadrons in medium, the suppressed mass leads to a broadening of the Θ+ at finite temperature and density. The pentaquark becomes more and more unstable with increasing temperature and density and, if created in relativistic heavy ion collisions, decays easily.

2) Again, the behavior of the width shift is dominated by the chiral properties. The width increases continuously for the pure temperature effect and for high density combined with high temperature, but jumps up suddenly at the critical chemical potential μ_c for high density at low temperature, as a consequence of the chiral phase transition shown in Fig. 2.

3) The broadening also depends on the Θ+ parity: in every case the broadening of the positive-parity Θ+ is much larger than that of the negative-parity one.

4) Compared with the vacuum mass and width (1540 MeV and 15 MeV in our calculation), the mass shift is slight but the width shift is extremely strong. From Fig. 3, the maximum mass shift in the considered temperature and density region is 20% of the vacuum value for the coupling PS+ and 6% for PV+.
From Fig. 4, however, the maximum width shift is 17 times the vacuum value for PS+ and 7 times for PV+.

To assess pentaquark production in relativistic heavy ion collisions, we estimate the Θ+ mass and width shifts at RHIC, SPS and SIS, shown in Table I, taking the corresponding temperature and baryon chemical potential as (T, μ) = (200, 50), (180, 300) and (150, 700) MeV, respectively. While the mass shift can be neglected in all cases, the width shift for a positive-parity Θ+ at SPS and SIS is important and measurable: relative to the vacuum width, it grows from 7% at RHIC to 40% at SPS and to 380% at SIS for the coupling PV+, and from 23% at RHIC to 70% at SPS and to 760% at SIS for PS+.

Temperature and density enter our calculation in two ways: through chiral symmetry restoration, reflected in the effective nucleon mass, and through the loop frequency summation of Fig. 1. To make sure that the remarkable mass shift and the crucial width shift are induced mainly by the chiral properties, we now switch off the chiral phase transition and keep the nucleon mass at its vacuum value, M_N = 940 MeV, in the numerical calculation. The results are shown in Fig. 5. Compared with the calculation including chiral symmetry restoration, at zero chemical potential the maximum mass shift is reduced from −20 MeV (Fig. 3a) to −10 MeV (Fig. 5a), and the maximum width shift is strongly suppressed, from 9 MeV (Fig. 4a) to −1 MeV (Fig. 5b). Qualitatively different from the case with chiral symmetry restoration, the Θ+ even becomes narrower without it. The considerable mass shift, and especially the extreme width shift, thus originate in the mechanism of the chiral phase transition.

IV.
CONCLUSIONS

We have studied the temperature and density effects on the pentaquark mass and width to lowest order of the perturbation expansion above the chiral mean field for four different NΘ+K couplings. The chiral phase transition, reflected here in the effective nucleon mass, plays an essential role in determining the in-medium Θ+ mass and width shifts. Like most hadrons, the Θ+ becomes light and unstable in the high temperature and density region where chiral symmetry is restored. The degree of the mass and width shifts depends strongly on the Θ+ parity. For a positive-parity Θ+, the maximum width shift in the reasonable temperature and density region is 17 times the vacuum value for the pseudoscalar coupling and 7 times for the pseudovector coupling, while for a negative-parity Θ+ the width shift is much smaller. This parity dependence may help to determine the pentaquark parity in relativistic heavy ion collisions; at SIS energy, the width shift is almost 4-8 times the vacuum value for a positive-parity Θ+. When chiral symmetry restoration is removed from the calculation, the pure temperature and density effect of the thermal loop on the mass and width becomes rather small.

FIG. 1: The lowest-order Θ+ self-energy. The solid and dashed lines represent the nucleon and kaon fields, respectively.

FIG. 2: The effective nucleon mass, scaled by its vacuum value, as a function of temperature at zero chemical potential (a) and as a function of chemical potential at zero temperature (b).

FIG. 3: The temperature dependence at fixed chemical potentials μ = 0 (a) and μ = 1000 MeV (b), and the chemical potential dependence at fixed temperatures T = 0 (c) and T = 200 MeV (d), of the Θ+ mass shift for the four couplings.

FIG. 4: The temperature dependence at fixed chemical potentials μ = 0 (a) and μ = 1000 MeV (b), and the chemical potential dependence at fixed temperatures T = 0 (c) and T = 200 MeV (d), of the Θ+ width shift for the four couplings.

FIG. 5: The temperature dependence of the Θ+ mass shift (a) and width shift (b) with a constant nucleon mass, for the four couplings.

TABLE I: The estimated Θ+ mass and width shifts at RHIC, SPS and SIS for the four couplings (columns: T, μ, then ΔM_Θ and ΔΓ_Θ for PV+, PV−, PS+ and PS−).

Acknowledgments: The work is supported in part by the Grants NSFC10135030 and G2000077407.

[1] T. Nakano et al., Phys. Rev. Lett. 91 (2003) 012002.
[2] V. V. Barmin et al., Phys. Atom. Nucl. 66 (2003) 1715.
[3] S. Stepanyan et al., Phys. Rev. Lett. 91 (2003) 252001.
[4] J. Barth et al., Phys. Lett. B572 (2003) 127.
[5] A. E. Aratayn, A. G. Dololenko and M. A. Kubantsev, Phys. Atom. Nucl. 67 (2004) 682.
[6] V. Kubarovsky et al., Phys. Rev. Lett. 92 (2004) 032001.
[7] A. Airapetian et al., Phys. Lett. B585 (2004) 213.
[8] COSY-TOF Collaboration, Phys. Lett. B595 (2004) 127.
[9] P. Zh. Aslanyan et al., hep-ex/0403044.
[10] ZEUS Collaboration, Phys. Lett. B591 (2004) 7.
[11] S. V. Chekanov, hep-ex/0404007.
[13] D. Diakonov, V. Petrov and M. Polyakov, Z. Phys. A 359 (1997) 305.
[14] R. Jaffe and F. Wilczek, Phys. Rev. Lett. 91 (2003) 232003.
[15] F. Stancu and D. O. Riska, Phys. Lett. B575 (2003) 242.
[16] F. Huang, Z. Y. Zhang, Y. W. Yu and B. S. Zou, Phys. Lett. B586 (2004) 69.
[17] T.-W. Chiu and T.-H. Hsieh, hep-ph/0403020.
[18] M. Karliner and H. J. Lipkin, hep-ph/0307243.
[19] A. Hosaka, Phys. Lett. B571 (2003) 55.
[20] C. E. Carlson, C. D. Carone, H. J. Kwee and V. Nazaryan, Phys. Rev. D 70 (2004) 037501.
[21] Y.-X. Liu, J.-S. Li and C.-G. Bao, hep-ph/0401197.
[22] N. Mathur, F. X. Lee, A. Alexandru, C. Bennhold and Y. Chen, Phys. Rev. D 70 (2004) 074508.
[23] F. Csikor et al., JHEP 0311 (2003) 070; S. Sasaki, Phys. Rev. Lett. 93 (2004) 152001.
[24] S.-L. Zhu, Phys. Rev. Lett. 91 (2003) 232002.
[25] R. D. Matheus et al., Phys. Lett. B578 (2004) 323; J. Sugiyama et al., Phys. Lett. B581 (2004) 167.
[26] C. E. Carlson et al., Phys. Lett. B573 (2003) 101.
[27] B. Wu and B.-Q. Ma, hep-ph/0311331.
[28] X.-C. Song and S.-L. Zhu, Mod. Phys. Lett. A 19 (2004) 2791.
[29] S. Salur (for the STAR collaboration), nucl-ex/0403009.
[30] C. Pinkenburg (for the PHENIX collaboration), J. Phys. G30 (2004) S1201.
[31] F. S. Navarra, M. Nielsen and K. Tsushima, nucl-th/0408072.
[32] H.-Ch. Kim, C.-H. Lee and H.-J. Lee, hep-ph/0402141.
[33] L. W. Chen, V. Greco, C. M. Ko, S. H. Lee and W. Liu, Phys. Lett. B601 (2004) 34.
[34] See, for instance, Z. Fodor and S. D. Katz, JHEP 0404 (2004) 050.
Fodor, and S. D. Katz, JHEP 0404 (2004) 050. . See, U Instance, W Vogl, Weise, Prog. Part. and Nucl. Phys. 27195see, for instance, U. Vogl and W. Weise, Prog. Part. and Nucl. Phys. 27(1991) 195; . S P Klevansky, Rev. Mod. Phys. 64649S. P. Klevansky, Rev. Mod. Phys. 64(1992) 649. . S I Nam, A Hosaka, H. -Ch Kim, Phys. Lett. B. 57943S. I. Nam, A. Hosaka, and H. -Ch. Kim, Phys. Lett. B 579, (2004) 43. . G Q See, Li, nucl-th/9710008see, for instance, G. Q. Li, nucl-th/9710008. . P Zhuang, J Hüfner, S P Klevansky, Nucl. Phys. 576525P. Zhuang, J.Hüfner, and S. P. Klevansky, Nucl. Phys. A576(1994) 525.
Thermodynamics of multi-boson phenomena

Yu. M. Sinyukov, S. V. Akkelin, R. Lednicky

Bogolyubov Institute for Theoretical Physics, Metrologicheskaya 14b, 252143 Kiev, Ukraine; Institute of Physics, Na Slovance 2, 18040 Prague 8, Czech Republic
Using the method of the locally equilibrium statistical operator, we consider thermalized relativistic quantum fields in an oscillatory trap. We compare this thermal picture of the confined boson gas with the non-relativistic model of independent factorized sources. We find that they are equivalent in the limit of very large effective sizes $R$; more exactly, when the Compton wavelength $1/m$ and the thermal wavelength $1/\sqrt{mT}$ are much smaller than $R$. Under these conditions we study the influence of Bose condensation in finite volumes on the structure of the Wigner function, momentum spectra and correlation function.
PDF: https://export.arxiv.org/pdf/nucl-th/9909015v1.pdf
arXiv: nucl-th/9909015
Corpus ID: 117800642
SHA: 3b0679410e8c9f5d5501e9dd520da2cf5564154c
(8 Sep 1999)

I. INTRODUCTION

In heavy ion collisions at RHIC and LHC energies, quasi-macroscopic systems containing $10^4 - 10^5$ particles are expected to be created. If the phase-space densities of such systems at a pre-decaying stage are high enough, one can observe multi-boson effects enhancing the production of pions with low relative momenta, softening their spectra and modifying correlation functions. One can even hope to observe new interesting phenomena like boson condensation in certain kinematic regions with a large pion density in the 6-dimensional phase space, $f = (2\pi)^3\, d^6n/d^3p\,d^3x \gtrsim 1$ (see, e.g., [1-5]). Generally, accounting for the multi-boson effects is an extremely difficult task.
Even neglecting particle interactions in the final state, the requirement of Bose-Einstein symmetrization leads to severe numerical problems which increase factorially with the number of produced bosons [1,2]. In such a situation, it is important that there exist simple analytically solvable models [3] allowing for a study of the characteristic features of multi-boson systems. Actually, there are two basic methods presently used to describe multi-boson systems in the finite (small) volumes typical for A+A collisions.

The first one [3,6,7] is maximally close to the procedure of computer simulations of multi-boson effects. We call it the model of independent factorized sources, or MIFS. It implies the independent emission of 'probed' non-identical particles with factorized Wigner functions

$$D_n(p_1, x_1; \ldots; p_n, x_n) = \prod_{i=1}^{n} D(p_i, x_i). \qquad (1)$$

The basic function in such an approach is the 'probed' Wigner distribution $D(p,x)$, usually chosen in the thermal-like (Boltzmann) form

$$D(p,x) = \frac{\eta}{(2\pi R\Delta)^3}\exp\left(-\frac{p^2}{2\Delta^2} - \frac{r^2}{2R^2}\right)\delta(t). \qquad (2)$$

We normalize it to a mean multiplicity $\eta$. In typical cases the input numbers $n$ of particles are described by the Poissonian distribution

$$P(n) = e^{-\eta}\eta^n/n!. \qquad (3)$$

In order to obtain the output ('true') particle number distribution, as well as the single- and two-particle momentum spectra, a special procedure of "switching on" the Bose statistics is applied. In other words, the 'probe' $n$-particle normalized amplitudes $A_n\{p_i\}$ are symmetrized. Then the final multiplicity distribution is $P^S(n) = P(n)\,N[A_n^S]$, where $N[A_n^S]$ is the normalized weight due to the symmetrization.

The other approach [8,9] deals with true boson statistics from the very beginning and is based on the density matrix formalism. In particular, a specific $\rho$-matrix ansatz in the wave packet formalism was proposed [9] to reproduce (and develop) the MIFS results.
To treat the results of these approaches, one often uses the language of statistical thermodynamics: "rare gas", "Bose condensate", etc. At the same time, no systematic consideration based on the thermal density matrix for such a kind of system in finite volumes has been done. For thermal relativistic, essentially finite (small) systems the problem is mathematically rather complicated; however, for large enough systems it can be solved analytically. We will demonstrate this here using the method of the statistical operator.

II. DENSITY MATRIX FOR LOCALLY EQUILIBRIUM SYSTEMS

By definition, the statistical operator is

$$\rho = e^{-S}. \qquad (4)$$

For locally equilibrium systems the entropy $S$ has to be maximized under additional conditions fixing the average densities of energy $\epsilon(x)$, momentum $\mathbf{p}(x)$ and charge $q(x)$. The systems are considered on some hypersurface $\sigma$: $d\sigma^\nu = d\sigma\, n^\nu$ with a time-like normal vector $n^\nu$. In relativistic covariant form the conditions read

$$p^\mu(x) \equiv (\epsilon(x), \mathbf{p}(x)) = n_\nu(x)\,\langle T^{\mu\nu}(x)\rangle, \qquad q(x) = n_\nu(x)\,\langle J^\nu(x)\rangle, \qquad (5)$$

where $\langle\ldots\rangle$ means the average with the statistical operator $\rho$ in (4). The resulting entropy operator is then [10], [11]

$$S = S(\sigma) = \Phi(\sigma) + \int d\sigma_\nu(x)\,\beta_\mu(x)\, T^{\mu\nu}(x) - \int d\sigma_\nu(x)\,\mu(x)\, J^\nu(x), \qquad (6)$$

where $\Phi(\sigma) = \ln \mathrm{Sp}\exp\{\int d\sigma\, n_\nu(x)\beta_\mu(x) T^{\mu\nu}(x)\}$ is the Massieu-Planck functional, and $\beta_\mu(x)$ and $\mu(x)$ are Lagrange multipliers. For a real one-component free scalar field we will use the particle number density current $J^\nu(x) = \varphi^{(+)}(x)\overleftrightarrow{\partial^\nu}\varphi^{(-)}(x)$, where $\varphi^{(+)}$ and $\varphi^{(-)}$ are the positive- and negative-frequency field components. The energy-momentum tensor has the standard form.
Then, for a system which has no internal flows and is in a local equilibrium state on the hypersurface $t = 0$, $n^\nu = (1,\mathbf{0})$, at the same temperature $T$, $\beta^\mu(x) = (1/T, \mathbf{0})$, the density matrix takes the form

$$\rho = \frac{1}{Z}\exp\left[-\beta\int d^3p\, p^0\, a^\dagger_p a_p + \frac{\beta}{2(2\pi)^3}\int d^3x\,\mu(x)\int \frac{d^3k\, d^3k'}{\sqrt{k^0 k'^0}}\,(k^0 + k'^0)\, e^{-i(k-k')x}\, a^\dagger_k a_{k'}\right]. \qquad (7)$$

To guarantee maximal closeness to the MIFS, we will use the chemical potential in the form

$$\mu(x) = \mu - \frac{x^2}{2R^2\beta}. \qquad (8)$$

The description of inclusive spectra and correlations for multiparticle production is based on the computation of the averages

$$p^0\frac{dN}{d\mathbf{p}} = \langle a^\dagger_p a_p\rangle, \qquad p_1^0 p_2^0\frac{dN}{d\mathbf{p}_1 d\mathbf{p}_2} = \langle a^\dagger_{p_1} a^\dagger_{p_2} a_{p_1} a_{p_2}\rangle, \quad \text{etc.} \qquad (9)$$

For a Gaussian-type operator like (7), the thermal Wick theorem holds:

$$\langle a^\dagger_{p_1} a^\dagger_{p_2} a_{p_1} a_{p_2}\rangle = \langle a^\dagger_{p_1} a_{p_1}\rangle\langle a^\dagger_{p_2} a_{p_2}\rangle + \langle a^\dagger_{p_1} a_{p_2}\rangle\langle a^\dagger_{p_2} a_{p_1}\rangle. \qquad (10)$$

To find the averages $\langle a^\dagger_{p_1} a_{p_2}\rangle$, Gaudin's method [12] is used [13]. As a result we have

$$\langle a^\dagger_{p_1} a_{p_2}\rangle = p_2^0\sum_{n=1}^{\infty} G_n(p_1,p_2); \qquad G_n(p_1,p_2) = \int d^3k\, G_{n-1}(p_1,k)\, G_1(k,p_2). \qquad (11)$$

The basic function $G_1(p_1,p_2)$ will be defined below. To this end, first the commutation relation with the entropy operator has to be considered:

$$[a(p), S] = \int d^3k\, M(p,k)\, a(k) \;\Longrightarrow\; M(p,k) \approx M^{(0)}(p,k) + O\!\left(\frac{1}{p_0^2R^2}\right) + O\!\left(\frac{\beta}{p_0R^2}\right), \qquad (12)$$

where, using (7) and (8),

$$M^{(0)}(p_1,p_2) = \beta p_2^0\,\delta(\mathbf{p}_1-\mathbf{p}_2) - \frac{\beta}{(2\pi)^3}\int d^3x\, e^{-i(\mathbf{p}_1-\mathbf{p}_2)\mathbf{x}}\,\mu(x). \qquad (13)$$

The basic function $G_1(p_1,p_2)$ is then defined as

$$G_1(p_1,p_2) = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\, M_n^*(p_1,p_2), \qquad (14)$$

where

$$M_0(p_1,p_2) = \delta(\mathbf{p}_1-\mathbf{p}_2), \quad M_1(p_1,p_2) = M(p_1,p_2); \quad M_n(p_1,p_2) = \int d^3k\, M_{n-1}(p_1,k)\, M_1(k,p_2). \qquad (15)$$

III. THE COMPARISON WITH MIFS

At $m^2R^2,\, m\beta^{-1}R^2 \gg 1$ we find, neglecting the terms $1/p_0^4R^4$ and $\beta^2/p_0^2R^4$ in Eqs.
(12),

$$G_1(p_1,p_2) = G_1^{(0)}(p_1,p_2) + G_1^{(1)}(p_1,p_2) + O\!\left(\frac{1}{p_0^4R^4},\frac{\beta^2}{p_0^2R^4}\right) \approx \frac{1}{(2\pi)^3}\int d^3x\, e^{i\mathbf{q}\mathbf{x}}\, e^{-\beta(p_1^0 - \mu(x))}\left[1 - \frac{3\beta}{4p_1^0R^2}\left(1 + \frac{2i\mathbf{p}_1\mathbf{x}}{3} + \frac{2\beta\mathbf{p}_1^2}{9p_1^0}\right) - \frac{3}{4(p_1^0R)^2}\left(1 + \frac{\beta\mathbf{p}_1^2}{p_1^0}\right)\right] = \int d^3x\, e^{i\mathbf{q}\mathbf{x}}\, D(x,p). \qquad (16)$$

Here $p = (p_1+p_2)/2$ and $q = p_1 - p_2$. At $m^2R^2,\,\Delta^2R^2 \to \infty$, in the non-relativistic limit the Wigner distribution is

$$D(x,p) \approx \frac{\tilde\xi}{(2\pi)^3}\exp\left(-\frac{p^2}{2\Delta^2} - \frac{r^2}{2R^2}\right), \qquad (17)$$

where $\Delta^2 = mT$ and the fugacity $\tilde\xi = e^{\beta\tilde\mu}$, $\tilde\mu = \mu - m$. Comparing Eqs. (17) and (2), we find that in the non-relativistic approach, in the limit of very large emission volumes, the thermal density matrix (7) with the chemical potential (8) and the fugacity $\tilde\xi = \eta/(\Delta R)^3$ is equivalent to the MIFS. The physical reasons are the following. First, if the wavelength of the quanta is much smaller than the system size, a particle does not 'feel' the finiteness of the system, and the 'probe' distribution is the same as in the thermodynamic limit, i.e., Boltzmann-like. Second, for large systems the assumption of independent emission of non-interacting particles is natural: the average distance between any two particles is large compared with their wavelength (or with the size of the wave packet). At the same time, Eq. (16) indicates that for system sizes that are small compared with the quanta wavelengths, the single-particle locally equilibrium distribution $D(x,p)$ no longer has the simple Boltzmann-like form (2). Some distortion terms have a relativistic nature and are essential when the system size is close to the Compton wavelength, $R \sim 1/m$. The others can appear even if $Rm \gg 1$, depending on the ratio between the size $R$ and the thermal wavelength $1/\sqrt{mT}$. They are of quantum nature and appear when one describes the thermal equilibrium of quanta with an average de Broglie wavelength larger than the system size.
In general, according to Eq. (16) one can expect a reduction of soft quanta in the thermal model in comparison with the MIFS Boltzmann ansatz, while the distributions of "hard" quanta coincide in both approaches.

IV. THE WIGNER FUNCTIONS AND SPECTRA IN MULTI-BOSON SYSTEMS

We now consider the limiting behavior of the functions $G_n(p_1,p_2)$ in (11) at small $n \ll \Delta R$ and large $n \gg \Delta R$, for $R\Delta \gg 1$, in the non-relativistic case. Neglecting the terms $1/\Delta^2R^2$ and $1/m^2R^2$ in Eq. (16), we can put $G_1(p_1,p_2) = G_1^{(0)}(p_1,p_2)$ and find the optimal tailing of the two limiting forms at the point $n_t = R\Delta$. Then at $R\Delta \gg 1$ one can express the basic operator average (11) through the Wigner functions of the Bose gas ($g$) and the Bose condensate ($c$) (we will see the correspondence later) as follows:

$$\frac{1}{p_2^0}\langle a^\dagger_{p_1} a_{p_2}\rangle = \sum_{n=1}^{\infty} G_n(p_1,p_2) \approx \int d^3x\, e^{i\mathbf{q}\mathbf{x}}\left(f_g(p,x) + f_c(p,x)\right), \qquad (18)$$

where

$$f_g(p,x) = \frac{1}{(2\pi)^3}\,\frac{1}{\xi_x^{-1}\exp(p^2/2\Delta^2) - 1}, \qquad \xi_x = \xi\exp(-x^2/2R^2) \equiv e^{\beta(\tilde\mu - \frac{3}{2\beta\Delta R})}\exp(-x^2/2R^2), \qquad (19)$$

and

$$f_c(p,x) = \frac{\xi}{1-\xi}\,\xi^{R\Delta}\,\frac{1}{\pi^3}\exp\left(-\frac{R}{\Delta}p^2 - \frac{\Delta}{R}x^2\right). \qquad (20)$$

We have used here the following transformation:

$$\exp\left(-\frac{q^2R^2}{2n}\right) = \left(\frac{n}{2\pi R^2}\right)^{3/2}\int d^3x\, e^{-i\mathbf{x}\mathbf{q}}\exp\left(-\frac{x^2 n}{2R^2}\right). \qquad (21)$$

Note here that the critical value of the non-relativistic chemical potential, $\tilde\mu = \frac{3}{2\beta\Delta R}$, is positive because the gas is not ideal (effectively the gas interacts with the external field confining it in a finite volume). In the thermodynamic limit, volume $V \to \infty$ ($R\Delta \to \infty$) at fixed temperature $T$ and fixed particle density $n = N/V$, the phase-space density in the central part is

$$\tilde n \equiv \frac{n(0)}{\Delta^3} \approx \frac{1}{(2\pi)^{3/2}}\Phi_{3/2}(\tilde\xi) + \frac{1}{(\pi R\Delta)^{3/2}}\,\frac{\xi}{1-\xi}\,\xi^{R\Delta} \equiv \tilde n_g + \tilde n_c, \qquad (22)$$

where $\Phi_\alpha(z) = \sum_{k=1}^{\infty}\frac{z^k}{k^\alpha}$. The first term corresponds to the ordinary Bose-Einstein gas contribution. Let us introduce the associated critical density at $\tilde\xi = 1$:

$$\tilde n_{cr} = \frac{1}{(2\pi)^{3/2}}\Phi_{3/2}(1).$$
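Two numerical facts behind Eqs. (19) and (22) are easy to verify with a short stdlib-only sketch (illustrative, not from the paper): the gas term $f_g$ is the closed form of the geometric series $\sum_{k\ge 1}\xi_x^k e^{-kp^2/2\Delta^2}$, and the critical phase-space density is $\tilde n_{cr} = \Phi_{3/2}(1)/(2\pi)^{3/2} = \zeta(3/2)/(2\pi)^{3/2} \approx 0.166$:

```python
import math

# (a) Resummation check for Eq. (19): sum_k xi_x^k e^{-k u} = 1/(e^u/xi_x - 1),
# with u = p^2 / (2 Delta^2). Illustrative parameter values below.
xi_x, u = 0.7, 1.3
series = sum(xi_x**k * math.exp(-k * u) for k in range(1, 200))
closed = 1.0 / (math.exp(u) / xi_x - 1.0)

# (b) Critical density of Eq. (22): Phi_{3/2}(1) = zeta(3/2), evaluated by a
# truncated sum plus the integral tail 2/sqrt(N).
N = 20000
zeta32 = sum(k**-1.5 for k in range(1, N + 1)) + 2.0 / math.sqrt(N)
n_cr = zeta32 / (2.0 * math.pi)**1.5

print(series, closed, n_cr)
```

So at the critical fugacity, roughly one pion per sixth of a phase-space cell $(2\pi)^3$ already marks the onset of condensation in this model.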
It is easy to see that, in order to approach the thermodynamic limit at fixed density $\tilde n > \tilde n_{cr}$, the parameter $\xi$ at large enough $R\Delta$ has to tend to unity as

$$\xi = 1 - \frac{1}{(\tilde n - \tilde n_{cr})(\pi R\Delta)^{3/2}}. \qquad (23)$$

The single-particle inclusive distribution in the thermodynamic limit has the form

$$n^{(1)}(\mathbf{p}) = \int d^3x\, f(x,p) = n_g(\mathbf{p}) + n_c(\mathbf{p}) \approx \frac{R^3}{(2\pi)^{3/2}}\,\Phi_{3/2}\!\left(\tilde\xi e^{-\frac{p^2}{2\Delta^2}}\right) + \frac{\xi}{1-\xi}\,\xi^{R\Delta}\,\frac{R^3}{(\pi R\Delta)^{3/2}}\exp\left(-\frac{R}{\Delta}p^2\right)$$
$$\xrightarrow{R\Delta\to\infty}\; \frac{1}{(2\pi)^3}\int d^3x\,\frac{1}{\xi_x^{-1}\exp(p^2/2\Delta^2)-1} + (\pi R\Delta)^{3/2}(\tilde n - \tilde n_{cr})\,\delta(\mathbf{p}). \qquad (24)$$

The two terms in the last equality of Eq. (24) are associated with the classical Bose gas and the condensate in the thermodynamic limit at densities $\tilde n > \tilde n_{cr}$. They describe the Bose gas and the Bose condensate for finite systems under the condition $R\Delta \gg 1$. Note that for the Bose condensate the momentum distribution is much narrower than for the Bose gas: $\sqrt{\Delta/2R} \ll \Delta$.

V. THE INTERFEROMETRY OF MULTI-BOSON SOURCES

The calculation of two-particle inclusive spectra is based on Eqs. (9), (10), (18). One easily finds that

$$\frac{1}{p_2^0}\langle a^\dagger_{p_1} a_{p_2}\rangle = n_g(\mathbf{p})\exp(-q^2R_g^2/2) + n_c(\mathbf{p})\exp(-q^2R_c^2/2), \qquad (25)$$

where

$$R_g^2 = R^2\,\frac{\Phi_{5/2}\!\left(\tilde\xi e^{-\frac{p^2}{2\Delta^2}}\right)}{\Phi_{3/2}\!\left(\tilde\xi e^{-\frac{p^2}{2\Delta^2}}\right)} \;\xrightarrow{p=0,\,\tilde\xi=1}\; \approx \frac{R^2}{2}, \qquad R_c^2 = \frac{R}{2\Delta}. \qquad (26)$$

The correlation function in the small-$q^2$ limit, $qR \ll 1$, is then

$$C(p,q) = 1 + \frac{\langle a^\dagger_{p_1} a_{p_2}\rangle\langle a^\dagger_{p_2} a_{p_1}\rangle}{\langle a^\dagger_{p_1} a_{p_1}\rangle\langle a^\dagger_{p_2} a_{p_2}\rangle} = 1 + \exp\left(-\frac{n_g(p)}{n_g(p)+n_c(p)}\,R_g^2 q^2\right). \qquad (27)$$

In the large-$q^2$ limit, $qR \gg 1$, it is

$$C(p,q) = 1 + \left(\frac{n_c}{n_g}\right)^2\exp(-R_c^2 q^2). \qquad (28)$$

The effective interferometry radius squared, corresponding to $C(q_{\rm eff}) = 1 + 1/e$, is

$$R_{\rm eff}^2 \equiv q_{\rm eff}^{-2} = R_c^2\big/\left[1 + 2\ln(n_c(p)/n_g(p))\right].$$

Finally, we see that when the density in phase space increases, the interferometry radius of the gas component is reduced to at most $R/\sqrt{2}$ (see (26)). At very large densities, essentially exceeding the critical one, the Bose condensate determines the correlation function behavior at small $p$. The interferometry radius is then reduced further as compared with its behavior for a pure Bose gas.
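The $p = 0$, $\tilde\xi = 1$ limit in Eq. (26) amounts to $\Phi_{5/2}(1)/\Phi_{3/2}(1) = \zeta(5/2)/\zeta(3/2) \approx 0.51$, so $R_g^2 \approx R^2/2$. A tiny stdlib-only check (illustrative, not from the paper):

```python
import math

def zeta(s, N=20000):
    """Truncated zeta with integral tail correction: sum k^-s + N^(1-s)/(s-1)."""
    return sum(k**-s for k in range(1, N + 1)) + N**(1 - s) / (s - 1)

ratio = zeta(2.5) / zeta(1.5)  # Phi_{5/2}(1) / Phi_{3/2}(1), approaches ~0.51
print(ratio)
```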
At $\tilde n/\tilde n_{cr} \to \infty$, the interferometry radius in inclusive measurements tends to zero, no matter how large the geometric size of the system is. Note that the intercept of the inclusive correlation function is equal to 2 and does not depend on $\tilde n/\tilde n_{cr}$. This reflects the chaotic nature of the thermal multi-boson source. We do not discuss here the very complicated problem of spontaneous symmetry breaking of the condensate in finite systems, which could take place when there is an interaction destroying the degeneration in the system.

[1] W. Zajc, Phys. Rev. D 35, 3396 (1987).
[2] W.N. Zhang et al., Phys. Rev. C 47, 795 (1993).
[3] S. Pratt, Phys. Lett. B 301, 159 (1993); Phys. Rev. C 50, 469 (1994).
[4] G.F. Bertsch, Phys. Rev. Lett. 72, 2349 (1994).
[5] Yu.M. Sinyukov, B. Lorstad, Z. Phys. C 61, 587 (1994).
[6] N.S. Amelin, R. Lednicky, Heavy Ion Physics 4, 241 (1996); SUBATECH 95-08, Nantes 1995.
[7] B. Erazmus et al., Internal Note ALICE 95-43, Geneva 1995.
[8] W.Q. Chao, C.S. Gao, Q.H. Zhang, J. Phys. G: Nucl. Phys. 21, 847 (1995).
[9] J. Zimanyi, T. Csorgo, hep-ph/9705432; T. Csorgo, J. Zimanyi, hep-ph/9705433.
[10] D.N. Zubarev, Nonequilibrium Statistical Thermodynamics, Nauka, Moscow, 1971.
[11] Ch.G. van Weert, Ann. Phys. 140, 133 (1982).
[12] M. Gaudin, Nucl. Phys. 15, 89 (1960).
[13] Yu.M. Sinyukov, Preprint ITP-93-8E (unpublished); Nucl. Phys. A 566, 589c (1994); in: Hot Hadronic Matter: Theory and Experiment (J. Letessier, H.H. Gutbrod, J. Rafelski, eds.), p. 309, Plenum Publ. Corp., 1995.
Some unlimited families of minimal surfaces of general type with the canonical map of degree 8

Nguyen Bin
In this note, we construct nine families of projective complex minimal surfaces of general type having the canonical map of degree 8 and irregularity 0 or 1. For six of these families the canonical system has a non-trivial fixed part.

Introduction

Let $X$ be a smooth complex surface of general type (see [3] or [1]) and let $\varphi_{|K_X|}: X \to \mathbb{P}^{p_g(X)-1}$ be the canonical map of $X$, where $p_g(X) = \dim H^0(X, K_X)$ is the geometric genus and $K_X$ is the canonical divisor of $X$. A classical result of Beauville [2, Theorem 3.1] says that if the image of $\varphi_{|K_X|}$ is a surface, either $p_g\big(\mathrm{im}\,\varphi_{|K_X|}\big) = 0$ or $\mathrm{im}\,\varphi_{|K_X|}$ is a surface of general type. In addition, the degree $d$ of the canonical map of $X$ is less than or equal to 36. While surfaces with $d = 2$ have been studied thoroughly by E. Horikawa in several papers such as [7], [8], [10], [9], the case where $d$ is bigger than 2 remains one of the most interesting open problems in the theory of surfaces. Several surfaces with $d$ bigger than 2 have been constructed, for example with $d = 3, 5, 9$ by R. Pardini [13] and S.L. Tan [18], $d = 6, 8$ by A. Beauville [2], $d = 4$ by A. Beauville [2] and by F.J. Gallego and B.P. Purnaprajna [6], $d = 16$ by U. Persson [14] and C. Rito [17], and $d = 12, 24$ by C. Rito [16], [15], etc.

In the same paper [2], Beauville also proved that the degree of the canonical map is less than or equal to 9 if $\chi(\mathcal{O}_X) \geq 31$. Later, G. Xiao showed that if the geometric genus of $X$ is bigger than 132, the degree of the canonical map is less than or equal to 8 [19]. In addition, he also proved that if the degree of the canonical map is 8 and the geometric genus is bigger than 115, the irregularity $q = h^0(\Omega^1_X)$ is less than or equal to 3 (see [20]). Beauville constructed an unlimited family of surfaces with $d = 8$ and arbitrarily high geometric genus [2]. These surfaces have irregularity $q = 3$, and the canonical linear system of these surfaces is base point free.

In this note, we construct nine unlimited families of surfaces with $d = 8$ and $q = 0$ or $q = 1$. Furthermore, for some of these families the canonical linear systems are not base point free. The following theorem is the main result of this note.

Mathematics Subject Classification (2010): 14J29.
DOI: 10.1007/s00229-019-01147-4
PDF: https://arxiv.org/pdf/1908.11363v1.pdf
arXiv: 1908.11363
Corpus ID: 201669063
SHA: d9c9882e4a967008f8f866e5577090f53cf139de
(29 Aug 2019)

Theorem 1. Let $n$ be an integer number such that $n \geq 2$. Then there exist minimal surfaces of general type $X$ with canonical map $\varphi_{|K_X|}$ of degree 8 and the following invariants:

$$K_X^2 \qquad p_g(X) \qquad q(X) \qquad |K_X|$$

The approach to constructing these surfaces is to use $\mathbb{Z}_2^3$-covers with some appropriate branch loci. Note that canonical maps defined by abelian covers of $\mathbb{P}^2$, and in particular abelian covers with the group $\mathbb{Z}_2^3$, have been studied very explicitly by Rong Du and Yun Gao in [5].

$\mathbb{Z}_2^3$-coverings

The construction of abelian covers was studied by R. Pardini in [12]. Let $H_{i_1,i_2,i_3}$ denote the nontrivial cyclic subgroup of $\mathbb{Z}_2^3$ generated by $(i_1,i_2,i_3)$, for all $(i_1,i_2,i_3) \in \mathbb{Z}_2^3 \setminus \{(0,0,0)\}$, and denote by $\chi_{j_1,j_2,j_3}$ the character of $\mathbb{Z}_2^3$ defined by

$$\chi_{j_1,j_2,j_3}(a_1,a_2,a_3) := e^{\pi i\, a_1 j_1}\, e^{\pi i\, a_2 j_2}\, e^{\pi i\, a_3 j_3}$$

for all $j_1, j_2, j_3, a_1, a_2, a_3 \in \mathbb{Z}_2$. For the sake of simplicity, from now on we use the notations $D_1, D_2, \ldots, D_7$ instead of $D_{(H_{0,0,1},\chi_{0,0,1})}$, $D_{(H_{0,1,0},\chi_{0,1,0})}$, $D_{(H_{0,1,1},\chi_{0,1,0})}$, $D_{(H_{1,0,0},\chi_{1,0,0})}$, $D_{(H_{1,0,1},\chi_{1,0,0})}$, $D_{(H_{1,1,0},\chi_{1,0,0})}$, $D_{(H_{1,1,1},\chi_{1,0,0})}$, respectively. For details about the building data of abelian covers and their notation, we refer the reader to Sections 1 and 2 of R. Pardini's work [12]. From [12, Theorem 2.1] we can define $\mathbb{Z}_2^3$-covers as follows:

Proposition 1. Let $Y$ be a smooth projective surface.
Let $L_\chi$ be divisors on $Y$ with $L_\chi \not\equiv \mathcal{O}_Y$ for all nontrivial characters $\chi \in (\mathbb{Z}_2^3)^* \setminus \{\chi_{0,0,0}\}$. Let $D_1, D_2, \ldots, D_7$ be effective divisors on $Y$ such that the branch divisor $B := \sum_{i=1}^{7} D_i$ is reduced. Then $\{L_\chi, D_j\}_{\chi,j}$ is the building data of a $\mathbb{Z}_2^3$-cover $f: X \to Y$ if and only if

$$\begin{aligned}
2L_{1,0,0} &\equiv D_4 + D_5 + D_6 + D_7, & 2L_{0,1,0} &\equiv D_2 + D_3 + D_6 + D_7,\\
2L_{0,0,1} &\equiv D_1 + D_3 + D_5 + D_7, & 2L_{1,1,0} &\equiv D_2 + D_3 + D_4 + D_5,\\
2L_{1,0,1} &\equiv D_1 + D_3 + D_4 + D_6, & 2L_{0,1,1} &\equiv D_1 + D_2 + D_5 + D_6,\\
2L_{1,1,1} &\equiv D_1 + D_2 + D_4 + D_7. &&
\end{aligned}$$

By [12, Theorem 3.1], if each $D_\sigma$ is smooth and $B$ is a simple normal crossings divisor, then the surface $X$ is smooth. Also, from [12, Lemma 4.2, Proposition 4.2] we have:

Proposition 2. Let $f: X \to Y$ be a smooth $\mathbb{Z}_2^3$-cover with building data $D_1, D_2, \ldots, D_7$, $L_\chi$, $\chi \in (\mathbb{Z}_2^3)^* \setminus \{\chi_{0,0,0}\}$. The invariants of $X$ are as follows:

$$2K_X \equiv f^*\Big(2K_Y + \sum_{j=1}^{7} D_j\Big), \qquad K_X^2 = 2\Big(2K_Y + \sum_{j=1}^{7} D_j\Big)^2,$$
$$p_g(X) = p_g(Y) + \sum_{\chi \neq \chi_{0,0,0}} h^0(L_\chi + K_Y), \qquad \chi(\mathcal{O}_X) = 8\chi(\mathcal{O}_Y) + \sum_{\chi \neq \chi_{0,0,0}} \tfrac{1}{2}\, L_\chi(L_\chi + K_Y).$$

Notation 1. We write $P = (k_1, k_2, \ldots, k_7)$ when $D_1, D_2, \ldots, D_7$ contain $P$ with multiplicity $k_1, k_2, \ldots, k_7$, respectively.

Constructions

Construction 1

In this section, we construct the surfaces in the first four rows of Theorem 1.

Construction and computation of invariants

Let $\mathbb{F}_1$ denote the Hirzebruch surface with the negative section $\Delta_0$ of self-intersection $-1$, and let $\Gamma$ denote a fiber of the ruling. Let $D_2 = 2n\Gamma$ be $2n$ fibers in $\mathbb{F}_1$, and let $D_3, D_6, D_7 \in |2\Delta_0 + 2\Gamma|$ be smooth curves in general position. Let $f: X \to \mathbb{F}_1$ be a $\mathbb{Z}_2^3$-cover with branch locus $B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7$, where $D_1 = D_4 = D_5 = 0$. By Proposition 1, $L_{0,1,0} \equiv 3\Delta_0 + (n+3)\Gamma$, and $L_\chi$ is equivalent to either $2\Delta_0 + 2\Gamma$ or $\Delta_0 + (n+1)\Gamma$ for all $L_\chi \neq L_{0,1,0}$. Since each $D_\sigma$ is smooth and $B$ is a normal crossings divisor, $X$ is smooth.
Moreover, by Proposition 2 we get $2K_X \equiv f^*(2\Delta_0 + 2n\Gamma)$. This implies that $X$ is a minimal surface of general type. Furthermore, by Proposition 2, the invariants of $X$ are as follows:

$$K_X^2 = 8(2n-1), \qquad (1)$$
$$p_g(X) = h^0(\Delta_0 + n\Gamma) = 2n+1, \qquad (2)$$
$$\chi(\mathcal{O}_X) = 2n+2. \qquad (3)$$

From (2) and (3) we get $q(X) = 0$. We show that $|K_X|$ is not composed with a pencil by considering the double cover $f_1: X_1 \to \mathbb{F}_1$ branched on $D_2 + D_3 + D_6 + D_7$. We have $K_{X_1} \equiv f_1^*(\Delta_0 + n\Gamma)$. Because $|\Delta_0 + n\Gamma|$ is not composed with a pencil, $|K_{X_1}|$ is not composed with a pencil either. This implies that $|K_X|$ is not composed with a pencil and the degree of the canonical map is 8. Moreover, $\deg \mathrm{im}\,\varphi_{|K_X|} = 2n - 1$.

Variations

Now, by adding a singular point to the above branch locus, we obtain the surfaces described in the second row of Theorem 1. In fact, by Proposition 1, a new branch locus can be formed by adding a point $P = (0,1,1,0,0,1,1)$ (see Notation 1), and we consider the $\mathbb{Z}_2^3$-cover of $Y$ instead of $\mathbb{F}_1$, where $Y$ is the blow-up of $\mathbb{F}_1$ at $P$. More precisely, let $P$ be a point in $\mathbb{F}_1$ such that $D_2, D_3, D_6, D_7$ contain $P$ with multiplicity 1, 1, 1, 1, respectively. Let $Y$ be the blow-up of $\mathbb{F}_1$ at $P$ and $E$ the exceptional divisor. If we abuse notation and denote by $D_2, D_3, D_6, D_7, \Delta_0, \Gamma$ their pullbacks to $Y$, then $D_2 = 2n\Gamma - E$, $D_3 = 2\Delta_0 + 2\Gamma - E$, $D_6 = 2\Delta_0 + 2\Gamma - E$ and $D_7 = 2\Delta_0 + 2\Gamma - E$. Let $f: X \to Y$ be a $\mathbb{Z}_2^3$-cover with branch locus $B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7$, where $D_1 = D_4 = D_5 = 0$. The building data is as follows:

$$\begin{aligned}
L_{1,0,0} &\equiv 2\Delta_0 + 2\Gamma - E, & L_{0,1,0} &\equiv 3\Delta_0 + (n+3)\Gamma - 2E,\\
L_{0,0,1} &\equiv 2\Delta_0 + 2\Gamma - E, & L_{1,1,0} &\equiv \Delta_0 + (n+1)\Gamma - E,\\
L_{1,0,1} &\equiv 2\Delta_0 + 2\Gamma - E, & L_{0,1,1} &\equiv \Delta_0 + (n+1)\Gamma - E,\\
L_{1,1,1} &\equiv \Delta_0 + (n+1)\Gamma - E. &&
\end{aligned}$$

Similarly to the above, we obtain minimal surfaces of general type with $K^2 = 16n - 16$, $p_g = 2n$, $q = 0$, $d = 8$, and $\deg \mathrm{im}\,\varphi_{|K_X|} = 2n - 2$. Moreover, $\varphi_{|K_X|}$ is a morphism.
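The intersection-theoretic computations behind (1) and (3) can be checked mechanically. The following sketch (illustrative, not from the paper) encodes a divisor class $a\Delta_0 + (b_0 + b_1 n)\Gamma$ on $\mathbb{F}_1$ as a triple $(a, b_0, b_1)$ and uses $\Delta_0^2 = -1$, $\Delta_0\cdot\Gamma = 1$, $\Gamma^2 = 0$, $K_{\mathbb{F}_1} = -2\Delta_0 - 3\Gamma$:

```python
from fractions import Fraction

def dot(D, Dp):
    """Intersection number as a pair (c0, c1) meaning c0 + c1*n."""
    a, b0, b1 = D
    ap, bp0, bp1 = Dp
    return (-a * ap + a * bp0 + ap * b0, a * bp1 + ap * b1)

def add(*Ds):
    return tuple(sum(c) for c in zip(*Ds))

K_Y = (-2, -3, 0)  # K_{F_1} = -2*Delta0 - 3*Gamma

# Branch divisors of Construction 1: D2 = 2n*Gamma, D3 = D6 = D7 in |2Delta0 + 2Gamma|
B = add((0, 0, 2), (2, 2, 0), (2, 2, 0), (2, 2, 0))

M = add(K_Y, K_Y, B)                  # 2K_Y + sum D_j = 2*Delta0 + 2n*Gamma
K2 = tuple(2 * c for c in dot(M, M))  # K_X^2 = 2*(2K_Y + B)^2 by Proposition 2

# chi(O_X) = 8*chi(O_{F_1}) + sum over the 7 characters of (1/2) L_chi (L_chi + K_Y)
Ls = [(3, 3, 1)] + 3 * [(2, 2, 0)] + 3 * [(1, 1, 1)]
chi = (Fraction(8), Fraction(0))
for L in Ls:
    c0, c1 = dot(L, add(L, K_Y))
    chi = (chi[0] + Fraction(c0, 2), chi[1] + Fraction(c1, 2))

print(K2, chi)  # (-8, 16) and (2, 2): K_X^2 = 16n - 8 and chi(O_X) = 2n + 2
```

Both outputs agree with Eqs. (1) and (3).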
Analogously, by Proposition 1, a point $(0,0,0,0,0,2,2)$ can be added to the original branch locus. In fact, let $P$ be a point in $\mathbb{F}_1$ such that $D_6, D_7$ contain $P$ with multiplicity 2, 2, respectively. Let $Y$ be the blow-up of $\mathbb{F}_1$ at $P$ and $E$ the exceptional divisor. If we abuse notation and denote by $D_2, D_3, D_6, D_7, \Delta_0, \Gamma$ their pullbacks to $Y$, then $D_2 = 2n\Gamma$, $D_3 = 2\Delta_0 + 2\Gamma$, $D_6 = 2\Delta_0 + 2\Gamma - 2E$ and $D_7 = 2\Delta_0 + 2\Gamma - 2E$. Let $f: X \to Y$ be a $\mathbb{Z}_2^3$-cover with branch locus $B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7$, where $D_1 = D_4 = D_5 = 0$. The building data is as follows:

$$\begin{aligned}
L_{1,0,0} &\equiv 2\Delta_0 + 2\Gamma - 2E, & L_{0,1,0} &\equiv 3\Delta_0 + (n+3)\Gamma - 2E,\\
L_{0,0,1} &\equiv 2\Delta_0 + 2\Gamma - E, & L_{1,1,0} &\equiv \Delta_0 + (n+1)\Gamma,\\
L_{1,0,1} &\equiv 2\Delta_0 + 2\Gamma - E, & L_{0,1,1} &\equiv \Delta_0 + (n+1)\Gamma - E,\\
L_{1,1,1} &\equiv \Delta_0 + (n+1)\Gamma - E. &&
\end{aligned}$$

We get minimal surfaces of general type with $K^2 = 16n - 16$, $p_g = 2n$, $q = 1$, $d = 8$, and $\deg \mathrm{im}\,\varphi_{|K_X|} = 2n - 2$. Furthermore, $\varphi_{|K_X|}$ is a morphism. Therefore we obtain the surfaces described in the third row of Theorem 1. The Albanese pencil of these surfaces, $X \to \mathrm{Alb}(X)$, is the pullback of the Albanese pencil of the intermediate surface $Z$, where $Z$ is obtained by the $\mathbb{Z}_2$-cover branched on $2L_{1,0,0}$. For details about surfaces with $q > 0$, we refer the reader to the work of Mendes Lopes and Pardini [11].

Remark 1. The surfaces in the first three rows of Theorem 1 can be obtained by taking three iterated $\mathbb{Z}_2$-covers. First, we branch on $D_2$, $D_3$, $D_6$ and $D_7$ (i.e. $B = 2L_{0,1,0}$) and get Horikawa surfaces with $K^2 = 2p_g - 4$ [7]. The second cover is branched only on nodes (i.e. $B = 2L_{1,0,0}$); these nodes come from the intersection points of $D_2 + D_3$ with $D_6 + D_7$. The last cover is branched on the nodes coming from the intersection points of $D_2$ with $D_3$ and of $D_6$ with $D_7$ (i.e. $B = 2L_{0,0,1}$) (see [4, Prop. 3.1]).
Moreover, the corresponding diagram commutes: the Z_2^3-cover f : X → Y factors as the tower of double covers X → X_2 → X_1 → Y, where f_3 : X → X_2 is branched on 2L_{0,0,1}, f_2 : X_2 → X_1 on 2L_{1,0,0}, and f_1 : X_1 → Y on 2L_{0,1,0}, and the canonical map ϕ_{|K_X|} : X → P^{p_g(X)−1} factors through ϕ_{|K_{X_1}|}. [Commutative diagram omitted.]

Now, by Proposition 1, a point (0, 0, 1, 0, −1, 1, 2) can be imposed on the original branch locus, where −1 in the fifth component means that the exceptional divisor is added to D_5. In fact, let P be a point in F_1 such that D_3, D_6, D_7 contain P with multiplicity 1, 1, 2, respectively. Let Y be the blow-up of F_1 at P and E be the exceptional divisor. If we abuse notation and denote by D_2, D_3, D_6, D_7, ∆_0, Γ their pullbacks to Y, then D_2 = 2nΓ, D_3 = 2∆_0 + 2Γ − E, D_6 = 2∆_0 + 2Γ − E and D_7 = 2∆_0 + 2Γ − 2E. Let f : X → Y be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_4 = 0 and D_5 = E. The building data is as follows:

L_{1,0,0} ≡ 2∆_0 + 2Γ − E
L_{0,1,0} ≡ 3∆_0 + (n + 3)Γ − 2E
L_{0,0,1} ≡ 2∆_0 + 2Γ − E
L_{1,1,0} ≡ ∆_0 + (n + 1)Γ
L_{1,0,1} ≡ 2∆_0 + 2Γ − E
L_{0,1,1} ≡ ∆_0 + (n + 1)Γ
L_{1,1,1} ≡ ∆_0 + (n + 1)Γ − E.

We get minimal surfaces of general type with K^2 = 16n − 10, p_g = 2n, q = 0, and deg im ϕ_{|K_X|} = 2n − 2. Moreover, |K_X| is not base point free (we will prove this in Section 3.1.3). Therefore, we obtain the surfaces described in the fourth row of Theorem 1.

The fixed part of the canonical system

In this section, we show that the canonical linear system |K_X| of the surfaces in the fourth row of Theorem 1 has a nontrivial fixed part. Indeed, the Z_2^3-cover f : X → Y factors through X_2, where X_2 is obtained by the Z_2^2-cover ramifying on 2L_{1,1,1}, 2L_{1,0,1}. The linear system |K_{X_2}| is base point free. The surface X is obtained by the Z_2-cover ramifying on the pullback of D_5 = E and some A_1 points. So the moving part of |K_X| is the pullback of |K_{X_2}|. Therefore, the fixed part of |K_X| is (1/2) f^*(E).
More precisely, we consider the Z_2^3-cover as the composition of double covers X → X_2 → X_1 → Y, where f_3 : X → X_2 is branched on 2L_{1,0,0}, f_2 : X_2 → X_1 on 2L_{1,0,1}, and f_1 : X_1 → Y on 2L_{1,1,1}, and ϕ_{|K_{X_2}|} : X_2 → P^{2n−1} has degree 4. [Commutative diagram omitted.]

The first cover ramifies on D_2 + D_7 (i.e. B = 2L_{1,1,1}) and we get a surface X_1 with K_{X_1} ≡ f_1^*(−∆_0 + (n − 2)Γ). Moreover, f_1^*(E) = E_1 with E_1^2 = −2, g(E_1) = 0. The second cover ramifies on D_3 + D_6 (i.e. B = 2L_{1,0,1}). We have K_{X_2} ≡ f_2^* f_1^*(∆_0 + nΓ − E), so |K_{X_2}| is base point free. Moreover, f_2^*(E_1) = E_2 with E_2^2 = −4, g(E_2) = 1. The last cover ramifies on f_2^* f_1^*(E) and 8n + 6 nodes (i.e. B = 2L_{1,0,0}); these nodes come from the intersection points between D_2 and D_7, and D_3 and D_6. And we obtain f_3^*(E_2) = 2E_3 with E_3^2 = −2, g(E_3) = 1.

In addition, by the projection formula (see [5, Corollary 2.3]), we get

h^0(K_X) = h^0(f_3^*(K_{X_2})) = 2n.   (4)

On the other hand, K_X ≡ f_3^*(K_{X_2}) + R, where R is the ramification of f_3. Hence,

K_X ≡ f_3^*(K_{X_2}) + E_3.   (5)

From (4) and (5), the elliptic curve E_3 is the fixed part of |K_X|.

Construction 2

In this section, we construct the surfaces in the last five rows of Theorem 1.

Construction and computation of invariants

Let D_3 = Γ, D_4 ∈ |∆_0 + Γ| + ∆_0, D_7 = (2n + 1)Γ be in F_1 and let D_5, D_6 ∈ |2∆_0 + 2Γ| be smooth curves in general position in F_1. Let f : X → F_1 be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_2 = 0. By Proposition 1, L_{1,0,0} ≡ 3∆_0 + (n + 3)Γ, and L_χ is equivalent to either 2∆_0 + 2Γ, ∆_0 + (n + 2)Γ or ∆_0 + (n + 1)Γ for all L_χ ≠ L_{1,0,0}. Since each D_σ is smooth and B is a normal crossings divisor, X is smooth. Furthermore, by Proposition 2, we get 2K_X ≡ f^*(2∆_0 + (2n + 1)Γ). This implies that X is a minimal surface of general type.
Moreover, by Proposition 2, the invariants of X are as follows:

K_X^2 = 16n   (6)
p_g(X) = h^0(∆_0 + nΓ) = 2n + 1   (7)
χ(O_X) = 2n + 2.   (8)

From (7) and (8), we get q(X) = 0.

We show that |K_X| is not composed with a pencil by considering the double cover g_1 : Y_1 → F_1 ramifying on D_4 + D_5 + D_6 + D_7. We have K_{Y_1} ≡ g_1^*(∆_0 + nΓ). Because |∆_0 + nΓ| is not composed with a pencil, |K_{Y_1}| is not composed with a pencil, either. This yields that |K_X| is not composed with a pencil and the degree of the canonical map is 8.

The fixed part of the canonical system

In this section, we show that the canonical linear system |K_X| has a nontrivial fixed part. In fact, the Z_2^3-cover f : X → Y factors through X_2, where X_2 is obtained by the Z_2^2-cover ramifying on 2L_{1,1,1}, 2L_{0,1,1}. The linear system |K_{X_2}| is base point free. The surface X is obtained by the Z_2-cover ramifying on the pullback of D_3 = Γ and some A_1 points. So the moving part of |K_X| is the pullback of |K_{X_2}|. Therefore, the fixed part of |K_X| is (1/2) f^*(Γ).

More precisely, we consider the Z_2^3-cover as the composition of double covers X → X_2 → X_1 → Y, where f_3 : X → X_2 is branched on 2L_{0,1,0}, f_2 : X_2 → X_1 on 2L_{0,1,1}, and f_1 : X_1 → Y on 2L_{1,1,1}, and ϕ_{|K_{X_2}|} : X_2 → P^{2n} has degree 4. [Commutative diagram omitted.]

The first cover ramifies on D_4 + D_7 (i.e. B = 2L_{1,1,1}). We get a surface X_1 with K_{X_1} ≡ f_1^*(−∆_0 + (n − 2)Γ). Furthermore, f_1^*(D_3) = Γ_1 with g(Γ_1) = 0. The second cover ramifies on D_5 + D_6 (i.e. B = 2L_{0,1,1}). We get a surface of general type X_2 with K_{X_2} ≡ f_2^* f_1^*(∆_0 + nΓ). Hence, |K_{X_2}| is base point free and deg im ϕ_{|K_{X_2}|} = 2n − 1. Furthermore, f_2^*(Γ_1) = Γ_2 with g(Γ_2) = 3. The last cover ramifies on f_2^* f_1^*(D_3) and 8n + 12 nodes (i.e. B = 2L_{0,1,0}); these nodes come from the intersection points between D_4 and D_7, and D_5 and D_6. And we get f_3^*(Γ_2) = 2Γ_3 with g(Γ_3) = 3.
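As with the first construction, the value of K_X^2 in (6) can be double-checked (a consistency check added here, not part of the original text) against the intersection numbers ∆_0^2 = −1, ∆_0 · Γ = 1, Γ^2 = 0 on F_1:

```latex
% From 2K_X \equiv f^*\bigl(2\Delta_0 + (2n+1)\Gamma\bigr) and \deg f = 8:
4K_X^2 = \deg(f)\,\bigl(2\Delta_0 + (2n+1)\Gamma\bigr)^2
       = 8\bigl(4\Delta_0^2 + 4(2n+1)\,\Delta_0\cdot\Gamma + (2n+1)^2\,\Gamma^2\bigr)
       = 8(-4 + 8n + 4) = 64n,
\qquad\text{so } K_X^2 = 16n.
```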
In addition, by the projection formula, we get

h^0(K_X) = h^0(f_3^*(K_{X_2})) = 2n + 1.   (9)

On the other hand, K_X ≡ f_3^*(K_{X_2}) + R, where R is the ramification of f_3. Hence,

K_X ≡ f_3^*(K_{X_2}) + Γ_3.   (10)

Therefore, from (9) and (10), the curve Γ_3 is the fixed part of |K_X|.

Variations

By Proposition 1, a point (0, 0, 0, 1, 1, 1, 1) can be imposed on the branch locus. In fact, let P be a point in F_1 such that D_4, D_5, D_6, D_7 contain P with multiplicity 1, 1, 1, 1, respectively. Let Y be the blow-up of F_1 at P and E be the exceptional divisor. If we abuse notation and denote by D_3, D_4, D_5, D_6, D_7, ∆_0, Γ their pullbacks to Y, then D_3 = Γ, D_4 = 2∆_0 + Γ − E, D_5 = 2∆_0 + 2Γ − E, D_6 = 2∆_0 + 2Γ − E and D_7 = (2n + 1)Γ − E. Let f : X → Y be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_2 = 0. The building data is as follows:

L_{1,0,0} ≡ 3∆_0 + (n + 3)Γ − 2E
L_{0,1,0} ≡ ∆_0 + (n + 2)Γ − E
L_{0,0,1} ≡ ∆_0 + (n + 2)Γ − E
L_{1,1,0} ≡ 2∆_0 + 2Γ − E
L_{1,0,1} ≡ 2∆_0 + 2Γ − E
L_{0,1,1} ≡ 2∆_0 + 2Γ − E
L_{1,1,1} ≡ ∆_0 + (n + 1)Γ − E.

Similarly to the above, we get minimal surfaces of general type with K^2 = 16n − 8, p_g = 2n, q = 0, d = 8, and deg im ϕ_{|K_X|} = 2n − 2. Moreover, (1/2) f^*(Γ) is the fixed part of |K_X|, and the cover factors as the analogous tower of double covers branched on 2L_{1,1,1}, 2L_{0,1,1}, 2L_{0,1,0}, with ϕ_{|K_{X_2}|} : X_2 → P^{2n−1} of degree 4. [Commutative diagram omitted.] So we obtain the surfaces in the sixth row of Theorem 1.

Analogously, by Proposition 1, we can put a point (0, 0, 0, 0, 2, 2, 0) into the original branch locus. In fact, let P be a point in F_1 such that D_5, D_6 contain P with multiplicity 2, 2, respectively. Let Y be the blow-up of F_1 at P and E be the exceptional divisor.
If we abuse notation and denote by D_3, D_4, D_5, D_6, D_7, ∆_0, Γ their pullbacks to Y, then D_3 = Γ, D_4 = 2∆_0 + Γ, D_5 = 2∆_0 + 2Γ − 2E, D_6 = 2∆_0 + 2Γ − 2E and D_7 = (2n + 1)Γ. Let f : X → Y be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_2 = 0. The building data is as follows:

L_{1,0,0} ≡ 3∆_0 + (n + 3)Γ − 2E
L_{0,1,0} ≡ ∆_0 + (n + 2)Γ − E
L_{0,0,1} ≡ ∆_0 + (n + 2)Γ − E
L_{1,1,0} ≡ 2∆_0 + 2Γ − E
L_{1,0,1} ≡ 2∆_0 + 2Γ − E
L_{0,1,1} ≡ 2∆_0 + 2Γ − 2E
L_{1,1,1} ≡ ∆_0 + (n + 1)Γ.

Similarly to the above, we get minimal surfaces of general type with K^2 = 16n − 8, p_g = 2n, q = 1, d = 8, and deg im ϕ_{|K_X|} = 2n − 2. Furthermore, (1/2) f^*(Γ) is the fixed part of |K_X|, and the cover factors as the analogous tower of double covers branched on 2L_{1,1,1}, 2L_{0,1,1}, 2L_{0,1,0}. [Commutative diagram omitted.] Thus, we obtain the surfaces in the seventh row of Theorem 1. The Albanese pencil of these surfaces X → Alb(X) is the pullback of the Albanese pencil of the intermediate surface Z, where Z is obtained by the Z_2-cover ramifying on 2L_{0,1,1}.

Similarly, by Proposition 1, a new branch locus can be formed by adding a point (0, 0, −1, 1, 2, 0, 1), where −1 in the third component means that the exceptional divisor E is added to D_3. In fact, let P be a point in F_1 such that D_4, D_5, D_7 contain P with multiplicity 1, 2, 1, respectively. Let Y be the blow-up of F_1 at P and E be the exceptional divisor. If we abuse notation and denote by D_4, D_5, D_6, D_7, ∆_0, Γ their pullbacks to Y, then D_4 = 2∆_0 + Γ − E, D_5 = 2∆_0 + 2Γ − 2E, D_6 = 2∆_0 + 2Γ and D_7 = (2n + 1)Γ − E. Let f : X → Y be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_2 = 0 and D_3 = Γ + E.
The building data is as follows:

L_{1,0,0} ≡ 3∆_0 + (n + 3)Γ − 2E
L_{0,1,0} ≡ ∆_0 + (n + 2)Γ
L_{0,0,1} ≡ ∆_0 + (n + 2)Γ − E
L_{1,1,0} ≡ 2∆_0 + 2Γ − E
L_{1,0,1} ≡ 2∆_0 + 2Γ
L_{0,1,1} ≡ 2∆_0 + 2Γ − E
L_{1,1,1} ≡ ∆_0 + (n + 1)Γ − E.

Similarly to the above, we get minimal surfaces of general type with K^2 = 16n − 2, p_g = 2n, q = 0, d = 8, and deg im ϕ_{|K_X|} = 2n − 2. Moreover, (1/2) f^*(Γ + E) is the fixed part of |K_X|, and the cover factors as the analogous tower of double covers branched on 2L_{1,1,1}, 2L_{0,1,1}, 2L_{0,1,0}, with ϕ_{|K_{X_2}|} : X_2 → P^{2n−1} of degree 4. [Commutative diagram omitted.] Therefore, we obtain the surfaces in the eighth row of Theorem 1.

Finally, for n ≥ 3, by Proposition 1, a point P = (0, 0, −1, 1, 2, 2, 1) can be added to the original branch locus, where −1 in the third component means that the exceptional divisor is added to D_3. In fact, let P be a point in F_1 such that D_4, D_5, D_6, D_7 contain P with multiplicity 1, 2, 2, 1, respectively. Let Y be the blow-up of F_1 at P and E be the exceptional divisor. If we abuse notation and denote by D_4, D_5, D_6, D_7, ∆_0, Γ their pullbacks to Y, then D_4 = 2∆_0 + Γ − E, D_5 = 2∆_0 + 2Γ − 2E, D_6 = 2∆_0 + 2Γ − 2E and D_7 = (2n + 1)Γ − E. Let f : X → Y be a Z_2^3-cover with the branch locus B = D_1 + D_2 + D_3 + D_4 + D_5 + D_6 + D_7, where D_1 = D_2 = 0 and D_3 = Γ + E. The building data is as follows:

L_{1,0,0} ≡ 3∆_0 + (n + 3)Γ − 3E
L_{0,1,0} ≡ ∆_0 + (n + 2)Γ − E
L_{0,0,1} ≡ ∆_0 + (n + 2)Γ − E
L_{1,1,0} ≡ 2∆_0 + 2Γ − E
L_{1,0,1} ≡ 2∆_0 + 2Γ − E
L_{0,1,1} ≡ 2∆_0 + 2Γ − 2E
L_{1,1,1} ≡ ∆_0 + (n + 1)Γ − E.

After contracting the −1 curve arising from the fiber passing through P, we get minimal surfaces of general type with K^2 = 16n − 16, p_g = 2n − 2, q = 1, d = 8, and deg im ϕ_{|K_X|} = 2n − 4.
Furthermore, (1/2) f^*(Γ + E) is the fixed part of |K_X|, and the cover factors as the analogous tower of double covers branched on 2L_{1,1,1}, 2L_{0,1,1}, 2L_{0,1,0}, with ϕ_{|K_{X_2}|} : X_2 → P^{2n−3} of degree 4. [Commutative diagram omitted.] Thus, taking m = n − 1, m ≥ 2, we obtain the surfaces in the last row of Theorem 1. The Albanese pencil of these surfaces X → Alb(X) is the pullback of the Albanese pencil of the intermediate surface Z, where Z is obtained by the Z_2-cover ramifying on 2L_{0,1,1}.

Acknowledgments

The author is deeply indebted to Margarida Mendes Lopes for all her help. Many thanks are also due to the anonymous referee for his/her suggestions. The author is supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, under the framework of the program Lisbon Mathematics PhD (LisMath).

References

[1] W. P. Barth, K. Hulek, C. A. M. Peters, and A. Van de Ven, Compact complex surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, vol. 4, Springer-Verlag, Berlin, second ed., 2004.
[2] A. Beauville, L'application canonique pour les surfaces de type général, Invent. Math. 55 (1979), pp. 121-140.
[3] A. Beauville, Complex algebraic surfaces, London Mathematical Society Student Texts, vol. 34, Cambridge University Press, Cambridge, second ed., 1996. Translated from the 1978 French original by R. Barlow, with assistance from N. I. Shepherd-Barron and M. Reid.
[4] I. Dolgachev, M. Mendes Lopes, and R. Pardini, Rational surfaces with many nodes, Compositio Math. 132 (2002), pp. 349-363.
[5] R. Du and Y. Gao, Canonical maps of surfaces defined by abelian covers, Asian J. Math. 18 (2014), pp. 219-228.
[6] F. J. Gallego and B. P. Purnaprajna, Classification of quadruple Galois canonical covers. I, Trans. Amer. Math. Soc. 360 (2008), pp. 5489-5507.
[7] E. Horikawa, Algebraic surfaces of general type with small c_1^2. I, Ann. of Math. (2) 104 (1976), pp. 357-387.
[8] E. Horikawa, Algebraic surfaces of general type with small c_1^2. II, Invent. Math. 37 (1976), pp. 121-155.
[9] E. Horikawa, Algebraic surfaces of general type with small c_1^2. III, Invent. Math. 47 (1978), pp. 209-248.
[10] E. Horikawa, Algebraic surfaces of general type with small c_1^2. IV, Invent. Math. 50 (1978/79), pp. 103-128.
[11] M. Mendes Lopes and R. Pardini, The geography of irregular surfaces, in Current developments in algebraic geometry, Math. Sci. Res. Inst. Publ., vol. 59, Cambridge Univ. Press, Cambridge, 2012, pp. 349-378.
[12] R. Pardini, Abelian covers of algebraic varieties, J. Reine Angew. Math. 417 (1991), pp. 191-213.
[13] R. Pardini, Canonical images of surfaces, J. Reine Angew. Math. 417 (1991), pp. 215-219.
[14] U. Persson, Double coverings and surfaces of general type, in Algebraic geometry (Proc. Sympos., Univ. Tromsø, Tromsø, 1977), Lecture Notes in Math., vol. 687, Springer, Berlin, 1978, pp. 168-195.
[15] C. Rito, New canonical triple covers of surfaces, Proc. Amer. Math. Soc. 143 (2015), pp. 4647-4653.
[16] C. Rito, A surface with canonical map of degree 24, Internat. J. Math. 28 (2017), 1750041, 10 pp.
[17] C. Rito, A surface with q = 2 and canonical map of degree 16, Michigan Math. J. 66 (2017), pp. 99-105.
[18] S. L. Tan, Surfaces whose canonical maps are of odd degrees, Math. Ann. 292 (1992), pp. 13-29.
[19] G. Xiao, Algebraic surfaces with high canonical degree, Math. Ann. 274 (1986), pp. 473-483.
[20] G. Xiao, Irregularity of surfaces with a linear pencil, Duke Math. J. 55 (1987), pp. 597-602.
Contextually Guided Convolutional Neural Networks for Learning Most Transferable Representations

Olcay Kursun (corresponding author), Department of Computer Science, University of Central Arkansas, 201 Donaghey Ave, Conway, AR 72035, USA ([email protected])
Semih Dinc, Department of Computer Science, Auburn University at Montgomery, Montgomery, AL 36117, USA ([email protected])
Oleg V. Favorov, Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA ([email protected])

arXiv:2103.01566; DOI: 10.1109/ism55400.2022.00047

Abstract. Deep Convolutional Neural Networks (CNNs), trained extensively on very large labeled datasets, learn to recognize inferentially powerful features in their input patterns and to represent efficiently their objective content. Such objectivity of their internal representations enables deep CNNs to readily transfer and successfully apply these representations to new classification tasks. Deep CNNs develop their internal representations through a challenging process of error backpropagation-based supervised training. In contrast, deep neural networks of the cerebral cortex develop their even more powerful internal representations in an unsupervised process, apparently guided at a local level by contextual information. Implementing such local contextual guidance principles in a single-layer CNN architecture, we propose an efficient algorithm for developing broad-purpose representations (i.e., representations transferable to new tasks without additional training) in shallow CNNs trained on limited-size datasets. A contextually guided CNN (CG-CNN) is trained on groups of neighboring image patches picked at random image locations in the dataset. Such neighboring patches are likely to have a common context and therefore are treated for the purposes of training as belonging to the same class. Across multiple iterations of such training on different context-sharing groups of image patches, CNN features that are optimized in one iteration are then transferred to the next iteration for further optimization, etc. In this process, CNN features acquire higher pluripotency, or inferential utility for any arbitrary classification task, which we quantify as a transfer utility. In our application to natural images, we find that CG-CNN features show the same, if not higher, transfer utility and classification accuracy as comparable transferable features in the first CNN layer of the well-known deep networks AlexNet, ResNet, and GoogLeNet. In general, the CG-CNN approach to development of pluripotent/transferable features can be applied to any type of data, besides imaging, that exhibits significant contextual regularities. Furthermore, rather than being trained on raw data, a CG-CNN can be trained on the outputs of another CG-CNN with already developed pluripotent features, thus using those features as building blocks for forming more descriptive higher-order features. Multi-layered CG-CNNs, comparable to current deep networks, can be built through such consecutive training of each layer.
24 Mar 2021

Keywords: Deep Learning, Contextual Guidance, Unsupervised Learning, Transfer Learning, Feature Extraction, Pluripotency

1 Introduction

The deep learning approach has led to great excitement and success in AI in recent years LeCun et al. (2015); Krizhevsky et al. (2012); Zhao et al. (2019); Marblestone et al. (2016); Goodfellow et al. (2016); Ravi et al. (2017). With the advances in computing power, the availability of manually labeled large datasets, and a number of incremental technical improvements, deep learning has become an important tool for machine learning involving big data Zhao et al. (2019); Marblestone et al. (2016); Goodfellow et al. (2016); Gao et al. (2020); Poggio (2016).
Deep Convolutional Neural Networks (CNNs), organized in series of layers of computational units, use a local-to-global pyramidal architecture to extract progressively more sophisticated features in the higher layers based on the features extracted in the lower ones Zhao et al. (2019); Goodfellow et al. (2016). Such incrementally built-up features underlie the remarkable performance capabilities of deep CNNs. When deep CNNs are trained on gigantic datasets to classify millions of images into thousands of classes, the features extracted by the intermediate hidden layers, as opposed to either the raw input variables or the task-specific complex features of the highest layers, come to represent efficiently the objective content of the images Gao et al. (2020); Caruana (1995); Bengio (2012). Such objectively significant and thus inferentially powerful features can be used not only in the classification task for which they were developed, but in other similar classification tasks as well. In fact, having such features can reduce the complexity of learning new pattern recognition tasks Phillips et al. (1995); Clark & Thornton (1997); Favorov & Ryder (2004).

Indeed, taking advantage of this in the process known as transfer learning Gao et al. (2020); Thrun & Pratt (2012); Pan et al. (2010); Yosinski et al. (2014), such broad-purpose features are used to preprocess the raw input data and boost the efficiency and accuracy of special-purpose machine learning classifiers trained on smaller datasets Zhao et al. (2019); Goodfellow et al. (2016); Yosinski et al. (2014); Bengio (2012). Transfer learning is accomplished by first training a "base" broad-purpose network on a big-data task and then transferring the learned features/weights to a special-purpose "target" network trained on new classes of a generally smaller "target" dataset Yosinski et al. (2014).
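The base-then-target transfer scheme just described can be sketched in a few lines of NumPy. Everything in the sketch is illustrative rather than the paper's actual setup: W_base stands in for transferred base-network weights (here simply random, not genuinely pretrained), and the "target task" is a toy two-blob dataset; the essential point it demonstrates is that only the new softmax output layer is trained while the transferred feature weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "transferred" feature layer. In a real transfer-learning
# setting these weights would come from a base network trained on a large
# dataset; here they are random and, crucially, FROZEN (never updated below).
W_base = rng.normal(size=(2, 8))

def features(X):
    return np.maximum(X @ W_base, 0.0)          # frozen linear + ReLU features

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)        # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Toy "target" task: two well-separated Gaussian blobs, 50 samples each.
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
               rng.normal(+2.0, 1.0, size=(50, 2))])
y = np.repeat([0, 1], 50)

# Train ONLY the new softmax output layer (gradient descent on
# cross-entropy); W_base acts purely as a fixed preprocessing stage.
F = features(X)
F = F / (F.std() + 1e-9)                        # simple scaling for one fixed learning rate
W_out = np.zeros((8, 2))
for _ in range(200):
    P = softmax(F @ W_out)
    grad = F.T @ (P - np.eye(2)[y]) / len(y)    # cross-entropy gradient w.r.t. W_out
    W_out -= 0.5 * grad

accuracy = np.mean((F @ W_out).argmax(axis=1) == y)
```

Because the blobs are linearly separable, the frozen random features preserve enough information for the new output layer alone to classify the target data accurately, which is the essence of "frozen transfer."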
Learning generalizable representations is improved with data augmentation, which incorporates variations of the training-set images using predefined transformations Shorten & Khoshgoftaar (2019). Feature invariances, which have long been known to be important for regularization Becker & Hinton (1992); Hawkins & Blakeslee (2004), can also be promoted by such means as mutual information maximization Becker & Hinton (1992); Hjelm et al. (2019); Favorov & Ryder (2004), by penalizing the derivative of the output with respect to the magnitude of the transformations to incorporate a priori information about the invariances Simard et al. (1992), or by creating auxiliary tasks for unlabeled data Ghaderi & Athitsos (2016); Dosovitskiy et al. (2014); Grandvalet & Bengio (2004); Ahmed et al. (2008).

Although supervised deep CNNs are good at extracting pluripotent, inferentially powerful transferable features, they require big labeled datasets with detailed external training supervision. Also, the backpropagation of the error all the way down to the early layers can be problematic as the error signal weakens (the vanishing-gradient phenomenon Arjovsky & Bottou (2017)). To avoid these difficulties, in this paper we describe a self-supervised approach for learning pluripotent transferable features in a single CNN layer without reliance on feedback from higher layers and without a need for big labeled datasets. We demonstrate the use of this approach on two examples of a single CNN layer trained first on natural RGB images and then on hyperspectral images. Of course, there is a limit to the sophistication of features that can be developed on raw input patterns by a single CNN layer. However, more complex and descriptive pluripotent features can be built by stacking multiple CNN layers, each layer developed in its turn by using our proposed approach on the outputs of the preceding layer(s).
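The predefined, label-preserving transformations mentioned above can be as simple as flips and crops. The following minimal NumPy sketch (an illustration added here, not the paper's pipeline; the function name and the 24x24 crop size are arbitrary choices) shows two such transformations:

```python
import numpy as np

def augment(image, rng):
    """Toy augmentation: random horizontal flip, then a random 24x24 crop.

    Applying such label-preserving transformations enlarges the effective
    training set and nudges learned features toward the matching invariances.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1]                       # horizontal flip
    r = rng.integers(0, image.shape[0] - 24 + 1)     # top-left corner of the crop
    c = rng.integers(0, image.shape[1] - 24 + 1)
    return image[r:r + 24, c:c + 24]

rng = np.random.default_rng(1)
# Eight augmented views of one 32x32 image form a small training batch.
batch = np.stack([augment(np.zeros((32, 32)), rng) for _ in range(8)])
```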
In Section 2, we briefly review deep learning, transfer learning, and neurocomputational antecedents of our unsupervised feature extraction approach. In Section 3, we present the proposed Contextually Guided CNN (CG-CNN) method and a measure of its transfer utility. We present experimental results in Section 4 and conclude the paper in Section 5. An early exploration of CG-CNN has been reported in conference proceedings Kursun & Favorov (2019).

2 Transferable Features at the Intersection of Deep Learning and Neuroscience

2.1 Transfer of Pluripotent Features in Deep CNNs

Deep CNNs apply successive layers of convolution (Conv) operations, each of which is followed by nonlinear functions such as sigmoidal or ReLU activations and/or max-pooling. These successive nonlinear transformations help the network extract gradually more nonlinear and more inferential features. Besides their extraordinary classification accuracy on very large datasets, deep learning approaches have received attention because the features extracted in their first layers have properties similar to those extracted by real neurons in the primary visual cortex (V1). Discovering features with these types of receptive fields is now expected to the degree that obtaining anything else causes suspicion of poorly chosen hyperparameters or a software bug Yosinski et al. (2014).

Pluripotent features developed in deep CNN layers on large datasets can be used in new classification tasks to preprocess the raw input data and boost the accuracy of the machine learning classifier Gao et al. (2020); Thrun & Pratt (2012); Pan et al. (2010). That is, a base network is first trained on a "base" task (typically with a big dataset); then the learned features/weights are transferred to a second network to be utilized for learning to classify a "target" dataset Yosinski et al. (2014). The learning task of the target network can be a new classification problem with different classes.
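As a deliberately minimal illustration of one such convolutional block, the NumPy sketch below chains a single-channel "valid" convolution, a ReLU rectification, and non-overlapping 2x2 max-pooling. It ignores strides, padding, multiple channels, and learned filters, all of which a real CNN layer would have; the image and kernel values are arbitrary toy data.

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation (what deep-learning libraries call convolution)."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def relu(x):
    """Rectification: keep positive responses, zero out the rest."""
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    """Non-overlapping 2x2 max-pooling; odd trailing rows/columns are dropped."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

# One pass of the block over a toy 4x4 "image" with a trivial 2x2 filter.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, 0.0]])   # simply picks each window's top-left pixel
feature_map = maxpool2x2(relu(conv2d_valid(image, kernel)))
```

Here the 4x4 input shrinks to a 3x3 convolution response and then to a single pooled value, mirroring the local-to-global reduction performed by each Conv block in a deep CNN.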
The base network's pluripotent features will be most useful when the target task does not come with a large training dataset. When the target dataset is significantly smaller than the base dataset, transfer learning serves as a powerful tool for learning to generalize without overfitting. The transferred layers/weights can be updated by the second network (starting from the good initial configuration supplied by the base network) to reach more discriminatory features for the target task; alternatively, the transferred features may be kept fixed (the transferred weights can be frozen) and used as a form of preprocessing. The transfer is expected to be most advantageous when the transferred features are pluripotent, in other words, suitable for both the base and target tasks. The target network will have a new output layer for learning the new classification task with the new class labels; this final output layer typically uses softmax to choose the class with the highest posterior probability.

2.2 Pluripotent Feature Extraction in the Cerebral Cortex

Similar to deep CNNs, the cortical areas making up the sensory cortex are organized in a modular and hierarchical architecture Hawkins et al. (2017); Marblestone et al. (2016). Column-shaped modules (referred to as columns) making up a cortical area work in parallel, performing information processing that resembles a convolutional block (convolution, rectification, and pooling) of a deep CNN. Each column of a higher-level cortical area builds its more complex features using as input the features of a local set of columns in the lower-level cortical area. Thus, as we move into higher areas, these features become increasingly more global and nonlinear, and thus more descriptive Clark & Thornton (1997); Favorov & Ryder (2004); Favorov & Kursun (2011); Grill-Spector & Malach (2004); Hawkins et al. (2017). Unlike deep CNNs, cortical areas do not rely on error backpropagation for learning what features should be extracted by their neurons.
Instead, cortical areas rely on some local guiding information in optimizing their feature selection. While local, such guiding information nevertheless promotes feature selection that enables insightful perception and successful behavior. The prevailing consensus in theoretical neuroscience is that such local guidance comes from the spatial and temporal context in which selected features occur Kursun & Favorov (2019); Becker & Hinton (1992); Favorov & Ryder (2004); Körding & König (2000); Phillips & Singer (1997); Hawkins et al. (2017). Contextually selected features turn out to be behaviorally useful because they are chosen for being predictably related to other such features extracted from non-overlapping sensory inputs, which means that they capture the orderly causal dependencies in the outside-world origins of these features Kay & Phillips (2011); Phillips et al. (1995); Clark & Thornton (1997); Hawkins et al. (2017); Favorov & Ryder (2004); Becker & Hinton (1992); Kursun & Favorov (2019).

3 Contextually Guided Convolutional Neural Network (CG-CNN)

3.1 Basic Design

In this paper we apply the cortical context-guided strategy of developing pluripotent features in individual cortical areas to individual CNN layers. To explain our approach, suppose we want to develop pluripotent features in a particular CNN layer (performing convolution + ReLU + pooling) on a given dataset of images. We set up a three-layer training system (Fig. 1-A) as:

1. The Input layer, which might correspond to a 2-dimensional field of raw pixels (i.e., a 3D tensor with two axes specifying row and column and one axis for the color channels) or the 3D tensor that was output by the preceding CNN layer with already developed features;

2. The CNN layer ("Feature Generator"), whose features we aim to develop;

3.
The Classifier layer, a set of linear units fully connected with the output units of the CNN layer, each unit (with softmax activation) representing one of the training classes in the input patterns.

As in standard CNNs, during this network's training the classification errors will be backpropagated and used to adjust connection weights in the Classifier layer and the CNN layer. While eventually (after its training) this CNN layer might be used as a part of a deep CNN to discriminate some particular application-specific classes of input patterns, during the training period the class labels will have to be assigned to the training input patterns internally; i.e., without any outside supervision. Adopting the cortical contextual guidance strategy, we can create a training class by picking at random a set of neighboring window patches in one of the database images (Fig. 1-B). Being close together or even overlapping, such patches will have a high chance of showing the same object, and those that do will share something in common (i.e., the same context). Other randomly chosen locations in the dataset images (giving us other training classes) will likely come from other objects, and at those locations the neighboring window patches will have some other contexts to share. We can thus create a training dataset X = {x^t | 1 ≤ t ≤ CN} of C × N class-labeled input patterns by treating C sets of N neighboring window patches, each set drawn from a different randomly picked image location, as belonging to C training classes, uniquely labeled from 1 to C. These inputs are small a × a × b tensors: a × a patches (feature maps) with b channels. We will refer to each such class of neighboring image patches as a "contextual group."

(Fig. 1-B caption: Class-defining contextual groups of image patches. Each image patch, used as input in CG-CNN training, is shown as a small square box superimposed on one of the database images. Neighboring patches constitute a contextual group and during network training are treated as belonging to the same class. Locations of contextual groups are picked at random. Six such groups, or classes, are shown on this photo with five patches in each (C = 6 and N = 5).)

Upon presentation of a particular input pattern x^t from the training dataset X, the response of the CNN layer is computed as:

$$y_j^t = \mathrm{MaxPool}\left(\left[W_j * x^t\right]_+\right) \tag{1}$$

where y^t_j is the response of output unit j in the CNN layer with d units (i.e., y_j is CNN feature j, where 1 ≤ j ≤ d), W_j is the input connection weights of that unit (each unit has w × w × b input connections), the symbol * denotes the convolution operation, and [·]_+ = max{·, 0} denotes the ReLU operation. Next, the response of the Classifier layer is computed by the softmax operation as:

$$z_l^t = \frac{\exp(V_l \cdot y^t)}{\sum_{c=1}^{C} \exp(V_c \cdot y^t)} \tag{2}$$

where z^t_l is the response of output unit l in the Classifier layer (expressing the probability of this input pattern x^t belonging to class l), y^t = [y^t_j]_{j=1}^{d} is the d-dimensional feature vector computed as the output of the CNN layer, and V_l is the vector of connection weights of that unit from all the d units of the CNN layer. During training, connection weights W and V are adjusted by error backpropagation so as to maximize the log-likelihood (whose negative is the loss function):

$$\mathcal{L}(V, W \mid X) = \sum_{t=1}^{CN} \sum_{c=1}^{C} r_c^t \log z_c^t \tag{3}$$

where r^t_c ∈ {0, 1} indicates whether input pattern x^t belongs to class c. The nomenclature given below lists (in alphabetical order) and briefly describes several symbols that we use in our description of the CG-CNN algorithm.
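Equations 1-3 can be traced numerically. Below is a minimal NumPy sketch with made-up dimensions, a single-channel patch (b = 1), and a global max-pool standing in for the paper's windowed MaxPool:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(x, k):
    """Plain 'valid' cross-correlation of a 2-D patch with a 2-D kernel."""
    kw = k.shape[0]
    n = x.shape[0] - kw + 1
    return np.array([[np.sum(x[i:i + kw, j:j + kw] * k) for j in range(n)]
                     for i in range(n)])

def cnn_layer(x, W):
    """Eq. 1: y_j = MaxPool([W_j * x]_+); MaxPool is global here for brevity."""
    return np.array([np.max(np.maximum(conv2d_valid(x, Wj), 0.0)) for Wj in W])

def classifier_layer(y, V):
    """Eq. 2: z_l = exp(V_l . y) / sum_c exp(V_c . y)."""
    v = V @ y
    e = np.exp(v - v.max())
    return e / e.sum()

a, w, d, C = 8, 3, 4, 5                 # patch width, kernel size, features, classes
x = rng.normal(size=(a, a))             # single-channel input patch
W = rng.normal(size=(d, w, w))
V = rng.normal(size=(C, d))

y = cnn_layer(x, W)
z = classifier_layer(y, V)
r = np.eye(C)[2]                        # x assumed to belong to contextual group 2
log_lik = float(np.sum(r * np.log(z)))  # the t-th term of Eq. 3
```

The ReLU guarantees y ≥ 0, the softmax outputs sum to one, and each log-likelihood term is non-positive, as expected from the definitions.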
List of Symbols

a: the width of the input image patches
A_CG(C): transferable classification accuracy
b: the number of channels (e.g., red-green-blue) of the input image patches
C: the number of contextual groups
d: the number of feature maps (the features extracted from the a × a × b input tensor and fed to the Classifier)
g: the extent of the spatial translation within contextual groups
N: the number of randomly chosen image patches in each contextual group
r^t: one-hot vector for the class label of x^t, where r^t_c = 1 if and only if x^t belongs to contextual group c
s: the stride of the convolutions
U: Transfer Utility, which is used to estimate the pluripotency of the learned convolutional features
V_l: softmax weights of Layer-5 for contextual group l ∈ {1, 2, ..., C} for the current task of discriminating C groups (each softmax unit has d input connections coming from y)
w: the kernel size of the convolutions
W_j: convolutional weights of unit j in Layer-2 (each unit/feature has w × w × b input connections)
x^t: image patch number t used as input to CG-CNN
X^E: the training dataset formed by x^t image patches (1 ≤ t ≤ C × N) from C contextual groups with N patches in each group; this E-dataset is used for optimizing the V weights of Layer-5 in the E-step of the EM iterations
X^M: the dataset formed similarly to X^E, which is used after the E-step for estimating the goodness-of-fit, A_CG, of the current Layer-2 weights W; it is also used for updating the Layer-2 weights (W_new) in the M-step
y^t: the d-dimensional feature vector computed as the output of the CNN layer
y^t_j: the response of unit/feature j in the CNN layer, with j ∈ {1, 2, ..., d}
z^t_l: the response of output unit l in the Classifier layer to the input pattern x^t (an estimate of the probability that x^t belongs to contextual group l)

3.2 Iterative Training Algorithm

If we want to develop pluripotent features in the CNN layer that will capture underlying contextual regularities in the domain images, it might be necessary to create tens of thousands of contextual groups for the network's training Dosovitskiy et al. (2014); Ghaderi & Athitsos (2016). We can avoid the complexity of training the system simultaneously on so many classes by using an alternative approach, in which training is performed over multiple iterations, with each iteration using a different small sample of contextual groups as training classes Finn et al. (2017). That is, in each iteration a new small number (e.g., C = 100) of contextual groups is drawn from the database and the system is trained to discriminate them. Once this training is finished, a new small number of contextual groups is drawn and training continues in the next iteration on these new classes without resetting the already developed CNN connection weights. For such iterative training of the CG-CNN system, we use an expectation-maximization (EM) algorithm Do & Batzoglou (2008); Alpaydin (2014). The EM iterations alternate between performing an expectation (E) step and a maximization (M) step. At each EM iteration, we create a new training dataset X = {x^t | 1 ≤ t ≤ CN} of C × N self-class-labeled input patterns and randomly partition it into two subsets: one subset X^E to be used in the E-step, the other subset X^M to be used in the M-step. Next, we perform the E-step, which involves keeping the W connection weights from the previous EM iteration (W_old), while training the V connections of the Classifier layer on the newly created X^E subset so as to maximize its log-likelihood L (Eq. 3):

$$\text{E-step:}\quad V_{\text{new}} = \operatorname*{arg\,max}_{V} \mathcal{L}(V, W_{\text{old}} \mid X^{E}) \tag{4}$$

Next, we perform the M-step, which involves holding the newly optimized V connection weights fixed, while updating the W connections of the CNN layer on the X^M subset so as to maximize the log-likelihood L one more time:

$$\text{M-step:}\quad W_{\text{new}} = \operatorname*{arg\,max}_{W} \mathcal{L}(V_{\text{new}}, W \mid X^{M}) \tag{5}$$

Overall, EM training iterations help CG-CNN take advantage of transfer learning and make it possible to learn pluripotent features using a small number of classes in the Classifier layer (each softmax unit in this layer represents one class). By continuing to update the CNN-layer weights W, while the contextual groups to be discriminated by the Classifier keep changing with every EM iteration, CG-CNN spreads the potentially high number of contextual groups (classes) needed for learning image-domain contextual regularities over multiple iterations Finn et al. (2017). The proposed EM algorithm for training CG-CNN achieves an efficient approach to learning the regularities that define contextual classes, which otherwise would theoretically require a C value on the order of tens of thousands Dosovitskiy et al. (2014). To monitor the progress of CG-CNN training across EM iterations, so as to be able to decide when to stop it, we can at each EM iteration compute the network's current classification accuracy. Since we are interested in transferability of the CNN-layer features, such accuracy evaluation should be performed after the E-step, when Classifier-layer connections V have been optimized on the current iteration's task (using the X^E subset of input patterns), but before optimization of CNN-layer connections W (which were transferred from the previous EM iteration). Furthermore, classification accuracy should be tested on the new, X^M, subset of input patterns. Such classification accuracy can be expressed as the fraction of correctly classified test (X^M) input patterns.
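The E/M alternation of Eqs. 4-5 can be caricatured in a few lines of NumPy. This is a sketch under strong simplifying assumptions, not the paper's code: a linear-ReLU feature layer replaces the convolutional block, and "contextual groups" are noisy copies of random prototypes (all dimensions invented).

```python
import numpy as np

rng = np.random.default_rng(2)
C, N, dim, d = 4, 10, 12, 6        # groups, patches per group, input dim, features

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def grad_pass(X, y, W, V, update, epochs=20, lr=0.05):
    """One training phase: plain gradient ascent on the log-likelihood,
    updating only V (E-step) or only W (M-step)."""
    for _ in range(epochs):
        for x, c in zip(X, y):
            f = np.maximum(W @ x, 0.0)          # stand-in for Conv+ReLU+Pool
            z = softmax(V @ f)
            err = np.eye(C)[c] - z              # r - z
            if update == "V":
                V = V + lr * np.outer(err, f)
            else:
                W = W + lr * np.outer((V.T @ err) * (f > 0), x)
    return W, V

W = rng.normal(scale=0.3, size=(d, dim))        # carried across EM iterations
for _ in range(5):                              # EM iterations
    protos = rng.normal(size=(C, dim))          # new contextual groups each time
    X = np.repeat(protos, N, axis=0) + 0.1 * rng.normal(size=(C * N, dim))
    y = np.repeat(np.arange(C), N)
    perm = rng.permutation(C * N)
    X, y = X[perm], y[perm]
    half = C * N // 2
    V = np.zeros((C, d))                        # Classifier reset each iteration
    W, V = grad_pass(X[:half], y[:half], W, V, "V")   # E-step (Eq. 4)
    W, V = grad_pass(X[half:], y[half:], W, V, "W")   # M-step (Eq. 5)

acc = np.mean([np.argmax(V @ np.maximum(W @ x, 0.0)) == c for x, c in zip(X, y)])
```

Note how W survives across iterations while V is re-fit to each iteration's fresh classes, mirroring the transfer-then-continue structure of the EM loop.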
We will refer to such classification accuracy of CG-CNNs with task-specific Classifier weights V but transferred CNN feature weights W as "transferable classification accuracy" and use it in Section 4 as an indicator of the usefulness of context-guided CNN features on a new task:

$$A = \frac{1}{|X^{M}|} \sum_{t=1}^{|X^{M}|} \left[\, \operatorname*{arg\,max}_{c} r_c^t = \operatorname*{arg\,max}_{c} z_c^t \,\right] \tag{6}$$

where the argmax operators return the indices of the expected and predicted classes of x^t, respectively, and [i = j] is the Kronecker delta function (expressed using the Iverson bracket notation) used to compare the expected and predicted classes. As will be detailed further in Section 4, no particular CNN architecture is required for applying the CG-CNN training procedures. In Algorithm 1, we formulate the CG-CNN algorithm using a generic architecture (one that somewhat resembles AlexNet; GoogLeNet, for example, does not use ReLU in its first block but uses an additional BatchNorm layer). Regardless of the particulars of the chosen architecture, CG-CNN accepts a small a × a × b tensor as input. Although CG-CNN can be applied repeatedly to extract higher-level features on top of the features extracted in the previous layer, as mentioned at the beginning of this section, in this paper, focusing on CG-CNN's first application to image pixels directly, b simply denotes the number of color bands, i.e., b = 3 for RGB images, and a denotes the width of the image patches that form the contextual groups. The kernel size of the convolutions and the stride are denoted by w and s, respectively. In CNNs, pooling operations, e.g., MaxPool, are used to subsample the feature maps (which leads to the pyramidal architecture). A 75% reduction is typical, achieved via pooling with a kernel size of 3 and stride of 2, which gives us a = w + 2s. For example, if w = 11 and s = 4 for the convolutions (as in AlexNet), then a = 19.
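Eq. 6 and the a = w + 2s window-size relation are easy to check directly; a small NumPy sketch with toy numbers:

```python
import numpy as np

def transferable_accuracy(R, Z):
    """Eq. 6: fraction of test patterns whose predicted class (argmax of z)
    matches the expected class (argmax of the one-hot r)."""
    return float(np.mean(np.argmax(R, axis=1) == np.argmax(Z, axis=1)))

# toy check with 4 test patterns and 3 classes (made-up numbers)
R = np.eye(3)[[0, 1, 2, 1]]
Z = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4],
              [0.6, 0.3, 0.1]])
A = transferable_accuracy(R, Z)        # 3 of 4 predictions correct

def window_width(w, s):
    """Viewing window of a MaxPool unit with kernel 3 and stride 2: a = w + 2s."""
    return w + 2 * s
```

With w = 11, s = 4 this reproduces the AlexNet-compatible a = 19, and with w = 7, s = 2 the ResNet/GoogLeNet-compatible a = 11.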
That is, CG-CNN's Feature Generator (the CNN layer) learns to extract d features (e.g., d = 64) that most contextually and pluripotently represent any given a × a image patch. Note that at this level CG-CNN is not trying to solve an actual classification problem and is only learning a powerful local representation; only a pyramidal combination of these powerful local features can be used to describe an image big enough to capture a real-world object class.

Algorithm 1: The proposed CG-CNN method for learning broad-purpose transferable features.

3.3 Pluripotency Estimation of CNN Features

EM training of CG-CNN aims to promote pluripotency of the features learned by the CNN layer; i.e., their applicability to new classification tasks. Ideally, pluripotency of a given set of learned features would be measured by applying them to a comprehensive repertoire of potential classification tasks and comparing their performance with that of: (1) naïve CNN-Classifiers, whose CNN-layer connection weights are randomly assigned (once randomly assigned for a given task, these W weights are never updated, as in extreme learning machines Huang et al. (2006); Glorot & Bengio (2010) present a state-of-the-art weight-initialization method); and (2) task-specific CNN-Classifiers, whose CNN-layer connections are specifically trained on each tested task. The more pluripotent the CG features, the greater their classification performance compared to that of random features, and the closer they come to the performance of task-specific features. Such a comprehensive comparison, however, is not practically possible. Instead, we can resort to estimating pluripotency on a more limited assortment of tasks, such as, for example, discriminating among newly created contextual groups (as was done in the EM training iterations). Regardless of the C-parameter used in the CG-CNN training tasks, these test tasks will vary in their selection of contextual groups as well as in the number (C) of groups.
The expected outcome is graphically illustrated in Figure 2, plotting expected classification accuracy of CNN-Classifiers with random, task-specific, and CG features as a function of the number of test classes. When the testing tasks have only one class, all three classifiers will have perfect accuracy. With an increasing number of classes in a task, the classification accuracy of random-feature classifiers will decline most rapidly, while that of task-specific classifiers will decline most slowly, although both will eventually converge to zero. CG-feature classifiers will be in between. According to this plot, the benefit of using CG features is reflected in the area gained by the CG-feature classifiers in the plot over the baseline established by random-feature classifiers. Normalizing this area by the area gained by task-specific classifiers over the baseline, we get a measure of "Transfer Utility" of CG features:

$$U = \frac{\sum_{C=1}^{\infty} E\left[A_{CG}(C)\right] - \sum_{C=1}^{\infty} E\left[A_{random}(C)\right]}{\sum_{C=1}^{\infty} E\left[A_{specific}(C)\right] - \sum_{C=1}^{\infty} E\left[A_{random}(C)\right]} \tag{7}$$

where A_random(C), A_specific(C), and A_CG(C) are classification accuracies of CNN-Classifiers with random, task-specific, and CG features, respectively, on tasks involving discrimination of C contextual groups (Eq. 6).

Fig. 2. Transfer Utility of CG-CNN features is based on the area under the curve of the test accuracy A_CG as a function of the number of test classes C. Accuracies obtained using the random and task-specific CNN features, A_random(C) and A_specific(C), are also shown, as they are used in Eq. 7 to quantify the Transfer Utility, U. The expectation of the test accuracies is computed over a number of tasks generated for each value of C.
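Eq. 7 reduces to differences of sums over the accuracy-versus-C curves. A NumPy sketch with hypothetical curves follows; the infinite sums are truncated at C = 5 purely for illustration, and the accuracy values are invented:

```python
import numpy as np

def transfer_utility(A_cg, A_rand, A_spec):
    """Eq. 7: area gained by CG features over the random-feature baseline,
    normalized by the area gained by task-specific features over the same
    baseline. Inputs are expected accuracies indexed by the number of
    test classes C."""
    A_cg, A_rand, A_spec = map(np.asarray, (A_cg, A_rand, A_spec))
    return float((A_cg.sum() - A_rand.sum()) / (A_spec.sum() - A_rand.sum()))

# hypothetical accuracy curves for C = 1..5 (illustration only)
A_rand = [1.0, 0.6, 0.4, 0.3, 0.2]
A_spec = [1.0, 0.9, 0.8, 0.7, 0.6]
A_cg   = [1.0, 0.8, 0.7, 0.6, 0.5]
U = transfer_utility(A_cg, A_rand, A_spec)
```

By construction U = 0 when CG features are no better than random ones and U = 1 when they match task-specific features.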
3.4 Sources of Contextual Guidance

While in our presentation of CG-CNN so far we have explained the use of contextual guidance on an example of spatial proximity, using neighboring image patches to define training/testing classes, any other kind of contextual relations can also be used as a source of guidance in developing CNN-layer features. Temporal context is one such rich source. Frequency-domain context is another source, most obviously in speech recognition, while in Section 4 we exploit it in a form of hyperspectral imaging. More generally, any natural phenomenon in which a core of indigenous causally influential factors is reflected redundantly in multivariable raw sensor data will have contextual regularities, which might be possible to use to guide feature learning in the CNN layer. With regard to spatial-proximity-based contextual grouping, it is different from the data augmentation used in deep learning Dosovitskiy et al. (2014). Data augmentation does shift input image patches a few pixels in each direction to create more examples of a known object category (such as a car or an animal); however, for a contextual group, we do not have such object categories to guide the placement of image patches, and we take patches over a much larger range of pixels from the center of the contextual group. Training CG-CNN using short shifts (similar to data augmentation) does not lead to tuning to V1-like features, because other/suboptimal features can also easily cluster heavily overlapping image patches. Another source of contextual information that we utilize in this paper for extracting features from color images is based on multiple pixel-color representations (specifically, RGB and grayscale). Instead of using a feature-engineering approach that learns to extract color features and grayscale features separately, as in Krizhevsky et al.
(2012), we use a data-engineering approach by extending the contextual group formation to the color and gray versions of the image windows: for every contextual group, some of the RGB image patches are converted to grayscale. This helps our network develop both gray and color features as needed for maximal transfer utility: if the training is performed only on gray images, even though the neurons might have access to separate RGB color channels, whose weights are randomly initialized and whose feature-weight visualizations initially look colorful, they all gradually move towards gray features. Using no grayscale images leads to all-color features automatically. For our experiments in Section 4, the probability for the random grayscale transformation was set to 0.5. That is, we converted 50% of the image patches in each contextual group from color to gray, which led to the emergence of gray-level features in addition to color ones.

4 Experimental Results

4.1 Demonstration on Natural Images

To demonstrate the feasibility of CG-CNN developing pluripotent features using a limited number of images without any class labels, we used images from the Caltech-101 dataset Fei-Fei et al. (2007). We used images from a single (face) class to emphasize that the proposed algorithm does not use any external supervision for tuning to its discriminatory features. Thus, our dataset had 435 color images, with sizes varying around 400 × 600 pixels (see Fig. 3 for two representative examples). We used half of these images to train CG-CNN and develop its features and the other half to evaluate the pluripotency of these features.

Fig. 3. Two exemplary images from the Caltech-101 dataset used for snipping small image patches for the CG-CNN training. Only images that belong to the face class were used, in order to show that CG-CNN can develop its features on a small dataset without relying on class labels or external supervision.
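The 50% color-to-gray conversion within a contextual group can be sketched as below. This is a NumPy sketch using the standard 0.299/0.587/0.114 luminance weights, which are our assumption; the paper does not specify the conversion formula:

```python
import numpy as np

rng = np.random.default_rng(3)

def maybe_grayscale(patch, p=0.5):
    """With probability p, replace an RGB patch (H x W x 3) by its grayscale
    version replicated across the three channels, so gray and color patches
    share the same tensor shape within a contextual group."""
    if rng.random() < p:
        gray = patch @ np.array([0.299, 0.587, 0.114])   # assumed luminance weights
        return np.repeat(gray[..., None], 3, axis=2)
    return patch

group = [rng.random((19, 19, 3)) for _ in range(10)]     # one contextual group
group = [maybe_grayscale(patch) for patch in group]
n_gray = sum(bool(np.allclose(p[..., 0], p[..., 1])) for p in group)
```

Keeping three identical channels for gray patches lets the same convolutional weights process both representations, which is what allows gray-level features to emerge alongside color ones.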
Since our CG-CNN algorithm can be used with any CNN architecture, we applied it to AlexNet, ResNet, and GoogLeNet architectures. In its first convolutional block, AlexNet performs Conv+ReLU+MaxPool. This first block has d = 64 features, with a kernel size of 11 × 11 (i.e., w = 11) and a stride s = 4 pixels. ResNet performs Conv+BatchNorm+ReLU+MaxPool in its first block, with d = 64 features, kernel size of 7 × 7, and stride s = 2 pixels. GoogLeNet in its first block also has d = 64 features, 7 × 7 kernels, and stride s = 2. However, GoogLeNet performs Conv+BatchNorm+MaxPool. All three architectures use MaxPool with a kernel size of 3 × 3 and a stride s = 2. Therefore, the viewing window of a MaxPool unit is 19 pixels for AlexNet and 11 pixels for ResNet and GoogLeNet. (Note that although we could enrich these architectures by adding drop-out and/or local response normalization to adjust lateral inhibition, we chose not to do such optimizations in order to show that pluripotent features can develop solely under contextual guidance.) We used a moderate number of contextual groups (C = 100) for the CG-CNN training. For selecting image patches for each contextual group, the parameter g -used in Algorithm 1 to slide the seed window for spatial contextual guidance -was set to g = 25 pixels. Thus, each contextual group had (2 × 25 + 1) 2 = 2601 distinct patch positions. We also used color jitter and color-to-gray conversion to enrich contextual groups (see Section 3.4). We used PyTorch open source machine learning framework Paszke et al. (2019) to implement CG-CNN. Experiments were performed on a workstation with Intel i7-9700K 3.6GHz CPU with 32 GB RAM and NVIDIA GeForce RTX 2080 GPU with 8GB GDDR6 memory. In each EM training iteration, we used 10 epochs for the E-step and 10 epochs for the M-step. On the workstation used for the experiments, for C = 100, each EM iteration takes about two minutes. CG-CNN takes around 100 minutes to converge in about 50 iterations. 
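The per-group patch sampling just described, with the seed window slid up to g = 25 pixels in each direction for (2 × 25 + 1)^2 = 2601 distinct positions, can be sketched as follows. This is a NumPy sketch on a synthetic image; the helper name is ours:

```python
import numpy as np

rng = np.random.default_rng(4)

def contextual_group(image, a, g, N):
    """One training class: N patches of width a whose top-left corners lie
    within +/- g pixels of a randomly chosen seed position."""
    H, W = image.shape[:2]
    r0 = int(rng.integers(g, H - a - g))      # seed kept away from the borders
    c0 = int(rng.integers(g, W - a - g))
    patches = []
    for _ in range(N):
        dr, dc = rng.integers(-g, g + 1, size=2)
        patches.append(image[r0 + dr: r0 + dr + a, c0 + dc: c0 + dc + a])
    return patches

image = rng.random((400, 600, 3))             # synthetic stand-in for a dataset image
group = contextual_group(image, a=19, g=25, N=5)
n_positions = (2 * 25 + 1) ** 2               # 2601 distinct patch positions per group
```

Drawing C such groups per EM iteration yields the C × N self-labeled training patterns described in Section 3.1.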
Both the SGD (stochastic gradient descent) and Adam Kingma & Ba (2017) optimizers can reduce training time. Adam helps cut down the runtime by reducing the number of epochs to one epoch with minibatch updates. Increasing the number of EM iterations was more helpful than increasing the number of epochs in one iteration. With these improvements, 50 EM iterations took about 10 minutes. Figure 4 shows the time-course of the transferable classification accuracy (Eq. 6) improvements during network training, which rises quickly in the first few EM iterations and then slowly converges to a stable level. Most of the improvements are accomplished in the first 50 iterations. With each EM iteration, the network's features become progressively more defined and more closely resembling visual cortical features (gratings, Gabor-like features, and color blobs) as well as features extracted in the early layers of the deep learning architectures AlexNet, GoogLeNet, and ResNet (see Figs. 5 and 6). To compare the pluripotency of CG-CNN features to the pluripotency of AlexNet, ResNet, and GoogLeNet features, we used the Transfer Utility approach described in Section 3.3 (see Fig. 2 and Eq. 7) and tested classification accuracy of CNN-Classifiers equipped with random, task-specific, and CG features, as well as pretrained AlexNet, GoogLeNet, and ResNet features, on new contextual groups/classes drawn from the test-set images that were not used during the training of CG-CNN. These classification accuracy estimates are plotted in Figure 7 as a function of the number of test classes C in each classification task. As the two plots in Figure 7 show, the curves generated with CG-CNN features lie slightly more than halfway between the curves generated with random and task-specific features, indicating a substantial degree of transfer utility.
Most importantly, CG-CNN curves match or even exceed the curves generated with features taken from deep CNN systems, which are acknowledged, as reviewed in Sections 1 and 2.1, to have desirable levels of transfer utility.

4.2 Demonstration on Texture Image Classification

As an additional test of the pluripotency of CG-CNN features, we applied them to a texture classification task. Texture is a key element of human visual perception, and texture classification is a challenging computer vision task, utilized in applications ranging from image analysis (e.g., biomedical and aerial) to industrial automation, remote sensing, face recognition, etc. Anam & Rushdi (2019). For this test, we used the Brodatz dataset Brodatz (1966) of 13 texture images, in which each image shows a different natural texture and is 512 × 512 pixels in size (Fig. 8). To compare with AlexNet (which has 11 × 11 pixel features, stride s = 4, and therefore a pooled window size of 19 × 19 pixels), we trained classifiers to discriminate textures in 19 × 19 pixel Brodatz image patches. To compare with GoogLeNet and ResNet (which have 7 × 7 pixel features, stride s = 2, and therefore a pooled window size of 11 × 11 pixels), we trained other classifiers to discriminate textures in 11 × 11 pixel Brodatz image patches. For either of these two window sizes, we subdivided each 512 × 512 texture image into 256 32 × 32-pixel subregions and picked 128 training image patches at random positions within 128 of these subregions, and another 128 test image patches at random positions within the remaining 128 subregions. This selection process ensured that none of the training and test image patches overlapped to any degree, while sampling all the image territories. Using the 128 × 13 = 1664 training image patches, we trained CNN-Classifiers equipped with either CG-CNN features (previously developed on Caltech-101 images, as described above in Section 4.1), or AlexNet, GoogLeNet, or ResNet features.
Note that these features were not updated during classifier training; i.e., they were transferred and used "as is" in this texture classification task. For additional benchmarking comparison, to gauge the difficulty of this texture classification task, we also applied some standard machine learning algorithms Fernandez-Delgado et al. (2014): a multilayer perceptron (MLP), linear and RBF-kernel SVMs, a K-nearest-neighbor (K-NN) classifier, and a random forest. These classifiers are straightforward to optimize without requiring many hyperparameters. For their implementation (including optimization/validation of the classifier hyperparameters), we used the scikit-learn Python module for machine learning Pedregosa et al. (2011). For MLP, we used a single hidden layer with the ReLU activation function (we used 64 hidden units in the layer to keep the complexity similar to that of CG-CNN). We used the default value for the regularization parameter (C = 1) for our SVMs and the automatic scaling for setting the RBF radius for RBF-SVM. We used K = 1 neighbor and the Euclidean distance metric for K-NN. For the random forest classifier, we used 100 trees as the number of estimators in the forest. All the classifiers were tested on the image patches from the test set, not used in classifier training. There are a total of 128 × 13 = 1664 test image patches. The accuracies of the classifiers are listed in Table 1. According to this table, all the CNN-classifiers with transferred features had very similar texture classification accuracies, with CG features giving the best performance. All the other classifiers performed much worse, indicating the non-trivial nature of this classification task. These results demonstrate the superiority of using the transfer learning approach, with transferred features taken from CG-CNN or Pool-1 of pretrained deep networks.

Table 1: Texture classification accuracies of 12 classifiers on 13 textures taken from the Brodatz (1966) dataset. Listed are means and standard deviations of the means, computed over 10 test runs.
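The non-overlapping train/test patch sampling described for the Brodatz textures can be sketched as follows. This is a NumPy sketch on a synthetic texture; the helper name and structure are ours:

```python
import numpy as np

rng = np.random.default_rng(5)

def split_texture(image, sub=32, patch=19, n_each=128):
    """Split a 512 x 512 texture into 256 sub x sub subregions; draw one
    training patch at a random position inside each of 128 subregions and
    one test patch inside each of the remaining 128, so that training and
    test patches never overlap while all image territories are sampled."""
    n = image.shape[0] // sub                       # 16 subregions per side
    coords = [(r, c) for r in range(n) for c in range(n)]
    order = rng.permutation(len(coords))

    def draw(indices):
        out = []
        for i in indices:
            r, c = coords[i]
            dr = int(rng.integers(0, sub - patch + 1))   # patch stays inside subregion
            dc = int(rng.integers(0, sub - patch + 1))
            out.append(image[r * sub + dr: r * sub + dr + patch,
                             c * sub + dc: c * sub + dc + patch])
        return out

    return draw(order[:n_each]), draw(order[n_each:])

texture = rng.random((512, 512))                    # synthetic stand-in for a Brodatz image
train_patches, test_patches = split_texture(texture)
```

Because each patch is confined to its own 32 × 32 subregion, patches drawn from training subregions can never overlap patches drawn from test subregions.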
4.3 Demonstration on Hyperspectral Image Classification

Unlike color image processing, which uses a large image window with a few color channels (grayscale or RGB), Hyperspectral Image (HSI) analysis typically aims at classification of a single pixel characterized by a high number of spectral channels (bands). Typically, HSI datasets are small, and application of supervised deep learning to such small datasets can result in overlearning, not yielding pluripotent task-transferable HSI-domain features Sellami et al. (2019). To improve generalization, the supervised classification can benefit from unsupervised extraction of a small number of features that are more complex/informative than the raw data in the spectral channels. In this section, we demonstrate the usefulness of the features extracted by the proposed CG-CNN algorithm on the Indian Pines and Salinas datasets, which are well-known HSI datasets captured by the AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) sensor. The Indian Pines dataset is a 145 × 145 pixel image with 220 spectral channels (bands) in the 400-2500 nm range of wavelengths Grana et al. (2018). The dataset includes 16 classes, some of which are various crops and others are grass and woods. Fig. 9A shows the color view and the ground truth of the image. Table 2 lists the class names and the numbers of examples. The Salinas dataset is a 512 × 217 HSI image with 220 spectral bands Grana et al. (2018). There are 16 classes in the dataset, including vegetables, bare soils, and vineyard fields. Fig. 9B shows the color view and the ground truth of the image, and Table 3 lists its class names and numbers of examples.

Table 2: Indian Pines classes and numbers of labeled examples.
1 Alfalfa 54
2 Corn-notill 1434
3 Corn-mintill 834
4 Corn 234
5 Grass-pasture 497
6 Grass-trees 747
7 Grass-pasture-mowed 26
8 Hay-windrowed 489
9 Oats 20
10 Soybean-notill 968
11 Soybean-mintill 2468
12 Soybean-clean 614
13 Wheat 212
14 Woods 1294
15 Build.-Grass-Trees-Drv. 380
16 Stone-Steel-Towers 95
Total 10366

To evaluate the quality of CG-CNN features on the hyperspectral data, the CG-CNN architecture presented in Algorithm 1 was used with the following parameters: a = 3 pixels, b = 220 bands, d = 30 features, w = 1 pixel (i.e., each convolution uses only the bands of a single pixel), C = 20 contextual groups, and g = 2 pixels for the extent of the spatial contextual guidance. In training of CG-CNN, the class labels of the HSI pixels were not used; instead, local groups of pixels (controlled by the g parameter) were treated as training classes, as described in Section 3.4. CG-CNN learns to represent its input HSI image patch, which is a hypercube of size 3 × 3 × 220, in such a way that the image patch and its neighboring windows/positions (obtained by shifting it g = ±2 pixels in each direction) can be maximally discriminable from other contextual groups centered elsewhere. Note that only a total of (2 × 2 + 1)^2 = 25 image patches are created for each contextual group. (We can also enrich contextual groups by adding band-specific noise or frequency shift, but leave this for future work.) Then, we used the extracted features as inputs to various supervised classifiers. Decision Trees (ID3), Random Forests (RF), Linear Discriminant Analysis (LDA), Linear SVM, RBF SVM, SVM with cubic-polynomial kernel (Cubic-SVM), and K-Nearest Neighbor (K-NN) classifiers were selected due to their popularity and robustness Fernandez-Delgado et al. (2014); Alpaydin (2014). The d = 30 features, learned in the CNN layer of CG-CNN, were fed to these classifiers to use them in the final classification task with 16 target classes (various vegetation, buildings, etc.). Pixel classification accuracies were computed using 10-fold cross validation.
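As a sketch of this evaluation protocol (not the paper's code), the following NumPy-only stand-in runs 10-fold cross validation of a K = 1 nearest-neighbor classifier, one of the classifiers listed above, on toy stand-ins for the d = 30 CG-CNN features:

```python
import numpy as np

rng = np.random.default_rng(6)

def one_nn_predict(Xtr, ytr, Xte):
    """K-NN with K = 1 and the Euclidean distance metric."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    return ytr[np.argmin(d2, axis=1)]

def ten_fold_cv(X, y, k=10):
    """Mean accuracy over k folds, mirroring the 10-fold cross validation."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(np.mean(one_nn_predict(X[tr], y[tr], X[te]) == y[te]))
    return float(np.mean(accs))

# toy stand-in for d = 30 CG-CNN features of labeled pixels (2 synthetic classes)
X = np.vstack([rng.normal(0.0, 1.0, (50, 30)), rng.normal(3.0, 1.0, (50, 30))])
y = np.repeat([0, 1], 50)
acc = ten_fold_cv(X, y)
```

In the paper's experiments the same protocol is run with scikit-learn implementations of the classifiers; the sketch only illustrates the fold structure and the feature-to-classifier hand-off.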
For comparison, we evaluated the use of all of the original raw variables (i.e., 220 bands) as inputs to the classifiers, and we also compared with the first 30 principal components from Principal Component Analysis (PCA) and the best 30 Random Forest (RF) features Breiman (2001). Classification accuracies are reported on the Indian Pines dataset in Table 4 and on the Salinas dataset in Table 5. As shown in these tables, the best performance was achieved by combining CG-CNN features with Random Forest or K-NN classifiers. For the best feature and classifier combination (CG-CNN features + RF Classifier), individual class accuracies (precision and recall) were compared with the accuracies obtained by using the RF classifier on the 220 original/raw features or the first 30 PCA features. Tables 6 and 7 show these comparisons for the Indian Pines and Salinas datasets, respectively. CG-CNN features yielded better results for an overwhelming majority of classes.

5 Conclusions

Deep neural networks trained on millions of images from thousands of classes yield inferentially powerful features in their layers. Similar sets of features emerge in many different deep learning architectures, especially in their early layers. These pluripotent features can be transferred to other image recognition tasks to achieve better generalization. The proposed Contextually Guided Convolutional Neural Network (CG-CNN) method is an alternative approach for learning such pluripotent features. Instead of a big and deep network, we use a shallow CNN network with a small number of output units, trained to recognize contextually related input patterns. Once the network is trained on one task, involving one set of different contexts, the convolutional features it develops are transferred to a new task, involving a new set of different contexts, and training continues.
Over the course of repeated transfer training on a sequence of such tasks, convolutional features progressively develop greater pluripotency, which we quantify as transfer utility (the degree of usefulness when transferred to new tasks). Thus, our approach to developing high-utility features combines transfer learning and contextual guidance. In our comparative studies, such CG-CNN features showed transfer utility and texture classification accuracy equal to, if not higher than, those of comparable features in the well-known deep networks AlexNet, ResNet, and GoogLeNet. With regard to learning powerful broad-purpose features, CG-CNN has an important practical advantage over deep CNNs in its greatly reduced architectural complexity and size, and in its ability to find pluripotent features even in small unlabeled datasets. We demonstrate these advantages by using CG-CNN to find pluripotent features in two small hyperspectral image datasets and showing that with these features we can achieve the best classification performance on these datasets.

Turning to limitations of the CG-CNN design presented in this paper, a single CNN layer is, obviously, limited in the complexity of features it can develop. A series of CNN layers, each developed in its own turn using local contextual guidance on the outputs of the already developed preceding layer(s), might be expected to extract more globally descriptive features capable of object/target recognition. However, it remains to be determined whether such features will rival the pluripotency of features extracted by comparable series of layers in conventional deep CNNs. It also remains to be explored how much contextual information, which might be used in guiding the development of higher-level CNN layers, is present at higher levels. This brings us to an even more fundamental question: how can we recognize and exploit potential sources and kinds of contextual information present in a particular data source?
This is a critical question, since it determines how contextual groups (i.e., training classes) will be chosen for EM training of the system. Finally, feedback guidance (FG) from higher to lower CNN layers, combined with local within-layer contextual guidance, would undoubtedly further enhance the pluripotency of features developed in such multilayer CG-FG-CNN designs. Thus, for future work, the proposed CG-CNN architecture can be improved by stacking multiple CG-CNN layers and incorporating other forms of contextual guidance (such as spatiotemporal proximity or feature-based similarity) as well as feedback guidance from higher CG-CNN layers.

Fig. 1. Contextually Guided Convolutional Neural Network (CG-CNN) design. (A) CG-CNN architecture.

    initialize the new Classifier weights V of Layer-5
    compute Vnew by using the E-dataset as in Eq. 4
    // using Vnew and W on the M-dataset
    compute Transferable Classification Accuracy, A, as in Eq. 6
    // as W becomes increasingly more pluripotent, A will keep rising

Fig. 4. Transferable Classification Accuracy (Eq. 6) plotted as a function of EM iterations during training of the AlexNet-compatible CG-CNN.

Fig. 5. Visualizations of the 11 × 11 weights of the 64 features in the CNN layer of CG-CNN after 1, 5, 20, 50, and 100 EM iterations. Also shown are the weights of the 64 features in the first layer of AlexNet. While even after 20 EM iterations the features are still quite crude, the features at iterations 50 and 100 are sharp, almost identical, and resemble AlexNet features.

Fig. 6. Visualizations of the 7 × 7 weights of the 64 features in the CNN layer of CG-CNN after 100 EM iterations, as well as the 64 features in the first layer of GoogLeNet, ResNet-101, and ResNet-18.

Fig. 7. Transfer utility of CG-CNN features, demonstrated following the format of Figure 2. (A) 11 × 11 pixel features.
Average Classification Accuracies of CNN-Classifiers with task-specific, random, and CG features (A_specific, A_random, and A_CG; black, blue, and red curves, respectively) are plotted as a function of the number of contextual classes used in a classification task. For comparison, the average Classification Accuracy of CNN-Classifiers with Pool-1 features of the pretrained AlexNet is also plotted (green curve). (B) 7 × 7 pixel features. Average Classification Accuracies of CNN-Classifiers with task-specific (black), random (blue), and CG (red) features are plotted as a function of the number of test classes. For comparison, the average Classification Accuracies of CNN-Classifiers with Pool-1 features of the pretrained GoogLeNet (green), ResNet-101 (magenta), and ResNet-18 (yellow) networks are also plotted.

The compared classifiers follow Fernandez-Delgado et al. (2014), including Decision Trees and Random Forests, Linear and RBF SVMs, Logistic Regression, Naive Bayes, MLP (Multi-Layer Perceptron), and K-NN (K-Nearest Neighbor).

Fig. 8. Brodatz texture images. Shown are 4 representative 50 × 50 pixel extracts from each of the 13 512 × 512 pixel images in the dataset. CG-CNN and 11 other classifiers were trained to identify these textures based on either 11 × 11 or 19 × 19 pixel image patches (see main text for details).

Tables 2 and 3 list each class and the number of examples.

Fig. 9. Hyperspectral Image dataset. (A) The Indian Pines image (left) and its ground truth labeling (right). (B) The Salinas image (top) and its ground truth labeling (bottom).

The PCA and RF features were extracted using algorithms available in the MATLAB Statistics and Machine Learning Toolbox (Fernandez-Delgado et al., 2014). We performed our experiments using the MATLAB Classification Learner toolbox. Use of CG-CNN features yielded the highest classification accuracies.
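The evaluation protocol, in which a 30-dimensional feature set is plugged into a standard classifier and scored with 10-fold cross validation, can be sketched on toy data. CG-CNN features would enter exactly where the PCA features do below; the 1-NN classifier and all names merely stand in for the MATLAB classifiers used in the paper.

```python
import numpy as np

def pca_features(X, d=30):
    """Project onto the first d principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def cv_accuracy_1nn(X, y, folds=10):
    """10-fold cross-validated accuracy of a 1-NN classifier, mirroring
    the evaluation protocol (any classifier could be plugged in here)."""
    idx = np.arange(len(X))
    accs = []
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        d2 = ((X[test][:, None, :] - X[train][None, :, :]) ** 2).sum(-1)
        pred = y[train][d2.argmin(axis=1)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# toy stand-in for per-pixel spectra: 200 "pixels" x 220 bands, 2 classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 220)),
               rng.normal(0.5, 1.0, (100, 220))])
y = np.repeat([0, 1], 100)

acc_raw = cv_accuracy_1nn(X, y)                # all 220 raw bands
acc_pca = cv_accuracy_1nn(pca_features(X), y)  # 30-dim feature set
```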
Table 2. The class names and the number of examples in each class of the Indian Pines dataset.

  Num  Class                Examples
  1    Alfalfa              54
  2    Corn-notill          1434
  ...
  16   Stone-Steel-Towers   95
       Total                10366

Table 3. The class names and the number of examples in each class of the Salinas dataset.

  Num  Class                         Examples
  1    Brocoli (green weeds 1)       2009
  2    Brocoli (green weeds 2)       3726
  3    Fallow                        1976
  4    Fallow (rough plow)           1394
  5    Fallow (smooth)               2678
  6    Stubble                       3959
  7    Celery                        3579
  8    Grapes (untrained)            11271
  9    Soil (vinyard develop)        6203
  10   Corn (senesced green weeds)   3278
  11   Lettuce (romaine 4wk)         1068
  12   Lettuce (romaine 5wk)         1927
  13   Lettuce (romaine 6wk)         916
  14   Lettuce (romaine 7wk)         1070
  15   Vinyard (untrained)           7268
  16   Vinyard (vertical trellis)    1807
       Total                         54129

Table 4. HSI classification results on the Indian Pines dataset.

  Accuracy (%)   Orig. (220)   PCA (30)     RF (30)      CG-CNN (30)
  ID3            67.8 ± 1.1    68.2 ± 1.3   67.0 ± 1.9   76.5 ± 1.6
  LDA            79.4 ± 1.0    61.0 ± 1.4   62.1 ± 1.7   72.3 ± 1.5
  Linear-SVM     85.2 ± 1.7    75.1 ± 1.4   76.0 ± 1.3   79.5 ± 0.8
  Cubic-SVM      91.9 ± 0.8    86.0 ± 1.0   87.0 ± 1.0   94.7 ± 0.7
  RBF-SVM        87.1 ± 0.6    81.5 ± 0.8   80.7 ± 1.8   90.5 ± 1.0
  K-NN           76.0 ± 1.1    74.2 ± 1.2   80.0 ± 0.9   95.4 ± 0.5
  RF             86.5 ± 0.9    82.8 ± 1.1   83.4 ± 1.3   96.2 ± 0.6

Table 5. HSI classification results on the Salinas dataset.

  Accuracy (%)   Orig. (220)   PCA (30)     RF (30)      CG-CNN (30)
  ID3            88.3 ± 0.6    90.2 ± 0.3   85.9 ± 0.5   87.3 ± 0.5
  LDA            91.7 ± 0.3    90.6 ± 0.4   89.6 ± 0.5   88.7 ± 0.6
  Linear-SVM     92.9 ± 0.4    93.2 ± 0.3   92.2 ± 0.4   92.8 ± 0.4
  Cubic-SVM      96.1 ± 1.5    96.2 ± 1.0   94.8 ± 1.0   96.8 ± 0.2
  RBF-SVM        95.2 ± 0.3    96.5 ± 0.3   94.1 ± 0.3   95.7 ± 0.2
  K-NN           91.9 ± 0.5    92.6 ± 0.4   93.0 ± 0.3   97.8 ± 0.3
  RF             95.3 ± 0.3    96.1 ± 0.2   95.2 ± 0.3   97.8 ± 0.3

Table 6. Random Forest classification results for individual classes in the Indian Pines dataset. Original 220 features are compared with 30 PCA and 30 CG-CNN features.

                        Precision (%)                       Recall (%)
  Class  #Examples   Orig.(220)  PCA(30)  CG-CNN(30)   Orig.(220)  PCA(30)  CG-CNN(30)
  1      46          94.6        93.1     100.0        76.1        58.7     93.5
  2      1428        85.3        71.4     96.1         81.1        74.4     95.3
  3      830         85.7        78.4     92.2         71.3        64.5     95.7
  4      237         74.1        72.1     97.0         68.8        46.8     95.4
  5      483         93.7        92.0     99.4         92.3        90.7     97.5
  6      730         90.3        91.1     98.9         97.8        96.2     99.3
  7      28          95.2        91.7     100          71.4        78.6     100
  8      478         95.6        94.6     99.8         99.2        99.2     100
  9      20          92.3        92.9     100          60.0        65.0     90.0
  10     972         84.4        76.4     95.3         84.4        78.7     97.9
  11     2455        82.9        79.6     96.5         91.2        86.3     97.6
  12     593         80.0        75.3     97.3         75.4        66.3     91.1
  13     205         96.1        94.3     100          96.1        96.6     99.5
  14     1265        93.0        91.1     98.5         96.8        97.5     99.0
  15     386         78.2        75.5     95.5         63.2        54.1     88.3
  16     93          98.8        96.5     97.9         88.2        89.2     100

Table 7. Random Forest classification results for individual classes in the Salinas dataset. Original 220 features are compared with 30 PCA and 30 CG-CNN features.

Acknowledgment

This work was supported, in part, by the Office of Naval Research, by the Arkansas INBRE program with a grant from the National Institute of General Medical Sciences (NIGMS) P20 GM103429 from the National Institutes of Health, and by the DART (Data Analytics That Are Robust and Trusted) grant from NSF EPSCoR RII Track-1.
References

Ahmed, A., Yu, K., Xu, W., Gong, Y., & Xing, E. (2008). Training hierarchical feed-forward visual recognition models using transfer learning from pseudo-tasks. In Computer Vision - ECCV 2008, pages 69-82. Springer Berlin Heidelberg.

Alpaydin, E. (2014). Introduction to machine learning, third edition. The MIT Press, Cambridge.

Anam, A. M. & Rushdi, M. A. (2019). Classification of scaled texture patterns with transfer learning. Expert Systems with Applications, 120:448-460.

Arjovsky, M. & Bottou, L. (2017). Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training; in review for ICLR.

Becker, S. & Hinton, G. E. (1992). Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355(6356):161.

Bengio, Y. (2012). Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 17-36.

Breiman, L. (2001). Random forests. Machine Learning, 45(1):5-32.

Brodatz, P. (1966). Textures: A photographic album for artists and designers. Dover Pubns.

Caruana, R. (1995). Learning many related tasks at the same time with backpropagation. In Advances in Neural Information Processing Systems, pages 657-664.

Clark, A. & Thornton, C. (1997). Trading spaces: Computation, representation, and the limits of uninformed learning. Behavioral and Brain Sciences, 20(1):57-66.

Do, C. B. & Batzoglou, S. (2008). What is the expectation maximization algorithm? Nature Biotechnology, 26(8):897-899.

Dosovitskiy, A., Springenberg, J. T., Riedmiller, M., & Brox, T. (2014). Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pages 766-774.

Favorov, O. V. & Kursun, O. (2011). Neocortical layer 4 as a pluripotent function linearizer. Journal of Neurophysiology, 105(3):1342-1360.

Favorov, O. V. & Ryder, D. (2004). SINBAD: A neocortical mechanism for discovering environmental variables and regularities hidden in sensory input. Biological Cybernetics, 90(3):191-202.

Fei-Fei, L., Fergus, R., & Perona, P. (2007). Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59-70.

Fernandez-Delgado, M., Cernadas, E., Barro, S., & Amorim, D. (2014). Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15:3133-3181.

Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks.

Gao, F., Yoon, H., Wu, T., & Chu, X. (2020). A feature transfer enabled multi-task deep learning model on medical imaging. Expert Systems with Applications, 143:112957.

Ghaderi, A. & Athitsos, V. (2016). Selective unsupervised feature learning with convolutional neural network (S-CNN). In 2016 23rd International Conference on Pattern Recognition (ICPR).

Glorot, X. & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249-256. JMLR Workshop and Conference Proceedings.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press, Cambridge.

Grana, M., Veganzons, M., & Ayerdi, B. (2018). Hyperspectral remote sensing scenes - Grupo de Inteligencia Computacional (GIC). http://www.ehu.eus/ccwintco/index.php. (Accessed on 12/22/2018).

Grandvalet, Y. & Bengio, Y. (2004). Semi-supervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, NIPS'04, pages 529-536. MIT Press.

Grill-Spector, K. & Malach, R. (2004). The human visual cortex. Annual Review of Neuroscience, 27:649-677.

Hawkins, J., Ahmad, S., & Cui, Y. (2017). A theory of how columns in the neocortex enable learning the structure of the world. Frontiers in Neural Circuits, 11:81.

Hawkins, J. & Blakeslee, S. (2004). On Intelligence. Times Books, USA.

Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., & Bengio, Y. (2019). Learning deep representations by mutual information estimation and maximization.

Huang, G.-B., Zhu, Q.-Y., & Siew, C.-K. (2006). Extreme learning machine: Theory and applications. Neurocomputing, 70(1):489-501.

Kay, J. W. & Phillips, W. (2011). Coherent infomax as a computational goal for neural systems. Bulletin of Mathematical Biology, 73(2):344-372.

Kingma, D. P. & Ba, J. (2017). Adam: A method for stochastic optimization.

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105.

Kursun, O. & Favorov, O. V. (2019). Suitability of features of deep convolutional neural networks for modeling somatosensory information processing. In Pattern Recognition and Tracking XXX, volume 10995, pages 94-105. International Society for Optics and Photonics, SPIE.

Körding, K. P. & König, P. (2000). Learning with two sites of synaptic integration. Network: Computation in Neural Systems, 11(1):25-39.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553):436.

Marblestone, A. H., Wayne, G., & Kording, K. P. (2016). Toward an integration of deep learning and neuroscience. Frontiers in Computational Neuroscience, 10:94.

Pan, S. J., Yang, Q., et al. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., & Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc. [Online documentation for the transforms package is available at: https://pytorch.org/docs/stable/torchvision/transforms.html]

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.

Phillips, W., Kay, J., & Smyth, D. (1995). The discovery of structure by multi-stream networks of local processors with contextual guidance. Network: Computation in Neural Systems, 6(2):225-246.

Phillips, W. A. & Singer, W. (1997). In search of common foundations for cortical computation. Behavioral and Brain Sciences, 20(4):657-683.

Poggio, T. (2016). Deep learning: Mathematics and neuroscience. A Sponsored Supplement to Science, Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, pp. 9-12.

Ravi, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., & Yang, G.-Z. (2017). Deep learning for health informatics. IEEE Journal of Biomedical and Health Informatics, 21(1):4-21.

Sellami, A., Farah, M., Riadh Farah, I., & Solaiman, B. (2019). Hyperspectral imagery classification based on semi-supervised 3-D deep neural network and adaptive band selection. Expert Systems with Applications, 129:246-259.

Shorten, C. & Khoshgoftaar, T. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6:1-48.

Simard, P., Victorri, B., LeCun, Y., & Denker, J. (1992). Tangent prop - a formalism for specifying selected invariances in an adaptive network. In Advances in Neural Information Processing Systems, volume 4. Morgan-Kaufmann.

Thrun, S. & Pratt, L. (2012). Learning to Learn. Springer Science & Business Media.

Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320-3328.

Zhao, Z., Zheng, P., Xu, S., & Wu, X. (2019). Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, pages 1-21.
A fine-grained approach to scene text script identification

Lluís Gómez and Dimosthenis Karatzas
Computer Vision Center, Universitat Autònoma de Barcelona

DOI: 10.1109/das.2016.64
arXiv: 1602.07475 (https://arxiv.org/pdf/1602.07475v1.pdf)

Abstract

This paper focuses on the problem of script identification in unconstrained scenarios. Script identification is an important prerequisite to recognition, and an indispensable condition for automatic text understanding systems designed for multi-language environments. Although widely studied for document images and handwritten documents, it remains an almost unexplored territory for scene text images. We detail a novel method for script identification in natural images that combines convolutional features and the Naive-Bayes Nearest Neighbor classifier. The proposed framework efficiently exploits the discriminative power of small stroke-parts, in a fine-grained classification framework. In addition, we propose a new public benchmark dataset for the evaluation of joint text detection and script identification in natural scenes. Experiments done on this new dataset demonstrate that the proposed method yields state-of-the-art results, while it generalizes well to different datasets and variable numbers of scripts. The evidence provided shows that multi-lingual scene text recognition in the wild is a viable proposition. Source code of the proposed method is made available online.
I. INTRODUCTION

Script and language identification are important steps in modern OCR systems designed for multi-language environments. Since text recognition algorithms are language-dependent, detecting the script and language at hand allows selecting the correct language model to employ [1]. While script identification has been widely studied in document analysis, it remains an almost unexplored problem for scene text.
In contrast to document images, scene text presents a set of specific challenges, stemming from the high variability in terms of perspective distortion, physical appearance, variable illumination, and typeface design. At the same time, scene text typically comprises a few words, contrary to the longer text passages available in document images. Current end-to-end systems for scene text reading [2], [3] assume single-script and single-language inputs given beforehand, i.e. provided by the user or inferred from available meta-data. The unconstrained text understanding problem for large collections of images from unknown sources was not considered until very recently [4]. While some research exists on script identification of text over complex backgrounds [5], [6], such methods have so far been limited to video overlaid-text, which presents in general different challenges than scene text.

This paper addresses the problem of script identification in natural scene images, paving the road towards true multi-language end-to-end scene text understanding. Multi-script text exhibits high intra-class variability (words written in the same script vary a lot) and high inter-class similarity (certain scripts resemble each other). Examining text samples from different scripts, it is clear that some stroke-parts are quite discriminative, whereas others can be trivially ignored as they occur in multiple scripts. The ability to distinguish these relevant stroke-parts can be leveraged for recognizing the corresponding script. Figure 2 shows an example of this idea.

The method presented is based on a novel combination of convolutional features [7] with the Naive-Bayes Nearest Neighbor (NBNN) classifier [8]. The key intuition behind the proposed framework is to construct powerful local feature representations and use them within a classifier framework that is able to retain the discriminative power of small image parts.
In this sense, script identification can be seen as a particular case of fine-grained recognition. Our work takes inspiration from recent methods in fine-grained recognition that make use of small image patches [9], [10], as NBNN does. Both NBNN and those template-patch based methods implicitly avoid any code-word quantization, in order to avoid loss of discriminability. Moreover, we propose a novel way to discover the most discriminative per-class stroke-parts (patches) by leveraging the topology of the NBNN search space, providing a weighted image-to-class metric distance.

The paper also introduces a new benchmark dataset, namely the "MLe2e" dataset, for the evaluation of scene text end-to-end reading systems and all intermediate stages such as text detection, script identification, and text recognition. The dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul) and a large variability of scene text samples.

II. RELATED WORK

Research in script identification on non-traditional paper layouts is scarce, and to the best of our knowledge it has so far been mainly dedicated to video overlaid-text. Gllavatta et al. [5], in the first work dealing with this task, propose the use of the wavelet transform to detect edges in text line images. Then, they extract a set of low-level edge features and make use of a K-NN classifier. Sharma et al. [11] have explored the use of traditional document analysis techniques for video overlaid-text script identification at the word level. They analyze three sets of features: Zernike moments, Gabor filters, and a set of hand-crafted gradient features previously used for handwritten character recognition; and they propose a number of pre-processing algorithms to overcome the inherent challenges of video. In their experiments, the combination of super-resolution, gradient features, and an SVM classifier performs significantly better than the other combinations. Shivakumara et al.
[12], [6] rely on skeletonization of the dominant gradients and then analyze the angular curvatures [12] of skeleton components, and the spatial/structural [6] distribution of their end, joint, and intersection points, to extract a set of hand-crafted features. For classification they build a set of feature templates from train data, and use the Nearest Neighbor rule for classifying scripts at word [12] or text block [6] level. As said before, all these methods have been designed specifically for video overlaid-text, which in general presents different challenges than scene text. Concretely, they mainly rely on accurate edge detection of text components, and this is not always feasible in scene text. A much more recent approach to scene text script identification is provided by Shi et al. [4], where the authors propose the Multi-stage Spatially-sensitive Pooling Network (MSPN). The MSPN network overcomes the limitation of having a fixed-size input in traditional Convolutional Neural Networks by pooling along each row of the intermediate layers' outputs, taking the maximum (or average) value in each row. Our work takes inspiration from recent methods in fine-grained recognition. In particular, Krause et al. [10] focus on learning expressive appearance descriptors and localizing discriminative parts. By analyzing images of objects with the same pose they automatically discover which are the most important parts for class discrimination. Yao et al. [9] obtain image representations by running template matching using a large number of randomly generated image templates. They then use a bagging-based algorithm to build a classifier by aggregating a set of discriminative yet largely uncorrelated classifiers. Our method resembles [9], [10] in trying to discover the most discriminative parts (or templates) per class.
However, in our case we do not assume those discriminative parts to be constrained in space, because the relative arrangement of individual patches in text samples of the same script is largely variable.

III. METHOD DESCRIPTION

Our method for script identification in scene images follows a multi-stage approach. Given a text line provided by a text detection algorithm, our script identification method proceeds as follows. First, we resize the input image to a fixed height of 64 pixels, maintaining its original aspect ratio in order to preserve the appearance of stroke-parts. Second, we densely extract 32 × 32 image patches, which we call stroke-parts, with a sliding window. Third, each stroke-part is fed into a single layer Convolutional Neural Network to obtain its feature representation. These steps are illustrated in Figure 3, which shows an end-to-end system pipeline incorporating our method (the script-agnostic text detection module is abstracted into a single step, as the focus of this paper is on the script identification part). This way, each input region is represented by a variable number of descriptors (one for each stroke-part), the number of which depends on the length of the input region. Thus, a given text line representation can be seen as a bag of stroke-part descriptors. However, in our method we do not make use of the Bag of visual Words model, as the quantization process severely degrades informative (rare) descriptors [8]. Instead we directly classify the text lines using the Naive-Bayes Nearest Neighbor classifier.

A. Stroke Part representation with Convolutional Features

Convolutional features provide the expressive representations of stroke-parts needed in our method. We make use of a single layer Convolutional Neural Network [7], which provides us with highly discriminative descriptors while not requiring the large amount of training resources typically needed by deeper networks.
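The resizing and dense stroke-part extraction steps can be sketched as follows. This is a minimal NumPy illustration with function names of our own choosing; the nearest-neighbor resize stands in for any proper image resampling routine.

```python
import numpy as np

def resize_nearest(image, new_h, new_w):
    """Nearest-neighbor resize of a 2-D grayscale array (a stand-in
    for a proper interpolating resize)."""
    h, w = image.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return image[rows[:, None], cols]

def extract_stroke_parts(image, target_height=64, patch_size=32, step=8):
    """Resize a text-line image to a fixed height (keeping the aspect
    ratio) and densely extract square stroke-part patches with a
    sliding window. Returns (n_patches, patch_size, patch_size)."""
    h, w = image.shape
    new_w = max(patch_size, int(round(w * target_height / h)))
    resized = resize_nearest(image, target_height, new_w)
    patches = []
    for x in range(0, new_w - patch_size + 1, step):      # slide along text
        for y in range(0, target_height - patch_size + 1, step):
            patches.append(resized[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)
```

The paper reports two variants with step sizes of 8 and 16 pixels; whether the window also slides vertically within the 64-pixel strip is our assumption.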
The weights of the convolutional layer can be efficiently learned using the K-means algorithm. We adopt a similar design for our network as the one presented in [13]. We set the number of convolutional kernels to 256 and the receptive field size to 8 × 8, and we adopt the same non-linear activation function as in [13]. After the convolutional layer we stack a spatial average pooling layer to reduce the dimensionality of our representation to 2304 (3 × 3 × 256). The number of convolutional kernels and the kernel sizes of the convolution and pooling layers have been set experimentally, by cross-validation over a number of typical possible values for single-layer networks. To train the network we first resize all train images to a fixed height, while retaining the original aspect ratio. Then we extract random patches with size equal to the receptive field size, and perform contrast normalization and ZCA whitening [14]. Finally we apply the K-means algorithm to the pre-processed patches in order to learn the K = 256 convolutional kernels of the CNN. Figure 4 depicts a subset of the learned convolutional kernels, where their resemblance to small elementary stroke-parts can be appreciated. Once the network is trained, the convolutional feature representation of a stroke-part is obtained by feeding its 32 × 32 pixel image patch into the CNN input, after contrast normalization and ZCA whitening. A key difference of our work from [13], and in general from the typical use of CNN feature representations, is that we do not aim at representing the whole input image with a single feature vector; instead we extract a set of convolutional features from small parts in a dense fashion. The number of features per image varies according to its aspect ratio.
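The filter learning and descriptor computation can be sketched as follows. This is a simplified illustration: spherical K-means and a ReLU nonlinearity are our assumptions (the paper adopts the design of [13]), and the contrast-normalization/ZCA preprocessing is omitted for brevity.

```python
import numpy as np

def learn_filters(patches, k=256, iters=10, rng=None):
    """Learn convolutional kernels with (spherical) K-means on
    receptive-field patches, flattened to vectors of length rf*rf."""
    rng = np.random.default_rng(rng)
    X = patches.reshape(len(patches), -1).astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-8
    D = X[rng.choice(len(X), size=k, replace=False)]   # init from data
    for _ in range(iters):
        assign = (X @ D.T).argmax(axis=1)              # closest centroid
        for j in range(k):
            members = X[assign == j]
            if len(members):
                c = members.sum(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-8)  # re-normalize
    return D                                           # shape (k, rf*rf)

def stroke_part_descriptor(patch, D, rf=8, pool=3):
    """Convolve a square stroke-part with the filter bank and apply a
    pool x pool spatial average pooling, giving a pool*pool*k vector
    (2304 = 3 x 3 x 256 with the paper's settings)."""
    h = patch.shape[0] - rf + 1                        # valid positions
    windows = np.lib.stride_tricks.sliding_window_view(patch, (rf, rf))
    fmap = np.maximum(0.0, windows.reshape(h * h, rf * rf) @ D.T)
    fmap = fmap.reshape(h, h, -1)                      # ReLU is a stand-in
    cells = np.array_split(np.arange(h), pool)
    pooled = [fmap[np.ix_(r, c)].mean(axis=(0, 1))
              for r in cells for c in cells]
    return np.concatenate(pooled)
```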
Notice that the typical use of a CNN, resizing the input images to a fixed aspect ratio, is not appealing in our case because it may produce a significant distortion of the discriminative parts of the image that are characteristic of its class.

B. Naive-Bayes Nearest Neighbor

The Naive-Bayes Nearest Neighbor (NBNN) classifier [8] is a natural choice in our pipeline because it computes direct Image-to-Class (I2C) distances without any intermediate descriptor quantization. Thus, there is no loss in the discriminative power of the stroke-part representations. Moreover, having classes with large diversity encourages the use of I2C distances instead of measuring Image-to-Image similarities. All stroke-parts extracted from the training set images provide the templates that populate the NBNN search space. In NBNN the I2C distance d_{I2C}(I, C) is computed as

d_{I2C}(I, C) = \sum_{i=1}^{n} \| d_i - NN_C(d_i) \|^2,

where d_i is the i-th descriptor of the query image I and NN_C(d_i) is the Nearest Neighbor of d_i in class C. Figure 3 shows how the computation of I2C distances in our pipeline reduces to N × n Nearest Neighbor searches, where N is the number of classes and n is the number of descriptors in the query image. To efficiently search for NN_C(d_i) we make use of the Fast Approximate Nearest Neighbor kd-tree algorithm described in [15].

C. Weighting per class stroke-part templates by their importance

When measuring the I2C distance d_{I2C}(I, C) it is possible to use a weighted distance function which weights each stroke-part template in the train dataset according to its discriminative power. The weighted I2C distance is then computed as

d_{I2C}(I, C) = \sum_{i=1}^{n} (1 - w_{NN_C(d_i)}) \| d_i - NN_C(d_i) \|^2,

where w_{NN_C(d_i)} is the weight of the Nearest Neighbor of d_i in class C. The weight assigned to each template reflects its ability to discriminate against the class it can discriminate best. We learn the weights associated to each stroke-part template as follows.
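A minimal sketch of NBNN classification with the optional template weighting follows. A brute-force nearest-neighbor search stands in for the approximate kd-tree of [15], and all function names are ours.

```python
import numpy as np

def i2c_distance(query_descs, class_templates, weights=None):
    """Image-to-Class distance: for each query descriptor, the squared
    distance to its Nearest Neighbor among the class templates, summed
    over descriptors. Brute-force NN stands in for the kd-tree search."""
    d2 = ((query_descs[:, None, :] - class_templates[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    base = d2[np.arange(len(query_descs)), nn]
    if weights is None:
        return base.sum()
    return ((1.0 - weights[nn]) * base).sum()   # weighted I2C variant

def nbnn_classify(query_descs, templates_by_class, weights_by_class=None):
    """Assign the query image to the class with the lowest I2C distance."""
    scores = {c: i2c_distance(query_descs, T,
                              None if weights_by_class is None
                              else weights_by_class[c])
              for c, T in templates_by_class.items()}
    return min(scores, key=scores.get)
```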
First, for each template we search for the maximum distance to any of its Nearest Neighbors in all classes except its own, and then we normalize these values to the range [0, 1] by dividing by the largest distance encountered over all templates. This way, templates that are important in discriminating one class against at least one other class have a lower contribution to the I2C distance when they are matched as NN_C of one of the query image's parts.

IV. EXPERIMENTS

All reported experiments were conducted over two datasets, namely the Video Script Identification Competition (CVSI-2015) [16] dataset and the MLe2e dataset. The CVSI-2015 [16] dataset comprises pre-segmented video words in ten scripts: English, Hindi, Bengali, Oriya, Gujrathi, Punjabi, Kannada, Tamil, Telegu, and Arabic. The dataset contains about 1000 words for each script and is divided into three parts: a training set (60% of the total images), a validation set (10%), and a test set (30%). Text is extracted from various video sources (news, sports, etc.) and, while it contains a few instances of scene text, it covers mainly overlay video text.

A. The MLe2e dataset

This paper introduces the MLe2e multi-script dataset for the evaluation of scene text end-to-end reading systems and all intermediate stages: text detection, script identification and text recognition. The MLe2e dataset has been harvested from various existing scene text datasets, for which the images and ground-truth have been revised in order to make them homogeneous. The original images come from the following datasets: Multilanguage (ML) [17] and MSRA-TD500 [18] contribute Latin and Chinese text samples, Chars74K [19] and MSRRC [20] contribute Latin and Kannada samples, and KAIST [21] contributes Latin and Hangul samples. MLe2e is available at http://158.109.8.43/script identification/.
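The weight-learning procedure just described can be sketched as follows (Euclidean distances computed brute force; assumes at least two classes; function name is ours):

```python
import numpy as np

def learn_template_weights(templates_by_class):
    """For each template, take the largest NN distance to any class
    other than its own, then normalize all values into [0, 1] by the
    global maximum over all templates."""
    raw = {}
    for c, T in templates_by_class.items():
        others = [templates_by_class[o] for o in templates_by_class if o != c]
        per_class = []
        for O in others:
            d2 = ((T[:, None, :] - O[None, :, :]) ** 2).sum(-1)
            per_class.append(np.sqrt(d2.min(axis=1)))  # NN dist per template
        raw[c] = np.max(per_class, axis=0)             # max over other classes
    top = max(v.max() for v in raw.values())
    return {c: v / top for c, v in raw.items()}
```

Templates that sit far from every sample of some other class receive a weight near 1, so their (1 - w) factor shrinks their contribution to the weighted I2C distance, as the paper specifies.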
In order to provide a homogeneous dataset, all images have been resized proportionally to fit in 640 × 480 pixels, which is the default image size of the KAIST dataset. Moreover, the ground-truth has been revised to ensure a common text line annotation level [22]. During this process human annotators were asked to review all resized images, adding the script class labels to the ground-truth, and checking for annotation consistency: discarding images with unknown scripts or where all text is unreadable (this may happen because images were resized); joining individual word annotations into text line level annotations; and discarding images where a correct text line segmentation is not clear or cannot be established, and images where a bounding box annotation contains significant parts of more than one script or significant parts of background (this may happen with heavily slanted or curved text). Arabic numerals (0, ..., 9), widely used in combination with many (if not all) scripts, are labeled as follows: a text line containing text and Arabic numerals is labeled with the script of the text it contains, while a text line containing only Arabic numerals is labeled as Latin. The MLe2e dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul) and a large variability of scene text samples. The dataset is split into a train and a test set with 450 and 261 images respectively. The split was done randomly, but in a way that the test set contains a balanced number of instances of each class (approx. 160 text line samples of each script), leaving the rest of the images for the train set (which is not balanced by default). The dataset is suitable for evaluating various typical stages of end-to-end pipelines, such as multi-script text detection, joint detection and script identification, and script identification in pre-segmented text lines.
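The Arabic-numerals labeling convention can be expressed as a small helper (a hypothetical illustration, not part of the dataset tooling):

```python
def label_script(line_text, script_of_letters):
    """Apply the MLe2e labeling rule for Arabic numerals: a line
    containing only digits is labeled Latin; otherwise the line takes
    the script of the (non-digit) text it contains."""
    stripped = [ch for ch in line_text if not ch.isspace()]
    if stripped and all(ch.isdigit() for ch in stripped):
        return "Latin"
    return script_of_letters
```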
For the latter, the dataset also provides the cropped images with the text lines corresponding to each data split: 1178 and 643 images in the train and test set respectively.

B. Script identification in pre-segmented text lines

First, we study the performance of the proposed method for script identification in pre-segmented text lines. Table I shows the results obtained with two variants of our method that differ only in the number of pixels used as step size in the sliding window stage (8 or 16). We provide a comparison with three well-known image recognition pipelines using Scale Invariant Features [23] (SIFT) in three different encodings: Fisher Vectors, Vector of Locally Aggregated Descriptors (VLAD), and Bag of Words (BoW); and a linear SVM classifier. In all baselines we extract SIFT features at four different scales in a sliding window with a step of 8 pixels. For the Fisher vectors we use a 256 visual words GMM, for VLAD 256 vector-quantized visual words, and for BoW histograms of 2,048 vector-quantized visual words. The step size and number of visual words were set to values similar to our method's where possible, in order to offer a fair evaluation. These three pipelines have been implemented with the VLFeat [24] and liblinear [25] open source libraries. We also compare against a Local Binary Pattern variant, the SRS-LBP [26], pooled over the whole image and followed by a simple KNN classifier. Methods marked with an asterisk make use of a 16-pixel step size in the sliding window for stroke-parts extraction. As shown in Table I, the proposed method outperforms all baseline methods. The contribution of weighting per-class stroke-part templates by their importance, as explained in Section III-C, is significant on the MLe2e dataset, especially when using larger steps for the sliding window stroke-parts extraction, while producing a small accuracy decrease on the CVSI-2015 dataset. Our interpretation of these results relates to the distinct nature of the two datasets.
On the one hand, CVSI's overlaid-text variability is rather limited compared with that found in the scene text of MLe2e; on the other hand, in CVSI the number of templates is much larger (by about one order of magnitude). Thus, our weighting strategy is better suited to the MLe2e case, where important (discriminative) templates may fall isolated in some region of the NBNN search space. Table II shows the overall performance comparison of our method with the participants in the ICDAR2015 Competition on Video Script Identification (CVSI 2015) [16]. The methods labeled CVC-1 and CVC-2 correspond to the method described in this paper; notice however that as participants in the competition we used the configuration with a 16-pixel step sliding window, i.e. CVC-1 and CVC-2 correspond respectively to "Convolutional Features* + NBNN" and "Convolutional Features* + NBNN + weighting" in Table I. The CVSI-2015 competition winner (Google) makes use of a deep convolutional network for class prediction that is trained using data-augmentation techniques. Our method demonstrates competitive performance with a shallower design that implies a much faster and more attainable training procedure.

C. Joint text detection and script identification in scene images

In this experiment we evaluate the performance of a complete pipeline for detection and script identification in its joint ability to detect text lines in natural scene images and properly recognize their scripts. The key interest of this experiment is to study the performance of the proposed script identification algorithm when realistic, non-perfect, text localization is available. Most text detection pipelines are trained explicitly for a specific script (typically English) and generalize rather badly to the multi-script scenario. We have therefore chosen to use the script-agnostic method of Gomez et al. [27], which is designed for multi-script text detection and generalizes well to any script.
The method detects character candidates using Maximally Stable Extremal Regions (MSER) [28], and builds different hierarchies where the initial regions are grouped by agglomerative clustering, using complementary similarity measures. In such hierarchies each node defines a possible text hypothesis. Then, an efficient classifier, using incrementally computable descriptors, is used to walk each hierarchy and select the nodes with the largest text-likelihood. In this paper script identification is performed at the text line level, because segmentation into words is largely script-dependent. Notice however that in some cases, by the intrinsic nature of scene text, a text line provided by the text detection module may correspond to a single word, so we must deal with a large variability in the length of the provided text lines. The experiments are performed over the new MLe2e dataset. For the evaluation of the joint text detection and script identification task on the MLe2e dataset we propose the use of a simple two-stage framework. First, localization is assessed based on the Intersection-over-Union (IoU) metric between detected and ground-truth regions, as commonly used in object recognition tasks [29] and the recent ICDAR 2015 competition 1. Second, the predicted script is verified against the ground-truth. A detected bounding box is thus considered a True Positive if it has an IoU > 0.5 with a bounding box in the ground-truth and the predicted script is correct. The localization-only performance, corresponding to the first stage of the evaluation, yields an F-score of 0.58 (Precision of 0.54 and Recall of 0.62). This defines the upper bound for the joint task. The two-stage evaluation, including script identification using the proposed method, achieves an F-score of 0.51, with a Precision of 0.48 and a Recall of 0.55.
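The two-stage evaluation can be sketched as follows. Boxes are given as (x1, y1, x2, y2); the greedy one-to-one matching policy is our assumption, as the paper does not spell out how multiple overlapping detections are resolved.

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def evaluate_joint(detections, ground_truth, thr=0.5):
    """Two-stage evaluation: a detection is a True Positive if it
    overlaps an unmatched ground-truth box with IoU > thr AND predicts
    the correct script. Inputs are lists of (box, script) pairs.
    Returns (precision, recall, f_score)."""
    matched = set()
    tp = 0
    for box, script in detections:
        for j, (gbox, gscript) in enumerate(ground_truth):
            if j in matched:
                continue
            if iou(box, gbox) > thr and script == gscript:
                matched.add(j)
                tp += 1
                break
    p = tp / len(detections) if detections else 0.0
    r = tp / len(ground_truth) if ground_truth else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```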
The results demonstrate that the proposed method for script identification is effective even when the text region is badly localized, as long as part of the text area is within the localized region. This extends to regions that did not pass the 0.5 IoU threshold but had their script correctly identified. Such a behavior is to be expected, due to the way our method treats local information to decide on a script class. This opens the possibility of using script identification to inform and/or improve the text localization process: the information of the identified script can be used to refine the detections.

D. Cross-domain performance and confusion in Latin-only datasets

In this experiment we evaluate the cross-domain performance of stroke-part templates learned on one dataset and applied to the other. We evaluate on the CVSI test set using the templates learned on the MLe2e train set, and the other way around, by measuring classification accuracy only for the two common script classes: Latin and Kannada. Finally, we evaluate the misclassification error of our method using stroke-part templates learned from both datasets over a third, Latin-only dataset. For this experiment we use the ICDAR2013 scene text dataset [30], which provides cropped word images of English text, and measure the classification accuracy of our method. (Fig. 6 shows examples of correctly classified instances.) Table III shows the results of these experiments. From the table we can see that features learned on the MLe2e dataset are much better at generalizing to other datasets. In fact, this is an expected result, because the domain of overlay text in CVSI can be seen as a subdomain of the scene text domain of MLe2e. Since the MLe2e dataset is richer in text variability, e.g.
in terms of perspective distortion, physical appearance, variable illumination and typeface designs, script identification on this dataset is a more difficult problem, but the dataset is also better suited for learning effective cross-domain stroke-part descriptors. Particularly important is the result obtained on the English-only ICDAR dataset, which is near 95%. This demonstrates that our method is able to learn discriminative stroke-part representations that are not dataset-specific. It is important to notice that the rows in Table III are not directly comparable, as the two models have different numbers of classes: 10 in the case of training over the CVSI dataset and 4 in the case of MLe2e. However, the experiment is relevant when comparing the performance of a learned model on datasets different from the one used for training. In this sense, the obtained results show a clear weakness of the features learned on the video overlaid text of CVSI for correctly identifying the script in scene text images. On the contrary, features learned on the MLe2e dataset perform very well on other scene text data (ICDAR), while exhibiting an expected but acceptable decrease in performance on video overlaid text (CVSI-2015).

V. CONCLUSION

A novel method for script identification in natural scene images was presented. The method combines the expressive representation of convolutional features and the fine-grained classification characteristics of the Naive-Bayes Nearest Neighbor classifier. In addition, a new public benchmark dataset for the evaluation of all stages of end-to-end scene text reading systems was introduced. Experiments on this new dataset and on the CVSI video overlay dataset exhibit state-of-the-art accuracy rates in comparison to a number of methods, including the participants in the CVSI-2015 competition and standard image recognition pipelines.
Our work demonstrates the viability of script identification in natural scene images, paving the road towards true multi-language end-to-end scene text understanding. Source code of our method and the MLe2e dataset are available online at http://158.109.8.43/script identification/.

VI. ACKNOWLEDGMENTS

This work was supported by the Spanish project TIN2014-52072-P, the fellowship RYC-2009-05031, and the Catalan govt scholarship 2014FI B1-0017.

Fig. 1: Collections of images from unknown sources may contain textual information in different scripts.
Fig. 2: Certain stroke-parts (in green) are discriminative for the identification of a particular script (left), while others (in red) can be trivially ignored because they are frequent in other classes (right).
Fig. 3: Method deploy pipeline: Text lines provided by a text detection algorithm are resized to a fixed height, image patches (stroke-parts) are extracted with a sliding window and fed into a single layer Convolutional Neural Network (CNN). This way, each text line is represented by a variable number of stroke-part descriptors, which are used to calculate image-to-class (I2C) distances and classify the input text line using the Naive-Bayes Nearest Neighbor (NBNN) classifier.
Fig. 4: Convolution kernels of our single layer network learned with k-means.
(Here d_i is the i-th descriptor of the query image I, and NN_C(d_i) is the Nearest Neighbor of d_i in class C; NBNN then classifies the query image to the class Ĉ with the lowest I2C distance, i.e. Ĉ = argmin_C d_{I2C}(I, C).)
Fig. 5: A selection of samples misclassified by our method: low contrast images, rare font types, degraded text, letters mixed with numerals, etc.
TABLE I: Script identification accuracy on pre-segmented text lines.
As can be appreciated in the table, adding the configuration with an 8-pixel step, our method ranks second in the table, only 1% under the winner of the competition.

TABLE II: Overall classification performance comparison with the participants in the ICDAR2015 competition on video script identification (CVSI), considering all ten scripts.

Method               CVSI (overall performance)
Google               98.91
Ours (8 pixel step)  97.91
HUST [4]             96.69
CVC-2                96.00
CVC-1                95.88
C-DAC                84.66
CUK                  74.06

TABLE III: Cross-domain performance of our method measured by training/testing on different datasets.

1 http://rrc.cvc.uab.es

REFERENCES

[1] R. Unnikrishnan and R. Smith, "Combined script and page orientation estimation using the Tesseract OCR engine," in MOCR, 2009.
[2] A. Bissacco, M. Cummins, Y. Netzer, and H. Neven, "PhotoOCR: Reading text in uncontrolled conditions," in ICCV, 2013.
[3] M. Jaderberg, A. Vedaldi, and A. Zisserman, "Deep features for text spotting," in ECCV, 2014.
[4] B. Shi, C. Yao, C. Zhang, X. Guo, F. Huang, and X. Bai, "Automatic script identification in the wild," in ICDAR, 2015.
[5] J. Gllavata and B. Freisleben, "Script recognition in images with complex backgrounds," in SPIT, 2005.
[6] P. Shivakumara, Z. Yuan, D. Zhao, T. Lu, and C. L. Tan, "New gradient-spatial-structural features for video script identification," CVIU, 2015.
[7] A. Coates, A. Y. Ng, and H. Lee, "An analysis of single-layer networks in unsupervised feature learning," in AIStats, 2011.
[8] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in CVPR, 2008.
[9] B. Yao, G. Bradski, and L. Fei-Fei, "A codebook-free and annotation-free approach for fine-grained image categorization," in CVPR, 2012.
[10] J. Krause, T. Gebru, J. Deng, L.-J. Li, and L. Fei-Fei, "Learning features and parts for fine-grained recognition," in ICPR, 2014.
[11] N. Sharma, S. Chanda, U. Pal, and M. Blumenstein, "Word-wise script identification from video frames," in ICDAR, 2013.
[12] P. Shivakumara, N. Sharma, U. Pal, M. Blumenstein, and C. L. Tan, "Gradient-angular-features for word-wise video script identification," in ICPR, 2014.
[13] A. Coates, B. Carpenter, C. Case, S. Satheesh, B. Suresh, T. Wang, D. Wu, and A. Ng, "Text detection and character recognition in scene images with unsupervised feature learning," in Proc. ICDAR, 2011.
[14] A. Kessy, A. Lewin, and K. Strimmer, "Optimal whitening and decorrelation," arXiv preprint arXiv:1512.00809, 2015.
[15] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in VISAPP, 2009.
[16] N. Sharma, R. Mandal, M. Blumenstein, and U. Pal, "ICDAR 2015 competition on video script identification (CVSI-2015)," in ICDAR, 2015.
[17] Y.-F. Pan, X. Hou, and C.-L. Liu, "Text localization in natural scene images based on conditional random field," in Proc. ICDAR, 2009.
[18] C. Yao, X. Bai, W. Liu, Y. Ma, and Z. Tu, "Detecting texts of arbitrary orientations in natural images," in CVPR, 2012.
[19] T. E. de Campos, B. R. Babu, and M. Varma, "Character recognition in natural images," in ICCVTA, 2009.
[20] D. Kumar, M. Prasad, and A. Ramakrishnan, "Multi-script robust reading competition in ICDAR 2013," in MOCR, 2013.
[21] S. Lee, M. S. Cho, K. Jung, and J. H. Kim, "Scene text extraction with edge constraint and text collinearity," in ICPR, 2010.
[22] D. Karatzas, S. Robles, and L. Gomez, "An on-line platform for ground truthing and performance evaluation of text extraction systems," in DAS, 2014.
[23] D. G. Lowe, "Object recognition from local scale-invariant features," in ICCV, 1999.
[24] A. Vedaldi and B. Fulkerson, "VLFeat: An open and portable library of computer vision algorithms," http://www.vlfeat.org/, 2008.
[25] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, "LIBLINEAR: A library for large linear classification," JMLR, 2008.
[26] A. Nicolaou, A. D. Bagdanov, M. Liwicki, and D. Karatzas, "Sparse radial sampling LBP for writer identification," in ICDAR, 2015.
[27] L. Gomez and D. Karatzas, "A fast hierarchical method for multi-script and arbitrary oriented scene text extraction," arXiv preprint arXiv:1407.7504, 2014.
[28] J. Matas, O. Chum, M. Urban, and T. Pajdla, "Robust wide-baseline stereo from maximally stable extremal regions," Image and Vision Computing, 2004.
[29] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes challenge: A retrospective," IJCV, 2014.
[30] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazan, L. P. de las Heras et al., "ICDAR 2013 robust reading competition," in ICDAR, 2013.
[]
[ "Source Detection in Interferometric Visibility Data I. Fundamental Estimation Limits", "Source Detection in Interferometric Visibility Data I. Fundamental Estimation Limits" ]
[ "Cathryn M Trott [email protected] ", "Randall B Wayth ", "Jean-Pierre R Australia ", "Macquart ", "Steven J Tingay ", "\nDepartment of Radiology\nInternational Centre for Radio Astronomy Research\nMassachusetts General Hospital\n02114BostonMA\n", "\nInternational Centre for Radio Astronomy Research\nCurtin University\nBentleyWAAustralia\n", "\nInternational Centre for Radio Astronomy Research\nCurtin University\nBentleyWA\n", "\nInternational Centre for Radio Astronomy Research\nCurtin University\nBentleyWAAustralia\n", "\nCurtin University\nBentleyWAAustralia\n" ]
[ "Department of Radiology\nInternational Centre for Radio Astronomy Research\nMassachusetts General Hospital\n02114BostonMA", "International Centre for Radio Astronomy Research\nCurtin University\nBentleyWAAustralia", "International Centre for Radio Astronomy Research\nCurtin University\nBentleyWA", "International Centre for Radio Astronomy Research\nCurtin University\nBentleyWAAustralia", "Curtin University\nBentleyWAAustralia" ]
[]
Transient radio signals of astrophysical origin present an avenue for studying the dynamic universe. With the next generation of radio interferometers being planned and built, there is great potential for detecting and studying large samples of radio transients. Currently-used image-based techniques for detecting radio sources have not been demonstrated to be optimal, and there is a need for development of more sophisticated algorithms, and methodology for comparing different detection techniques. A visibility-space detector benefits from our good understanding of visibility-space noise properties, and does not suffer from the image artifacts and need for deconvolution in image-space detectors. In this paper, we propose a method for designing optimal source detectors using visibility data, building on statistical decision theory. The approach is substantially different to conventional radio astronomy source detection. Optimal detection requires an accurate model for the data, and we present a realistic model for the likelihood function of radio interferometric data, including the effects of calibration, signal confusion and atmospheric phase fluctuations. As part of this process, we derive fundamental limits on the calibration of an interferometric array, including the case where many relatively weak "in-beam" calibrators are used. These limits are then applied, along with a model for atmospheric phase fluctuations, to determine the limits on measuring source position, flux density and spectral index, in the general case. We then present an optimal visibility-space detector using realistic models for an interferometer.
10.1088/0004-637x/731/2/81
[ "https://arxiv.org/pdf/1102.3746v1.pdf" ]
44,437,276
1102.3746
bb3cbc19467e79e8d8733ae42d5e79ab936c8f09
Source Detection in Interferometric Visibility Data I. Fundamental Estimation Limits

18 Feb 2011

Cathryn M. Trott ([email protected]), Randall B. Wayth, Jean-Pierre R. Macquart, Steven J. Tingay

International Centre for Radio Astronomy Research, Curtin University, Bentley, WA, Australia

Received; accepted

Subject headings: methods: statistical - radio continuum: general - techniques: interferometric
1. Introduction

The next generation of wide-field survey radio interferometers (MWA, ASKAP, ATA, LOFAR, LWA), culminating in the Square Kilometre Array (SKA), faces new challenges in meeting the ambitious science goals of the twenty-first century. These goals demand advances in telescope engineering and data-processing design, as well as sophisticated and novel observational techniques. Wide-field instruments will generate data at high rates, and therefore require techniques that optimally use the data. One of the main science goals of these instruments is to detect transient and variable radio sources, and as such a key requirement will be optimal source detection.

The time domain, historically dominated by pulsar observations, is broadening to study new classes of dynamic sources. New high-sensitivity, large collecting area instruments afford the opportunity to detect and study transient signals. One group of well-known radio transients are pulsars, which are detected through their periodicity. The more general class of transient sources, with non-periodic (episodic) behaviour, has not been surveyed and studied systematically. As well as the knowledge we can gain from studying expected transient sources (e.g., GRBs, pulsating stars), there is great potential for observing exotic astrophysical events such as annihilating black holes, gravity wave events (e.g., colliding black holes), magnetars and extraterrestrial signals (Macquart et al. 2010).
Optimal source detection has a role for detection of both fast and slow transients. Detection of signals in radio astronomy typically occurs in the image domain, and uses either a simple highest-peak thresholding, or a matched filtering operation (Cordes 2009). The former sets a threshold value above the noise level, and attempts to detect the strongest signal in an image, and then model the incompleteness to remove its sidelobes. The next strongest signal is then compared to the threshold and modelled, and the process continues until there are no more signals above the threshold. This process works well for strong signals that are well-separated (no source confusion), and well-understood noise. Matched filtering is an operation that is derived from statistical decision theory. The matched filter correlates the received data with a replica of the signal (Kay 1998). It weights the data according to the signal strength, thereby allowing the datapoints with the strongest signals to contribute more to the filter output. The matched filter is optimal for white gaussian noise (uncorrelated noise) and known signal. For non-Gaussian noise, the matched filter is no longer optimal (Kay 1998). Hence, the matched filter is not necessarily the best detector that can be designed for source detection in image-space data. In addition, to perform a matched filtering operation with sources with unknown parameters (e.g., sky position, strength), radio astronomers typically correlate (match) the data with a set of pre-determined templates, to find the template that produces the maximal output (Cordes 2009). Any inaccuracies in the templates compared with the true signal will degrade performance. Matched filters are applied in static fields, and their application becomes problematic for dynamic datasets: the loss in detection performance due to a spatial mismatch of the filter can be compounded by a temporal mismatch. 
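The matched-filtering operation described above can be illustrated with a minimal sketch. This is not from the paper: the 1-D "image" row, source position, amplitude and noise level are all invented for illustration, and the filter is simply the correlation of the data with a unit-norm replica of the (assumed known) Gaussian signal shape, which is SNR-optimal for white Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D sketch (invented values): a Gaussian-shaped source in
# white Gaussian noise, located by matched filtering.
n = 512
true_loc = 356                                   # invented source position
x = np.arange(n)
signal = 2.0 * np.exp(-0.5 * ((x - true_loc) / 4.0) ** 2)
data = signal + 0.3 * rng.standard_normal(n)

# Matched filter: correlate the data with a unit-norm replica of the signal
# shape. For white Gaussian noise this maximises the output SNR (Kay 1998).
tpl = np.exp(-0.5 * ((np.arange(33) - 16) / 4.0) ** 2)
tpl /= np.linalg.norm(tpl)
mf = np.correlate(data, tpl, mode="same")

peak_mf = int(np.argmax(mf))                     # matched-filter location estimate
```

Any mismatch between the template and the true signal shape (spatial or, for transients, temporal) reduces the filter output, which is the degradation discussed above.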
Recent radio surveys (e.g., FIRST, NVSS, SUMSS) have employed matched filter plus flux-limit threshold detectors, typically fitting Gaussians with free parameters as the signal filter, and choosing a flux limit based on the estimated noise level of the dataset (Becker et al. 1995; Condon et al. 1998; Mauch et al. 2003).

Detection of sources in image space can be problematic, firstly because it requires deconvolution of the image prior to detection: over-, under- and inaccurate cleaning can lead to biased images (Condon et al. 1998). Perley (1999) and Rau et al. (2009) discuss the origin and magnitude of errors in synthesis images. In general, the many-to-one Fourier operation performed on visibility data to obtain image data propagates any non-gaussianity in the (uv) data to the image plane, producing correlations in the noise between pixels across the field (Refregier and Brown 1998). Imperfect calibration and atmospheric effects yield deviations from gaussianity in visibility data. In addition, images contain structured backgrounds: source sidelobes, gridding artifacts (although these are minimal for snapshot observations), and, in some cases, confusing sources. The presence of source sidelobes due to data incompleteness confounds identification of true sources. Differencing of temporally adjacent images to detect transient sources is complicated by non-uniform image pixel size and changes in the sidelobe distribution due to the different uv-plane sampling. It can also lead to artifacts when subtracting one image with a complicated noise structure from another with a different noise structure. Finally, removal (flagging) of baselines changes the beam shape, yielding subtraction artifacts in the image.
Although data incompleteness and non-gaussian noise also exist in visibility data, in that space we can explicitly consider only the measured data in our detector, and model the deviations from gaussianity: detectors in image space work on the intensity of image pixels, and cannot account for signal that is inferred by the image production process. We do not mean to suggest that practical and useful source detection cannot be performed in the image plane. Wijnholds and van der Veen (2008) describe how to propagate errors in visibility space to image space. This type of technique (and others, such as bootstrapping, used by Kemball et al. 2010, as a tool to assess radio image fidelity) can be used to extend an understanding of the statistical properties of the data into image space. We do, however, regard visibility space as a more natural space for optimal detection, because the data are in a form closer to the original signals collected by the antennas, the covariance structure can be more simply expressed, and no data are inferred through interpolation in the image plane and extrapolation from the uv plane (as is the case with forming an image from incomplete uv data). An emerging application of general source detection is detection of transient sources. Transient source detection and characterization are key science drivers for many synthesis imaging arrays that are under construction. The Murchison Widefield Array (MWA) and Australian SKA Pathfinder (ASKAP) are two SKA precursor instruments under construction in the Western Australian desert. ASKAP is an array of 36 12-metre dish antennas, currently being constructed at the Murchison Radio-astronomy Observatory (MRO), Western Australia, and operated by CSIRO. The MWA will probe much of the parameter space of interest to the SKA. 
It is also under construction at MRO, will comprise 512 tile-type antennas with multiple dipole sensors per tile, and is being constructed by a consortium of local and foreign institutions and government agencies. The MWA and ASKAP use fundamentally different hardware for signal reception and processing, and operate in different frequency ranges (ASKAP: 0.7-1.8 GHz, MWA: 80-300 MHz), thereby producing complementary data sets, and pursuing different science goals. The MRO is Australia and New Zealand's candidate site for the core of the SKA. The MWA and ASKAP are both wide-field instruments, and will both contend with variations in calibration across the field-of-view. As such, they will make use of 'field-based' calibration, whereby calibration sources across the field are used to form a model for the antenna beam (Kassim et al. 2007). The operating frequencies of both instruments also make them subject to atmospheric and ionospheric effects on the signal wavefront. These effects include blurring of the source position, due to phase fluctuations from the troposphere (ASKAP), and source shifting due to differential excess path length to antennas produced by the ionosphere (MWA, Mitchell et al. 2008). These are some of the challenges faced by wide-field instruments. Design of transient detectors is relatively new. In general, expected transient source populations will be dominated by weak sources and therefore extraction of sources close to the noise limit will generate the most new and interesting science. Hence, transient detector design is an important field of research. The Allen Telescope Array (ATA) has recently reported the initial development of their slow transient detection pipeline (Croft et al. 2010). At this preliminary stage, they are matching known catalogues with sources in their fields, and have not found any new convincing transient candidates. 
Within the ASKAP project, the same fields of sky will be observed periodically to detect slow transients with the VAST survey. An image of the sky will be produced periodically, and these images searched for all sources. Any detections will be added to a searchable database, from which light-curves of objects can be extracted. Transient signals will necessarily show brightness variability over time. The current method being proposed to detect sources is Duchamp 1 . Duchamp uses either a simple thresholding to find sources, or a more sophisticated algorithm based on statistical decision theory. The downside to the method is the use of global parameters, and assumed white gaussian noise properties to define the PDFs. Noise properties can be assumed to be uniform for small fields, but may vary greatly over the field for the large fields-of-view sampled in ASKAP and the MWA. In addition, the technique is not 'real-time'. Fridman (2010) has recently proposed a method for detecting single fast transient events from single-dish datasets, using a cumulative signal method based on statistical decision theory. Optimal detection of a signal relies on our knowledge of the properties of the signallocation, shape and amplitude. For signals with unknown parameters (e.g., transient radio sources), the detection performance is governed by our ability to accurately and precisely estimate the parameter values, and accurately model the data likelihood function. It is therefore crucial to understand the fundamental estimation limits of a particular instrument, before proceeding to determine the detection limits (see Wijnholds and van der Veen 2008, for a recent review of fundamental radio imaging limits). In this paper, we describe how to design an optimal source detector with visibility data. We then describe the form of this detector for realistic interferometers, including the effects of imperfect calibration, signal confusion and atmospheric phase fluctuations. 
As part of this process, and to investigate the detection limits of instruments, we derive fundamental estimation limits for measurement of source parameters with interferometers. In Paper II we explore particular algorithms for source detection with real interferometers and simulated and real datasets, using the estimation limit results and theory from Paper I. We particularly focus on the problem of optimally detecting slow radio transients, although the methods presented here are generally applicable to source detection. As such, the estimation precision results we derive are based on short integrations (8-10s), appropriate for the instruments under consideration, for which a transient detection test is performed at each output of the correlator.

In section 2 we introduce statistical decision theory, including methodology for designing an optimal detector. We then introduce the Cramer-Rao lower bound (CRB) on estimation precision, as a metric for evaluating the source parameter measurement precision of interferometers. We then discuss detection of a single point source (section 3.1), a single point source embedded in a field (section 3.2), and a single transient point source embedded in a field (section 3.3), with a visibility dataset and thermal noise. In section 4.1 we discuss the effect of calibration errors, source confusion, and atmospheric phase noise on signal estimation and detection, and in section 5 present a realistic detector for visibility data.

2. Statistical decision theory

2.1. Neyman-Pearson test and simple hypothesis testing

Statistical decision theory is the branch of mathematical statistics that describes the detection of signals in noise. Signal detection is underpinned by hypothesis testing: in the binary case, this means deciding between two hypotheses (signal present and signal absent).
For a given set of observational data, the likelihood that the data were obtained under each hypothesis is calculated, and the ratio of these probabilities is compared to a threshold. If the ratio is greater than the threshold, we decide that a signal has been detected. The value of the threshold is set according to the tolerance on the false alarm rate (the rate of false positives or misses). This is a likelihood ratio test (LRT), and is applicable for a deterministic signal in known noise. The likelihood is the probability of the data given a set of parameters. If the null hypothesis (signal absent) is denoted H_0, and the alternative hypothesis (signal present) is denoted H_1, then the hypotheses can be written as:

H_1: x[n] = s[n] + w[n] \quad (n = 1, \ldots, N)
H_0: x[n] = w[n], \qquad (1)

where s[n] is the known deterministic signal we wish to detect, and w[n] is the known noise. Under these two hypotheses, the likelihood ratio test for the dataset x[n] decides a signal is present if,

T(\mathbf{x}) = \frac{L(\mathbf{x}; H_1)}{L(\mathbf{x}; H_0)} > \lambda, \qquad (2)

where T(x) is the test statistic, λ is the threshold, and L(x) is the likelihood function (the probability distribution function parametrized by the model parameters). For example, if the signal is a DC level, A, in white Gaussian noise (WGN), N(0, σ²), the LRT is:

\frac{\exp\left[-\frac{1}{2\sigma^2}\sum_{n=1}^{N}(x[n]-A)^2\right]}{\exp\left[-\frac{1}{2\sigma^2}\sum_{n=1}^{N}x^2[n]\right]} > \lambda. \qquad (3)

Taking the logarithm of both sides, and incorporating non-data terms into the threshold, we decide H_1 if,

T(\mathbf{x}) = \frac{1}{N}\sum_{n=1}^{N}x[n] > \lambda'. \qquad (4)

In this simple case, the test statistic is the sample mean, which makes sense intuitively. We wish to maximise the probability of detection, subject to a given probability of false detection. Mathematically, the probability of detection, P_D, is the chance of deciding H_1 when the data are actually drawn from H_1, and is given by,

P_D \equiv P(H_1; H_1) = \int_{R_1} L(\mathbf{x}; H_1)\, d\mathbf{x}, \qquad (5)

where R_1 is the region of the likelihood function above the threshold.
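The DC-level detector of equations (3)-(5) can be sketched numerically. All values below (sample size, noise level, signal amplitude, false-alarm rate) are invented for illustration: the threshold on the sample mean is set from the chosen false-alarm rate, and Monte Carlo trials check the resulting detection probability.

```python
import numpy as np
from statistics import NormalDist

# Sketch of the Neyman-Pearson detector for a DC level A in WGN
# (equations 3-5). All numbers are invented.
rng = np.random.default_rng(1)
N, sigma, A, alpha = 64, 1.0, 0.5, 1e-3

# Under H0, T(x) = mean(x) ~ N(0, sigma^2/N); set lambda' so that P_FA = alpha.
lam = (sigma / np.sqrt(N)) * NormalDist().inv_cdf(1 - alpha)

# Theoretical detection probability for this threshold.
pd_theory = 1 - NormalDist().cdf((lam - A) / (sigma / np.sqrt(N)))

# Monte Carlo check of the false-alarm and detection rates.
trials = 20000
pfa_mc = (rng.standard_normal((trials, N)).mean(axis=1) > lam).mean()
pd_mc = ((A + rng.standard_normal((trials, N))).mean(axis=1) > lam).mean()
```

Tightening the false-alarm tolerance raises the threshold and lowers the detection probability, which is the trade-off the Neyman-Pearson criterion formalises.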
The false positive rate is given by,

P_{FA} \equiv P(H_1; H_0) = \int_{R_1} L(\mathbf{x}; H_0)\, d\mathbf{x} = \alpha. \qquad (6)

For a given probability of false detection, α, the Neyman-Pearson theorem states that the probability of detection, P_D, is maximised when we decide H_1 according to equation 2. Hence, for a given tolerance of false positive signals (chosen according to the detection task and the implications of detecting false positives), the threshold and detection performance are defined. The LRT therefore yields an optimal detector (P_D maximised for a given P_FA). To design an optimal detector, the likelihood function needs to accurately describe the data. In the following sections, we describe how to accurately model (i) the signal, and (ii) the noise and background properties of interferometric data.

2.2. Generalised Likelihood Ratio Test

The likelihood ratio test described in the previous section can be applied to known deterministic signals with additive known noise. In the case where some parameters are unknown, the values of these parameters need to be estimated before proceeding (e.g., source position, flux density). To achieve this, the likelihood function is maximized with respect to each of the unknown parameters, and the LRT is evaluated at the maximum likelihood estimates (MLEs). MLEs are asymptotically efficient: for large datasets, they achieve the optimal estimation precision and are unbiased. The hypothesis test is then termed the Generalised Likelihood Ratio Test (GLRT). As the number of unknown parameters increases, the detection performance is degraded. Similarly, as the number of samples increases, the variance of the parameter estimates will decrease, and the estimation will be more precise. In addition to signal parameters that are unknown, a signal model may include additional parameters, in which we are not interested, that nevertheless need to be estimated. These are nuisance parameters, and they further degrade detection performance. The clairvoyant detector (Kay 1998, Ch. 6) assumes perfect knowledge of all parameters, and is useful as a comparison to the real detection performance obtained with the GLRT. The clairvoyant detector is an optimal detector; the GLRT yields a sub-optimal detector, because estimation of unknown parameters is required. This observation is important when we wish to compare optimal detection performance with the performance we obtain with different detectors we might design.

2.3. Bayesian approach

In the case where some of the unknown parameters are random (as opposed to deterministic), one can use a Bayesian approach to remove them from the detector. In the Bayesian framework, each datum contains a realization of the random variable, and the likelihood function is conditioned on the parameter. One assigns prior PDFs to the random parameters, and determines the unconditional likelihood functions according to:

L(\mathbf{x}; H_1) = \int L(\mathbf{x}|\theta; H_1)\, p(\theta)\, d\theta, \qquad (7)

where L(x|θ; H_1) is the conditional PDF, p(θ) is the prior distribution, and the integration is performed over the random parameter. If the mean value of the random parameter is unknown, one can initially estimate this using MLE. Thus, instead of calculating the MLE of the parameter to completely specify the likelihood function, one can remove it via integration.

2.4. Detection performance

As described in section 2.1, the detection threshold, λ, is determined according to the acceptable false positive rate, and balances the probability of detection (P_D) with the probability of a false positive (P_FA). It is important to quantify the performance of a detector, for comparison to other detectors. The optimal detection performance, that of the clairvoyant detector, can be calculated and used as a comparison for the realised performance, as an objective means to measure the utility of a detector.

2.5. Estimation performance: Cramer-Rao lower bound

The complete specification of the likelihood function will require the estimation of some parameters.
Imprecise estimation will degrade detection performance. It is useful to have a sense of the ability to estimate the value of a parameter for a given dataset. To determine the theoretical optimal estimation performance with a given dataset, we can calculate the Cramer-Rao lower bound (CRB) on the precision of parameter estimates. The CRB calculates the precision with which a minimum-variance unbiased estimator could estimate a parameter value, using the information content of the dataset. It is computed as the square-root of the corresponding diagonal element of the inverse of the Fisher information matrix (FIM). The (ij)th entry of the FIM for a vector θ of unknown parameters is given by:

[I(\boldsymbol{\theta})]_{ij} = -E\left[\frac{\partial^2 \log L(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_i\,\partial\theta_j}\right], \qquad (8)

where E denotes the expectation value. For N independent samples in WGN and complex data, this expression simplifies to (Kay 1993),

[I(\boldsymbol{\theta})]_{ij} = 2\,\mathrm{Re}\left\{\frac{1}{\sigma^2}\sum_{n=1}^{N}\frac{\partial s^{H}[n;\boldsymbol{\theta}]}{\partial\theta_i}\,\frac{\partial s[n;\boldsymbol{\theta}]}{\partial\theta_j}\right\}. \qquad (9)

The CRB is a useful metric because it places a fundamental lower limit on the measurement precision of any parameter. In this work it will be used to gain an understanding of the fundamental limits of an instrument, and how these affect its estimation and detection performance. It has previously been used in astronomy to determine limits on optical astrometry with the WFPC2 camera aboard the Hubble Space Telescope (Adorf 1996), and with focal plane array bolometers (Saklatvala et al. 2008).

3. Detection of sources in visibility data

We describe here a method for detecting a point source within visibility data. In general, this requires detection of a source of unknown flux density, spectral index, location, arrival time and duration (in the case of a transient), contained within confounding (nuisance) signals within the field. As described in section 2.2, the method involves maximum likelihood estimation of the unknown parameters, followed by a GLRT to decide the presence of a signal.
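The FIM of equation (9) can be evaluated directly for a toy model. The sketch below is not from the paper: it assumes an invented 1-D "interferometer" with signal s[n] = B exp(-2πi u_n l) and unknowns (B, l), builds the FIM from the analytic derivatives, and checks the resulting CRBs against the closed forms for this simple model (ΔB ≥ σ/√(2N), Δl ≥ σ/(2√2 πB √Σu²)).

```python
import numpy as np

# Numeric Fisher information matrix (equation 9) for a toy 1-D model:
# s[n] = B * exp(-2j*pi*u_n*l), unknowns theta = (B, l).
# Baseline layout and parameter values are invented for illustration.
rng = np.random.default_rng(2)
u = rng.uniform(-500.0, 500.0, size=32)        # baseline lengths in wavelengths
B, l, sigma = 2.0, 1e-3, 0.1

phase = np.exp(-2j * np.pi * u * l)
derivs = [phase,                               # ds/dB
          B * (-2j * np.pi * u) * phase]       # ds/dl

I = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        I[i, j] = (2.0 / sigma**2) * np.real(np.sum(np.conj(derivs[i]) * derivs[j]))

# CRBs: square roots of the diagonal of the inverse FIM.
crb = np.sqrt(np.diag(np.linalg.inv(I)))

# Closed-form bounds for this particular model, for comparison.
dB_ref = sigma / np.sqrt(2 * len(u))
dl_ref = sigma / (2 * np.sqrt(2) * np.pi * B * np.sqrt(np.sum(u**2)))
```

Note that for this model the amplitude-position off-diagonal term vanishes, mirroring the block-diagonal structure derived for the full point-source FIM below.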
We begin with the simplest problem of detecting a single point source in an empty field, and then add complexity. It is assumed initially that there are no calibration errors, source confusion or atmospheric effects present in the dataset. These errors complicate the formulation of the problem, and they will be considered in section 4.1.

3.1. Detection of a single point source in an empty field

Estimation of unknown parameters

Detection of a single point source in visibility data requires estimation of unknown parameters, followed by detection of a complex signal in WGN. We assume we use data from one integration step, F frequency channels and N baselines. The location of the source and its amplitude (spectral flux density) are unknown, and need to be estimated before hypothesis testing. The data are modelled under each hypothesis as:

H_1: \tilde{x}[f,n] = \tilde{s}[f,n] + \tilde{w}[f,n] \quad (n = 1,\ldots,N),\ (f = 1,\ldots,F)
H_0: \tilde{x}[f,n] = \tilde{w}[f,n], \qquad (10)

where the tilde denotes complex quantities. We will write the signal and data as real and imaginary components, under which the noise can be modelled as white Gaussian. The signal, \tilde{s}[f,n], is the complex visibility for channel f and baseline n, and is given by (Thompson et al. 2004):

\tilde{s}[f,n] = V(u_{fn}, v_{fn}) = \int\!\!\int A(l',m')\, I(l',m') \left(\frac{\nu(f)}{\nu_0}\right)^{\alpha} \exp\left[-2\pi i (u_{fn} l' + v_{fn} m')\right] dl'\, dm', \qquad (11)

where A(l',m') and I(l',m') are the antenna response function and source intensity function at sky position (l',m'), and the spectral dependence is modelled as a power law with index α, normalized by the base frequency, ν_0. Assuming a point source located at (l' = l, m' = m), the visibility function becomes:

V(u_{fn}, v_{fn}) = A(l,m)\, I(l,m) \left(\frac{\nu(f)}{\nu_0}\right)^{\alpha} \exp\left[-2\pi i (u_{fn} l + v_{fn} m)\right]. \qquad (12)

The antenna response and source flux density functions are nuisance parameters for detection, and we combine them to form one scaling factor, B(l,m) = A(l,m) I(l,m), which is independent of baseline.
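This point-source visibility model can be simulated directly. The sketch below is illustrative only: the array geometry, channel frequencies, source parameters and noise level are all invented, and the combined scaling B(l,m) is treated as a single number.

```python
import numpy as np

# Noiseless model visibilities of a single point source (equation 12, with
# B = A*I folded into one scaling factor), over N baselines and F channels.
# All geometry and parameter values are invented for illustration.
rng = np.random.default_rng(3)
N, F = 28, 8
u = rng.uniform(-300, 300, size=(F, N))        # u_fn in wavelengths
v = rng.uniform(-300, 300, size=(F, N))        # v_fn in wavelengths
nu = np.linspace(80e6, 88e6, F)                # channel frequencies (Hz)
nu0 = 80e6                                     # base frequency
B_lm, alpha = 5.0, -0.7                        # flux scaling (Jy) and spectral index
l, m = 2e-3, -1e-3                             # direction cosines (radians)

s = B_lm * (nu[:, None] / nu0) ** alpha * np.exp(-2j * np.pi * (u * l + v * m))

# Thermal noise added independently to the real and imaginary parts,
# identically for all baselines and channels, as assumed in the text.
sigma = 0.5
x = s + sigma * (rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape))
```

Since only the phase of each visibility carries the position information, the amplitude of every model visibility within a channel is identical, set by the flux scaling and the spectral weighting.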
Hence, our model is:

\tilde{s}[f,n] = V(u_{fn}, v_{fn}) = B(l,m)\left(\frac{\nu(f)}{\nu_0}\right)^{\alpha}\exp\left[-2\pi i (u_{fn} l + v_{fn} m)\right]. \qquad (13)

We form the likelihood function under WGN, assuming initially that the noise variance is known and identical for all baselines and channels. The likelihood function, which is the joint PDF for F channels and N baselines, is given by (Kay 1998):

L(\mathbf{x}; H_1) = \prod_{n=1}^{N}\prod_{f=1}^{F}\frac{1}{\pi\sigma^2}\exp\left[-\frac{1}{\sigma^2}\left(\tilde{x}[f,n]-\tilde{s}[f,n]\right)^{*}\left(\tilde{x}[f,n]-\tilde{s}[f,n]\right)\right] = \frac{1}{\pi^{NF}\sigma^{2NF}}\exp\left[-\frac{1}{\sigma^2}(\mathbf{x}-\mathbf{s})^{H}(\mathbf{x}-\mathbf{s})\right], \qquad (14)

where H denotes Hermitian conjugate (complex conjugate transpose), and the product has been collected in the matrix inner product. Substituting the signal, equation 13, into the likelihood function yields,

L(\mathbf{x}; H_1) = \frac{1}{\pi^{FN}\sigma^{2FN}}\exp\left(-\frac{Z}{\sigma^2}\right), \qquad (15)

where

Z = \sum_{n=1}^{N}\sum_{f=1}^{F}\left(\tilde{x}[f,n] - B(l,m)\left(\frac{\nu(f)}{\nu_0}\right)^{\alpha}\exp[-2\pi i(u_{fn}l+v_{fn}m)]\right)^{*} \times \left(\tilde{x}[f,n] - B(l,m)\left(\frac{\nu(f)}{\nu_0}\right)^{\alpha}\exp[-2\pi i(u_{fn}l+v_{fn}m)]\right). \qquad (16)

To obtain the values of the unknown parameters, (B, α, l, m), the maximum likelihood estimates of these parameters, (\hat{B}, \hat{\alpha}, \hat{l}, \hat{m}), are determined. Methods for determining these estimates are presented in Paper II. Here, we explore the precision with which these parameters can be estimated for a real instrument using the CRB formalism described in Section 2.5.

Estimation precision for a point source

For the signal, equation 13, the FIM for parameters (l, m, α, B) is given by:

[I(\boldsymbol{\theta})] = \frac{2}{\sigma^2}
\begin{pmatrix}
4\pi^2 B^2 I_{u^2} & 4\pi^2 B^2 I_{uv} & 0 & 0 \\
4\pi^2 B^2 I_{uv} & 4\pi^2 B^2 I_{v^2} & 0 & 0 \\
0 & 0 & N B^2 I_{\nu^2} & N B I_{\nu} \\
0 & 0 & N B I_{\nu} & N I_1
\end{pmatrix}, \qquad (17)

where

I_{u^2} = \sum_{n=1}^{N}\sum_{f=1}^{F} u_{fn}^2 \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha}, \qquad (18)

I_{uv} = \sum_{n=1}^{N}\sum_{f=1}^{F} u_{fn} v_{fn} \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha}, \qquad (19)

I_{v^2} = \sum_{n=1}^{N}\sum_{f=1}^{F} v_{fn}^2 \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha}, \qquad (20)

I_{\nu^2} = \sum_{f=1}^{F} \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha} \left[\log(\nu(f)/\nu_0)\right]^2, \qquad (21)

I_{\nu} = \sum_{f=1}^{F} \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha} \log(\nu(f)/\nu_0), \qquad (22)

and

I_1 = \sum_{f=1}^{F} \left(\frac{\nu(f)}{\nu_0}\right)^{2\alpha}. \qquad (23)

Interestingly, the FIM does not depend directly on the source location (only indirectly, through the antenna response function).
This is because it is only the phase difference between antennas that is important. Instead, the baseline projections (uv) weight the information in each element of the summations. Intuitively this is because the longer baselines are sensitive to smaller changes in the source position, and therefore are weighted more highly in the information measure. Because the intensity scaling, B, is a linear multiplier for each signal, the information carried in the data about it scales directly with the number of baselines, and the noise variance. There is no covariance between the position parameters and the signal amplitude parameters. Therefore, any prior information on the values of one group of parameters will not affect one's ability to estimate the other group. Conversely, any factor that degrades the estimation of one group will not affect the other group. Inverting the FIM yields the following lower bounds on the precision of the parameter estimates:

\Delta l \geq \frac{\sigma I_{v^2}^{1/2}}{2\sqrt{2}\,\pi B}\left[I_{v^2} I_{u^2} - (I_{uv})^2\right]^{-1/2} \qquad (24)

\Delta m \geq \frac{\sigma I_{u^2}^{1/2}}{2\sqrt{2}\,\pi B}\left[I_{v^2} I_{u^2} - (I_{uv})^2\right]^{-1/2} \qquad (25)

\Delta\alpha \geq \frac{\sigma I_1^{1/2}}{\sqrt{2N}\,B}\left[I_{\nu^2} I_1 - (I_{\nu})^2\right]^{-1/2} \qquad (26)

\Delta B \geq \frac{\sigma I_{\nu^2}^{1/2}}{\sqrt{2N}}\left[I_{\nu^2} I_1 - (I_{\nu})^2\right]^{-1/2} \qquad (27)

Note that the sky positions here are direction cosines, and are defined in radians for small angles, and B and σ have the same units (i.e., Jy). This is a general expression for the theoretical maximum precision (estimation performance) on the position, amplitude and spectral index of a source in visibility data at (l, m), using F frequency channels and N baselines. The noise, σ, is thermal noise per channel and baseline. These expressions exclude any systematic effects. They are the lower bounds on estimation of these parameters for an estimator that uses all of the available information in an unbiased manner.
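The bounds (24)-(27) are simple to evaluate numerically for any array layout. The sketch below uses an invented snapshot array (a mock 32-antenna layout with uniformly scattered uv samples, flat spectrum) purely to show the mechanics; none of the numbers correspond to a real instrument.

```python
import numpy as np

# Direct evaluation of the bounds (24)-(27) for an invented snapshot array.
# Units: sigma and B in Jy; u, v in wavelengths, so dl and dm are in radians.
rng = np.random.default_rng(5)
N, F = 496, 32                        # e.g. 32 antennas -> 32*31/2 baselines
u = rng.uniform(-1500, 1500, (F, N))
v = rng.uniform(-1500, 1500, (F, N))
nu = np.linspace(140e6, 170e6, F)
nu0, alpha, B, sigma = 140e6, 0.0, 1.0, 20.0

w = (nu[:, None] / nu0) ** (2 * alpha)
Iu2 = np.sum(w * u**2)                # equation (18)
Iv2 = np.sum(w * v**2)                # equation (20)
Iuv = np.sum(w * u * v)               # equation (19)
r = (nu / nu0) ** (2 * alpha)
Inu2 = np.sum(r * np.log(nu / nu0) ** 2)   # equation (21)
Inu = np.sum(r * np.log(nu / nu0))         # equation (22)
I1 = np.sum(r)                             # equation (23)

det_uv = Iv2 * Iu2 - Iuv**2
dl = sigma * np.sqrt(Iv2) / (2 * np.sqrt(2) * np.pi * B) / np.sqrt(det_uv)
dm = sigma * np.sqrt(Iu2) / (2 * np.sqrt(2) * np.pi * B) / np.sqrt(det_uv)
dalpha = sigma * np.sqrt(I1) / (np.sqrt(2 * N) * B) / np.sqrt(Inu2 * I1 - Inu**2)
dB = sigma * np.sqrt(Inu2) / np.sqrt(2 * N) / np.sqrt(Inu2 * I1 - Inu**2)
```

As the text notes, for α = 0 and known spectral index the amplitude bound collapses to σ/√(2FN); with the spectral index unknown, the amplitude-index covariance makes dB strictly larger than this.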
Equation 24 suggests that a centrally concentrated array will have poorer astrometric precision than an array that is not centrally concentrated but has the same number of baselines and the same maximum baseline (i.e., one with uniform uv coverage). In conventional radio astronomy terminology, a naturally-weighted synthesized beam from a concentrated array will be broader than that of a uniform uv array. These results agree with, and quantify, our intuitive expectations. The covariance (non-zero off-diagonal elements of the FIM) between the signal amplitude and spectral index degrades the estimation performance for each parameter individually. If the spectral index is known, and for α = 0, the precision on the signal amplitude is given by \Delta B \geq \sigma/\sqrt{2FN}, i.e., it is proportional to the noise, integrated over all antennas and the total bandwidth. Introducing uncertainty in the value of the spectral index degrades the estimation of the signal amplitude, and vice versa.

baselines;

• Increasing the number of frequency channels degrades amplitude estimation performance, but improves spectral index estimation performance, due to the covariance of these parameters;

• The position parameters have the same functional form, and improvement in one generally implies improvement in the other (unless the improvement is due to increased interferometer extent along only one axis);

• In general, the uncertainty in the value of the spectral index is large compared with typical values, and this poor estimation is due to the high noise (short integration time) in this example;

• The bounds for m are slightly higher than for l due to the reduced spatial extent of the MWA 32T array in the y-direction (elongated in the x-direction), but both are ∼1 arcminute for S/N=5;

• The CRBs on position estimates are much smaller than the synthesized beam of the telescope.
This is because the CRB represents the maximal precision, and thus represents the performance of the optimal estimator (or deconvolution algorithm), if it exists (which it may not). The synthesized beam, or half-power beam width, is a coarse measure that only considers the contribution from the longest baseline in the array, without reference to the spatial location information carried by the shorter baselines.

These results are for a "perfect" interferometer, with no calibration errors and no systematic bias. Once additional errors are taken into account, the effective noise term (σ) will increase, and the estimation performance will be degraded. This will be closer to the real situation in an interferometer, but will still omit any errors introduced by systematic bias, i.e., the CRB is applicable only for unbiased estimation.

3.2. Inclusion of other sources in the field

Up to this point, we have considered the field to contain a single source, with unknown position and amplitude. We now generalise these results to the more realistic situation where the field contains many other static sources. From the perspective of transient detection, these static sources are nuisance signals that need to be removed. Removal of these signals might involve signal subtraction, or modelling, and this is a large field of current research in itself. Algorithms that account for static sources will be explored in Paper II. These sources are modelled in the visibility data, and included in the GLRT as known parameters, viz.:

\frac{p(\mathbf{x}; \hat{B}, \hat{\alpha}, \hat{l}, \hat{m}, \boldsymbol{\theta}, H_1)}{p(\mathbf{x}; H_0)} > \lambda, \qquad (28)

where θ is a vector that describes the parameters of the nuisance sources. The ML estimation of the source position and amplitude can proceed under this scheme as described above.
For K known static sources, and one unknown source, the signal is given by the sum of the complex visibilities for each source,

s[f, n] = B(l, m) (ν(f)/ν₀)^α exp[−2πi(u_fn l + v_fn m)] + Σ_{k=1}^{K} B_k exp[−2πi(u_fn l_k + v_fn m_k)].  (29)

Unknown signal arrival time

Transient sources, by their nature, appear at unknown times (for non-periodic sources) and remain visible for unknown durations. Including these parameters in the modelling necessarily requires extending the discussion to multiple integration steps. This has the benefit of increasing the number of data samples, thereby improving parameter estimation, but is complicated by the evolution of (u, v) as the Earth rotates. The ML estimation and GLRT detection scheme described above, applied at each integration timestep (output of the correlator), naturally allows for the appearance of a signal at a given timepoint. When no signal is present, the amplitude estimates will lie within the noise level and the position estimates within the CRB of the phase centre. The appearance of a (sufficiently strong) signal will produce non-zero estimates, and the test statistic will exceed the detection threshold. At this point, it is statistically advantageous to include all previous timesteps in which a signal has been present in the ML estimation of parameters. The position of the source, (l, m), and its amplitude, B(l, m)(ν(f)/ν₀)^α, are constant over time. The GLRT then becomes:

L(x, B, α, l, m, θ; H₁) = (1/(πσ²)^{NFT}) exp[−(1/σ²) Σ_{t=1}^{T} Σ_{n=1}^{N} Σ_{f=1}^{F} Z_fnt],  (30)

where Z_fnt is given by equation 31. (Note that if the signal has disappeared, it will be more difficult to detect this if all of the previous signal-present information is used.)
Z_fnt = [x[f, n, t] − B(l, m)(ν(f)/ν₀)^α exp(−2πi(u_fnt l + v_fnt m)) − Σ_{k=1}^{K} B_k exp(−2πi(u_fnt l_k + v_fnt m_k))]* × [x[f, n, t] − B(l, m)(ν(f)/ν₀)^α exp(−2πi(u_fnt l + v_fnt m)) − Σ_{k=1}^{K} B_k exp(−2πi(u_fnt l_k + v_fnt m_k))],  (31)

i.e., the squared magnitude of the residual visibility.

Other sources of uncertainty

Up to this point we have considered only thermal noise in the visibility data; however, this is a simplification of reality. Amplitude and phase calibration errors, background confusing sources, and atmospheric/ionospheric effects on the signal wavefront alter the noise properties of the visibility data. Note that here we use the term 'noise' in the general sense, including statistical noise, system noise and unresolved background. Accurate characterization of these effects is required to design an optimal detector. These effects also introduce additional uncertainty into the modelling, and will degrade the estimation and detection performance. In this section we use the Cramer-Rao bound to determine the precision on measuring calibration gain parameters, and include this additional uncertainty in the likelihood function describing the data.

To include the additional uncertainty in the likelihood function, we add a term to the thermal noise that quantifies the uncertainty for each baseline and channel. In general, this is a covariance matrix, C_c, where non-zero off-diagonal terms quantify baseline-baseline covariances. The likelihood function under the signal-present hypothesis becomes:

L(x; H₁) = (1/(π^{FN} det(C_c + σ²I))) exp[−(x − s)^H (C_c + σ²I)^{−1} (x − s)].  (32)

Multiplying the data model vector by the inverse of the covariance matrix prewhitens the data (it removes the correlations between baselines, and weights each baseline according to the amount of information available about it).

Calibration errors, confusion and atmospheric phase noise

Liu et al.
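The prewhitening step can be illustrated numerically. A short NumPy sketch (the covariance values are invented) checks that applying the inverse Cholesky factor of C to correlated samples leaves them with unit, uncorrelated covariance:

```python
import numpy as np

# Toy 2-baseline covariance with an off-diagonal (baseline-baseline) term.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
W = np.linalg.inv(np.linalg.cholesky(C))   # whitening transform, W C W^T = I

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0.0, 0.0], C, size=20000)  # correlated samples
w = x @ W.T                                             # prewhitened samples
print(np.round(np.cov(w.T), 2))                         # ~ identity matrix
```

The factor W plays the role of C^{-1/2} in the likelihood of equation 32: after whitening, each (transformed) baseline carries independent, unit-variance noise.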
(2010) have recently described errors introduced by different calibration techniques, extending earlier work by Cornwell (1981), Cornwell and Fomalont (1989) and Wieringa (1992). Calibration errors can be classified into two types: (1) systematic bias, due to a biased calibration estimation method; and (2) estimation uncertainty (imprecision), due to limited information (with the Cramer-Rao bound as the lower limit). Systematic bias will shift the position of sources, but may not increase the model uncertainty. Bias on an antenna-by-antenna basis will blur the positions of signals. We will consider unbiased estimation precision, and represent the amplitude and phase calibration errors as additive parameters with zero mean and known covariance.

Low-resolution instruments may suffer from source confusion, whereby the density of background sources is sufficient to produce overlapping sources in the image plane through their primary signal and sidelobes. Confusion-limited instruments have a natural detection limit that corresponds to the confusion level, as opposed to the thermal noise level, which may be lower. The confusing signal is structured and behaves differently to thermal noise, because it corresponds to real signals. The MWA 32T and 512T are confusion-limited instruments. In general, the ASKAP instrument will not be confusion-limited, due to its higher angular resolution and higher operating frequencies compared with the MWA 32T (this may not be the case for long integrations, but will be when considering each integration timestep independently). However, the higher frequencies are subject to tropospheric fluctuations, yielding visibilities that include atmospheric phase noise. This noise acts to blur the position of the source. The phase noise is a function of the baseline length, d, and its variance can be modelled by (Thompson et al.
2004):

σ²_atmos = 4π² a² d^{2β} / λ²,  (33)

where β is the index of the structure function describing the fluctuations (β = 5/6 for a Kolmogorov spectrum), and a is a scaling factor. The phase noise is largest for the longest baselines. A typical value for a of 10⁻⁶ corresponds to an rms phase noise of 2.5 degrees for the longest ASKAP baseline. Confusion will increase the effective noise level in the visibilities, and the rms confusing signal can be added in quadrature to the thermal noise. Atmospheric phase noise introduces additional uncertainty in the phase, and therefore in the argument of the trigonometric functions describing the signal. Before including these effects in the detector, we derive the impact of calibration on the precision of source parameter estimation.

There are two steps in determining the effect of calibration uncertainty on estimating the parameters of a source. The first is to determine how precisely calibration can be performed, given a set of calibrators in the field. The second is to include this uncertainty in the overall system noise when estimating the source parameters.

Form of covariance matrix

Primary (amplitude) calibration is achieved by observing a source of known flux density, typically at the phase centre, and adjusting the antenna-based gains to yield the known flux density as an output. Secondary (phase) calibration can be performed in two ways: (1) observation of a bright point source at the phase centre, and adjustment of the antenna-based phases to be identically zero; and (2) self-calibration, using sources available in the field to produce a consistent phase solution. For single-dish instruments and interferometers with a small field-of-view, the former technique yields adequate results. For instruments with large fields-of-view, where multiple phase solutions are required (the calibration varies across the field), self-calibration will produce more accurate results at the edge of the field.
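Equation 33 can be checked against the quoted ASKAP figure. In the sketch below (plain Python), the 6 km baseline length and the ∼0.21 m wavelength at 1.4 GHz are our assumed values for the longest ASKAP baseline:

```python
import math

a = 1e-6            # scaling factor (value quoted in the text)
beta = 5.0 / 6.0    # Kolmogorov structure-function index
d = 6000.0          # assumed longest ASKAP baseline, metres
lam = 0.21          # assumed wavelength at ~1.4 GHz, metres

# Equation 33: sigma^2 = 4 pi^2 a^2 d^(2 beta) / lambda^2, so the rms is:
sigma_atmos = 2.0 * math.pi * a * d ** beta / lam
print(f"rms atmospheric phase: {math.degrees(sigma_atmos):.1f} deg")
```

This evaluates to roughly 2.4 degrees, close to the ∼2.5 degree rms quoted in the text.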
Observations at frequencies below ∼300 MHz have the additional complication of propagation delays introduced by the ionosphere. This delay shifts the position of sources in the sky, but does not blur the image. In this case, one forms a simple time-dependent model for the ionospheric phase screen from sources in the field-of-view, and performs a 'field-based' calibration (Kassim et al. 2007). This requires a high cadence of phase calibration, and, to be practical, necessitates the field-based calibration method. At higher frequencies, phase noise caused by the neutral troposphere blurs the source position. An adequate distribution of secondary calibrators across the field should produce unbiased phase solutions, with the spatial variation accounted for in the solutions. The phase errors in this case reduce to measurement errors, based on the number and strength of the sources available. An inadequate density of sources may lead to large uncertainty in the phase calibration.

Primary (bandpass) calibration typically employs a very strong source, occurs relatively infrequently (∼hours), and is performed for each frequency channel. The high source signal-to-noise ratio will yield high precision on the calibration. The field-based calibration, however, is subject to short-timescale atmospheric fluctuations. For these instruments it will employ sources of varying strengths, occur frequently (∼10 s), and will involve a simultaneous solution for all antennas across the whole bandwidth. Here we will consider the effect of field-based calibration on source estimation. The covariance matrix element for baselines i, j is given by:

C_ij = E[(x_i − x̄_i)(x_j − x̄_j)],  (34)

where the expectation value is taken over multiple realisations of the calibration. With an ionospheric model and a given density of static field sources, the covariance matrix can be approximated analytically.
For more complex and realistic distributions, simulations can be used to calculate the covariance matrix empirically. Real datasets may also be used to quantify the covariance matrix: each independent integration in an observation of a static field can be used to approximate an independent noise realization, and the covariance matrix estimated using equation 34. For the purposes of this paper, and to demonstrate the magnitude and impact of these errors on source estimation and detection, we form an approximate analytical covariance matrix based on the theoretical precision with which calibration can be performed, and on known expressions for the magnitude and distribution of the uncertainty introduced by confusion and atmospheric phase noise.

To approximate the form of the covariance matrix, C_c, we consider the measurement errors for amplitude and phase calibration for a given antenna, and use error propagation to express the variance for a given baseline. Errors in radio astronomy are typically antenna-based; however, the covariance matrix we require needs to describe uncertainty on a baseline basis (since these are the data we measure). Note that the formulation of the problem and the error propagation take into account the connectivity of antennas: i.e., an error on one antenna will propagate through all of the baselines it forms with every other antenna.

Cramer-Rao bounds on estimation of gain parameters

We begin by calculating the theoretical optimal precision with which the amplitude and phase calibration can be measured for a single antenna, for an M-antenna interferometer and N_c calibrators, with positions (l_Nc, m_Nc) and flux densities B_Nc (ν(f)/ν₀)^{α_Nc}. We write the complex gain for baseline n, comprising antennas β and γ, as:

G_n = G̃_β G̃*_γ = (1/(b_β b_γ)) exp[2πi(φ_β − φ_γ)],  (35)

where b_β and φ_β are the amplitude and phase gain parameters for antenna β.
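The empirical route described above, estimating C from independent integrations via equation 34, can be sketched in a few lines of NumPy (the mixing matrix that correlates the toy baselines is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 3 baselines, 5000 independent integrations of a static field.
# A made-up mixing matrix correlates baselines 1 and 2.
mix = np.array([[1.0, 0.3, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
x = rng.normal(size=(5000, 3)) @ mix.T

# Equation 34: C_ij = E[(x_i - xbar_i)(x_j - xbar_j)], with the expectation
# approximated by an average over the independent integrations.
xm = x - x.mean(axis=0)
C = xm.T @ xm / x.shape[0]
print(np.round(C, 2))
```

The estimated C converges to the true covariance (mix · mixᵀ here) as the number of integrations grows.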
The joint PDF for all of the baselines is proportional to:

L(x) ∝ exp[−(1/2σ²) Σ_{f=1}^{F} Σ_{β=1}^{M} Σ_{γ≠β}^{M} Z^H_βγ Z_βγ],  (36)

where

Z_βγ = x̄_βγ − Σ_{i=1}^{Nc} B_i (ν(f)/ν₀)^{α_i} b_β b_γ exp[−2πi(u_fβγ l_i + v_fβγ m_i + φ_β − φ_γ)],  (37)

and there are M antennas. The Fisher Information Matrix is a 2M × 2M matrix for estimating all of the b and φ parameters. There are no covariances between the b and φ parameters, so the FIM is equivalent to two M × M matrices. Therefore, there are two FIMs to invert, FIM_b and FIM_φ. Constructing FIM_φ yields a singular matrix, because the phases are relative quantities. Typically, the phase gain for one antenna is set to zero, and the others are defined relative to it. Hence, we set φ₁ = 0 and remove this parameter from the estimation (it is assumed completely specified). FIM_φ therefore becomes an (M − 1) × (M − 1) matrix. We derive the CRBs in Appendix A and present the solutions here. The general solutions for the baseline precision, with N_c calibrators and M antennas, are:

∆b_αβ ≥ (σ/√2) [ Σ_{f=1}^{F} ( Σ_{i=1}^{Nc} B_i² (ν(f)/ν₀)^{2α_i} + Σ_{i=1}^{Nc} Σ_{j≠i}^{Nc} B_i B_j (ν(f)/ν₀)^{α_i+α_j} cos 2π(u_fαβ(l_j − l_i) + v_fαβ(m_j − m_i)) ) ]^{−1/2}  (38)

∆φ_αβ ≥ √[ (σ²/(2(2π)²)) ( ε^{ab...z}_{≠α,≠β} A_{1,a} A_{2,b} ... A_{M−1,z} ) / ( ε^{ab...z} A_{1,a} A_{2,b} ... A_{M−1,z} ) ],  (39)

where ε is the Levi-Civita permutation symbol, (a, b, ..., z) ∈ [1, M − 1], there are implicit summations over all indices, and the A_{a,b} are the FIM matrix elements, given by

A_{a,b} = b_a² Σ_{k≠a} b_k² X_ak   for a = b,
A_{a,b} = −b_a² b_b² X_ab   for a ≠ b,  (40)

and X_ab is given by:

X_ab = Σ_{f=1}^{F} [ Σ_{i=1}^{Nc} B_i² (ν(f)/ν₀)^{2α_i} + Σ_{i=1}^{Nc} Σ_{j≠i}^{Nc} B_i B_j (ν(f)/ν₀)^{α_i+α_j} cos 2π(u_fab(l_j − l_i) + v_fab(m_j − m_i)) ].  (41)

The expression for the phase uncertainty is not easy to implement; in practice, it is far simpler to form the FIM, invert it numerically, and extract the baseline-based uncertainties using error propagation on the inverse FIM elements.
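The numerical route recommended above can be sketched directly: build FIM_φ from the A_{a,b} elements of equation 40 (up to the overall prefactor), invert it, and read variances from the diagonal of the inverse. In this toy version the gain amplitudes are unity, and X_ab is replaced by a made-up constant standing in for the calibrator sum of equation 41:

```python
import numpy as np

M = 5                          # antennas; phi_1 is fixed to zero
b = np.ones(M)                 # unit amplitude gains
X = 2.0 * np.ones((M, M))      # toy stand-in for the calibrator sum, eq. 41

# (M-1) x (M-1) FIM over phi_2..phi_M, with elements A_ab from equation 40.
A = np.empty((M - 1, M - 1))
for i, a in enumerate(range(1, M)):
    for j, c in enumerate(range(1, M)):
        if a == c:
            A[i, j] = b[a]**2 * sum(b[k]**2 * X[a, k]
                                    for k in range(M) if k != a)
        else:
            A[i, j] = -b[a]**2 * b[c]**2 * X[a, c]

cov = np.linalg.inv(A)         # unscaled antenna-phase covariance matrix
print(np.round(np.diag(cov), 3))
```

With these toy numbers each antenna-phase variance comes out to 0.2 (in units of the omitted prefactor σ²/(2(2π)²)); baseline phase uncertainties then follow by error propagation over the pair of antennas forming each baseline.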
Equations 38-39 are the baseline-based gain precision limits, obtained by error propagation (including covariances) from the antenna-based uncertainties. These expressions make sense intuitively. Ignoring the cross-terms initially, the solution scales inversely with the total calibrator signal strength (Σ B_i). The amplitude precision depends solely on the antennas forming the baseline, whereas the phase precision depends on the relative contributions from all baselines involving the antennas in question. This is because the phase is a relative quantity. Increasing the number of antennas improves the estimation precision, as does increasing the signal strengths. The cross-terms weight the contributions from individual antennas, according to the baseline projections on the vector between the calibrator sources.

Up to this point, the noise parameter, σ, has referred to the thermal noise, which, for the MWA 32T, is ∼15 Jy per visibility in each coarse 1.28 MHz channel. However, the MWA will be confusion-limited, and the actual 'system noise' will be higher due to the rms fluctuations generated by the confusing sources. Assuming a confusion level of 1 Jy/beam, this corresponds to ∼100 Jy rms in each visibility. These 'noise' terms are independent, and can be added in quadrature to produce an overall system noise of ∼100 Jy in each visibility.

In Figure 3, each calibrator in the lower panels has B = 3.5 Jy, corresponding to S/N = 5 for an 8 s integration. The precision here scales as the inverse square-root of the number of calibrators. In all cases, the amplitude gain parameters (b) are set to unity. Note that these plots are for a particular baseline; in general, the plots could vary substantially between baselines. If, for example, there were little calibrator information for a particular antenna, the precision would be degraded for the visibilities across all of the baselines it forms.
Therefore, the connectivity (correlation) of the antennas is naturally accounted for in this formalism.

Inclusion of calibration uncertainty in estimation of source parameters

Having quantified the uncertainty that field-based calibration introduces into the data, the next step is to incorporate this additional uncertainty into the measurement of the parameters of a source. Previously, the Cramer-Rao bounds on estimating source parameters assumed only thermal noise in the system. To this noise we now add components that quantify the additional uncertainty. There are two sets of uncertainties to be introduced: a covariance matrix, C_c, that quantifies the amplitude uncertainty, and a set of phase parameters that quantify the phase uncertainty (broadening the overall likelihood function acts on the real and imaginary components of the data, and therefore cannot easily include errors on the phase). For a general covariance matrix and complex data, the general expression for the Fisher Information Matrix becomes:

[I(θ)]_ij = tr[ C⁻¹(θ) (∂C(θ)/∂θ_i) C⁻¹(θ) (∂C(θ)/∂θ_j) ] + 2 Re[ (∂s^H(θ)/∂θ_i) C⁻¹(θ) (∂s(θ)/∂θ_j) ],  (42)

where we write the covariance matrix, C, as:

C = σ²I + C_c,  (43)

and C_c is the covariance matrix due to the amplitude calibration uncertainty. Note that the noise term, σ, is the thermal noise; we assume that the background sources have been modelled and subtracted (including confusing sources). In practice, some level of background source will remain, and this too can be included in the modelling. The construction of the covariance matrix C_c reflects the calibration uncertainties on each baseline (equations 38-39). For the pedagogical case we are considering here, we assume that the calibration process does not introduce any correlations between baselines or channels, other than through the common antenna gain term. This is equivalent to asserting that there are no baseline-based errors. In reality, the off-diagonal terms will be non-zero, but small.
We write the nth component of the diagonal covariance matrix as:

[C_c]_n = B² ∆b_n²,  (44)

where B is the strength of the source and ∆b_n is the calibration amplitude precision for baseline n. The source strength appears here so that the expression has the correct units, and it reflects the dependence of the absolute scale of the calibration errors on the source strength. Under this scheme, the system noise for low signal-to-noise ratio sources will be dominated by the thermal noise, whereas high signal-to-noise sources will have a relatively larger calibration error component.

The phase uncertainty is modelled as an additional parameter, ψ_n, in the argument of the exponential in the signal. This is a random (as opposed to deterministic) parameter, for which we possess prior knowledge (the phase calibration uncertainty), and it depends on the baseline, n. Note that we are referring here to uncertainty in a statistical sense: we do not wish to estimate the actual phase fluctuations for each antenna (which are constrained by closure phase, and are nuisance parameters), but instead want to understand the additional uncertainty they introduce. Instead of estimating the four deterministic parameters of the transient source (B, α, l, m), we simultaneously estimate these parameters and the N random phase parameters, ψ_n. We use the prior knowledge of how these parameters are distributed to include additional information in a modified Fisher Information Matrix. The CRLB cannot be extended easily to include prior information. An equivalent expression for random parameters is available using a Bayesian approach, in which the probability distribution function describing the data includes the probability distribution function of the parameter. This approach allows prior information on the value of the parameter to be incorporated into the bound.
Rockah and Schultheiss (1987) introduced the Hybrid Cramer-Rao lower bound (HCRLB) as an extension of the CRLB that allows estimation of both random and deterministic parameters. The probability distribution functions of the random parameters can contain prior information on the distribution of those parameters, improving the estimation performance. In practice, this is achieved by the Fisher Information containing contributions from both the data (the classical CRLB) and the prior information. The modified FIM is given by:

I(θ)′ = E_ψ[I(θ)] + I_pr(ψ),  (45)

where I(θ) is the classical (data) FIM and I_pr(ψ) is the prior information, whose ij component is given by:

E_ψ[ (∂ log p(ψ)/∂ψ_i)^H (∂ log p(ψ)/∂ψ_j) ].  (46)

The expectation over the random parameter in the data component is often omitted for tight prior distributions, resulting in a modified FIM that is the sum of the data and prior components. For Gaussian PDFs with variance σ²_ψn, the prior information is:

I_pr(ψ_n) = 1/σ²_ψn.  (47)

As discussed above, there is no covariance between the source position parameters (l, m) and the source amplitude parameters (B, α). The random phase parameters, ψ_n, also do not co-vary with the amplitude parameters, and their inclusion therefore has no impact on the ability to estimate them (the additional amplitude uncertainty does affect all parameters, however). In the case of calibration phase uncertainty and atmospheric phase noise, the prior PDF is broadened to include contributions from both uncertainties. We now form the (N + 4) × (N + 4) FIM for the information carried in the data about the source parameters, and invert it to yield the lower bounds on the parameter estimates. This FIM now includes the effects of amplitude and phase calibration. Figures 4(a-c) display the maximum estimation precision for a source, as a function of source signal-to-noise ratio (thermal noise), for system noise that is purely thermal, and for thermal plus calibration errors.
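The tight-prior form of equations 45-47, with the data FIM simply augmented by 1/σ²_ψ on the diagonal, is easy to demonstrate numerically (all numbers below are illustrative):

```python
import numpy as np

n = 4                                 # number of random phase parameters
I_data = 3.0 * np.eye(n)              # toy data FIM for psi_1..psi_n
sigma_psi = 0.1                       # prior rms (calibration + atmosphere)

I_prior = np.eye(n) / sigma_psi**2    # equation 47, Gaussian priors
I_mod = I_data + I_prior              # equation 45, tight-prior form

bounds = np.sqrt(np.diag(np.linalg.inv(I_mod)))
print(np.round(bounds, 4))            # tighter than the prior rms alone
```

Because information adds, the hybrid bound on each ψ_n is always tighter than either the data-only or the prior-only bound.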
In these figures, the calibration uses five 1 Jy sources (each with S/N = 7) in the field, and is repeated every 8 seconds. It is clear from these plots that the MWA 32T is dominated by thermal noise and confusion in this case, and that the field-based calibration is not a significant source of uncertainty for low signal-to-noise ratio sources. Of course, a reduction in the number or strength of calibrator sources will affect these results. As mentioned earlier, the calibrators considered here are those within some scale on the sky over which the atmosphere can be considered stable. This group of calibrators can then be used jointly to estimate the gains. For stable atmospheres and large patches of sky, the number of calibrators will be large, and the calibration precision will be high. For an unstable atmosphere with small patches, the number of calibrators will be low, and the calibration precision will be low. This formalism therefore includes the effects of position-dependent calibration in a rudimentary way.

Figure 4(d) shows the ideal precision for estimation of the source position (l) for ASKAP at 1.4 GHz (∆ν = 300 MHz, 32 channels, 5 second integration), including three levels of atmospheric phase noise. For the ASKAP system, the calibration uncertainty is small, and the additional uncertainty on the source position is dominated by the atmospheric phase noise.

Optimal detector

We have presented an analytical model for the impact of calibration, background confusing sources and the atmosphere on source estimation and precision. With this model, and the statistical framework developed in section 2, we can describe the optimal detector for visibility data.
The signal-present hypothesis likelihood function for one integration timestep can be described by:

L(x; H₁) = (1/(π^{FN} det(C_c + σ²I))) exp[−(x − s)^H (C_c + σ²I)⁻¹ (x − s)],  (48)

where C_c contains the amplitude calibration uncertainties, and the signal for channel f and baseline n is given by:

s[f, n] = B(l, m)(ν(f)/ν₀)^α exp[−2πi(u_fn l + v_fn m + ψ_n)] + Σ_{k=1}^{K} B_k exp[−2πi(u_fn l_k + v_fn m_k + ψ_n)],  (49)

and the phase parameters are Gaussian distributed according to the calibration and atmospheric phase uncertainties. The likelihood function for the signal-absent hypothesis, H₀, has the same form as equation 48, with the 'signal' given by:

s[f, n] = Σ_{k=1}^{K} B_k exp[−2πi(u_fn l_k + v_fn m_k + ψ_n)],  (50)

i.e., the background static sources. Note that both likelihood functions require the phase uncertainty terms to be removed. To perform the detection, one needs to estimate or remove all of the unknown parameters. The unknown source parameters are estimated using maximum likelihood estimation (ideally), and the random phase parameters are integrated out, using their prior PDFs and the Bayesian approach described in section 2.3. Algorithmically, the unknown deterministic parameters are estimated first, with the phase parameters set at their mean values (ψ_n = 0) for simplicity (in practice, the phase errors will be small, and this simplification will have minimal impact on the detection performance). Then, N one-dimensional integrals are performed to remove the N random parameters. Finally, the GLRT is performed by taking the ratio of the values of the likelihood functions, and the result is compared with a threshold. In Paper II we use the results derived here to form realistic likelihood functions, and present algorithms for implementing the optimal detector.
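The "integrate out the random phases" step amounts to N one-dimensional integrals of the likelihood against the Gaussian prior. A toy single-visibility version in NumPy (all values are invented; ψ sits inside the 2π factor, as in equation 49, so it is expressed in turns):

```python
import numpy as np

sigma = 1.0          # thermal noise
sigma_psi = 0.02     # prior phase rms (calibration + atmosphere), in turns
x = 0.9 + 0.1j       # one measured visibility (toy)
s0 = 1.0             # model visibility, zero nominal phase

# Grid over the random phase parameter psi and its normalised Gaussian prior.
psi = np.linspace(-5 * sigma_psi, 5 * sigma_psi, 2001)
dpsi = psi[1] - psi[0]
prior = np.exp(-psi**2 / (2 * sigma_psi**2))
prior /= prior.sum() * dpsi

s = s0 * np.exp(-2j * np.pi * psi)                   # phase-perturbed model
like = np.exp(-np.abs(x - s)**2 / sigma**2)          # complex-Gaussian term
L_marg = np.sum(like * prior) * dpsi                 # psi integrated out
print(f"marginal likelihood term: {L_marg:.4f}")
```

The full detector would evaluate one such integral per baseline, after the deterministic parameters have been ML-estimated with ψ_n = 0.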
Conclusions

We have presented a framework for designing optimal source detectors for visibility-space data from interferometric arrays, and applied it to describe a realistic optimal detector. Working in visibility space allows a more natural characterisation of the data likelihood functions than image space, where the noise is structured and not well understood. Source detection is complicated by unknown source strength, spectral index, position, arrival time and duration (for transient sources), and the uncertainty on these parameters reduces detection performance. Estimation of these parameters is required before signal detection can be performed. Uncertainty introduced by field-based calibration, confusing sources and atmospheric phase noise further complicates signal detection and reduces detection performance. We have explored the impact of these additional sources of uncertainty on the ability of an efficient estimator to determine the parameters of a source, and applied these methods to two SKA pathfinder instruments: the MWA 32T and ASKAP. We then used an understanding of these effects to present a realistic model of visibility data, and to design an optimal detector.

The precision with which the gain amplitude can theoretically be measured is given by:

∆b_αβ ≥ (σ/(B√2)) [ Σ_{f=1}^{F} (ν(f)/ν₀)^{2α} ]^{−1/2},  (A1)

i.e., the inverse signal-to-noise for complex data. For two calibrators, the expression becomes:

∆b_αβ ≥ (σ/√2) [ Σ_{f=1}^{F} ( B₁²(ν(f)/ν₀)^{2α₁} + B₂²(ν(f)/ν₀)^{2α₂} + 2B₁B₂(ν(f)/ν₀)^{α₁+α₂} cos 2π(u_fαβ(l₂ − l₁) + v_fαβ(m₂ − m₁)) ) ]^{−1/2}.  (A2)

Extrapolating to N_c calibrators yields:

∆b_αβ ≥ (σ/√2) [ Σ_{f=1}^{F} ( Σ_{i=1}^{Nc} B_i²(ν(f)/ν₀)^{2α_i} + Σ_{i=1}^{Nc} Σ_{j≠i}^{Nc} B_i B_j (ν(f)/ν₀)^{α_i+α_j} cos 2π(u_fαβ(l_j − l_i) + v_fαβ(m_j − m_i)) ) ]^{−1/2},  (A3)

which, for N_c identical, co-located calibrators with strength B and α = 0, gives σ/(√(2F) N_c B).
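The closed form quoted after equation A3 can be verified by collapsing the double sum for N_c identical, co-located calibrators (α = 0, all cosines equal to one), where the per-channel sum reduces to (N_c B)². A short check in plain Python, with illustrative numbers:

```python
import math

sigma, F, B, Nc = 15.0, 24, 3.5, 4    # illustrative values

# Full A3 sum for identical co-located calibrators: Nc*B^2 diagonal terms
# plus Nc*(Nc-1)*B^2 cross-terms (all cosines equal to 1), per channel.
per_channel = Nc * B**2 + Nc * (Nc - 1) * B**2
delta_b = (sigma / math.sqrt(2)) * (F * per_channel) ** -0.5

# Quoted closed form: sigma / (sqrt(2F) * Nc * B).
closed = sigma / (math.sqrt(2 * F) * Nc * B)
print(f"A3 sum: {delta_b:.5f}  closed form: {closed:.5f}")
```

The two expressions agree exactly, confirming the quoted limit.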
For the gain phase precision (setting φ₁ to zero) and one calibrator, the precision is:

∆φ_βγ ≥ (σ / √(8π² Σ_{f=1}^{F} (ν(f)/ν₀)^{2α})) · √(b_β² + b_γ²) / (B b_β b_γ √(b₁² + b₂² + b₃²)).  (A4)

For two calibrators, this expands to include the cosine cross-terms, and is given by:

∆φ₁₂ ≥ (σ/√(8π²)) · (1/(b₁b₂)) · √[ (b₁² X₁₃ + b₂² X₂₃) / (b₁² X₁₂ X₁₃ + b₂² X₁₂ X₂₃ + b₃² X₁₃ X₂₃) ],  (A5)

where

X_ab = Σ_{f=1}^{F} [ B₁²(ν(f)/ν₀)^{2α₁} + B₂²(ν(f)/ν₀)^{2α₂} + 2B₁B₂(ν(f)/ν₀)^{α₁+α₂} cos 2π(u_ab(l₂ − l₁) + v_ab(m₂ − m₁)) ].  (A6)

Extrapolating to N_c calibrators gives:

∆φ₁₂ ≥ (σ/√(2(2π)²)) · (1/(b₁b₂)) · √[ (b₁² X₁₃ + b₂² X₂₃) / (b₁² X₁₂ X₁₃ + b₂² X₁₂ X₂₃ + b₃² X₁₃ X₂₃) ],  (A7)

where

X_ab = Σ_{f=1}^{F} [ Σ_{i=1}^{Nc} B_i²(ν(f)/ν₀)^{2α_i} + Σ_{i=1}^{Nc} Σ_{j≠i}^{Nc} B_i B_j (ν(f)/ν₀)^{α_i+α_j} cos 2π(u_ab(l_j − l_i) + v_ab(m_j − m_i)) ].  (A8)

Note that this is still the solution for a three-antenna system. The general solutions for N_c calibrators and M antennas are:

∆b_βγ ≥ (σ/√2) [ Σ_{f=1}^{F} ( Σ_{i=1}^{Nc} B_i²(ν(f)/ν₀)^{2α_i} + Σ_{i=1}^{Nc} Σ_{j≠i}^{Nc} B_i B_j (ν(f)/ν₀)^{α_i+α_j} cos 2π(u_fβγ(l_j − l_i) + v_fβγ(m_j − m_i)) ) ]^{−1/2}  (A9)

∆φ_αβ ≥ √[ (σ²/(2(2π)²)) ( ε^{ab...z}_{≠α,≠β} A_{1,a} A_{2,b} ... A_{M−1,z} ) / ( ε^{ab...z} A_{1,a} A_{2,b} ... A_{M−1,z} ) ],  (A10)

where ε is the Levi-Civita permutation symbol, (a, b, ..., z) ∈ [1, M − 1], and the A_{a,b} are the FIM matrix elements, given by

A_{a,b} = b_a² Σ_{k≠a} b_k² X_ak   for a = b,
A_{a,b} = −b_a² b_b² X_ab   for a ≠ b,  (A11)

where X_ab is the same as in equation A8. The expression for the phase uncertainty is not easy to implement; in practice, it is far simpler to form the FIM, invert it numerically, and extract the baseline-based uncertainties using error propagation on the inverse FIM elements.

Figure 1 shows the bounds on the unknown parameters for the 32-tile system (32T) of the MWA, where the phase centre has been set at the zenith. The MWA 32T has a linear extent of ∼330 m, and a synthesized beam at 150 MHz of θ_syn ∼ 25 arcmin. Figure 2 displays the antenna positions for the MWA 32T and ASKAP telescopes.
Assuming a system equivalent flux density of 10,000 Jy, 1.28 MHz channels and an 8 second integration, the noise in each of the 24 channels is ∼15 Jy. In all cases, the source has a constant amplitude of B = 0.7 Jy, translating to an integrated signal-to-noise ratio of S/N = 5 over the 30.72 MHz bandwidth (centre frequency of 153 MHz) and using all 32 antennas. The upper figures display bounds as a function of the number of baselines, where the baselines have been ordered by descending length (i.e., the longest baselines are counted first), for 24 frequency channels. The lower panels display bounds as a function of the number of spectral channels used, for a fixed total bandwidth of 30.72 MHz, constant source amplitude, and using all antennas. The following observations can be made:

• Inclusion of the shortest baselines does affect estimation performance, due to the increase in sensitivity obtained by including additional antennas. This is important when considering removing short baselines to reduce the impact of diffuse emission on the signal;
• The Fisher information on the source amplitude and spectral index is independent of the distribution of antennas in the array: it depends on the number of antennas, frequency channels and bandwidth, and has a 1/√N dependence on the number of baselines.

Fig. 1.- (Upper) Bounds on unknown parameters as a function of number of baselines, where the baselines have been ordered descending in length (B = 0.7 Jy, σ = 15 Jy/channel/baseline, α = 0, ∆t = 8 s, F = 24, ∆ν = 30.72 MHz). (Lower) Bounds as a function of number of frequency channels, F, using all 32 antennas and the same total bandwidth, ∆ν = 30.72 MHz.

Fig. 2.- Antenna positions for the MWA 32T telescope (left) and the ASKAP telescope (right).

Here t = (1, ..., T) denotes the timestep, T timesteps have occurred since the initial detection of the transient signal, and the summation contains the contribution from the known, static sources within the field.
The baseline projections are now functions of time as the Earth rotates, and are completely specified. Estimation of the transient source position and amplitude is extended to include all T timesteps, effectively reducing the estimation uncertainty by a factor of ∼1/√T. Hence, at each integration timestep after the initial detection of a transient signal, two processes would occur. First, the total dataset (since detection) would be used to estimate the values of the source parameters. Then, these ML estimates would be used in a GLRT, evaluated at the current timestep alone. This would allow a decision to be made about the continued presence of the source at that timestep. If all of the data were used for the detection, previous detections of the signal (in previous timesteps) would dilute the presence of the signal in the current timestep.

Figure 3 shows the calibration amplitude and phase estimation precision for the current MWA 32-tile system, for varying calibrator numbers and strengths. For each plot, the precision is shown for thermal noise alone ('Therm') and for thermal plus confusion ('Con+therm'). The upper figures show the precision on phase and amplitude estimation for a single baseline, for a total calibrator flux density of 3.5 Jy, as a function of the number of calibrators. These are ensemble averages, to remove the effects of calibrator position. These figures demonstrate that it is advantageous to have a single bright calibrator, rather than a few lower signal-to-noise calibrators: the precision scales as the square-root of the number of calibrators. The lower figures show the precision as a function of the number of calibrators.

Fig. 3.- (Upper) Gain phase and amplitude precision for a single baseline, and all frequency channels, for a total calibrator flux density of 3.5 Jy, as a function of number of calibrators (σ = 15 Jy/channel/baseline, α = 0, ∆t = 8 s, F = 24, ∆ν = 30.72 MHz).
(Lower) Bounds as a function of number of calibrators, where each has a flux density of 3.5 Jy.

Fig. 4. — (a) Precision in estimating the source sky position (l) for a source as a function of signal-to-noise ratio, for calibration based on five 1 Jy (S/N = 7) sources (σ = 15 Jy/channel/baseline, α = 0, ∆t = 8 s, F = 24, ∆ν = 30.72 MHz). (b) Amplitude (B) precision. (c) Spectral index (α) precision. (d) Sky position (l) precision for ASKAP, including three magnitudes of atmospheric phase noise (a = 10⁻⁶, 5×10⁻⁶, 10⁻⁵).

There is an implicit summation over all indices in equation A10, viz., ε_{ab...z} A_{1,a} A_{2,b} ... A_{M−1,z} ≡ Σ_{ab...z} ε_{ab...z} A_{1,a} A_{2,b} ... A_{M−1,z}.

Software and user guide available at http://www.atnf.csiro.au/people/Matthew.Whiting/Duchamp

Acknowledgments

We would like to thank Matthew Whiting for providing the ASKAP antenna specifications and system characteristics. We would also like to thank the anonymous referee for providing a very considered and constructive review of the manuscript. Their input has improved the manuscript considerably.

Appendix

A. Derivation of calibration precision

We have calculated (and will present below) the estimation precision for a single calibrator, and for two calibrators. From this, we can extrapolate to N_c calibrators, based on the form. The precision with which the calibration solution for a given baseline can be theoretically measured (since this is the data we measure) can be calculated using error propagation from the bounds on the individual antennas alone (and the covariances). For a single calibrator, and three antennas, the precision with which the gain amplitude can [...]
Title: Monotonic Differentiable Sorting Networks
Authors: Felix Petersen (University of Konstanz), Christian Borgelt (University of Salzburg), Hilde Kuehne (University of Frankfurt; IBM-MIT Watson AI Lab), Oliver Deussen (University of Konstanz)
Venue: ICLR 2022
DOI: 10.48550/arXiv.2203.09630
arXiv: 2203.09630
PDF: https://arxiv.org/pdf/2203.09630v1.pdf

Abstract: Differentiable sorting algorithms allow training with sorting and ranking supervision, where only the ordering or ranking of samples is known. Various methods have been proposed to address this challenge, ranging from optimal transport-based differentiable Sinkhorn sorting algorithms to making classic sorting networks differentiable. One problem of current differentiable sorting methods is that they are non-monotonic. To address this issue, we propose a novel relaxation of conditional swap operations that guarantees monotonicity in differentiable sorting networks. We introduce a family of sigmoid functions and prove that they produce differentiable sorting networks that are monotonic. Monotonicity ensures that the gradients always have the correct sign, which is an advantage in gradient-based optimization. We demonstrate that monotonic differentiable sorting networks improve upon previous differentiable sorting methods.
MONOTONIC DIFFERENTIABLE SORTING NETWORKS

Felix Petersen (University of Konstanz), Christian Borgelt (University of Salzburg), Hilde Kuehne (University of Frankfurt; IBM-MIT Watson AI Lab), Oliver Deussen (University of Konstanz)

Published as a conference paper at ICLR 2022

INTRODUCTION

Recently, the idea of end-to-end training of neural networks with ordering supervision via continuous relaxation of the sorting function has been presented by Grover et al. [1]. The idea of ordering supervision is that the ground truth order of some samples is known, while their absolute values remain unsupervised. This is done by integrating a sorting algorithm into the neural architecture. As the error needs to be propagated in a meaningful way back to the neural network when training with a sorting algorithm in the architecture, it is necessary to use a differentiable sorting function. Several such differentiable sorting functions have been introduced, e.g., by Grover et al. [1], Cuturi et al. [2], Blondel et al.
[3], and Petersen et al. [4]. In this work, we focus on analyzing differentiable sorting functions [1]-[4] and demonstrate how monotonicity improves differentiable sorting networks [4].

Sorting networks are a family of sorting algorithms that consist of two basic components: so-called "wires" (or "lanes") carrying values, and conditional swap operations that connect pairs of wires [5]. An example of such a sorting network is shown in the center of Figure 1. The conditional swap operations swap the values carried by these wires if they are not in the desired order. They allow for fast hardware implementation, e.g., in ASICs, as well as on highly parallelized general-purpose hardware like GPUs. Differentiable sorting networks [4] continuously relax the conditional swap operations by relaxing their step function to a logistic sigmoid function.

One problem that arises in this context is that using a logistic sigmoid function does not preserve the monotonicity of the relaxed sorting operation, which can cause gradients with the wrong sign. In this work, we present a family of sigmoid functions that preserve monotonicity of differentiable sorting networks. These include the cumulative density function (CDF) of the Cauchy distribution, as well as a function that minimizes the error bound and thus induces the smallest possible approximation error. For all sigmoid functions, we prove and visualize the respective properties and validate their advantages empirically. In fact, by making the sorting function monotonic, it also becomes quasiconvex, which has been shown to produce favorable convergence rates [6]. In Figure 2, we demonstrate monotonicity for different choices of sigmoid functions. As can be seen in Figure 4, existing differentiable sorting operators are either non-monotonic or have an unbounded error.
Following recent work [1], [2], [4], we benchmark our continuous relaxations by predicting values displayed on four-digit MNIST images [7], supervised only by their ground truth order. The evaluation shows that our method outperforms existing relaxations of the sorting function on the four-digit MNIST ordering task as well as the SVHN ranking task.

Figure 1: The architecture for training with ordering supervision. Left: input values are fed separately into a Convolutional Neural Network (CNN) that has the same weights for all instances. The CNN maps these values to scalar values a_0, ..., a_5. Center: the odd-even sorting network sorts the scalars by parallel conditional swap operations such that all inputs can be propagated to their correct ordered position. Right: it produces a differentiable permutation matrix P. In this experiment, the training objective is the cross-entropy between P and the ground truth permutation matrix Q. By propagating the error backward through the sorting network, we can train the CNN.

Contributions. In this work, we show that sigmoid functions with specific characteristics produce monotonic and error-bounded differentiable sorting networks. We provide theoretical guarantees for these functions and also give the monotonic function that minimizes the approximation error. We empirically demonstrate that the proposed functions improve performance.

RELATED WORK

Recently, differentiable approximations of the sorting function for weak supervision were introduced by Grover et al. [1], Cuturi et al. [2], Blondel et al. [3], and Petersen et al. [4]. In 2019, Grover et al. [1] proposed NeuralSort, a continuous relaxation of the argsort operator. A (hard) permutation matrix is a square matrix with entries 0 and 1 such that every row and every column sums up to 1, which defines the permutation necessary to sort a sequence. Grover et al. relax hard permutation matrices by approximating them as unimodal row-stochastic matrices.
This relaxation allows for gradient-based stochastic optimization. On various tasks, including sorting four-digit MNIST numbers, they benchmark their relaxation against the Sinkhorn and Gumbel-Sinkhorn approaches proposed by Mena et al. [8]. Cuturi et al. [2] follow this idea and approach differentiable sorting via smoothed ranking and sorting operators using optimal transport. As the optimal transport problem alone is costly, they regularize it and solve it using the Sinkhorn algorithm [9]. By relaxing the permutation matrix which sorts a sequence of scalars, they also train a scalar predictor of values displayed by four-digit numbers while supervising their relative order only.

Blondel et al. [3] cast the problem of sorting and ranking as a linear program over a permutahedron. To smooth the resulting discontinuous function and provide useful derivatives, they introduce a strongly convex regularization. They evaluate the proposed approach in the context of top-k classification and label ranking accuracy via a soft Spearman's rank correlation coefficient.

Recently, Petersen et al. [4] proposed differentiable sorting networks, a differentiable sorting operator based on sorting networks with differentiably relaxed conditional swap operations. Differentiable sorting networks achieved a new state-of-the-art on both the four-digit MNIST sorting benchmark and the SVHN sorting benchmark. Petersen et al. [10] also proposed a general method for continuously relaxing algorithms via logistic distributions. They apply it, i.a., to the bubble sort algorithm and benchmark it on the MNIST sorting benchmark.

Applications and Broader Impact. In the domain of recommender systems, Lee et al. [11] propose differentiable ranking metrics, and Swezey et al. [12] propose PiRank, a learning-to-rank method using differentiable sorting.
Other works explore differentiable sorting-based top-k for applications such as differentiable image patch selection [13], differentiable k-nearest-neighbor [1], [14], top-k attention for machine translation [14], and differentiable beam search methods [14], [15].

BACKGROUND: SORTING NETWORKS

Sorting networks have a long tradition in computer science since the 1950s [5]. They are highly parallel, data-oblivious sorting algorithms. They are based on so-called conditional pairwise swap operators that map two inputs to two outputs and ensure that these outputs are in a specific order. This is achieved by simply passing through the inputs if they are already in the desired order and swapping them otherwise. The order of executing conditional swaps is independent of the input values, which makes them data-oblivious. Conditional swap operators can be implemented using only min and max. That is, if the inputs are a and b and the outputs a′ and b′, a swap operator ensures a′ ≤ b′ and can easily be formalized as a′ = min(a, b) and b′ = max(a, b). Examples of sorting networks are the odd-even network [16], which alternatingly swaps odd and even wires with their successors, and the bitonic network [17], which repeatedly merges and sorts bitonic sequences. While the odd-even network requires n layers, the bitonic network uses the divide-and-conquer principle to sort within only (log₂ n)(1 + log₂ n)/2 layers. Note that, while they are similar in name, sorting networks are not neural networks that sort.

DIFFERENTIABLE SORTING NETWORKS

In the following, we recapitulate the core concepts of differentiable sorting networks [4]. An example of an odd-even sorting network is shown in the center of Figure 1. Here, odd and even neighbors are conditionally swapped until the entire sequence is sorted. Each conditional swap operation can be defined in terms of min and max as detailed above. These operators can be relaxed to differentiable min and max.
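As a concrete illustration of the hard network described above, here is a minimal sketch (our own, not code from the paper) of an odd-even transposition network, with each conditional swap expressed purely via min and max:

```python
def odd_even_sort(values):
    """Hard odd-even transposition network: n data-oblivious swap layers."""
    x = list(values)
    n = len(x)
    for layer in range(n):
        # alternate between even-indexed and odd-indexed neighbor pairs
        for i in range(layer % 2, n - 1, 2):
            a, b = x[i], x[i + 1]
            # conditional swap: pass through if ordered, swap otherwise
            x[i], x[i + 1] = min(a, b), max(a, b)
    return x

print(odd_even_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Because the sequence of comparisons is fixed in advance (data-oblivious), each min/max pair can later be relaxed independently, which is exactly what the differentiable variant does.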
Note that we denote the differentiable relaxations in italic font and their hard counterparts in roman font. Note that the differentiable relaxations min and max are different from the commonly used softmin and softmax, which are relaxations of argmin and argmax [18]. One example for such a relaxation of min and max is the logistic relaxation

min_σ(a, b) = a · σ(b − a) + b · σ(a − b)   and   max_σ(a, b) = a · σ(a − b) + b · σ(b − a),   (1)

where σ is the logistic sigmoid function with inverse temperature β > 0:

σ : x → 1 / (1 + e^(−βx)).   (2)

Any layer of a sorting network can also be represented as a relaxed and doubly-stochastic permutation matrix. Multiplying these (layer-wise) permutation matrices yields a (relaxed) total permutation matrix P. Multiplying P with an input x yields the differentiably sorted vector x̂ = Px, which is also the output of the differentiable sorting network. Whether it is necessary to compute P, or whether x̂ suffices, depends on the specific application. For example, for a cross-entropy ranking / sorting loss, as used in the experiments in Section 6, P can be used to compute the cross-entropy to a ground truth permutation matrix Q. In the next section, we build on these concepts to introduce monotonic differentiable sorting networks, i.e., all differentiably sorted outputs x̂ are non-decreasingly monotonic in all inputs x.

MONOTONIC DIFFERENTIABLE SORTING NETWORKS

In this section, we start by introducing definitions and building theorems upon them. In Section 4.2, we use these definitions and properties to discuss different relaxations of sorting networks.

THEORY

We start by defining sigmoid functions and will then use them to define continuous conditional swaps.

Definition 1 (Sigmoid Function).
We define a (unipolar) sigmoid (i.e., s-shaped) function as a continuous, monotonically non-decreasing, odd-symmetric (around (0, 1/2)) function f : R → [0, 1].

Definition 2 (Continuous Conditional Swap). Given a sigmoid function f, the continuous conditional swap of two inputs a, b is defined as

min_f(a, b) = a · f(b − a) + b · f(a − b),   max_f(a, b) = a · f(a − b) + b · f(b − a),   (3)
argmin_f(a, b) = (f(b − a), f(a − b)),   argmax_f(a, b) = (f(a − b), f(b − a)).   (4)

We require a continuous odd-symmetric sigmoid function to preserve most of the properties of min and max while also making argmin and argmax continuous, as shown in Supplementary Material B. In the following, we establish the doubly-stochasticity and differentiability of P, which are important properties for differentiable sorting and ranking operators.

Lemma 3 (Doubly-Stochasticity and Differentiability of P). (i) The relaxed permutation matrix P, produced by a differentiable sorting network, is doubly-stochastic. (ii) P has the same differentiability as f; e.g., if f is continuously differentiable in the input, P will be continuously differentiable in the input to the sorting network. If f is differentiable almost everywhere (a.e.), P will be differentiable a.e.

Proof. (i) For each conditional swap between two elements i, j, the relaxed permutation matrix is 1 on the diagonal except for rows i and j: at points (i, i) and (j, j) the value is v ∈ [0, 1], at points (i, j) and (j, i) the value is 1 − v, and all other entries are 0. This is doubly-stochastic, as all rows and columns add up to 1 by construction. As the product of doubly-stochastic matrices is doubly-stochastic, the relaxed permutation matrix P, produced by a differentiable sorting network, is doubly-stochastic. (ii) The composition of differentiable functions is differentiable, and the addition and multiplication of differentiable functions is also differentiable. Thus, a sorting network is differentiable if the employed sigmoid function is differentiable. "Differentiable" may be replaced with any other form of differentiability, such as "differentiable a.e."
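To make Lemma 3 concrete, here is a small numerical sketch (ours, not the authors' implementation) of an odd-even differentiable sorting network using the logistic relaxation of Eq. (1). It maintains the invariant x̂ = Px by multiplying the doubly-stochastic layer matrices into P as it goes:

```python
import math

def logistic(x, beta=4.0):
    return 1.0 / (1.0 + math.exp(-beta * x))

def diff_odd_even_sort(x0, f=logistic):
    """Return (x_hat, P) for an n-layer relaxed odd-even network."""
    n = len(x0)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    x = list(x0)
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):
            a, b = x[i], x[i + 1]
            v = f(b - a)  # ~1 if the pair is already ordered (a <= b)
            # relaxed conditional swap, Eq. (1)
            x[i], x[i + 1] = a * v + b * (1 - v), a * (1 - v) + b * v
            # mix the corresponding rows of P with the same weights,
            # i.e., multiply the doubly-stochastic layer matrix into P
            for c in range(n):
                pi, pj = P[i][c], P[i + 1][c]
                P[i][c] = v * pi + (1 - v) * pj
                P[i + 1][c] = (1 - v) * pi + v * pj
    return x, P

x_hat, P = diff_odd_even_sort([0.9, 0.1, 0.5])
# rows and columns of P sum to 1 (doubly-stochastic), and x_hat equals P @ x0
```

Larger β sharpens the relaxation toward the hard network; the relaxed output x̂ is approximately (but not exactly) the sorted input, which is the bounded error analyzed below.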
Now that we have established the ingredients of differentiable sorting networks, we can focus on the monotonicity of differentiable sorting networks.

Definition 4 (Monotonic Continuous Conditional Swaps). We say f produces monotonic conditional swaps if min_f(x, 0) is non-decreasingly monotonic in x, i.e., d/dx min_f(x, 0) ≥ 0 for all x. It is sufficient to define this w.l.o.g. in terms of min_f(x, 0) due to the commutativity, stability, and odd-symmetry of the operators (cf. Supplementary Material B).

Theorem 5 (Monotonicity of Continuous Conditional Swaps). A continuous conditional swap (in terms of a differentiable sigmoid function f) being non-decreasingly monotonic in all arguments and outputs requires that the derivative of f decays no faster than 1/x², i.e.,

f′(x) ∈ Ω(1/x²).   (5)

Proof. We show that Equation 5 is a necessary criterion for the monotonicity of the conditional swap. Because f is a continuous sigmoid function with f : R → [0, 1], min_f(x, 0) = f(−x) · x > 0 for some x > 0. Thus, monotonicity of min_f(x, 0) implies limsup_{x→∞} min_f(x, 0) > 0 (otherwise the value would decrease again from a value > 0). Thus,

lim_{x→∞} min_f(x, 0) = lim_{x→∞} f(−x) · x = lim_{x→∞} f(−x) / (1/x)   (L'Hôpital's rule)   = lim_{x→∞} (−f′(−x)) / (−1/x²)   (6)
= lim_{x→∞} f′(−x) / (1/x²) = lim_{x→∞} f′(x) / (1/x²) = limsup_{x→∞} f′(x) / (1/x²) > 0   ⟺   f′(x) ∈ Ω(1/x²),   (7)

assuming lim_{x→∞} f′(x) / (1/x²) exists; otherwise, it can be proven analogously via a proof by contradiction. (Here f′(−x) = f′(x) because f is odd-symmetric around (0, 1/2).)

Corollary 6 (Monotonic Sorting Networks). If the individual conditional swaps of a sorting network are monotonic, the sorting network is also monotonic.

Proof. If single layers g, h are non-decreasingly monotonic in all arguments and outputs, their composition h ∘ g is also non-decreasingly monotonic in all arguments and outputs. Thus, a network of arbitrarily many layers is non-decreasingly monotonic.

Above, we formalized the property of monotonicity.
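The effect of Theorem 5 can be checked numerically. The following sketch (illustrative, not from the paper) scans min_f(x, 0) = x · f(−x) on a grid and confirms that the logistic swap is non-monotonic, while a swap based on the Cauchy CDF (one of the candidates discussed below) is monotonic:

```python
import math

logistic = lambda x: 1.0 / (1.0 + math.exp(-x))   # beta = 1
cauchy = lambda x: math.atan(x) / math.pi + 0.5   # Cauchy CDF, beta = 1

def increments(f, xs):
    ys = [x * f(-x) for x in xs]                  # min_f(x, 0)
    return [b - a for a, b in zip(ys, ys[1:])]

xs = [0.01 * k for k in range(1, 2001)]           # grid on (0, 20]
print(min(increments(logistic, xs)) < 0)   # True: decreases again -> non-monotonic
print(min(increments(cauchy, xs)) >= 0)    # True: non-decreasing -> monotonic
```

The logistic tail decays exponentially, faster than 1/x², so x · σ(−x) peaks and then falls back toward 0, which is exactly the non-monotonicity visible in Figure 2.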
Another important aspect is whether the error of the differentiable sorting network is bounded: without bounded errors, the result of the differentiable sorting network diverges from the result of the hard sorting function. Minimizing this error is desirable.

Definition 7 (Error-Bounded Continuous Conditional Swaps). A continuous conditional swap has a bounded error if and only if sup_x min_f(x, 0) = c is finite. The continuous conditional swap is then said to have an error bounded by c. It is sufficient to define this w.l.o.g. in terms of min_f(x, 0) due to the commutativity, stability, and odd-symmetry of the operators (cf. Supplementary Material B). In general, for better comparability between functions, we assume a Lipschitz continuous function f with Lipschitz constant 1.

Theorem 8 (Error-Bounds of Continuous Conditional Swaps). (i) A differentiable continuous conditional swap has a bounded error if

f′(x) ∈ O(1/x²).   (8)

(ii) If it is additionally monotonic, the error bound can be found as lim_{x→∞} min_f(x, 0), and additionally the error is bounded only if Equation 8 holds.

Proof. (i) W.l.o.g. we consider x > 0. Let g(z) := f(−1/z), g(0) = 0. Thus, g′(z) = 1/z² · f′(−1/z) ≤ c according to Equation 8. Thus, g(z) = g(0) + ∫₀^z g′(t) dt ≤ c · z. Therefore, f(−1/z) ≤ c · z, which implies 1/z · f(−1/z) ≤ c, and with x = 1/z, x · f(−x) = min_f(x, 0) ≤ c.

(ii) Let min_f(x, 0) be monotonic and bounded by min_f(x, 0) ≤ c. For x > 0 and h(x) := min_f(x, 0), we have h′(x) = −x · f′(−x) + f(−x), and since h′(x) ≥ 0 by monotonicity, −x · h′(x) ≤ 0, so

x² f′(−x) = −x · h′(x) + x · f(−x) ≤ x · f(−x) ≤ c.   (9)

Thus, f′(x) ∈ O(1/x²).

Theorem 9 (Error-Bounds of Differentiable Sorting Networks). If the error of the individual conditional swaps of a sorting network is bounded by ε and the network has ℓ layers, the total error is bounded by ℓ · ε.

Proof. For the proof, cf. Supplementary Material D.

Discussion.
Monotonicity is highly desirable, as otherwise adverse effects can occur, such as an input requiring to be decreased in order to increase the output. In gradient-based training, non-monotonicity is problematic, as it produces gradients with the opposite sign. In addition, as monotonicity is also given in hard sorting networks, it is desirable to preserve this property in the relaxation. Further, monotonic differentiable sorting networks are quasiconvex and quasiconcave, as any monotonic function is both quasiconvex and quasiconcave, which leads to favorable convergence rates [6]. Bounding and reducing the deviation from the hard counterpart reduces the relaxation error, and is thus desirable.

SIGMOID FUNCTIONS

Above, we have specified the space of functions for the differentiable swap operation, as well as their desirable properties. In the following, we discuss four notable candidates as well as their properties. The properties of these functions are visualized in Figures 2 and 3, and an overview of their properties is given in Table 1.

Logistic distributions. The first candidate is the logistic sigmoid function (the CDF of a logistic distribution), as proposed in [4]:

σ(x) = CDF_L(βx) = 1 / (1 + e^(−βx)).   (10)

This function is the de-facto default sigmoid function in machine learning. It provides a continuous, error-bounded, and Lipschitz continuous conditional swap. However, for the logistic function, monotonicity is not given, as displayed in Figure 2.

Table 1: For each function, we indicate its equation and whether the respective relaxed sorting network is monotonic and has a bounded error (plots of each f and its derivative f′ are omitted here; error bounds are given relative to the Lipschitz constant α).

Function | Eq.  | Monotonic | Error bound
σ        | (10) | no        | ≈ 0.0696/α
f_R      | (11) | yes       | (1/4)/α
f_C      | (12) | yes       | (1/π²)/α
f_O      | (13) | yes       | (1/16)/α

Reciprocal Sigmoid Function. To obtain a function that yields a monotonic as well as error-bounded differentiable sorting network, a necessary criterion is f′(x) ∈ Θ(1/x²) (the intersection of Equations 5 and 8).
A natural choice is, therefore, f_R′(x) = β / (2β|x| + 1)², which produces

f_R(x) = ∫_{−∞}^{x} β / (2β|t| + 1)² dt = (1/2) · (2βx / (1 + 2β|x|) + 1).   (11)

f_R fulfills all criteria, i.e., it is an adequate sigmoid function and produces monotonic and error-bounded conditional swaps. It has an ε-bounded error of ε = 0.25. It is also an affine transformation of the elementary bipolar sigmoid function x → x / (|x| + 1). Properties of this function are visualized in Table 1 and Figures 2 and 3. Proofs for monotonicity can be found in Supplementary Material D.

Cauchy distributions. By using the CDF of the Cauchy distribution, we maintain monotonicity while reducing the error bound to ε = 1/π² ≈ 0.101. It is defined as

f_C(x) = CDF_C(βx) = (1/π) ∫_{−∞}^{x} β / (1 + (βt)²) dt = (1/π) arctan(βx) + 1/2.   (12)

In the experimental evaluation, we find that tightening the error improves the performance.

Optimal Monotonic Sigmoid Function. At this point, we are interested in the monotonic swap operation that minimizes the error bound. Here, we set 1-Lipschitz continuity again as a requirement to make different relaxations of conditional swaps comparable. We show that f_O is the best possible sigmoid function, achieving an error bound of only ε = 1/16.

Theorem 10 (Optimal Sigmoid Function). The optimal sigmoid function minimizing the error bound, while producing a monotonic and 1-Lipschitz continuous (with β = 1) conditional swap operation, is

f_O(x) = −1/(16βx)     if βx < −1/4,
         1 − 1/(16βx)  if βx > +1/4,
         βx + 1/2      otherwise.   (13)

Proof. Given the above conditions, the optimal sigmoid function is uniquely determined and can easily be derived as follows: Due to stability, it suffices to consider min_f(x, 0) = x · f(−x) or max_f(0, x) = x · f(x). Due to symmetry and inversion, it suffices to consider min_f(x, 0) = x · f(−x) for x > 0. Since min(x, 0) = 0 for x > 0, we have to choose f in such a way as to make min_f(x, 0) = x · f(−x) as small as possible, but not negative.
For this, f(−x) must be made as small as possible. Since we know that f(0) = 1/2, and we are limited to functions f that are Lipschitz continuous with α = 1, f(−x) cannot be made smaller than 1/2 − x, and hence min_f(x, 0) cannot be made smaller than x · (1/2 − x). To make min_f(x, 0) as small as possible, we have to follow x · (1/2 − x) as far as possible (i.e., to values x as large as possible). Monotonicity requires that this function can be followed only up to x = 1/4, at which point we have min_f(1/4, 0) = (1/4) · (1/2 − 1/4) = 1/16. For larger x, that is, for x > 1/4, the value of x · (1/2 − x) decreases again, and hence the functional form of the sigmoid function f has to change at x = 1/4 to remain monotonic. The best that can be achieved for x > 1/4 is to make it constant, as it must not decrease (due to monotonicity) and should not increase (to minimize the deviation from the crisp / hard version). That is, min_f(x, 0) = 1/16 for x > 1/4. It follows that x · f(−x) = 1/16 and hence f(−x) = 1/(16x) for x > 1/4. Note that, if the transition from the linear part to the hyperbolic part were at |x| < 1/4, the function would not be Lipschitz continuous with α = 1.

An overview of the selection of sigmoid functions we consider is shown in Table 1. Note how f_R, f_C, and f_O, in this order, get closer to x + 1/2 (the gray diagonal line) and hence steeper in their middle part. This is reflected by a widening region of values of the derivatives that are close to or even equal to 1.

Figure 2: min_f(x, 0) for different sigmoid functions f; color coding as in Table 1.

Table 1 also indicates whether a sigmoid function yields a monotonic swap operation or not, which is visualized in Figure 2: clearly, σ-based sorting networks are not monotonic, while all others are. It also states whether the error is bounded, which for a monotonic swap operation means lim_{x→∞} min_f(x, 0) < ∞, and gives their bound relative to the Lipschitz constant α.
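The error bounds from Table 1 can be verified numerically. In this sketch (ours, not from the paper), each sigmoid is scaled to be 1-Lipschitz (β = 1 for f_R and f_O, whose maximal slope is β; β = π for f_C, whose maximal slope is β/π), and the supremum of min_f(x, 0) is approximated on a grid:

```python
import math

def f_R(x, beta=1.0):
    return 0.5 * (2 * beta * x / (1 + 2 * beta * abs(x)) + 1)

def f_C(x, beta=math.pi):  # beta = pi makes f_C 1-Lipschitz
    return math.atan(beta * x) / math.pi + 0.5

def f_O(x, beta=1.0):
    bx = beta * x
    if bx < -0.25:
        return -1.0 / (16.0 * bx)
    if bx > 0.25:
        return 1.0 - 1.0 / (16.0 * bx)
    return bx + 0.5

def error_bound(f):
    # approximate sup_x min_f(x, 0) = sup_x x * f(-x) on x in (0, 100]
    return max(x * f(-x) for x in (0.01 * k for k in range(1, 10001)))

for f, target in [(f_R, 1 / 4), (f_C, 1 / math.pi ** 2), (f_O, 1 / 16)]:
    assert abs(error_bound(f) - target) < 1e-2
```

As expected, the bounds tighten from f_R (1/4) over f_C (1/π² ≈ 0.101) to f_O (1/16), which matches the ordering of their empirical performance reported below.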
Figure 3 displays the loss for a sorting network with $n = 3$ inputs. We project the hexagon-shaped 3-value permutahedron onto the x-y-plane, while the z-axis indicates the loss.

Figure 3: Loss for a 3-wire odd-even sorting network, drawn over a permutahedron projected onto the x-y-plane; for the logistic sigmoid (left) and the optimal sigmoid (right).

Note that, at the rightmost point $(1, 2, 3)$, the loss is 0 because all elements are in the correct order, while at the left front $(2, 3, 1)$ and rear $(3, 1, 2)$ the loss is at its maximum because all elements are at the wrong positions. Along the red center line, the loss rises logarithmically for the optimal sigmoid function on the right. Note that the monotonic sigmoid functions produce a loss that is larger when more elements are in the wrong order. For the logistic function, $(3, 2, 1)$ has the same loss as $(2, 3, 1)$, even though one of the ranks is correct at $(3, 2, 1)$, while for $(2, 3, 1)$ all three ranks are incorrect.

For the special case of $n = 2$, i.e., for sorting two elements, NeuralSort [1] and Relaxed Bubble sort [10] are equivalent to differentiable sorting networks with the logistic sigmoid function. Thus, the relaxation is non-monotonic, as displayed in Figure 4.

MONOTONICITY OF OTHER DIFFERENTIABLE SORTING OPERATORS

For the Sinkhorn sorting algorithm [2], we can simply construct an example of non-monotonicity by keeping one value fixed, e.g., at zero, varying the second value ($x$) as in Figure 4, and displaying the minimum. Notably, for the case of $n = 2$, this function is numerically equal to NeuralSort and to differentiable sorting networks with the logistic function. For Fast Sort [3], we follow the same principle and find that it is indeed monotonic (in this example); however, the error is unbounded, which is undesirable. For differentiable sorting networks, Petersen et al.
[4] proposed to extend the sigmoid function by the activation replacement trick (ART), which avoids extensive blurring as well as vanishing gradients. They apply the activation replacement trick $\varphi$ before feeding the values into the logistic sigmoid function; thus, the sigmoid function is effectively $\sigma \circ \varphi$, where $\varphi: x \mapsto x / (|x|^\lambda + \epsilon)$ with $\lambda \in [0, 1]$ and $\epsilon \approx 10^{-10}$. Here, the asymptotic character of $\sigma \circ \varphi$ does not fulfill the requirement set by Theorem 5, and the relaxation is thereby non-monotonic, as also displayed in Figure 4 (purple). We summarize monotonicity and error-boundedness for all differentiable sorting functions in Table 2.

EMPIRICAL EVALUATION

To evaluate the properties of the proposed functions as well as their practical impact in the context of sorting supervision, we evaluate them on two standard benchmark datasets. The MNIST sorting dataset [1]-[4] consists of images of numbers from 0000 to 9999 composed of four MNIST digits [7]. Here, the task is training a network to produce a scalar output value for each image such that the ordering of the outputs follows the respective ordering of the images. Specifically, the metrics are the proportion of full rankings correctly identified and the proportion of individual element ranks correctly identified [1]. The same task can also be extended to the more realistic SVHN [19] dataset, with the difference that the images are already multi-digit numbers, as shown in [4].

Comparison to the State-of-the-Art. We first compare the proposed functions to other state-of-the-art approaches using the same network architecture and training setup as used in previous works, as well as among themselves. The respective hyperparameters for each setting can be found in Supplementary Material A. We report the results in Table 3. The proposed monotonic differentiable sorting networks outperform current state-of-the-art methods by a considerable margin.
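The two evaluation metrics can be phrased directly in code (a minimal sketch, assuming distinct scores so that ties need not be handled):

```python
def ranks(xs):
    # Rank of each element: the position it would take after sorting ascending.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def ranking_metrics(predicted_scores, true_scores):
    """Return (full ranking correct?, fraction of individual ranks correct)
    for a single instance; averaging over a test set gives the two reported numbers."""
    p, t = ranks(predicted_scores), ranks(true_scores)
    correct = [a == b for a, b in zip(p, t)]
    return all(correct), sum(correct) / len(correct)
```

For example, `ranking_metrics([0.3, 0.1, 0.2], [3.0, 1.0, 2.0])` returns `(True, 1.0)`, while swapping two predicted scores lowers the second metric accordingly.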
Especially in those cases where more samples need to be sorted, the gap between monotonic sorting networks and other techniques grows with larger $n$. The computational complexity of the proposed method depends on the employed sorting network architecture, leading to a complexity of $O(n^3)$ for odd-even networks and $O(n^2 \log^2 n)$ for bitonic networks, because all of the employed sigmoid functions can be computed in closed form. This leads to the same runtime as in [4]. Comparing the three proposed functions among themselves, we observe that for odd-even networks on MNIST, the error-optimal function $f_O$ performs best. This is because here the approximation error is small. However, for the more complex bitonic sorting networks, $f_C$ (Cauchy) performs better than $f_O$. This is because $f_O$ provides no higher-order smoothness and is only $C^1$ smooth, while the Cauchy function $f_C$ is analytic and $C^\infty$ smooth.

Table 3: Results on the four-digit MNIST and SVHN tasks using the same architecture as previous works [1]-[4]. The metric is the proportion of rankings correctly identified, and the value in parentheses is the proportion of individual element ranks correctly identified. All results are averaged over 5 runs. SVHN with $n = 32$ is omitted to reduce the carbon impact of the evaluation. In all settings, the monotonic sorting networks clearly outperform the non-monotonic ones.

Figure 5 (caption): Top: odd-even sorting networks with $n = 3$ (left) and $n = 15$ (right). Bottom: $n = 32$ with an odd-even (left) and a bitonic network (right). For small $n$, such as 3, Cauchy performs best because it has a low error but is smooth at the same time. For larger $n$, such as 15 and 32, the (error-wise) optimal sigmoid function $f_O$ performs better because, while not being smooth, it has the smallest possible approximation error, which is more important for deeper networks. For the bitonic network with its more complex structure at $n = 32$ (bottom right), the reciprocal sigmoid $f_R$ performs best.
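The $O(n^3)$ count for odd-even networks — $n$ layers of $O(n)$ relaxed comparators, each mixing two entries — is easy to see in a direct implementation (an illustrative sketch using $f_O$; with an autograd framework the same arithmetic is differentiable end-to-end):

```python
def f_O(x, beta=1.0):
    # Optimal monotonic sigmoid, Eq. (13)
    z = beta * x
    if z < -0.25:
        return -1.0 / (16 * z)
    if z > 0.25:
        return 1.0 - 1.0 / (16 * z)
    return z + 0.5

def diffsort_odd_even(values, f=f_O, beta=100.0):
    """Odd-even transposition network with soft conditional swaps:
    n layers of alternating even/odd comparator stages, each replacing
    a pair (a, b) by its soft min and soft max."""
    xs = list(values)
    n = len(xs)
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):
            a, b = xs[i], xs[i + 1]
            w = f(b - a, beta)  # weight of the "already ordered" case
            xs[i], xs[i + 1] = w * a + (1 - w) * b, (1 - w) * a + w * b
    return xs
```

With a steep relaxation (large `beta`), `diffsort_odd_even([3.0, 1.0, 2.0])` lands close to `[1, 2, 3]`, and the stage-wise sum preservation of the soft swap keeps the total sum exact up to floating point.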
Evaluation of Inverse Temperature $\beta$. To further understand the behavior of the proposed monotonic functions compared to the logistic sigmoid function, we evaluate all sigmoid functions for different inverse temperatures $\beta$ during training. We investigate four settings: odd-even networks for $n \in \{3, 15, 32\}$ and a bitonic sorting network with $n = 32$ on the MNIST data set. Notably, there are 15 layers in the bitonic sorting network with $n = 32$, while the odd-even network for $n = 15$ also has 15 layers. We display the results of this evaluation in Figure 5. In Supplementary Material C, we show an analogous figure with additional settings. Note that here, we train for only 50% of the training steps compared to Table 3 to reduce the computational cost. We observe that the optimal inverse temperature depends on the number of layers, rather than on the overall number of samples $n$. This can be seen when comparing the peak accuracy of each function for the odd-even sorting network for different $n$ and thus for different numbers of layers. The bitonic network for $n = 32$ (bottom right) has the same number of layers as the odd-even network for $n = 15$ (top right). Here, the peak performances for each sigmoid function fall within the same range, whereas the peak performances for the odd-even network for $n = 32$ (bottom left) are shifted almost an order of magnitude to the right. For all configurations, the proposed sigmoid functions for monotonic sorting networks improve over the standard logistic sigmoid function, as well as the ART. The source code of this work is publicly available at github.com/Felix-Petersen/diffsort.

CONCLUSION

In this work, we addressed and analyzed monotonicity and error-boundedness in differentiable sorting and ranking operators. Specifically, we focused on differentiable sorting networks and presented a family of sigmoid functions that preserve monotonicity and bound approximation errors in differentiable sorting networks.
This makes the sorting functions quasiconvex, and we empirically observe that the resulting method outperforms the state-of-the-art in differentiable sorting supervision.

[20] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in International Conference on Learning Representations (ICLR), 2015.
[21] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet, "Multi-digit number recognition from street view imagery using deep convolutional neural networks," Computing Research Repository (CoRR) in arXiv, 2013.

A IMPLEMENTATION DETAILS

For training, we use the same network architecture as in previous works [1], [2], [4] and also use the Adam optimizer [20] at a learning rate of $3 \cdot 10^{-4}$. For Figure 5, we train for 100,000 steps. For Table 3, we train for 200,000 steps on MNIST and 1,000,000 steps on SVHN. We preprocess SVHN as done by Goodfellow et al. [21].

A.1 INVERSE TEMPERATURE $\beta$

For the inverse temperature $\beta$, we use the following values, which correspond to the optima in Figure 5 and were found via grid search:

B PROPERTIES OF min AND max

The core element of differentiable sorting networks is the relaxation of the conditional swap operation, allowing for a soft transition between passing through and swapping, such that the sorting operator becomes differentiable. It is natural to try to achieve this by using soft versions of the minimum (denoted by $\min_f$) and maximum (denoted by $\max_f$) operators. But before we consider concrete examples, let us collect some desirable properties that such relaxations should have. Naturally, $\min_f$ and $\max_f$ should satisfy many properties that their crisp / hard counterparts min and max satisfy, as well as a few others (for $a, b, c \in \mathbb{R}$):

Symmetry / Commutativity. Since min and max are symmetric / commutative, so should be their soft counterparts: $\min_f(a, b) = \min_f(b, a)$ and $\max_f(a, b) = \max_f(b, a)$.

Ordering.
Certainly, a (soft) maximum of two numbers should be at least as large as a (soft) minimum of the same two numbers: $\min_f(a, b) \le \max_f(a, b)$.

Continuity in Both Arguments. Both $\min_f$ and $\max_f$ should be continuous in both arguments.

Idempotency. If the two arguments are equal in value, this value should be the result of $\min_f$ and $\max_f$, that is, $\min_f(a, a) = \max_f(a, a) = a$. Note that ordering together with boundedness by the hard versions (see below) implies idempotency, viz.: $a = \min(a, a) \le \min_f(a, a) \le \max_f(a, a) \le \max(a, a) = a$. Without idempotency, the operators cannot be defined via a convex combination of their inputs, making it impossible to define proper soft argmin and argmax, and hence we could not compute differentiable permutation matrices.

Monotonicity in Both Arguments. For any $c > 0$, it should hold that $\min_f(a + c, b) \ge \min_f(a, b)$, $\min_f(a, b + c) \ge \min_f(a, b)$, $\max_f(a + c, b) \ge \max_f(a, b)$, and $\max_f(a, b + c) \ge \max_f(a, b)$. Note that the second expression for each operator follows from the first with the help of symmetry / commutativity.

Bounded Error / Minimum Deviation from Hard Versions. Soft versions of minimum and maximum should differ as little as possible from their crisp / hard counterparts. However, this condition needs to be made more precise to yield concrete properties (see below for details).

Note that $\min_f$ and $\max_f$ cannot satisfy associativity, as this would force them to be identical to their hard counterparts. Associativity means that $\max_f(a, \max_f(b, c)) = \max_f(\max_f(a, b), c)$ and $\min_f(a, \min_f(b, c)) = \min_f(\min_f(a, b), c)$. Now consider $a, b \in \mathbb{R}$ with $a < b$. Then, with associativity and idempotency, $\max_f(a, \max_f(a, b)) = \max_f(\max_f(a, a), b) = \max_f(a, b)$ and hence $\max_f(a, b) = b = \max(a, b)$ (by comparison of the second arguments). Analogously, one can show that if associativity held, we would have $\min_f(a, b) = a = \min(a, b)$. That is, one cannot have both associativity and idempotency. Note that without idempotency, the soft operators would not be bounded by their hard versions.
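These requirements (together with inversion, stability, and sum preservation, discussed further in this appendix) are easy to verify numerically for a concrete relaxation; a small sanity check using the Cauchy CDF sigmoid (illustrative, with floating-point tolerances):

```python
import math

def f(x):
    # Cauchy CDF sigmoid (beta = 1); any f with f(x) = 1 - f(-x) would do here
    return math.atan(x) / math.pi + 0.5

def soft_min(a, b):
    return f(b - a) * a + (1 - f(b - a)) * b

def soft_max(a, b):
    return (1 - f(b - a)) * a + f(b - a) * b

a, b, c = 0.7, -1.3, 2.1
eps = 1e-12
assert abs(soft_min(a, b) - soft_min(b, a)) < eps                  # symmetry
assert soft_min(a, b) <= soft_max(a, b)                            # ordering
assert abs(soft_min(a, a) - a) < eps                               # idempotency
assert abs(soft_min(a, b) + soft_max(-a, -b)) < eps                # inversion
assert abs(soft_min(a + c, b + c) - (soft_min(a, b) + c)) < eps    # stability
assert abs(soft_min(a, b) + soft_max(a, b) - (a + b)) < eps        # sum preservation
assert min(a, b) - eps <= soft_min(a, b) <= max(a, b) + eps        # bounded by hard versions
```

The convex-combination form used here is exactly the one derived below, which is why all axioms hold simultaneously.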
As idempotency is thus necessary, associativity has to be given up. If $\min_f$ and $\max_f$ are to be bounded by the crisp / hard versions, and symmetry, ordering, inversion and stability (which imply sum preservation) hold, they must be convex combinations of the arguments $a$ and $b$ with weights that depend only on the difference of $a$ and $b$. That is,

$\min_f(a, b) = f(b - a) \cdot a + (1 - f(b - a)) \cdot b,$
$\max_f(a, b) = (1 - f(b - a)) \cdot a + f(b - a) \cdot b,$

where $f(x)$ yields a value in $[0, 1]$ (due to boundedness of $\min_f$ and $\max_f$ by their crisp / hard counterparts). Due to inversion, $f$ must satisfy $f(x) = 1 - f(-x)$ and hence $f(0) = \frac{1}{2}$. Monotonicity of $\min_f$ and $\max_f$ requires that $f$ is a monotonically increasing function. Continuity requires that $f$ is a continuous function. In summary, $f$ must be a continuous sigmoid function (in the older meaning of this term, i.e., an s-shaped function, of which the logistic function is only a special case) satisfying $f(x) = 1 - f(-x)$.

As mentioned, the condition that the soft versions of minimum and maximum should deviate as little as possible from the crisp / hard versions causes a slight problem: this deviation can always be made smaller by making the sigmoid function steeper (reaching the crisp / hard versions in the limit of infinite inverse temperature, when the sigmoid function turns into the Heaviside step function). Hence, in order to find the best shape of the sigmoid function, we have to limit its inverse temperature. Therefore, w.l.o.g., we require the sigmoid function to be Lipschitz continuous with Lipschitz constant $\alpha = 1$.

C ADDITIONAL EXPERIMENTS

In Figure 6, we display additional results for more settings, analogous to Figure 5.

Figure 6: Additional results analogous to Figure 5. Evaluating different sigmoid functions on the sorting MNIST task for ranges of different inverse temperatures $\beta$. The metric is the proportion of individual element ranks correctly identified.
In all settings, the monotonic sorting networks clearly outperform the non-monotonic ones. The first three rows use odd-even networks with $n \in \{3, 5, 7, 9, 15, 32\}$. The last row uses bitonic networks with $n \in \{16, 32\}$.

D ADDITIONAL PROOFS

Theorem 9 (Error-Bounds of Diff. Sorting Networks). If the error of individual conditional swaps of a sorting network is bounded by $\epsilon$ and the network has $\ell$ layers, the total error is bounded by $\ell \cdot \epsilon$.

Proof. Induction over the number $k$ of executed layers. Let $x^{(k)}$ be the input $x$ differentiably sorted for $k$ layers, and let $\hat{x}^{(k)}$ be the input $x$ hard sorted for $k$ layers, as an anchor. We require this anchor, as it is possible that $\hat{x}^{(k)}_i < \hat{x}^{(k)}_j$ but $x^{(k)}_i > x^{(k)}_j$ for some $i, j, k$.

Begin of induction: $k = 0$. The input vector $x$ equals the vector $x^{(0)}$ after 0 layers. Thus, the error is equal to $0 \cdot \epsilon$.

Step of induction: Given that after $k - 1$ layers the error is smaller than or equal to $(k - 1)\epsilon$, we need to show that the error after $k$ layers is smaller than or equal to $k\epsilon$.

For $f_R$ with $\beta = 1$, the derivative $\frac{d}{dx}\big(x\,f_R(x)\big) = \frac{1}{2}\big(\frac{x}{1+|x|} + 1 + \frac{x}{1+2|x|+|x|^2}\big)$ simplifies further as

$= \frac{1}{2} \cdot \frac{x(1+|x|) + (1+2|x|+|x|^2) + x}{1+2|x|+|x|^2}$   (23)
$= \frac{1}{2} \cdot \frac{2x + 2|x| + x|x| + |x|^2 + 1}{1+2|x|+|x|^2}$   (24)
$= \frac{1}{2} \cdot \frac{2(x+|x|) + |x|(x+|x|) + 1}{1+2|x|+|x|^2}$   (25)
$\ge \frac{1}{2} \cdot \frac{1}{1+2|x|+|x|^2}$   (because $x+|x| \ge 0$)   (26)
$> 0.$   (27)

$\max_{f_R}$ is analogous.

Theorem 12. $\min_{f_C}$ and $\max_{f_C}$ are monotonic functions with the sigmoid function $f_C$.

E ADDITIONAL DISCUSSION

"How would the following baseline perform? Hard-rank the predictions and compare them with the ground-truth ranks. Then, use their difference as the learning signal (i.e., instead of the gradient)." This kind of supervision does not converge, even for small learning rates and in simplified settings. Specifically, we observed in our experiments that the range of values produced by the CNN gets compressed heavily by training in this fashion. Also, counteracting this by explicitly adding a term to spread the range out again did not help, and training was very unstable.
Despite testing various hyperparameters (learning rate; adaptation factor, both absolute and relative to the range of values in a batch or in the whole data set; spread factor; etc.), it did not work, even on toy data like single-digit MNIST with $n = 5$.

"Could $\beta$ be jointly trained as a parameter with the model?" Yes, it could; however, in our experiments, we found that the entire training performs better if $\beta$ is fixed. If $\beta$ is also a parameter to be trained, its learning rate should be very small, as it (i) should not change too fast and (ii) already accumulates many gradient signals because it is used many times.

(Sigmoid) function $f$ with $f: \mathbb{R} \to [0, 1]$, $\lim_{x\to-\infty} f(x) = 0$ and $\lim_{x\to\infty} f(x) = 1$.

Figure 4: $\min_f(x, 0)$ for Sinkhorn sort (red), NeuralSort (red), Relaxed Bubble sort (red), diffsort with logistic sigmoid (red), diffsort with activation replacement trick (purple), and Fast Sort (orange).

Table 3 (rows for the proposed functions, as "full rankings (individual ranks)"): $f_R$ (reciprocal sigmoid): 85.7 (89.8), 68.8 (84.2), 53.3 (80.0), 40.0 (76.3), 13.2 (66.0), 11.5 (64.9); $f_C$ (Cauchy CDF): 85.5 (89.6), 68.5 (84.1), 52.9 (79.8), 39.9 (75.8), 13.7 (66.0), 12.2 (65.6); $f_O$ (optimal sigmoid): 86.0 (90.0), 67.5 (83.5), 53.1 (80.0), 39.1 (76.0), 13.2 (66.3), 10.6 (66.8).

Figure 5: Evaluating different sigmoid functions on the sorting MNIST task for ranges of different inverse temperatures $\beta$. The metric is the proportion of individual element ranks correctly identified.

Inversion. As for min and max, the two operators $\min_f$ and $\max_f$ should be connected in such a way that the result of one operator equals the negated result of the other operator applied to negated arguments: $\min_f(a, b) = -\max_f(-a, -b)$ and $\max_f(a, b) = -\min_f(-a, -b)$.

Stability / Shift Invariance. Shifting both arguments by some value $c \in \mathbb{R}$ should shift each operator's result by the same value: $\min_f(a + c, b + c) = \min_f(a, b) + c$ and $\max_f(a + c, b + c) = \max_f(a, b) + c$. Stability implies that the values of $\min_f$ and $\max_f$ depend effectively only on the difference of their arguments.
Specifically, choosing $c = -a$ yields $\min_f(a, b) = \min_f(0, b - a) + a$ and $\max_f(a, b) = \max_f(0, b - a) + a$, and $c = -b$ yields $\min_f(a, b) = \min_f(a - b, 0) + b$ and $\max_f(a, b) = \max_f(a - b, 0) + b$.

Sum Preservation. The sum of $\min_f$ and $\max_f$ should equal the sum of min and max: $\min_f(a, b) + \max_f(a, b) = \min(a, b) + \max(a, b) = a + b$. Note that sum preservation follows from stability, inversion and symmetry: $\min_f(a, b) = \min_f(a - b, 0) + b = b - \max_f(0, b - a) = b - (\max_f(a, b) - a) = a + b - \max_f(a, b)$.

Bounded by Hard Versions. Soft operators should not yield values more extreme than their crisp / hard counterparts: $\min(a, b) \le \min_f(a, b)$ and $\max_f(a, b) \le \max(a, b)$.

Table 2: For each differentiable sorting operator, whether it is monotonic (M) and whether it has a bounded error (BE).

Method                                   | M | BE
NeuralSort                               | ✗ | ✓
Sinkhorn Sort                            | ✗ | ✓
Fast Sort                                | ✓ | ✗
Relaxed Bubble Sort                      | ✗ | ✓
Diff. Sorting Networks $\sigma$          | ✗ | ✓
Diff. Sorting Networks $\sigma\circ\varphi$ | ✗ | ✓
Diff. Sorting Networks $f_R$             | ✓ | ✓
Diff. Sorting Networks $f_C$             | ✓ | ✓
Diff. Sorting Networks $f_O$             | ✓ | ✓

The layer consists of comparator pairs $(i, j)$. W.l.o.g. we assume $x^{(k-1)}_i$

For Theorem 11, with $f_R(x) = \frac{1}{2}\big(\frac{x}{1+|x|}+1\big)$ (i.e., $\beta = 1$), the derivative of $x\,f_R(x)$ is computed as

$\frac{d}{dx}\Big(x \cdot \tfrac{1}{2}\big(\tfrac{x}{1+|x|}+1\big)\Big)$
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{d}{dx}\Big(\frac{x}{1+|x|}+1\Big)$   (16)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{d}{dx}\,\frac{x}{1+|x|}$   (17)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{\frac{dx}{dx}\cdot(1+|x|) - x\cdot\frac{d(|x|+1)}{dx}}{(1+|x|)^2}$   (18)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{(1+|x|) - x\,\mathrm{sgn}(x)}{(1+|x|)^2}$   (19)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{1+|x|-|x|}{(1+|x|)^2}$   (20)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1\Big) + x\cdot\frac{1}{2}\cdot\frac{1}{1+2|x|+|x|^2}$   (21)
$= \frac{1}{2}\Big(\frac{x}{1+|x|}+1+\frac{x}{1+2|x|+|x|^2}\Big)$   (22)
All data sets are publicly available. We specify all necessary hyperparameters for each experiment. We use the same model architectures as in previous works. We demonstrate how the choice of the hyperparameter $\beta$ affects the performance in Figure 5. Each experiment can be reproduced on a single GPU.

$\le x^{(k-1)}_j$. W.l.o.g. we assume that wire $i$ will be the min and that wire $j$ will be the max; therefore … $x_j$. We distinguish two cases: … have to be so close that, within the margin of error, such a reversed order is possible. According to the assumption, $x$ …

Theorem 11. $\min_{f_R}$ and $\max_{f_R}$ are monotonic functions with the sigmoid function $f_R$.

Proof. W.l.o.g., we assume $a_i = x$ and $a_j = 0$. To show monotonicity, we consider the derivative / slope.

Proof (of Theorem 12). W.l.o.g., we assume $a_i = x$ and $a_j = 0$. To show monotonicity, we consider the derivative; to reason about it, we also consider the second derivative. For $z \in (-\infty, 0]$: the derivative of $\min_{f_C}(0, x)$ converges to 1 for $z \to -\infty$ (Eq. 31). For $z \in [0, \infty)$: the derivative of $\min_{f_C}(0, x)$ converges to 0 for $z \to \infty$ (Eq. 30). The second derivative (Eq. 32) of $\min_{f_C}(0, x)$ is always negative. Therefore, the derivative is always in $(0, 1)$, and thus always positive. Hence, $\min_{f_C}(0, x)$ is strictly monotonic. $\max_{f_C}$ is analogous.

[1] A. Grover, E. Wang, A. Zweig, and S. Ermon, "Stochastic Optimization of Sorting Networks via Continuous Relaxations," in International Conference on Learning Representations (ICLR), 2019.
[2] M. Cuturi, O. Teboul, and J.-P. Vert, "Differentiable ranking and sorting using optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2019.
[3] M. Blondel, O. Teboul, Q. Berthet, and J. Djolonga, "Fast Differentiable Sorting and Ranking," in Proc. Machine Learning Research (PMLR), International Conference on Machine Learning (ICML), 2020.
[4] F. Petersen, C. Borgelt, H. Kuehne, and O. Deussen, "Differentiable sorting networks for scalable sorting and ranking supervision," in Proc. Machine Learning Research (PMLR), International Conference on Machine Learning (ICML), 2021.
[5] D. E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching (2nd Ed.), Addison Wesley, 1998.
[6] K. C. Kiwiel, "Convergence and efficiency of subgradient methods for quasiconvex minimization," Mathematical Programming, vol. 90, 2001.
[7] Y. LeCun, C. Cortes, and C. Burges, "MNIST handwritten digit database," 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist.
[8] G. Mena, D. Belanger, S. Linderman, and J. Snoek, "Learning latent permutations with Gumbel-Sinkhorn networks," in International Conference on Learning Representations (ICLR), 2018.
[9] M. Cuturi, "Sinkhorn distances: Lightspeed computation of optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2013.
[10] F. Petersen, C. Borgelt, H. Kuehne, and O. Deussen, "Learning with algorithmic supervision via continuous relaxations," in Proc. Neural Information Processing Systems (NeurIPS), 2021.
[11] H. Lee, S. Cho, Y. Jang, J. Kim, and H. Woo, "Differentiable ranking metric using relaxed sorting for top-k recommendation," IEEE Access, 2021.
[12] R. Swezey, A. Grover, B. Charron, and S. Ermon, "PiRank: Learning to rank via differentiable sorting," in Proc. Neural Information Processing Systems (NeurIPS), 2021.
[13] J.-B. Cordonnier, A. Mahendran, A. Dosovitskiy, D. Weissenborn, J. Uszkoreit, and T. Unterthiner, "Differentiable patch selection for image recognition," in Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[14] Y. Xie, H. Dai, M. Chen, B. Dai, T. Zhao, H. Zha, W. Wei, and T. Pfister, "Differentiable top-k with optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2020.
[15] K. Goyal, G. Neubig, C. Dyer, and T. Berg-Kirkpatrick, "A continuous relaxation of beam search for end-to-end training of neural sequence models," in AAAI Conference on Artificial Intelligence, 2018.
[16] A. N. Habermann, "Parallel neighbor-sort (or the glory of the induction principle)," 1972.
[17] K. E. Batcher, "Sorting networks and their applications," in Proc. AFIPS Spring Joint Computing Conference (Atlantic City, NJ), 1968, pp. 307-314.
[18] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," in Proc. Neural Information Processing Systems (NeurIPS), 2014.
[19] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," 2011.
Wilson, fixed point and Neuberger's lattice Dirac operator for the Schwinger model

F. Farchioni, I. Hip, C. B. Lang
Institut für Theoretische Physik, Universität Graz, A-8010 Graz, Austria

Abstract: We perform a comparison between different lattice regularizations of the Dirac operator for massless fermions in the framework of the single and two flavor Schwinger model. We consider a) the Wilson-Dirac operator at the critical value of the hopping parameter; b) Neuberger's overlap operator; c) the fixed point operator. We test chiral properties of the spectrum, dispersion relations and rotational invariance of the mesonic bound state propagators.

DOI: 10.1016/S0370-2693(98)01343-4
arXiv: hep-lat/9809016 (PDF: https://export.arxiv.org/pdf/hep-lat/9809016v2.pdf)
Wilson, fixed point and Neuberger's lattice Dirac operator for the Schwinger model*

December 1, 2021

F. Farchioni, I. Hip, C. B. Lang
Institut für Theoretische Physik, Universität Graz, A-8010 Graz, Austria

arXiv:hep-lat/9809016v2, 8 Sep 1998. PACS: 11.15.Ha, 11.10.Kk.
Key words: Lattice field theory, fixed point action, overlap operator, Dirac operator spectrum, zero-modes, topological charge, Schwinger model.
* Supported by Fonds zur Förderung der Wissenschaftlichen Forschung in Österreich, Project P11502-PHY.

Motivation and Introduction

The Nielsen-Ninomiya [1] theorem and the Ginsparg-Wilson [2] condition (GWC) provide us with the crucial information under which circumstances [3] remnants of chiral symmetry may stay with a lattice action for fermions, as is, e.g., the case for overlap fermions [4] or fixed point actions [5]. Recently Lüscher [6] has pointed out the explicit form of the underlying symmetry and indicated possible generalizations. Fermion actions for massless fermions satisfying the GWC

$\frac{1}{2}\,\{D, \gamma_5\} = a\, D\, \gamma_5\, R\, D,$   (1)

where $D$ is the lattice Dirac operator, violate chiral symmetry only up to a local term of $O(a)$; the r.h.s. of the above equation is local since $R$ is.
It was pointed out in [3] that actions which are fixed points under real-space renormalization group (block spin) transformations (BST) are solutions of the GWC. $R$ is then local and bounded, and as a consequence the spectrum of $D$ in the complex plane is confined between two circles [5]:

$|\lambda - r_{\min}| \ge r_{\min}, \qquad |\lambda - r_{\max}| \le r_{\max},$   (2)

where the real numbers $r_{\min}$ and $r_{\max}$ are related to the maximum and minimum eigenvalue of $R$, respectively. For non-overlapping BSTs, $R = \frac{1}{2}$ and (2) reduces to $|\lambda - 1| = 1$, i.e., the spectrum lies on a circle.

Independent solutions of the GWC are provided by the overlap formalism [4], which allows the formulation of chiral fermions on the lattice. These solutions are obtained in an elegant way, as shown recently by Neuberger [7,8], through a projection of the Wilson operator with negative fermion mass. Also in this case we have $R = \frac{1}{2}$ and $|\lambda - 1| = 1$.

For the Schwinger model (2D QED) we are in a situation where we have access to three different lattice Dirac operators for massless fermions, namely the original Wilson operator $D_{\mathrm{Wi}}$ at $\kappa_c(\beta)$, the Neuberger-projected operator $D_{\mathrm{Ne}}$, and a numerically determined and therefore approximate fixed point operator $D_{\mathrm{Fp}}$ [9,10]. Studying these alternatives, we may ask the following questions:

• For given gauge field configurations: What is the relation between real eigenvalues of the lattice Dirac operator and the geometric topological charge? To what extent is the Atiyah-Singer Index Theorem (ASIT) realized in these lattice environments?

• In the continuum, the eigenvalue distribution is related to the condensate $\langle\bar\psi\psi\rangle$ (Banks-Casher formula [11]). What about the lattice theory for the given Dirac operators?

• The first two questions concern chiral symmetry and the phenomenon of fermion condensation. Important for the eventual study of the continuum limit of the full theory are also spectral properties: What is the behavior of, e.g., dispersion relations?
Concerning off-shell properties, what about the recovery of rotational invariance of the propagators?

Here we perform a Monte Carlo simulation for the one- and two-flavor Schwinger model with gauge fields in the compact representation and the three different lattice Dirac operators. For the two-flavor model one expects a massless bound state in the chiral limit, which is of particular interest in this framework. We stress that the ensemble of gauge configurations used is the same in all three cases, the sampling being performed using the one-plaquette standard gauge action. The unquenching is obtained through multiplication by the fermion determinant.

Lattice Dirac Operators

The three fermion actions may be written ψ̄ D ψ, where the fields at each site are two-component Grassmann variables ψ̄, ψ and D is a matrix in Euclidean and Dirac space (the lattice Dirac operator). For two flavors the number of fields duplicates and the action becomes ū D u + d̄ D d (with independent Grassmann fields ū, u, d̄, d). All three Dirac operators are non-hermitian but have γ5-hermiticity: γ5 D γ5 = D†. Their eigenvalue spectrum is therefore symmetric with regard to complex conjugation.

Wilson Dirac operator

We write the Wilson Dirac operator in the form

    D_Wi(x,y) = (m + D) 1_{xy} − (1/2) Σ_µ [ (1 + σ_µ) U_{xy} δ_{x,y−µ} + (1 − σ_µ) U†_{yx} δ_{x,y+µ} ] .    (3)

The hopping parameter κ is related to the (bare) quark mass m through the relation κ = 1/(2m + 2D). The chiral limit is obtained in this environment for κ → κ_c(β), where κ_c(β) has to be determined in some way (see the following). Thus we will work at D = 2 and κ = κ_c(β) (corresponding to massless quarks), as discussed below.

Neuberger's operator

Neuberger suggests [7] to start with the Wilson Dirac operator at some value of m ∈ (−1, 0), corresponding to 1/(2D) < κ < 1/(2D−2), and then construct

    D_Ne = 1 + γ5 ε(γ5 D_Wi) .    (4)

We call the actual value of κ used in the above definition κ_Ne.
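The construction (3) is easy to exercise numerically. The sketch below is an illustrative toy, not the simulation code of the paper: it assembles D_Wi on a small periodic 4x4 lattice with random compact U(1) links, using σ1, σ2 as the Euclidean Dirac matrices in two dimensions and γ5 = σ3, and checks γ5-hermiticity. Lattice size, bare mass and random seed are arbitrary choices.

```python
import numpy as np

L, D, m = 4, 2, 0.1                      # lattice extent, dimension, bare mass (illustrative)
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)  # plays the role of gamma_5 in 2D
sigma = [s1, s2]

rng = np.random.default_rng(1)
U = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(2, L, L)))  # compact U(1) links U_mu(x)

def site(x, y):                           # flattened periodic site index
    return (x % L) * L + (y % L)

n = L * L
Dw = np.zeros((2 * n, 2 * n), complex)    # two spinor components per site
I2 = np.eye(2)
for x in range(L):
    for y in range(L):
        s = site(x, y)
        Dw[2*s:2*s+2, 2*s:2*s+2] += (m + D) * I2       # diagonal term of eq. (3)
        for mu, (dx, dy) in enumerate([(1, 0), (0, 1)]):
            f = site(x + dx, y + dy)      # forward neighbour y = x + mu
            b = site(x - dx, y - dy)      # backward neighbour y = x - mu
            Dw[2*s:2*s+2, 2*f:2*f+2] += -0.5 * (I2 + sigma[mu]) * U[mu, x, y]
            Dw[2*s:2*s+2, 2*b:2*b+2] += -0.5 * (I2 - sigma[mu]) * np.conj(U[mu, (x - dx) % L, (y - dy) % L])

g5 = np.kron(np.eye(n), s3)
assert np.allclose(g5 @ Dw @ g5, Dw.conj().T)          # gamma_5-hermiticity
print("kappa =", 1 / (2 * m + 2 * D))                  # kappa = 1/(2m + 2D)
```

The sign conventions for the projectors follow the reconstruction of (3) above; a different convention only relabels forward and backward hops and leaves γ5-hermiticity intact.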
Some words about the choice of κ_Ne: according to [8] it is arbitrary, in the sense that any (strictly negative) value of m in the interval (−1, 0) reproduces the correct continuum theory (see also the discussion on a suitable choice of κ_Ne in [12]), but it may be optimized with regard to its scale dependence by looking, for example, at the behavior of the (projected) spectrum. Comparing expectation values of operators like ⟨ψ̄ψ⟩ for different κ_Ne, one has to take care of the proper normalization [13]. One may see from the comparison with free lattice fermions that there is a (trivial) factor of √m per field, i.e. ⟨ψ̄ψ⟩ = ⟨ψ̄ψ⟩_Ne/|m| in our convention.(1) Whenever not mentioned otherwise we choose κ_Ne = 1/2 (m = −1) for our exploration; we also discuss results with some smaller values.

The operative definition of ε(γ5 D_Wi) entering the above equation is

    ε(γ5 D_Wi) = U Sign(Λ) U† ,  with  γ5 D_Wi = U Λ U† .    (5)

Here Sign(Λ) denotes the diagonal matrix containing the signs of the eigenvalue matrix Λ obtained through the unitary transformation U of the hermitian matrix γ5 D_Wi. There are various efficient ways to numerically find D_Ne without passing through the diagonalization problem, which is prohibitive for D = 4 [14] (for D = 2, see also [15]). In our simple context computer time is no real obstacle and therefore we use the direct definition (5), explicitly performing the diagonalization.

As observed in [8], gauge configurations with non-zero topological charge imply exact zero eigenvalues of D_Ne. The subsequent inversion, necessary to find the quark propagators, is then not possible. The correct way to proceed is to introduce a regulator cutoff-mass, D → D + µ 1, and then consider the limit µ → 0.
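The projection (4)-(5) can be checked on a toy input. The sketch below is illustrative only: a random γ5-hermitian matrix stands in for γ5 D_Wi at negative mass (no actual gauge background is used). It verifies that the resulting operator satisfies the GWC (1) with a = 1 and R = 1/2, and that its spectrum lies on the circle |λ − 1| = 1.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
g5 = np.kron(np.eye(n // 2), np.diag([1.0, -1.0]))

# Random matrix, made gamma_5-hermitian: g5 X g5 = X^dagger by construction.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Dwi = 0.5 * (A + g5 @ A.conj().T @ g5)

H = g5 @ Dwi                              # hermitian, since Dwi is gamma_5-hermitian
lam, Uh = np.linalg.eigh(H)
eps = Uh @ np.diag(np.sign(lam)) @ Uh.conj().T   # epsilon(gamma_5 D_Wi), eq. (5)
Dne = np.eye(n) + g5 @ eps                # eq. (4)

# Ginsparg-Wilson relation (1) with a = 1, R = 1/2.
lhs = 0.5 * (Dne @ g5 + g5 @ Dne)
rhs = 0.5 * (Dne @ g5 @ Dne)
assert np.allclose(lhs, rhs)

# Spectrum on the circle |lambda - 1| = 1, since g5 @ eps is unitary.
ev = np.linalg.eigvals(Dne)
assert np.allclose(np.abs(ev - 1.0), 1.0)
```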
Fixed point Dirac operator

In [9] the fixed point Dirac operator was parameterized as

    D_Fp(x,y) = Σ_{i=0}^{3} Σ_f ρ_i(f) σ_i U(x,f) ,  with  y ≡ x + δf .    (6)

Here f denotes a closed loop through x or a path from the lattice site x to y = x + δf (distance vector δf), and U(x,f) is the parallel transporter along this path. The σ_i matrices denote the Pauli matrices for i = 1, 2, 3 and the unit matrix for i = 0. The action obeys the usual symmetries as discussed in [9]; altogether it has 429 terms per site. The action was determined for gauge fields distributed according to the non-compact formulation with the Gaussian measure. There, excellent scaling properties, rotational invariance and continuum-like dispersion relations were observed at various values of the gauge coupling β.

In [10] the action was studied both for compact and for the original non-compact gauge field distributions. In the compact case the action is not expected to exactly reproduce the fixed point of the corresponding BST, but it is nevertheless still a solution of the GWC; violations of the GWC are instead introduced by the parameterization procedure, which cuts off the less local couplings. We demonstrated that indeed the spectrum is close to circular, somewhat fuzzy at small values of β ≤ 2 but excellently living up to the theoretical expectations of [5] at large gauge couplings β ≥ 4. Here we study the action only for the compact gauge field distributions, in order to allow a direct comparison with the other lattice Dirac operators.

Simulation Details

Uncorrelated gauge configurations have been generated in the quenched setup. However, we include the fermionic determinant in the observables: all the results presented here are obtained with the correct determinant weight (squared, for two flavors). From earlier experience [9,10] we know that this is justifiable for the presented statistics. We perform our investigation on three sets of 5000-10000 configurations at β = 2, 4 and 6.
The so-called "geometric definition" of the topological charge is

    Q_G = (1/2π) Σ_x Im ln(U_12(x)) ;    (7)

we keep track of its value for all our configurations. The configurations have been well separated by 3 τ_int, where τ_int is the autocorrelation length for Q_G.

For D_Wi we need to determine κ_c(β). We use PCAC techniques for this purpose [16], determining κ_c for the unquenched 2-flavor case. For each configuration we then build D_Wi (at κ_c(β)), D_Ne and D_Fp as discussed. Each lattice Dirac matrix is diagonalized to obtain the complex eigenvalue spectrum. This is somewhat time-consuming due to the non-hermiticity. Furthermore the inverse (the quark propagator) is determined.

In the 2-flavor Schwinger model one expects [17] (for a recent discussion cf. [18]) one massive mode (called η by analogy) and a massless flavor-triplet (called π). The corresponding momentum-projected operators are

    η(p, t)  = Σ_{x1} e^{i p x1} [ ū(x1, t) σ1 u(x1, t) + d̄(x1, t) σ1 d(x1, t) ] ,    (8)
    π3(p, t) = Σ_{x1} e^{i p x1} [ ū(x1, t) σ1 u(x1, t) − d̄(x1, t) σ1 d(x1, t) ] .    (9)

Their correlation functions define, through their exponential decay, the corresponding energy functions E(p) and thereby the dispersion relation. In the 1-flavor case only the massive mode is present. We also study rotational symmetry via the correlation function

    P(x) = ⟨ ψ̄(0) σ3 ψ(0) ψ̄(x) σ3 ψ(x) ⟩    (10)

measured for all 2-point separations.

As mentioned before, in order to avoid numerical problems with inversions for the Dirac operator with (almost or exactly) zero eigenvalues, we introduce a small regulator mass µ. It turned out that for µ = O(10^−3) or smaller the result is practically insensitive to this cut-off, the inversion algorithm still working properly.

Discussion of the Results

From the definition (4) of D_Ne one could naively expect that its spectrum is obtained from that of the starting Wilson operator, at a proper value of κ_Ne, by simply projecting the eigenvalues onto the circle |λ − 1| = 1 in the complex plane.
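The geometric definition (7) can be illustrated on random links. Since each link enters exactly two plaquettes with opposite orientation, the principal-branch plaquette angles on a periodic lattice sum to an integer multiple of 2π, so Q_G is an integer. A minimal sketch (lattice size and seed are arbitrary, and random links are used instead of equilibrated configurations):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 8
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))   # link angles, U_mu(x) = exp(i*theta)

# Plaquette angle theta_1(x) + theta_2(x+e1) - theta_1(x+e2) - theta_2(x).
plaq = (theta[0]
        + np.roll(theta[1], -1, axis=0)
        - np.roll(theta[0], -1, axis=1)
        - theta[1])

# Im ln U_12(x) is the principal branch of the plaquette angle.
Q = np.sum(np.angle(np.exp(1j * plaq))) / (2 * np.pi)
assert abs(Q - round(Q)) < 1e-9                      # Q_G is an integer
print("Q_G =", round(Q))
```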
This is not quite the case, although it becomes more and more so for larger β, approaching the continuum limit. In particular the real modes are projected either onto λ = 0 or onto λ = 2 (see Fig. 1). In the case of a pair of real modes of opposite chirality, they split into two conjugate complex eigenvalues on the circle. Finally one is left with zero modes of only one definite chirality (in agreement with the so-called vanishing theorem valid in D = 2 [19]) and an equal number of modes at λ = 2 with the opposite chirality. As a consequence of this splitting of chirality pairs, the number of zero modes of D_Ne is smaller than the number of small real eigenvalues of D_Wi.

In Fig. 1 we compare, for a fixed gauge configuration (at β = 6), the eigenvalues of D_Wi (at κ = 1/2 and 1/3; n.b. this is still above κ_c) with those of D_Ne − 1 = γ5 ε(γ5 D_Wi) resulting from the projection (4). We find that the negative real eigenvalues of D_Wi are projected to −1 (corresponding to λ = 0): their number, counted according to the signs of their chirality, agrees with the number of zero modes of D_Ne. Gauge configurations at smaller β have more eigenvalues on the real axis, with a distribution density becoming broader with decreasing β. Increasing (decreasing) κ_Ne shifts the spectrum of D_Wi towards the left (right), and so more (fewer) zero modes may be obtained as a result of the projection.

For small β there is no clear distinction for D_Wi between the physical branch of the real spectrum and the eigenvalues due to doubler modes. This uncertainty is discussed in the framework of the overlap method [20] and the Wilson and Sheikholeslami-Wohlert actions [21]. For D_Ne, as for D_Wi, not all zero modes can be equated with the geometrically defined topological charge of the gauge configuration.
Of course one can still define the topological charge as the number of zero modes (or, equivalently, the number of λ = 2 modes), counting the chirality properly; in the case of the fixed point action one obtains in this way the fixed point topological charge of the lattice configuration [5]. This ambiguity vanishes towards larger β, approaching the continuum limit.

Table 1 (upper part) shows the percentage p(β) of configurations where the number of zero modes counted according to the sign of their chirality, (n_R − n_L) (cf. the discussion in [22]), agrees with the geometric topological charge. The agreement of the first two lines is trivially explained, since the real modes counted for D_Wi are those projected to λ = 0 (or split into complex pairs in the case of a real pair of opposite chirality) for D_Ne. This agreement stems from the choice of κ_Ne as "cut-off" for the counting of the real modes of D_Wi. In Table 1 (lower part) we confirm that for D_Ne the zero modes have just one definite chirality, whereas for the other actions both chiralities contribute, displaying a violation of the vanishing theorem (in the case of D_Fp we believe that this violation is an effect of the truncation of the less local couplings). For larger β all three actions recover the vanishing theorem and the ASIT: (n_R − n_L) = Q_G.

The density distributions of eigenvalues on (for D_Ne) or almost on (for D_Fp) the circle agree with each other at small eigenvalues, with improving agreement for increasing β. For κ_Ne = 1/2, already at β = 6 the densities are indistinguishable within the statistical errors for Im(λ) < 0.6 (Fig. 2; here we adopt the definition in [10] for the projection of the density distribution of eigenvalues onto the imaginary axis). Choosing another κ_Ne produces another eigenvalue distribution. Comparing these with each other and with the value of ⟨ψ̄ψ⟩ [11], one has to take care of the proper normalization of the fermion fields, as discussed above in Sec. 2.2.
A suitable lattice definition of ⟨ψ̄ψ⟩ was suggested in the framework of the overlap action [23]; it can be generalized to any fermions obeying the GWC [24] (see [10] for an application). Also this quantity shows excellent agreement between D_Ne at κ_Ne = 1/2 and D_Fp. We obtain for the two actions (in lattice units): 0.073(4) and 0.072(7), respectively, for β = 4, and 0.062(3) and 0.063(3) for β = 6 (to be compared to the continuum values 0.080 and 0.065 [25]). Choosing κ_Ne = 1/3 with the appropriate normalization (with the factor for free fermions) we find 0.058(1) at β = 6. The condensate has been studied in the 1-flavor Schwinger model in the overlap formalism already in [26], where good agreement with the expected values was also demonstrated.

Discussing the mass spectrum, we find that for D_Ne (κ_Ne = 1/2) the vanishing of the π-mass is excellently realized (see Fig. 3); this is a remarkable improvement with respect to D_Wi, where the tuning to κ_c creates non-negligible technical problems. The behavior of the energy dispersion for non-zero momenta shows no improvement compared to D_Wi, while for D_Fp it is almost linear, as expected for a massless particle in the continuum: D_Fp eliminates the cut-off effects, producing a continuum-like propagator for all momenta. The massive state displays qualitatively similar behavior. Decreasing κ_Ne in the definition of D_Ne from the suggested value 1/2 down to 1/3 enlarges the values of E(p) to slightly above those of D_Wi, but still similar in the overall shape. Changing further to even smaller κ_Ne, closer to 1/4, we observe that the propagators do not reach asymptotic behavior on the lattice size studied (even for κ_Ne > κ_c) and no mass plateaus can be identified. We suspect that this indicates larger corrections to scaling.

We also studied the correlation function (10) for the three actions. The rotational symmetry properties for D_Ne are comparable to those of D_Wi.
Action D_Fp shows the best rotational invariance. In summary, we find that the real-space correlation functions and the dispersion relations are not noticeably improved by D_Ne, as compared to those for D_Fp, which show a behavior substantially closer to the continuum. It is expected that D_Ne is automatically O(a) corrected [27] and thus improves scaling for on-shell quantities; without at least introducing improvement of the current operators one would not expect improvement for the propagators, as exhibited by our results.

We conclude that the identification of zero modes with the geometric topological charge for D_Ne agrees with that for D_Wi, if the real modes are counted according to their chirality. D_Fp shows generally better behavior. The vanishing theorem, however, is satisfied automatically for D_Ne, since only zero modes of one chirality result from the projection. The bound state masses and the condensate come out similarly. The rotational invariance and the dispersion relations of the current propagators are not significantly improved for D_Ne as compared to D_Wi, but they are definitely better for D_Fp.

Figure 1: Eigenvalues of D_Wi (open circles) and of the term γ5 ε(γ5 D_Wi) (full diamonds) in D_Ne; left: κ_Ne = 1/2, right: κ_Ne = 1/3.

Figure 2: The unquenched eigenvalue density distribution projected from the circle onto the imaginary axis at β = 6 for D_Fp (thick lines) and D_Ne with κ_Ne = 1/2 (thin lines). The horizontal line denotes the continuum value at infinite volume.

Figure 3: The dispersion relation E(p) for (upper figure) the π and (lower figure) the η propagator for the three actions studied (lattice size 16^2, β = 6); squares: Wilson action, diamonds: Neuberger action, circles: fixed point action.

(1) We thank H. Neuberger for pointing this out to us.

Acknowledgment: I.H. wishes to thank S. Chandrasekharan for a stimulating discussion and for sharing some of his unpublished results. We are grateful to W.
Bietenholz and F. Niedermayer for discussions. Support by Fonds zur Förderung der Wissenschaftlichen Forschung in Österreich, Project P11502-PHY, is gratefully acknowledged.

References

[1] H. Nielsen and M. Ninomiya, Nucl. Phys. B 185 (1981) 20.
[2] P. H. Ginsparg and K. G. Wilson, Phys. Rev. D 25 (1982) 2649.
[3] P. Hasenfratz, Nucl. Phys. B (Proc. Suppl.) 63A-C (1997) 53.
[4] R. Narayanan and H. Neuberger, Phys. Lett. B 302 (1993) 62; Phys. Rev. Lett. 71 (1993) 3251; Nucl. Phys. B 412 (1994) 574; ibid. B 443 (1995) 305.
[5] P. Hasenfratz, V. Laliena, and F. Niedermayer, Phys. Lett. B 427 (1998) 125.
[6] M. Lüscher, Phys. Lett. B 428 (1998) 342.
[7] H. Neuberger, Phys. Lett. B 417 (1998) 141.
[8] H. Neuberger, Phys. Lett. B 427 (1998) 353.
[9] C. B. Lang and T. K. Pany, Nucl. Phys. B 513 (1998) 645.
[10] F. Farchioni, C. B. Lang, and M. Wohlgenannt, Phys. Lett. B 433 (1998) 377.
[11] T. Banks and A. Casher, Nucl. Phys. B 169 (1980) 103.
[12] S. Chandrasekharan, hep-lat/9805015.
[13] Y. Kikukawa, R. Narayanan, and H. Neuberger, Phys. Rev. D 57 (1998) 1233.
[14] H. Neuberger, hep-lat/9806025; R. G. Edwards, U. M. Heller, and R. Narayanan, hep-lat/9807017.
[15] T.-W. Chiu, hep-lat/9804016.
[16] I. Hip, C. B. Lang, and R. Teppner, Nucl. Phys. B (Proc. Suppl.) 63 (1998) 682.
[17] S. Coleman, Ann. Phys. 101 (1976) 239.
[18] C. R. Gattringer and E. Seiler, Ann. Phys. 233 (1994) 97.
[19] J. Kiskis, Phys. Rev. D 15 (1977) 2329; N. K. Nielsen and B. Schroer, Nucl. Phys. B 127 (1977) 493; M. M. Ansourian, Phys. Lett. 70B (1977) 301.
[20] R. G. Edwards, U. M. Heller, and R. Narayanan, hep-lat/9802016.
[21] C. Gattringer and I. Hip, hep-lat/9806032.
[22] P. Hernandez, hep-lat/9801035.
[23] H. Neuberger, Phys. Rev. D 57 (1998) 5417.
[24] P. Hasenfratz, Nucl. Phys. B 525 (1998) 401.
[25] I. Sachs and A. Wipf, Helv. Phys. Acta 65 (1992) 653.
[26] R. Narayanan, H. Neuberger, and P. Vranas, Phys. Lett. B 353 (1995) 507.
[27] Y. Kikukawa, R. Narayanan, and H. Neuberger, Phys. Lett. B 399 (1997) 105; see also: F. Niedermayer, plenary talk given at "Lattice 98", Boulder, July 1998.
The Dual Graph Shift Operator: Identifying the Support of the Frequency Domain

Geert Leus, Santiago Segarra, Alejandro Ribeiro, Antonio G. Marques

arXiv:1705.08987; DOI: 10.1007/s00041-021-09850-1

Abstract: Contemporary data is often supported by an irregular structure, which can be conveniently captured by a graph. Accounting for this graph support is crucial to analyze the data, leading to an area known as graph signal processing (GSP). The two most important tools in GSP are the graph shift operator (GSO), which is a sparse matrix accounting for the topology of the graph, and the graph Fourier transform (GFT), which maps graph signals into a frequency domain spanned by a number of graph-related Fourier-like basis vectors. This alternative representation of a graph signal is denominated the graph frequency signal. Several attempts have been undertaken in order to interpret the support of this graph frequency signal, but they all resulted in a one-dimensional interpretation. However, if the support of the original signal is captured by a graph, why would the graph frequency signal have a simple one-dimensional support? That is why, for the first time, we propose an irregular support for the graph frequency signal, which we coin the dual graph. The dual GSO leads to a better interpretation of the graph frequency signal and its domain, helps to understand how the different graph frequencies are related and clustered, enables the development of better graph filters and filter banks, and facilitates the generalization of classical SP results to the graph domain.
Index Terms: Graph signal processing, dual graph shift operator, frequency support, graph Fourier transform, duality.

I. INTRODUCTION

Graph signal processing (GSP) has emerged as an effective solution to handle data with an irregular support.
Its approach is to represent this support by a graph, view the data as a signal defined on its nodes, and use algebraic and spectral properties of the graph to study the signals [1]. Such a data structure appears in many domains, including social networks, smart grids, sensor networks, and neuroscience. Instrumental to GSP are the notions of the graph shift operator (GSO), which is a matrix that accounts for the topology of the graph, and the graph Fourier transform (GFT), which allows the representation of graph signals in the so-called graph frequency domain. These tools are the fundamental building blocks for the development of compression schemes, filter banks, node-varying filters, windows, and other GSP techniques [2]-[8].

Motivated by the practical importance of the GFT, some efforts have been made to establish a total ordering of the graph frequencies [1], [9], [10], implicitly assuming a one-dimensional support for the graph frequency signal. Such an ordering translates into proximities between frequencies, which are critical for the definition of bandlimitedness and smoothness as well as for the design of sampling and (bank) filtering schemes. However, the basis vectors associated with frequencies that are close in such one-dimensional domains are often dissimilar and focus on completely different parts of the graph [11], suggesting that a one-dimensional support is not descriptive enough to capture the similarity relationships between graph frequencies. To overcome that limitation, we propose the first description of the (not necessarily regular) support of a graph frequency signal by means of a graph (which we denominate the dual graph) and its corresponding dual GSO. This dual GSO helps in describing the existing relations across frequencies, which can be ultimately leveraged to enhance existing vertex-frequency GSP schemes.

II. THE DUAL GRAPH

We start by reviewing fundamental concepts of GSP and then formally state the problem of identifying the dual GSO.

A. Fundamentals of GSP

Consider a graph G of N nodes or vertices with node set N = {n_1, ..., n_N} and edge set E = {(n_i, n_j) | n_i is connected to n_j}. The graph G is further characterized by the so-called GSO, an N × N matrix S whose entries [S]_ij for i ≠ j are zero whenever nodes n_i and n_j are not connected. The diagonal entries of S can be selected freely, and typical choices for the GSO include the Laplacian or adjacency matrices [1], [9]. A graph signal defined on G can be conveniently represented by a vector x = [x_1, ..., x_N]^T ∈ C^N, where x_i is the signal value associated with node n_i.

The GSO S, encoding the structure of the graph, is crucial to define the GFT and graph filters. The former transforms graph signals into a frequency domain, whereas the latter represents a class of local linear operators between graph signals. Assume for simplicity that the GSO S is normal, such that its eigenvalue decomposition (EVD) can always be written as S = V Λ V^H, where V is a unitary matrix that stacks the eigenvectors and Λ is a diagonal matrix that stacks the eigenvalues. To simplify exposition, we also assume that the eigenvalues of the shift are simple (non-repeated), such that the associated eigenspaces are one-dimensional. The eigenvectors V = [v_1, ..., v_N] correspond to the graph frequency basis vectors, whereas the eigenvalues λ = diag(Λ) = [λ_1, ..., λ_N]^T can be viewed as graph frequencies. With these conventions, the definitions of the GFT and graph filters are given next.

Definition 1. Given the GSO S = V Λ V^H, the GFT of the graph signal x ∈ C^N is x̂ = [x̂_1, ..., x̂_N]^T := V^H x.

Definition 2. Given the GSO S = V Λ V^H, a graph filter H ∈ C^{N×N} of degree L is a graph-signal operator of the form

    H = H(h, S) := Σ_{l=0}^{L} h_l S^l = V diag(h̃) V^H ,    (1)

where h := [h_0, ..., h_L]^T and h̃ := diag(Σ_{l=0}^{L} h_l Λ^l).
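Definitions 1 and 2 are easy to verify numerically. The sketch below, an illustration with an arbitrary random graph rather than code from the letter, builds a symmetric (hence normal) GSO, computes the GFT and its inverse, and checks that the polynomial and spectral forms of the filter in (1) coincide.

```python
import numpy as np

rng = np.random.default_rng(4)
N, Lf = 6, 3
A = rng.integers(0, 2, size=(N, N))
S = np.triu(A, 1) + np.triu(A, 1).T          # symmetric adjacency GSO
lam, V = np.linalg.eigh(S)                   # S = V diag(lam) V^H

x = rng.standard_normal(N)
x_hat = V.conj().T @ x                       # GFT (Definition 1)
assert np.allclose(V @ x_hat, x)             # inverse GFT

h = rng.standard_normal(Lf + 1)              # filter coefficients
H_poly = sum(h[l] * np.linalg.matrix_power(S, l) for l in range(Lf + 1))
h_tilde = sum(h[l] * lam**l for l in range(Lf + 1))   # frequency response
H_spec = V @ np.diag(h_tilde) @ V.conj().T
assert np.allclose(H_poly, H_spec)           # the two forms in eq. (1) agree
```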
Definition 1 implies that the inverse GFT (iGFT) is simply x = V x̂. The vector h in Definition 2 collects the filter coefficients, and h̃ ∈ C^N in (1) can be viewed as the frequency response of the filter. The particular case of the filter H = S, for which h̃ = λ, will be the subject of further discussion in Section III. Graph filters and the GFT have been shown useful for sampling, compression, filtering, windowing, and spectral estimation of graph signals [2]-[8].

B. Support of the frequency domain

The underlying assumption in GSP is that to analyze and process the graph signal x ∈ C^N one has to take into account its graph support G via the associated GSO S. Moreover, according to Definition 1 the graph frequency signal x̂ ∈ C^N is an alternative representation of x. Thus, a natural problem is the identification of the graph and the GSO corresponding to x̂. More precisely, we are interested in finding the dual graph G_f, represented via the corresponding dual GSO S_f, that characterizes the support of the frequency domain.

Let N_f = {n_{f,1}, ..., n_{f,N}} denote the node set of the dual graph G_f. Each element in N_f corresponds to a different frequency (λ_i, v_i); thus, the edge set E_f indicates pairwise relations between the different frequencies. We interpret x̂ as a signal defined on this dual graph, where x̂_i is associated with the node (frequency) n_{f,i}. As for the primal GSO, the EVD of the N × N matrix S_f associated with G_f will be instrumental to study x̂. We start from the assumption that normality of S implies normality of S_f; later on, we will see that this assumption is valid. Due to normality, we then have S_f = V_f Λ_f V_f^H, and thus the dual graph has (dual) frequency basis vectors V_f = [v_{f,1}, ..., v_{f,N}] and (dual) graph frequencies λ_f = diag(Λ_f) = [λ_{f,1}, ..., λ_{f,N}]^T (cf. Fig. 1).

Problem statement. Given the GSO S = V Λ V^H, find the dual GSO S_f = V_f Λ_f V_f^H.
To address this problem we postulate desirable properties that we want the dual GSO to satisfy. First, we identify V_f (Section III). We then proceed to determine Λ_f (Section IV), which is a more challenging problem.

III. EIGENVECTORS OF THE DUAL GRAPH

We want the GFT V_f^H associated with the dual graph to map x̂ back to the graph signal x. Given that x̂ = V^H x (cf. Definition 1), the ensuing result follows.

Property 1. Given the primal GSO S = V Λ V^H, the eigenvectors of the dual GSO S_f are V_f = V^H.

As announced in the previous section, since V_f^{-1} = V_f^H, the dual shift S_f is normal too. With e_i ∈ R^N denoting the i-th canonical basis vector (all entries are zero except for the one corresponding to the i-th node, which is one), v_{f,i} can be written as v_{f,i} = V^H e_i = ê_i, i.e., the GFT of the graph signal e_i. Hence, the dual frequency vector v_{f,i} can be viewed as how node i expresses each of the primal graph frequencies, revealing that each frequency of the dual graph G_f is related to a particular node of the primal graph G. Moreover, we can also interpret the dual eigenvalues from a primal perspective. To that end, note that λ_f is the frequency response of the dual filter H̃ = S_f (cf. the discussion after Definition 2); thus, the i-th entry of λ_f can be understood as how strongly the primal value at the i-th node, x_i, is amplified when S_f is applied to x̂.

One interesting implication of Property 1 is that the dual of a Laplacian shift S = V Λ V^H is, in general, not a Laplacian. Laplacian matrices require the existence of a constant eigenvector. Hence, for S_f to be a Laplacian, one of the rows of V (corresponding to the columns of V_f) would need to be constant, which in general is not the case. Another implication of Property 1 is the duality of the filtering and windowing operations, as shown next.

Corollary 1. Given the graph signal x ∈ R^N and the window w ∈ R^N, define the windowed graph signal x_w ∈ R^N as

    x_w = diag(w) x .    (2)

Then, recalling that x̂ = V^H x and x̂_w = V^H x_w, if S_f does not have repeated eigenvalues it holds that

    x̂_w = H(h_f, S_f) x̂ ,  with  H(h_f, S_f) = Σ_{l=0}^{L} h_{f,l} (S_f)^l ,    (3)

for some h_f := [h_{f,0}, ..., h_{f,L}]^T and L ≤ N − 1.

Proof: Substituting x_w = diag(w) x and x = V x̂ into the definition of x̂_w yields x̂_w = V^H diag(w) V x̂. This reveals that the mapping from x̂ to x̂_w is given by the matrix H̃ = V^H diag(w) V. Since V^H is normal and unitary, V^H are the eigenvectors of H̃ and w are its eigenvalues. Because V^H are also the eigenvectors of S_f (cf. Property 1), to show that H̃ is a filter on S_f we only need to show that there exist coefficients h_f := [h_{f,0}, ..., h_{f,N−1}]^T such that w = diag(Σ_{l=0}^{N−1} h_{f,l} Λ_f^l) [cf. (1)]. Defining Ψ_f ∈ C^{N×N} as [Ψ_f]_{i,l} = (λ_{f,i})^{l−1}, this equality can be written as w = Ψ_f h_f. Since Ψ_f is Vandermonde, if all the dual eigenvalues {λ_{f,i}}_{i=1}^{N} are distinct, an h_f solving w = Ψ_f h_f exists. The proof holds regardless of the particular λ_f and only requires S_f to have non-repeated eigenvalues.

The corollary states that multiplication in the vertex domain is equivalent to filtering in the dual domain; note that the GSO of the filter in (3) is S_f. Clearly, when the entries of w are binary, multiplying x by w acts as a windowing procedure, preserving the values of x on the support of w while discarding the information at the remaining nodes.

IV. EIGENVALUES OF THE DUAL GRAPH

Given S = V diag(λ) V^H and using Property 1 to write the dual shift as S_f = V^H diag(λ_f) V, the last step to identify S_f is to obtain λ_f. Two different (complementary) approaches to accomplish this are discussed next.

A. Axiomatic approach

Our first approach is to postulate properties that we want the dual shift S_f to satisfy, and then translate these properties into requirements on the dual eigenvalues λ_f.
We denominate these properties as axioms and state them next. In the following, P denotes an arbitrary permutation matrix.

(A1) Axiom of Duality. The dual of the dual graph is equal to the original graph:

(S_f)_f = S.   (4)

(A2) Axiom of Reordering. The dual graph is robust to a reordering of the nodes in the primal graph:

(P S P^T)_f = S_f.   (5)

(A3) Axiom of Permutation. Permutations in the EVD of the primal shift lead to permutations in the dual graph:

(V P diag(P^T λ) P^T V^H)_f = P^T (V diag(λ) V^H)_f P.   (6)

Consistency with Property 1 is encoded in the Axiom of Duality (A1). More precisely, since the GFT of the dual shift transforms a frequency signal x̃ back into the graph-domain signal x, we want the associated shift to be recovered as well. The Axiom of Reordering (A2) ensures that the frequency structure encoded in the dual shift is invariant to relabelings of the nodes in the primal shift. Specifically, the frequency coefficients of a given signal x with respect to S should be the same as those of x' = P x with respect to S' = P S P^T. Finally, since the nodes of the dual graph correspond to different frequencies, the Axiom of Permutation (A3) ensures that if we permute the eigenvectors (and corresponding eigenvalues) of S, the nodes of the dual shift are permuted accordingly.

Axioms (A1)-(A3) impose conditions on the possible choices for the dual eigenvalues λ_f. More precisely, let us define the function h : C^N × C^{N×N} → C^N that computes the dual eigenvalues λ_f = h(λ, V) as a function of the eigendecomposition of S. In terms of h, axiom (A1) requires that

λ = h(λ_f, V_f) = h(h(λ, V), V^H).   (7)

In order to translate (5) into a condition on h, notice that P S P^T = (P V) diag(λ) (P V)^H, so that (P S P^T)_f from Property 1 must be equal to V^H P^T diag(λ') P V.
Thus, for (P S P^T)_f to coincide with S_f we need λ' = P λ_f, which ultimately requires that

h(λ, P V) = λ' = P λ_f = P h(λ, V).   (8)

Lastly, in order to find the requirement imposed by axiom (A3) on h, we again leverage Property 1 to obtain (V P diag(P^T λ) P^T V^H)_f = P^T V^H diag(λ') V P. It readily follows that to satisfy (6) we need λ' = λ_f, i.e.,

h(P^T λ, V P) = λ' = λ_f = h(λ, V).   (9)

It is possible to find a function h that simultaneously satisfies (7)-(9), as shown next.

Theorem 1. The following class of functions satisfies (7)-(9), leading to a generating method for dual graphs that abides by axioms (A1)-(A3):

λ_f = h(λ, V) = D_f^{-1} V D λ,   (10)

where D = diag(g(v_1), ..., g(v_N)) and D_f = diag(g(v_{f,1}), ..., g(v_{f,N})), with g(·) any permutation-invariant function, i.e., g(P x) = g(x).

Proof: We show that (10) satisfies (7), (8), and (9). Showing that (7) holds requires only substituting (10) into h(h(λ, V), V^H), which yields

h(h(λ, V), V^H) = D^{-1} V^H D_f (D_f^{-1} V D λ) = λ.

The verifications of (8) and (9) proceed analogously; in particular, for (9),

h(P^T λ, V P) = D_f^{-1} (V P)(P^T D P)(P^T λ) = D_f^{-1} V D λ = h(λ, V).

Note that Theorem 1 proves the existence of a class of eligible dual graphs, but it does not indicate that every dual graph falls in this class. If we restrict ourselves to the class in (10), which can be described by the function g(·), the simplest choice for g(·) is g(x) = 1. This results in λ_f = V λ, but any power of any norm is also a valid choice, i.e., g(x) = ||x||_p^q. A possible policy to design a dual graph is to select the function g(·) that optimizes a particular figure of merit (such as the minimization of the number of edges in the dual graph G_f) while keeping faithful to (A1)-(A3). This problem is discussed in more detail at the end of the following section. Furthermore, additional axioms can be imposed on S_f to further winnow the class of admissible functions h. A possible avenue, not investigated here, is to impose a desirable behavior of S_f with respect to the intrinsic phase ambiguity of the primal EVD.
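The construction of Theorem 1 is easy to exercise numerically. With the simplest admissible choice g(x) = 1, so that D = D_f = I and λ_f = Vλ, applying the construction twice should return the original shift, as required by the Axiom of Duality (A1). A minimal NumPy sketch, added here for illustration (function names are not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6

A = rng.random((N, N))
S = (A + A.T) / 2                      # normal primal shift
lam, V = np.linalg.eigh(S)             # S = V diag(lam) V^H

def dual_shift(lam, V):
    """Dual graph from Theorem 1 with g(x) = 1, i.e. D = D_f = I
    and lam_f = h(lam, V) = V lam (Eq. (10))."""
    lam_f = V @ lam
    V_f = V.conj().T
    S_f = V_f @ np.diag(lam_f) @ V_f.conj().T
    return S_f, lam_f, V_f

S_f, lam_f, V_f = dual_shift(lam, V)

# Axiom (A1): applying the construction to (lam_f, V_f) recovers S,
# since h(h(lam, V), V^H) = V^H V lam = lam.
S_ff, lam_ff, _ = dual_shift(lam_f, V_f)
assert np.allclose(lam_ff, lam)
assert np.allclose(S_ff, S)
```

The check works because the same eigenvector matrix is reused explicitly; recomputing an EVD of S_f would reintroduce the sign/phase ambiguity that the text flags as an open issue.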
B. Optimization approach

A different (and complementary) approach is to find a dual shift S_f for which certain properties of practical relevance are promoted. For example, one may be interested in the sparsest S_f. To be rigorous, consider that the primal shift S = V Λ V^H is given. Then, upon setting v_{f,i} = V^H e_i (cf. Property 1), the dual shift S_f is found by solving

min_{S_f, λ_f} ℓ(S_f)   s. to   S_f = Σ_{i=1}^{N} λ_{f,i} v_{f,i} v_{f,i}^H,   S_f ∈ S̄.   (11)

In the above problem the optimization variables are effectively the eigenvalues λ_f, since the constraint S_f = Σ_{i=1}^{N} λ_{f,i} v_{f,i} v_{f,i}^H forces the columns of V_f to be the eigenvectors of S_f. The objective ℓ(·) promotes desirable structural properties of the network, such as sparsity or minimum-energy edge weights. The constraint set S̄ imposes requirements on the sought shift, such as each entry being non-negative or each node having at least one neighbor. This problem has been analyzed in detail in the context of network topology inference from nodal observations [13].

One challenge of this approach is guaranteeing that the dual shift satisfies axioms (A1)-(A3), already deemed desirable properties. For example, for axiom (A1) to hold, it is necessary for the original shift S itself to be optimal in the sense encoded by (11). To elaborate on this, consider a normal unitary matrix U and the associated shift set S_U := {S = V diag(λ) V^H | V = U, λ ∈ C^N}. Moreover, let S* denote the solution to (11) when V_f = U and S*_f the solution when V_f = U^H. Then it holds that: i) the dual shift for any S ∈ S_U is given by S*_f, and ii) the dual of S*_f is S*. Hence, S* is the only element of S_U that guarantees that the dual of the dual is the original graph. Alternatively, one can see S_U as a shift class whose (canonical) representative is S*. With this interpretation, any S ∈ S_U is first mapped to S* and then S* serves as input for (11). Under this assumption, the invertibility of the dual mapping is achieved.
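The remark that the optimization variables in (11) are effectively the eigenvalues λ_f can be made concrete: for any λ_f, the matrix Σ_i λ_{f,i} v_{f,i} v_{f,i}^H has the columns of V_f as eigenvectors by construction, so the search space is N-dimensional regardless of the objective and constraint set. A minimal NumPy sketch, added here for illustration (it parametrizes the feasible set of (11) but does not solve the sparsity problem itself):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5

A = rng.random((N, N))
S = (A + A.T) / 2
_, V = np.linalg.eigh(S)
V_f = V.conj().T                        # fixed by Property 1

def shift_from_dual_eigenvalues(lam_f):
    """Feasible points of (11): S_f = sum_i lam_f[i] v_f,i v_f,i^H."""
    return sum(l * np.outer(V_f[:, i], V_f[:, i].conj())
               for i, l in enumerate(lam_f))

# For ANY lam_f the columns of V_f are eigenvectors of S_f, so the
# search in (11) effectively runs over the N numbers lam_f only.
lam_f = rng.random(N)
S_f = shift_from_dual_eigenvalues(lam_f)
for i in range(N):
    assert np.allclose(S_f @ V_f[:, i], lam_f[i] * V_f[:, i])
```

An actual solver would minimize the chosen ℓ(·) (e.g. an l1 surrogate of the l0 norm) over lam_f subject to S_f ∈ S̄; that step is omitted here.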
One important choice for the objective in (11) is to set ℓ(·) = ||·||_0, so that the goal is to find the sparsest shift (the one minimizing the number of pairwise relationships between the frequencies). Another interesting choice is to find a GSO that either minimizes the variability (maximizes the smoothness) of a given set of signals, or guarantees that the sum of the variability in the primal and dual domains does not exceed a given threshold; this entails minor modifications to (11). To ensure that the optimal S* satisfies axioms (A1)-(A3), the approaches in Sections IV-A and IV-B can be combined. More specifically, one can solve (11) by optimizing over the class of admissible functions g; cf. Theorem 1. This interesting (and more challenging) problem is left as future work.

V. ILLUSTRATIVE SIMULATIONS

We provide a few simple examples illustrating that representations of the frequency domain that go beyond one dimension are of interest. To that end, we consider two primal graphs and compute their associated dual graphs by applying the methods in Sections IV-A (setting λ_f = V λ) and IV-B (setting ℓ(·) = ||·||_0). The results are shown in Fig. 2.

Fig. 2. The first row corresponds to a DCT graph and the second to an ER graph. The left column plots the primal graph, the central column the dual graph recovered using (10), and the right column the dual graph recovered using (11).

The first row corresponds to the primal graph associated with the Discrete Cosine Transform (DCT) of type II [14], while the second row corresponds to an Erdős-Rényi (ER) graph [15] with N = 10 and edge probability p = 0.15. The first observation is that for neither of the primal graphs does the axiomatic approach give rise to a sparse dual graph (cf. central column). This is important because the plotted dual graphs, which do not admit a one-dimensional representation, are legitimate representations of the pairwise interactions between the frequencies.
The second observation is that the method promoting sparsity is able to find a very sparse dual graph for the DCT, but not for the ER graph. The sparse and regular dual graph obtained for the DCT serves as implicit validation of our approach and indicates that graphs with a very strong structure in the primal domain can be associated with strong and regular structures in the dual domain. In this extreme case, a one-dimensional representation could be argued to be sufficient. However, the dual shift corresponding to the ER graph demonstrates that the support of the frequency domain is, in general, more complicated, and that structures going beyond one-dimensional representations are required. These observations are confirmed for other types of graphs which, due to space limitations, are not presented here.

VI. CONCLUSIONS AND OPEN QUESTIONS

This paper investigated the problem of identifying the support associated with the frequency representation of graph signals. Given the (primal) graph shift operator supporting the graph signals of interest, the problem was formulated as that of finding a compatible dual graph shift operator that serves as a domain for the frequency representation of these signals. We first identified the eigenvectors of the dual shift, showing that they correspond to how each of the nodes expresses the different graph frequencies. We then proposed different alternatives to find the dual eigenvalues and characterized relevant properties that those eigenvalues must satisfy. Future work includes considering additional properties for the dual eigenvalues so that the set of feasible dual shift operators is reduced, and identifying additional results connecting the vertex domain with the frequency domain.
The results in this paper constitute a first step towards understanding the structure of the signals in the frequency domain, as well as towards developing enhanced GSP algorithms for signal compression, frequency grouping, filtering, and spectral estimation.

Fig. 1. The primal graph (left) represents the support of the vertex domain, while the dual graph (right) represents the support of the frequency domain.

In order to show (8), notice that a permutation of the rows of V (the columns of V_f) does not influence D and only permutes the diagonal entries of D_f. Hence, we can write h(λ, P V) as h(λ, P V) = (P D_f P^T)^{-1} P V D λ = P D_f^{-1} V D λ = P h(λ, V). Finally, since a permutation of the columns of V (the rows of V_f) does not influence D_f and only permutes the diagonal entries of D, we can write h(P^T λ, V P) as D_f^{-1} (V P)(P^T D P)(P^T λ) = D_f^{-1} V D λ = h(λ, V) [cf. (9)].

The work in this paper is supported by Spanish MINECO grants No. TEC2013-41604-R and TEC2016-75361-R, and by USA NSF CCF-1217963. G. Leus is with the Dept. of Electrical Eng., Math. and Comp. Science, Delft Univ. of Technology. S. Segarra is with the Inst. for Data, Systems and Society, Massachusetts Inst. of Technology. A. Ribeiro is with the Dept. of Electrical and Systems Eng., Univ. of Pennsylvania. A. G. Marques is with the Dept. of Signal Theory and Comms., King Juan Carlos Univ. Emails: [email protected], [email protected], [email protected], [email protected].

This is not related to the graph-theoretic notion of the dual graph of a planar graph G, which is a graph that has a vertex for each face of G [12].

REFERENCES

[1] D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83-98, May 2013.
[2] S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević, "Discrete signal processing on graphs: Sampling theory," IEEE Trans. Signal Process., vol. 63, no. 24, pp. 6510-6523, Dec. 2015.
[3] A. G. Marques, S. Segarra, G. Leus, and A. Ribeiro, "Sampling of graph signals with successive local aggregations," IEEE Trans. Signal Process., vol. 64, no. 7, pp. 1832-1843, Apr. 2016.
[4] A. Sandryhaila and J. Moura, "Discrete signal processing on graphs," IEEE Trans. Signal Process., vol. 61, no. 7, pp. 1644-1656, Apr. 2013.
[5] E. Isufi, A. Loukas, A. Simonetto, and G. Leus, "Autoregressive moving average graph filtering," IEEE Trans. Signal Process., vol. 65, no. 2, pp. 274-288, 2017.
[6] S. Segarra, A. G. Marques, and A. Ribeiro, "Distributed linear network operators using graph filters," arXiv preprint arXiv:1510.03947, 2015.
[7] D. I. Shuman, B. Ricaud, and P. Vandergheynst, "Vertex-frequency analysis on graphs," Applied and Computational Harmonic Analysis, vol. 40, no. 2, pp. 260-291, 2016.
[8] A. G. Marques, S. Segarra, G. Leus, and A. Ribeiro, "Stationary graph processes and spectral estimation," arXiv preprint arXiv:1603.04667, 2016.
[9] A. Sandryhaila and J. Moura, "Discrete signal processing on graphs: Frequency analysis," IEEE Trans. Signal Process., vol. 62, no. 12, pp. 3042-3054, June 2014.
[10] X. Zhu and M. Rabbat, "Approximating signals supported on graphs," in IEEE Intl. Conf. Acoust., Speech and Signal Process. (ICASSP), Mar. 2012, pp. 3921-3924.
[11] O. Teke and P. P. Vaidyanathan, "Discrete uncertainty principles on graphs," in Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 7-10, 2016, pp. 1475-1479.
[12] J. Yellen and J. L. Gross, Handbook of Graph Theory (Discrete Mathematics and Its Applications). CRC Press, 2003.
[13] S. Segarra, A. G. Marques, G. Mateos, and A. Ribeiro, "Network topology inference from spectral templates," arXiv preprint arXiv:1608.03008v1, 2016.
[14] M. Puschel and J. M. F. Moura, "Algebraic signal processing theory: 1-D space," IEEE Trans. Signal Process., vol. 56, no. 8, pp. 3586-3599, Aug. 2008.
[15] B. Bollobás, Random Graphs. Springer, 1998.
[]
[ "Absorption imaging of a quasi 2D gas: a multiple scattering analysis" ]
[ "L Chomaz \nLaboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance\n", "L Corman \nLaboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance\n\nEcole polytechnique (member of ParisTech)\n91128Palaiseau cedexFrance\n\nInstitute for Quantum Electronics\nETH Zurich\n8093ZurichSwitzerland\n", "T Yefsah \nLaboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance\n", "R Desbuquois \nLaboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance\n", "J Dalibard [email protected] \nLaboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance\n" ]
[ "Laboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance", "Laboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance", "Ecole polytechnique (member of ParisTech)\n91128Palaiseau cedexFrance", "Institute for Quantum Electronics\nETH Zurich\n8093ZurichSwitzerland", "Laboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance", "Laboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance", "Laboratoire Kastler Brossel\nCNRS\nUPMC\nEcole normale supérieure\n24 rue Lhomond75005ParisFrance" ]
[]
Absorption imaging with quasi-resonant laser light is a commonly used technique to probe ultra-cold atomic gases in various geometries. Here we investigate some non-trivial aspects of this method when it is applied to in situ diagnosis of a quasi two-dimensional gas. Using Monte Carlo simulations we study the modification of the absorption cross-section of a photon when it undergoes multiple scattering in the gas. We determine the variations of the optical density with various parameters, such as the detuning of the light from the atomic resonance and the thickness of the gas. We compare our results to the known three-dimensional result (Beer-Lambert law) and outline the specific features of the two-dimensional case. PACS numbers: 42.25.Dd, 37.10.-x, 03.75.-b
10.1088/1367-2630/14/5/055001
[ "https://arxiv.org/pdf/1112.3170v1.pdf" ]
119169172
1112.3170
9128ec15745811abee3dfc2ff2ecdd544c95b3e5
Absorption imaging of a quasi 2D gas: a multiple scattering analysis

14 Dec 2011

L. Chomaz, L. Corman, T. Yefsah, R. Desbuquois, and J. Dalibard ([email protected])

Laboratoire Kastler Brossel, CNRS, UPMC, Ecole normale supérieure, 24 rue Lhomond, 75005 Paris, France

L. Corman: also at Ecole polytechnique (member of ParisTech), 91128 Palaiseau cedex, France, and at the Institute for Quantum Electronics, ETH Zurich, 8093 Zurich, Switzerland.

1. Introduction

The study of cold atomic gases has recently shed new light on several aspects of quantum many-body physics [1, 2, 3, 4]. Most of the measurements in this field of research are based on the determination of the spatial density of the gas [5].
For instance one can use the in situ steady-state atomic distribution in a trapping potential to infer the equation of state of the homogeneous gas [6]. Another example is the time-of-flight method, in which one measures the spatial density after switching off the trapping potential and allowing for a certain time of ballistic expansion. This gives access to the momentum distribution of the gas, and to the conversion of interaction energy into kinetic energy at the moment of the potential switch-off.

To access the atomic density n(r), one usually relies on the interaction of the atoms with quasi-resonant laser light. The most common method is absorption imaging, in which the shadow imprinted by the cloud on a low-intensity probe beam is imaged on a camera. The simplest modelling of absorption imaging is based on a mean-field approach, in which one assumes that the local value of the electric field driving an atomic dipole at a given location depends only on the average density of scatterers. One can then relate the attenuation of the laser beam to the column atomic density n^(col)(x, y) = ∫ n(r) dz along the line of sight z. The optical density of the cloud D(x, y) ≡ ln[I_in(x, y)/I_out(x, y)] is given by the Beer-Lambert law

D_BL(x, y) = σ n^(col)(x, y),   (1)

where σ is the photon scattering cross-section and I_in (resp. I_out) is the incoming (resp. outgoing) intensity of the probe laser in the plane xy perpendicular to the propagation axis. For a closed two-level atomic transition of frequency ω0 = c k0, σ depends on the wavelength λ0 = 2π/k0 associated with this transition and on the detuning ∆ = ω − ω0 between the probe light frequency ω and the atomic frequency:

σ = σ0 / (1 + δ²),   σ0 = 3λ0²/(2π),   δ = 2∆/Γ.   (2)

Here Γ represents the natural linewidth of the transition (i.e., Γ⁻¹ is the natural lifetime of the excited state of the transition). Eq.
(2) assumes that the intensity of the probe beam is much lower than the saturation intensity of the atomic transition.

Quasi-resonant absorption imaging is widely used to measure the spatial distribution of atomic gases after a long time-of-flight, when the density has dropped sufficiently so that the mean-field approximation leading to Eq. (1) is valid. One can also use absorption imaging to probe in situ samples, at least in the case where σ n^(col) is not very large, so that the output intensity is not vanishingly small. This is in particular the case for low-dimensional gases. Consider for example a 2D gas, such that the translational degree of freedom along z has been frozen. For a probe beam propagating along this axis, one can transpose the Beer-Lambert law of Eq. (1) by simply replacing the column density by the surface density n^(2D) of the gas. This 2D Beer-Lambert law can be heuristically justified by treating each atom as a disk of area σ that blocks every photon incident on it. In an area A ≫ σ containing N = A n^(2D) ≫ 1 randomly placed atoms, the probability that a photon is not blocked by any of the disks is (1 − σ/A)^N ≈ exp(−σ n^(2D)).

In a quasi-2D gas there is however an important limitation on the optical densities to which one may apply the Beer-Lambert prediction of Eq. (1). Already for σ0 n^(2D) = 1 the mean inter-particle distance is only 0.7 λ0, and one may expect that the optical response of an atom strongly depends on the precise location of its neighbours. More precisely, the exchange of photons between closely spaced atoms induces a resonant van der Waals interaction that significantly shifts the atomic resonance frequency with respect to its bare value ω0. The optical density of the gas at resonance may then be reduced with respect to Eq. (1), and this was indeed observed in a series of experiments performed with a degenerate 87Rb gas [7, 8].
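Eqs. (1)-(2) and the opaque-disk heuristic can both be checked in a few lines. The sketch below is an illustration added here (not from the paper); the 780 nm wavelength is the 87Rb D2 line relevant to the experiments of [7, 8], and the Monte Carlo part uses arbitrary units with a periodic box to avoid edge effects:

```python
import numpy as np

def cross_section(lambda0, delta):
    """sigma = sigma0 / (1 + delta^2), with sigma0 = 3 lambda0^2 / (2 pi)
    and delta = 2 Delta / Gamma the reduced detuning (Eq. (2))."""
    return 3.0 * lambda0**2 / (2.0 * np.pi) / (1.0 + delta**2)

# Beer-Lambert optical density D = sigma * n_col (Eq. (1)), for a column
# density chosen such that sigma0 * n_col = 1 (87Rb D2 line, illustrative).
lambda0 = 780e-9                               # m
n_col = 1.0 / cross_section(lambda0, 0.0)      # m^-2
od_res = cross_section(lambda0, 0.0) * n_col   # = 1 on resonance
od_det = cross_section(lambda0, 1.0) * n_col   # halved at delta = 1
assert np.isclose(od_res, 1.0) and np.isclose(od_det, 0.5)

# 2D heuristic: atoms as opaque disks of area sigma; the transmission of
# randomly placed photons approaches exp(-sigma * n2d).
rng = np.random.default_rng(4)
L = 1.0                          # side of the (periodic) square sample
sigma = 1e-3                     # disk area, sigma << L**2
r = np.sqrt(sigma / np.pi)
n2d = 400.0                      # surface density, so sigma * n2d = 0.4
N = int(n2d * L * L)

trans = []
for _ in range(10):              # average over disorder realisations
    atoms = rng.random((N, 2)) * L
    photons = rng.random((4000, 2)) * L
    d = photons[:, None, :] - atoms[None, :, :]
    d -= L * np.round(d / L)     # minimum-image convention (periodic box)
    blocked = ((d**2).sum(axis=2) < r * r).any(axis=1)
    trans.append(1.0 - blocked.mean())
transmitted = float(np.mean(trans))
assert abs(transmitted - np.exp(-sigma * n2d)) < 0.03
```

The disk model reproduces the exponential law to within statistical noise here; the point of the following sections is precisely that the real, interacting medium departs from this independent-scatterer picture at high density.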
The propagation of a light wave in a dense atomic sample, where multiple scattering plays an essential role, has been the subject of numerous experimental and theoretical works (see e.g. [9, 10] in the context of cold atoms, and [11] for a review). Here we present a quantitative treatment of the collective effects that appear when a weak probe beam interacts with a quasi-2D atomic gas. We consider an ensemble of N atoms at rest with random positions, and we investigate the transmission of quasi-resonant light by the atom sheet. We model the resonance transition between the atomic ground (g) and excited (e) states by a J_g = 0 ↔ J_e = 1 transition. We present two equivalent approaches; the first one is based on the calculation of the field radiated by an assembly of N dipoles, where each dipole is driven by an external field plus the field radiated by the N − 1 other dipoles; the second one uses the standard T-matrix formalism of scattering theory. We show that in both cases the optical density of the medium can be determined by solving the same 3N × 3N linear system. A similar formalism has been previously used for the study of light propagation in small 3D atomic samples in the presence of multiple scattering (see e.g. [12, 13, 14, 15, 16, 17, 18, 19]). However its application to quasi-2D samples has (to our knowledge) not yet been investigated, except in the context of Anderson localisation of light [13]. Our numerical calculations are performed for N = 2048 atoms, which is sufficient to reach the 'thermodynamic limit' for the range of parameters that is relevant for experiments. We show in particular that even for moderate values of σ0 n^(col), the optical density is notably reduced compared to what is expected from the Beer-Lambert law (e.g., more than 20% reduction for σ0 n^(col) = 1).
We investigate how the absorption line shape is modified by the resonant van der Waals interactions, and we also show how the result (1) is recovered when one increases the thickness of the gas at a given column density n^(col).

The paper is organised as follows. In section 2, we detail the modelling of the atom-light interaction within the two-level and rotating wave approximations. Then we explain the principle of the calculation of the absorption of a weak probe beam crossing the atom slab (section 3). The ensemble of our numerical results is presented in section 4. Finally, in section 5 we discuss some limitations of our model and draw some concluding remarks.

2. Modelling the atom-light interaction

2.1. The electromagnetic field

We use the standard description of the quantised electromagnetic field in the Coulomb gauge [20], and choose periodic boundary conditions in the cubic-shaped quantisation volume V = L_x L_y L_z. We denote a_{q,s} the destruction operator of a photon with wave vector q and polarisation s (s ⊥ q). The Hamiltonian of the quantised field is

H_F = Σ_{q,s} ħcq a†_{q,s} a_{q,s},   (3)

and the transverse electric field operator reads E(r) = E^(+)(r) + E^(−)(r) with

E^(+)(r) = i Σ_{q,s} √(ħcq/(2ε0V)) a_{q,s} e^{iq·r} s,   (4)

and E^(−)(r) = [E^(+)(r)]†. The wave vectors q are quantised in the volume V as q_i = 2πn_i/L_i, i = x, y, z, where n_i is a positive or negative integer.

2.2. The atomic medium

We consider a collection of N identical atoms at rest at positions r_j, j = 1, ..., N. We model the atomic resonance transition by a two-level system with a ground state |g⟩ with angular momentum J_g = 0 and an excited level of angular momentum J_e = 1. We choose as a basis set for the excited manifold the three Zeeman sublevels |e_α⟩, α = x, y, z, where |e_α⟩ is the eigenstate with eigenvalue 0 of the component J_α of the atomic angular momentum operator. We denote ħω0 the energy difference between e and g.
The atomic Hamiltonian is thus (up to a constant)

H_A = Σ_{j=1}^{N} Σ_{α=x,y,z} ħω0 |j : e_α⟩⟨j : e_α|.   (5)

The restriction to a two-level approximation is legitimate if the detuning ∆ between the probe and the atomic frequencies is much smaller than ω0. The modelling of this transition by a J_g = 0 ↔ J_e = 1 transition leads to a relatively simple algebra. The transitions that are used for absorption imaging in real experiments often involve more Zeeman states (J_g = 2 ↔ J_e = 3 for Rb atoms in [7, 8]) and are more complex to handle [21, 22]; they are thus out of the scope of this paper. However, we believe that the most salient features of multiple scattering and resonant van der Waals interactions are captured by our simple level scheme.

2.3. The atom-light coupling

We treat the atom-light interaction in the electric dipole approximation (length gauge), which is legitimate since the resonance wavelength λ0 of the atoms is much larger than the atomic size. We write the atom-light coupling as

V = − Σ_j D_j · E(r_j),   (6)

where D_j is the dipole operator for the atom j. We will use the rotating wave approximation (RWA), which consists in keeping only the resonant terms in the coupling:

V ≈ − Σ_j [ D_j^(+) · E^(+)(r_j) + h.c. ],   (7)

where h.c. stands for Hermitian conjugate. Here D_j^(+) represents the raising part of the dipole operator for atom j:

D_j^(+) = d Σ_{α=x,y,z} |j : e_α⟩⟨j : g| û_α,   (8)

where d is the electric dipole associated with the g−e transition and û_α is a unit vector in the direction α. When a single atom is coupled to the electromagnetic field, this coupling results in a modification of the resonance frequency (the Lamb shift) and in the fact that the excited state e acquires a non-zero width Γ:

Γ = d²ω0³ / (3πε0 ħc³).   (9)

For simplicity we incorporate the Lamb shift in the definition of ω0. Note that the proper calculation of this shift requires that one goes beyond the two-level and the rotating wave approximations.
The linewidth Γ, on the other hand, can be calculated from the above expression for V using the Fermi golden rule.

The RWA provides a very significant simplification of the treatment of the atom-light coupling, in the sense that the total number of excitations is a conserved quantity. The annihilation (resp. creation) of a photon is always associated with the transition of one of the N atoms from g to e (resp. from e to g). This would not be the case if the non-resonant terms of the electric dipole coupling, D_i^(+) · E^(−) and D_i^(−) · E^(+), were also taken into account. The small parameter associated with the RWA is ∆/ω0, which is in practice in the range 10⁻⁶ − 10⁻⁹; the RWA is thus an excellent approximation.

Formally, the use of the electric dipole interaction implies adding to the Hamiltonian an additional contact term between the dipoles (see e.g. [23, 12]). This term will play no role in our numerical simulations, because we will surround the position of each atom by a small excluded volume, which mimics the short-range repulsive interaction between atoms. We checked that the results of our numerical calculations (see Sec. 4) do not depend on the size of the excluded volume, and we can safely omit the additional contact term in the present work.

3. Interaction of a probe laser beam with a dense quasi-2D atomic sample

We present in this section the general formalism that allows one to calculate the absorption of a quasi-resonant laser beam by a slab of N atoms. We address this question using two different approaches. The first one maps the problem onto the collective behaviour of an assembly of N oscillating dipoles [12]. The equation of motion for each dipole is obtained using the Heisenberg picture for the Hamiltonian presented in section 2. It contains two driving terms, one from the incident probe field and one from the field radiated by all the other dipoles at the location of the dipole under study.
The steady-state of this assembly of dipoles is obtained by solving a set of 3N linear equations. The second approach uses the standard quantum scattering theory [24], which is well suited for perturbative calculations and partial resummations of diagrams. We suppose that one photon is incident on the atomic medium and we use resummation techniques to take into account the multiple scattering events that can occur before the photon emerges from the medium. The relevant quantity in this approach is the probability amplitude T ii that the outgoing photon is detected in the same mode as the incident one [14,17], and we show that T ii is obtained from the same set of equations as the values of the dipoles in the first approach. Wave propagation in an assembly of driven dipoles. In this section we assume that the incident field is prepared in a coherent state corresponding to a monochromatic plane wave E L e i(kz−ωt) . We choose the polarization to be linear and parallel to the x axis ( =û x ). Since we consider a J g = 0 ↔ J e = 1 transition, this choice does not play a significant role and we checked that we recover essentially the same results with a circular polarisation. Note that the situation would be different for an atomic transition with larger J g and J e since optical pumping processes would then depend crucially on the polarisation of the probe laser. The amplitude E L is supposed to be small enough that the steady-state populations of the excited states e j,α are small compared to unity. This ensures that the response of each atomic dipole is linear in E L ; this approximation is valid when the Rabi frequency dE L / is small compared to the natural width Γ or the detuning ∆. 
Using the atom-light coupling (6), the equations of motion for the annihilation operators a_{q,s} in the Heisenberg picture read

$$\dot{a}_{\mathbf{q},s}(t) = -i c q\, a_{\mathbf{q},s}(t) + \sqrt{\frac{cq}{2\hbar\varepsilon_0 V}}\, \sum_j \boldsymbol{\epsilon}_s^{\,*} \cdot \mathbf{D}_j(t)\, e^{-i\mathbf{q}\cdot\mathbf{r}_j}. \tag{10}$$

This equation can be integrated between the initial time t_0 and the time t, and the result can be injected into the expression for the transverse field to provide its value at any point r:

$$E_\alpha(\mathbf{r},t) = E_{\mathrm{free},\alpha}(\mathbf{r},t) + \sum_{j',\alpha'} \sum_{\mathbf{q},s} \int_0^{t-t_0} \! d\tau\, \frac{cq}{2\varepsilon_0 V} \left[ i D_{j',\alpha'}(t-\tau)\, e^{i\mathbf{q}\cdot(\mathbf{r}-\mathbf{r}_{j'}) - i c q \tau}\, \epsilon_{s,\alpha}\,\epsilon_{s,\alpha'}^{\,*} + \mathrm{h.c.} \right], \tag{11}$$

where E_free stands for the value obtained in the absence of atoms. We now take the quantum average of this set of equations. In the steady-state regime the expectation value of the dipole operator D_j(t) can be written d_j e^{-iωt} + c.c., and the average of E_free(r,t) is the incident field E_L ε e^{i(kz-ωt)} + c.c. We denote the average value of the transverse field operator at r as E(r,t) = Ē(r) e^{-iωt} + c.c., and we obtain after some algebra (see e.g. [12,25])

$$\bar{E}_\alpha(\mathbf{r}) = E_L\, \epsilon_\alpha\, e^{ikz} + \frac{k^3}{6\pi\varepsilon_0} \sum_{j',\alpha'} g_{\alpha\alpha'}(\mathbf{u}_{j'})\, d_{j',\alpha'}, \tag{12}$$

where we set u_{j'} = k(r − r_{j'}) (with k ≈ k_0),

$$g_{\alpha\alpha'}(\mathbf{u}) = \delta_{\alpha\alpha'}\, h_1(u) + \frac{u_\alpha u_{\alpha'}}{u^2}\, h_2(u), \tag{13}$$

and

$$h_1(u) = \frac{3}{2}\,\frac{e^{iu}}{u^3}\left(u^2 + iu - 1\right), \qquad h_2(u) = \frac{3}{2}\,\frac{e^{iu}}{u^3}\left(-u^2 - 3iu + 3\right). \tag{14}$$

The function g_{αα'}(kr) is identical to the one appearing in classical electrodynamics [26] when calculating the field radiated at r by a dipole located at the origin. We proceed similarly for the equations of motion of the dipole operators D_j^{(−)} and take their average value in steady state. The result can be put in the form [12]

$$(\delta + i)\, d_{j,\alpha} + \sum_{j' \neq j,\, \alpha'} g_{\alpha\alpha'}(\mathbf{u}_{jj'})\, d_{j',\alpha'} = -\frac{6\pi\varepsilon_0}{k^3}\, E_L\, \epsilon_\alpha\, e^{ikz_j}, \tag{15}$$

where the reduced detuning δ = 2∆/Γ has been defined in Eq. (2) and u_{jj'} = k(r_j − r_{j'}).
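For concreteness, Eqs. (13)-(14) translate directly into a few lines of code. The following is a minimal sketch (Python/NumPy; the function names are ours, and distances are measured in units of 1/k):

```python
import numpy as np

def h1(u):
    # Eq. (14), first function
    return 1.5 * np.exp(1j * u) / u**3 * (u**2 + 1j * u - 1)

def h2(u):
    # Eq. (14), second function
    return 1.5 * np.exp(1j * u) / u**3 * (-u**2 - 3j * u + 3)

def g_tensor(u_vec):
    """3x3 tensor g_{alpha alpha'}(u) of Eq. (13), with u_vec = k (r - r')."""
    u = np.linalg.norm(u_vec)
    n = u_vec / u
    return h1(u) * np.eye(3) + h2(u) * np.outer(n, n)
```

In the far field (u ≫ 1) only the e^{iu}/u part of h_1 survives, so a transverse component reduces to (3/2) e^{iu}/u, while for u → 0 the imaginary part of g tends to δ_{αα'} (the Dicke limit).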
This can be written with matrix notation [M ]|X = |Y(16) where the 3N vectors |X and |Y are defined by X j,α = − k 3 6π 0 E L d j,α , Y j,α = α e ikz j ,(17) and where the complex symmetric matrix [M ] has its diagonal coefficients equal to δ + i and its off-diagonal coefficients (for j = j ) given by g α,α (u jj ). This matrix belongs to the general class of Euclidean matrices [27], for which the (i, j) element can be written as a function F (r i , r j ) of points r i in the Euclidean space. The spectral properties of these matrices for a random distribution of the r i 's (as it will be the case in this work, see Sec. 4) have been studied in [27,28,29,30]. Eq. (15) has a simple physical interpretation: in steady-state each dipole d j is driven by the sum of the incident field E L and the field radiated by all the other dipoles. This set of 3N equations was first introduced by L. L. Foldy in [31] who named it, together with Eq. (12), "the fundamental equations of multiple scattering". Indeed for a given incident field, the solution of (16) provides the value of each dipole d j , which can then be injected in (12) to obtain the value of the total field at any point in space. Absorption signal From the expression of the average value of the dipoles we now extract the absorption coefficient of the probe beam and the optical density of the gas. We suppose that the N atoms are uniformly spread in a cylinder of radius R along the z axis and located between z = − /2 and z = /2. We can consider two experimental setups to address this problem. The first one, represented in Fig. 1a, consists in measuring after the atomic sample the total light intensity with the same momentum k = kû z as the incident probe beam. This can be achieved by placing a lens with the same size as the atomic sample, in the plane z = > /2 just after the sample. The light field at the focal point of the lens F gives the desired attenuation coefficient. 
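Assembling and solving the system (16) is straightforward for moderate N, since the matrix is only 3N × 3N. A sketch under the same conventions as above (our naming; positions in units of 1/k, probe polarisation along x):

```python
import numpy as np

def g_tensor(u_vec):
    # Eqs. (13)-(14)
    u = np.linalg.norm(u_vec)
    n = u_vec / u
    h1 = 1.5 * np.exp(1j * u) / u**3 * (u**2 + 1j * u - 1)
    h2 = 1.5 * np.exp(1j * u) / u**3 * (-u**2 - 3j * u + 3)
    return h1 * np.eye(3) + h2 * np.outer(n, n)

def build_M(pos, delta):
    """Complex symmetric 3N x 3N matrix [M] of Eq. (16); pos has shape (N, 3)."""
    N = len(pos)
    M = (delta + 1j) * np.eye(3 * N, dtype=complex)
    for j in range(N):
        for l in range(j + 1, N):
            G = g_tensor(pos[j] - pos[l])
            M[3*j:3*j+3, 3*l:3*l+3] = G
            M[3*l:3*l+3, 3*j:3*j+3] = G  # g(-u) = g(u), so [M] is symmetric
    return M

def solve_dipoles(pos, delta):
    """Solve [M]|X> = |Y> of Eqs. (16)-(17), with Y_{j,alpha} = eps_alpha e^{i k z_j}."""
    Y = np.zeros(3 * len(pos), dtype=complex)
    Y[0::3] = np.exp(1j * pos[:, 2])  # probe polarised along x
    return np.linalg.solve(build_M(pos, delta), Y)
```

For a single atom this returns X = e^{ikz}/(i + δ) along x, which is the sparse-sample limit used later in the text.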
We refer to this method as 'global', since the field E(F) provides information over the whole atomic cloud. One can also use the setup sketched in Fig. 1b, which forms an image of the atom slab on a camera and provides a 'local' measurement of the absorption coefficient. In real experiments local measurements are often favored because trapped atomic samples are inhomogeneous and it is desirable to access the spatial distribution of the particles. However, for our geometry with a uniform density of scatterers, spatial information on the absorption of the probe beam is not relevant. Therefore we only present the formalism for global measurements, which is simpler to derive and leads to slightly more general expressions. We checked numerically that we obtained very similar results when we modelled the local procedure. We assume that the lens in Fig. 1a operates in the paraxial regime, i.e., its focal length f is much larger than its radius R. We relate the field at the image focal point of the lens to the field in the plane z = ℓ just before the lens:

$$\mathbf{E}(F) = -\frac{i\, e^{ikf}}{\lambda_0 f} \int_L \mathbf{E}(x, y, \ell)\, dx\, dy, \tag{18}$$

where the integral runs over the lens area. Since the incident probe beam is supposed to be linearly polarised along x, we calculate the x component of the field at F. Plugging in the value of the field given in Eqs. (12,17) we obtain the transmission coefficient

$$T \equiv \frac{E_x(F)\big|_{\text{with atoms}}}{E_x(F)\big|_{\text{no atom}}} = 1 - \frac{e^{-ik\ell}}{\pi R^2} \sum_{j,\alpha} X_{j,\alpha} \int_L g_{x\alpha}[k(\mathbf{r}-\mathbf{r}_j)]\, dx\, dy. \tag{19}$$

This result can be simplified in the limit of a large lens by using an approximate value for the integral appearing in (19). We suppose that kℓ ≫ 1, so that the dominant part of g_{xα} is the e^{iu}/u contribution to h_1. More precisely, the domain in the lens plane contributing to the integral for the dipole j is essentially a disk of radius ~ √(λ(ℓ − z_j)) ≲ √(λℓ), centered on (x_j, y_j).
When this small disk is entirely included in the lens aperture, i.e., the larger disk of radius R centered on x = y = 0, we obtain

$$\int_L g_{x\alpha}[k(\mathbf{r}-\mathbf{r}_j)]\, dx\, dy \approx \frac{3i\pi}{k^2}\, \delta_{x\alpha}\, e^{ik(\ell - z_j)}. \tag{20}$$

We use the result (20) for all atoms, which amounts to neglecting edge effects for the dipoles located at the border of the lens, and we obtain

$$T = 1 - \frac{i}{2}\, \sigma_0\, n^{\mathrm{(col)}}\, \Pi, \tag{21}$$

with n^{(col)} = N/πR² and where the coefficient Π is defined by

$$\Pi = \frac{1}{N} \sum_j X_{j,x}\, e^{-ikz_j}. \tag{22}$$

This coefficient captures the whole physics of multiple scattering and resonant van der Waals interactions among the N atoms. Indeed one takes into account all possible couplings between the dipoles when solving the 3N × 3N system [M]|X⟩ = |Y⟩. Once T is known the optical density is obtained from

$$D \equiv \ln |T|^{-2}. \tag{23}$$

As an example, consider the limit of a very sparse sample where multiple scattering does not play a significant role (σ_0 n^{(col)} ≪ 1). All non-diagonal matrix elements of [M] are then negligible and [M] is simply the identity matrix times i + δ. Each X_{j,x} solution of the system (16) is equal to e^{ikz_j}/(i + δ), and we obtain as expected

$$\sigma_0 n^{\mathrm{(col)}} \ll 1:\qquad T \approx 1 - \frac{\sigma_0 n^{\mathrm{(col)}}}{2(1 - i\delta)}, \qquad D \approx \frac{\sigma_0 n^{\mathrm{(col)}}}{1 + \delta^2}. \tag{24}$$

Light absorption as a quantum scattering process

In order to study the attenuation of a weak probe beam propagating along the z axis when it crosses the atomic medium, we can also use quantum scattering theory. The Hamiltonian of the problem is

$$H = H_0 + V, \qquad H_0 = H_A + H_F, \tag{25}$$

and we consider the initial state where all atoms are in their ground state and a single photon of wave vector k = k û_z and polarisation ε = û_x is incident on the atomic medium,

$$|\Psi_i\rangle = |G\rangle \otimes |\mathbf{k}, \boldsymbol{\epsilon}\rangle, \tag{26}$$

with |G⟩ ≡ |1:g, 2:g, …, N:g⟩. The state |Ψ_i⟩ is an eigenstate of H_0 with energy ℏω. The interaction of the photon with the atomic medium, described by the coupling V, can be viewed as a collision process during which an arbitrary number of elementary scattering events can take place.
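Equations (21)-(24) can be checked in a few lines: in the sparse limit Π = 1/(i + δ), and the optical density should reduce to σ₀n^{(col)}/(1 + δ²). A quick sketch (our notation):

```python
import numpy as np

def transmission(Pi, s0n):
    # Eq. (21): T = 1 - (i/2) sigma_0 n^(col) Pi
    return 1 - 0.5j * s0n * Pi

def optical_density(T):
    # Eq. (23): D = ln |T|^{-2}
    return float(np.log(1 / abs(T)**2))

# Sparse limit, Eq. (24): Pi = 1/(i + delta)
delta, s0n = 2.0, 0.01
D_sparse = optical_density(transmission(1 / (1j + delta), s0n))
```

Here D_sparse should be close to s0n/(1 + δ²) = 0.002; at resonance (δ = 0) the sparse-limit transmission is real, T ≈ 1 − s0n/2.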
Each event starts from a state |G ⊗ |q, s and corresponds to: (i) The absorption of the photon in mode q, s by atom j, which jumps from its ground state |j : g to one of its excited states |j : e α . The state of the system is then |E j,α = |1 : g, . . . , j : e α , . . . , N : g ⊗ |vac , where |vac stands for the vacuum state of the electromagnetic field. The subspace spanned by the states |E j,α has dimension 3N . (ii) The emission of a photon in the mode (q , s ) by atom j, which falls back into its ground state. Finally a photon emerges from the atomic sample, and we want to determine the probability amplitude to find this photon in the same mode |k, as the initial one. The T matrix defined as T (E) = V + V 1 E − H + i0 + V ,(28) where 0 + is a small positive number that tends to zero at the end of the calculation, provides a convenient tool to calculate this probability amplitude. Generally T if = Ψ f |T (E i )|Ψ i(29) gives the probability amplitude to find the system in the final state |Ψ f after the scattering process. The states |Ψ i and |Ψ f are eigenstates of the unperturbed Hamiltonian H 0 , with energy E i . Here we are interested in the element T ii of the T matrix, corresponding to the choice |Ψ f = |Ψ i . Using the definition (28) we find T ii = ωd 2 2ε 0 V j,j e ik(z j −z j ) E j ,x | 1 ω − H + i0 + |E j,x .(30) We now have to calculate the (3N )×(3N ) matrix elements of the operator 1/(z−H), with z = ω + i0 + , entering into (30). We introduce the two orthogonal projectors P and Q, where P projects on the subspace with zero photon, and Q projects on the orthogonal subspace. 
We thus have P |E j,α = |E j,α P |G ⊗ |k, = 0,(31)Q|E j,α = 0 Q|G ⊗ |k, = |G ⊗ |k, .(32) We define the displacement operator R(z) = V + V Q z − QH 0 Q − QV Q V(33) and use the general result [20] P 1 z − H P = P z − H eff ,(34) where the effective Hamiltonian H eff is H eff = P (H 0 + R(z)) P.(35) For the following calculations, it is convenient to introduce the dimensionless matrix [M ] proportional to the denominator of the right hand side of (34): [M ] (j ,α ),(j,α) = 2 Γ E j ,α |z − H eff |E j,α .(36) It is straightforward to check ‡ that for z → ω this matrix coincides with the symmetric matrix appearing in (16). Indeed the matrix elements of R(z) are E j ,α |R(z)|E j,α = d 2 2ε 0 V q,s cq s * α s α e iq·(r j −r j ) z − ω ,(37) which can be calculated explicitly. For j = j , the real part of this expression is the Lamb shift that we reincorporate in the definition of ω 0 , and its imaginary part reads: E j,α |R(z)|E j,α = −i Γ 2 δ α,α .(38) For j = j , the sum over (q, s) appearing in (37) is the propagator of a photon from an atom in r j in internal state |e α , to another atom in r j in internal state |e α . This is nothing but (up to a multiplicative coefficient) the expression that we already introduced for the field radiated in r j by a dipole located in r j : E j ,α |R(z)|E j,α = − Γ 2 g α,α (u j,j ),(39) where the tensor g α,α is defined in Eqs. (13)(14). Suppose now that the atoms are uniformly distributed over the transverse area L x L y of the quantisation volume. We set n (col) = N/(L x L y ) and we rewrite the expression (30) of the desired matrix element T ii as T ii L z c = 1 2N σ 0 n (col) j,j e ik(z j −z j ) [M −1 ] (j,x),(j ,x) = 1 2 σ 0 n (col) Π ,(40) where the coefficient Π has been defined in (22). The result (40) combined with (21) leads to T = 1 − i T ii L z c ,(41) which constitutes the 'optical theorem' for our slab geometry, since it relates the attenuation of the probe beam T to the forward scattering amplitude T ii . 
The emergence of resonant van der Waals interactions is straightforward in this approach. Let us consider for simplicity the case where only N = 2 atoms are present. The effective Hamiltonian H eff is a 6 × 6 matrix that can be easily diagonalized and its eigenvectors, with one atom in |e and one in |g , form in this particular case an orthogonal basis, although H eff is non-Hermitian [32,33]. For a short distance r between the atoms (kr 1), the leading term in h 1 (u) and h 2 (u) is u −3 and the energies (real parts of the eigenvalues) of the six eigenstates vary as ∼ ± Γ/(kr) 3 (resonant dipoledipole interaction). The imaginary parts of the eigenvalues, which give the inverse of the radiative lifetime of the states, tend either to Γ or 0 when r → 0, which correspond to the superradiant and subradiant states for a pair of atoms, respectively [34]. For N > 2 the eigenvectors of the non-Hermitian Euclidean matrix H eff are in general non orthogonal, which complicates the use of standard techniques of spectral theory in this context [29,30]. More precisely, one could think of solving the linear system (16), or equivalently calculating T ii in Eq. (30), by using the expansion of the column vector |Y defined in Eq. (17) on the left (|α j ) and right ( β j |) eigenvectors of H eff . Then one could inject this expansion in the general expression of the matrix element T ii , to express it as a sum of the contributions of the various eigenvalues of H eff . However the physical discussion based on this approach is made difficult by the fact that since H eff is non-Hermitian, the {|α j } and the {|β j } bases do not coincide. Hence the weight β j |Y Y |α j of a given eigenvalue in the sum providing the value of T ii is not a positive number, and this complicates the interpretation of the result. Beyond the sparse sample case: 3D vs. 2D For a sparse sample, we already calculated the optical density at first order in density (Eq. 
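The two-atom case discussed above can be verified numerically from the 6 × 6 matrix [M] introduced earlier. At short distance the real parts of its eigenvalues split by roughly ±3/(kr)³ (resonant dipole-dipole shifts), the sum of the imaginary parts is fixed by the trace, and a subradiant branch with vanishing imaginary part appears. A sketch under our conventions (δ = 0; pair axis along x):

```python
import numpy as np

def g_tensor(u_vec):
    # Eqs. (13)-(14)
    u = np.linalg.norm(u_vec)
    n = u_vec / u
    h1 = 1.5 * np.exp(1j * u) / u**3 * (u**2 + 1j * u - 1)
    h2 = 1.5 * np.exp(1j * u) / u**3 * (-u**2 - 3j * u + 3)
    return h1 * np.eye(3) + h2 * np.outer(n, n)

def pair_eigenvalues(kr, delta=0.0):
    """Eigenvalues of the 6x6 matrix [M] for two atoms separated by kr along x."""
    G = g_tensor(np.array([kr, 0.0, 0.0]))
    M = (delta + 1j) * np.eye(6, dtype=complex)
    M[0:3, 3:6] = G
    M[3:6, 0:3] = G
    return np.linalg.eigvals(M)

lam = pair_eigenvalues(0.05)
# Large real parts ~ 3/(kr)^3 signal the resonant dipole-dipole shifts;
# the smallest imaginary part collapses toward zero (subradiance).
```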
(24)) and the result is identical for a strictly 2D gas and a thick one. The approach based on quantum scattering theory is well suited to go beyond this first order approximation and look for differences between the 2D and 3D cases. The basis of the calculation is the series expansion of Eq. (34), which gives P 1 z − H P = P z − H 0 + ∞ n=1 P z − H 0 P R(z)P 1 z − H 0 n .(42) Consider the case of a resonant probe δ = 0 for simplicity. The result T ≈ 1 − σ 0 n (col) /2 obtained for a sparse sample in Eq. (24) corresponds to the first term [P/(z − H 0 )] of this expansion. Here we investigate the next order term and explain why one can still recover the Beer-Lambert law for a thick (3D) gas, but not for a 2D sample. Double scattering diagrams for a thick sample (k 1). We start our study by adding the first term (n = 1) in the expansion (42) to the zero-th order term already taken into account in Eq. (24). This amounts to take into account the diagrams where the incident photon is scattered on a single atom, and those where the photon 'bounces' on two atoms before leaving the atomic sample. Injecting the first two terms of the expansion (42) into (40), we obtain T ii L z c = 1 2 σ 0 n (col) −i + 1 N j j =j e ik(z j −z j ) g xx (u jj ) .(43) We now have to average this result on the positions of the atoms j and j . There are N (N − 1) ≈ N 2 couples (j, j ). Assuming that the gas is dilute so that the average distance between two atoms (in particular |z j − z j |) is much larger than k −1 , the leading term in g xx is the e iu /u contribution of h 1 (u) in Eqs. (13)- (14). We thus arrive at T ii L z c = 1 2 σ 0 n (col) −i + 3N 2k e ik(z−z ) e ik|r−r | |r − r | ,(44) where the average is taken over the positions r and r of two atoms. We first calculate the average over the xy coordinates and we get (cf. Eq. 
(20)) T ii L z c = 1 2 σ 0 n (col) −i + i 2 σ 0 n (col) e ik(z−z ) e ik|z−z | .(45) For a thick gas (k 1) the bracket in this expression has an average value of ≈ 1/2. Indeed the function to be averaged is equal to 1 if z < z , which occurs in half of the cases, and it oscillates and averages to zero in the other half of the cases, where z > z . We thus obtain the approximate value of the transmission coefficient: k 1 : T = 1 − i T ii L z c ≈ 1 − 1 2 σ 0 n (col) + 1 8 σ 0 n (col) 2 ,(46) where we recognize the first three terms of the power series expansion of T = exp(−σ 0 n (col) /2), corresponding to the optical density D = σ 0 n (col) . Double scattering diagrams for a 2D gas ( = 0). When all atoms are sitting in the same plane, the evaluation of the second order term (and the subsequent ones) in the expansion of T ii in powers of the density is modified with respect to the 3D case. The calculation starts as above and the second term in the bracket of Eq. (43) can now be written 1 N j j =j g xx (u jj ) = n (2D) g xx (u) d 2 u .(47) If we keep only the terms varying as e iu /u in h 1 and h 2 (Eq. (14)), we can calculate analytically the integral in (47) and find the same result as in 3D, i.e., iσ 0 n (2D) /4. If this was the only contribution to (47), it would lead to the Beer-Lambert law also in 2D, at least at second order in density. However one can check that a significant contribution to the integral in (47) comes from the region u = kr < 1. In this region, it is not legitimate to keep only the term in e ikr /kr in h 1 , h 2 , since the terms in e ikr /(kr) 3 , corresponding to the short range resonant van der Waals interaction, are actually dominant. Therefore the expansion of the transmission coefficient T in powers of the density differs from (46), and one cannot recover the Beer-Lambert law at second order in density. Calculating analytically corrections to this law could be done following the procedure of [12]. 
Here we will use a numerical method to determine the deviation with respect to the Beer-Lambert law (see section 4.2). Remark. For a 3D gas there are also corrections to the second term in Eq. (45) due the 1/r 3 contributions to h 1 and h 2 . However these corrections have a different scaling with the density and can be made negligible. More precisely their order of magnitude is ∼ n (3D) k −3 , to be compared with the value ∼ n (col) k −2 of the second term in Eq. (45). Therefore one can have simultaneously n (3D) k −3 1 and n (col) k −2 1, if the thickness of the gas along z is 1/k. Absorption of light by a slab of atoms In order to study quantitatively the optical response of a quasi-2D gas, we have performed a Monte Carlo calculation of the transmission factor T given in Eq. (21), and of the related optical density D = ln |T | −2 . We start our calculation by randomly drawing the positions of the N atoms, we then solve numerically the 3N × 3N linear system (16), and finally inject the result for the N dipoles in the expression of T . The atoms are uniformly distributed in a cylinder of axis z, with a radius R and a thickness . The largest spatial densities considered in this work correspond to a mean inter-particle distance ≈ k −1 . Around each atom we choose a small excluded volume with a linear size a = 0.01 k −1 . We varied a by a factor 10 around this value and checked that our results were essentially unchanged. Apart from this excluded volume we do not include any correlation between the positions of the atoms. This choice is justified physically by the fact that, in the case of large phase space densities which motivates our study, the density fluctuations in a 2D Bose gas are strongly reduced and the two-body correlation function g 2 (r, r ) is such that g 2 (r, r) ≈ 1 [35]. 
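The sampling procedure just described (uniform draws in a cylinder, rejection within a small excluded volume, no further correlations) can be sketched as follows (parameter names and the seed are ours):

```python
import numpy as np

def draw_positions(N, R, ell, a=0.01, seed=0):
    """Draw N atoms uniformly in a cylinder of radius R and thickness ell
    (lengths in units of 1/k), redrawing any atom that falls within the
    exclusion radius a of an already placed one."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < N:
        r = R * np.sqrt(rng.random())       # sqrt gives a uniform areal density
        phi = 2 * np.pi * rng.random()
        z = ell * (rng.random() - 0.5)
        p = np.array([r * np.cos(phi), r * np.sin(phi), z])
        if all(np.linalg.norm(p - q) > a for q in pts):
            pts.append(p)
    return np.array(pts)
```

With σ₀ = 6π/k², a draw with N = 2048 and R ≈ 55 k⁻¹ corresponds to σ₀ n^{(col)} ≈ 4, the densest case considered in the paper.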
In this section we first determine the value of N that is needed to reach the 'thermodynamic limit' for our problem: for a given thickness , D should not be an independent function of the number of atoms N and the disk radius R, but should depend only of the ratio N/πR 2 = n (col) . We will see that this imposes to use relatively large number of atoms, typically N > 1000, for the largest spatial densities considered here. All subsequent calculations are performed with N = 2048. We then study the dependence of D with the various parameters of the problem: the column density n (col) , the thickness of the gas , and the detuning ∆. In particular we show that for a given n (col) we recover the 3D result (1) when the thickness is chosen sufficiently large. Reaching the 'thermodynamic limit' We start our study by testing the minimal atom number that is necessary to obtain a faithful estimate of the optical density. We choose a given value of n (col) = N/πR 2 and we investigate how D depends on N either for a strictly 2D gas ( = 0) or for a gas extending significantly along the third direction ( = 20 k −1 ). We consider a resonant probe for this study (∆ = 0). We vary N by multiplicative steps of 2, from N = 8 up to N = 2048 and we determine how large N must be so that D is a function of n (col) only. The results are shown in Fig. 2a and Fig. 2b, where we plot D as a function of N . We perform this study for four values of the density n (col) , corresponding to σ 0 n (col) = 0.5, 1, 2 and 4. Let us consider first the smallest value σ 0 n (col) = 0.5. For each value of N we perform a number of draws that is sufficient to bring the standard error below 2 × 10 −3 and we find that the calculated optical density is independent of N (within standard error) already for N 100, for both values of . Consider now our largest value σ 0 n (col) = 4; for a strictly 2D gas ( = 0), D reaches an approximately constant value independent of N for N 1000. 
For σ₀ n^{(col)} = 4 and a relatively thick gas (ℓ = 20 k⁻¹, blue squares in Fig. 2b), reaching the thermodynamic limit is more problematic since there is still a clear difference between the results obtained with 1024 and 2048 atoms. This situation thus corresponds to the limit of validity of our numerical results. In the remaining part of the paper we will show only results obtained with N = 2048 atoms, for column densities not exceeding σ₀ n^{(col)} = 4. The number of independent draws of the atomic positions (at least 8) is chosen such that the standard error for each data point is below 2%.

Measured optical density vs. Beer-Lambert prediction

We now investigate the variation of the optical density D = ln |T|⁻² as a function of the column density n^{(col)} of the sample, or equivalently of the Beer-Lambert prediction D_BL = n^{(col)} σ. We suppose in this section that the probe beam is resonant (∆ = 0), and we address the cases of a strictly 2D gas (ℓ = 0) and a thick slab (ℓ = 20 k⁻¹). Consider first the strictly 2D case, ℓ = 0, leading to the results shown in Fig. 3a. We see that D differs significantly (∼ 25%) from D_BL already for D_BL around 1. A quadratic fit to the calculated variation of D for σ₀ n^{(2D)} < 1 (continuous red line in Fig. 3a) gives D ≈ D_BL (1 − 0.22 D_BL). The discrepancy between D and D_BL increases when the density increases: for D_BL = 4, the calculated D is only ≈ 1.4. For such a large density the average distance between nearest neighbours is ≈ k⁻¹ and the energy shifts due to the dipole-dipole interactions are comparable to or larger than the linewidth Γ. The atomic medium is then much less opaque to a resonant probe beam than in the absence of dipole-dipole coupling. Consider now the case of a thick sample, ℓ = 20 k⁻¹ (Fig. 3b). The calculated optical density is then very close to the Beer-Lambert prediction over the whole range that we studied.
This means that in our chosen range of optical densities, the mean-field approximation leading to D BL is satisfactory as soon as the sample thickness exceeds a few optical wavelengths λ = 2π/k. It is interesting to characterize how the optical density evolves from the value for a strictly 2D gas to the expected value from the Beer-Lambert law D BL when the thickness of the gas increases. We show in Fig. 4 the variation of D as function of for three values of the column density corresponding to D BL = 1, 2 and 4. An exponential fit D = α + β exp(− / c ) to these data for 2 k −1 ≤ ≤ 20 k −1 gives a good account of the observed variation over this range, and it provides the characteristic thickness c needed to recover the Beer-Lambert law. We find that c ≈ 3.0 k −1 for D BL = 1, c ≈ 3.5 k −1 for D BL = 2, and c ≈ 4.4 k −1 for D BL = 4. Remark. For the largest value of the column density considered here (n (col) σ 0 = 4) we find that D increases slightly above the value D BL when is chosen larger than 20 k −1 (upper value considered in Fig. 4). We believe that this is a consequence of the edge terms that we neglected when approximating Eq. (19) by Eq. (21). These terms become significant for D BL = 4 because for our atom number N = 2048, the sample radius R ≈ 55 k −1 is then not very large compared to its thickness for 20 k −1 . In order to check this assumption, we also calculated numerically the result of Eq. (19) (instead of Eq. (21)) for practical values of the parameters (position and radius) of the lens represented in Fig. 1a. The results give again D ≈ D BL , but now with D remaining below D BL . Since our emphasis in this paper is rather put on the 2D case, we will not explore this aspect further here. Absorption line shape Resonant van der Waals interactions manifest themselves not only in the reduction of the optical density at resonance but also in the overall line shape of the absorption profile. 
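Since the fit D = α + β exp(−ℓ/ℓ_c) is linear in (α, β) once ℓ_c is fixed, a simple grid search over ℓ_c combined with linear least squares suffices; no nonlinear optimiser is needed. A sketch on synthetic data (the numbers below are illustrative, not the paper's data points):

```python
import numpy as np

def fit_exponential(ell, D, lc_grid):
    """Fit D = alpha + beta * exp(-ell/lc): for each trial lc the problem is
    linear in (alpha, beta); keep the lc with the smallest residual."""
    best = (np.inf, None)
    for lc in lc_grid:
        A = np.column_stack([np.ones_like(ell), np.exp(-ell / lc)])
        coef, *_ = np.linalg.lstsq(A, D, rcond=None)
        res = np.sum((A @ coef - D) ** 2)
        if res < best[0]:
            best = (res, (coef[0], coef[1], lc))
    return best[1]

# Synthetic curve with lc = 3.5 (illustrative values only)
ell = np.linspace(2.0, 20.0, 10)
D_data = 2.0 - 0.8 * np.exp(-ell / 3.5)
alpha, beta, lc = fit_exponential(ell, D_data, np.arange(2.0, 6.0, 0.05))
```

On noiseless data the grid search recovers ℓ_c = 3.5 and the linear coefficients essentially exactly.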
To investigate this problem we have studied the variations of D with the detuning of the probe laser. We show in Fig. 5a the results for a strictly 2D gas ( = 0) for n (col) σ 0 = 1, 2 and 4. Several features show up in this series of plots. First we note a blue shift of the resonance, which increases with n (2D) and reaches ∆ ≈ Γ/4 for σ 0 n (2D) = 4. We also note a slight broadening of the central part of the absorption line, since the full-width at half maximum, which is equal to Γ for an isolated atom, is 1.3Γ for n (col) σ 0 = 4. Finally we note the emergence of large, non-symmetric wings in the absorption profile. This asymmetry is made more visible in Fig. 5b, where we show with full blue squares the same data as in Fig. 5a for n (col) σ 0 = 4, but now plotting D/D BL as function of δ. For a detuning δ = ±15, the calculated optical density exceeds the Beer-Lambert prediction by a factor 4.1 (resp. 2.8) on the red (resp. blue) side. In order to get a better understanding of these various features, we give in Fig. 5b two additional results. On the one hand we plot with empty blue squares the variations of D/D BL for a thick gas ( = 20 k −1 ) with the same column density n (col) σ 0 = 4. There are still some differences between D and D BL in this case, as already pointed out in [17], but they are much smaller than in the = 0 case. This indicates that the strong deviations with respect to the Beer-Lambert law that we observe in Fig. 5a are specific 2D features. On the other hand we plot with black stars the variations of D/D BL for a 2D gas ( = 0) in which we artificially increased the exclusion radius around each atom up to a = k −1 instead of a = 0.01 k −1 (blue full squares) for the other results in this paper. This procedure, which was suggested to us by Robin Kaiser, allows one to discriminate between effects due to isolated pairs of closely spaced atoms, and manybody features resulting from multiple scattering of photons among larger clusters of atoms. 
The comparison of the results obtained for a = 0.01 k −1 and a = k −1 suggests that the blue shift of the resonance line, which is present in both cases, is a many-body phenomenon, whereas the large amplitude wings with a blue-red asymmetry, which occurs only for a = 0.01 k −1 , is rather an effect of close pairs. This asymmetry in the wings of the absorption line in a 2D gas can actually be understood in a semi-quantitative manner by a simple reasoning. We recall that for two atoms at a distance r k −1 , the levels involving one ground and one excited atom have an energy (real part of the eigenvalues of H eff ) that is displaced by ∼ ± Γ/(kr) 3 . A given detuning δ can thus be associated to a distance r between the two members of a pair that will resonantly absorb the light. To be more specific let us consider a pair of atoms with kr 1, and suppose for simplicity that it is aligned either along the polarization axis of the light (x) or perpendicularly to the axis (y). In both cases the excited state of the pair that is coupled to the laser is the symmetric combination (|1 : g; 2 : e x + |1 : e x ; 2 : g )/ √ 2. If the pair is aligned along the x axis, this state has an energy ω 0 − 3 Γ/2(kr) 3 , hence it is resonant with red detuned light such that δ = −3/(kr) 3 . If the pair axis is perpendicular to x, the state written above has an energy ω 0 + 3 Γ/4(kr) 3 , hence it is resonant with blue detuned light such that δ = 3/2(kr) 3 . This clearly leads to an asymmetry between red and blue detuning; indeed the pair distance r needed for ensuring resonance for a given δ > 0, r blue = (3/2|δ|) 1/3 , is smaller than the value r red = (3/|δ|) 1/3 for the opposite value −δ. Since the probability density for the pair distance is P(r) ∝ r in 2D for randomly drawn positions, we expect the absorption signal to be stronger for −δ than for +δ. 
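The red-blue asymmetry argument can be made quantitative in the simplified two-orientation model of the text: each detuning selects a resonant pair distance, and the 2D pair statistics P(r) ∝ r weight the two wings differently. A sketch (our naming):

```python
def resonant_kr(delta):
    """kr of the pair resonant at reduced detuning delta, in the simplified
    two-orientation model: pair along x (shift -3/(kr)^3) for delta < 0,
    pair perpendicular to x (shift +3/(2(kr)^3)) for delta > 0."""
    if delta < 0:
        return (3.0 / abs(delta)) ** (1.0 / 3.0)
    return (1.5 / delta) ** (1.0 / 3.0)

r_red = resonant_kr(-15.0)   # pairs resonant in the red wing
r_blue = resonant_kr(15.0)   # pairs resonant in the blue wing
```

Since P(r) ∝ r in 2D, this crude model favors the red wing by roughly r_red/r_blue = 2^{1/3} ≈ 1.26; the numerically observed asymmetry at δ = ±15 (factors 4.1 vs 2.8) also reflects differences in coupling strength that the model ignores.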
In a 3D geometry the variation of the probability density with r is even stronger (P(r) ∝ r 2 ), but it is compensated by the fact that the probability of occurrence of pairs that are resonant with blue detuned light is dimensionally increased. For example in our simplified modelling where the pair axis is aligned with the references axes, a given pair will be resonant with blue detuned light in 2/3 of the cases (axis along y or z) and resonant with red detuned light only in 1/3 of the cases (axis along x). This explains why the asymmetry of the absorption profile is much reduced for a 3D gas in comparison to the 2D case. Summary We have presented in this paper a detailed analysis of the scattering of light by a disordered distribution of atoms in a quasi-two dimensional geometry. The particles were treated as fixed scatterers and their internal structure was modeled as a two-level system, with a J = 0 ground state and a J = 1 excited state. In spite of these simplifying assumptions the general trend of our results is in good agreement with the experimental finding of [7], where a variation of the measured optical density similar to that of Fig. 3 was measured. Several improvements in our modeling can be considered in order to reach a quantitative agreement with theory and experiment. The first one is to include the relatively complex atomic structure of the alkali-metal species used in practice, with a multiply degenerate ground state; this could be done following the lines of [21,22]. A second improvement consists in taking into account the atomic motion. This is in principle a formidable task, because it leads to a spectacular increase in the dimension of the relevant Hilbert space. This addition can however be performed in practice in some limiting cases, for example if one assumes that the particles are tightly bound in a lattice [36,37]. 
When the atom-light interaction is used only to probe the spatial atomic distribution of the gas, neglecting the particle motion should not be a major problem. Indeed the duration of the light pulse is quite short (∼ 10 microseconds only). Each atom scatters only a few photons in this time interval and its displacement is then smaller than the mean interatomic spacing for the spatial densities encountered in practice. The acceleration of the atoms under the effect of resonant van der Waals interactions should also have a minor effect under relevant experimental conditions. Finally another aspect that could be valuably studied is the interaction of the gas with an intense laser beam [38]. One could thus validate the intuitive idea that saturation phenomena reduce the effects of resonant van der Waals interactions [39,8], and are thus helpful to provide a faithful estimate of the atomic density from the light absorption signal.

Figure 1. Two possible setups for measuring the absorption of an incident probe beam by a slab of atoms using a lens of focal length f. a) Global probe. b) Local probe.

Figure 2. Variation of the optical density D = ln |T|⁻² calculated from (21) as a function of the number of atoms N, for ℓ = 0 (a) and ℓ = 20 k⁻¹ (b), and for 4 values of the density: σ₀ n^{(2D)} = 0.5 (black), 1 (red), 2 (green) and 4 (blue). The bars indicate the standard deviations. The dotted lines give the value obtained for our largest value of N (N = 2048). The results have been obtained at resonance (∆ = 0).

Figure 3. Variations of the optical density D as a function of the Beer-Lambert prediction D_BL for ℓ = 0 (a) and ℓ = 20 k⁻¹ (b). The black dotted line is the straight line of slope 1. In (a) the continuous red line is a quadratic fit D = D_BL (1 − µ D_BL) with µ = 0.22 to the data points with D_BL ≤ 1. The calculations are done for N = 2048, ∆ = 0 and the bars indicate standard deviations.

Figure 4. Variation of D with the thickness ℓ of the gas for various column densities corresponding to D_BL = 1 (red), 2 (green), 4 (blue). The continuous lines are exponential fits to the data. The dotted lines give the Beer-Lambert result. The calculations are done for N = 2048, ∆ = 0 and the bars indicate standard deviations.

Figure 5. (a) Variation of D with the reduced detuning δ = 2∆/Γ of the probe laser in the case of a 2D gas (ℓ = 0), for three values of the column density σ₀ n^{(col)} = 1 (red), 2 (green), and 4 (blue). (b) Blue full squares: same data as in (a), now plotted as D/D_BL as a function of δ. Blue open squares: D/D_BL for a thick gas (ℓ = 20 k⁻¹). Black stars: D/D_BL for a 2D gas (ℓ = 0) and a large exclusion region around each atom (a = k⁻¹). All data in (b) correspond to σ₀ n^{(2D)} = 4. The calculations are done with N = 2048 atoms and the bars indicate standard deviations.

‡ As for the derivation leading from Eq. (10) to Eq. (12), one must take into account the non-resonant terms that are usually dropped in the RWA, in order to ensure the proper convergence of the sum (37) and obtain the tensor g_{αα'}.

Acknowledgments

We thank I. Carusotto, Y. Castin, K. Günter, M. Holzmann, R. Kaiser, W. Krauth and S. P. Rath for helpful discussions and comments. This work is supported by IFRAF and ANR (project BOFL).

References

[1] F. S. Dalfovo, L. P. Pitaevskii, S. Stringari, and S. Giorgini. Theory of Bose-Einstein condensation in trapped gases. Rev. Mod. Phys., 71:463, 1999.
[2] M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen De, and U. Sen. Ultracold atomic gases in optical lattices: mimicking condensed matter physics and beyond. Adv. Phys., 56(2):243-379, 2007.
[3] I. Bloch, J. Dalibard, and W. Zwerger. Many-body physics with ultracold gases. Rev. Mod. Phys., 80(3):885, 2008.
[4] S. Giorgini, L. P. Pitaevskii, and S. Stringari. Theory of ultracold atomic Fermi gases. Rev. Mod. Phys., 80:1215-1274, 2008.
[5] W. Ketterle, D. S. Durfee, and D. M. Stamper-Kurn. Making, probing and understanding Bose-Einstein condensates. In M. Inguscio, S. Stringari, and C. E. Wieman, editors, Bose-Einstein Condensation in Atomic Gases, Proceedings of the International School of Physics Enrico Fermi, Course CXL, page 67, Amsterdam, 1999. IOS Press.
[6] T.-L. Ho and Q. Zhou. Obtaining the phase diagram and thermodynamic quantities of bulk systems from the densities of trapped gases. Nature Physics, 6:131, 2009.
[7] S. P. Rath, T. Yefsah, K. J. Günter, M. Cheneau, R. Desbuquois, M. Holzmann, W. Krauth, and J. Dalibard. Equilibrium state of a trapped two-dimensional Bose gas. Phys. Rev. A, 82:013609, 2010.
[8] T. Yefsah, R. Desbuquois, L. Chomaz, K. J. Günter, and J. Dalibard. Exploring the thermodynamics of a two-dimensional Bose gas. Phys. Rev. Lett., 107:130401, 2011.
[9] G. Labeyrie, F. de Tomasi, J.-C. Bernard, C. A. Müller, C. Miniatura, and R. Kaiser. Coherent backscattering of light by cold atoms. Phys. Rev. Lett., 83:5266-5269, 1999.
[10] G. Labeyrie, E. Vaujour, C. A. Müller, D. Delande, C. Miniatura, D. Wilkowski, and R. Kaiser. Slow diffusion of light in a cold atomic cloud. Phys. Rev. Lett., 91:223904, 2003.
[11] E. Akkermans and G. Montambaux. Mesoscopic Physics of Electrons and Photons. Cambridge University Press, Cambridge, England, 2007.
[12] O. Morice, Y. Castin, and J. Dalibard. Refractive index of a dilute Bose gas. Phys. Rev. A, 51:3896, 1995.
[13] F. A. Pinheiro, M. Rusek, A. Orłowski, and B. A. van Tiggelen. Probing Anderson localization of light via decay rate statistics. Phys. Rev. E, 69:026605, 2004.
[14] A. Gero and E. Akkermans. Superradiance and multiple scattering of photons in atomic gases. Phys. Rev. A, 75:053413, 2007.
[15] A. A. Svidzinsky, J.-T. Chang, and M. O. Scully. Dynamical evolution of correlated spontaneous emission of a single photon from a uniformly excited cloud of N atoms. Phys. Rev. Lett., 100:160504, 2008.
[16] E. Akkermans, A. Gero, and R. Kaiser. Photon localization and Dicke superradiance in atomic gases. Phys. Rev. Lett., 101:103602, 2008.
[17] I. M. Sokolov, M. D. Kupriyanova, D. V. Kupriyanov, and M. D. Havey. Light scattering from a dense and ultracold atomic gas. Phys. Rev. A, 79:053405, 2009.
[18] M. O. Scully. Collective Lamb shift in single photon Dicke superradiance. Phys. Rev. Lett., 102:143601, 2009.
[19] A. Goetschy and S. E. Skipetrov. Euclidean matrix theory of random lasing in a cloud of cold atoms. EPL (Europhysics Letters), 96:34005, 2011.
[20] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg. Atom-Photon Interactions. Wiley, New York, 1992.
[21] T. Jonckheere, C. A. Müller, R. Kaiser, C. Miniatura, and D. Delande. Multiple scattering of light by atoms in the weak localization regime. Phys. Rev. Lett., 85:4269-4272, 2000.
[22] C. A. Müller and C. Miniatura. Multiple scattering of light by atoms with internal degeneracy. Journal of Physics A: Mathematical and General, 35(47):10163, 2002.
[23] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg. Photons and Atoms: Introduction to Quantum Electrodynamics. Wiley, New York, 1989.
[24] A. Messiah. Quantum Mechanics, Chapter XIX, volume II. North-Holland Publishing Company, Amsterdam, 1961.
[25] O. Morice. Atomes refroidis par laser : du refroidissement sub-recul à la recherche d'effets quantiques collectifs. PhD thesis, Université Pierre et Marie Curie, Paris, http://tel.archives-ouvertes.fr/docs/00/06/13/10/PDF/1995MORICE.pdf, 1995.
[26] J. D. Jackson. Classical Electrodynamics. John Wiley, New York, 1998.
[27] M. Mézard, G. Parisi, and A. Zee. Spectra of Euclidean random matrices. Nuclear Physics B, 559:689, 1999.
[28] M. Rusek, J. Mostowski, and A. Orłowski. Random Green matrices: From proximity resonances to Anderson localization. Phys. Rev. A, 61:022704, 2000.
[29] S. E. Skipetrov and A. Goetschy. Eigenvalue distributions of large Euclidean random matrices for waves in random media. Journal of Physics A: Mathematical and Theoretical, 44(6):065102, 2011.
[30] A. Goetschy and S. E. Skipetrov. Non-Hermitian Euclidean random matrix theory. Phys. Rev. E, 84:011150, 2011.
[31] L. L. Foldy. The multiple scattering of waves. I. General theory of isotropic scattering by randomly distributed scatterers. Phys. Rev., 67:107-119, 1945.
[32] M. J. Stephen. First-order dispersion forces. J. Chem. Phys., 40:669, 1964.
[33] D. A. Hutchinson and H. F. Hameka. Interaction effects on lifetimes of atomic excitations. J. Chem. Phys., 41:2006, 1964.
[34] R. H. Dicke. Coherence in spontaneous radiation processes. Phys. Rev., 93:99, 1954.
[35] N. V. Prokof'ev, O. Ruebenacker, and B. V. Svistunov. Critical point of a weakly interacting two-dimensional Bose gas. Phys. Rev. Lett., 87:270402, 2001.
[36] M. Antezza and Y. Castin. Spectrum of light in a quantum fluctuating periodic structure. Phys. Rev. Lett., 103:123903, 2009.
[37] M. Antezza and Y. Castin. Fano-Hopfield model and photonic band gaps for an arbitrary atomic lattice. Phys. Rev. A, 80:013816, 2009.
[38] G. Reinaudi, T. Lahaye, Z. Wang, and D. Guéry-Odelin. Strong saturation absorption imaging of dense clouds of ultracold atoms. Opt. Lett., 32:3143, 2007.
[39] C.-L. Hung, X. Zhang, N. Gemelke, and C. Chin. Observation of scale invariance and universality in two-dimensional Bose gases. Nature, 470:236, 2011.
Title: Resonant Low Frequency Interlayer Shear Modes in Folded Graphene Layers
Authors: Chunxiao Cong; Ting Yu ([email protected])
Affiliations: Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore; Department of Physics, Faculty of Science, National University of Singapore, 117542, Singapore; Graphene Research Center, National University of Singapore, 117546, Singapore
Abstract: Naturally or artificially stacking extra layers on single layer graphene (SLG) forms few-layer graphene (FLG), which has attracted tremendous attention owing to its exceptional properties inherited from SLG and new features generated by introducing extra freedom. In FLG, shear modes play a critical role in understanding its distinctive properties. Unfortunately, the energies of shear modes are so close to that of the excitation laser that they are fully blocked by the Rayleigh rejecter. This greatly hinders investigations of shear modes in FLG. Here, we demonstrate dramatically enhanced shear modes in properly folded FLG. Benefiting from the extremely strong signals, for the first time, the enhancement mechanism, vibrational symmetry, anharmonicity and electron-phonon coupling (EPC) of shear modes are uncovered through studies of two-dimensional (2D) Raman mapping and polarization- and temperature-dependent Raman spectroscopy. This work complements Raman studies of graphene layers, and paves an efficient way to exploit low frequency shear modes of FLG and other 2D layered materials.
DOI: 10.1038/ncomms5709
PDF: https://export.arxiv.org/pdf/1312.6928v1.pdf
Corpus ID: 205329187
arXiv: 1312.6928
SHA: fb39c575fd7a0fd8da47daee1583a2913aa80ff8
Resonant Low Frequency Interlayer Shear Modes in Folded Graphene Layers

Chunxiao Cong and Ting Yu*
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore
Department of Physics, Faculty of Science, National University of Singapore, 117542, Singapore
Graphene Research Center, National University of Singapore, 117546, Singapore
*Address correspondence to [email protected]

Supplementary information is available in the online version of the paper. Reprints and permissions information is available online at www.nature.com/reprints.

Naturally or artificially stacking extra layers on single layer graphene (SLG) forms few-layer graphene (FLG), which has attracted tremendous attention owing to its exceptional properties inherited from SLG and new features generated by introducing extra freedom. In FLG, shear modes play a critical role in understanding its distinctive properties. Unfortunately, the energies of shear modes are so close to that of the excitation laser that they are fully blocked by the Rayleigh rejecter. This greatly hinders investigations of shear modes in FLG. Here, we demonstrate dramatically enhanced shear modes in properly folded FLG. Benefiting from the extremely strong signals, for the first time, the enhancement mechanism, vibrational symmetry, anharmonicity and electron-phonon coupling (EPC) of shear modes are uncovered through studies of two-dimensional (2D) Raman mapping and polarization- and temperature-dependent Raman spectroscopy. This work complements Raman studies of graphene layers, and paves an efficient way to exploit low frequency shear modes of FLG and other 2D layered materials.

Few-layer graphene (FLG) possesses unique properties of crystal structure, lattice dynamics and electronics: for example, an energy band gap can be opened in Bernal-stacked bilayer graphene (BLG) 1 ; ABA- and ABC-stacked trilayer graphene (TLG) respond differently in integer quantum Hall effect (IQHE) measurements 2 ; and a Van Hove singularity (VHS) forms in folded or twisted double layer graphene (f/tDLG) 3-4 .

Raman spectroscopy is one of the most useful and versatile techniques for probing graphene layers, as has been demonstrated in studies of the number of layers, strain, doping, edges, stacking orders, and even magneto-phonon coupling in graphene layers 5-20 . The strength of Raman spectroscopy for exploiting graphene lies in the fact that the fundamental vibrational modes such as the G, G' and D modes are resonant with electrons, which leads to very strong signals, facilitates many measurements that would otherwise be unfeasible with weak Raman signals, and provides an effective way to probe phonons and electronic band structures through strong electron-phonon coupling (EPC). In addition to these well-known fundamental modes, some other weak modes such as higher-order, combinational and superlattice-wave-vector-mediated phonon modes have been observed in either Bernal- or non-Bernal-stacked graphene layers 21-24 . They all carry interesting and important information about lattice vibrations and electronic band structures.

Another very fundamental and intrinsic vibrational mode in FLG and bulk graphite is the rigid interlayer shear mode, involving the relative motion of atoms in adjacent layers. The vibrational energies of shear modes vary as the thickness, and consequently the restoring force strength, of Bernal-stacked graphene layers changes, as demonstrated by experimental observation and perfectly modeled by a simple linear chain system 25 . Therefore, this shear mode, named the C peak, can be used as another Raman spectroscopic feature for identifying the thickness of Bernal-stacked graphene layers. Considering its low energy, ~5 meV, researchers believe the C peak could be a probe for the quasiparticles near the Dirac point through quantum interference 25 . However, the low energy also makes direct observation of shear modes extremely challenging, because the shear modes are so close to the excitation photons that they are fully suppressed by the notch or edge filter of most Raman instruments. To directly detect this C peak, a low-doped Si substrate with pre-etched holes was used in the previous study 25 . Though the C peak of suspended graphene layers was observed on such a specially prepared substrate, it is still very weak, especially for BLG and TLG, which happen to be the most interesting and promising members of the graphene family together with SLG. Therefore, the extremely weak signal and the sophisticated sample preparation severely limit the study of shear modes and their coupling with other particles. After the pioneering work 25 , very few experimental observations of the first-order fundamental shear modes of FLG have been reported 26 .
Here, we report remarkably enhanced shear modes in folded 2+2 and 3+3 graphene layers with certain rotational angles, where a VHS is induced and results in the enhancement and doublet splitting of the G mode, as discussed in our previous study 27 . These folded 2+2 and 3+3 graphene layers with strongly enhanced G mode are named r-f4LG and r-f6LG, respectively; here, "r" refers to resonance of the G mode. Instead of a specially prepared substrate of low-doped Si with an array of micro-holes 25 , we used a typical high-doped Si substrate with 285 nm SiO2. The extremely strong signal, comparable to or even stronger than the resonant G mode, enables measurements of two-dimensional Raman mapping and polarization- and temperature-dependent Raman spectroscopy of this low frequency shear mode for the first time, and thus unravels its vibrational symmetry, anharmonicity and EPC.

Results

In our previous study, we classified folded graphene layers into three types by the folding or rotational angle θ: θ_small, θ_medium and θ_large, for a given excitation laser. These three types of folded layers exhibit very different Raman spectral features 27 . Figure 1 shows BLG with two self-folded regions of rotational angles of 11° (θ_medium) and 21.4° (θ_large). An excitation laser of 532 nm was used for all the Raman measurements in this work. The significant enhancement of the G mode in the θ_medium r-f4LG can be clearly seen in the Raman image (Fig. 1b) and spectra (Fig. 1g). Surprisingly, along with the resonant G mode, a low frequency (~30 cm^-1) peak is also present in this r-f4LG, and it is so strong that the first Raman images of such a low frequency mode in graphene layers are clearly resolved by extracting its intensity, position and width (Fig. 1d-f). A single Lorentzian lineshape peak is used for fitting the Raman peaks in this study (Fig. S1).
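A single-Lorentzian fit of the kind used here can be sketched as follows (synthetic data with illustrative numbers near the C2 position, not the measured spectra; a crude grid search stands in for a proper least-squares routine):

```python
import numpy as np

def lorentzian(x, x0, gamma, amp, y0):
    """Single Lorentzian line shape: center x0, half width at half maximum gamma."""
    return y0 + amp * gamma**2 / ((x - x0)**2 + gamma**2)

# Synthetic low-frequency peak near 30.5 cm^-1 (illustrative values only).
x = np.linspace(20.0, 40.0, 401)
y = lorentzian(x, 30.5, 0.4, 1.0, 0.02)

# Crude grid-search least squares over (x0, gamma); amp and y0 held fixed here.
candidates = [(np.sum((lorentzian(x, x0, g, 1.0, 0.02) - y) ** 2), x0, g)
              for x0 in np.arange(29.0, 32.0, 0.01)
              for g in np.arange(0.1, 1.0, 0.01)]
_, x0_fit, gamma_fit = min(candidates)
fwhm = 2.0 * gamma_fit   # full width at half maximum of the fitted peak
```

The recovered center and FWHM are the two quantities tabulated for the C peaks in the supplementary tables.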
Reading the position and width of this low frequency mode and referring to the previous study of suspended BLG 25 , we tentatively assign it to the interlayer shear mode (the C peak, or in our labelling, C2) of the "mother" flake, BLG, but with an extremely large enhancement of the intensity. The good correlation between the enhanced C2 and G modes is clearly revealed by the Raman images (Fig. 1b and d) and spectrum (Fig. 1g). Therefore, we believe that they share the same enhancement mechanism, a folding-induced VHS. Though the formation and influence of the VHS in 1+1 f/tDLG have been intensively studied recently [3][4] , there is no evidence of the existence of such a VHS in 2+2 θ_medium r-f4LG. In this work, by adapting the previous methodology [28][29] , we compute the electronic band structure and density of states (DOS) of such a 2D system. A VHS corresponding well to our excitation photon energy is found (Fig. S2). We remark that the nonuniformity appearing in the Raman image of the C2 mode intensity (Fig. 1d) might be due to variation of the interlayer spacing. A relatively weak peak located at around 115 cm^-1 is noticed and attributed to a combinational mode of the interlayer breathing mode (~86 cm^-1) 30 and the shear mode, labelled the B+C2 peak herein. Mediated by either the short-range twisted bilayer lattice or the superlattice, ZO' and R peaks have also been observed in 1+1 r-fDLG and exhibit a dependence of peak positions on twisting angles 23,31 . To further prove that this ultra-strong low frequency peak is the shear mode of BLG and to unravel its nature, we measured two other pieces of r-f4LG, which present R peaks of various positions (Fig. 1h). The rotational angles are determined by carefully fitting and reading the positions of the R peaks [23][24] . Clearly enough, all C2 peaks are enhanced and their positions show no dependence on the rotational angles, which is very much different from the twisting-angle-dependent peaks discussed previously.
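The VHS identification above rests on computing a DOS from a band structure. As a generic, hedged illustration of how a saddle point in a 2D band produces a DOS peak (a textbook tight-binding band, not the actual twisted-bilayer calculation of refs. 28-29):

```python
import numpy as np

# Nearest-neighbour tight-binding band on a square lattice (t = 1):
# E(k) = -2(cos kx + cos ky). The saddle points at (pi, 0) and (0, pi)
# give a logarithmic Van Hove singularity in the DOS at E = 0.
k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
kx, ky = np.meshgrid(k, k)
E = -2.0 * (np.cos(kx) + np.cos(ky))

edges = np.arange(-4.1, 4.2, 0.2)        # 41 bins; bin 20 covers [-0.1, 0.1)
dos, _ = np.histogram(E.ravel(), bins=edges)

peak_bin = int(np.argmax(dos))           # the bin containing E = 0
```

The DOS histogram peaks sharply in the bin containing the saddle-point energy, which is the same mechanism by which the folding-induced VHS pins an enhanced optical response at the excitation photon energy.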
The detailed curve fitting and calculation (Table S1) give the peak positions and linewidths of these C2 peaks as well as the interlayer coupling strength derived from the frequencies of the C2 peaks. The derived interlayer coupling strength remains the same as that of normal Bernal-stacked BLG 25 . This indicates that this fundamental shear phonon mode of the "mother" flake, though it can be enhanced by the folding, is very robust even after the flake is folded on top of itself, which must be very interesting and important for exploiting the mechanical and electrical properties of such folded atomically thin layers, including graphene and other 2D systems. It is also noticed that the position of the weak combinational (B+C2) peak does not change either when the rotational angles vary. This further supports our assignment, since both the breathing (B) and shear (C2) modes are fundamental modes of the "mother" flake and are not affected by the folding. Not only does the 2+2 f4LG exhibit three types of folding; the 3+3 f6LG also follows this criterion. Figure 2 presents optical and Raman images of θ_medium and θ_large f6LG together with the Raman images of the low frequency modes. As in r-f4LG, in the G-mode resonant region of the 3+3 folded layers (r-f6LG), the low frequency peaks are remarkably enhanced and correlate very well with the resonant G mode, as visualized by their Raman images. Therefore, the role of the folding-induced VHS in the enhancement of the G and C peaks can be extended to the 3+3 r-f6LG. As predicted by theory 25 and illustrated by the diagram (Fig. 2n), there are two shear modes in Bernal-stacked TLG, located at the lower and higher frequency sides of the shear mode of BLG. The Raman spectrum (Fig. 2g) of the r-f6LG clearly presents the two low frequency peaks.
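The linear-chain model behind these assignments can be evaluated directly. A minimal sketch (the force constant and monolayer mass density below are literature values for graphite, assumed here rather than fitted from our spectra):

```python
import math

ALPHA = 12.8e18   # interlayer shear force constant per unit area, N/m^3 (assumed graphite value)
MU = 7.6e-7       # mass per unit area of one graphene layer, kg/m^2 (assumed)
C_CM = 2.9979e10  # speed of light in cm/s, to express results in cm^-1

def shear_modes(n_layers):
    """Shear-mode positions (cm^-1) of N Bernal-stacked layers in the linear-chain model:
    omega_i = (1 / (sqrt(2) pi c)) * sqrt(alpha/mu) * sqrt(1 + cos(i pi / N))."""
    pref = math.sqrt(ALPHA / MU) / (math.sqrt(2.0) * math.pi * C_CM)
    return sorted(pref * math.sqrt(1.0 + math.cos(i * math.pi / n_layers))
                  for i in range(1, n_layers))

blg = shear_modes(2)   # single C2 mode, close to the ~31 cm^-1 peak
tlg = shear_modes(3)   # C31 and C32, close to ~22 and ~38 cm^-1
```

With these inputs the model reproduces the ~31 cm^-1 BLG mode and the ~22/~38 cm^-1 TLG pair discussed here, and N − 1 branches appear for N layers, consistent with the five modes of the 12-layer fold.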
The positions and linewidths (Table S2) lead us to assign these two modes as the Raman-active E'' shear mode for the lower frequency one (C31) and the infrared (IR)/Raman-active E' shear mode for the higher frequency one (C32). Though C31 is slightly weaker than C32, it is also significantly enhanced. This is the first observation of the lower frequency shear mode, which is supposed to be extremely weak and could not be observed even in bulk graphite 25 . It is noticed that the interlayer coupling strength derived from the two shear modes of 3+3 r-f6LG is nearly identical to that in the 2+2 r-f4LG and to the previously reported value 25 . The peak located at ~120 cm^-1 in the r-f6LG is attributed to the combinational mode of the higher frequency breathing mode (IR-active) and the lower frequency shear mode (Raman-active), labelled B+C31. Very interestingly and meaningfully, in a 2+3 folded 5-layer graphene with a medium rotational angle (11.8°), the G mode and the shear modes of both BLG and TLG are enhanced (Figure 2). In a 6+6 r-f12LG, all five shear modes are clearly resolved and show perfect agreement with the theoretical prediction (Figure S3 and Table S3). This immediately indicates how robust the formation of the VHS in such folded graphene layers, the enhancement of the shear and G modes, and the shear modes themselves are against folding. To further demonstrate the feasibility of enhancing shear modes by a proper folding and to exploit their more intrinsic properties, we performed polarization- and in-situ temperature-dependent Raman spectroscopy studies.
Figure 3 shows the low frequency and G modes of r-f4LG as a function of the angle between the polarizations of the incident and scattered light. The strong and sharp peak located at ~30 cm^-1 is insensitive to the polarization configuration, while the intensity of the weak peak located at ~115 cm^-1 is maximized under the parallel polarization and minimized for the perpendicular configuration. From our discussion and the assignments above, the enhanced sharp peak in the 2+2 r-f4LG should be the shear mode of Bernal-stacked BLG with E_g symmetry. Thus, it is in-plane two-fold degenerate and naturally independent of our polarization configurations, like the G mode. For the weak peak, the assignment is to the combinational mode of the breathing (A_1g) and shear (E_g) modes in r-f4LG. Since the out-of-plane breathing mode (A_1g) contributes to the combinational mode, the intrinsic polarization nature of the A_1g mode fully accounts for the polarization dependence of the weak peak located at ~115 cm^-1, which shows zero intensity under the perpendicular and maximum intensity under the parallel polarization configuration. More discussion can be found in the supplementary information.

In-situ temperature-dependent Raman spectroscopy is one of the most powerful tools for probing phonons, a collective of lattice vibrations, and their interaction with other particles/quasiparticles. In Figure 4, we present the evolution of the shear mode of BLG over a temperature range of 90 K to 390 K. Firstly, we compare our thermal chamber temperature readings with the sample local temperatures estimated from the intensity ratio of the Stokes and anti-Stokes peaks 32 . The fairly good agreement between the two affirms that laser heating can be neglected (Fig. 4b). Now, we focus on the line shift as a function of temperature. A redshift of the shear mode with increasing temperature is observed (Fig. 4c upper panel). In previous studies, a similar redshift of the G mode was reported, and the frequency of the G mode at 0 K and the first-order temperature coefficient were extracted by a linear fit 33 . Softening of phonons at higher temperature is common for many crystals, owing to the lengthening of bonds caused by thermal expansion.
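The Stokes/anti-Stokes thermometry mentioned above can be sketched as follows (a minimal estimate that keeps only the Boltzmann factor and neglects the frequency-dependent prefactor, a small correction for such a low-energy mode; the numbers are illustrative):

```python
import math

CM1_TO_K = 1.43878  # hc/kB in K*cm: converts a phonon wavenumber (cm^-1) to kelvin

def local_temperature(omega_cm, ratio_as_over_s):
    """Temperature (K) from I_antiStokes / I_Stokes ~ exp(-hbar*omega / (kB*T))."""
    return CM1_TO_K * omega_cm / (-math.log(ratio_as_over_s))

# Round trip: a 32.6 cm^-1 mode at 300 K gives a ratio exp(-46.9/300) ~ 0.86,
# so a measured ratio of that size maps back to ~300 K.
ratio = math.exp(-CM1_TO_K * 32.6 / 300.0)
T_est = local_temperature(32.6, ratio)
```

Because the shear mode energy is only ~5 meV, the anti-Stokes peak remains strong even at 90 K, which is what makes this thermometer usable over the full 90-390 K range.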
However, graphene is exceptional, as it has a quite large negative thermal expansion, potentially leading to a blueshift instead, as has been well probed in previous studies [34][35] . Though SLG anchored on a substrate might be pinned down and follow the thermal expansion of the substrate, the shearing motion of the f4LG should be much freer. Thus, there must be an extra contribution to the overall softening of the shear mode with increasing temperature. The response of the phonon frequency to temperature is a very effective manifestation of anharmonicity. Two effects are usually responsible for the temperature-dependent line shift: anharmonic multi-phonon coupling and crystal thermal expansion 34 . We speculate that the anharmonic multi-phonon coupling should be the main reason for the softening of the shear mode phonons with increasing temperature. Following the previous strategy 34 , we fit our experimental data with a polynomial function, which carries the total effect of lattice thermal expansion and anharmonic phonon coupling. The very good agreement confirms our speculation. The frequency of the shear mode of BLG at 0 K is extrapolated to be ω(0) = 32.6 cm^-1, which is critical for many further investigations, for example probing the influence of phonon-phonon coupling on the linewidth of the phonon mode, as discussed below. For comparison, the linear fit is also shown; apparently the nonlinear one is much more suitable, as also employed in the previous study 35 . Now, we move to the linewidth. In a defect-free crystal, the intrinsic linewidth (γ_in) is defined by γ_in = γ_ph-ph + γ_e-ph, where γ_ph-ph represents the anharmonic phonon-phonon coupling and γ_e-ph arises from the electron-phonon interaction 36 .
For γ_ph-ph, a possible decay channel is that one shear-mode phonon splits into two acoustic phonons of the same energy and opposite momentum 25, as described by γ_ph-ph(T) = γ_ph-ph(0)[1 + 2n(ω0/2)], where γ_ph-ph(0) and ω0 are the linewidth caused by the anharmonic phonon-phonon coupling and the frequency of the shear mode at 0 K, respectively, and n(ω_T) = 1/[exp(ℏω_T/k_B T) - 1] is the phonon occupation number. For the EPC contribution, γ_e-ph(T) = γ_e-ph(0)[f(-ℏω/2k_B T) - f(ℏω/2k_B T)], where γ_e-ph(0) is the width resulting from the EPC at 0 K and f(x) = 1/[exp(x) + 1]. In this work, we fitted our data by considering both the anharmonic phonon decay (γ_ph-ph) and the EPC (γ_e-ph); a fairly good agreement could be achieved (Fig. 4c).

Methods

Sample preparation
Graphene layers were prepared by the mechanical cleavage of graphite and transferred onto a 285 nm SiO2/Si substrate. During the mechanical exfoliation process, some graphene flakes flipped over and folded themselves partially and accidentally. Such interesting folded graphene layers were located under an optical microscope. The number of layers of the unfolded part was further identified by white-light contrast spectra and Raman spectroscopy 39. The folding or rotational angles were determined by reading the R peak position [23][24] and double-checked against the geometrical morphologies visualized in the optical and Raman images 40.

Raman spectroscopy study
A WITec CRM200 Raman system with a low-wavenumber coupler, a 600 lines/mm grating, a piezocrystal-controlled scanning stage and a ×100 objective lens of NA = 0.95 was used for the Raman images. For Raman spectra of good spectral resolution, a 2400 lines/mm grating was used. The in-situ temperature-dependent Raman measurements were conducted in a Linkam thermal stage with a ×50 objective lens of NA = 0.55. All the Raman images and spectra were recorded under an excitation laser of 532 nm (E_laser = 2.33 eV).
To avoid laser-induced heating, the laser power was kept below 0.1 mW. 2+2 r-f4LG and 3+3 r-f6LG layers of different rotational angles were studied. The rotational angles were determined from the positions of the R peaks 1-2. Considering the spectral broadening of our system, ~0.9 cm-1, we further corrected the fitted linewidths and list them in the tables below. The interlayer coupling strength (α) can be obtained from the linear-chain model, ω_i = (1/(√2 πc)) √(α/μ) √(1 + cos(iπ/N)), with i = 1, ..., N-1 for N layers.

Fitting the low frequency shear mode by a single Lorentzian line shape peak. In a previous study 3, owing to quantum interference between the shear mode and a continuum of electronic transitions near the K point, a Breit-Wigner-Fano (BWF) line shape was observed for the weak shear modes. In this work, the shear modes are dramatically enhanced and can be fairly well fitted by a single Lorentzian peak. Supplementary Figure S1 | Raman spectra of the low frequency shear mode in 2+2 r-f4LG obtained at 90 K, 300 K and 390 K. Both Stokes and anti-Stokes peaks are well fitted by a single Lorentzian line shape peak.

Electronic band structure and density of states (DOS) of the 2+2 r-f4LG. In our previous study 4, we grouped 2+2 folded graphene layers into three types by reading the folding or rotational angle θ: θ small (<4 degrees), θ medium (around 11 degrees) and θ large (>20 degrees), for an excitation photon energy of 2.33 eV. Here, we plot the electronic band structure and density of states (DOS) of a 2+2 f4LG with a medium rotational angle of 11.2 degrees for 2.33 eV. The electronic structure was obtained by applying the methodology outlined in previous work on twisted-layer systems [5][6].

Shear modes in 6+6 r-f12LG. The enhancement of the shear modes of 6-layer graphene (6LG) is obtained in a 6+6 r-f12LG.
All the shear modes are resolved and show good agreement with the theoretical prediction based on a simple yet sufficient linear-chain model 3: ω_i = (1/(√2 πc)) √(α/μ) √(1 + cos(iπ/N)), i = 1, ..., N-1.

Estimation of sample local temperatures from the intensity ratio of the Stokes and anti-Stokes peaks of the shear mode. The sample local temperatures are estimated by reading the intensity ratio of the Stokes (I_S) and anti-Stokes (I_AS) lines as 7: I_S/I_AS = [(ω_L - ω)/(ω_L + ω)]^4 exp(ℏω/k_B T), where ω_L is the laser frequency, ω is the shear mode (C peak) frequency, k_B is the Boltzmann constant and T is the temperature.

cm-1 and 17.27 cm-1, respectively.

To further elucidate the origin of the linewidth, or the phonon lifetime, of the shear mode, we plot the contributions of γ_ph-ph and γ_e-ph in Fig. 4c. It is obvious that γ_e-ph is the more dominant, especially at low temperatures. We expect a substantial EPC-induced increase in the linewidth of the shear mode at cryogenic temperatures. Such a decrease of the phonon lifetime can be interpreted as follows: at very low temperature, the occupation of the conduction band near the Dirac point by thermally excited electrons is significantly suppressed; as a result, the creation of phonon-excited electron-hole pairs, and thus their interactions (EPC), are remarkably activated, leading to the broadening of the Raman peaks. The large contrast between the phonon energies of the shear and G modes explains why the G mode is much broader than the shear mode and why a large EPC of the G mode can be preserved even at high temperature 25.

Discussion
Together with previous intensive Raman scattering studies of the D, G and G' modes of graphene, our systematic study of the low frequency interlayer shear mode (C mode) of FLG complements fundamental Raman studies of carbon materials. The folding-induced VHS promotes a remarkable enhancement of the shear modes, as it does for the G mode.
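The Stokes/anti-Stokes thermometry used for Fig. 4b can be sketched as follows. This is a hedged sketch: it assumes the common fourth-power frequency prefactor (the exact expression of ref. 7 may differ) and a 532 nm excitation line, with a representative C-peak frequency of 31 cm-1.

```python
import numpy as np

# Hedged sketch of Stokes/anti-Stokes thermometry for the C peak.
# Assumes the common fourth-power frequency prefactor; a sketch, not the
# verbatim expression of ref. 7. Frequencies in cm^-1, temperatures in K.
CM1_TO_K = 1.4388            # hbar * c / k_B in kelvin per cm^-1
OMEGA_L = 1.0e7 / 532.0      # 532 nm excitation line in cm^-1

def ratio_at(T, omega=31.0):
    """Forward model: expected I_S / I_AS for a mode omega (cm^-1) at T (K)."""
    prefac = ((OMEGA_L - omega) / (OMEGA_L + omega)) ** 4
    return prefac * np.exp(omega * CM1_TO_K / T)

def temperature_from_ratio(r, omega=31.0):
    """Invert the measured I_S / I_AS ratio to the sample local temperature (K)."""
    prefac = ((OMEGA_L - omega) / (OMEGA_L + omega)) ** 4
    return omega * CM1_TO_K / np.log(r / prefac)

print(temperature_from_ratio(ratio_at(300.0)))  # round-trips to 300 K
```

Because ℏω/k_B is only ~45 K for the C peak, the ratio varies appreciably over the 90-390 K window probed here, making it a usable local thermometer.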
The in-plane two-fold degenerate symmetry, the anharmonicity and the EPC of the shear mode are thoroughly explored here through two-dimensional Raman mapping and polarization- and temperature-dependent Raman spectroscopy of this low frequency shear mode (~5 meV), which was previously far from accessible. Deeper understanding of the mechanical and electrical properties of FLG, and further development of practical applications, are expected to be achieved soon through investigations of the enhanced shear modes in stretched, electrically or molecularly doped folded FLG, and even under a magnetic field.

Acknowledgements
This work is supported by the Singapore National Research Foundation under NRF RF Award No. NRF-RF2010-07 and MOE Tier 2 MOE2012-T2-2-049. C.X.C. thanks Dr Jun Zhang for valuable discussions. T.Y. thanks Professor Antonio Helio Castro Neto for enlightening discussions. The authors are grateful to Dr Jeil Jung for his important help in sharing the electronic band structure and density of states of 2+2 folded graphene layers.

Author contributions
C.X.C. and T.Y. initiated the project and conceived and designed the experiments; C.X.C. performed the experiments; C.X.C. and T.Y. analysed the data, discussed the results and co-wrote the paper.

Figure captions

Figure 1 | Raman images and spectra of 2+2 f4LG. a, Optical image of folded BLG. The folding types are identified and labelled by their rotational angles. Raman intensity images of b, G mode; c, G' mode; Raman images of d, intensity; e, frequency; and f, width of the shear mode of BLG (C2). g, Raman spectra of low and intermediate frequency modes of BLG, θ medium 2+2 r-f4LG and θ large 2+2 f4LG. h, Raman spectra of low and intermediate frequency modes of 2+2 r-f4LG with different rotational angles, as determined by the R peak positions and indicated. (E_laser = 2.33 eV).

Figure 2 | Raman images and spectra of 3+3 r-f6LG (left panel) and 2+3 r-f5LG (right panel). a, Optical image of folded TLG.
The folding types are identified and labelled by their rotational angles. Raman intensity images of b, G mode; c, G' mode; d, lower frequency shear mode (C31); e, higher frequency shear mode (C32); and f, combinational mode (B+C31). g, Raman spectrum of low frequency modes of 3+3 r-f6LG with a rotational angle of 12.2°. Raman images of h, intensity of the G mode; i, intensity of the G' mode; j, width of the G' mode; k-m, Raman intensity images of shear modes in BLG (C2) and in TLG (C31 and C32); n, schematic diagram of shear modes in BLG and TLG. o, Raman spectrum of low and intermediate frequency modes of 2+3 r-f5LG. The shear modes of both BLG and TLG are enhanced together with the G mode. (E_laser = 2.33 eV).

Figure 3 | Polarization-dependent Raman spectra of 2+2 r-f4LG. Raman spectra of a, low frequency modes, and b, G mode when the angles between the polarizations of the incident and scattered lights are tuned to 0, 30, 60 and 90 degrees, as indicated. (E_laser = 2.33 eV).

Figure 4 | In-situ temperature-dependent Raman spectra of 2+2 r-f4LG. a, Temperature-dependent Raman spectra of the shear mode with both Stokes and anti-Stokes lines; b, sample local temperatures estimated from the intensity ratio of Stokes and anti-Stokes lines as a function of the thermal-chamber temperature; c, positions (upper panel) and full width at half maximum (FWHM) (lower panel) of the Stokes shear mode as a function of temperature. The solid spheres represent the experimental data. The straight blue line (upper panel) is the linear fit and the pink curve is the polynomial fit, which accounts for both thermal expansion and anharmonic multi-phonon interaction. The nonlinear fit is clearly superior to the linear one. The frequency of the shear mode at 0 K can be extrapolated. For the FWHM (lower panel), the green line is a fit of the data considering both the phonon-phonon (ph-ph) interaction and the electron-phonon coupling (EPC).
The purple and blue plots describe the contributions of the ph-ph interaction and the EPC separately. The dominance of the EPC, especially at low temperature, is clear. Note: the FWHM has been corrected by subtracting the broadening of our system from the fitted values. (E_laser = 2.33 eV).

of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore; 2 Department of Physics, Faculty of Science, National University of Singapore, 117542, Singapore; 3 Graphene Research Center, National University of Singapore, 117546, Singapore. *Address correspondence to [email protected]

1. Raman spectroscopic features of low (< 130 cm-1) frequency and intermediate (1400-1700 cm-1) frequency modes in resonant folded few-layer graphene (r-fFLG).

For the specific system of 2+2 f4LG, the band structures are represented in the Moire Brillouin zone and are plotted along the following symmetry lines. Supplementary Figure S2 | Electronic band structure and density of states (DOS) of the 2+2 r-f4LG with a rotational angle of 11.2 degrees. The band structure is represented along the straight lines connecting the symmetry points of the Moire Brillouin zone shown in the right panel.

In the linear-chain formula, μ is the SLG mass per unit area (in kg Å-2) and c is the speed of light in cm s-1.

Supplementary Figure S3 | Raman images and spectrum of 6+6 r-f12LG. a, Optical image of folded 6-layer graphene; Raman intensity images of b, G and c, G' bands; d, schematic diagram of the shear modes of 6LG, with the Raman-active (R) or IR-active (IR) character indicated; e, Raman spectrum of the shear, R and G modes of 6+6 r-f12LG. The IR-active modes becoming Raman-active might be due to the folding, as discussed in our previous study 4.
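The linear-chain prediction used for Table S3 can be sketched numerically. This is a hedged sketch: the interlayer shear coupling α below is an assumed, literature-scale value and μ = 7.6e-27 kg Å-2 is the commonly used SLG mass per unit area; the specific fitted values of this work are not reproduced here.

```python
import numpy as np

# Hedged sketch of the linear-chain model for interlayer shear modes.
# MU is the commonly used SLG mass per unit area (7.6e-27 kg/Angstrom^2);
# ALPHA is an assumed, literature-scale shear force constant. Both are
# illustrative inputs, not the fitted values of this work.
C_CM = 2.99792458e10     # speed of light in cm/s
MU = 7.6e-27 / 1.0e-20   # kg/Angstrom^2 -> kg/m^2
ALPHA = 12.8e18          # assumed interlayer shear coupling, N/m^3

def shear_modes(N, alpha=ALPHA, mu=MU):
    """Shear-mode frequencies (cm^-1) of N-layer graphene, branches i = 1..N-1:
    omega_i = (1 / (sqrt(2) * pi * c)) * sqrt(alpha / mu) * sqrt(1 + cos(i*pi/N))
    """
    i = np.arange(1, N)
    return (np.sqrt(alpha / mu) / (np.sqrt(2.0) * np.pi * C_CM)
            * np.sqrt(1.0 + np.cos(i * np.pi / N)))

print(shear_modes(2))  # BLG: one C mode, close to the observed ~31 cm^-1
print(shear_modes(6))  # 6LG: five branches, cf. C61-C65 in Table S3
```

For N = 2 this gives a single C mode near 31 cm-1, and the highest (i = 1) branch approaches √2 times the BLG value in the many-layer limit.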
Supplementary Table S3 | Positions (Stokes) and widths of the C61-C65 shear modes in 6+6 r-f12LG, together with the positions predicted by the linear-chain model.

Determination of the polarization dependence of shear modes and their combinational modes in r-f4LG. e_i and e_s are the unit vectors describing the polarizations of the incident and scattered lights, and R is the Raman tensor. In this work, the polarization of the incident light is fixed along the horizontal while the polarization of the scattered light is tuned to an angle (φ) from the horizontal by a polarizer. From the Raman tensors of the shear mode (with tensor elements c = d) and the breathing mode (tensor element a), the intensity of the shear mode is independent of φ, whereas the A1g (breathing) component varies as cos²φ, vanishing in the crossed (anti-parallel) configuration.

n(ω_T) is the phonon occupation number, where ℏω_T is the shear-mode phonon energy at temperature T and k_B is the Boltzmann constant 37. Rather than fixing the phonon energy, i.e. 196 meV in the previous study of the G mode 34, we substitute the individual phonon energy of the shear mode at the corresponding temperature, because the variation of the G phonon energy is only around 0.3% in our temperature window, whereas up to 8% is found for the shear-mode phonon energy. EPC also contributes, and can even become the dominant contribution to the linewidth, in gapless systems such as graphene, graphite and metallic carbon nanotubes 13,38. For γ_e-ph, it should follow the EPC expression given above.

Note: We noticed a similar work by Dr. Tan P. H. et al. from a conference after our submission.

39. Ni, Z. H. et al. Graphene thickness determination using reflection and contrast spectroscopy. Nano Lett 7, 2758-2763 (2007).
40. Ni, Z. H. et al. G-band Raman double resonance in twisted bilayer graphene: Evidence of band splitting and folding. Phys Rev B 80, 125404 (2009).

Additional information
Competing financial interests: The authors declare no competing financial interests.

References
Castro, E. V. et al.
Biased bilayer graphene: Semiconductor with a gap tunable by the electric field effect. Phys Rev Lett 99, 216802 (2007).
Kumar, A. et al. Integer Quantum Hall Effect in Trilayer Graphene. Phys Rev Lett 107, 126806 (2011).
Li, G. H. et al. Observation of Van Hove singularities in twisted graphene layers. Nat Phys 6, 109-113 (2010).
Kim, K. et al. Raman Spectroscopy Study of Rotated Double-Layer Graphene: Misorientation-Angle Dependence of Electronic Structure. Phys Rev Lett 108, 246103 (2012).
Cancado, L. G., Pimenta, M. A., Neves, B. R. A., Dantas, M. S. S. & Jorio, A. Influence of the atomic structure on the Raman spectra of graphite edges. Phys Rev Lett 93, 247401 (2004).
Ferrari, A. C. et al. Raman spectrum of graphene and graphene layers. Phys Rev Lett 97, 187401 (2006).
Ferrari, A. C. & Basko, D. M. Raman spectroscopy as a versatile tool for studying the properties of graphene. Nat Nanotechnol 8, 235-246 (2013).
Yu, T. et al. Raman mapping investigation of graphene on transparent flexible substrate: The strain effect. J Phys Chem C 112, 12602-12605 (2008).
Huang, M. Y. et al. Phonon softening and crystallographic orientation of strained graphene studied by Raman spectroscopy. Proc Natl Acad Sci USA 106, 7304-7308 (2009).
Mohiuddin, T. M. G. et al. Uniaxial strain in graphene by Raman spectroscopy: G peak splitting, Gruneisen parameters, and sample orientation. Phys Rev B 79, 205433 (2009).
Yoon, D., Son, Y. W. & Cheong, H. Strain-Dependent Splitting of the Double-Resonance Raman Scattering Band in Graphene. Phys Rev Lett 106, 155502 (2011).
Das, A. et al. Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor. Nat Nanotechnol 3, 210-215 (2008).
Yan, J., Zhang, Y. B., Kim, P. & Pinczuk, A. Electric field effect tuning of electron-phonon coupling in graphene. Phys Rev Lett 98, 166802 (2007).
Peimyoo, N., Yu, T., Shang, J. Z., Cong, C. X. & Yang, H. P. Thickness-dependent azobenzene doping in mono- and few-layer graphene. Carbon 50, 201-208 (2012).
You, Y. M., Ni, Z. H., Yu, T. & Shen, Z. X. Edge chirality determination of graphene by Raman spectroscopy. Appl Phys Lett 93, 163112 (2008).
Cong, C. X., Yu, T. & Wang, H. M. Raman Study on the G Mode of Graphene for Determination of Edge Orientation. ACS Nano 4, 3175-3180 (2010).
Lui, C. H. et al. Imaging Stacking Order in Few-Layer Graphene. Nano Lett 11, 164-169 (2011).
Cong, C. X. et al. Raman Characterization of ABA- and ABC-Stacked Trilayer Graphene. ACS Nano 5, 8760-8768 (2011).
Faugeras, C. et al. Probing the band structure of quadri-layer graphene with magneto-phonon resonance. New J Phys 14, 095007 (2012).
Qiu, C. Y. et al. Strong magnetophonon resonance induced triple G-mode splitting in graphene on graphite probed by micromagneto Raman spectroscopy. Phys Rev B 88, 165407 (2013).
Lui, C. H. et al. Observation of Layer-Breathing Mode Vibrations in Few-Layer Graphene through Combination Raman Scattering. Nano Lett 12, 5539-5544 (2012).
Cong, C. X., Yu, T., Saito, R., Dresselhaus, G. F. & Dresselhaus, M. S. Second-Order Overtone and Combination Raman Modes of Graphene Layers in the Range of 1690-2150 cm(-1). ACS Nano 5, 1600-1605 (2011).
Carozo, V. et al. Raman Signature of Graphene Superlattices. Nano Lett 11, 4527-4534 (2011).
Carozo, V. et al. Resonance effects on the Raman spectra of graphene superlattices. Phys Rev B 88, 085401 (2013).
Tan, P. H. et al. The shear mode of multilayer graphene. Nat Mater 11, 294-300 (2012).
Tsurumi, J., Saito, Y. & Verma, P. Evaluation of the interlayer interactions of few layers of graphene. Chem Phys Lett 557, 114-117 (2013).
Cong, C. X. & Yu, T. Evolution of Raman G and G' (2D) Modes in Folded Graphene Layers. Submitted (2013).
Bistritzer, R. & MacDonald, A. H. Moire bands in twisted double-layer graphene. Proc Natl Acad Sci USA 108, 12233-12237 (2011).
Jung, J., Raoux, A., Qiao, Z. H. & MacDonald, A. H. Ab-initio theory of moire bands in layered two-dimensional materials. (To appear.)
Lui, C. H. & Heinz, T. F. Measurement of layer breathing mode vibrations in few-layer graphene. Phys Rev B 87, 121404 (2013).
He, R. et al. Observation of Low Energy Raman Modes in Twisted Bilayer Graphene. Nano Lett 13, 3594-3601 (2013).
Kip, B. J. & Meier, R. J.
Determination of the Local Temperature at a Sample during Raman Experiments Using Stokes and Anti-Stokes Raman Bands. Appl Spectrosc 44, 707-711 (1990).
Calizo, I., Balandin, A. A., Bao, W., Miao, F. & Lau, C. N. Temperature dependence of the Raman spectra of graphene and graphene multilayers. Nano Lett 7, 2645-2649 (2007).
Bonini, N., Lazzeri, M., Marzari, N. & Mauri, F. Phonon anharmonicities in graphite and graphene. Phys Rev Lett 99, 176802 (2007).
Giura, P. et al. Temperature evolution of infrared- and Raman-active phonons in graphite. Phys Rev B 86, 121404 (2012).
Piscanec, S., Lazzeri, M., Mauri, F. & Ferrari, A. C. Optical phonons of graphene and nanotubes. Eur Phys J-Spec Top 148, 159-170 (2007).
Menéndez, J. & Cardona, M. Temperature dependence of the first-order Raman scattering by phonons in Si, Ge, and α-Sn: Anharmonic effects. Phys Rev B 29, 2051-2059 (1984).
Lazzeri, M., Piscanec, S., Mauri, F., Ferrari, A. C. & Robertson, J. Phonon linewidths and electron-phonon coupling in graphite and nanotubes. Phys Rev B 73, 155426 (2006).

Supplementary references
Carozo, V. et al. Raman Signature of Graphene Superlattices. Nano Lett 11, 4527-4534 (2011).
Carozo, V. et al. Resonance effects on the Raman spectra of graphene superlattices. Phys Rev B 88, 085401 (2013).
Tan, P. H. et al. The shear mode of multilayer graphene. Nat Mater 11, 294-300 (2012).
Cong, C. X. & Yu, T. Evolution of Raman G and G' (2D) Modes in Folded Graphene Layers. Submitted (2013).
Bistritzer, R. & MacDonald, A. H. Moire bands in twisted double-layer graphene. Proc Natl Acad Sci USA 108, 12233-12237 (2011).
Jung, J., Raoux, A., Qiao, Z. H. & MacDonald, A. H. Ab-initio theory of moire bands in layered two-dimensional materials. (To appear.)
Kip, B. J. & Meier, R. J. Determination of the Local Temperature at a Sample during Raman Experiments Using Stokes and Anti-Stokes Raman Bands. Appl Spectrosc 44, 707-711 (1990).
MaerimThailand", "Department of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK", "School of Physics & Astronomy\nUniversity of Leicester\nUniversity RoadLE1 7RHLeicesterUK", "Department of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK", "School of Physics & Astronomy\nUniversity of Leicester\nUniversity RoadLE1 7RHLeicesterUK", "Department of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK", "EPCC\nUniversity of Edinburgh\n47 PotterrowEH8 9BTBayes Centre, EdinburghUK", "Department of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK" ]
The Gravitational-wave Optical Transient Observer (GOTO) is an array of wide-field optical telescopes, designed to exploit new discoveries from the next generation of gravitational wave detectors (LIGO, Virgo, KAGRA), study rapidly evolving transients, and exploit multimessenger opportunities arising from neutrino and very high energy gamma-ray triggers. In addition to a rapid response mode, the array will also perform a sensitive, all-sky transient survey with few day cadence. The facility features a novel, modular design with multiple 40-cm wide-field reflectors on a single mount. In June 2017 the GOTO collaboration deployed the initial project prototype, with 4 telescope units, at the Roque de los Muchachos Observatory (ORM), La Palma, Canary Islands. Here we describe the deployment, commissioning, and performance of the prototype hardware, and discuss the impact of these findings on the final GOTO design. We also offer an initial assessment of the science prospects for the full GOTO facility that employs 32 telescope units across two sites.
DOI: 10.1093/mnras/stac013
arXiv: 2110.05539 (https://arxiv.org/pdf/2110.05539v1.pdf)
The Gravitational-wave Optical Transient Observer (GOTO): prototype performance and prospects for transient science

D. Steeghs★, D. K. Galloway, K. Ackley, M. J. Dyer, J. Lyman, K. Ulaczyk, R. Cutter, Y.-L. Mong, V. Dhillon, P. O'Brien, G. Ramsay, S. Poshyachinda, R. Kotak, L. K. Nuttall, E. Pallé, R. P. Breton, D. Pollacco, E. Thrane, S. Aukkaravittayapun, S. Awiphan, U. Burhanudin, P. Chote, A. Chrimes, E. Daw, C. Duffy, R. Eyles-Ferris, B. Gompertz, T. Heikkilä, P. Irawati, M. R. Kennedy, T. Killestein, H. Kuncarayakti, A. J. Levan, S. Littlefair, L. Makrygianni, T. Marsh, D. Mata-Sanchez, S. Mattila, J. Maund, J. McCormac, D. Mkrtichian, J. Mullaney, K. Noysena, M. Patel, E. Rol, U. Sawangwit, E. R. Stanway, R. Starling, P. Strøm, S. Tooke, R. West, D. J. White, K. Wiersema

★ Contact: [email protected]

MNRAS, accepted 2021 October 11; received October 8; in original form July 26. Preprint compiled 13 October 2021 using the MNRAS LaTeX style file v3.0.

Keywords: astronomical instrumentation: methods and techniques - telescopes - techniques: photometric - methods: observational - transients: neutron star mergers - gravitational waves
…stars, outbursts from accreting binaries, and also near-earth asteroids. Amongst the most productive of these surveys are the All-sky Automated Survey for Supernovae (ASAS-SN; Shappee et al. 2014), the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al. 2018a), the Catalina Real-time Transient Survey (CRTS; Drake et al. 2009), the Dark Energy Camera (DECam; Flaugher et al. 2015), the Evryscope (Law et al. 2015), Hyper Suprime-Cam (HSC; Aihara et al. 2018), Pan-STARRS1 (Chambers et al. 2016), SkyMapper (Keller et al. 2007), the Zwicky Transient Facility (ZTF; Bellm et al. 2019) and the upcoming BlackGEM array (Bloemen et al. 2015). We also anticipate the addition of the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory within the next few years (Ivezić et al. 2019).

Recent developments in wide-field all-sky optical surveys have been at least partly motivated by the increasing sensitivity of the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo detectors (LIGO Scientific Collaboration et al. 2015; Acernese et al. 2015). Due to their design, interferometric gravitational-wave (GW) instruments typically offer poor localisation accuracy compared to traditional (electromagnetic) astronomical instruments. For a reconstructed GW signal, the sky localisation error region encompassing all possible signal origins can span many hundreds of square degrees (e.g. Abbott et al. 2020a). The uncertainty arises primarily from the precision with which the signal arrival time delay can be measured, coupled with the relative signal strengths due to the different instrumental sensitivity patterns projected on the sky (Fairhurst 2009).
In order to maximise the chance of identifying an electromagnetic counterpart to a GW signal, follow-up instruments must promptly cover the maximum visible fraction of this sky region or, more accurately, the time-volume. This task is difficult for conventional optical telescopes, as their fields of view are usually measured in square arc minutes, requiring many individual pointings to cover the GW source localisation region. The use of alternative strategies, such as targeting individual galaxies within the region, which themselves can number in the hundreds to thousands, also brings additional challenges (e.g. Ducoin et al. 2020; Gehrels et al. 2016). For example, the GLADE catalog (Dálya et al. 2018) is complete only up to ∼37 Mpc and uses luminosity as a tracer for the mass and merger rate of BNS sources. Consequently, this strategy could result in missed events for those with large offsets from the host galaxy or those that originate in low-mass galaxies.

The Gravitational-wave Optical Transient Observer (GOTO 1 ) is an array of wide-field optical telescopes designed to efficiently survey the variable optical sky. It is specifically optimised for wide-field searches for electromagnetic counterparts to GW sources, complementing other search facilities and focusing on rapid identification of candidates. Although not necessarily a typical event, the first binary neutron-star (BNS) merger, GW170817, validated many of the key design parameters of GOTO. GW170817 was localised to within ∼28 square degrees of sky using LIGO and Virgo data (Abbott et al. 2017a, 2020a). The 16 mag optical counterpart was discovered within ∼11 hr of the GW event, followed by a lengthy multiwavelength campaign (e.g. Abbott et al. 2017d; Andreoni et al. 2017; Arcavi et al. 2017; Chornock et al. 2017; Coulter et al. 2017; Covino et al. 2017; Cowperthwaite et al. 2017; Drout et al. 2017; Evans et al. 2017; Kasliwal et al. 2017; Lipunov et al. 2017; Nicholl et al. 2017; Pian et al.
2017; Shappee et al. 2017; Troja et al. 2017; Utsumi et al. 2017; Valenti et al. 2017), and its host galaxy was NGC 4993 at a distance of ∼40 Mpc (LIGO Scientific Collaboration & Virgo Collaboration 2017; Levan et al. 2017; Hjorth et al. 2017). Subsequent observations led to an avalanche of extraordinary observational data on an entirely new class of astrophysical event, providing insight into the production of short gamma-ray bursts (Abbott et al. 2017c; Goldstein et al. 2017; Savchenko et al. 2017; Lyman et al. 2018), the origin of heavy elements (Pian et al. 2017; Smartt et al. 2017; Tanvir et al. 2017) and even a new route to measuring cosmological expansion (Abbott et al. 2017b; Cantiello et al. 2018).

However, this event represents only the beginning of a new era of multi-messenger astronomy, and great diversity is to be expected as GW rates increase. Much is still uncertain around the physics driving the EM emission of mergers involving neutron stars. The EM luminosities, distances and source localisation properties will vary strongly between events and across science runs. Many of the key questions are still to be answered, and this requires systematic efforts to identify and characterise these events. Early localisation is key, such that follow-up can unfold promptly. This need is the driving force behind the GOTO project.

In this paper we describe the design, deployment, commissioning, and performance of the GOTO prototype and look ahead towards the full deployment of the GOTO concept across two observing sites. In §2 we describe the principles informing the hardware design and specifications of the GOTO telescope system. In §3 we describe the implementation, including the telescope control system, image processing pipelines, and observation scheduler, and assess their performance.

1 https://goto-observatory.org
In §4 we describe the opportunities arising from survey and follow-up observations during the prototype commissioning, along with quantitative assessments of the instrument performance. Finally, in §5 we assess the future prospects for detections of transients, including the observational products of counterparts to binary neutron star inspirals.

2 GOTO PRINCIPLES

The GOTO concept was developed well before the first GW detections (White 2014). The focus was a dedicated rapid-response system, targeting the early localisation of GW sources. At the time this goal presented significant challenges; not only were the early source locations expected to be very poorly constrained at the time of GW detection, there was also significant theoretical uncertainty in the electromagnetic properties of such events, including their luminosities as a function of energy and their decay timescales, among others. There are different strategies that one can take, reflecting a different balance between sensitivity, sky coverage and cadence. Our key design principles were flexibility, scalability and cost-effectiveness, with the GOTO instrumental capabilities tuned to complement other facilities suited for deeper observations and spectroscopic coverage. We explored this parameter space of depth, area and cadence to find an optimal configuration.

The GOTO hardware design centres on using arrays of relatively modest aperture, wide-field optical telescopes, hereafter referred to as unit telescopes (UTs), in order to survey the sky regularly in anticipation of detections. This approach was inspired by the SuperWASP approach to planet transit searching (Pollacco et al. 2006), which in turn inspired projects such as ASAS-SN. There are two important factors for assessing the performance in this context, which define two distinct observing modes for the GOTO telescope system: "triggered" and "sky-survey" modes.
First, the instrument must be able to respond promptly to a GW detection, targeting the specific areas on the sky that are consistent with the localisation constraints as provided by the multi-detector GW network. In this response mode, hundreds to possibly thousands of square degrees need to be targeted, ideally with multiple visits, and fast enough to catch a short-lived source. Second, the instrument must be able to provide recent reference images (prior to the GW detection) with which to compare; these would be acquired in a continuous all-sky survey mode. Although the difference imaging technique is a well-established tool in the variable star and transient community (e.g. Alard & Lupton 1998; Alard 2000) to remove the static foreground of sources effectively, many other variable and transient sources unrelated to the GW detection can be expected at any given time. The longer the time gap between triggered follow-up observations and the most recent sky-survey epoch(s), the more interlopers can enter, and it becomes increasingly difficult to find the bona-fide object of interest. For this reason, one would want regular sky-survey epochs, so that sources known to be variable prior to the GW detection can be discarded. Of particular relevance are supernovae, which are luminous for weeks to months; over such large search areas significant numbers are visible at any given time.

The combination of these two modes ("triggered" and "sky-survey") means that a large field of view is desired; the larger the field of view, the faster both modes are able to be completed. As previously mentioned, the array approach offers a number of advantages. It allows the project to be scalable, with its capability set by the number of unit telescopes that can be deployed. An array also offers flexibility, as it can be deployed to maximise instantaneous field of view, depth at a more focused position, or provide different filters in individual telescopes.
It is cost-effective, as the cost is linearly coupled to capability, and the implementation allows a good number of unit telescopes to be deployed at a site. A key constraint in this is the availability of cost-effective detectors. High-end professional large-format CCDs would completely dominate the costs when employing large numbers of UTs, and would have complex cooling and electronics requirements. Our focus was instead on the much more affordable range of Kodak sensors, which offer exceptional price per pixel, albeit with a reduction in the quantum efficiency (QE) compared to high-grade devices. However, the cost reduction is so significant (an order of magnitude) that it is then possible to consider using a significant number of cameras in order to make up for the loss of efficiency of a single camera. These types of sensors also perform well at relatively warm temperatures and therefore do not require sophisticated cooling systems.

With the pixel's physical size dictated by the sensor market, we then evaluated the performance of modest aperture telescopes using such sensors. Bigger apertures obviously improve the sensitivity. To make the most of the sensitivity, the optical design would need to sensibly sample the sensor pixels, with smaller pixel scales reducing the impact of sky background, but also reducing the achieved field of view. It is also desirable to be able to cycle through different filters such that both searching and characterisation can be optimised. The final constraint was the ability to multiplex without requiring a separate mount for each telescope, to reduce the physical footprint, complexity and cost of the facility. We pursued custom heavy-duty robotic mount systems capable of holding 4-8 telescopes at a time. We simulated a number of possible compromises, ranging from very wide-field configurations with 20 cm aperture telescopes to more depth-focused options using fewer, larger telescopes.
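The aperture trade-off explored above can be illustrated with a toy, background-limited scaling argument (our assumption; this does not reproduce the collaboration's actual simulations): when sky noise dominates, point-source signal-to-noise grows roughly linearly with aperture diameter, so the limiting-magnitude gain between two apertures is about 2.5 log10(D2/D1).

```python
import math

def depth_gain_mag(d_small_m, d_large_m):
    """Limiting-magnitude gain from aperture d_small to d_large under a
    background-limited assumption (SNR roughly proportional to diameter).
    Toy scaling only -- ignores optics, QE and pixel-scale differences."""
    return 2.5 * math.log10(d_large_m / d_small_m)

# Moving from the 20 cm very-wide-field option to the adopted 40 cm units:
print(f"{depth_gain_mag(0.20, 0.40):.2f} mag deeper")  # ~0.75 mag
```

Under this crude scaling, the 40 cm units reach roughly three-quarters of a magnitude deeper per exposure than the 20 cm option, at the cost of a smaller field per pound spent on optics.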
It was found that 40 cm aperture unit telescopes were close to optimal, as this still allows us to multiplex the telescopes on a shared mount while offering a better depth/pixel-scale compromise than smaller telescopes. A fast optical design would be needed to maximise the field of view, but also allow for a filter wheel. Multiple arrays of 8 telescopes could then cover the entire visible sky to moderate depths every few days, while multiple sites would ensure full sky coverage in both hemispheres. We present the implementation in more detail in the next section. We denote the "prototype" as the 4 UT system (GOTO-4), the full-scale single-site system as GOTO-16, and the finalised full-scale dual-site observatory as GOTO-32.

(Fig. 1 caption: The 40 cm f/3 primary mirror has a hyperbolic surface and is supported by a mirror cell that allows for three-point collimation adjustment. An elliptical secondary directs the light towards the multi-lens corrector system that projects a collimated effective f/2.5 beam. The instrumentation is mounted off-axis with a tip-tilt stage, a robotic focuser, a 5-slot filter wheel and the camera enclosure. The structural support is provided by a carbon-fibre open truss arrangement.)

3 IMPLEMENTATION

3.1 Hardware

As motivated in the previous section, the design of the GOTO telescopes was first and foremost driven by the sensor. In particular, the KAF-50100 CCD sensor produced by ON Semiconductor offered a very affordable large-format sensor, with 8304×6220 pixels at a scale of 6 μm. The sensor was also offered in a convenient compact package by Finger Lakes Instrumentation (FLI) as part of their MicroLine range (ML50100). We provide more details on the detector performance in §3.3.1. In order to provide a sensible pixel scale, the prototype optical tube assemblies (OTAs) for the GOTO UTs were designed to offer an aperture of 40 cm at f/2.5 (Fig. 1).
This maps to 1.25 arcsec per pixel, small enough to control sky background yet critically sampling the point-spread function (PSF), and offering a field of view of ∼5 square degrees. To deliver a corrected field, the design deploys a set of corrector lenses in between the secondary mirror and the focal plane. As it was desirable to be able to deploy filters, the optical design is Newtonian, allowing for a traditional filter wheel at the Newtonian focus. In our case, we coupled a 5-slot FLI filter wheel (CFW9-5) to the FLI camera package. The initial set of filters were the Baader set, which offers three colour bands as well as a wide-band filter (see §3.3.3).

The first phase of the GOTO project involved the development and construction of a prototype telescope, with 4 UTs mounted on a custom robotic mount (see Fig. 2). The mount is a German equatorial design, and the unit telescopes are loaded symmetrically to keep the system balanced. The mount drive used a worm-wheel implementation in which the two axis motors transfer torque to the mount wheels via a worm gear. The gear is tensioned to push into the worm wheels but can decouple under overload for safety. The tension can be adjusted to find a balance between stiffness of the gear versus the ability to slew smoothly under load without overloading the motors. Encoders on the motors and high-resolution Renishaw encoders on the two axes permit accurate active dual-encoder mount position control. Steel boom arms protrude to both the East and the West side to accommodate the tubes, control electronics, control computers and balance weights. Each unit telescope is connected to the mount boom arm via an adjustable guidemount, which allows individual UTs to be rotated and tilted (±5 deg) so that the footprint of the combined array can be defined. In the prototype configuration the entire array covers 18.1 square degrees in a single pointing (see Fig. 3).
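The quoted plate scale follows directly from the sensor and optics numbers above. A quick check (206265 arcsec per radian is the standard conversion; note the ∼5 square degree figure in the text refers to the corrected, usable field, somewhat smaller than the naive full-frame footprint computed here):

```python
PIXEL_M  = 6e-6        # KAF-50100 pixel pitch
APERTURE = 0.40        # metres
F_RATIO  = 2.5         # effective beam at the focal plane
NX, NY   = 8304, 6220  # sensor format

focal_length = APERTURE * F_RATIO                 # 1.0 m
scale = 206265 * PIXEL_M / focal_length           # arcsec per pixel -> ~1.24
fov_deg = (NX * scale / 3600, NY * scale / 3600)  # ~2.9 x 2.1 degrees
area = fov_deg[0] * fov_deg[1]                    # ~6.1 sq deg full frame

print(f"{scale:.2f} arcsec/px, {area:.1f} sq deg (full frame)")
```

The computed ∼1.24 arcsec per pixel matches the quoted 1.25, and the ∼6 square degree full-frame footprint is consistent with a corrected field of ∼5 square degrees once vignetted edges are excluded.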
The fields of view of individual unit telescopes intentionally overlap, to provide a contiguous combined field of view which allows for effective tiling on the sky without gaps. The overlap regions also provide important cross-calibration checks for the pipeline. In principle the guidemount adjustment range is sufficient to allow all the unit telescopes to co-align, or to be arranged into more complex shapes, but the default arrangement allows a wider combined field of view and therefore prioritises sky coverage. Whilst the prototype phase only included 4 UTs (2 on either side), the mount was designed from the start to be able to hold 8 UTs. A complete mount array would produce a field of view of ∼35-40 square degrees, as shown in Fig. 3, comparable to the 47 square degree field of view of ZTF.

In order to deliver full sky coverage and a cadence of a few days, it was envisaged that four of these full 8-UT arrays could then be located across the globe at two sites in opposite hemispheres to achieve the targets outlined in Section 2. Spreading four 8-UT arrays over two sites (rather than four locations) alleviates the logistical and infrastructure challenges that come with setting up and operating at each location. This setup would result in an instantaneous field of view of up to ∼80 square degrees at each site, split across two mounts and, given a proper choice of sites, provide near 24-hour coverage for a fraction of the sky and coverage of all declinations.

The prototype telescope was deployed at the Roque de los Muchachos Observatory, La Palma, the intended home of the first GOTO site and a premier observing site in the Northern hemisphere. The GOTO site is operated by the University of Warwick on behalf of the GOTO consortium and was funded by the founding members. The system is housed in an Astrohaven 18 ft clamshell dome enclosure, offering panoramic access to the local sky down to 30 degrees altitude.
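The field-of-view bookkeeping behind the scaled-up configuration is simple arithmetic (overlaps mean the real combined footprints come out slightly below naive sums, which is why the quoted ranges sit a little above these figures):

```python
PROTOTYPE_FOV = 18.1          # sq deg, 4 UTs including overlaps
per_ut = PROTOTYPE_FOV / 4    # ~4.5 sq deg effective per unit telescope

full_mount = 8 * per_ut       # ~36 sq deg, within the quoted 35-40 range
per_site = 2 * full_mount     # two 8-UT mounts per site -> ~72 sq deg,
                              # consistent with "up to ~80 square degrees"
print(round(full_mount, 1), round(per_site, 1))
```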
Additional customisations were added to the dome to facilitate secure robotic operations, including extra sensors, in-dome cameras and sirens. The key goal of the GOTO prototype was to demonstrate the viability of the design choices before scaling the project up with additional telescopes. We also wanted to deploy it early enough to ensure that the prototype could pursue actual GW searches during the advanced LIGO-Virgo observing runs. The prototype achieved first light in June 2017, followed by its official inauguration in July 2017. A summary of the key specifications is provided in Table 1.

3.2 Software

The GOTO software was developed in-house and is divided into multiple components, each of which is described in the sections below. Almost all of the GOTO software was written in Python and makes use of Python-based packages.

3.2.1 Robotic telescope control

GOTO operates using a custom control system, G-TeCS (the GOTO Telescope Control System; Dyer et al. 2018). G-TeCS is written in Python and is based on the code developed for pt5m (Hardy et al. 2015). The primary software programs within G-TeCS are a series of daemons: background processes that monitor and provide an interface to their hardware units. The daemons interact using the Python Remote Objects (Pyro) module; each daemon is a Pyro server, which allows communications between processes and daemons across the local network. Figure 4 shows a schematic view of the G-TeCS software architecture.

There are six primary hardware daemons, each named after the category of hardware they control: the camera, filter wheel, focuser, dome, mount and power daemons. These are run on the primary control computer located within a rack in the GOTO dome. Due to GOTO's array design the unit telescope hardware (the cameras, focusers and filter wheels attached to each UT) are connected in pairs to interface computers mounted on the boom arm.
Each category of hardware is then controlled in parallel by its respective daemon running on the primary control computer. A seventh hardware daemon, the exposure queue daemon, processes sets of exposures and handles timing between the camera and filter wheel daemons, allowing sets of exposures to be observed in sequence and ensuring that the correct filters are set before each begins. Three additional support daemons run on a central server alongside the primary observation database, located on La Palma in the neighbouring SuperWASP telescope enclosure. The sentinel daemon processes incoming transient alerts and adds targets to the database, which are then processed and sorted by the scheduler daemon to determine the highest-priority target to observe at the given time (see §3.2.2). In addition, the conditions daemon collects and processes data from the on-site weather stations in order to determine if it is currently safe to open the dome.

To enable GOTO to function as a fully robotic telescope, the daemons are issued commands by the pilot control program, which acts in place of an on-site human operator. The pilot is an asynchronous Python script that runs through a series of tasks every night: powering up the system in the late afternoon, taking bias and dark images, opening the dome after sunset, taking flat fields and focusing the telescopes, observing targets provided by the scheduler daemon throughout the night, taking flat fields again in the morning twilight, and finally closing the dome and shutting down the system at sunrise. Throughout the night the pilot monitors the local weather conditions reported by the conditions daemon, as well as the status of the telescope hardware. If the conditions are reported as bad then the dome will close and the pilot will pause until they are clear.
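The daemon pattern described in this section can be sketched generically. This is an illustrative stand-in, not G-TeCS code: the class and method names are invented, and the real daemons communicate over the network via Pyro rather than an in-process queue.

```python
import queue
import threading
import time

class HardwareDaemon:
    """Toy hardware daemon: a background thread drains a command queue and
    keeps a status dict current (hypothetical names, not the G-TeCS API)."""

    def __init__(self, name):
        self.name = name
        self.status = {"last_command": None}
        self._commands = queue.Queue()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def send(self, command):
        self._commands.put(command)

    def _run(self):
        while not self._stop.is_set():
            try:
                cmd = self._commands.get(timeout=0.05)
            except queue.Empty:
                continue
            # A real daemon would drive cameras/focusers/filter wheels here.
            self.status["last_command"] = cmd

    def shutdown(self):
        self._stop.set()
        self._thread.join()

cam = HardwareDaemon("camera")
cam.start()
cam.send("expose 60s")
for _ in range(100):            # wait until the command has been handled
    if cam.status["last_command"]:
        break
    time.sleep(0.02)
cam.shutdown()
print(cam.status["last_command"])
```

The value of this structure is that a supervisor (the pilot, in GOTO's case) only ever talks to daemon interfaces, never to hardware directly, so individual components can be restarted or swapped without stopping the night's operations.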
If a problem with the hardware is detected then the pilot will run through a series of pre-defined recovery commands in order to try and repair the system; if these fix the problem then the pilot will resume observations, but if the error persists then the pilot will issue an alert before shutting down. During the night the pilot sends messages to a dedicated channel on Slack (https://slack.com), a messaging application workspace, both regularly scheduled reports (a weather report in the evening, a list of observed targets in the morning) as well as alerts for any errors that might require human intervention. The control system can also be switched over to manual mode if desired, pausing the pilot and allowing a remote observer control of the telescope. The G-TeCS architecture has been designed to be modular and the overall system is easily expandable. For instance, adding the second set of four unit telescopes to the prototype only requires new interface daemons, which are then integrated into the existing system. In the future as more GOTO telescopes are commissioned each array will be controlled by an independent pilot, which will receive targets from a single central scheduler. This will allow a rapid, coordinated response to any transient alerts.

Observation scheduling

As a survey telescope, GOTO observes target fields aligned to a fixed all-sky grid, to ensure consistently-aligned frames for difference imaging. For the GOTO-4 prototype this grid is formed of tiles with a size of 3.7 degrees in the right ascension direction and 4.9 degrees in the declination direction, combining the field of view of all four cameras into a single 18.1 square degree field with some overlap between the neighbouring cameras (as shown in Fig. 3). The all-sky grid is defined by dividing the sky into a series of equally spaced 18.1 square degree tiles; 2913 tiles in total cover the entire celestial sphere.
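A fixed all-sky grid of this kind can be approximated with a simple declination-band tiling. The sketch below is not GOTO's exact tiling algorithm, just one straightforward way to divide the sphere into tiles of roughly the quoted size; it yields a tile count of the same order as the 2913 quoted above.

```python
import math

TILE_RA, TILE_DEC = 3.7, 4.9   # degrees, from the GOTO-4 prototype tile size

def grid_tiles(tile_ra=TILE_RA, tile_dec=TILE_DEC):
    """Illustrative all-sky grid: equally spaced declination bands, each
    band filled with RA tiles widened by 1/cos(dec) so that tiles keep
    roughly constant on-sky width towards the poles."""
    tiles = []
    n_bands = math.ceil(180 / tile_dec)
    for i in range(n_bands):
        dec = -90 + (i + 0.5) * 180 / n_bands          # band centre
        width = tile_ra / max(math.cos(math.radians(dec)), 1e-6)
        n_ra = max(1, math.ceil(360 / width))
        tiles += [(j * 360 / n_ra, dec) for j in range(n_ra)]
    return tiles

tiles = grid_tiles()
# A few thousand tiles, the same order of magnitude as the 2913 quoted:
print(len(tiles))
```

Note that 3.7 x 4.9 degrees is ~18.1 square degrees, matching the quoted combined field of view (before accounting for camera overlaps).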
Just over 700 tiles are visible at any one time when considering the local horizon, and approximately 76 per cent of the celestial sphere is visible over the course of a year from GOTO's site on La Palma (see Fig. 10 in § 3.3.5). The G-TeCS sentinel daemon contains the system alert listener, which monitors the NASA GCN (Gamma-ray Coordinates Network; Barthelmy et al. 1998) stream for relevant astrophysical events. During the prototype phase GOTO-4 responded to gravitational-wave alerts from the LIGO-Virgo Collaboration (LVC; see § 4.1), as well as gamma-ray burst (GRB) events from the Fermi Gamma-ray Burst Monitor (GBM; Meegan et al. 2009) and Swift Burst Alert Telescope (BAT; Krimm et al. 2013, see § 4.2). When one of these alerts is received by the sentinel, the skymap containing the localisation region is mapped onto the predefined all-sky grid, in order to find the contained probability within each tile. These tiles are then inserted into the observation database in order of probability until the entire 90 per cent localisation region has been covered. In order to determine which of the targets in the database to observe, the scheduler daemon first applies several observing constraints on the queue of pending pointings using the astroplan Python module (Morris et al. 2018). The constraints include checking the target's altitude above the local artificial horizon, the distance of the target from the Moon and the current lunar phase (targets can be limited to bright, grey or dark time). Once invalid pointings have been filtered from the queue, those remaining are sorted by the rank defined when they were inserted into the database. Gravitational-wave follow-up pointings rank higher than those from GRB alerts, and both are always higher than normal survey pointings.
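The tile-selection step described above, inserting tiles in order of contained probability until the 90 per cent localisation region is covered, amounts to a greedy selection. A minimal sketch of that idea (not the production sentinel code):

```python
def tiles_for_skymap(tile_probs, credible=0.90):
    """Greedy tile selection: sort tiles by contained skymap probability
    and keep adding them until the requested credible region is covered.

    `tile_probs` maps tile id -> probability contained within that tile.
    Returns the selected tiles and the total probability they contain.
    """
    selected, total = [], 0.0
    for tile, prob in sorted(tile_probs.items(), key=lambda kv: -kv[1]):
        if total >= credible:
            break
        selected.append(tile)
        total += prob
    return selected, total

sel, cov = tiles_for_skymap({"T1": 0.5, "T2": 0.3, "T3": 0.15, "T4": 0.05})
print(sel)   # -> ['T1', 'T2', 'T3']  (0.95 >= 0.90 after three tiles)
```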
For events with large skymaps spanning multiple tiles (such as almost all gravitational-wave detections so far) the pointings that are yet to be observed are prioritised over repeat visits of previously-observed tiles, ensuring that the visible localisation region is covered rapidly. For any pointings that are still equally ranked a tiebreak parameter is constructed based on the skymap localisation probability contained within the tile and the current airmass of the target, to prioritise both covering the high-probability regions of the skymap and data quality. The resulting target with the highest priority is returned to the pilot to observe. This ranking system functions as a "just-in-time" scheduler: the pilot queries the scheduler every 10 seconds, the scheduler then recalculates the pointings queue and returns the pointing that is currently the highest priority. This results in a system that is very quick to react to transient alerts, as new targets added to the database are automatically sorted to the top of the queue, and GOTO has been able to begin observations of new events within 30 seconds of the alert being received by the sentinel (see § 4).

Image processing

No significant image processing is performed on La Palma. For each observation, images from each camera are saved as individual frames by the G-TeCS camera daemon using the FITS (Flexible Image Transport System) format and are then compressed and transferred to a data centre based on the campus of Warwick University (Coventry, UK). A dedicated level-2 VLAN fibre connection was set up for this purpose, providing a secure 1 Gb connection between the observatory and the campus. This connection provides ample bandwidth to transfer images while the next set is being exposed, and should allow real-time processing even when the envisaged full site of unit telescopes is exposing in parallel.
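The rank-then-tiebreak ordering described above can be sketched as a sort key: higher rank wins, unobserved tiles beat repeat visits, and remaining ties are broken by a combination of tile probability and airmass. The field names and the equal 0.5/0.5 weighting below are illustrative assumptions, not the production scheduler's.

```python
def pointing_key(p):
    """Sort key sketch for the scheduler queue: higher rank first,
    unobserved before repeats, then a tiebreak favouring high skymap
    probability and low airmass (weights are illustrative)."""
    tiebreak = 0.5 * p["tile_prob"] + 0.5 * (1.0 / p["airmass"])
    return (-p["rank"], p["times_observed"], -tiebreak)

queue = [
    {"name": "survey",    "rank": 1, "times_observed": 3,
     "tile_prob": 0.00, "airmass": 1.1},
    {"name": "gw_tile_a", "rank": 9, "times_observed": 0,
     "tile_prob": 0.04, "airmass": 1.5},
    {"name": "gw_tile_b", "rank": 9, "times_observed": 0,
     "tile_prob": 0.10, "airmass": 1.5},
]
best = min(queue, key=pointing_key)
print(best["name"])   # -> gw_tile_b
```

Recomputing this ordering on every query is what makes the "just-in-time" behaviour possible: a newly inserted high-rank pointing wins the very next query.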
A watcher script in the data centre monitors the arrival of new data files and adds them to the queue for processing with the prototype GOTO data reduction pipeline (Fig. 5). The data centre hardware is a dedicated stack of high-performance server nodes, some dedicated to NAS storage, others serving as database servers, and a group of identical compute nodes for processing. The stack is on a local 10 Gb interconnect throughout and also links to other campus subnets at 10 Gb. The data-flow is designed to allow real-time data processing with low latency. The initial stages perform standard CCD bias, dark, and flat-field corrections for each science frame. The corrections are performed using calibration files built from deep stacks of frames taken across multiple nights, a more robust and reliable method than using nightly stacks. A source detection pass using SExtractor (Bertin & Arnouts 1996) is then made, identifying the locations of sources in the frame and performing preliminary instrumental photometry. An initial astrometric solution is then found using astrometry.net (Lang et al. 2010) with their pre-built Gaia indices. The fitting process uses the telescope pointing as a starting point to search in right ascension and declination, and fixes the pixel scale to that of each telescope. Although the fast optics suffer from significant distortions across the field of view, the large number of point sources available in each frame offer good constraints for the astrometry. The quality of this initial solution is then checked, and the higher-order terms further refined if necessary. This refinement uses our principal reference catalogue, ATLAS-REFCAT2 (Tonry et al. 2018b), for cross-matching.
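The standard bias, dark, and flat-field correction described above is, per pixel, a subtraction of the bias and scaled dark current followed by division by the normalised flat. A toy sketch (plain lists stand in for full frames; all values are synthetic):

```python
def calibrate(raw, bias, dark_rate, flat, exptime):
    """Standard CCD reduction applied per pixel: subtract bias, subtract
    dark current scaled by exposure time, divide by the normalised flat."""
    return [(r - b - d * exptime) / f
            for r, b, d, f in zip(raw, bias, dark_rate, flat)]

# Toy 4-pixel "frame":
reduced = calibrate(raw=[1100.0, 1210.0, 1000.0, 1310.0],
                    bias=[100.0] * 4,
                    dark_rate=[0.0] * 4,
                    flat=[1.0, 1.1, 0.9, 1.1],
                    exptime=60.0)
print([round(x) for x in reduced])   # -> [1000, 1009, 1000, 1100]
```

Using calibration frames built from deep multi-night stacks, as the text notes, simply means the `bias`, `dark_rate` and `flat` inputs are averaged over many frames, reducing the noise they inject into each science frame.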
A custom Python package is used to iteratively refine the SIP (Simple Imaging Polynomial) distortion parameters of the WCS (World Coordinate System) solution, improving the sky-to-frame coordinate transformation by updating the linear and polynomial coefficients sequentially to ensure stable convergence. More robust quality flags are computed using the reference catalogue, applying information about the local quality of the astrometric solution to the source tables. The quality flag is a combination of bit values indicating whether parameters such as the astrometric residuals or the mean full width at half maximum of the stellar profiles are significantly greater than the expected values. These flags take binary values up to 128, with the most severe defects attracting higher values. After refitting, the typical astrometric RMS noise in each frame is ∼0.6 arcsec (or less than half of the detector's pixel scale). The cross-matched reference catalogue is then used to calibrate the initial instrumental photometry found earlier. Kron apertures (Kron 1980) are used for measurements of all sources in the frames, with a typical baseline calibration uncertainty of 0.03 mag. After the above processing, an individual science frame is considered finished. Further stages of the data-flow rely on small stacks of these individual frames, which form exposure sets. The scheduler almost exclusively employs an observing strategy where multiple exposures are obtained at each pointing to increase the S/N of each set; a set is typically 3-4 exposures, each 30-90 seconds long. A processing queue is aware of the assignment of individual frames to a given set using header cards denoting the total number of frames to be included in the set, and the position of the current frame in that set. Once a set has had all its individual frames processed, they are aligned and median-combined.
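The bitmask quality flag described above, binary values combined up to 128 with worse defects attracting higher values, can be sketched as follows. The specific bit names and thresholds are illustrative assumptions, not GOTO's actual flag definitions.

```python
# Illustrative bit values for frame-quality defects (hypothetical names):
FLAG_FWHM_HIGH   = 1     # stellar profiles broader than expected
FLAG_ASTROM_POOR = 8     # astrometric residuals above tolerance
FLAG_SEVERE      = 128   # most severe defect, highest bit as in the text

def quality_flag(fwhm, fwhm_max, astrom_rms, rms_max):
    """Combine defect indicators into a single bitmask flag."""
    flag = 0
    if fwhm > fwhm_max:
        flag |= FLAG_FWHM_HIGH
    if astrom_rms > rms_max:
        flag |= FLAG_ASTROM_POOR
    if astrom_rms > 5 * rms_max:
        flag |= FLAG_SEVERE
    return flag

print(quality_flag(fwhm=4.0, fwhm_max=3.0, astrom_rms=0.4, rms_max=0.6))  # -> 1
print(quality_flag(fwhm=2.5, fwhm_max=3.0, astrom_rms=4.0, rms_max=0.6))  # -> 136
```

Because each defect occupies one bit, a downstream consumer can test for any individual defect with a bitwise AND, regardless of what else is flagged.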
Given the typically small alignments required between frames, the alignment procedure is a simple translation of the frames, fixing rotation, scale and higher-order terms. Combination is done via a relatively naïve scaled-median approach. After this, the stack is sent through the same source detection, astrometry and photometry routines as were applied to the individual frames (the custom astrometry tools are available at https://github.com/GOTO-OBS/goto-astromtools), producing the final science image for the pointing. We note that various phenomena can cause an abrupt end to a set of exposures, e.g. weather or a target-of-opportunity override. In these cases the pipeline has a default wait period, of order an hour, after which it considers a set finished, regardless of whether the expected number of exposures matches those that were processed, and the partial stack is sent forward for processing as above.

Template images

Science frames undergo additional standard difference-imaging processing as a means to identify variable and new objects in the fields of view. A 'template bank' of observations of tile pointings is maintained, generated from historical visits to a given tile and using the best quality frame available (determined from a combination of PSF characteristics and limiting magnitude of the frame). This template bank is searched for a suitable template frame from which to subtract a given set science frame, principally matching on the UT, filter and coordinates on sky. Given the distortions across the fields of view, and the consequential requirement for significant alignment, including arcminute-scale translations, rotation and high-order transformation terms, we employ our own customised alignment routine. Briefly, the routine finds an initial affine transformation between two matching "quads" (Lang et al. 2010) of stars between sets of frames and fits a smooth 2D spline surface to the x- and y-pixel residuals between cross-matched sources.
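The translate-then-median-combine step for exposure sets can be sketched in one dimension (the real stacking works on 2D frames and includes the scaling omitted here for brevity; all names and values are illustrative):

```python
import statistics

def shift(frame, dx):
    """1-D stand-in for simple translational alignment: shift a row of
    pixels by an integer offset, padding with zeros."""
    n = len(frame)
    out = [0.0] * n
    for i, v in enumerate(frame):
        if 0 <= i + dx < n:
            out[i + dx] = v
    return out

def median_stack(frames, offsets):
    """Align each frame by its measured offset, then take the per-pixel
    median (a naive median combine, without the scaling step)."""
    aligned = [shift(f, dx) for f, dx in zip(frames, offsets)]
    return [statistics.median(col) for col in zip(*aligned)]

stack = median_stack(
    frames=[[0, 10, 0, 0], [10, 0, 0, 0], [0, 10, 0, 9]],
    offsets=[0, 1, 0],
)
print(stack)   # -> [0, 10, 0, 0]
```

Note how the median rejects the single-frame outlier (the 9 in the last frame), which is the main reason a median rather than a mean is used when combining a handful of exposures.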
The 2D spline surface is applied to the final transformation and robustly handles non-homogeneous coordinate mapping, aligning the science and template frames to within sub-pixel accuracy. The aligned template frame is subtracted from the science frame using HOTPANTS (Becker 2015) to produce a difference frame. Finally, this difference frame is then passed through SExtractor to identify sources (see §3.2.6).

Database

The valuable metadata, including photometry, for a processed science frame is stored in the header and various FITS table extensions of its file. However, for ease and speed of access, this data is also stored in a Postgres database (DB) held on a dedicated server node. Since at the core of most queries is some reliance on sky coordinates (whether searching for images covering a particular location, or cross-matching photometry to create light curves), indexes are generated for ra- and dec-like columns using the Q3C Postgres extension (Koposov & Bartunov 2006). The ATLAS-REFCAT2 is also stored as a Q3C-indexed Postgres table, which is queried as part of the data-flow (§3.2.3). Performance of the DB is heavily optimised by Postgres and makes use of the sizeable cache available from the 128 GB memory on the current DB server. As such, query speeds can be variable, but, as an example, returning sources in a typical GOTO UT field of view (∼10^4-10^5 rows) from the total ∼10^9 sources in the ATLAS-REFCAT2 catalogue takes less than a few seconds, and substantially less than one second if a similar query has been performed recently (which is often the case when processing frames from exposure sets taken at the same sky position).

Figure 5. Data processing flow. For a set of three exposures 12 individual frames are produced: a stack of three from each of the four UTs. Each stack is processed in parallel, first with each raw frame being calibrated to produce a reduced frame.
These reduced frames are stacked as a group, where astrometry, source extraction and photometry are performed. The entire photometric catalogues are stored in the photometry database and added to the FITS image. The median-stacked frame is matched with a stacked template image from the template database and subtracted using HOTPANTS. Once subtracted, a list of transient candidates is sent out to be vetted. The vetting process in its final stage is manual, where contextual information about the source is provided as well as the classification score from the real-bogus classifier. If any candidates have passed all vetting stages, then they are sent to other follow-up facilities for further characterisation.

Transient & Variable Source identification

In order to identify transient and variable sources, difference imaging is employed (§3.2.3). Such difference imaging does not provide a clean representation of the new or varying sources in the field alone. Since the image subtraction algorithm must handle varying levels of image depth and PSF shapes, subtraction residuals are almost entirely unavoidable (Alard & Lupton 1998; Alard 2000; Zackay et al. 2016; Masci et al. 2017). These residuals often appear as valid source detections to most algorithms (including SExtractor, used here), and they generally far outnumber any astrophysically real detections in the difference frame. In order to elucidate the objects of interest, various methods involving machine learning have been pioneered to calculate probabilistic scores for the detections. These scores are often described on a scale of "real" to "bogus", giving rise to the "real-bogus" name to describe such models. The models can then be used to filter out image-level contaminants, such as spurious residuals and related CCD artefacts, in the difference frames (Brink et al. 2013; Wright et al. 2015; Duev et al. 2019).
The early version of the GOTO data-flow employed a Random Forest (RF) model which closely matched the one presented in Bloom et al. (2012). However, the significant optical distortions of the UTs meant that the difference images were particularly challenging for the model in most cases. The lack of historical GOTO data also made training the supervised model difficult, as it had to rely on fake source injections to produce sufficient "real" sources; this in turn made it hard to properly characterise the model's performance. To overcome this we generated a much improved model, using instead a convolutional neural network (CNN) to analyse the pixel-level data (in contrast to extracting human-selected "features" of the detections, as is required for the RF approach), and harvested very large samples of "real" and "bogus" sources from actual data, with novel augmentation techniques to improve the recovery of various types of transients across a whole variety of observing conditions. A preliminary version of this approach was implemented in July 2020 and resulted in drastically improved recovery of transients when compared to external streams (such as spectroscopically-confirmed Transient Name Server objects). For a fixed false positive rate of 1 per cent, the newly-implemented classifier achieved a 1.5 per cent false negative rate on a held-out test set, and reached a ∼97 per cent recovery rate when evaluated on a benchmark dataset of real observations of confirmed transients. The CNN model and the automated data-generation techniques are described fully in Killestein et al. (2021). Once difference frame sources have been scored, they are presented to end-users via a web interface, the "Marshall" (a screenshot of which is shown in Figure 6). The GOTO Marshall is powered by the Flask web framework, utilising its own Postgres DB backend, and exploits Celery to manage its internal tasks.
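The false positive and false negative rates quoted for the classifier can be computed from held-out scores as below. The score lists and the 0.8 threshold are synthetic examples, not GOTO's evaluation data.

```python
def fpr_fnr(real_scores, bogus_scores, threshold):
    """False positive rate (bogus detections passed as real) and false
    negative rate (real detections rejected) at a real-bogus threshold."""
    fp = sum(s >= threshold for s in bogus_scores)
    fn = sum(s < threshold for s in real_scores)
    return fp / len(bogus_scores), fn / len(real_scores)

fpr, fnr = fpr_fnr(real_scores=[0.99, 0.95, 0.40, 0.88],
                   bogus_scores=[0.05, 0.90, 0.10, 0.20],
                   threshold=0.8)
print(fpr, fnr)   # -> 0.25 0.25
```

In practice the threshold is tuned by fixing one rate (here, a 1 per cent false positive rate) and reporting the other, which is how the 1.5 per cent false negative figure above is defined.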
At regular intervals a task scrapes the candidate table of the GOTO database for new rows that pass a threshold on the classifier's real-bogus score. The ingestion of a new entry generates a cascade of tasks to aid end-users in their decision on the scientific merit of a source, such as generating image stamps and light curve plots, performing contextual checks on the source by cross-matching with astrophysical catalogues (through the catsHTM interface; Soumagnac & Ofek 2018), and checking against minor planet ephemerides.

Performance

The GOTO prototype was deployed towards the end of the second LIGO-Virgo observing run (O2). The key goal was to confirm the viability of both the design and the implementation, so that a full facility could be built in time for the later observing runs. In the period between O2 and the start of the third observing run (O3) in early 2019, the prototype mount and unit telescopes were commissioned and tested, and upgrades were developed to improve the system performance and reliability. The telescope control, scheduling, image processing and source detection software was also developed during this period, to create a fully automated system from the point a transient alert is received to the potential sources appearing in the GOTO Marshall.

Figure 6. An example screenshot from the GOTO Marshall web interface. Shown is a list of source tickets providing at-a-glance information for each new source that passes preliminary cuts on the real-bogus score. Links within the ticket can take the user to pages showing more information on the source and its photometry. Users are also able to comment and provide additional classification for the sources, as well as assign them to their own (or shared) "watchlists".

Table 2. Zeropoint calibration performance for the GOTO-4 prototype system under dark lunar conditions.
The airmass-corrected calibration is completed against the APASS survey for each frame and the performance is calculated against the expected theoretical magnitudes. The zeropoint performance is measured as 10^((ZP − ZP_model)/2.5). The expected 5σ limiting magnitudes are given for t = 60 s observations under dark (D), grey (G), and bright (B) conditions.

Detectors

Each GOTO unit telescope is equipped with a 50 megapixel FLI MicroLine camera (see section 3.1). The physical properties of the detectors are given in Table 1, and other parameters were measured prior to the cameras being shipped to La Palma for commissioning (for details see Dyer 2020). The gain, readout noise and fixed-pattern noise for each camera were measured using the photon transfer curve method (Janesick 2001); each camera has a gain of between 0.53 and 0.63 e−/ADU, with a typical readout noise of 12 e− and a fixed-pattern noise of 0.4 per cent of full-well capacity. By taking a series of long, dark exposures the dark current noise was measured to be less than 0.002 e−/s for each camera. The cameras also each have a non-linearity of less than 0.2 per cent over their dynamic range, aside from when taking very short exposures or when close to saturation.

Optics

We measured the image quality of the GOTO-4 prototype using the full-width at half-maximum (FWHM) of all stellar sources across the field with airmass less than 1.2, using data from across March 2018. Under ideal observing conditions, the typical PSF at the centre of the frame was determined to have FWHM ∼ 2.5 arcsec. Due to the inherently wide FoV, the PSF may show significant deviations (on average up to ∼64 per cent) between the centre of the frame and the edges. We found that the FWHMs in the different filter bands are largely similar, with average FWHM values at the centre of the frame of ∼2.5−3.0 arcsec. The PSF performance was somewhat worse than expected (1.8−2.5 arcsec theoretical performance), in particular towards the field edges.
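The photon transfer curve method mentioned above can be reduced to its simplest form: for a shot-noise-limited CCD, the variance in ADU² equals the mean signal in ADU divided by the gain, so fitting variance against mean gives 1/gain as the slope. A minimal sketch with synthetic numbers (the real method also separates read noise and fixed-pattern noise, which are ignored here):

```python
def gain_from_photon_transfer(signal_means, signal_vars):
    """Simplest photon-transfer gain estimate: least-squares slope of
    variance vs. mean through the origin; gain (e-/ADU) = 1/slope."""
    num = sum(m * v for m, v in zip(signal_means, signal_vars))
    den = sum(m * m for m in signal_means)
    slope = num / den        # ADU of variance per ADU of signal = 1/gain
    return 1.0 / slope       # e-/ADU

# Synthetic measurements consistent with a gain of 0.56 e-/ADU:
g = gain_from_photon_transfer([1000, 2000, 4000], [1785.7, 3571.4, 7142.9])
print(round(g, 2))   # -> 0.56
```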
However, we will see in the next sub-sections that it still allowed the prototype to deliver the necessary sensitivity and depth. Extensive tests were performed on the PSF behaviour and a number of issues were identified that contributed to this; some optics and tube hardware upgrades were installed to mitigate them, and these issues informed the design of the next generation of tubes to be used in the full facility (see §5). Key factors were the stability of the primary mirror cell, the alignment of the corrector optics, and mount jitter. We measured the vignetting across each instrument using the flat-field frames and found that the typical flux values deviate by ∼10 per cent between the centre of the frame and the edges. The centre of the vignetting pattern is located approximately on the central pixels, which suggests that the cameras were centred close to the line-of-sight of the optical axis. Additionally, we determined the amount of scattered light by analysing the large-scale deviations of the flat field during dark and bright time. We found that the difference between the dark and bright conditions increases the overall background of the flat field by a factor of 2. The addition of cloths and baffling to the OTAs marked a significant improvement over the original design. Prior to this, the scattered light showed non-trivial structural gradients, which were subsequently removed by the additional baffling and allowed for more relaxed moon constraints.

[Figure 8 caption fragment: The grey hashed area shows the throughput of the system without a filter.]

Sensitivity and Zero-point calibration

The magnitude zeropoints were calibrated against the AAVSO Photometric All-Sky Survey (APASS; http://www.aavso.org/apass). The APASS survey is an all-sky photometric survey conducted in eight filters: Johnson B and V (in Vega magnitudes) and six Sloan filters (in AB magnitudes). Each GOTO frame was calibrated against the photometry from a set of reference filters from the APASS survey.
The first cross-match is performed with HTM using a cone search to identify the relevant HDF5 APASS file. For each frame, all unsaturated (m > 14) sources were spatially cross-matched to neutral-colour (colour index between −0.5 and 1) APASS sources via a KDSphere cross-match. The reference filters were chosen based on the maximum integrated overlap area between the GOTO and the Johnson/Sloan filter response curves (Figure 7), with each GOTO filter calibrated against the APASS band it overlaps most. The broad GOTO filter essentially spans several Sloan and Johnson bands; however, since these zeropoints are intended to demonstrate the headline performance of a prototype, we provide nominal zeropoints based on a single reference band. A throughput model of a GOTO unit telescope was constructed in order to determine the overall system efficiency. This model, shown in Fig. 8, includes the reflectivity of the primary and secondary mirrors, the transmission of the three lenses in the Wynne corrector and the glass window in front of the camera (collectively combined into the OTA throughput in Fig. 8), the QE of the CCD sensors, and the bandpass of each filter. Using the Astrolib PySynphot package (Lim et al. 2015), theoretical zeropoints were calculated by passing the flux profile of a zero-magnitude star through the complete throughput model. Based on the calibrated zeropoint magnitudes, the 5σ limiting magnitudes that GOTO-4 was able to achieve are shown in Fig. 9. For a standard 60 second exposure a limiting magnitude of 19.8 was predicted, which matches exactly the typical observed limit of 19.8 during dark time, with 19.56 on average over all lunar phases. The modelled 5σ limiting magnitudes are given in Table 2 for all filters for a single UT under dark, grey, and bright time, and for all UTs in the wide-band filter under dark, grey, and bright time.
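A background-limited 5σ limiting magnitude of the kind quoted above can be sketched from a zeropoint and a sky level. All inputs below are hypothetical, and read noise and dark current are ignored for brevity; this is a simplified estimate, not the actual GOTO sensitivity model.

```python
import math

def limiting_mag(zeropoint, sky_e_per_pix, npix, exptime, nsigma=5):
    """Simplified point-source limit: the faintest flux giving
    S/N = nsigma against sky noise alone, converted to a magnitude
    using the frame zeropoint (mag for 1 e-/s)."""
    noise_e = math.sqrt(sky_e_per_pix * npix * exptime)   # sky photon noise
    flux_e_per_s = nsigma * noise_e / exptime             # 5-sigma flux
    return zeropoint - 2.5 * math.log10(flux_e_per_s)

# Hypothetical numbers: ZP = 23.5, sky 50 e-/pix/s, 20-pixel aperture, 60 s:
m_lim = limiting_mag(23.5, 50.0, 20, 60.0)
print(round(m_lim, 2))   # -> 20.23
```

The sketch shows the expected scaling: the limit deepens by 2.5 log10(sqrt(t)) with exposure time and brightens as the sky background (e.g. lunar phase) increases, which is why Table 2 quotes dark, grey, and bright values separately.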
The quoted performance is measured as 10^((ZP − ZP_model)/2.5). Under all conditions, the calibrated zeropoints match reasonably well to theoretical expectations, despite the lower than expected performance characteristics of the PSF.

Mount pointing & tracking

The pointing accuracy of GOTO was complicated by the array design, with each UT being affected by flexure in the mount, the boom arm and the guide mounts holding each OTA (see section 3.1). The pointing accuracy is typically 2-5' but can be worse than 10' in declination, depending on the elevation. This, however, is still a small fraction of GOTO's large field of view, and future mount upgrades should reduce this further. For similar reasons, the tracking could drift up to 1'/hour depending on the unit telescope. As GOTO typically only uses exposure times of 60 or 120 seconds, and only stays on each target for less than 5 minutes, this is rarely a major issue. Of more concern was the sensitivity to wind load, as wind gusts can induce significant tracking errors. The prototype was particularly vulnerable to wind shake due to the exposed clamshell dome. Even under lower wind loads, the mount jitter contributed to the overall image PSF. The worm-wheel design means that the motor torque is transferred via a belt and worm gear, which cannot be overly stiff.

Sky coverage

The complete GOTO-4 prototype began to take regular observations in the evening of 21 February 2019, and covered the entire LIGO-Virgo O3 period from 1 April 2019 to its suspension on 23 March 2020. Afterwards, GOTO continued regularly observing until the morning of 1 August 2020, when the prototype was shut down in order to upgrade it to a full 8-UT array (see section 5).
Between 21 February 2019 and 1 August 2020, GOTO observed at least one target on 430 out of 527 nights (81.6 per cent), with the other nights lost to bad weather, technical work and the 53 days between 14 March and 6 May 2020 when the observatory was closed due to the COVID-19 pandemic. During this time GOTO observed 45,315 individual pointings, of which the vast majority (45,299, or 99.96 per cent) were aligned to the all-sky grid (see 3.2.2). The coverage of the on-grid pointings is shown in Figure 10. Of the 2913 tiles in the all-sky grid, 2207 (75.8 per cent) were observed at least once, with the remaining 706 being below the horizon visible from La Palma. The median number of observations per tile was 20. Two tiles were observed more than 100 times and are highlighted in Figure 10.

Response time

The scheduling system described in section 3.2.2 allows GOTO to respond rapidly to transient alerts, and for events that occur during a clear night on La Palma GOTO can be observing within minutes. Of the eight gravitational-wave alerts to occur during clear nights in the commissioning period, observations of all but one began within 60 seconds of the alert being received. The shortest time between an alert being received by the G-TeCS sentinel and the exposures beginning was 28 seconds (for gravitational-wave event S190521g), and most of this delay was the unavoidable time spent slewing the telescope to the new target (see section 4.1). Similar response times were recorded during GOTO's follow-up to GRB alerts. Under clear conditions and without any extraneous observational issues causing delay, the shortest times between receiving the GRB alert and starting the exposures were 55 seconds (for Swift trigger 959431) and 2.3 minutes (for Fermi GBM trigger 573604668) (Mong et al. 2021). Once images are taken they are automatically transferred to the Warwick data centre and processed as described in section 3.2.3.
The typical latencies for the data transfer from La Palma to the Warwick data centre are ∼10 seconds. Single frames are processed within ∼3-5 minutes, and coadded and stacked sets of science frames within ∼10-12 minutes of the final exposure. The mean time from the mid-point of the exposure to a candidate entry being uploaded to the GOTO Marshall was 30 minutes, across all sources detected in 2020. This value excludes any delays of more than 2 hours, which are more likely due to network down-time or other disruption; without excluding those cases the mean delay was 47 minutes. The aim of future pipeline development is to reduce this delay to 10-15 minutes, which includes improvements to the latencies and efficiencies of database ingestion.

Photometric and astrometric accuracy

Long-term stability and accuracy of the photometric and astrometric measurements is key for high-quality data products. An assessment of the photometric and astrometric accuracy has been detailed in Mullaney et al. (2021) and Makrygianni et al. (2021) in the context of exploring the compatibility of next-generation real-time pipelines, i.e. the LSST stack, with GOTO-4 data. The observations were obtained by GOTO during regular survey mode between 24 February 2019 and 31 July 2019 and cover the region between 02h < RA < 20h and −20° < Dec < +90°, specifically avoiding the densest regions of the Galactic plane. The LSST-stack-measured astrometry and photometry was compared to matched sources from Pan-STARRS DR1, and it was found that the measured source positions were accurate to 0.27±0.20 arcsec, and the photometry was accurate to ∼50 mmag at ∼16 mag and ∼200 mmag at ∼18 mag. These values compare favourably to those obtained using the GOTO pipeline (Mullaney et al. 2021). Repeatability tests were also conducted on the tiles with the greatest frequency of visits.
It was found that the photometric precision is typically within 1−2 mmag for sources brighter than ∼16 mag, within ∼3−6 mmag for sources between ∼16 and ∼18 mag, and within 0.2 mag RMS of the Pan-STARRS photometry for sources fainter than ∼18 mag (Makrygianni et al. 2021). Further improvements are expected as we transition to a new data flow with more robust photometric calibrations and source flux determinations.

EXAMPLE SCIENCE OPPORTUNITIES

In this section, we present some example science opportunities with results obtained during the commissioning phase of the GOTO-4 prototype on La Palma.

Gravitational-wave triggers

While the prototype GOTO-4 instrument was undergoing commissioning, the prioritisation was on targeting every GW trigger (regardless of source type) and the creation of a set of good-quality reference stacks for candidate counterpart identification. During the first half of the LVC O3 observing run (O3a; April-September 2019), the prototype GOTO-4 followed up 32 LVC GW triggers (including 3 retractions; see Gompertz et al. 2020 for a full summary). As noted in §3.3.6, GOTO-4 can be on target within less than a minute from alert. The GW alert response time varied between 28 seconds and 29.8 hours, with an average of 8.79 hours. This large latency in the response time is mainly attributed to observational constraints, including the delay between the GW alert and the sky area becoming accessible from La Palma, and weather conditions at the site. In addition to rapid response capacity, GOTO also provides a unique set of wide-field capabilities, even with just 4 unit telescopes. This was particularly evident during the follow-up to GW190425 (Abbott et al. 2020b). The LVC alert was distributed during the La Palma day roughly 42 minutes after the GW event. The initial classification (Singer et al. 2014; Singer 2015) was a BNS merger at a distance of 155 ± 45 Mpc.
The 90 per cent credible region covered 10,183 square degrees (LVC 2019a), with 71.1 per cent observable from La Palma. GOTO-4 began observations nearly half a day after trigger, imaging ∼2,134 square degrees (or 29.6 per cent of the skymap) during the first night. Shortly thereafter, the LVC probability map was updated using LALInference (Aasi et al. 2013; Veitch et al. 2015). While the distance and classification were largely unchanged, the new 90 per cent credible region was smaller (down to 7,461 square degrees; LVC 2019b), with much of the probability shifted to the unobservable southern sky (Figure 11). GOTO-4 continued to observe the remaining 38.1 per cent over the next two nights. Over the three-night campaign, GOTO-4 imaged 2,667 square degrees, which included 37 per cent of the initial and 22 per cent of the final skymap. Although no counterpart was discovered, GOTO-4 was able to constrain the non-detection of an AT2017gfo-like kilonova out to 227 Mpc, or 6 per cent exclusion of the total volume of the LVC probability map (Gompertz et al. 2020). Over the course of O3a, a mean of 732 square degrees were tiled per campaign, up to a maximum of 2,667 square degrees. GOTO-4 covered up to 94.4 per cent of the total LVC localisation probability, or 99.1 per cent of the observable probability. Of particular note is the inclusion of GOTO's data as part of an aggregate analysis of the follow-up to GW190814 (Abbott et al. 2020c), the first potential neutron star-black hole merger detected in GWs (Ackley et al. 2020), though it is now thought to be more likely a binary black hole merger (Abbott et al. 2020c). Given that the full GOTO facility will feature 8× the number of telescopes compared to the prototype, observations of GW sources will be a key strength of the facility.

Gamma-ray bursts

In the absence of any prioritised GW trigger to follow up, GOTO also participated in rapid follow-up of gamma-ray burst (GRB) triggers.
Between 26 February 2019 and 07 June 2020, GOTO-4 observed 77 Fermi-GBM and 29 Swift-BAT burst alerts. GRBs were observed on a case-by-case basis to test different strategies and features of the observatory, and as such do not constitute a representative sample. However, taken as a group they can provide insight into the impact that GOTO can make in the explosive transients field. Further details on the overall performance of the GOTO-4 follow-up of GRB triggers can be found in Mong et al. (2021). During this time frame GOTO-4 detected four optical GRB counterparts, including the counterpart to GRB 190202A, which was detected at ∼19 mag at ∼2.2 h after the trigger time (Steeghs et al. 2019), and the counterpart to GRB 180914B, detected ∼2.15 days post-trigger at ∼20 mag (Ramsay et al. 2018). The observation response times for all GRBs ranged from 55 s to 69.3 h after the GCN had been received by the G-TeCS sentinel. Although a number of factors can determine the latency, observational constraints such as sky location and source rise time are the leading contributors to the measured latency, rather than any significant instrumental delays. Another notable example of GOTO's niche in this field was the response to GRB 171205A. The observed photometric data points (Steeghs et al. 2017) complemented the other multiwavelength datasets, which altogether describe a GRB that shows compelling evidence for the emergence of a cocoon (Izzo et al. 2019).

Figure 11. GOTO observations of GW190425 (Abbott et al. 2020b), shown on the initial probability map (top) and final map (bottom). Blue squares represent individual pointings (tiles), and the orange shading shows probability density. Much of the probability initially resided near the well-covered northern crescent, but later shifted to the southern region during LVC re-analysis after the first observing night. The grey shaded areas were not observable from La Palma in the first three days after the event detection.
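As a rough guide to how quickly such counterparts fade, a simple power-law afterglow model (F ∝ t^−α) can translate an early detection into an expected magnitude at later epochs. The decay index used below is a hypothetical illustrative value, not a fit to the GRBs discussed here:

```python
import math

def extrapolate_afterglow_mag(m_ref, t_ref_hr, t_hr, alpha):
    """Expected magnitude at time t_hr for a power-law afterglow
    F ∝ t^-alpha, given a reference detection (m_ref at t_ref_hr):
    m = m_ref + 2.5 * alpha * log10(t / t_ref)."""
    return m_ref + 2.5 * alpha * math.log10(t_hr / t_ref_hr)

# Illustrative only: a ~19 mag detection at 2.2 h, assuming alpha = 1.0,
# fades to ~22.4 mag by ~2.15 days (51.6 h) post-trigger.
m_late = extrapolate_afterglow_mag(19.0, 2.2, 2.15 * 24.0, 1.0)
print(round(m_late, 1))
```

Since typical optical afterglows decay with α ∼ 1-1.5, this toy scaling underlines why the minutes-scale response times described above matter for recovering GRB counterparts.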
Accreting Binaries

Accreting compact binaries are a well-established class of highly variable objects. Cataclysmic variables (CVs) are stellar binaries in which a white dwarf (WD) accretes matter from a nearby donor star. Within the compact accreting binaries family, CVs are far more abundant than their more massive counterparts known as X-ray binaries (XRBs), which harbour either a neutron star or a black hole. CVs are split into many subtypes depending on their average accretion rate, the magnetic field strength of the WD, the composition of the companion star, or the general behaviour of their light curve. Monitoring of 8 AM CVn systems with GOTO-4, combined with long-term historical data sets, revealed that a subset of AM CVn systems show diverse behaviours and that even within subclasses they may not be a homogeneous group (Duffy et al. 2021). A common feature of a CV light curve is an increase in luminosity by several magnitudes within a few days as the accretion disc undergoes a thermal instability (Osaki 1974). Figure 12 shows an example of one of these so-called dwarf nova outbursts for a newly discovered CV (GOTO2019bryr / AT2019fun) observed by GOTO-4, where we have combined the median-stacked GOTO L-band data with photometry from ZTF, via the Lasair broker (Smith et al. 2019). This system underwent several rebrightening epochs, which highlighted the need to monitor the long-term evolution of this kind of outburst. The all-sky survey mode of GOTO will also be ideal for discovering new XRBs entering outburst, albeit at a lower detection rate than for CVs due to population sizes. GOTO will also excel at identifying low-level variations in the light curves of accreting binaries, which are likely due to small changes in the accretion rates of these systems. It has already contributed to the confirmation of a change in the accretion rate of the magnetic cataclysmic variable FO Aquarii (Kennedy et al. 2019).
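Outbursts of this kind can be flagged automatically in survey photometry by comparing each epoch against a robust quiescent baseline. The sketch below is illustrative only; the 2-magnitude threshold is a hypothetical choice, not the criterion used in the GOTO pipeline:

```python
import statistics

def find_outburst_epochs(times, mags, threshold_mag=2.0):
    """Return epochs where a source is at least `threshold_mag` brighter
    (i.e. numerically smaller magnitude) than its median level.
    The median is a robust baseline when outbursts are short-lived."""
    baseline = statistics.median(mags)
    return [t for t, m in zip(times, mags) if baseline - m >= threshold_mag]

# Synthetic light curve: quiescence near 20.5 mag with a ~4 mag outburst.
times = list(range(10))
mags = [20.5, 20.4, 20.6, 16.5, 16.8, 17.2, 18.9, 20.3, 20.5, 20.4]
print(find_outburst_epochs(times, mags))  # epochs 3-5 exceed the threshold
```

In practice a per-source quiescent magnitude from reference stacks would replace the running median, but the thresholding logic is the same.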
Transients, Variables, and Moving Objects

While many similarly-poised facilities undertake routine wide-area surveys, differences in cadence and depth determine the expected rates of variable and transient sources. However, a general expectation of the transient rate can be empirically estimated based on the site location, instrument hardware, survey sensitivity and cadence, among other considerations (Bellm 2016; we adapted the described package, https://github.com/ebellm/VolumetricSurveySpeed, for our analysis). Once GOTO moves into full operational mode, the entire available sky is expected to be covered every 3 days. To calculate the estimated rate of transient sources with GOTO-16, we assumed a 3-day cadence, that sources have been detected at least twice, and that they have shown a decay rate of 1 magnitude over timescales between 1 hr and 100 days. The event rate per year covers the phase space shown in Fig. 13. We gridded over possible combinations of peak absolute magnitudes and transient decay timescales which may represent generic transients, and used the intrinsic Type Ia supernova rate of 3.0 × 10⁻⁵ Mpc⁻³ yr⁻¹, the default setting of the package (Bellm 2016; LSST Science Collaboration et al. 2009). The left edge boundary is an arbitrary cutoff bounded by a transient decay timescale of 1 hour to decline by 1 magnitude. The lower edge boundary is set by the lower bound of events per year for the illustration, or 10⁻⁴ yr⁻¹.

Figure 13. The event rate per year for a general transient that will be probed by a full GOTO-16 site as a function of peak absolute magnitude and decay timescale. For purposes of illustration we assume a 3-day cadence with a requirement of 2 consecutive detections. We grid over decay timescales ranging between 1 hour and 100 days per magnitude and over peak absolute magnitudes between −4 and −27. We set a lower bound of events per year of 10⁻⁴ yr⁻¹. This figure has been created using the package described in Bellm (2016).

There are a multitude of transient and variable astrophysical phenomena observable in the optical band. Transient events such as supernovae, flare stars, luminous red novae, dwarf nova outbursts, tidal disruption events, and kilonovae, and variable events such as RR Lyrae, transits, eclipsing, rotating, and microlensing events, Active Galactic Nuclei (AGN) and BL Lac objects, will all be routinely observed with GOTO in significant numbers. As a simple example to estimate the expected transient rates for a GOTO-16 system, we used the rates for typical Type Ia supernovae. We assumed 9 hours of observing per night down to a limiting magnitude of 19.8 (a coverage of 11,520 square degrees per night). Given the peak absolute magnitude of a Type Ia SN of M = −19, a decay timescale of ∼50 days, and a volumetric rate of 3.0 × 10⁻⁵ Mpc⁻³ yr⁻¹ (LSST Science Collaboration et al. 2009), we find ∼1596 events per year. Whilst not a core science goal, data from GOTO's general all-sky survey will uncover both known and unknown moving objects. The observing strategy of taking sets of 3-4 frames at each position will permit a direct search for objects moving on a short timescale. More rapidly moving objects will appear as apparent orphan transients and, whilst being interlopers for some of the other science goals, there are excellent prospects for GOTO to contribute data concerning both new and poorly constrained moving objects. Initial effort on the detection of such moving objects made use of the CoLiTec software (Savanevych et al. 2018), which permits a semi-automatic search in parallel with the main pipeline. During commissioning, GOTO observed the near-Earth Apollo asteroid (3200) Phaethon. Quasi-simultaneous observations made with the Torino Polarimeter (Pernechele et al.
2012) mounted on the Omicron (west) telescope of the C2PU facility at the Calern observing station of the Observatoire de la Côte d'Azur obtained time-resolved imaging polarimetry, which was used to probe the variation in surface mineralogy (Borisov et al. 2018).

Serendipitous Discoveries

During GOTO's long-term systematic survey campaigns or dedicated follow-up activities, it is also possible to make serendipitous discoveries. Novel and interesting serendipitous astronomical events must be filtered out amongst a variety of other sources that appear simultaneously in the images, such as transient impostors (Cowperthwaite et al. 2018; Pastorello & Fraser 2019; Almualla et al. 2021), or optical or instrumental contaminants, e.g. ghosts, cosmic rays, and spurious noise. Identifying the novelty of a source becomes particularly important when working under rapid identification timeframes for GW counterpart searches. While the identification of candidate transient events can often be informed by contextual information (such as whether there are previous non-detections of the source, or whether it is isolated, near a galaxy, or within the Galactic plane), at first glance they can appear to be legitimate transient events. While searching for the counterpart to the BNS GW candidate S190901ap (LIGO Scientific Collaboration & Virgo Collaboration 2019), a serendipitous transient candidate was discovered. S190901ap was reported as a possible BNS out to an estimated distance of 241 ± 79 Mpc and was localised to an extremely large 90 per cent credible region of 14,753 square degrees. GOTO began observing the field 6.7 minutes post GW trigger time and followed the observing strategy for distant BNS sources (Gompertz et al. 2020). Within the error region, a source of interest was identified with a detection magnitude of 19.08 and a previous non-detection only 2.5 hours prior to the GW trigger time down to a limiting magnitude of 20.
The candidate was given the internal designation GOTO2019hope and the Astronomical Transient (AT) designation SN 2019pjv. Within the field of view of the candidate were two possible host galaxies, MCG+05-41-001 and LEDA 1826843, separated by 46.92 and 64.31 arcsec and located at distances z = 0.0227 (d = 98.5 Mpc) and z = 0.0707 (d = 313.8 Mpc), respectively. Ultimately, spectroscopic follow-up of the source by GRAWITA on the Copernico 1.82m telescope, later confirmed by the Nordic Optical Telescope (NOT), reported that the source best matches a Type Ia-91T-like SN about one week before maximum light at z = 0.024 (Nascimbeni et al. 2019; Kankare et al. 2019), effectively ruling out the source as a transient associated with S190901ap. While the source was ultimately deemed unrelated, it was a viable source for the GOTO-4 prototype to monitor long-term, as a test field for photometric and astrometric accuracy monitoring as well as testing of the "realbogus" model's stability (Section 3.2.6). Fig. 14 shows the rise, peak and decline of this Type Ia-91T SN. Among the serendipitous sources that GOTO will regularly observe, it is often meaningful to target early follow-up, in order to ascertain the relevance of these discoveries and whether they warrant immediate follow-up. One example was GOTO21cl/SN 2021fqb, as shown in Figure 15. This source was picked up during the routine patrol survey and is coincident (at 9.27 arcsec) with a luminous host galaxy at redshift z = 0.0490 in the GLADE catalog. A spectrum of this source was taken with SPRAT (Spectrograph for the Rapid Acquisition of Transients; Piascik et al. 2014), which showed spectral features consistent with a young Type Ia supernova. A key goal of the GOTO dataflow is to make the time delay between first detections and initial follow-up as short as possible by minimising dependencies on human vetting and flagging.
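As a sanity check on the transient-rate estimate above, a purely geometric upper bound on the Type Ia rate can be computed from the numbers stated in the text (limiting magnitude 19.8, M = −19, 11,520 square degrees per night), ignoring the cadence, decay and two-detection requirements that the Bellm (2016) package folds in:

```python
import math

def snia_rate_upper_bound(m_lim=19.8, M_peak=-19.0,
                          area_deg2=11520.0, rate_per_mpc3_yr=3.0e-5):
    """Upper bound on detectable Type Ia SNe per year: the volumetric
    rate times the volume inside the horizon distance set by the
    distance modulus, over the sky area covered per night."""
    # Horizon distance in Mpc from mu = m - M = 5 log10(d / 10 pc)
    d_mpc = 10 ** ((m_lim - M_peak + 5.0) / 5.0) / 1.0e6
    sky_fraction = area_deg2 / 41253.0       # full sky is ~41,253 deg^2
    volume_mpc3 = (4.0 / 3.0) * math.pi * d_mpc ** 3 * sky_fraction
    return rate_per_mpc3_yr * volume_mpc3

# Horizon ~575 Mpc and an upper bound of ~6700 events/yr, comfortably
# above the ~1596/yr found once cadence and detection cuts are applied.
print(round(snia_rate_upper_bound()))
```

The factor of ∼4 between the bound and the quoted ∼1596 yr⁻¹ reflects the losses from the 3-day cadence, the two-detection requirement, and light-curve decay.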
CONCLUSIONS AND FUTURE PROSPECTS

The main purpose of the GOTO-4 prototype was to implement and further develop the concept of an array of medium-sized telescopes on shared mounts. The science focus is wide-field time domain astronomy in the context of gravitational-wave searches and other rapidly evolving objects. Whilst not without challenges, the performance of the GOTO-4 prototype instrument has clearly demonstrated the ability of the adopted design to meet the science goals.

Lessons learned with the prototype

Important lessons have been learned. First of all, the fast optics combined with a large sensor with small pixels place high demands on the UT implementation. The pixel scale is close to critical sampling to maximise field of view, and collimation and field correction need to be tightly controlled. Our prototype tubes highlighted the need for a stable primary mirror cell to control image quality stability at the edges of the field of view. Furthermore, scattered light can be an issue given the location of the corrector optics. For these reasons, the final GOTO UTs will feature closed carbon-fibre tubes with top-end baffles as well as a more advanced primary mirror cell. The second lesson concerns the mount, which has to carry a heavy load as well as handle a large moment of inertia, with tubes mounted far from the mount axes. Our prototype mount used a worm-wheel design in an effort to be more robust against imbalance. But in this design a small amount of mechanical slop in the various mechanisms that connect the mount motors to the axes meant a sensitivity to wind shake. The extensive footprint of the 8 tubes under a full load, together with an open clamshell-type enclosure, further adds to this. Thus, for the final GOTO mount systems, a heavy-duty direct-drive system will be used to mitigate this. The prototype instrument relied on the wide-band filter over the majority of the survey operations.
This was a deliberate choice, as it allowed for broadband response to transients without relying on colour information. However, as is made apparent in the throughput model of Fig. 8, there is a non-negligible sensitivity to the redder wavelengths. Future improvements to the instrument may include a custom wide-band red filter to enable coverage between ∼7000-8500 Å. Finally, the prototype data-flow has been used successfully to benchmark and formulate the framework for the envisaged successor data-flow, which will need to have strong horizontal-scaling capabilities as the number of UTs used by GOTO increases. The new framework is in active development and in mid-2020 processed some stages of the data-flow in parallel. In addition to the requisite need for a more robust and scalable pipeline, there is also a need for developing scalable transient identification algorithms, improvements to modelling wide-field PSFs for deconvolution and image subtraction, and automated image quality assessments for full-frame flagging. The new framework will address many of the early challenges that were uncovered during the prototype stage and will lead to technical advances in high-cadence wide-field optical image data processing.

Vision for next phases

With the UTs delivering the required headline performance metric, i.e. sufficient depth in a reasonably short exposure time, the true power lies in deploying a significant number of telescopes across more than one location. In the GOTO design, the instantaneous footprint scales with the number of unit telescopes. The project is transitioning from prototype platform towards full deployment. At the La Palma site, the prototype equipment will be replaced by two new 8-telescope systems (Fig. 16). These feature the revised tubes and mount noted above and will provide a collective field of view of ≈75-80 square degrees.
With such a footprint, and a typical exposure set of several minutes per pointing, a good cadence can be achieved across the visible sky, a key driver for the project (∼10,000 square degrees per night). In parallel, a twin deployment is being developed at Siding Spring Observatory that will provide all-sky coverage. La Palma and Siding Spring form an ideal antipodal setup, covering all declinations whilst offering maximum complementarity. It is also of note that these two sites offer key longitudinal coverage compared to, for example, Hawaii and Chile. The Siding Spring array will be identical to La Palma, also featuring two 8-telescope mount systems. This large expansion, amounting to an 8-fold increase in the number of telescopes deployed, will significantly boost the capabilities of GOTO compared to the prototype in both the monitoring and responsive modes. We previously considered an estimate for the expected rate of transients in survey mode for a dual 8-telescope system, or GOTO-16. This is a full node at a given single site. With the addition of the second site at Siding Spring Observatory, the survey can be extended to cover the whole sky, and the expected rate of recoverable events will scale ∼linearly with coverage. Each site can cover about a quarter of the whole sky each night, and combined they allow for an all-sky cadence of 2-3 days covering both the northern and southern hemispheres. A considerable fraction of the sky is visible from both sites. However, there are also other modes available given the array approach of GOTO. It is also possible to point multiple mount systems at the same patch of sky to go deeper, with the limiting flux scaling roughly as ∼1/N^(1/2) for N co-pointed systems, and so reach events further out in volume. Alternatively, instead of maximising sky coverage or depth, the array could be split into groups that observe the same part of the sky in different filters simultaneously.
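The depth gain from co-pointing can be sketched under a background-limited assumption: co-adding N independent systems improves the signal-to-noise by √N, giving a limiting-magnitude gain of 1.25 log₁₀ N. This is an idealised scaling, not a measured GOTO performance figure:

```python
import math

def copointing_depth_gain(n_systems):
    """Limiting-magnitude gain from co-adding n independent systems,
    assuming background-limited imaging: S/N grows as sqrt(n), so
    delta_m = 2.5 * log10(sqrt(n)) = 1.25 * log10(n)."""
    return 1.25 * math.log10(n_systems)

# Two co-pointed mounts gain ~0.38 mag; a full 8-system site ~1.13 mag.
for n in (2, 4, 8):
    print(n, round(copointing_depth_gain(n), 2))
```

In practice overheads and correlated systematics erode this idealised gain somewhat, but stacking multiple visits on a reduced search area pushes in the same direction.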
Colour information is particularly revealing for sources which show colour evolution, or where colour can reveal underlying properties of surrounding ejecta material, as may be expected for kilonovae (Metzger & Fernández 2014). GOTO is well situated as a cost-effective, wide-field, scalable, and adaptable optical observatory. The volumetric survey speed (Bellm 2016) of a single-site GOTO-16 can be estimated as ∼2 × 10⁷ Mpc³/hr down to a limiting magnitude of 19.8, comparable to that of other instruments such as ATLAS and Pan-STARRS. For the full-scale GOTO instrument, the survey speed shows marked improvements: up to ∼10⁸ Mpc³/hr for a unique pointing strategy, and ∼2 × 10⁸ Mpc³/hr if overlapping alignments of tiles are observed. Metrics similar to the volumetric survey speed, such as the grasp (Ofek & Ben-Ami 2020), can be used to showcase the unique niche that GOTO will fill in the current array of operable instruments. Based on estimates of information grasp, multiples of small and scalable telescope systems such as GOTO and ATLAS can be more than 3 times more cost-effective compared to other survey telescopes with single-unit systems. Turning now to the responsive-mode performance, we illustrate the impact on the core science area of EM counterpart searches coincident with GW triggers. A key improvement thanks to the dual antipodal sites is an effective doubling of the duty cycle for any given GW event localisation region. This will significantly reduce the overall latency and roughly double the number of recoverable events detected within, say, the first 12 hours. The single-site nature of the GOTO-4 prototype was the main limiting factor setting the mean delay to first observation during the O3 run (Gompertz et al. 2020). The responsive-mode searches will also profit directly from the increase in survey grasp. This will allow more search area to be covered more quickly.
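The single-site GOTO-16 figure can be reproduced to order of magnitude with a simplified version of the Bellm (2016) metric. The fiducial M = −19 transient, the 80 square degree instantaneous footprint, and the ∼5 minutes per pointing set are assumptions made for this sketch, not official survey parameters:

```python
import math

def volumetric_survey_speed(m_lim=19.8, M_fiducial=-19.0,
                            fov_deg2=80.0, time_per_pointing_hr=5.0 / 60.0):
    """Volumetric survey speed in Mpc^3/hr: the volume within which a
    fiducial M = -19 transient is detectable per pointing, divided by
    the time spent per pointing (a Bellm 2016-style metric)."""
    # Horizon distance in Mpc from the distance modulus m - M
    d_mpc = 10 ** ((m_lim - M_fiducial + 5.0) / 5.0) / 1.0e6
    volume = (4.0 / 3.0) * math.pi * d_mpc ** 3 * (fov_deg2 / 41253.0)
    return volume / time_per_pointing_hr

# With these assumptions the metric comes out at ~2e7 Mpc^3/hr,
# in line with the single-site GOTO-16 estimate quoted above.
print(f"{volumetric_survey_speed():.1e}")
```

Doubling the footprint or halving the per-pointing time scales the metric linearly, which is why the full two-site facility reaches the ∼10⁸ Mpc³/hr regime quoted above.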
If we simply scale from the O3a sample in Gompertz et al. (2020), this would double the mean coverage to ∼1500 square degrees per campaign, or 90 per cent of the O3a LVC probability skymaps, and offer a reduction in the average response-time delay of ∼4.5 hours. A final key step is the continued evolution of the GW localisation performance, evolving to significantly better localisations as the global networks develop, and thus smaller areas to search over. The increase in survey grasp can then be used to provide denser and deeper coverage of these search areas. For events that are only accessible from a single site, this would be a fourfold increase in grasp, whereas events accessible from both sites could receive the full factor of 8. Thus smaller areas and more telescopes combine to offer a significant opportunity to boost the typical depth achieved in search pointings. Co-pointing multiple mount systems can be seen as essentially increasing the effective exposure time per set, with a corresponding gain in limiting magnitude (Fig. 9). The reduction in search area comes on top of this and will allow multiple visits to be stacked for even greater depth. Although the localisation performance and its evolution are complex (Abbott et al. 2020a; Petrov et al. 2021), gains of 1-2 mags compared to the depth achieved in a single set are to be expected. The best strategy will be event-dependent, in terms of the specific optimal balance between maximising probability covered, depth achieved and time delay since the GW trigger. Looking in general at the prospects for kilonova detections with wide-field instruments also highlights how facilities such as GOTO complement and extend our search capabilities. The diverse specifications of current and planned facilities can be directly assessed in terms of their capability of probing the kilonova detectable volume. Several studies have addressed the serendipitous detectability with the LSST at VRO (e.g. Cowperthwaite et al. 2019; Setzer et al.
2019; Scolnic et al. 2018), for other wide-field instruments like GOTO and DECam (Rosswog et al. 2017; Chase et al. 2021), and using infrastructure like the ZTF REaltime Search and Triggering pipeline (ZTFReST; Andreoni et al. 2021). While space-based instruments like the Roman Space Telescope (formerly WFIRST; Spergel et al. 2015) are distinctly poised to reach deep (z ∼ 1) into the volume, terrestrial observatories like LSST, DECam and GOTO are fully capable of imaging out to z ∼ 0.1 (Chase et al. 2021). A more detailed description of the final GOTO hardware, together with performance metrics, will be provided in a future paper, following commissioning of the science-grade arrays.

Figure 1. The GOTO prototype unit telescopes make use of a Wynne corrector in a Newtonian configuration.

Figure 2. A photo of the GOTO-4 prototype telescope system at the Roque de los Muchachos Observatory in 2018, loaded with the initial 4 prototype unit telescopes inside an 18-ft Astrohaven dome.

Figure 3. The GOTO prototype field of view. On the left is a commissioning image of M31 taken with one of GOTO's cameras, showing the wide field of view of a single unit telescope. Four unit telescopes together create the initial 18 square degree prototype survey tile (GOTO-4), which will increase to 40 square degrees in the full GOTO-8 system (shown by the dashed boxes). For comparison, the fields of view of two other wide-field projects are shown to scale on the right: the Zwicky Transient Facility and the Rubin Observatory LSST Camera.

Figure 4. The G-TeCS software architecture. The observation database along with the sentinel, scheduler and conditions daemons are located on a central server (left). The pilot and hardware daemons for the telescope are run on the primary control computer located in the GOTO dome (centre). The hardware daemons communicate with their respective hardware units (right) directly or via interface daemons (in the case of the unit telescopes).
Figure 7. Bandpass comparison between the four Baader filters used by GOTO (filled areas) and the selected reference filters from the APASS survey (solid lines).

Figure 8. Throughput model for one of the GOTO-4 unit telescopes. The complete model (coloured areas) includes contributions from the OTA optics and CCD quantum efficiency (QE; dashed lines) and the bandpasses of the four Baader filters (dotted lines, from Figure 7).

Figure 9. Calculated 5σ limiting magnitudes for the GOTO-4 prototype as a function of exposure time, in the four Baader filters. Limits for dark and bright time are shown by solid and dashed lines respectively, and assume a target at airmass 1.0 and seeing of 1.5 arcsec.

The zeropoint magnitudes in the four Baader filters were 22.63 mag (AB), 21.33 mag (AB), 21.67 mag (Vega) and 21.66 mag (AB). Under typical observing conditions and during dark time, the airmass-corrected zeropoint magnitudes for a single UT (UT1) were observed to be 22.47 mag (AB), 21.27 mag (AB), 21.37 mag (Vega), and 21.49 mag (AB). The airmass-corrected zeropoint magnitudes were found to be 22.65, 22.54 and 22.45 for UT2, UT3 and UT4, respectively.

Note: T2407 contains M31 (00:42:44.3, +41:16:09) and was observed on 145 occasions, while T2204 contained GOTO2019hope (SN 2019pjv) (17:14:34.817, +28:07:26.26; see section 4.5) and was observed 123 times.

Figure 10. All on-grid observations taken by the GOTO-4 prototype between 21 February 2019 and 1 August 2020. The labelled tiles mark objects of particular interest during our commissioning observations; T2407 covers M31, and T2204 includes the GOTO2019hope field (see section 3.3.5).

Figure 12. Light curve of the source GOTO2019bryr, also known as AT2019fun. This is a newly discovered CV, first detected by GOTO. Upper limits are marked with triangles. Contributions to the light curve from ZTF (r and g bands from Lasair) are also included for completeness.

Figure 14. GOTO L-band light curve of GOTO2019hope/SN 2019pjv.
This field was targeted nightly over the duration of the SN as a technical test for the difference image analysis and the transient "realbogus" model.

Figure 15. GOTO clear and L-band light curve of the Type Ia SN GOTO21cl/2021fqb. The initial detection was picked up close to the noise limit of the detection image by the transient "realbogus" model. The timescales for subsequent triggering of the Liverpool Telescope (LT) for follow-up and observations by LT's SPRAT instrument (Piascik et al. 2014) are shown.

Figure 16. Visualisation of a full GOTO node site consisting of 16 UTs spread over 2 domes. The northern and southern nodes will contain identical sets of 2×8, providing a total of 32 UTs.

Table 1. GOTO prototype hardware specifications.

Site:
  Latitude                  28° 45′ 36.2″ N
  Longitude                 17° 52′ 45.4″ W
  Altitude                  2300 m a.s.l.
  Dome design               Clamshell
  Dome diameter             18 ft (5.5 m)
Mount:
  Mount design              German equatorial (parallactic)
  Mount slew rate           4-5 deg s⁻¹
  UTs per mount             8 (4 filled)
Unit telescopes:
  OTA design                Wynne-Riccardi
  Primary diameter          40 cm
  Primary conic constant    −1.5
  Secondary diameter        19 cm (short axis)
  Secondary conic constant  N/A (flat)
  Corrector diameter        12 cm
  Focal ratio               f/2.5
  Field of view             2.1 deg × 2.8 deg
Detectors:
  Detector size             8304 × 6220 pixels
  Active region             8176 × 6132 pixels
  Pixel size                6 μm
  Pixel scale               1.25 arcsec/pixel
  Filters                   Baader L, R, G, B
  Gain                      0.53-0.63 e⁻/ADU
  Readout noise             12 e⁻
  Dark current noise        < 0.002 e⁻/s
  Full-well capacity        40300 e⁻
  Fixed-pattern noise       0.4% full-well capacity
  Non-linearity             < 0.2%

MNRAS 000, 1-19 (2021)

Footnotes:
https://pythonhosted.org/Pyro4/
https://github.com/Lyalpha/spalipy
https://www.djangoproject.com/
https://github.com/celery/celery
Data files are available from https://github.com/GOTO-OBS/public_resources and through the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/?gname=GOTO).
https://lasair.roe.ac.uk/object/ZTF19aaviqnb

ACKNOWLEDGEMENTS

We thank the referee Eric Bellm for the constructive comments. The Gravitational-wave Optical Transient Observer (GOTO) project acknowledges the support of the (Grant agreement No. 725246, PI Levan). This research has made use of data and/or services provided by the International Astronomical Union's Minor Planet Center. This research has also made use of the following Python packages: Astropy (Astropy Collaboration et al. 2013, 2018), numpy (Harris et al. 2020), scipy (Virtanen et al. 2020), healpy (Zonca et al. 2019), Astrolib PySynphot (Lim et al. 2015), astroplan (Morris et al. 2018), pandas (McKinney et al. 2010), photutils (Bradley et al. 2020), and scikit-learn (Pedregosa et al. 2011).

DATA AVAILABILITY

Data files covering the system throughput and some of the software packages are available via public github repositories under https://github.com/GOTO-OBS/. Prototype data was mainly used for testing and commissioning and a full release of all data is not foreseen. Some data products will be available as part of planned GOTO public data releases.

REFERENCES

Aasi J., et al., 2013, Phys. Rev. D, 88, 062001
Abbott B. P., et al., 2017a, Physical Review Letters, 119, 161101
Abbott B. P., et al., 2017b, Nature, 551, 85
Abbott B. P., et al., 2017c, ApJ, 848, L13
Abbott B. P., et al., 2017d, ApJ, 848, L12
Abbott B. P., et al., 2020a, Living Reviews in Relativity, 23, 3
Abbott B. P., et al., 2020b, ApJ, 892, L3
Abbott R., et al., 2020c, ApJ, 896, L44
Acernese F., et al., 2015, Classical and Quantum Gravity, 32, 024001
Ackley K., et al., 2020, A&A, 643, A113
Aihara H., et al., 2018, PASJ, 70, S4
Alard C., 2000, A&AS, 144, 363
Alard C., Lupton R. H., 1998, ApJ, 503, 325
Almualla M., et al., 2021, MNRAS, 504, 2822
Andreoni I., et al., 2017, Publ. Astron. Soc. Australia, 34, e069
Andreoni I., et al., 2021, ApJ, 918, 63
Arcavi I., et al., 2017, Nature, 551, 64
Astropy Collaboration et al., 2013, A&A, 558, A33
Astropy Collaboration et al., 2018, AJ, 156, 123
Barthelmy S. D., et al., 1998, in Gamma-Ray Bursts, 4th Hunstville Symposium. pp 99-103
Becker A., 2015, HOTPANTS: High Order Transform of PSF ANd Template Subtraction (ascl:1504.004)
Bellm E. C., 2016, PASP, 128, 084501
Bellm E. C., et al., 2019, PASP, 131, 018002
Bertin E., Arnouts S., 1996, A&AS, 117, 393
Bloemen S., Groot P., Nelemans G., Klein-Wolt M., 2015, in Rucinski S.
M., Torres G., Zejda M., eds, Astronomical Society of the Pacific Conference Series Vol. 496, Living Together: Planets, Host Stars and Binaries. p. 254 . J S Bloom, 10.1086/668468PASP. 1241175Bloom J. S., et al., 2012, PASP, 124, 1175 . G Borisov, 10.1093/mnrasl/sly140MNRAS. 480131Borisov G., et al., 2018, MNRAS, 480, L131 . L Bradley, 10.5281/zenodo.4044744Bradley L., et al., 2020, astropy/photutils: 1.0.0, doi:10.5281/zenodo.4044744, https://doi.org/10.5281/zenodo. . H Brink, J W Richards, D Poznanski, J S Bloom, J Rice, S Negahban, M Wainwright, 10.1093/mnras/stt1306MNRAS. 4351047Brink H., Richards J. W., Poznanski D., Bloom J. S., Rice J., Negahban S., Wainwright M., 2013, MNRAS, 435, 1047 . M Cantiello, 10.3847/2041-8213/aaad64ApJ. 85431Cantiello M., et al., 2018, ApJ, 854, L31 . K C Chambers, arXiv:1612.05560arXiv e-printsChambers K. C., et al., 2016, arXiv e-prints, p. arXiv:1612.05560 . E A Chase, arXiv:2105.12268arXiv e-printsChase E. A., et al., 2021, arXiv e-prints, p. arXiv:2105.12268 . R Chornock, 10.3847/2041-8213/aa905cApJ. 84819Chornock R., et al., 2017, ApJ, 848, L19 . D A Coulter, 10.1126/science.aap9811Science. 3581556Coulter D. A., et al., 2017, Science, 358, 1556 . S Covino, 10.1038/s41550-017-0285-zNature Astronomy. 1791Covino S., et al., 2017, Nature Astronomy, 1, 791 . P S Cowperthwaite, 10.3847/2041-8213/aa8fc7ApJ. 84817Cowperthwaite P. S., et al., 2017, ApJ, 848, L17 . P S Cowperthwaite, 10.3847/1538-4357/aabad9ApJ. 85818Cowperthwaite P. S., et al., 2018, ApJ, 858, 18 . P S Cowperthwaite, V A Villar, D M Scolnic, E Berger, 10.3847/1538-4357/ab07b6ApJ. 87488Cowperthwaite P. S., Villar V. A., Scolnic D. M., Berger E., 2019, ApJ, 874, 88 . G Dálya, 10.1093/mnras/sty1703MNRAS. 4792374Dálya G., et al., 2018, MNRAS, 479, 2374 . A J Drake, 10.1088/0004-637X/696/1/870ApJ. 696870Drake A. J., et al., 2009, ApJ, 696, 870 . M R Drout, 10.1126/science.aaq0049Science. 3581570Drout M. R., et al., 2017, Science, 358, 1570 . 
J G Ducoin, D Corre, N Leroy, Le Floch, E , 10.1093/mnras/staa114MNRAS. 4924768Ducoin J. G., Corre D., Leroy N., Le Floch E., 2020, MNRAS, 492, 4768 . D A Duev, 10.1093/mnras/stz2357MNRAS. 4893582Duev D. A., et al., 2019, MNRAS, 489, 3582 . C Duffy, 10.1093/mnras/stab389MNRAS. 5024953Duffy C., et al., 2021, MNRAS, 502, 4953 . M J Dyer, arXiv:2003.06317University of SheffieldPhD thesisDyer M. J., 2020, PhD thesis, University of Sheffield (arXiv:2003.06317) M J Dyer, V S Dhillon, S Littlefair, D Steeghs, K Ulaczyk, P Chote, D Galloway, E Rol, 10.1117/12.2311865arXiv:1807.01614Observatory Operations: Strategies, Processes, and Systems VII. p. 107040Dyer M. J., Dhillon V. S., Littlefair S., Steeghs D., Ulaczyk K., Chote P., Galloway D., Rol E., 2018, in Observatory Operations: Strate- gies, Processes, and Systems VII. p. 107040C (arXiv:1807.01614), doi:10.1117/12.2311865 M J Dyer, 10.1117/12.2561506arXiv:2012.02686Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. p. 114521Dyer M. J., et al., 2020, in Society of Photo-Optical Instrumentation En- gineers (SPIE) Conference Series. p. 114521Q (arXiv:2012.02686), doi:10.1117/12.2561506 . P A Evans, 10.1126/science.aap9580Science. 3581565Evans P. A., et al., 2017, Science, 358, 1565 . S Fairhurst, 10.1088/1367-2630/11/12/123006New Journal of Physics. 11123006Fairhurst S., 2009, New Journal of Physics, 11, 123006 . B Flaugher, 10.1088/0004-6256/150/5/150AJ. 150150Flaugher B., et al., 2015, AJ, 150, 150 . N Gehrels, J K Cannizzo, J Kanner, M M Kasliwal, S Nissanke, L P Singer, 10.3847/0004-637X/820/2/136ApJ. 820136Gehrels N., Cannizzo J. K., Kanner J., Kasliwal M. M., Nissanke S., Singer L. P., 2016, ApJ, 820, 136 . A Goldstein, 10.3847/2041-8213/aa8f41ApJ. 84814Goldstein A., et al., 2017, ApJ, 848, L14 . B P Gompertz, 10.1093/mnras/staa1845MNRAS. 497726Gompertz B. P., et al., 2020, MNRAS, 497, 726 . L K Hardy, T Butterley, V S Dhillon, S P Littlefair, R W Wilson, 10.1093/mnras/stv2279MNRAS. 
4544316Hardy L. K., Butterley T., Dhillon V. S., Littlefair S. P., Wilson R. W., 2015, MNRAS, 454, 4316 . C R Harris, 10.1038/s41586-020-2649-2Nature. 585357Harris C. R., et al., 2020, Nature, 585, 357 . J Hjorth, 10.3847/2041-8213/aa9110ApJ. 84831Hjorth J., et al., 2017, ApJ, 848, L31 . Ž Ivezić, 10.3847/1538-4357/ab042cApJ. 873111Ivezić Ž., et al., 2019, ApJ, 873, 111 . L Izzo, 10.1038/s41586-018-0826-3Nature. 565324Izzo L., et al., 2019, Nature, 565, 324 Scientific Charge-Coupled Devices. J R Janesick, 10.1117/3.374903SPIE PressJanesick J. R., 2001, Scientific Charge-Coupled Devices. SPIE Press, doi:https://doi.org/10.1117/3.374903 . E Kankare, GRB Coordinates Network256651Kankare E., et al., 2019, GRB Coordinates Network, 25665, 1 . M M Kasliwal, 10.1126/science.aap9455Science. 3581559Kasliwal M. M., et al., 2017, Science, 358, 1559 . S C Keller, 10.1071/AS07001Publ. Astron. Soc. Australia. 241Keller S. C., et al., 2007, Publ. Astron. Soc. Australia, 24, 1 M R Kennedy, The Astronomer's Telegram. 128601Kennedy M. R., et al., 2019, The Astronomer's Telegram, 12860, 1 . T L Killestein, 10.1093/mnras/stab633MNRAS. 5034838Killestein T. L., et al., 2021, MNRAS, 503, 4838 S Koposov, O Bartunov, Astronomical Society of the Pacific Conference Series. Gabriel C., Arviset C., Ponz D., Enrique S.351735Astronomical Data Analysis Software and Systems XVKoposov S., Bartunov O., 2006, in Gabriel C., Arviset C., Ponz D., Enrique S., eds, Astronomical Society of the Pacific Conference Series Vol. 351, Astronomical Data Analysis Software and Systems XV. p. 735 . H A Krimm, 10.1088/0067-0049/209/1/14ApJS. 20914Krimm H. A., et al., 2013, ApJS, 209, 14 . R G Kron, 10.1086/190669ApJS. 43305Kron R. G., 1980, ApJS, 43, 305 . GRB Coordinates Network. 215131LIGO Scientific Collaboration Virgo Collaboration 2017, GRB Coordinates Network, 21513, 1 GRB Coordinates Network. 256061LIGO Scientific Collaboration Virgo Collaboration 2019, GRB Coordinates Network, 25606, 1 . 
10.1088/0264-9381/32/7/074001Classical and Quantum Gravity. 3274001LIGO Scientific Collaboration et al., 2015, Classical and Quantum Gravity, 32, 074001 . D Lang, D W Hogg, K Mierle, M Blanton, S Roweis, 10.1088/0004-6256/139/5/1782AJ. 1391782Lang D., Hogg D. W., Mierle K., Blanton M., Roweis S., 2010, AJ, 139, 1782 . N M Law, 10.1086/680521PASP. 127234Law N. M., et al., 2015, PASP, 127, 234 . A J Levan, 10.3847/2041-8213/aa905fApJ. 84828Levan A. J., et al., 2017, ApJ, 848, L28 GRB Coordinates Network. 241681Ligo Scientific Collaboration VIRGO Collaboration 2019a, GRB Coordinates Network, 24168, 1 GRB Coordinates Network. 242281Ligo Scientific Collaboration VIRGO Collaboration 2019b, GRB Coordinates Network, 24228, 1 P L Lim, R I Diaz, V Laidler, ascl:1303.023Synthetic photometry software package. Lim P. L., Diaz R. I., Laidler V., 2015, pysynphot: Synthetic photometry software package (ascl:1303.023) . V M Lipunov, 10.3847/2041-8213/aa92c0ApJ. 8501Lipunov V. M., et al., 2017, ApJ, 850, L1 . J D Lyman, 10.1038/s41550-018-0511-3Nature Astronomy. 2751Lyman J. D., et al., 2018, Nature Astronomy, 2, 751 . L Makrygianni, 10.1017/pasa.2021.19Publ. Astron. Soc. Australia3825Makrygianni L., et al., 2021, Publ. Astron. Soc. Australia, 38, e025 . F J Masci, 10.1088/1538-3873/129/971/014002PASP. 12914002Masci F. J., et al., 2017, PASP, 129, 014002 W Mckinney, Proceedings of the 9th Python in Science Conference. the 9th Python in Science ConferenceMcKinney W., et al., 2010, in Proceedings of the 9th Python in Science Conference. pp 51-56 . C Meegan, 10.1088/0004-637X/702/1/791ApJ. 702791Meegan C., et al., 2009, ApJ, 702, 791 . B D Metzger, R Fernández, 10.1093/mnras/stu802MNRAS. 4413444Metzger B. D., Fernández R., 2014, MNRAS, 441, 3444 . Y L Mong, K Ackley, R Cutter, M J Dyer, 10.3847/1538-3881/aaa47eMNRAS. 155128AJMong Y. L., Ackley K., Cutter R., Dyer M. J., et al. 2021, MNRAS Morris B. M., et al., 2018, AJ, 155, 128 . J R Mullaney, 10.1017/pasa.2020.45Publ. Astron. Soc. 
Australia384Mullaney J. R., et al., 2021, Publ. Astron. Soc. Australia, 38, e004 . V Nascimbeni, Grawita CollaborationI Salmaso, Grawita CollaborationL Tomasella, Grawita CollaborationS Benetti, Grawita CollaborationP D&apos;avanzo, Grawita CollaborationE Cappellaro, Grawita CollaborationE Brocato, Grawita CollaborationGRB Coordinates Network256611Nascimbeni V., Salmaso I., Tomasella L., Benetti S., D'Avanzo P., Cappellaro E., Brocato E., Grawita Collaboration 2019, GRB Coordinates Network, 25661, 1 . M Nicholl, 10.3847/2041-8213/aa9029ApJ. 84818Nicholl M., et al., 2017, ApJ, 848, L18 . E O Ofek, S Ben-Ami, 10.1088/1538-3873/abc14cPASP. 132125004Ofek E. O., Ben-Ami S., 2020, PASP, 132, 125004 . Y Osaki, PASJ. 26429Osaki Y., 1974, PASJ, 26, 429 . A Pastorello, M Fraser, 10.1038/s41550-019-0809-9Nature Astronomy. 3676Pastorello A., Fraser M., 2019, Nature Astronomy, 3, 676 . F Pedregosa, Journal of Machine Learning Research. 122825Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825 Ground-based and Airborne Instrumentation for Astronomy IV. C Pernechele, L Abe, P Bendjoya, A Cellino, G Massone, J P Rivet, P Tanga, 10.1117/12.925933Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. McLean I. S., Ramsay S. K., Takami H.844684462Pernechele C., Abe L., Bendjoya P., Cellino A., Massone G., Rivet J. P., Tanga P., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84462H, doi:10.1117/12.925933 Data-driven expectations for electromagnetic counterpart searches based on LIGO/Virgo public alerts. P Petrov, arXiv:2108.07277Petrov P., et al., 2021, Data-driven expectations for electromagnetic counter- part searches based on LIGO/Virgo public alerts (arXiv:2108.07277) . E Pian, 10.1038/nature24298Nature. 55167Pian E., et al., 2017, Nature, 551, 67 . 
A S Piascik, I A Steele, S D Bates, C J Mottram, R J Smith, R M Barnsley, B Bolton, 10.1117/12.2055117Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Ramsay S. K., McLean I. S., Takami H.914791478Piascik A. S., Steele I. A., Bates S. D., Mottram C. J., Smith R. J., Barnsley R. M., Bolton B., 2014, in Ramsay S. K., McLean I. S., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Confer- ence Series Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V. p. 91478H, doi:10.1117/12.2055117 . D L Pollacco, 10.1086/508556PASP. 1181407Pollacco D. L., et al., 2006, PASP, 118, 1407 . G Ramsay, GRB Coordinates Network232521Ramsay G., et al., 2018, GRB Coordinates Network, 23252, 1 . S Rosswog, U Feindt, O Korobkin, M R Wu, J Sollerman, A Goobar, G Martinez-Pinedo, 10.1088/1361-6382/aa68a9Classical and Quantum Gravity. 34104001Rosswog S., Feindt U., Korobkin O., Wu M. R., Sollerman J., Goobar A., Martinez-Pinedo G., 2017, Classical and Quantum Gravity, 34, 104001 . V E Savanevych, 10.1051/0004-6361/201630323A&A. 60954Savanevych V. E., et al., 2018, A&A, 609, A54 . V Savchenko, 10.3847/2041-8213/aa8f94ApJ. 84815Savchenko V., et al., 2017, ApJ, 848, L15 . D Scolnic, 10.3847/2041-8213/aa9d82ApJ. 8523Scolnic D., et al., 2018, ApJ, 852, L3 . C N Setzer, LSST Dark Energy Science CollaborationR Biswas, LSST Dark Energy Science CollaborationH V Peiris, LSST Dark Energy Science CollaborationS Rosswog, LSST Dark Energy Science CollaborationO Korobkin, LSST Dark Energy Science CollaborationR T Wollaeger, LSST Dark Energy Science Collaboration10.1093/mnras/stz506MNRAS. 4854260Setzer C. N., Biswas R., Peiris H. V., Rosswog S., Korobkin O., Wollaeger R. T., LSST Dark Energy Science Collaboration 2019, MNRAS, 485, 4260 . B J Shappee, 10.1088/0004-637X/788/1/48ApJ. 78848Shappee B. J., et al., 2014, ApJ, 788, 48 . B J Shappee, 10.1126/science.aaq0186Science. 3581574Shappee B. J., et al., 2017, Science, 358, 1574 . 
L P ; P Singer, 10.1088/0004-637X/795/2/105ApJ. 795105California Institute of Technology Singer LPhD thesisSinger L. P., 2015, PhD thesis, California Institute of Technology Singer L. P., et al., 2014, ApJ, 795, 105 . S J Smartt, 10.1038/nature24303Nature. 55175Smartt S. J., et al., 2017, Nature, 551, 75 . K W Smith, 10.3847/2515-5172/ab020fResearch Notes of the AAS. 326Smith K. W., et al., 2019, Research Notes of the AAS, 3, 26 . M T Soumagnac, E O Ofek, 10.1088/1538-3873/aac410Publications of the Astronomical Society of the Pacific130Soumagnac M. T., Ofek E. O., 2018, Publications of the Astronomical Society of the Pacific, 130 . D Spergel, arXiv:1503.03757arXiv e-printsSpergel D., et al., 2015, arXiv e-prints, p. arXiv:1503.03757 . D Steeghs, GRB Coordinates Network. 221901Steeghs D., et al., 2017, GRB Coordinates Network, 22190, 1 . D Steeghs, GRB Coordinates Network238331Steeghs D., et al., 2019, GRB Coordinates Network, 23833, 1 . N R Tanvir, 10.3847/2041-8213/aa90b6ApJ. 84827Tanvir N. R., et al., 2017, ApJ, 848, L27 . J L Tonry, 10.1088/1538-3873/aabadfPASP. 13064505Tonry J. L., et al., 2018a, PASP, 130, 064505 . J L Tonry, 10.3847/1538-4357/aae386ApJ. 867105Tonry J. L., et al., 2018b, ApJ, 867, 105 . E Troja, 10.1038/nature24290Nature. 55171Troja E., et al., 2017, Nature, 551, 71 . Y Utsumi, 10.1093/pasj/psx118PASJ. 69101Utsumi Y., et al., 2017, PASJ, 69, 101 . S Valenti, 10.3847/2041-8213/aa8edfApJ. 84824Valenti S., et al., 2017, ApJ, 848, L24 . J Veitch, Physical Review D. 9142003Veitch J., et al., 2015, Physical Review D, 91, 042003 . P Virtanen, 10.1038/s41592-019-0686-2Nature Methods. 17261Virtanen P., et al., 2020, Nature Methods, 17, 261 . D J. ; E White, 10.1093/mnras/stv292MNRAS. 449451University of Sheffield Wright DPhD thesisWhite D. J., 2014, PhD thesis, University of Sheffield Wright D. E., et al., 2015, MNRAS, 449, 451 . B Zackay, E O Ofek, A Gal-Yam, 10.3847/0004-637X/830/1/27ApJ. 83027Zackay B., Ofek E. O., Gal-Yam A., 2016, ApJ, 830, 27 . 
A Zonca, L Singer, D Lenz, M Reinecke, C Rosset, E Hivon, K Gorski, 10.21105/joss.01298Journal of Open Source Software. 41298Zonca A., Singer L., Lenz D., Reinecke M., Rosset C., Hivon E., Gorski K., 2019, Journal of Open Source Software, 4, 1298
The (2 + δ)-dimensional theory of the electromechanics of lipid membranes: I. Electrostatics

Yannick A. D. Omar, Zachary G. Lipel, Kranthi K. Mandadapu

Department of Chemical & Biomolecular Engineering, University of California, Berkeley, CA 94720, USA
Chemical Sciences Division, Lawrence Berkeley National Laboratory, CA 94720, USA

arXiv:2301.09610

Abstract. The coupling of electric fields to the mechanics of lipid membranes gives rise to intriguing electromechanical behavior, as, for example, evidenced by the deformation of lipid vesicles in external electric fields. Electromechanical effects are relevant for many biological processes, such as the propagation of action potentials in axons and the activation of mechanically-gated ion channels. Currently, a theoretical framework describing the electromechanical behavior of arbitrarily curved and deforming lipid membranes does not exist. Purely mechanical models commonly treat lipid membranes as two-dimensional surfaces, ignoring their finite thickness. While holding analytical and numerical merit, this approach cannot describe the coupling of lipid membranes to electric fields and is thus unsuitable for electromechanical models. In a sequence of articles, we derive an effective surface theory of the electromechanics of lipid membranes, named a (2 + δ)-dimensional theory, which has the advantages of surface descriptions while accounting for finite thickness effects. The present article proposes a new, generic dimension-reduction procedure relying on low-order spectral expansions. This procedure is applied to the electrostatics of lipid membranes to obtain a (2 + δ)-dimensional theory that captures potential differences across and electric fields within lipid membranes. This new model is tested on different geometries relevant for lipid membranes, showing good agreement with the corresponding three-dimensional electrostatics theory.

This article is the first in a series of three that derives the (2 + δ)-dimensional theory of the electromechanics of lipid membranes. The series of articles is structured as follows.

Part 1: Electrostatics. We introduce a new dimension reduction technique for partial differential equations based on low-order spectral expansions of the solution. We then apply the dimension reduction procedure to the electrostatics of thin films and show the effectiveness of the new, dimensionally-reduced theory.

Part 2: Balance laws. We apply the dimension reduction procedure to the three-dimensional mechanical balance laws of thin films, while accounting for Maxwell stresses arising from electric fields. This yields dimensionally-reduced, constitutive model-independent mass, angular momentum, and linear momentum balance equations.

Part 3: Constitutive models. We propose three-dimensional elastic and viscous constitutive models for lipid membranes and derive the governing equations of the (2 + δ)-dimensional theory for the electromechanics of lipid membranes.

1 Introduction

Lipid membranes separate the interior and exterior of a biological cell and its organelles, serving as barriers that regulate the transport of proteins, ions, and other molecules. They exist in various, dynamically-changing shapes, with radii of curvature ranging from tens to hundreds of nanometers. In contrast, they are comprised of only two layers of lipid molecules, forming a bilayer structure with a thickness of just 3−5 nm. This makes lipid membranes exceptionally thin materials.

The thin, bilayer structure of lipid membranes gives rise to peculiar mechanical behavior. In-plane stretch and out-of-plane bending indicate elastic behavior, while the in-plane flow of lipids shows signatures of viscous behavior.
In addition, lipid membranes exhibit an intricate coupling between out-of-plane elastic deformations and in-plane viscous flows. Consequently, lipid membranes are considered viscous-elastic materials [1][2][3]. Lipid membranes also exhibit coupled electrical and mechanical behavior. For instance, under the action of an electric field, membrane vesicles deform into prolates, oblates, and other shapes [4][5][6][7][8][9][10] and even form pores [6,8,9,[11][12][13][14][15][16][17]. In addition, the bulk fluid surrounding lipid membranes is often an electrolyte with varying ionic concentrations across the boundaries and within the interior of cells and organelles. Such concentration differences can give rise to electro-osmotic flows and expose lipid membranes to local electric fields, thereby inducing Maxwell stresses and deformations. Understanding the electromechanics of lipid membranes is relevant across various disciplines. For example, electroporation, the creation of pores by an external electric field, is employed in novel procedures for non-thermal food processing, non-thermal tumor ablation, and the delivery of cancer treatment drugs into cells [18][19][20]. Furthermore, electroporation of nearby lipid membranes leads to their subsequent fusion. This so-called electrofusion is used to facilitate cell-hybridization [21] and the creation of microreactors [9]. The electromechanics of lipid membranes is also essential for understanding many biological phenomena. One fascinating example is the propagation of an action potential through an axon. Action potentials constitute localized and transient depolarizations of the axon caused by ionic currents through the lipid membrane. They travel along the axon to propagate signals-for example, between sensory neurons and the brain [22]. Despite evidence of thermal and mechanical effects [23][24][25][26][27][28], the perspective of action potentials as purely electric phenomena prevails. 
However, recent attempts challenge the purely electrical description by accounting for coupled thermodynamic, electrical and mechanical aspects [29][30][31].

Theoretical models of the electromechanics of lipid membranes are indispensable to understanding the above phenomena. They often involve long time scales and large length scales, necessitating the development of continuum models. Due to their small thickness, continuum theories commonly model lipid membranes as two-dimensional surfaces [3,32,33], an approach well-established for the mechanics of lipid membranes. However, a surface description may not be suitable for the electromechanics of lipid membranes.

An electromechanical theory that treats lipid membranes as surfaces suffers from multiple shortcomings. First, surface descriptions do not resolve potential drops across lipid membranes but instead yield continuous potentials, contradictory to what is observed in action potentials, for instance. Second, arbitrary surface charge densities on the two interfaces between lipid membranes and their surroundings cannot be accounted for correctly. Lastly, the electric field in the interior of lipid membranes is not well-defined when treated as surfaces. Yet, the aforementioned aspects are all required to capture the Maxwell stresses acting on lipid membranes. Hence, a suitable electromechanical theory cannot treat lipid membranes as surfaces but needs to resolve effects arising from their finite thickness.

Three-dimensional models naturally account for the finite thickness of lipid membranes. However, three-dimensional models are complex and quickly become intractable for deforming geometries: finding analytical solutions is often unwieldy, and even their numerical treatment is challenging. In this work, we propose an effective two-dimensional theory to describe the electromechanics of lipid membranes.
Starting from a three-dimensional continuum picture, we introduce a new dimension reduction procedure using low-order spectral expansions. This approach leads to an effective two-dimensional theory, which explicitly retains the thickness information required to capture potential differences and Maxwell stresses. At the same time, the resulting equations are analytically and numerically less challenging than those of three-dimensional models and can be analyzed using the tools developed for two-dimensional surface theories. Thus, the proposed theory combines the advantages of three-dimensional and surface descriptions of lipid membranes and is referred to as the (2 + δ)-dimensional theory.

The remainder of this article is structured as follows. In Sec. 2, we revisit the well-known equations describing an electric field under quasi-static conditions and introduce the problem of a thin film embedded in a bulk domain. In Sec. 3, we take an abstract perspective and introduce the new dimension reduction procedure for a general differential equation. A more physically-inclined reader may skip Sec. 3 and immediately proceed to Sec. 4, wherein we apply the proposed dimension reduction method to the electrostatics of thin films. Section 5 concludes the article with analytical and numerical comparisons of the three-dimensional and dimensionally-reduced electrostatic theories for different geometries relevant for lipid membranes.

2 Electrostatics of a Thin Film

We begin this section by recalling the theory of continuum electrostatics with discontinuities [34]. Subsequently, we describe the electrostatics equations governing a thin film embedded in a three-dimensional bulk domain.

Under the conditions of electrostatics, Maxwell's equations for a linear dielectric material with constant permittivity reduce to [34] (see SM, Sec. 1 for details)

    ε div(ě) = q ,       ∀x ∈ B ,    (1)
    curl(ě) = 0 ,        ∀x ∈ B ,    (2)
    n̂ · ⟦εě⟧ = σ ,       ∀x ∈ S ,    (3)
    n̂ × ⟦ě⟧ = 0 ,        ∀x ∈ S ,    (4)

where B denotes the bulk domain, ε is the permittivity, ě is the electric field, and q is the free charge density in B. Additionally, S denotes an oriented surface, with normal n̂ and surface charge density σ, where the electric field is discontinuous. The notation ⟦•⟧ denotes the jump across a surface of discontinuity, i.e. ⟦•⟧ = •⁺ − •⁻, where •± denotes the value above and below S, respectively. By Helmholtz' theorem [35], Eq. (2) is satisfied by construction if we define the electric potential φ̌ such that

    ě = −grad φ̌ ,    (5)

which further simplifies Maxwell's equations to

    ∆φ̌ = −q/ε ,             ∀x ∈ B ,    (6)
    ⟦φ̌⟧ = 0 ,               ∀x ∈ S ,    (7)
    n̂ · ⟦ε grad φ̌⟧ = −σ ,   ∀x ∈ S ,    (8)
    n̂ × ⟦grad φ̌⟧ = 0 ,      ∀x ∈ S ,    (9)

where ∆ denotes the Laplacian. Equation (6) is Gauss' law written in terms of the potential and Eq. (7) describes continuity of the potential across the surface of discontinuity. The latter follows from Coulomb's law for continuous charges [36]. According to Eq. (8), the normal component of the gradient of the electric potential is discontinuous at a surface of discontinuity while, according to Eq. (9), components parallel to the surface of discontinuity are continuous. Note that, given Eq. (7), Eq. (9) is trivially satisfied.

Figure 1: Schematic of a thin film M with thickness δ that separates the two bulk domains B⁺ and B⁻.

Next, we consider a thin film without any free charge in its interior, as is the case for lipid membranes. The thin film M has thickness δ and is embedded in two bulk domains B± above and below M, as shown in Fig. 1. The top and bottom bounding surfaces of M are denoted by S± and are equipped with surface charge densities σ±, making S± surfaces of discontinuity. The outward-pointing normal vectors on S± are denoted by n̂±.
The three-dimensional electrostatics equations are given by

    ∆φ̌_{B⁻} = −q_{B⁻}/ε_{B⁻} ,   ∀x ∈ B⁻ ,   (10)
    ⟦φ̌⟧ = 0 ,                    ∀x ∈ S⁻ ,   (11)
    n̂⁺ · ⟦εě⟧ = σ⁻ ,             ∀x ∈ S⁻ ,   (12)
    ε_M ∆φ̌_M = 0 ,               ∀x ∈ M ,    (13)
    n̂⁻ · ⟦εě⟧ = σ⁺ ,             ∀x ∈ S⁺ ,   (14)
    ⟦φ̌⟧ = 0 ,                    ∀x ∈ S⁺ ,   (15)
    ∆φ̌_{B⁺} = −q_{B⁺}/ε_{B⁺} ,   ∀x ∈ B⁺ ,   (16)

where ε_{B±} and ε_M are the permittivity of the bulk regions and thin film, respectively. The jump conditions, Eqs. (12) and (14), are written in terms of electric fields for later notational convenience. We close the problem with boundary conditions on the lateral surface S_|| with outward-pointing normal ν, as shown in Fig. 1:

    φ̌_M = φ̄_M ,            ∀x ∈ S_||D ,   (17)
    −ν · grad φ̌_M = ē ,    ∀x ∈ S_||N ,   (18)

where S_|| = S_||D ∪ S_||N, S_||D ∩ S_||N = ∅, and φ̄_M and ē are the prescribed potential and electric field component, respectively. The remaining boundary conditions for φ̌_{B±} are of no consequence for the dimension-reduction procedure and are thus omitted here.

In the following, we refer to Eqs. (10)-(16) as the three-dimensional theory. In comparison, an effective, dimensionally-reduced theory replaces Gauss' law on the three-dimensional thin film M, Eq. (13), by an approximately equivalent equation defined on the two-dimensional mid-surface of M, denoted S_0. To that end, the following section introduces a new dimension reduction procedure that follows ideas used in spectral methods by expanding all unknowns and parameters in terms of orthogonal polynomials.

3 Spectral Dimension Reduction for Thin Films

In this section, we present a new, general approach to deriving dimensionally-reduced differential equations defined on the mid-surface of a thin film. We begin by revisiting spectral expansions in Sec. 3.1 and show how they can be used to derive dimensionally-reduced theories in Sec. 3.2. Note that the remaining sections are self-contained and that the reader may immediately proceed to Sec. 4 to find the dimensionally-reduced electrostatics equations.
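The content of the three-dimensional theory, Eqs. (10)-(18), is easiest to see in the simplest flat geometry: a planar, charge-free dielectric slab between two charge-free bulk layers, with all fields varying only across the thickness. The sketch below (our own illustrative construction, with made-up parameter values; not taken from the article) solves for the piecewise-linear potential and exposes the role of the interface conditions.

```python
import numpy as np

def planar_membrane(eps_bm, eps_m, eps_bp, delta, L, V, sig_m, sig_p):
    """Potential phi_i(x) = a_i x + b_i in B-, M, B+ (flat, charge-free regions).

    Unknowns [a-, b-, aM, bM, a+, b+] are fixed by:
      phi(-L) = 0 and phi(+L) = V          (outer boundary values, our choice)
      phi continuous at x = -d and x = +d  (1D analogues of Eqs. (11), (15))
      jump of eps*e, with e = -phi', equal to the surface charge at each
      interface                            (1D analogues of Eqs. (12), (14))
    """
    d = delta / 2.0
    A = np.array([
        [-L,     1.0, 0.0,    0.0,  0.0,     0.0],
        [0.0,    0.0, 0.0,    0.0,  L,       1.0],
        [-d,     1.0, d,     -1.0,  0.0,     0.0],
        [0.0,    0.0, d,      1.0, -d,      -1.0],
        [eps_bm, 0.0, -eps_m, 0.0,  0.0,     0.0],
        [0.0,    0.0, eps_m,  0.0, -eps_bp,  0.0],
    ])
    rhs = np.array([0.0, V, 0.0, 0.0, sig_m, sig_p])
    return np.linalg.solve(A, rhs)

# Water-like bulk (relative eps ~ 80) around a lipid-like film (eps ~ 2),
# no surface charge:
am, bm, aM, bM, ap, bp = planar_membrane(80.0, 2.0, 80.0, 1.0, 10.0, 1.0, 0.0, 0.0)
# With sigma = 0, eps*e is continuous across each interface, so the field inside
# the low-permittivity membrane is eps_b/eps_M = 40 times the bulk field, and
# most of the potential drop is localized across the thin film.
print(aM / am)
```

The same linear system, with nonzero σ±, reproduces the field discontinuities that a pure surface description of the membrane cannot represent.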
3.1 Mathematical Preliminaries of Spectral Expansions

Let P_k(θ) : (a, b) → R, k ∈ N₀, a, b ∈ R denote a real-valued polynomial and let P = {P_k(θ) : k ∈ N₀} denote the corresponding set of polynomials. For two sufficiently well-behaved functions f(θ), g(θ), θ ∈ (a, b), we define the weighted inner product

    ⟨f(θ), g(θ)⟩_w = ∫_(a,b) f(θ) g(θ) w(θ) dθ ,    (19)

where w(θ) denotes a weight function. If the polynomials in P satisfy the relation

    ⟨P_k(θ), P_l(θ)⟩_w = c_k δ_kl ,   ∀P_k, P_l ∈ P ,    (20)

we call P an orthogonal set of polynomials. In Eq. (20), c_k is some positive constant and δ_kl denotes the Kronecker delta. Let ||·||_w = √⟨·, ·⟩_w denote the norm induced by the inner product defined in Eq. (19) and let L²_w denote the space of functions bounded in ||·||_w. The N-th-order projection of any function f(θ) ∈ L²_w onto P, denoted by f_{P,N}, is defined as

    f_{P,N}(θ) = Σ_{k=0}^{N} f̂_k P_k(θ) ,    (21)

where f̂_k is the k-th coefficient of the expansion and is given by

    f̂_k = ⟨f(θ), P_k(θ)⟩_w .    (22)

The set of polynomials P is complete with respect to the norm ||·||_w if, for any f(θ) ∈ L²_w, [37]

    lim_{N→∞} ||f(θ) − f_{P,N}(θ)||_w = 0 .    (23)

Equation (21) in conjunction with Eq. (23) allows us to express functions in L²_w as

    f(θ) = Σ_{k=0}^{∞} f̂_k P_k(θ) .    (24)

Complete and orthogonal polynomials are commonly used in spectral methods to numerically solve differential equations. In the following, we briefly revisit the basics of spectral methods required for our method for dimension reduction; see Refs. [37][38][39] for more details.

Let L(v(θ); p) : U → L²_w denote a scalar-valued differential operator, where U is some space of sufficiently smooth functions defined on (a, b) and p = {p_j}_{j=1,..,N_p} is a set of parameters. We write a generic differential equation as

    L(u(θ); p) = 0 ,   θ ∈ (a, b), u ∈ U ,    (25)

postponing any discussion on the application of boundary conditions to Sec. 3.2. Due to the completeness property of P in Eq. (23) and Eq.
(24), we can expand the solution u(θ) as

u(θ) = Σ_{k=0}^∞ û_k P_k(θ) . (26)

Thus, finding the solution u(θ) is equivalent to finding the constant coefficients û_k. However, any numerical approach requires truncating the expansion at some finite order N ∈ ℕ,

u_P,N(θ) = Σ_{k=0}^N û_N,k P_k(θ) , (27)

where the subscript N on û_N,k indicates dependence on the truncation order N. The unknown coefficients û_N,k in Eq. (27) are found by replacing u(θ) by u_P,N(θ) in Eq. (25) and taking the inner product with the l-th polynomial, resulting in

⟨ L( Σ_{k=0}^N û_N,k P_k(θ); p ), P_l(θ) ⟩_w = 0 , ∀l ∈ [0, ..., N] . (28)

This yields N + 1 equations for the N + 1 unknown coefficients {û_N,k}_{k=0,...,N}. Since the spatial dependence is entirely contained in the polynomials P_k(θ), any derivative can be carried out explicitly and the N + 1 equations in Eq. (28) are no longer differential but algebraic equations, independent of θ. Solving the system of algebraic equations resulting from Eq. (28) yields the coefficients {û_N,k}_{k=0,...,N} and hence, the approximate solution u_P,N(θ).

Dimension Reduction Procedure for Thin Films Using Spectral Expansions

Before introducing the new dimension reduction procedure, we revisit the thin film setup described in Sec. 2. The arbitrarily curved, thin film M has constant thickness δ and the mid-surface S_0 divides M into two parts of equal thickness. The mid-surface S_0 is equipped with a normal vector n and the superscripts "+" and "−" indicate quantities associated with the regions above and below S_0, as defined by the orientation of n. The top, bottom, and lateral bounding surfaces of M are denoted by S+, S−, and S_||, respectively (see Fig. 1). The lateral bounding surface can be expressed as S_|| = ∂S_0 × (−δ/2, δ/2), where ∂S_0 is the boundary of S_0. The outward-pointing normal on S_|| is denoted by ν, as shown in Fig. 1. Finally, the body M is embedded into three-dimensional bulk domains, referred to as B+ above M and B− below M.
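The projection and truncation machinery of Eqs. (19)-(28) can be sketched numerically. The following minimal example, not part of the original derivation, uses Chebyshev polynomials as a concrete instance of P (the choice adopted later in the text) and Gauss-Chebyshev quadrature for the weighted inner product; since the P_k here are not normalized, the constants c_k from Eq. (20) are divided out when forming the coefficients of Eq. (22). The test function exp and the truncation orders are illustrative.

```python
import numpy as np

def cheb_inner(f, g, n=200):
    # Gauss-Chebyshev quadrature for <f, g>_w with w(t) = 1/(pi*sqrt(1 - t^2)),
    # the weighted inner product of Eq. (19)
    j = np.arange(n)
    t = np.cos((2*j + 1) * np.pi / (2*n))
    return np.sum(f(t) * g(t)) / n

def P(k):
    # k-th Chebyshev polynomial of the first kind
    return lambda t: np.cos(k * np.arccos(np.clip(t, -1.0, 1.0)))

def project(f, N):
    # expansion coefficients of Eqs. (21)-(22); the constants c_k of Eq. (20)
    # (c_0 = 1, c_k = 1/2 for k >= 1) are divided out since P_k is unnormalized
    return np.array([cheb_inner(f, P(k)) / (1.0 if k == 0 else 0.5)
                     for k in range(N + 1)])

coeffs = project(np.exp, 6)

def f_PN(t, c):
    # truncated projection f_{P,N} of Eq. (21)
    return sum(ck * P(k)(t) for k, ck in enumerate(c))

# the truncation error shrinks rapidly with N, illustrating Eq. (23)
t = np.linspace(-0.99, 0.99, 500)
err = [np.max(np.abs(np.exp(t) - f_PN(t, coeffs[:N + 1]))) for N in range(1, 7)]
```

For this smooth test function the zeroth coefficient equals the modified Bessel value I_0(1) ≈ 1.2661, and the maximum error drops by several orders of magnitude between N = 1 and N = 6, which is the behavior Eq. (23) formalizes.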
The thin film M is arbitrarily curved, making it convenient to formulate the proposed dimension reduction method in a differential geometry framework. To this end, we introduce a parametrization of M. The mid-surface S_0 is endowed with a two-dimensional, curvilinear parametrization ζ¹, ζ² ∈ Ω as shown in Fig. 2, where Ω denotes the parametric domain, such that we can express any position x_0 ∈ S_0 using the mapping χ_0 : Ω → S_0, (ζ¹, ζ²) → x_0. Parametrizing the full body M requires a third parametric direction, ζ³ ∈ Ξ, where Ξ denotes the corresponding parametric domain. We can then express any position x ∈ M using the mapping χ : Ω × Ξ → M, (ζ¹, ζ², ζ³) → x with χ|_{ζ³=0} = χ_0. We now discuss how spectral expansions can be used to reduce a differential equation defined on M to a differential equation defined on the mid-surface S_0. For simplicity, we will continue considering only scalar-valued differential operators such as the Laplacian, relevant for electrostatics. However, the ideas presented here can be extended to vector-valued differential operators, as will be discussed in part 2 of this sequence of publications where the mechanical balance laws are addressed. Let u(ζ^i) ∈ U denote a scalar-valued function from a space of sufficiently smooth functions U defined on Ω × Ξ, where we used the short-hand notation ζ^i ≡ {ζ¹, ζ², ζ³} for the parametrization. Assume that u(ζ^i) satisfies the differential equation

L(u(ζ^i); p) = 0 , ∀ζ^i ∈ Ω × Ξ , (29)

where L(u(ζ^i); p) is now a partial differential operator. A set of appropriate boundary conditions closes Eq. (29):

g_m(u(ζ^α), ζ^α; p) = 0 , ∀ζ^i ∈ Ω × ∂±Ξ , m ∈ [1, ..., N_∂Ξ] , (30)
h_n(u(ζ^i), ζ^i; p) = 0 , ∀ζ^i ∈ ∂_k Ω × Ξ , n ∈ [1, ..., N_∂Ω] , (31)

where we used the short-hand notation ζ^α ≡ {ζ¹, ζ²}. Here, g_m denotes the m-th boundary condition on either S+ or S− while h_n denotes the n-th boundary condition on S^n_|| ⊆ S_||.
Note that the number of boundary conditions, N_∂Ξ and N_∂Ω, is determined by the order of the differential operator. With Eqs. (29)-(31) formulated in terms of the parametrization ζ^i, dimension reduction requires eliminating dependence on the parametric direction ζ³, implying that the dimensionally-reduced equations only depend on the mid-plane parametrization ζ^α. To this end, the key idea of our proposed method is to express the solution u(ζ^i) as

u(ζ^i) = Σ_{k=0}^∞ û_k(ζ^α) P_k(θ(ζ³)) , (32)

where û_k(ζ^α) denotes the unknown coefficients of the spectral expansion, and θ is the mapping θ : Ξ → (a, b). It should be emphasized that the coefficients û_k(ζ^α) only depend on the parametrization of the mid-plane S_0 and not on the parametric direction ζ³ associated with the thickness direction. Instead, the dependence on ζ³ is entirely contained in the polynomials P_k(θ). Similarly, the parameters p_j ∈ p may also depend on the parametrization ζ^i, and are thus similarly expanded as

p_j(ζ^i) = Σ_{k=0}^∞ p̂_jk(ζ^α) P_k(θ(ζ³)) , (33)

where the coefficients p̂_jk(ζ^α) are found by applying Eq. (22) along the thickness direction. To obtain a finite order approximation, the solution expansion in Eq. (32) is truncated at order N_u, which reduces Eq. (29) to

L( Σ_{k=0}^{N_u} û_k(ζ^α) P_k(θ(ζ³)); { Σ_{k=0}^∞ p̂_jk(ζ^α) P_k(θ(ζ³)) }_{j=1,...,N_p} ) = 0 , ∀ζ^i ∈ Ω × Ξ . (34)

As in Eq. (28), we obtain the equations for the unknown coefficients û_k(ζ^α) by taking the inner product with the l-th order polynomial P_l and assuming weighted square integrability:

⟨ L( Σ_{k=0}^{N_u} û_k(ζ^α) P_k(θ(ζ³)); { Σ_{k=0}^∞ p̂_jk(ζ^α) P_k(θ(ζ³)) }_{j=1,...,N_p} ), P_l(θ(ζ³)) ⟩_w = 0 , ∀ζ^α ∈ Ω , ∀l ∈ [0, ..., N_u] . (35)

In Eq. (35), any differentiation with respect to ζ³ can be carried out explicitly, allowing the evaluation of the inner product. Using the orthogonality condition in Eq. (20), Eq. (35) yields N_u + 1 partial differential equations that only depend on the mid-surface parametrization ζ^α.
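The mechanics of Eqs. (32)-(35) can be illustrated on the simplest possible thickness problem: a one-dimensional Poisson equation u'' = −q across a slab, expanded in the first three Chebyshev polynomials. This toy sketch is not the membrane problem itself; it only shows how the projection fixes one coefficient while the boundary values supply the remaining equations. All numerical values are illustrative.

```python
import numpy as np

def reduce_poisson_1d(q, u_minus, u_plus):
    # Galerkin reduction of u''(theta) = -q on (-1, 1) with u(-1) = u_minus and
    # u(1) = u_plus, expanded in P0, P1, P2. Since P0'' = P1'' = 0 and P2'' = 4,
    # projecting the equation onto P0 (cf. Eq. (35)) gives 4*u2 = -q; the two
    # remaining equations are replaced by the boundary values.
    u2 = -q / 4.0
    u1 = (u_plus - u_minus) / 2.0
    u0 = (u_plus + u_minus) / 2.0 - u2
    return u0, u1, u2

u0, u1, u2 = reduce_poisson_1d(q=2.0, u_minus=0.0, u_plus=1.0)
theta = np.linspace(-1.0, 1.0, 101)
u_reduced = u0 + u1 * theta + u2 * (2.0 * theta**2 - 1.0)
u_exact = -theta**2 + 0.5 * theta + 1.5   # exact solution for q = 2
# quadratic solutions are represented exactly, so the two curves coincide
```

Because the exact solution is quadratic, the three-term expansion reproduces it to machine precision; the same structure reappears below, where φ_2 is fixed by the reduced Gauss law and φ_1 by interface data.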
Thus, we obtain a set of dimensionally-reduced differential equations for the N_u + 1 unknown coefficients û_k(ζ^α) defined on the mid-surface of the thin film M. Due to the potential coupling between higher and lower order coefficients, the coefficients û_k(ζ^α) in Eqs. (34) and (35) do not necessarily coincide with the coefficients of the series expansion in Eq. (32). However, for notational simplicity, we use the same symbol û_k(ζ^α) throughout. We further note that the series expansions of the parameters p_j are retained in Eq. (35). However, truncation of these series can often be physically motivated and might be necessary to obtain an analytically tractable theory, as will be seen in Sec. 4 when applying the proposed method to the electrostatics of thin films. The original problem, Eq. (29), requires application of the boundary conditions in Eqs. (30) and (31). However, taking the inner product in Eq. (35) eliminates the derivatives along the ζ³-direction such that the boundary conditions in Eq. (30) need to be enforced by discarding N_∂Ξ equations from Eq. (35) and replacing them by the N_∂Ξ boundary conditions in Eq. (30). This, in fact, sets a limit on the minimum order of expansion of the solution, i.e. N_u ≥ N_∂Ξ − 1. The N_∂Ω boundary conditions on the lateral boundary S_|| in Eq. (31) are dimensionally-reduced analogously to Eq. (35). Substituting the truncated expansion of the solution into Eq. (31) and taking the inner product with the l-th order polynomial P_l, assuming weighted square integrability, yields new boundary conditions defined on the boundary of the mid-surface, ∂S_0:

⟨ h_n( Σ_{k=0}^{N_u} û_k(ζ^α) P_k(θ(ζ³)), ζ^i; { Σ_{k=0}^∞ p̂_jk(ζ^α) P_k(θ(ζ³)) }_{j=1,...,N_p} ), P_l(θ(ζ³)) ⟩_w = 0 , ∀l ∈ [1, ..., N_∂Ω] . (36)

This fully eliminates the parametric direction ζ³ from Eqs. (29)-(31) such that the dimensionally-reduced problem is given by the first N_u + 1 − N_∂Ξ differential equations in Eq. (35), the boundary conditions on S± in Eq.
(30), and the boundary conditions on ∂S_0 in Eq. (36). In the following, we refer to this dimensionally-reduced theory as a (2 + δ)-dimensional theory. Note that when deriving a (2 + δ)-dimensional theory, N_u can, in principle, be chosen arbitrarily large. However, the algebraic complexity significantly increases with the order of the expansion. This can be seen in the detailed derivation of the (2 + δ)-dimensional theory of the electrostatics of thin films in Sec. 2.3 of the SM. Hence, a (2 + δ)-dimensional theory generally only remains analytically tractable for low-order expansions of the solution. Thus, for the proposed method to yield meaningful results, we require the exact solution to be well-approximated by low-order polynomials along the thickness. This, however, is a common and often reasonable approximation for thin bodies, as considered here. Thus, given the validity of the low-order expansion of the solution, our proposed method is exact and does not require additional approximations. For the remainder of this manuscript, we specialize our derivations to Chebyshev polynomials. Chebyshev polynomials are defined on the interval (a, b) = (−1, 1) and are orthogonal with respect to the inner product in Eq. (19) when the weight function is

w(θ) = (1/π) · 1/√(1 − θ²) , θ ∈ (−1, 1) . (37)

The first three Chebyshev polynomials are

P_0(θ) = 1 , (38)
P_1(θ) = θ , (39)
P_2(θ) = 2θ² − 1 , (40)

and are plotted in Fig. 3. Chebyshev polynomials are commonly used in spectral methods and are amenable to analytical derivations. However, the procedure presented in this section is sufficiently general and can be similarly followed using any other set of complete and orthogonal polynomials defined on a bounded domain.
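The orthogonality relation of Eq. (20) under the weight of Eq. (37) is easy to verify numerically: the substitution θ = cos(x) turns the weighted integral into (1/π) ∫_0^π cos(kx) cos(lx) dx. The short sketch below (not from the original text) evaluates this with a midpoint rule, which is exact for the low trigonometric degrees involved:

```python
import numpy as np

# Check Eq. (20) for the Chebyshev weight of Eq. (37). With theta = cos(x),
# <P_k, P_l>_w = (1/pi) * int_0^pi cos(k x) cos(l x) dx, evaluated here with
# an n-point midpoint rule (exact for trigonometric degree < 2n).
n = 64
x = (2 * np.arange(n) + 1) * np.pi / (2 * n)

def inner(k, l):
    return np.sum(np.cos(k * x) * np.cos(l * x)) / n

G = np.array([[inner(k, l) for l in range(3)] for k in range(3)])
# expected Gram matrix: diag(1, 1/2, 1/2), i.e. c_0 = 1 and c_1 = c_2 = 1/2
```

The off-diagonal entries vanish to machine precision, and the diagonal reproduces the normalization constants c_0 = 1 and c_k = 1/2 for k ≥ 1 used implicitly throughout the derivations.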
A Dimensionally-Reduced Theory for the Electrostatics of Thin Films

In this section, we present the (2 + δ)-dimensional theory of the electrostatics of thin films, obtained by applying the dimension reduction procedure proposed in Sec. 3 to the problem setup in Sec. 2, Eqs. (10)-(16). While the detailed derivations are shown in Sec. 2 of the SM, the key assumptions of the (2 + δ)-dimensional theory are discussed here. To obtain a dimensionally-reduced equation in place of Eq. (13), we introduce low-order expansions of the position vector x ∈ M and the electric potential in the membrane φ_M in terms of Chebyshev polynomials P_k:

x = Σ_{k=0}^1 x_k(ζ^α) P_k(θ(ζ³)) , (41)
φ_M = Σ_{k=0}^2 φ_k(ζ^α) P_k(θ(ζ³)) , (42)

where ζ^α ≡ {ζ¹, ζ²} ∈ Ω indicates the parametrization of the mid-surface S_0 with Ω being the parametric domain, ζ³ ∈ (−δ/2, δ/2) is the parametrization along the thickness direction, and θ is the mapping θ : (−δ/2, δ/2) → (−1, 1). In Eqs. (41) and (42), x and φ_M no longer carry a check symbol to distinguish them from their respective exact quantities, x̌ and φ̌_M. The order of expansion of the position vector x is motivated by the common choice of Kirchhoff-Love kinematics, which is suitable for thin materials such as lipid membranes. Kirchhoff-Love kinematics assumes that any point along the normal to the mid-surface remains on the normal to the mid-surface and maintains the same distance to the mid-surface upon deformation [40]. Using this kinematic assumption, the expansion of the position vector becomes

x = x_0 P_0(θ(ζ³)) + (δ/2) n P_1(θ(ζ³)) , (43)

where x_0 ∈ S_0 denotes a point on the mid-surface and n is the normal vector of the mid-surface. Equation (43) further implies n+ = −n− = n. According to the discussion in Sec. 3.2, the electric potential must be expanded to at least first order to enforce two boundary conditions on the top and bottom surfaces, S±, consistent with the differential order of Eq. (13).
However, to preserve the differential nature of Eq. (13), we expand the potential to second order. Furthermore, to make the dimensionally-reduced theory tractable, we introduce two further crucial assumptions:

(δκ)² ≪ 1 , (44)
(δ/ℓ)² ≪ 1 , (45)

where κ is the principal curvature with the largest magnitude and ℓ is a characteristic in-plane length scale for the potential and curvature (see SM, Sec. 2.3 for details). Equation (44) implies that the radius of curvature must be much larger than the thickness of the membrane, and Eq. (45) implies that the potential and geometry of the thin film change over length scales much larger than the thickness. Thus, Eqs. (44) and (45) are also conditions for the applicability of the theory proposed here. Using Eqs. (42)-(45), applying the dimension reduction method proposed in Sec. 3 to Eq. (13) yields the dimensionally-reduced equation

ε_M ∆_s φ_0(ζ^α) − 4 C_M φ_1(ζ^α) H + (16/δ) C_M φ_2(ζ^α) = 0 , ∀ζ^α ∈ Ω , (46)

where C_M = ε_M/δ is the membrane capacitance per unit area [41] and H is the mean curvature of the mid-surface. The surface Laplacian is defined as ∆_s(•) = (•)_{,α:β} a^{αβ}, where the colon indicates the surface covariant derivative of a vector, a^{αβ} denotes the contravariant components of the identity tensor on the mid-surface, and Einstein's summation convention is used (see SM, Sec. 2 for details). We consider Eq. (46) as an equation for the coefficient φ_0(ζ^α) and choose the coefficients φ_1(ζ^α) and φ_2(ζ^α) such that some of the interface conditions on S± are satisfied. To that end, we can select one of the interface conditions on S−, Eq. (11) or (12), and one of the interface conditions on S+, Eq. (14) or (15). The remaining two interface conditions need to be enforced as boundary conditions for Eqs. (10) and (16) such that all interface conditions are satisfied. In this article, we choose the jump conditions, Eqs.
(12) and (14), to find expressions for φ_1(ζ^α) and φ_2(ζ^α), and impose continuity of the potential, Eqs. (11) and (15), as boundary conditions in the two bulk domains, Eqs. (10) and (16). As detailed in Sec. 2.3 of the SM, this choice yields

φ_1(ζ^α) = −(1/(2 C_M)) [ n · ⟨ε_B e_B⟩_M − ½ (σ+ − σ−) ] , ∀ζ^α ∈ Ω , (47)
φ_2(ζ^α) = −(1/(16 C_M)) [ n · ⟦ε_B e_B⟧_M − (σ+ + σ−) ] , ∀ζ^α ∈ Ω , (48)

where e_B± is the electric field in B±, ⟨ε_B e_B⟩_M = ½ (ε_B+ e_B+|_S+ + ε_B− e_B−|_S−) is the average, and ⟦ε_B e_B⟧_M = ε_B+ e_B+|_S+ − ε_B− e_B−|_S− is the jump of the bulk fields across the thin film.

Deformations of the body M stretch and compress the bounding surfaces S±, leading to changes in surface charge densities σ±. For a deforming body, it is thus convenient to express the surface charge densities with respect to a flat reference configuration. Using expressions for the change of surface area of S± under deformations, the surface charge densities are expressed as

σ± ≈ (1/J_0) σ±_0 (1 ± Hδ) , (49)

where J_0 denotes the relative area change of the mid-surface with respect to a flat reference configuration with charge densities σ±_0. Substituting these expressions into Eqs. (47) and (48), we find

φ_1(ζ^α) = −(1/(2 C_M)) [ n · ⟨ε_B e_B⟩_M − (1/(2 J_0)) (σ+_0 − σ−_0) − (Hδ/(2 J_0)) (σ+_0 + σ−_0) ] , ∀ζ^α ∈ Ω , (50)
φ_2(ζ^α) = −(1/(16 C_M)) [ n · ⟦ε_B e_B⟧_M − (1/J_0) (σ+_0 + σ−_0) − (Hδ/J_0) (σ+_0 − σ−_0) ] , ∀ζ^α ∈ Ω . (51)

While Eqs. (50) and (51) are more useful in practice, Eqs. (47) and (48) are used for simplicity in the remainder of this article. We now apply the dimension reduction procedure to the boundary conditions on S_|| in Eqs. (17) and (18). Upon expanding the prescribed potential, φ̄_M = Σ_{k=0}^∞ φ̄_Mk(ζ^α) P_k(θ(ζ³)), Eq. (17) becomes

φ_0(ζ^α) = φ̄_M0(ζ^α) , ∀ζ^α ∈ ∂Ω_0D , (52)

where ∂Ω_0D is the part of the parametric domain corresponding to ∂S_0D = ∂S_0 ∩ S_||D. Similarly, with the series expansion of the electric field component ē = Σ_{k=0}^∞ ē_k(ζ^α) P_k(θ(ζ³)), Eq.
(18) becomes

−ν^α ( φ_{0,α}(ζ^α) + (δ/4) φ_{1,β}(ζ^α) b^β_α + (δ²/16) φ_{2,β}(ζ^α) b^β_γ b^γ_α ) = ē_0 , ∀ζ^α ∈ ∂Ω_0N , (53)

where ∂Ω_0N is the part of the parametric domain corresponding to ∂S_0N = ∂S_0 ∩ S_||N, and ν^α and b^β_α are the components of ν and the curvature tensor, respectively, and Einstein's summation convention applies. A detailed derivation of Eqs. (52) and (53) is provided in Sec. 2.3 of the SM.

Table 1: Corresponding equations between the three-dimensional and (2 + δ)-dimensional theories. Note that the equations for φ_1 and φ_2 have not been assigned a location in the physical domain. This is due to the electric field being evaluated on both S+ and S− in the expressions for φ_1 and φ_2.

3-dimensional theory | (2 + δ)-dimensional theory
∆φ̌_B− = −q_B−/ε_B− , x ∈ B− | ∆φ̌_B− = −q_B−/ε_B− , x ∈ B−
⟦φ̌⟧ = 0 , x ∈ S− | ⟦φ̌⟧ = 0 , x ∈ S−
n+ · ⟦ε ě⟧ = σ− , x ∈ S− | φ_1 = −(1/(2 C_M)) [ n · ⟨ε_B e_B⟩_M − ½ (σ+ − σ−) ]
ε_M ∆φ̌_M = 0 , x ∈ M | ε_M ∆_s φ_0 − 4 C_M φ_1 H + (16/δ) C_M φ_2 = 0 , x ∈ S_0
n− · ⟦ε ě⟧ = σ+ , x ∈ S+ | φ_2 = −(1/(16 C_M)) [ n · ⟦ε_B e_B⟧_M − (σ+ + σ−) ]
⟦φ̌⟧ = 0 , x ∈ S+ | ⟦φ̌⟧ = 0 , x ∈ S+
∆φ̌_B+ = −q_B+/ε_B+ , x ∈ B+ | ∆φ̌_B+ = −q_B+/ε_B+ , x ∈ B+

Equations (10), (11), (15), (16), (46), (47), and (48) together with the boundary conditions Eqs. (52) and (53) form a closed set of equations that is independent of the parametric direction ζ³, while explicitly preserving effects due to the finite thickness of M. This set of equations constitutes the dimensionally-reduced, (2 + δ)-dimensional theory of the electrostatics of thin films and is summarized in Tab. 1. The expansion of the potential in Eq. (42) allows finding the potential drop across the thin film:

⟦φ(ζ^α)⟧_M = 2 φ_1(ζ^α) = −(1/C_M) [ n · ⟨ε_B e_B⟩_M − ½ (σ+ − σ−) ] , (54)

which is a generalization of an expression derived in Refs. [42][43][44].
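The potential drop formula in Eq. (54) admits a quick numerical sanity check: when the average bulk field vanishes, the drop is set by the surface-charge asymmetry divided by C_M = ε_M/δ. The sketch below uses illustrative bilayer-like values (ε_M = 2ε_0 and δ = 4 nm are assumptions, not parameters from this paper) and recovers the familiar membrane capacitance scale:

```python
# Illustrative numeric check of the parallel-plate structure of Eq. (54).
# None of the values below are taken from the text; they are typical
# bilayer-like numbers chosen only to set the scale.
eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_M = 2.0 * eps0          # low-dielectric membrane core (assumed)
delta = 4e-9                # membrane thickness, m (assumed)
C_M = eps_M / delta         # membrane capacitance per unit area, F/m^2

C_M_uF_cm2 = C_M * 1e6 / 1e4   # convert F/m^2 -> uF/cm^2

# potential drop for a pure surface-charge asymmetry, taking the average
# bulk-field term n . <eps_B e_B>_M in Eq. (54) to be zero
sigma_plus, sigma_minus = 1e-3, -1e-3     # C/m^2, illustrative
dphi = (0.5 * (sigma_plus - sigma_minus)) / C_M   # volts
```

With these numbers C_M comes out near 0.44 µF/cm², of the same order as measured specific capacitances of lipid bilayers, and the resulting drop is a few hundred millivolts.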
Equation (54) can also be written as

⟦φ(ζ^α)⟧_M = Σ_eff(ζ^α)/C_M , (55)

with the effective surface charge density Σ_eff = −n · ⟨ε_B e_B⟩_M + ½ (σ+ − σ−), indicating an analogy to a parallel plate capacitor. Similarly, for two parallel, charged surfaces a distance δ apart and with constant charge density q in the space between the plates, the second order Chebyshev coefficient of the potential is φ̂_2 = −Q/(16 C), where Q = qδ and C is the capacitance per unit area. This motivates the definition of an effective charge density Q_eff:

Q_eff(ζ^α) = −16 C_M φ_2(ζ^α) = n · ⟦ε_B e_B⟧_M − (σ+ + σ−) , (56)

which, upon substitution in Eq. (46), yields

ε_M ∆_s φ_0(ζ^α) − 4 C_M φ_1(ζ^α) H = Q_eff(ζ^α)/δ , ∀ζ^α ∈ Ω , (57)

where Q_eff/δ appears like a charge density in Gauss' law. From Eqs. (47) and (48) and the definition of C_M, we find that both φ_1(ζ^α) and φ_2(ζ^α) are O(δ) terms, suggesting that Q_eff is O(1). Thus, for Eq. (57) to remain well-posed in the limit of δ → 0, we require Q_eff ∝ φ_2(ζ^α) → 0, implying

n · ⟦ε_B e_B⟧_M = σ+ + σ− , (58)

which is the jump condition in Eq. (3) for a surface with charge density σ+ + σ−. The same condition has been used for lipid membranes in Refs. [45,46]. However, we generally do not consider this limit and instead work with the full expression in Eq. (46). Lastly, note that we have defined Eqs. (46)-(48) on the mid-surface S_0. However, the average ⟨ε_B e_B⟩_M and jump ⟦ε_B e_B⟧_M in Eqs. (47) and (48), respectively, require evaluation of the electric field on S+ and S−, as shown in Fig. 4a. In practice, when solving the governing equations numerically, the membrane could be treated as a surface such that the interface conditions would be enforced on either side of S_0 instead. This viewpoint is illustrated in Fig. 4b. Treating the lipid membrane as a surface when creating the discretization introduces an error of order O(δκ).
However, the (2 + δ)-dimensional theory is truncated at order O((δκ)²), suggesting that the error resulting from treating the lipid membrane as a surface can become dominant at large curvatures.

Comparison to Three-Dimensional Gauss Law

In this section, the accuracy of the (2 + δ)-dimensional theory is tested on flat geometries, cylinders, and spheres, which are common lipid membrane geometries encountered in both theory and experiments.

Analytical Comparison

We begin by applying the (2 + δ)-dimensional theory to examples of thin films embedded in dielectric bulk media with univariate potentials. In the interest of clarity, many of the details of the analytical solutions are described in Sec. 3 of the SM. For the examples considered, we find that the pointwise, relative error between the exact and (2 + δ)-dimensional theories does not exceed 2%. For cylinders and spheres, the error decreases rapidly with increasing radius.

Flat Geometry

Consider the flat, thin film shown in Fig. 5a, where the potential only depends on the x-direction and q_B± = 0. The electric field is prescribed on the left-hand side boundary of the domain and the potential is fixed on the right-hand side boundary of the domain:

−dφ_B−/dx = ē , x = −δ/2 − L− , (59)
φ_B+ = 0 , x = δ/2 + L+ . (60)

By simplifying Eq. (13), it becomes apparent that the solution to the exact theory is at most linear in x within the thin film. Since linear solutions can be represented exactly in the (2 + δ)-dimensional theory, the (2 + δ)-dimensional theory recovers the exact solution. The governing equations and solutions for both the exact and (2 + δ)-dimensional theory for this case can be found in Sec. 3.1 of the SM.

Cylinders

The next example is similar to the one discussed in the previous section but with the flat geometry replaced by a cylinder with mid-surface radius R_0. The setup, depicted in Fig.
5b, is axisymmetric and homogeneous along the cylinder's axis such that the potential only depends on the radial direction. Similar to before, we fix the potential to be zero at r = R_E and impose the electric field at r = R_A > 0, with R_E and R_A shown in Fig. 5b:

−dφ_B−/dr = ē , r = R_A , (61)
φ_B+ = 0 , r = R_E . (62)

The simplified equations and corresponding solutions for both the three-dimensional and (2 + δ)-dimensional theories are presented in Sec. 3.2 of the SM. In contrast to the flat geometry, the potential is no longer linear within the membrane and the (2 + δ)-dimensional theory does not reproduce the exact solution. To assess the differences between the exact and (2 + δ)-dimensional solutions, we introduce the following non-dimensional quantities:

r* = r/δ , δ* = 1 , ε*_M = ε_M/ε_0 , ε*_B = ε_B/ε_0 , φ* = φ ε_0/(δ σ+) , σ±* = σ±/σ+ , ē* = ē ε_0/σ+ .

This non-dimensionalization does not carry physical meaning but is merely chosen for convenience. We consider two different parameter choices, cases A and B, defined in Tab. 2:

case | ε*_M | ε*_B | σ+* | σ−* | ē* | R*_A | R*_E
A    | 1    | 1    | 1   | −1  | 1  | 1    | R*_0 + 10
B    | 2    | 80   | 1   | 100 | −10 | 1   | R*_0 + 10

For case A, the dielectric constants are the same throughout the entire domain but the surface charge densities on the inner and outer surface of the thin film differ in magnitude and sign. For case B, the dielectric constants in the thin film and bulk domains differ, and the surface charge densities have different magnitudes. The non-dimensional mid-surface radius R*_0 is varied and thus not listed in Tab. 2. Figures 6a and 6b show the potential and error profiles for cases A and B, respectively, with R*_0 = 5 and the pointwise relative error defined as

E = |φ̌* − φ*| / |φ̌*| . (63)

The potential profiles from the exact and (2 + δ)-dimensional theory agree closely for both cases, with a maximum error of less than 1%. We note that in Figs.
6a and 6b, the radius of the mid-surface is only five times the thickness, even though, in the derivation of the (2 + δ)-dimensional theory, we used the assumption that the thickness is small compared to the radius of curvature (Eq. (44)). Figure 7a shows that the L²-error in the potential decreases quadratically with the non-dimensional curvature µ,

µ = δ/(2 R_0) . (64)

Spheres

We consider a sphere with axisymmetry along both the azimuthal and polar angle, such that the potential only depends on the radial direction, similar to the setup described for cylinders in Sec. 5.1.2, Fig. 5b. The boundary conditions are the same as in Eqs. (61) and (62).

Figure 9: The surface charge density changes from σ±_0 to σ±_0 + ∆σ± over a length L. L_s denotes the length of the smooth transition region between the constant and linearly varying surface charge density.

Numerical Solutions

We now test the (2 + δ)-dimensional theory numerically on examples without analytical solutions but motivated by lipid membranes. Namely, we consider flat, cylindrical, and spherical lipid membranes, typical shapes in biological systems, embedded in a symmetric, monovalent electrolyte. The lipid membranes are equipped with spatially varying surface charge densities, modeling charged lipids or charges accumulated on the interfaces between the electrolyte and lipid membrane [51]. The surface charge densities are screened by charges in the electrical double layers in the electrolyte, as described by the Poisson-Boltzmann equation [52]. Accordingly, the charge densities in the bulk domains, required in Eqs. (10) and (16), are given by

q = −(ε_B k_B T)/(e λ_D²) sinh( eφ/(k_B T) ) , (65)

with e being the elementary charge and λ_D the Debye length, determined by the bulk electrolyte concentration [52]. In physical systems, the surface charge densities on lipid membranes can vary spatially, consequently leading to in-plane variations of the electric potential.
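The Poisson-Boltzmann closure in Eq. (65) can be evaluated directly; for potentials well below k_B T/e ≈ 26 mV it reduces to the linear Debye-Hückel form q ≈ −ε_B φ/λ_D². A short sketch (all parameter values are illustrative, not taken from the text):

```python
import math

# Direct evaluation of the Poisson-Boltzmann charge density, Eq. (65), for a
# symmetric, monovalent electrolyte. All parameter values are illustrative.
e = 1.602e-19          # elementary charge, C
kB = 1.381e-23         # Boltzmann constant, J/K
T = 300.0              # temperature, K
eps0 = 8.854e-12
eps_B = 80.0 * eps0    # water-like bulk permittivity
lam_D = 1.0e-9         # Debye length, m (roughly 100 mM monovalent salt)

def q_PB(phi):
    # Eq. (65)
    return -(eps_B * kB * T) / (e * lam_D**2) * math.sinh(e * phi / (kB * T))

def q_DH(phi):
    # linearized (Debye-Hueckel) limit, valid for |e*phi| << kB*T
    return -eps_B * phi / lam_D**2

phi_small = 1e-3   # 1 mV, well below kB*T/e of about 26 mV
```

At 1 mV the full and linearized densities agree to better than a tenth of a percent, while at 100 mV the sinh nonlinearity already amplifies the screening charge severalfold, which is why the numerical examples below solve the full nonlinear form.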
To test the accuracy of the (2 + δ)-dimensional theory under different characteristic in-plane length scales, we prescribe the surface charge densities by univariate functions along the direction s:

σ± = σ±_0 + ½ ∆σ± [ 1 + ( ln cosh((L + 2s)/L_s) − ln cosh((L − 2s)/L_s) ) / ( ln cosh(5L/L_s) − ln cosh(3L/L_s) ) ] , (66)

schematically shown in Fig. 9. The surface charge densities change from their constant value σ±_0 to varying linearly over a length L until reaching the constant value σ±_0 + ∆σ±. The transition between constant and linearly varying surface charge densities is smoothed over the length L_s. By varying L and L_s, we can study the effects of different in-plane length scales on the accuracy of the (2 + δ)-dimensional theory. A more detailed description of the setup is presented in the respective geometry sections, Secs. 5.2.1-5.2.3. The differential equations governing the exact and (2 + δ)-dimensional theories are solved using a second-order finite difference scheme, with the interface conditions evaluated on S±, as described in Fig. 4a.

Flat Geometry

Consider a flat lipid membrane whose mid-surface lies in the x-y-plane, schematically shown in Fig. 10a. The surface charge densities vary only along the x-direction, i.e. x ≡ s in Eq. (66), rendering the potential independent of the y-direction. The problem is subjected to the boundary conditions

∂φ/∂z |_{z = −δ/2 − L_B2} = 0 , φ |_{z = δ/2 + L_B2} = 0 , (67)
∂φ/∂x |_{x = 0} = 0 , ∂φ/∂x |_{x = L_B1} = 0 , (68)

where L_B1 is the domain size along the x-direction and L_B2 is the domain size above and below the membrane, with the mid-surface located at z = 0. We consider two different cases: In case A, the charge densities are constant while in case B, the charge densities change from ±1 mC/m² to ±40 mC/m² along the x-direction, centered at x = L_B1/2. The two cases A and B are summarized in Tab. 3 and the remaining geometric and material parameters (L_B1, L_B2, R_0, δ, L, L_s, ε_B, ε_M, λ_D) are listed in Tab. 4.

The potential profile corresponding to case A, shown in Fig. 10b, is linear within the membrane and exponentially decays to zero in the bulk domains. Due to the non-zero surface charge density and different permittivities in the bulk and membrane, the slope of the potential is discontinuous on the top and bottom boundaries of the membrane. Since the solution is linear in the membrane, the exact and (2 + δ)-dimensional theories agree to machine precision. In Fig. 11, the results for case B are presented. Figure 11a (top) shows the potential in the region of varying surface charge densities at discrete values of x. The exact theory is plotted with full lines while the (2 + δ)-dimensional theory is plotted with dashed lines and ×-markers, revealing excellent qualitative agreement between the exact and (2 + δ)-dimensional theories across all values of x. Figure 11a (bottom) shows the corresponding relative, pointwise error, which remains below ≈ 20% throughout the entire domain. To find where the error is largest, Fig. 11b shows the potential and error where the surface charge densities change from constant values of ±1 mC/m² to varying linearly along x. In this narrow transition region of length L_s, the potential is small and the deviations from the exact solutions are large compared to other regions of the domain. However, the qualitative behavior of the membrane is still well-captured by the (2 + δ)-dimensional theory. According to Tab. 4, the length over which the surface charge densities vary linearly, L, as well as the smoothing length, L_s, are on the order of the thickness δ. This violates the assumption of the (2 + δ)-dimensional theory that the characteristic in-plane length scale is much larger than the thickness, Eq. (45).
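The charge ramp of Eq. (66) is straightforward to implement. The sketch below uses a numerically stable log-cosh and values mirroring the change on the "+" surface in case B (σ_0 = 1 mC/m² to 40 mC/m²), with L = 20 nm and L_s = 2.5 nm as in the error study; treat all of these as illustrative inputs rather than a reproduction of the paper's solver.

```python
import numpy as np

def lncosh(x):
    # numerically stable log(cosh(x)), valid for large |x|
    ax = np.abs(x)
    return ax + np.log1p(np.exp(-2.0 * ax)) - np.log(2.0)

def sigma(s, sigma0, dsig, L, Ls):
    # Surface charge ramp of Eq. (66): sigma0 far below, sigma0 + dsig far
    # above, varying linearly over a length L, smoothed over Ls.
    num = lncosh((L + 2.0*s) / Ls) - lncosh((L - 2.0*s) / Ls)
    den = lncosh(5.0*L / Ls) - lncosh(3.0*L / Ls)
    return sigma0 + 0.5 * dsig * (1.0 + num / den)

# illustrative values in mC/m^2 and nm, mirroring case B on the "+" surface
s = np.linspace(-100.0, 100.0, 2001)
prof = sigma(s, sigma0=1.0, dsig=39.0, L=20.0, Ls=2.5)
```

Far from the ramp the profile sits at the two plateau values, and at s = 0 it passes through the midpoint σ_0 + ∆σ/2, consistent with the description accompanying Fig. 9.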
This motivates examining the error in the (2 + δ)-dimensional theory under varying L_s and L. Figure 12a shows that the L²-error decreases linearly with L while L_s = 2.5 nm is fixed. To show that this is due to the decrease in the error in the transition region between constant and linearly varying surface charge densities, Fig. 12b shows the L²-error along z as a function of x, defined as

E_z(x) = [ ∫_{−δ/2−L_B2}^{δ/2+L_B2} (φ̌ − φ)² dz / ∫_{−δ/2−L_B2}^{δ/2+L_B2} φ̌² dz ]^{1/2} , (69)

where we find that the peak in the error in the transition region decays quickly as L is increased. Similarly, varying the smoothing length L_s while fixing L = 20 nm yields an error that decreases with order 1/2 (Fig. 12c). As seen in Fig. 12d, this is again a result of the decrease in error in the transition region between constant and linearly varying surface charge densities. Thus, we conclude that the error of the (2 + δ)-dimensional theory becomes small when the characteristic in-plane length scales become large compared to the thickness of the membrane.

Cylinder

Consider a cylindrical lipid membrane with mid-surface radius R_0, schematically shown in Fig. 13a. We choose a surface charge density that varies only along the z-direction of the cylinder such that the setup is axisymmetric, i.e. s ≡ z in Eq. (66). The boundary conditions remain similar to the flat case:

∂φ/∂r |_{r = 0} = 0 , φ |_{r = R_0 + δ/2 + L_B2} = 0 , (70)
∂φ/∂z |_{z = 0} = 0 , ∂φ/∂z |_{z = L_B1} = 0 . (71)

As compared to the flat case, however, the first boundary condition is replaced by a symmetry condition in the center of the cylinder. All geometric and material properties remain as before and are listed in Tabs. 3 and 4. Additionally, the mid-surface radius is fixed at R_0 = 25 nm, unless stated otherwise. The potential profile for case A is shown in Fig. 13b (top). Due to the cylinder's curvature, the solution is no longer linear in the membrane and the (2 + δ)-dimensional theory does not capture the solution exactly.
However, the relative error, plotted in Fig. 13b (bottom), does not exceed 0.2%. Figure 14a shows that the L²-error decreases quadratically with increasing mid-surface radius R_0. This is consistent with the assumption of the (2 + δ)-dimensional theory that the radius of curvature is large compared to the thickness, Eq. (44). In Fig. 15a (top), the potential profiles for case B are plotted at discrete values of z along the cylinder. Again, the qualitative behavior of the potential is well approximated by the (2 + δ)-dimensional theory. The relative error is plotted in Fig. 15a (bottom), and, as before, does not exceed 20% anywhere in the domain despite the additional error introduced by the curvature of the geometry. The largest error again appears in the transition region where the potential is small, as is shown in Fig. 15b. Similar to the flat case, the L²-error decreases with order 1 and about 1/2 with increasing L and L_s, respectively, as shown in Figs. 12a and 12c. The error E_z(r) decreases similarly to Figs. 12b and 12d and is thus omitted here. Therefore, we conclude, as before, that the error is small when the characteristic in-plane length scale is large compared to the thickness of the membrane. Figure 14b shows how the error for case B changes with increasing radius, for L = 20 nm and L_s = 10 nm. As compared to case A, the error does not converge quadratically but instead saturates. This is a result of the error due to in-plane surface charge density changes dominating over the error due to the curvature of the cylinder, and we expect the same scaling as in Fig. 14a for smaller radii of curvature.

Spheres

Consider a sphere with mid-surface radius R_0, shown schematically in Fig. 16a. The surface charge density is chosen to only depend on the Θ-direction, i.e. s ≡ ΘR_0 in Eq. (66), and is thus axisymmetric along the Φ-direction.
Similar to the cylindrical membrane, the problem is closed with the boundary conditions

\left. \frac{\partial \phi}{\partial r} \right|_{r=0} = 0 , \qquad \left. \phi \right|_{r = R_0 + \delta/2 + L_{B2}} = 0 , (72)

\left. \frac{\partial \phi}{\partial \Theta} \right|_{\Theta R_0 = (\pi R_0 - L_{B1})/2} = 0 , \qquad \left. \frac{\partial \phi}{\partial \Theta} \right|_{\Theta R_0 = (\pi R_0 + L_{B1})/2} = 0 . (73)

The radius of the sphere's mid-surface is chosen as R_0 = 25 nm and the remaining geometric and material parameters are listed in Tabs. 3 and 4. For case A, Fig. 16b (top) shows excellent qualitative agreement between the exact and (2 + δ)-dimensional theories while Fig. 16b (bottom) shows that the relative error does not exceed 0.5%. As with the cylinder, the error reduces quadratically with increasing radius, as shown in Fig. 14a. For case B, the potential profiles for the exact and (2 + δ)-dimensional theories at discrete values of Θ are plotted in Fig. 17a (top), showing good qualitative agreement. Figure 17a (bottom) shows that the corresponding relative error does not exceed 10%, with the error again being largest in the transition region, as seen in Fig. 17b. The decrease in error with increasing L, L_s, and R_0 is consistent with the results for the flat and cylindrical geometries, as shown in Figs. 12a, 12c, and 14b, respectively.

Conclusion and Outlook

A theory describing the electromechanics of lipid membranes requires resolving the electric potential across their thickness. This requirement is incompatible with treating lipid membranes as strictly two-dimensional surfaces, a common approach to modeling lipid membrane mechanics. Nonetheless, surface theories have both analytical and numerical advantages, motivating the derivation of a novel, effective surface theory for the electromechanics of lipid membranes in this sequence of articles. We start from a three-dimensional model and propose a new dimension reduction procedure that assumes a low-order solution expansion along the lipid membrane thickness. Expanding using orthogonal polynomials allows us to derive new differential equations for the expansion coefficients.
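As a toy illustration of such a low-order thickness expansion (a sketch under our own assumptions, not the paper's implementation), a profile across the scaled thickness coordinate ζ ∈ [−1, 1] can be projected onto the first three Chebyshev polynomials using Gauss-Chebyshev quadrature:

```python
import numpy as np

def chebyshev_coeffs(f, n_modes=3, n_quad=64):
    """Project f onto T_0, ..., T_{n_modes-1} via Gauss-Chebyshev quadrature."""
    k = np.arange(n_quad)
    zeta = np.cos(np.pi * (k + 0.5) / n_quad)  # Chebyshev nodes on (-1, 1)
    coeffs = []
    for m in range(n_modes):
        Tm = np.cos(m * np.arccos(zeta))       # T_m evaluated at the nodes
        c = (2.0 / n_quad) * np.sum(f(zeta) * Tm)
        if m == 0:
            c /= 2.0                           # T_0 carries a different normalization
        coeffs.append(c)
    return coeffs

# A quadratic profile is captured exactly by three modes:
f = lambda z: 1.0 + 0.5 * z + 0.25 * (2.0 * z**2 - 1.0)  # = T0 + 0.5*T1 + 0.25*T2

print(chebyshev_coeffs(f))  # ≈ [1.0, 0.5, 0.25]
```

In the actual theory, analogous expansion coefficients of the potential become the unknowns of the effective surface equations.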
These equations are not dependent on the thickness direction but account for the finite thickness of lipid membranes. Therefore, we refer to such a dimensionally-reduced, effective surface theory as (2 + δ)-dimensional. Applying the proposed dimension reduction procedure to the electrostatics of lipid membranes yields an effective surface form of Gauss' law. Using both analytical and numerical comparisons, we show excellent qualitative agreement between the three-dimensional and (2 + δ)-dimensional theories. The two theories also show excellent quantitative agreement when the electric potential changes over length scales larger than the lipid membrane thickness, consistent with the assumptions of the theory. Similar approaches to derive dimensionally-reduced theories for the electromechanics of thin films were proposed by Green and Naghdi [53] and Khoma [54] based on Legendre polynomials. However, these authors do not make their order of expansion precise, only giving general equations for the expansion coefficients. This generality makes their theories largely intractable, and we are unaware of any practical applications beyond the examples discussed in Ref. [53]. Edmiston and Steigmann [55] also derive a dimensionally-reduced theory for the electrostatics of thin films but consider the limit of vanishing thickness, δ → 0. The authors assume equal and opposite surface charge densities on S^+ and S^− and neglect fields external to the thin film. However, their theory can be easily generalized to account for arbitrary surface charge densities and external electric fields. This allows comparing the Edmiston-Steigmann theory to the (2 + δ)-dimensional theory in the limit of vanishing thickness. In this limit, the (2 + δ)-dimensional theory produces the same normal component of the electric field as the Edmiston-Steigmann theory. The φ_2(ζ^α) contribution in Eq. (46) does not appear in the Edmiston-Steigmann theory, which is expected considering φ_2(ζ^α) → 0 as δ → 0.
However, the φ_1(ζ^α) contribution in Eq. (46) is also absent in the Edmiston-Steigmann theory even though it remains non-zero in the limit of vanishing thickness. Thus, we find that generalizing the theory of Edmiston and Steigmann [55] does not correspond to the (2 + δ)-dimensional theory in the limit of vanishing thickness. The leaky dielectric model (LDM), originally devised by Melcher and Taylor for droplets in weak electrolytes [56, 57], is often invoked to describe lipid vesicles in an external electric field [10, 58]. The LDM describes a droplet or vesicle with radius R much larger than the Debye length λ_D, exposed to an electric field that is large compared to the thermal voltage (Baygents-Saville limit). The Baygents-Saville limit allows for a macroscopic description that coarse-grains the genuine interface and its diffuse layer into an effective interface [59, 60], thus not capturing electrokinetic effects on the length scale of the diffuse layer. In contrast, the (2 + δ)-dimensional theory takes a microscopic perspective and describes a material interface without making any assumption about the bulk fluid domains. Hence, the LDM and (2 + δ)-dimensional theory describe electric field effects on different length scales and are thus not comparable. Instead, the (2 + δ)-dimensional theory should serve as a starting point for deriving the LDM for lipid vesicles, a derivation currently missing from the literature. Recently, Ma et al. [61] proposed a model similar to the LDM, specific to lipid vesicles but valid in the strong electrolyte limit, as opposed to the LDM, which is valid in the weak electrolyte limit. Their microscopic electrostatics model assumes equal and opposite surface charge densities and continuous electric displacements across the membrane. However, according to the (2 + δ)-dimensional theory, the latter would only be valid in the limit of vanishing thickness.
Furthermore, their microscopic electrostatics model is not consistent with the potential drop derived in Eq. (54). The effect of adopting the (2 + δ)-dimensional theory as a starting point in the derivation of the model by Ma et al. [61] as well as the LDM (see [59, 60, 62-64]) currently remains an open question and merits future investigation. This article is the first in a series of three that systematically derives the governing equations describing the electromechanics of lipid membranes. In subsequent articles, the dimension reduction procedure proposed in this article is applied to the mechanical balance laws and appropriate constitutive equations, yielding a complete and self-consistent theory of the electromechanics of lipid membranes.

Figure 2: The mid-surface S_0 is parametrized using the curvilinear coordinates {ζ^1, ζ^2} ∈ Ω.

Figure 3: Plot of the first three Chebyshev polynomials.

S^− denote averages and jumps across the thin film M, respectively.

Figure 4: When solving the (2 + δ)-dimensional theory numerically, a discretization that explicitly accounts for the finite thickness of the membrane, as shown in (a), may be cumbersome to implement, in particular for moving meshes. Alternatively, the mesh on the bounding surfaces, S^− and S^+, can be collapsed onto the membrane mid-surface S_0, as shown in (b).

[7, 47-50]. Section 5.1 considers examples with analytical solutions while Sec. 5.2 presents a numerical comparison for examples without analytical solutions but relevant for lipid membranes.

Figure 5: Setup for the flat geometry (a), cylinder and sphere (b). The bulk domains are dielectric materials without any free charge. On one boundary, an electric field is prescribed, while the potential is fixed on the other.

Figure 6: Comparison between the exact and (2 + δ)-dimensional theories on the cylinder for the two test cases described in Tab. 2.

consistent with Eq. (44). In Sec.
3.2 of the SM, this result is confirmed by comparing the exact and (2 + δ)-dimensional solutions analytically.

(a) Cylinder. (b) Sphere.

Figure 7: Dependence of the L_2-error on the non-dimensional curvature µ = δ/(2R_0) for the cylinder (a) and sphere (b) for cases A and B. The error is found to reduce quadratically with µ.

and the governing equations and corresponding solutions for both the exact and (2 + δ)-dimensional theory are shown in Sec. 3.3 of the SM. To compare the exact and (2 + δ)-dimensional solutions, we again consider the two test cases in Tab. 2. For cases A and B, the potential profiles and relative errors are plotted in Figs. 8a and 8b, respectively. As for the cylindrical case, the pointwise relative error does not exceed 1% in either case and the L_2-error decreases quadratically with the non-dimensional curvature µ, as shown in Fig. 7b.

Figure 8: Comparison between the exact and (2 + δ)-dimensional theories on the sphere for the two test cases described in Tab. 2.

of a flat lipid membrane with spatially varying surface charges on the top and bottom surfaces. (b) Potential profiles for constant surface charge densities for the exact and (2 + δ)-dimensional theories.

Figure 10: Schematic setup (a) and potential profiles (b) for the flat lipid membrane embedded in a symmetric, monovalent electrolyte. The exact and (2 + δ)-dimensional theories agree to machine precision.

(a) Potential profiles in the region of varying surface charge densities. (b) Potential profiles in the left transition region of the varying surface charge densities.

Figure 11: Potential profiles (top) and relative errors (bottom) plotted at discrete values of x for case B of the flat membrane. The full lines represent the exact theory and the dashed lines (nearly indistinguishable in (a)) represent the (2 + δ)-dimensional theory. The error is only plotted down to 10^−5.
In (a), the values of x are taken from the entire region of varying surface charge densities while (b) shows potential profiles from the left transition region between constant and linearly varying surface charge densities.

(a) Error in L_2-norm for varying L. (b) Error in E_z-norm for varying L. (c) Error in L_2-norm for varying L_s. (d) Error in E_z-norm for varying L_s.

Figure 12: Error in the L_2-norm for case B against varying L (a) and L_s (c) for all geometries and error in the E_z-norm, as defined in Eq. (69), for the flat geometry ((b) and (d)). For the cylinder and sphere, the mid-surface radius is fixed at R_0 = 200 nm and for the case of varying L_s, the length over which the surface charge densities vary linearly is fixed at L = 20 nm. The remaining parameters are given in Tabs. 3 and 4.

(a) Schematic setup of a cylinder with spatially varying surface charge densities. (b) Potential profiles (top) and relative error (bottom) for constant surface charge densities for the exact and (2 + δ)-dimensional theories.

Figure 13: Schematic setup (a) and potential profiles (b) for the cylindrical lipid membrane embedded in a symmetric, monovalent electrolyte.

Figure 14: Error convergence in the L_2-norm with respect to the non-dimensional curvature µ = δ/(2R_0) for cylinders and spheres for cases A (a) and B (b), with L = 20 nm and L_s = 10 nm. For case A, the expected quadratic convergence is observed while for case B, the error saturates as a result of the dominating error from the spatially varying surface charge densities.

(a) Potential profiles in the region of varying surface charge densities. (b) Potential profiles in the left transition region of the varying surface charge densities.

Figure 15: Potential profiles (top) and relative errors (bottom) plotted at discrete values of x for case B of the cylindrical membrane. The full lines represent the exact theory and the dashed lines (nearly indistinguishable in (a)) represent the (2 + δ)-dimensional theory.
The error is only plotted down to 10^−5. In (a), the values of x are taken from the entire region of varying surface charge densities while (b) shows profiles from the left transition region between constant and linearly varying surface charge densities.

(a) Schematic setup of a sphere with spatially varying surface charge densities. (b) Potential profiles (top) and relative error (bottom) for constant surface charge densities for the exact and (2 + δ)-dimensional theories.

Figure 16: Schematic setup (a) and potential profiles (b) for the spherical lipid membrane embedded in a symmetric, monovalent electrolyte.

Figure 17: Potential profiles (top) and relative errors (bottom) plotted at discrete values of x for case B of the spherical membrane. The full lines represent the exact theory and the dashed lines (nearly indistinguishable) represent the (2 + δ)-dimensional theory. The error is only plotted down to 10^−5. In (a), the values of x are taken from the entire region of varying surface charge densities while (b) shows profiles from the left transition region between constant and linearly varying surface charge densities.

Greek and Latin letters are used to denote indices taking values {1, 2} and {1, 2, 3}, respectively.

Table 2: Non-dimensional quantities for the two analytical test cases for cylinders and spheres.

Table 3: Surface charge densities on the top (σ^+) and bottom (σ^−) surface for cases A and B.

Table 4: Geometric and material parameters for the flat, cylindrical, and spherical test cases. The length scales and parameters are typical for lipid membranes. Check symbols are used to distinguish corresponding quantities in the dimensionally-reduced theory, which do not carry a dedicated symbol.

Equations (59) and (60) hold analogously for the (2 + δ)-dimensional theory.

(a) Potential profiles in the region of varying surface charge densities. (b) Potential profiles in the left transition region of the varying surface charge densities.
Acknowledgements

This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DEAC02-05CH1123. KKM also acknowledges the support from the Hellmann Fellowship.

References

[1] Arroyo, M. & DeSimone, A. Relaxation dynamics of fluid membranes. Physical Review E 79, 031915 (2009).
[2] Rangamani, P., Agrawal, A., Mandadapu, K. K., Oster, G. & Steigmann, D. J. Interaction between surface shape and intra-surface viscous flow on lipid membranes. Biomechanics and Modeling in Mechanobiology 12, 833-845 (2013).
[3] Sahu, A., Sauer, R. A. & Mandadapu, K. K. Irreversible thermodynamics of curved lipid membranes. Physical Review E 96, 042409 (2017).
[4] Winterhalter, M. & Helfrich, W. Deformation of spherical vesicles by electric fields. Journal of Colloid and Interface Science 122, 583-586 (1988).
[5] Kummrow, M. & Helfrich, W. Deformation of giant lipid vesicles by electric fields. Physical Review A 44, 8356 (1991).
[6] Dimova, R. et al. Giant vesicles in electric fields. Soft Matter 3, 817-827 (2007).
[7] Dimova, R. et al. Vesicles in electric fields: Some novel aspects of membrane behavior. Soft Matter 5, 3201-3212 (2009).
[8] Portet, T. et al. Destabilizing giant vesicles with electric fields: An overview of current applications. The Journal of Membrane Biology 245, 555-564 (2012).
[9] Perrier, D. L., Rems, L. & Boukany, P. E. Lipid vesicles in pulsed electric fields: Fundamental principles of the membrane response and its biomedical applications. Advances in Colloid and Interface Science 249, 248-271 (2017).
[10] Vlahovska, P. M. Electrohydrodynamics of drops and vesicles. Annual Review of Fluid Mechanics 51, 305-330 (2019).
[11] Stämpfli, R. & Willi, M. Membrane potential of a Ranvier node measured after electrical destruction of its membrane. Experientia 13, 297-298 (1957).
[12] Coster, H. G. & Zimmermann, U. The mechanism of electrical breakdown in the membranes of Valonia utricularis. The Journal of Membrane Biology 22, 73-90 (1975).
[13] Mehrle, W., Zimmermann, U. & Hampp, R. Evidence for a symmetrical uptake of fluorescent dyes through electro-permeabilized membranes of Avena mesophyll protoplasts. FEBS Letters 185, 89-94 (1985).
[14] Needham, D. & Hochmuth, R. Electro-mechanical permeabilization of lipid vesicles. Role of membrane tension and compressibility. Biophysical Journal 55, 1001-1009 (1989).
[15] Hibino, M., Shigemori, M., Itoh, H., Nagayama, K. & Kinosita Jr, K. Membrane conductance of an electroporated cell analyzed by submicrosecond imaging of transmembrane potential. Biophysical Journal 59, 209-220 (1991).
[16] Riske, K. A. & Dimova, R. Electro-deformation and poration of giant vesicles viewed with high temporal resolution. Biophysical Journal 88, 1143-1155 (2005).
[17] Pavlin, M., Kotnik, T., Miklavčič, D., Kramar, P. & Lebar, A. M. Electroporation of planar lipid bilayers and membranes. Advances in Planar Lipid Bilayers and Liposomes 6, 165-226 (2008).
[18] Yarmush, M. L., Golberg, A., Serša, G., Kotnik, T. & Miklavčič, D. Electroporation-based technologies for medicine: Principles, applications, and challenges. Annual Review of Biomedical Engineering 16, 295-320 (2014).
[19] Arshad, R. N. et al. Electrical systems for pulsed electric field applications in the food industry: An engineering perspective. Trends in Food Science & Technology 104, 1-13 (2020).
[20] Arshad, R. N. et al. Pulsed electric field: A potential alternative towards a sustainable food processing. Trends in Food Science & Technology 111, 43-54 (2021).
[21] Kandušer, M. & Ušaj, M. Cell electrofusion: Past and future perspectives for antibody production and cancer cell vaccines. Expert Opinion on Drug Delivery 11, 1885-1898 (2014).
[22] Purves, D. et al. Neuroscience (De Boeck Supérieur, 2019).
[23] Bernstein, J. & Tschermak, A. Untersuchungen zur Thermodynamik der bioelektrischen Ströme. Archiv für die Gesamte Physiologie des Menschen und der Tiere 112, 439-521 (1906).
[24] Abbott, B. C., Hill, A. V. & Howarth, J. The positive and negative heat production associated with a nerve impulse. Proceedings of the Royal Society of London. Series B-Biological Sciences 148, 149-187 (1958).
[25] Iwasa, K. & Tasaki, I. Mechanical changes in squid giant axons associated with production of action potentials. Biochemical and Biophysical Research Communications 95, 1328-1331 (1980).
[26] Tasaki, I., Iwasa, K. & Gibbons, R. C. Mechanical changes in crab nerve fibers during action potentials. The Japanese Journal of Physiology 30, 897-905 (1980).
[27] Nguyen, T. D. et al. Piezoelectric nanoribbons for monitoring cellular deformations. Nature Nanotechnology 7, 587-593 (2012).
[28] Yang, Y. et al. Imaging action potential in single mammalian neurons by tracking the accompanying sub-nanometer mechanical motion. ACS Nano 12, 4186-4193 (2018).
[29] El Hady, A. & Machta, B. B. Mechanical surface waves accompany action potential propagation. Nature Communications 6, 1-7 (2015).
[30] Mussel, M. & Schneider, M. F. It sounds like an action potential: Unification of electrical, chemical and mechanical aspects of acoustic pulses in lipids. Journal of the Royal Society Interface 16, 20180743 (2019).
[31] Jerusalem, A. et al. Electrophysiological-mechanical coupling in the neuronal membrane and its role in ultrasound neuromodulation and general anaesthesia. Acta Biomaterialia 97, 116-140 (2019).
[32] Helfrich, W. Elastic properties of lipid bilayers: Theory and possible experiments. Zeitschrift für Naturforschung C 28, 693-703 (1973).
[33] Steigmann, D. Fluid films with curvature elasticity. Archive for Rational Mechanics and Analysis 150, 127-152 (1999).
[34] Kovetz, A. Electromagnetic theory, vol. 975 (Oxford University Press, Oxford, 2000).
[35] Arfken, G. B. & Weber, H. J. Mathematical methods for physicists (1999).
[36] Jackson, J. D. Classical electrodynamics (1999).
[37] Canuto, C., Hussaini, M. Y., Quarteroni, A. & Zang, T. A. Spectral methods: Fundamentals in single domains (Springer Science & Business Media, 2007).
[38] Gottlieb, D. & Orszag, S. A. Numerical analysis of spectral methods: Theory and applications (SIAM, 1977).
[39] Boyd, J. P. Chebyshev and Fourier spectral methods (Courier Corporation, 2001).
[40] Ciarlet, P. G. Theory of shells (Elsevier, 2000).
[41] Tranquillo, J. V. Quantitative neurophysiology. Synthesis Lectures on Biomedical Engineering 3, 1-142 (2008).
[42] Leonetti, M., Dubois-Violette, E. & Homblé, F. Pattern formation of stationary transcellular ionic currents in Fucus. Proceedings of the National Academy of Sciences 101, 10243-10248 (2004).
[43] Lacoste, D., Menon, G., Bazant, M. & Joanny, J. Electrostatic and electrokinetic contributions to the elastic moduli of a driven membrane. The European Physical Journal E 28, 243-264 (2009).
[44] Ziebert, F. & Lacoste, D. A Poisson-Boltzmann approach for a lipid membrane in an electric field. New Journal of Physics 12, 0-15 (2010).
[45] Bensimon, D., David, F., Leibler, S. & Pumir, A. Stability of charged membranes. Journal de Physique 51, 689-695 (1990).
[46] Fogden, A. & Ninham, B. The bending modulus of ionic lamellar phases. Langmuir 7, 590-595 (1991).
[47] Sens, P. & Isambert, H. Undulation instability of lipid membranes under an electric field. Physical Review Letters 88, 128102 (2002).
[48] Sinha, K. P., Gadkari, S. & Thaokar, R. M. Electric field induced pearling instability in cylindrical vesicles. Soft Matter 9, 7274-7293 (2013).
[49] Sahu, A., Glisman, A., Tchoufag, J. & Mandadapu, K. K. Geometry and dynamics of lipid membranes: The Scriven-Love number. Physical Review E 101, 052401 (2020).
[50] Salipante, P. F. & Vlahovska, P. M. Vesicle deformation in DC electric pulses. Soft Matter 10, 3386-3393 (2014).
[51] Szekely, O. et al. The structure of ions and zwitterionic lipids regulates the charge of dipolar membranes. Langmuir 27, 7419-7438 (2011).
[52] Andelman, D. Introduction to electrostatics in soft and biological matter. Soft Condensed Matter Physics in Molecular and Cell Biology 6, 97-122 (2006).
[53] Green, A. E. & Naghdi, P. On electromagnetic effects in the theory of shells and plates. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 309, 559-610 (1983).
[54] Khoma, I. Y. The construction of a generalized theory of shells from thermopiezoelastic material. Soviet Applied Mechanics 19, 1101-1106 (1983).
[55] Edmiston, J. & Steigmann, D. Analysis of nonlinear electrostatic membranes. In Mechanics and electrodynamics of magneto- and electro-elastic materials, 153-180 (Springer, 2011).
[56] Taylor, G. I. Studies in electrohydrodynamics. I. The circulation produced in a drop by an electric field. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 291, 159-166 (1966).
[57] Melcher, J. & Taylor, G. Electrohydrodynamics: A review of the role of interfacial shear stresses. Annual Review of Fluid Mechanics 1, 111-146 (1969).
[58] Wubshet, N. H., Wu, B., Veerapaneni, S. & Liu, A. P. Differential regulation of GUV mechanics via actin network architectures. bioRxiv (2022).
[59] Saville, D. Electrohydrodynamics: The Taylor-Melcher leaky dielectric model. Annual Review of Fluid Mechanics 29, 27-64 (1997).
[60] Schnitzer, O. & Yariv, E. The Taylor-Melcher leaky dielectric model as a macroscale electrokinetic description. Journal of Fluid Mechanics 773, 1-33 (2015).
[61] Ma, M., Booty, M. R. & Siegel, M. A model for the electric field-driven flow and deformation of a drop or vesicle in strong electrolyte solutions. Journal of Fluid Mechanics 943, A47 (2022).
[62] Baygents, J. & Saville, D. The circulation produced in a drop by an electric field: A high field strength electrokinetic model. In AIP Conference Proceedings, vol. 197, 7-17 (American Institute of Physics, 1990).
[63] Zholkovskij, E. K., Masliyah, J. H. & Czarnecki, J. An electrokinetic model of drop deformation in an electric field. Journal of Fluid Mechanics 472, 1-27 (2002).
[64] Mori, Y. & Young, Y.-N. From electrodiffusion theory to the electrohydrodynamics of leaky dielectrics through the weak electrolyte limit. Journal of Fluid Mechanics 855, 67-130 (2018).
[]
Title: Crowdsourcing Learning as Domain Adaptation: A Case Study on Named Entity Recognition

Authors: Xin Zhang (School of New Media and Communication, Tianjin University, China); Guangwei Xu; Yueheng Sun (College of Intelligence and Computing, Tianjin University, China); Meishan Zhang ([email protected], School of New Media and Communication, Tianjin University, China); Pengjun Xie

Venue: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing

Abstract: Crowdsourcing is regarded as one prospective solution for effective supervised learning, aiming to build large-scale annotated training data by crowd workers. Previous studies focus on reducing the influence of the noise in crowdsourced annotations on supervised models. We take a different view in this work, regarding all crowdsourced annotations as gold-standard with respect to the individual annotators. In this way, we find that crowdsourcing could be highly similar to domain adaptation, and then the recent advances of cross-domain methods can be almost directly applied to crowdsourcing. Here we take named entity recognition (NER) as a study case, suggesting an annotator-aware representation learning model inspired by the domain adaptation methods which attempt to capture effective domain-aware features. We investigate both unsupervised and supervised crowdsourcing learning, assuming that no or only small-scale expert annotations are available. Experimental results on a benchmark crowdsourced NER dataset show that our method is highly effective, leading to a new state-of-the-art performance. In addition, under the supervised setting, we can achieve impressive performance gains with only a very small scale of expert annotations.

DOI: 10.18653/v1/2021.acl-long.432
PDF: https://www.aclanthology.org/2021.acl-long.432.pdf
Corpus ID: 235253726
arXiv: 2105.14980
PDF SHA: f1e0008c6a6e388ceee6a9dfb68e299edffbd2b1
ACL-IJCNLP 2021, August 1-6, 2021.
Introduction

Crowdsourcing has gained a growing interest in the natural language processing (NLP) community, helping hard NLP tasks such as named entity recognition (Finin et al., 2010; Derczynski et al., 2016), part-of-speech tagging (Hovy et al., 2014), relation extraction (Abad et al., 2017), translation (Zaidan and Callison-Burch, 2011), argument retrieval (Mayhew et al., 2020), and others (Snow et al., 2008; Callison-Burch and Dredze, 2010) to collect large-scale datasets for supervised model training. In contrast to the gold-standard annotations labeled by experts, crowdsourced annotations can be constructed quickly and at low cost with masses of crowd annotators (Snow et al., 2008; Nye et al., 2018). However, these annotations are of relatively lower quality and contain much unexpected noise, since crowd annotators are not professional enough and can make errors in complex and ambiguous contexts (Sheng et al., 2008). Previous crowdsourcing learning models struggle to reduce the influence of the noise in crowdsourced annotations (Hsueh et al., 2009; Raykar and Yu, 2012a; Hovy et al., 2013; Jamison and Gurevych, 2015). Majority voting (MV) is one straightforward way to aggregate high-quality annotations and has been widely adopted (Snow et al., 2008; Fernandes and Brefeld, 2011; Rodrigues et al., 2014), but it requires multiple annotations for a given input. More recent models concentrate on modeling the distances between crowdsourced and gold-standard annotations, obtaining better performance than MV by considering annotator information as well (Nguyen et al., 2017; Simpson and Gurevych, 2019; Li et al., 2020).
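The token-level majority voting used by MV-style aggregation can be sketched in a few lines of plain Python. Names and the tie-breaking rule here are illustrative, not taken from any released code:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate token-level labels from multiple crowd annotators.

    `annotations` is a list of label sequences, one per annotator, all
    over the same sentence. Ties are broken by first-seen label via
    Counter ordering; a real system might prefer 'O' or weight
    annotators by reliability instead.
    """
    length = len(annotations[0])
    assert all(len(seq) == length for seq in annotations)
    aggregated = []
    for i in range(length):
        votes = Counter(seq[i] for seq in annotations)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

# Three annotators label the same 4-token sentence with BIO tags.
crowd = [
    ["B-PER", "I-PER", "O", "O"],
    ["B-PER", "O",     "O", "B-LOC"],
    ["B-PER", "I-PER", "O", "B-LOC"],
]
print(majority_vote(crowd))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```

Note that this simple scheme already shows MV's limitation mentioned above: it needs several annotations per sentence before a vote is meaningful.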
Most of these studies treat the crowdsourced annotations as untrustworthy answers, proposing sophisticated strategies to recover the golden answers from the crowdsourced labels. In this work, we take a different view of crowdsourcing learning, regarding the crowdsourced annotations as gold-standard with respect to the individual annotators. In other words, we assume that all annotators (including experts) hold their own specialized understandings of a specific task, and that they annotate the task consistently according to the individual principles derived from those understandings, where the experts reach an oracle principle by consensus. This view indicates that crowdsourcing learning aims to train a model based on the understandings of crowd annotators, and then to test the model by the oracle understanding from experts. Based on this assumption, we find that crowdsourcing learning is highly similar to domain adaptation, an important topic that has been investigated extensively for decades (Ben-David et al., 2006; Daumé III, 2007; Chu and Wang, 2018; Jia and Zhang, 2020). Specifically, we treat each annotator as one domain, and then crowdsourcing learning is essentially a multi-source domain adaptation problem. Thus, one natural question arises: what is the performance when a state-of-the-art domain adaptation model is applied directly to crowdsourcing learning? Here we take NER as a study case to investigate crowdsourcing learning as domain adaptation, considering that NER has been a popular task for crowdsourcing learning in the NLP community (Finin et al., 2010; Rodrigues et al., 2014; Derczynski et al., 2016). We suggest a state-of-the-art representation learning model that can effectively capture annotator(domain)-aware features.
Also, we investigate two settings of crowdsourcing learning: the unsupervised setting with no expert annotation, which has been widely studied before, and the supervised setting where a certain scale of expert annotations exists, which is inspired by domain adaptation. Finally, we conduct experiments on a benchmark crowdsourcing NER dataset (Tjong Kim Sang and De Meulder, 2003; Rodrigues et al., 2014) to evaluate our methods. We take a standard BiLSTM-CRF (Lample et al., 2016) model with BERT (Devlin et al., 2019) word representations as the baseline, and adapt it to our representation learning model. Experimental results show that our method is able to model crowdsourced annotations effectively. Under the unsupervised setting, our model gives a strong performance, outperforming previous work significantly. In addition, the model performance can be greatly boosted by feeding in small-scale expert annotations, which can be a prospective direction for low-resource scenarios.

[Figure 2: Illustration of the connection between multi-source domain adaptation and crowdsourcing learning: annotators $1 \cdots M$ correspond to domains $1 \cdots M$ (each mapping $x^i_j \to y^i_j$), the expert corresponds to the target domain, and the correspondence is established by $\tilde{x}^i_j = a_i(x^i_j)$.]

In summary, we make the following three major contributions: (1) We present a different view of crowdsourcing learning, and propose to treat crowdsourcing learning as domain adaptation, which naturally connects two important topics of machine learning for NLP. (2) We propose a novel method for crowdsourcing learning. Although the method is of limited novelty for domain adaptation, it is the first such work for crowdsourcing learning, and it achieves state-of-the-art performance on NER. (3) We introduce supervised crowdsourcing learning for the first time, which is borrowed from domain adaptation and could be a prospective solution for hard NLP tasks in practice.
We will release the code and detailed experimental settings at github.com/izhx/CLasDA under the Apache License 2.0 to facilitate future research.

The Basic Idea

Here we describe the concepts of domain adaptation and crowdsourcing learning in detail, and show how they are connected.

Domain Adaptation

Domain adaptation arises when a supervised model trained on a fixed training corpus, covering several specific domains, is required to be tested on a different domain (Ben-David et al., 2006; Mansour et al., 2009). The scenario is quite frequent in practice, and thus has received extensive attention with massive investigations (Csurka, 2017; Ramponi and Plank, 2020). The major problem lies in the different input distributions between the source and target domains, leading to biased predictions over inputs with a large gap to the source domains. Here we focus on multi-source domain adaptation, which suits our correspondence best. Following Mansour et al. (2009) and Zhao et al. (2019), multi-source domain adaptation assumes that a set of labeled examples from $M$ domains is available, denoted by $\mathcal{D}_{\mathrm{src}} = \{(X^i, Y^i)\}_{i=1}^{M}$,¹ where $X^i = \{x^i_j\}_{j=1}^{N^i}$ and $Y^i = \{y^i_j\}_{j=1}^{N^i}$,² and we aim to train a model on $\mathcal{D}_{\mathrm{src}}$ and adapt it to a specific target domain with the help of a large-scale raw corpus $X_{\mathrm{tgt}} = \{x_i\}_{i=1}^{N_t}$ of the target domain. Note that under this setting, all $X$s, including the source and target domains, are generated individually according to their unknown distributions, thus the abstract representations learned from the source-domain dataset $\mathcal{D}_{\mathrm{src}}$ would inevitably be biased with respect to the target domain, which is the primary reason for the degraded performance on the target domain (Huang and Yates, 2010; Ganin et al., 2016). A number of domain adaptation models have struggled for better transferable high-level representations as domain shifts (Ramponi and Plank, 2020).
Crowdsourcing Learning

Crowdsourcing aims to produce a set of large-scale annotated examples created by crowd annotators, which is used to train supervised models for a given task (Raykar et al., 2010). As the majority of NLP models assume that gold-standard high-quality training corpora are already available (Manning and Schutze, 1999), crowdsourcing learning has received much less interest than cross-domain adaptation, although the availability of such corpora is often not the truth. Formally, under the crowdsourcing setting, we usually assume a number of crowd annotators $A = \{a_i\}_{i=1}^{M}$ (here we use the same $M$ as well as the later superscripts in order to align with domain adaptation), and all annotators have a sufficient number of training examples by their different understandings of the given task, referred to as $\mathcal{D}_{\mathrm{crowd}} = \{(X^i, Y^i)\}_{i=1}^{M}$ where $X^i = \{x^i_j\}_{j=1}^{N^i}$ and $Y^i = \{y^i_j\}_{j=1}^{N^i}$. We aim to train a model on $\mathcal{D}_{\mathrm{crowd}}$ and adapt it to predict the expert outputs. Note that all $X$s do not have significant differences in their distributions in this paradigm.

¹ A domain is commonly defined as a distribution on the input data in many works, e.g., Ben-David et al. (2006). To make domain adaptation and crowdsourcing learning highly similar in formulation, we follow Zhao et al. (2019), defining a domain as a joint distribution on the input space $\mathcal{X}$ and the label space $\mathcal{Y}$. Section 4.5 gives a discussion of their connection.
² $N^{*}$ indicates the number of instances.

Crowdsourcing Learning as Domain Adaptation

By scrutinizing the above formalization, when we couple all $X$s with their annotators by setting $\tilde{x}^i_j = a_i(x^i_j)$, which denotes the contextualized understanding (a vectorial form of the neural representations is desirable here) of $x^i_j$ by annotator $a_i$, we can regard $\tilde{X}^i = \{a_i(x^i_j)\}_{j=1}^{N^i}$ as generated from different distributions as well.
In this way, we are able to connect crowdsourcing learning and domain adaptation together, as shown in Figure 2, based on the assumption that all $Y$s are gold-standard for crowdsourced annotations once crowd annotators are united into the joint inputs. Finally, we need to perform predictions by regarding $\tilde{x}_{\mathrm{expert}} = \mathrm{expert}(x)$, and in particular, the learning of $\mathrm{expert}$ differs from that of the target domain in domain adaptation.

A Case Study On NER

In this section, we take NER as a case study, which has been investigated most frequently in NLP (Yadav and Bethard, 2018), and propose a representation learning model, mainly inspired by the domain adaptation model of Jia et al. (2019), to perform crowdsourcing learning. In addition, we introduce the unsupervised and supervised settings for crowdsourcing learning, which are directly borrowed from domain adaptation.

The Representation Learning Model

We convert NER into a standard sequence labeling problem by using the BIO schema, following the majority of previous work, and extend a state-of-the-art BERT-BiLSTM-CRF model (Mayhew et al., 2020) for our crowdsourcing learning. Figure 3 shows the overall network structure of our representation learning model. By using a sophisticated parameter generator module (Platanios et al., 2018), it can capture annotator-aware features. In the following, we introduce the proposed model in four components: (1) word representation, (2) annotator switcher, (3) BiLSTM encoding, and (4) CRF inference and training.

Word Representation. Given a sentence of $n$ words $x = w_1 \cdots w_n$, we first convert it to vectorial representations by BERT. Different from the standard BERT exploration, here we use Adapter$\circ$BERT (Houlsby et al., 2019), where two extra adapter modules are inserted into each transformer layer. The process can be simply formalized as:

$$e_1 \cdots e_n = \mathrm{Adapter} \circ \mathrm{BERT}(w_1 \cdots w_n) \quad (1)$$

where $\circ$ indicates the injection operation.
The detailed structure of the transformer with adapters is described in Appendix A. Noticeably, the Adapter$\circ$BERT method no longer needs to fine-tune the huge BERT parameters and can obtain comparable performance by adjusting the much more lightweight adapter parameters instead. Thus the representation can be more parameter-efficient, and in this way we can easily extend the word representations to annotator-aware representations.

Annotator Switcher. Our goal is to efficiently learn annotator-aware word representations, which can be regarded as contextualized understandings of individual annotators. Hence, we introduce an annotator switcher to support Adapter$\circ$BERT with an annotator input as well, which is inspired by Üstün et al. (2020). The key idea is to use a Parameter Generation Network (PGN) (Platanios et al., 2018; Jia et al., 2019) to produce the adapter parameters dynamically from the input annotator. In this way, our model can flexibly switch among different annotators. Concretely, assuming that $V$ is the vectorial form of all adapter parameters under a pack operation, which can also be unpacked to recover all adapter parameters, the PGN module generates $V$ for Adapter$\circ$BERT dynamically according to the annotator input, as shown in Figure 3 by the right orange part. The switcher can be formalized as:

$$\tilde{x} = r_1 \cdots r_n = \mathrm{PGN} \circ \mathrm{Adapter} \circ \mathrm{BERT}(x, a) = \mathrm{Adapter} \circ \mathrm{BERT}(x, V = \Theta \times e_a), \quad (2)$$

where $\Theta \in \mathbb{R}^{|V| \times |e_a|}$, $\tilde{x} = r_1 \cdots r_n$ is the annotator-aware representation of annotator $a$ for $x = w_1 \cdots w_n$, and $e_a$ is the annotator embedding.

BiLSTM Encoding. Adapter$\circ$BERT requires an additional task-oriented module for high-level feature extraction. Here we exploit a single BiLSTM layer to achieve it: $h_1 \cdots h_n = \mathrm{BiLSTM}(\tilde{x})$, which is used for next-step inference and training.
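The parameter generation step of Eq. 2, $V = \Theta \times e_a$, can be sketched in plain Python. Function names and the toy dimensions below are illustrative only; the real $\Theta$ and annotator embeddings are trained jointly with the task objective, and unpacking $V$ back into individual adapter weight tensors is omitted:

```python
def generate_adapter_params(theta, e_a):
    """Flat adapter-parameter vector V = Theta * e_a (Eq. 2).

    `theta` is a trained |V| x |e_a| matrix shared across annotators,
    and `e_a` is one small annotator embedding; every annotator's
    adapter parameters come from the same generator, so the model can
    switch annotators at run time.
    """
    return [sum(w * x for w, x in zip(row, e_a)) for row in theta]

# Toy generator: |V| = 3 adapter parameters from 2-dim annotator embeddings.
theta = [[1.0, 0.0],
         [0.0, 1.0],
         [1.0, 1.0]]
print(generate_adapter_params(theta, [1.0, 0.0]))  # annotator a1 -> [1.0, 0.0, 1.0]
print(generate_adapter_params(theta, [0.0, 1.0]))  # annotator a2 -> [0.0, 1.0, 1.0]
```

Because only $\Theta$ and the per-annotator embeddings are learned, adding an annotator costs one extra embedding vector rather than a full adapter copy.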
CRF Inference and Training. We use a CRF to calculate the score of a candidate sequential output $y = l_1 \cdots l_n$ globally:

$$o_i = W_{\mathrm{crf}} h_i + b_{\mathrm{crf}}, \qquad \mathrm{score}(y|x, a) = \sum_{i=1}^{n} \big( T[l_{i-1}, l_i] + o_i[l_i] \big) \quad (3)$$

where $W_{\mathrm{crf}}$, $b_{\mathrm{crf}}$ and $T$ are model parameters. Given an input $(x, a)$, we perform inference by the Viterbi algorithm. For training, we define a sentence-level cross-entropy objective:

$$p(y^a|x, a) = \frac{\exp \mathrm{score}(y^a|x, a)}{\sum_{y} \exp \mathrm{score}(y|x, a)}, \qquad \mathcal{L} = -\log p(y^a|x, a) \quad (4)$$

where $y^a$ is the gold-standard output of $x$ from $a$, $y$ ranges over all possible candidates, and $p(y^a|x, a)$ indicates the sentence-level probability.

The Unsupervised Setting

Here we introduce unsupervised crowdsourcing learning in alignment with unsupervised domain adaptation, assuming that no expert annotation is available, which is the widely adopted setting of previous work on crowdsourcing learning (Sheng et al., 2008; Zhang et al., 2016; Sheng and Zhang, 2019). This setting has a large divergence from domain adaptation in target learning. In unsupervised domain adaptation, the information of the target domain can be learned through a large-scale raw corpus (Ramponi and Plank, 2020), whereas there is no counterpart in unsupervised crowdsourcing learning from which to learn information about the experts. To this end, we suggest a simple and heuristic method to model experts that exploits the specialty of crowdsourcing learning. Intuitively, we expect that experts should approve the knowledge of the common consensus for a given task, and meanwhile, our model needs an embedding representation of the expert for inference. Thus, we estimate the expert embedding by the centroid of all annotator embeddings:

$$e_{\mathrm{expert}} = \frac{1}{|A|} \sum_{a \in A} e_a \quad (5)$$

where $A$ represents all annotators who contributed to the training corpus. This expert can be interpreted as the outcome elected by annotator voting with equal importance.
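Two pieces of this section can be made concrete with a small sketch: the sequence score of Eq. 3 (with brute-force decoding standing in for Viterbi on a toy example, since both return the same argmax) and the expert centroid of Eq. 5. All names, labels, and numbers below are illustrative:

```python
import itertools

def crf_score(emissions, transitions, labels):
    """Sequence score from Eq. 3: sum of transition scores T[l_{i-1}, l_i]
    and emission scores o_i[l_i]; a "<s>" state handles the first label."""
    score, prev = 0.0, "<s>"
    for o_i, lab in zip(emissions, labels):
        score += transitions.get((prev, lab), 0.0) + o_i[lab]
        prev = lab
    return score

def viterbi_argmax(emissions, transitions, label_set):
    """Exhaustive argmax over all label sequences; the Viterbi algorithm
    computes the same result in O(n * |labels|^2) time."""
    return max(itertools.product(label_set, repeat=len(emissions)),
               key=lambda y: crf_score(emissions, transitions, list(y)))

def expert_embedding(annotator_embeddings):
    """Centroid of all annotator embeddings (Eq. 5), used as the expert
    input at inference time in the unsupervised setting."""
    vecs = list(annotator_embeddings.values())
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

# Toy decode: 2 tokens, labels {"O", "B"}; each emission maps label -> score.
emissions = [{"O": 0.0, "B": 1.0}, {"O": 1.0, "B": 0.0}]
transitions = {("<s>", "B"): 0.5, ("B", "O"): 0.5}
print(viterbi_argmax(emissions, transitions, ["O", "B"]))  # ('B', 'O')

print(expert_embedding({"a1": [1.0, 0.0], "a2": [0.0, 1.0]}))  # [0.5, 0.5]
```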
In this way, we perform inference in unsupervised crowdsourcing learning by feeding $e_{\mathrm{expert}}$ as the annotator input.

The Supervised Setting

Inspired by supervised domain adaptation, we also present supervised crowdsourcing learning, which has seldom been considered. The setting is very simple: we just assume that a certain scale of expert annotations is available. In this way, we can learn the expert representation directly by supervised learning with our proposed model. The supervised setting could be a more practicable scenario in real applications. Intuitively, it should bring much better performance than the unsupervised setting even with few-shot expert annotations, which does not increase the overall annotation cost much. In fact, during or after the crowdsourcing annotation process, we usually have a quality control module, which can help to produce silver-quality pseudo-expert annotations (Kittur et al., 2008; Lease, 2011). Thus, the supervised setting can be highly valuable, yet has been mostly ignored.

Experiments

Setting

Dataset. We use the CoNLL-2003 English NER dataset (Tjong Kim Sang and De Meulder, 2003) with the crowdsourced annotations provided by Rodrigues and Pereira (2018) to investigate our methods in both the unsupervised and supervised settings. The crowdsourced annotations cover 400 news articles, involving 5,985 sentences in practice, which are labeled by a total of 47 crowd annotators. The total number of annotations is 16,878; thus the averaged number of annotated sentences per annotator is 359, which covers 6% of the total sentences. The dataset includes golden/expert annotations on the training sentences and a standard CoNLL-2003 test set for NER evaluation.
Evaluation. The standard CoNLL-2003 evaluation metric is used to calculate the NER performance, reporting the entity-level precision (P), recall (R), and their F1 value. All experiments of the same setting are conducted five times, and the median outputs are used for performance reporting. We exploit the pair-wise t-test for the significance test, regarding two results as significantly different when the p-value is below $10^{-5}$.

[Table 1 residue (P/R/F1 of previous work): (Rodrigues et al., 2014) 49.40/85.60/62.60; LC (Nguyen et al., 2017) 82.38/62.10/70.82; LC-cat (Nguyen et al., 2017) 79.xx/... (truncated).]

Baselines. We re-implement several methods of previous work as baselines, and all of them are based on Adapter$\circ$BERT-BiLSTM-CRF (with no annotator switcher inside) for fair comparison. For both the unsupervised and supervised settings, we consider the following baseline models:

• ALL: treats all annotations equally, ignoring the annotator information, no matter crowd or expert.

• MV: borrowed from Rodrigues et al. (2014), where aggregated labels are produced by token-level majority voting. In particular, the gold-standard labels are used instead if they are available for a specific sentence during supervised crowdsourcing learning.

• LC: proposed by Nguyen et al. (2017), where the annotator bias with respect to the gold-standard labels is explicitly modeled at the CRF layer for each crowd annotator, and specifically, the expert has zero bias.

• LC-cat: also presented by Nguyen et al. (2017) as a baseline to LC, where the annotator bias is modeled at the BiLSTM layer instead, and likewise the expert bias is set to zero.

Notice that ALL and MV are annotator-agnostic models, which exploit no information specific to the individual annotators, while the other models are annotator-aware models, where the annotator information is used in different ways.

Hyper-parameters. We offer all detailed settings of the hyper-parameters in Appendix B.

Unsupervised Results

Table 1 shows the test results of the unsupervised setting.
As a whole, we can see that our representation learning model (i.e., This Work) borrowed from domain adaptation achieves the best performance, with an F1 score of 77.95, significantly better than the second-best model LC-cat (by 77.95 − 76.79 = 1.16). The result indicates the advantage of our method over the other models. By examining the results in depth, we find that the annotator-aware models are significantly better than the annotator-agnostic models, demonstrating that the annotator information is highly helpful for crowdsourcing learning. The observation further shows the reasonableness of aligning annotators to domains, since domain information is also useful for domain adaptation. In addition, the better performance of our representation learning method among the annotator-aware models indicates that our model can capture annotator-aware information more effectively, because our starting point is totally different: we do not attempt to model the expert labels based on the crowdsourced annotations. Further, we observe that several models show better precision values, while others give better recall values. A high precision but low recall indicates that the model is conservative in detecting named entities, and vice versa. Our proposed model is able to balance the two directions better, with the smallest gap between them. The results also imply that there is still much space for future development, and the recent advances of domain adaptation might offer good avenues. Finally, we compare our results with previous studies. As shown, our model obtains the best performance in the literature. In particular, by comparing our results with the original performances reported in Nguyen et al. (2017), we can see that our re-implementation is much better than theirs. The major difference lies in the exploration of BERT in our model, which brings improvements close to 6% for both LC and LC-cat.
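The entity-level P/R/F1 metric compared throughout these results can be sketched as exact span matching over BIO tags. This is an illustrative approximation of the official CoNLL scorer; the conservative handling of stray I- tags here is a simplification:

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence.
    A stray I- tag without a matching B- is conservatively ignored."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        inside = start is not None and tag == "I-" + etype
        if start is not None and not inside:
            spans.add((etype, start, i))
            start = etype = None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def entity_prf(gold_tags, pred_tags):
    """Entity-level precision, recall, and F1: a predicted entity counts
    as correct only if its type and exact span both match a gold entity."""
    gold, pred = extract_entities(gold_tags), extract_entities(pred_tags)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "O"]
print(entity_prf(gold, pred))  # precision 1.0, recall 0.5, F1 ~ 0.667
```

The exact-span requirement explains why precision and recall can diverge so sharply across models: a conservative tagger misses whole entities (low recall), while a liberal one emits spans that fail the exact match (low precision).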
Supervised Results

To investigate the supervised setting, we assume that the expert annotations (ground truths) of all crowdsourced sentences are available. Besides exploring the full expert annotations, we study three further scenarios by incrementally adding expert annotations to the unsupervised setting, aiming to study the effectiveness of our model with small expert annotations as well. Concretely, we assume proportions of 1%, 5%, 25%, and 100% of the expert annotations are available. Table 2 shows all the results, including our four baselines and a gold model based on only the expert annotations for comparison. Overall, we can see that our representation learning model brings the best performance in all scenarios, demonstrating its effectiveness in supervised learning as well. Next, by comparing the annotator-agnostic and annotator-aware models, we can see that the annotator-aware models are better, which is consistent with the unsupervised setting.

[Figure 4: The visualization of annotator embeddings by dimensionality reduction with PCA, with panels (a) 0%, (b) 5%, (c) 25%, and (d) 100% expert annotations. Our designed unsupervised (0%) expert is consistent with the well-learned one (100%). As the expert annotations increase, the learned expert becomes more accurate.]

More interestingly, the results show that ALL is better than gold with very small-scale expert annotations (1% and 5%), and the tendency is reversed only when there are sufficient expert annotations (25% and 100%). The observation indicates that crowdsourced annotations are always helpful when golden annotations are not enough. In addition, it is easy to understand that MV is worse than gold, since the latter has a higher-quality training corpus. Further, we can find that even the annotator-aware LC and LC-cat models are unable to obtain any positive influence compared with gold, which demonstrates that distilling ground truths from the crowdsourced annotations might not be the most promising solution.
In contrast, our representation learning model gives consistently better results than gold, indicating that crowdsourced annotations are always helpful with our method. By regarding crowdsourcing learning as domain adaptation, we no longer take crowdsourced annotations as noise; on the contrary, they are treated as transferable knowledge, similar to the relationship between the source domains and the target domain. Thus they can always be useful in this way.

Analysis

To better understand our idea and model in depth, we conduct the following fine-grained analyses.

Visualization of Annotator Embeddings. Our representation learning model is able to learn annotator embeddings through the task objective. It is interesting to visualize these embeddings to check their distributions, which can reflect the relationships between the individual annotators. Figure 4 shows the visualization results after Principal Component Analysis (PCA) dimensionality reduction, where the unsupervised and three supervised scenarios are investigated. As shown, we can see that most crowd annotators are distributed in a concentrated area in all scenarios, indicating that they share certain common characteristics of task understanding. Further, we focus on the relationship between the expert and the crowd annotators, and the results show two interesting findings. First, the heuristic expert of our unsupervised learning is almost consistent with that of the supervised learning on the whole expert annotations (100%), which indicates that our unsupervised expert estimation is remarkably good. Second, the visualization shows that the relationship between the expert and the crowd annotators can be biased when expert annotations are not enough; as the size of expert annotations increases, their connection gradually becomes more accurate.

The Predictability of Crowdsourced Annotations. Our primary assumption is that all crowdsourced annotations are regarded as gold-standard with respect to the crowd annotators, which naturally indicates that these annotations are predictable. Here we conduct an analysis to verify the assumption through a new task of predicting the crowdsourced annotations. Concretely, we divide the annotations into two sections, where 85% of them are used for training and the remaining for testing, and then we apply our baseline and proposed models to learn and evaluate.
The Predictability of Crowdsourcing Annotations Our primary assumption is based on that all crowdsourced annotations are regarded as the gold-standard with respect to the crowd annotators, which naturally indicates that these annotations are predictable. Here we conduct analysis to verify the assumption by a new task to predicate the crowdsourced annotations, Concretely, we divide the annotations into two sections, where 85% of them are used as the training and the remaining are used for testing, and then we apply our baseline and proposed models to learn and evaluate. Figure 5: Comparisons by F1 scores between full and filtered crowdsourced annotations (i.e., excluding unreliable annotators). We compute F1 values of each annotator with respect to the gold-standard labels, and filter out 10 annotators with lowest scores. which indicates that our assumption is acceptable as a whole. The other models could be unsuitable for our assumption due to the poor performance induced by their modeling strategies. The Impact of Unreliable Annotators Handling unreliable annotators, such as spammers, is a practical and common issue in Crowdsourcing (Raykar and Yu, 2012b). Obviously, regarding crowd annotations as untrustworthy answers is more considerate to this problem. In contrast, our assumption might be challenged because these unreliable annotators are discrepant in their own annotations. To show the influence of unreliable annotators, we filter out several unreliable annotators in the corpus, and reevaluate the performance for the low-resource supervised and unsupervised scenarios on the remaining annotations. Figure 5 shows the comparison results of the original corpus and the filtered corpus. 8 First, we can find that improved performance can be achieved in all cases, indicating excluding these unreliable annotations is helpful for crowdsourcing. 
Second, the LC and LC-cat model give smaller score differences compared with the ALL model between these two kinds of results, which verified that they are considerate to unreliable annotators. Third, our model also performs robustly, it can cope with this practical issue in a certain degree as well. Results on The Sampled Annotators and Annotations The above analysis shows the benefit of removing unreliable annotators, which reduces a small number of annotators and annotations. A problem arises naturally: will the performance be Table 1. The Excluded is the filtered corpus in Figure 5. The Part-1 and Part-2 are both consist of 13 annotators. Part-1 have 1800 texts with 6275 crowd annotations, each text is labeled by at least 3 annotators. These numbers of Part-2 are 2192, 5582, and 2, respectively. consistent if we sample a small proportion of annotators? To verify it, we sampled two sub-set from the crowdsourced training corpus and re-train our model as well as baselines. Table 4 shows the evaluation results of re-trained models on the standard test set in unsupervised setting. We also add our main result for the comparison. As shown, all sampled datasets demonstrate similar trends with the main result (denoted as Full). The supervised results are consistent with our main result as well, which are not listed due to space reasons. The Discussion of Domain Definitions The most widely used definition of a domain is the distribution on the input space X . Zhao et al. (2019) define a domain D as the pair of a distribution D on the input space X and a labeling function f : X → Y, i.e., domain D = D, f . In this work, we assume each annotator is a unique labeling function a : X → Y. Uniting each annotator and the instances he/she labeled, we can result in a number of domains { D i , a i } |A| i=1 , where A represents all annotators. 
Then crowdsourcing learning can be interpreted under the latter definition, i.e., learning from these crowd annotators/domains and predicting the labels of raw inputs (sampled from the raw data distribution D_expert) in the expert annotator/domain ⟨D_expert, a_expert⟩. To unify the definition in a single distribution, we directly define a domain as the joint distribution on the input space X and the label space Y. In addition, we can align with the former definition by using the representation outputs x_i = a_i(x) as the data input, which yields different distributions for the same sentence under different annotators. Thus, each source domain D_i is the distribution of x_i, and we need to learn the expert representations x_expert to perform inference on the unlabeled texts.

5 Related Work

Crowdsourcing Learning

Crowdsourcing is a cheap and popular way to collect large-scale labeled data, which can facilitate model training for hard tasks that require supervised learning (Wang and Zhou, 2016; Sheng and Zhang, 2019). However, crowdsourced data is often regarded as low-quality, containing much noise when expert annotations are regarded as the gold standard. Initial studies of crowdsourcing learning try to arrive at a high-quality corpus by majority voting, or to control quality with sophisticated strategies during the crowd annotation process (Khattak and Salleb-Aouissi, 2011; Liu et al., 2017; Tang and Lease, 2011). Recently, the majority of work focuses on fully exploiting the entire annotated corpus with machine learning models, taking information about the crowd annotators into account, including annotator reliability (Rodrigues et al., 2014), annotator accuracy (Huang et al., 2015), worker-label confusion matrices (Nguyen et al., 2017), and sequential confusion matrices (Simpson and Gurevych, 2019).
In this work, we present a fundamentally different viewpoint for crowdsourcing, regarding all crowdsourced annotations as gold with respect to their individual annotators, just as the conventional gold-standard labels correspond to the experts, and we further propose a domain adaptation paradigm for crowdsourcing learning.

Domain Adaptation

Domain adaptation has been studied extensively to reduce the performance gap between resource-rich and resource-scarce domains (Ben-David et al., 2006; Mansour et al., 2009), and has also received great attention in the NLP community (Daumé III, 2007; Jiang and Zhai, 2007; Finkel and Manning, 2009; Glorot et al., 2011; Chu and Wang, 2018; Ramponi and Plank, 2020). Typical methods include self-training to produce pseudo training instances for the target domain (Yu et al., 2015) and representation learning to capture transferable features across the source and target domains (Sener et al., 2016). In this work, we draw a correlation between domain adaptation and crowdsourcing learning, enabling crowdsourcing learning to benefit from advances in domain adaptation, and present a representation learning model borrowed from Jia et al. (2019) and Üstün et al. (2020).

Named Entity Recognition

NER is a fundamental and challenging task in NLP (Yadav and Bethard, 2018). The BiLSTM-CRF architecture (Lample et al., 2016), as well as BERT (Devlin et al., 2019), brings state-of-the-art performance in the literature (Jia et al., 2019; Wang et al., 2020; Jia and Zhang, 2020). Mayhew et al. (2020) exploit the BERT-BiLSTM-CRF model, achieving strong performance on NER. In addition, NER has been widely adopted for crowdsourcing learning (Finin et al., 2010; Rodrigues et al., 2014; Derczynski et al., 2016). Thus, we exploit NER as a case study following these works, and take a BERT-BiLSTM-CRF model as the basic model for our annotator-aware extension.
Conclusion and Future Work

We studied the connection between crowdsourcing learning and domain adaptation, and proposed to treat crowdsourcing learning as a domain adaptation problem. We then took NER as a case study, adapting a representation learning model from recent advances in domain adaptation for crowdsourcing learning. Through this case study, we introduced unsupervised and supervised crowdsourcing learning, where the former is a widely studied setting while the latter has seldom been investigated. Finally, we conducted experiments on a widely adopted benchmark dataset for crowdsourcing NER, and the results show that our representation learning model is highly effective in unsupervised learning, achieving the best performance in the literature. In addition, supervised learning with a very small amount of expert annotations can boost the performance significantly. Our work sheds light on the application of effective domain adaptation models to crowdsourcing learning. There are still many other sophisticated cross-domain approaches, such as adversarial learning (Ganin et al., 2016) and self-training (Yu et al., 2015). Future work may investigate how to apply these advances to crowdsourcing learning properly.

Ethical Impact

We present a different view of crowdsourcing learning and propose to treat it as domain adaptation, showing the connection between these two machine learning topics for NLP. In this view, many sophisticated cross-domain models could be applied to crowdsourcing learning. Moreover, the motivation of regarding all crowdsourced annotations as gold-standard for the corresponding annotators also sheds light on introducing other transfer learning techniques in future work. The above idea, and our proposed representation learning model for crowdsourcing sequence labeling, are totally agnostic to any private information of the annotators. We do not use any sensitive information, but only the IDs of annotators, in problem modeling and learning.
The crowdsourced CoNLL English NER data also anonymizes the annotators, so there will be no privacy issues in the future.

A Transformer with Adapters

In our Adapter•BERT word representation, we insert two adapter modules into each transformer layer inside BERT. Figure 6 shows the detailed network structure of a transformer with adapters. More specifically, the forward operation of an adapter layer is computed as follows:

h_mid = GELU(W_1^ap h_in + b_1^ap),
h_out = W_2^ap h_mid + b_2^ap + h_in,    (6)

where W_1^ap, W_2^ap, b_1^ap, and b_2^ap are the adapter parameters, and the dimension of h_mid is usually smaller than that of the corresponding transformer.

Here we also give a supplement to illustrate the pack operation from all adapter parameters into a single vector V:

V = ⊕_{adapters} (W_1^ap ⊕ W_2^ap ⊕ b_1^ap ⊕ b_2^ap),    (7)

where first all parameters of a single adapter are reshaped and concatenated, and then a further concatenation is performed over all adapters.

B Hyper-parameters

We choose BERT-base-cased,9 which is for the English language and consists of 12 transformer layers with a hidden size of 768 for all layers. We load the BERT weights and implement the adapter injection based on the transformers library (Wolf et al., 2020). The size of the adapter middle hidden states is set constantly to 128. The annotator embedding size is 8, so that the model fits in one RTX-2080Ti GPU with 11GB memory. The BiLSTM hidden size is set to 400. For all models, we inject adapters or switchers in all 12 layers of BERT. All experiments are run on a single GPU of an 8-GPU server with a 14-core CPU and 128GB memory. We exploit stochastic gradient-based online learning with a batch size of 64 to optimize the model parameters. We apply time-step dropout, which randomly sets several representations in the sequence to zeros with a probability of 0.2, on the word representations to avoid overfitting. We use the Adam algorithm to update the parameters with a constant learning rate of 1 × 10^-3, and apply gradient clipping with a maximum value of 5.0 to avoid gradient explosion.
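The adapter computation of Eq. (6) and the packing of Eq. (7) can be sketched in plain Python. This is a minimal illustration using lists rather than tensors: the helper names are ours, and we use the common tanh approximation of GELU.

```python
import math

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def matvec(W, h):
    return [sum(w * v for w, v in zip(row, h)) for row in W]

def adapter_forward(h_in, W1, b1, W2, b2):
    # Eq. (6): h_mid = GELU(W1 h_in + b1); h_out = W2 h_mid + b2 + h_in (residual)
    h_mid = [gelu(v + b) for v, b in zip(matvec(W1, h_in), b1)]
    return [v + b + r for v, b, r in zip(matvec(W2, h_mid), b2, h_in)]

def pack_adapters(adapters):
    # Eq. (7): flatten and concatenate each adapter's parameters,
    # then concatenate over all adapters into a single vector V
    V = []
    for W1, b1, W2, b2 in adapters:
        for row in W1:
            V.extend(row)
        for row in W2:
            V.extend(row)
        V.extend(b1)
        V.extend(b2)
    return V

# With zero weights the adapter reduces to the residual plus the output bias.
out = adapter_forward([1.0, 2.0],
                      [[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0],
                      [[0.0, 0.0], [0.0, 0.0]], [0.5, 0.5])  # [1.5, 2.5]
```

With the hidden size 768 and adapter middle size 128 from Appendix B, each projection holds 768 × 128 weights plus a bias vector, and the PGN generates the packed vector V from a given annotator embedding.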
C The Advantage of Adapter•BERT

Our models all use Adapter•BERT as the basic representation, which differs from the widely adopted BERT fine-tuning architecture. Here we compare the two strategies in detail. The results are shown in Table 5, where for Adapter•BERT we gradually increase the number of transformer layers (covering the last n layers) inside BERT that receive adapters. As shown, Adapter•BERT is clearly much more parameter-efficient, and when all layers are exploited, the model can be even better than BERT fine-tuning. Thus it is preferable to use Adapter•BERT covering all BERT transformer layers.

D Case Study

Here we also offer a case study to understand the performance of unsupervised and supervised crowdsourcing learning, as well as the different crowdsourcing models. We exploit one complex example, shown in Table 6, for which the various models produce different outputs. As shown, the supervised models are able to recall the ambiguous entity (i.e., Pace, a single word with multiple senses) correctly, while the unsupervised models fail, which may be due to the inconsistencies of the crowdsourced annotations. Comparing our model with the other baselines shows that our representation learning model captures a globally consistent understanding of the text input, e.g., being able to connect Ohio State and Arizona State together.

Figure 1: A NER example with crowdsourced labels; A and EXP denote annotator and expert, respectively.

Figure 3: The structure of our representation learning model, where the right (orange) part denotes the annotator switcher, and V denotes the adapter parameters generated by the PGN. The transformer layers in gray are kept frozen during training; the other modules are trainable.

Figure 6: Transformer integrated with adapters inside.
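The parameter-efficiency claim of Appendix C can be made concrete with a back-of-the-envelope count. This is an illustration under our own assumptions: two adapters per layer (Appendix A), an adapter middle size of 128 (Appendix B), and an approximate 110M-parameter BERT-base.

```python
def adapter_param_count(hidden=768, mid=128, layers=12, adapters_per_layer=2):
    # one adapter = down-projection (hidden*mid weights + mid biases)
    #             + up-projection (mid*hidden weights + hidden biases)
    per_adapter = hidden * mid + mid + mid * hidden + hidden
    return per_adapter * adapters_per_layer * layers

trainable = adapter_param_count()   # about 4.7M trainable parameters
ratio = trainable / 110_000_000     # roughly 4% of full BERT-base fine-tuning
```

So training only the adapters touches a few percent of the parameters that full fine-tuning would update, which is consistent with the Table 5 comparison.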
Table 1: Test results in the unsupervised setting, where the superscript † indicates that there exist differences in the test corpus. [Partially recovered rows: … 62.87 70.26; (Rodrigues and Pereira, 2018) 66.00 59.30 62.40; (Simpson and Gurevych, 2019)† 80.30 74.80 77.40]

Table 2: Test results in the supervised setting, where we add different proportions of the most informative gold-standard (expert) annotations incrementally. Note that MV at 100% is equivalent to the gold model, because all voted labels are substituted with gold-standard labels.

Table 3: Performance when training on 85% and testing on 15% of the crowdsourced annotations.

Table 3 shows the results. As shown, our model achieves the best performance, with an F1 score of 77.12%, and the other models are significantly worse (at least a 4.86-point drop in F1). Considering that the average proportion of training examples per annotator over the full 5,985 sentences is only 5%,7 we exploit the gold model trained on the 5% expert annotations for reference. We can see that the gap between them is small (77.12% vs. 79.33%).

Table 4: Unsupervised test results of the differently sampled datasets. The Full row is the original result in Table 1.

Table 5: Comparison between BERT fine-tuning and Adapter•BERT, based on standard NER without the annotator as input.

Table 6: A case study, where underlined text indicates errors.

Footnotes: Note that although LC-cat is not as strong as LC in (Nguyen et al., 2017), our results show that LC-cat is slightly better when based on Adapter•BERT-BiLSTM-CRF. Intuitively, if expert annotations are involved, we should intentionally choose the most informative inputs for annotation, which can reduce the overall cost needed to meet a certain performance standard; thus we can fully demonstrate the effectiveness of crowdsourced annotations under the semi-supervised setting. Here we try to choose the most informative labeled instances for the 1%, 5%, and 25% settings.
In addition, we could not perform an ablation study of our model because it is not an incremental design. The 1% setting is excluded because such a small amount of expert annotations cannot capture the relationship between the expert and the crowd annotators. 7 The value can be directly calculated (0.06 * 0.85 ≈ 0.05). MV is not included because a proportion of instances are unable to obtain aggregated answers. 9 https://github.com/google-research/bert

Acknowledgments

We thank all reviewers for their hard work. This research is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the funds of the Beijing Advanced Innovation Center for Language Resources under Grant TYZ19005.

References

Azad Abad, Moin Nabi, and Alessandro Moschitti. 2017. Self-crowdsourcing training for relation extraction. In Proceedings of the ACL: Short Papers.

Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of representations for domain adaptation. In Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, pages 137-144. MIT Press.

Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with Amazon's Mechanical Turk. In Proceedings of the NAACL-HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 1-12.
Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In Proceedings of COLING, pages 1304-1319.

Gabriela Csurka. 2017. Domain adaptation for visual applications: A comprehensive survey. arXiv preprint arXiv:1702.05374.

Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL, pages 256-263.

Leon Derczynski, Kalina Bontcheva, and Ian Roberts. 2016. Broad Twitter corpus: A diverse named entity recognition resource. In Proceedings of COLING: Technical Papers, pages 1169-1179.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT.

Eraldo R. Fernandes and Ulf Brefeld. 2011. Learning from partially annotated sequences. In ECML-PKDD, volume 6911 of Lecture Notes in Computer Science, pages 407-422. Springer.

Tim Finin, William Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating named entities in Twitter data with crowdsourcing. In Proceedings of the NAACL-HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk.

Jenny Rose Finkel and Christopher D. Manning. 2009. Hierarchical Bayesian domain adaptation. In Proceedings of HLT-NAACL, pages 602-610.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1-59:35.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of ICML, pages 513-520.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of ICML, pages 2790-2799.

Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of NAACL-HLT.

Dirk Hovy, Barbara Plank, and Anders Søgaard. 2014. Experiments with crowdsourced re-annotation of a POS tagging data set. In Proceedings of ACL.

Pei-Yun Hsueh, Prem Melville, and Vikas Sindhwani. 2009. Data quality from crowdsourcing: A study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 27-35.

Fei Huang and Alexander Yates. 2010. Exploring representation-learning approaches to domain adaptation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing.

Ziheng Huang, Jialu Zhong, and Rebecca J. Passonneau. 2015. Estimation of discourse segmentation labels from crowd data. In Proceedings of EMNLP, pages 2190-2200.

Emily Jamison and Iryna Gurevych. 2015. Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks. In Proceedings of EMNLP, pages 291-297.

Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Cross-domain NER using cross-domain language modeling. In Proceedings of ACL, pages 2464-2474.

Chen Jia and Yue Zhang. 2020. Multi-cell compositional LSTM for NER domain adaptation. In Proceedings of ACL, pages 5906-5917.

Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In Proceedings of ACL, pages 264-271.

Faiza Khan Khattak and Ansaf Salleb-Aouissi. 2011. Quality control of crowd labeling through expert evaluation. In Proceedings of the NIPS 2nd Workshop on Computational Social Science and the Wisdom of Crowds, volume 2, page 5.

Aniket Kittur, Ed H. Chi, and Bongwon Suh. 2008. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the 2008 Conference on Human Factors in Computing Systems (CHI 2008), Florence, Italy, pages 453-456. ACM.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260-270.

Matthew Lease. 2011. On quality control and machine learning in crowdsourcing. In Human Computation, volume WS-11-11 of AAAI Workshops. AAAI.

Maolin Li, Hiroya Takamura, and Sophia Ananiadou. 2020. A neural model for aggregating coreference annotation in crowdsourcing. In Proceedings of COLING, pages 5760-5773.

Mengchen Liu, Liu Jiang, Junlin Liu, Xiting Wang, Jun Zhu, and Shixia Liu. 2017. Improving learning-from-crowds through expert validation. In Proceedings of IJCAI, pages 2329-2336.

Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.

Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems, volume 21, pages 1041-1048.

Stephen Mayhew, Nitish Gupta, and Dan Roth. 2020. Robust named entity recognition with truecasing pretraining. In AAAI 2020, pages 8480-8487.

An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of ACL, pages 299-309.

Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova, and Byron Wallace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Proceedings of ACL, pages 197-207.

Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of EMNLP.

Alan Ramponi and Barbara Plank. 2020. Neural unsupervised domain adaptation in NLP - A survey. In Proceedings of COLING, pages 6838-6855.

Vikas C. Raykar and Shipeng Yu. 2012a. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J. Mach. Learn. Res., 13:491-518.

Vikas C. Raykar and Shipeng Yu. 2012b. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J. Mach. Learn. Res., 13:491-518.

Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. 2010. Learning from crowds. J. Mach. Learn. Res., 11:1297-1322.

Filipe Rodrigues and Francisco C. Pereira. 2018. Deep learning from crowds. In Proceedings of AAAI.

Filipe Rodrigues, Francisco C. Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Mach. Learn., 95(2):165-181.

Ozan Sener, Hyun Oh Song, Ashutosh Saxena, and Silvio Savarese. 2016. Learning transferrable representations for unsupervised domain adaptation. In Advances in Neural Information Processing Systems.

Victor S. Sheng, Foster J. Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of KDD, pages 614-622.

Victor S. Sheng and Jing Zhang. 2019. Machine learning with crowdsourcing: A brief summary of the past research and future directions. Proceedings of the AAAI, 33(01):9837-9843.

Edwin Simpson and Iryna Gurevych. 2019. A Bayesian approach for sequence tagging with crowds. In Proceedings of EMNLP-IJCNLP.

Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP.

Wei Tang and Matthew Lease. 2011. Semi-supervised consensus labeling for crowdsourcing. In SIGIR 2011 Workshop on Crowdsourcing for Information Retrieval (CIR), pages 1-6.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL at HLT-NAACL 2003.

Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of EMNLP, pages 2302-2315.

Jing Wang, Mayank Kulkarni, and Daniel Preotiuc-Pietro. 2020. Multi-domain named entity recognition with genre-aware and agnostic inference. In Proceedings of ACL, pages 8476-8488.

Lu Wang and Zhi-Hua Zhou. 2016. Cost-saving effect of crowdsourcing learning. In Proceedings of IJCAI, pages 2111-2117.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of EMNLP: System Demonstrations, pages 38-45.

Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of COLING.

YaoSheng Yang, Meishan Zhang, Wenliang Chen, Wei Zhang, Haofen Wang, and Min Zhang. 2018. Adversarial learning for Chinese NER from crowd annotations. In Proceedings of AAAI.

Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self-training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10.

Omar F. Zaidan and Chris Callison-Burch. 2011. Crowdsourcing translation: Professional quality from non-professionals. In Proceedings of ACL-HLT, pages 1220-1229.

Jing Zhang, Xindong Wu, and Victor S. Sheng. 2016. Learning from crowdsourced labeled data: a survey. Artif. Intell. Rev., 46(4):543-576.

Han Zhao, Remi Tachet des Combes, Kun Zhang, and Geoffrey J. Gordon. 2019. On learning invariant representations for domain adaptation. In Proceedings of ICML, pages 7523-7532. PMLR.

Table 6 (contents, recovered): columns are Model and Text with Entities; underlined spans in the original mark errors.

Unsupervised
MV: Pace, a junior, helped [Ohio State]LOC to a 10-1 record and a berth in the Rose Bowl against [Arizona]ORG State.
LC-cat: Pace, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona]ORG State.
This Work: Pace, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona State]ORG.

Supervised (25%)
MV: Pace, a junior, helped [Ohio State]LOC to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona State]LOC.
Gold: [Pace]PER, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona]ORG State.
LC-cat: Pace, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona State]LOC.
This Work: [Pace]PER, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona State]ORG.

Ground-truth: [Pace]PER, a junior, helped [Ohio State]ORG to a 10-1 record and a berth in the [Rose Bowl]MISC against [Arizona State]ORG.
[ "https://github.com/google-research/bert" ]
[ "Structured Kernel Estimation for Photon-Limited Deconvolution", "Structured Kernel Estimation for Photon-Limited Deconvolution" ]
[ "Yash Sanghvi [email protected] \nSchool of Electrical and Computer Engineering\nPurdue University\n\n", "Zhiyuan Mao \nSchool of Electrical and Computer Engineering\nPurdue University\n\n", "Stanley H Chan [email protected] \nSchool of Electrical and Computer Engineering\nPurdue University\n\n" ]
[ "School of Electrical and Computer Engineering\nPurdue University\n", "School of Electrical and Computer Engineering\nPurdue University\n", "School of Electrical and Computer Engineering\nPurdue University\n" ]
[]
Images taken in a low light condition with the presence of camera shake suffer from motion blur and photon shot noise. While state-of-the-art image restoration networks show promising results, they are largely limited to well-illuminated scenes and their performance drops significantly when photon shot noise is strong.In this paper, we propose a new blur estimation technique customized for photon-limited conditions. The proposed method employs a gradient-based backpropagation method to estimate the blur kernel. By modeling the blur kernel using a low-dimensional representation with the key points on the motion trajectory, we significantly reduce the search space and improve the regularity of the kernel estimation problem. When plugged into an iterative framework, our novel low-dimensional representation provides improved kernel estimates and hence significantly better deconvolution performance when compared to end-to-end trained neural networks. The source code and pretrained models are available at https://github. com / sanghviyashiitb / structured -kernel -cvpr23
null
[ "https://export.arxiv.org/pdf/2303.03472v1.pdf" ]
257,378,061
2303.03472
24f8742e55182fdcfc7f937247f583f924618c36
Structured Kernel Estimation for Photon-Limited Deconvolution

Yash Sanghvi, Zhiyuan Mao, Stanley H. Chan
School of Electrical and Computer Engineering, Purdue University

Abstract

Images taken in a low-light condition in the presence of camera shake suffer from motion blur and photon shot noise. While state-of-the-art image restoration networks show promising results, they are largely limited to well-illuminated scenes, and their performance drops significantly when photon shot noise is strong. In this paper, we propose a new blur estimation technique customized for photon-limited conditions. The proposed method employs a gradient-based backpropagation method to estimate the blur kernel. By modeling the blur kernel using a low-dimensional representation with the key points on the motion trajectory, we significantly reduce the search space and improve the regularity of the kernel estimation problem. When plugged into an iterative framework, our novel low-dimensional representation provides improved kernel estimates and hence significantly better deconvolution performance than end-to-end trained neural networks. The source code and pretrained models are available at https://github.com/sanghviyashiitb/structured-kernel-cvpr23

Introduction

Photon-Limited Blind Deconvolution: This paper studies the photon-limited blind deconvolution problem. Blind deconvolution refers to simultaneously recovering both the blur kernel and the latent clean image from a blurred image; "photon-limited" refers to the presence of photon shot noise in images taken in low illumination or with a short exposure.
The corresponding forward model is as follows:

    y = Poisson(α h ⊛ x),    (1)

where ⊛ denotes convolution. In this equation, y ∈ R^N is the blurred-noisy image, x ∈ R^N is the latent clean image, and h ∈ R^M is the blur kernel. We assume that x is normalized to [0, 1] and that the entries of h are non-negative and sum to 1. The constant α represents the average number of photons per pixel and is inversely proportional to the amount of Poisson noise.

Deep Iterative Kernel Estimation: Blind image deconvolution has been studied for decades, with many successful algorithms including the latest deep neural networks [8, 24, 34, 42, 43]. Arguably, the adaptation from the traditional Gaussian noise model to the photon-limited Poisson noise model can be done by retraining the existing networks with appropriate data. However, the restoration is not guaranteed to perform well, because end-to-end networks seldom take the forward image formation model explicitly into account. Recently, people have started to recognize the importance of blur kernel estimation for photon-limited conditions. One of these works is by Sanghvi et al. [30], who propose an iterative kernel estimation method that backpropagates the gradient of an unsupervised reblurring function to update the blur kernel. However, as we can see in Figure 1, their performance is still limited when the photon shot noise is strong.

Structured Kernel Estimation: Inspired by [30], we believe that the iterative kernel estimation process and the unsupervised reblurring loss are useful. However, instead of searching for the kernel directly (which can easily lead to local minima because the search space is too big), we propose to search in a low-dimensional space by imposing structure on the motion blur kernel. To construct such a low-dimensional space, we frame the blur kernel in terms of the trajectory of the camera motion. A motion trajectory is often a continuous but irregular path in the two-dimensional plane.
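As a concrete illustration of the forward model in Eq. (1), the snippet below simulates a photon-limited blurred observation. The circular (FFT) convolution, the toy image, and the function names are illustrative assumptions for this sketch; the paper itself notes that symmetric boundary conditions are the realistic choice for natural blur.

```python
import numpy as np

def blur_circular(x, h):
    # Circular 2D convolution via the FFT (a simplifying assumption;
    # symmetric boundary handling would be more faithful to real blur).
    H = np.fft.fft2(h, s=x.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def photon_limited_observe(x, h, alpha, rng):
    # Sample y ~ Poisson(alpha * (h conv x)) as in Eq. (1);
    # alpha is the average number of photons per pixel.
    lam = np.clip(alpha * blur_circular(x, h), 0.0, None)
    return rng.poisson(lam).astype(np.float64)

# Example: a 5x5 horizontal motion kernel (entries sum to 1, as assumed)
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 1.0, (32, 32))   # toy "clean image" in [0, 1]
h = np.zeros((5, 5)); h[2, :] = 0.2
y = photon_limited_observe(x, h, alpha=40.0, rng=rng)
```

Dividing y by α gives an observation whose shot-noise SNR grows only like the square root of the photon count, which is why the α ∈ [10, 40] regime studied here is hard.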
To specify the trajectory, we introduce the concept of key point estimation, where we identify a set of anchor points of the kernel. By interpolating the path along these anchor points, we can then reproduce the kernel. Since the number of anchor points is significantly lower than the number of pixels in a kernel, we reduce the dimensionality of the kernel estimation problem.

The key contribution of this paper is as follows: We propose a new kernel estimation method called the Kernel Trajectory Network (KTN). KTN models the blur kernel in a low-dimensional and differentiable space by specifying key points of the motion trajectory. Plugging this low-dimensional representation into an iterative framework improves the regularity of the kernel estimation problem. This leads to substantially better blur kernel estimates in photon-limited regimes where existing methods fail.

Figure 1. The proposed Kernel Trajectory Network (KTN) on a real noisy blurred image from the Photon-Limited Deblurring Dataset (PLDD) [29]. The result for MPR-Net [42] was generated by retraining the network with the GoPro dataset [24] corrupted by Poisson noise. The inset images for "Sanghvi et al." [30] and "Ours" show the estimated kernel, and the inset image for "Ground-Truth" shows the kernel captured using a point source, as provided in PLDD.

Related Work

Traditional Blind Deconvolution: Classical approaches to the (noiseless) blind deconvolution problem [6, 7, 22, 32, 40] use a joint optimization framework in which both the kernel and the image are updated in an alternating fashion to minimize a cost function with kernel and image priors. For high-noise regimes, a combination of ℓ1 + TV priors has been used in [2]. Levin et al. [16] pointed out that this joint optimization framework for the blind deconvolution problem favours the no-blur degenerate solution, i.e., (x*, h*) = (y, I), where I is the identity operator.
Some methods model the blur kernel in terms of the camera trajectory and then recover both the trajectory and the clean image using optimization [12, 38, 39] and supervised-learning techniques [11, 33, 46]. For the non-blind case, i.e., when the blur kernel is assumed to be known, the Poisson deconvolution problem has been studied for decades, starting from the Richardson-Lucy algorithm [21, 26]. More contemporary methods include Plug-and-Play [28, 29], PURE-LET [18], and MAP-based optimization methods [10, 13].

Deep Learning Methods: In recent years, many deep learning-based methods [5, 31] have been proposed for the blind image deblurring task. The most common strategy is to train a network end-to-end on large-scale datasets, such as the GoPro [24] and RealBlur [27] datasets. Notably, many recent works [8, 24, 34, 42, 43] improve the performance of deblurring networks by adopting multi-scale strategies, where the training follows a coarse-to-fine setting that resembles the iterative approach. Generative adversarial network (GAN) based deblurring methods [3, 14, 15, 44] have also been shown to produce visually appealing images. Zamir et al. [41] and Wang et al. [37] adapt the popular vision transformers to image restoration problems and demonstrate competitive performance on the deblurring task.

Neural Networks and Iterative Methods: While neural networks have shown state-of-the-art performance on the deblurring task, another class of methods combining iterative methods with deep learning has shown promising results. Algorithm unrolling [23], where an iterative method is unrolled for a fixed number of iterations and trained end-to-end, has been applied to image deblurring [1, 19]. In SelfDeblur [25], the authors use Deep Image Prior [35] to represent the image and blur kernel and obtain state-of-the-art blind deconvolution performance.

Method

Kernel as Structured Motion Estimation

Camera motion blur can be modeled as a latent clean image x convolved with a blur kernel h.
If we assume the blur kernel lies in a window of size 32 × 32, then h ∈ R^1024. However, in this high-dimensional space, only a few entries of the blur kernel h are non-zero. Additionally, the kernel is generated from a two-dimensional trajectory, which suggests that a simple sparsity prior is not sufficient. Given the difficulty of the photon-limited deconvolution problem, we need to impose a stronger prior on the kernel. To this end, we propose a differentiable and low-dimensional representation of the blur kernel, which we will use as the search space in our kernel estimation algorithm.

We take the two-dimensional trajectory of the camera during the exposure time and divide it into K "key points". Each key point represents either the start, the end, or a change in direction of the camera trajectory, as seen in Figure 2. Given the K key points as points mapped out in x-y space, we can interpolate them using cubic splines to form a continuous trajectory in 2D. To convert this continuous trajectory to an equivalent blur kernel, we assume a point source image and move it through the given trajectory. The resulting frames are then averaged to give the corresponding blur kernel, as shown in Figure 2.

Figure 2 (caption): We learn a differentiable representation from the vectorized K key points to a blur kernel using a neural network. This lower-dimensional and differentiable representation is leveraged to estimate a better blur kernel and to avoid local minima during inference.

Given the formulation of the blur kernel h in terms of K key points, we now need to put this representation into a differentiable form, since we intend to use it in an iterative scheme. To achieve this, we learn the transformation from the key points to the blur kernels using a neural network, which will be referred to as the Kernel Trajectory Network (KTN), and represent it using a differentiable function T(·). Why differentiability is important to us will become clear in the next subsection.
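The key-points-to-kernel conversion described above can be sketched in a few lines. This is a simplification, not the paper's pipeline: it substitutes linear interpolation and nearest-pixel rasterization for the cubic splines and warpPerspective-based frame averaging, and the function name and grid parameters are illustrative assumptions.

```python
import numpy as np

def keypoints_to_kernel(pts, size=32, samples=512):
    # Densely sample a piecewise-linear path through the key points.
    pts = np.asarray(pts, dtype=float)
    t = np.linspace(0.0, len(pts) - 1.0, samples)
    i = np.clip(t.astype(int), 0, len(pts) - 2)
    frac = (t - i)[:, None]
    path = (1.0 - frac) * pts[i] + frac * pts[i + 1]
    # Center the trajectory in the kernel window and rasterize it:
    # each path sample deposits one unit, mimicking frame averaging.
    path = path - path.mean(axis=0) + size / 2.0
    k = np.zeros((size, size))
    for x, y in path:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size and 0 <= yi < size:
            k[yi, xi] += 1.0
    return k / k.sum()

# Example: K = 3 key points (start, direction change, end)
k = keypoints_to_kernel([(0.0, 0.0), (6.0, 4.0), (10.0, 0.0)])
```

The output is a non-negative kernel that sums to 1, matching the normalization assumed in Eq. (1), while being parameterized by only 2K numbers.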
To train the Kernel Trajectory Network, we generate training data as follows. First, for a fixed K, we obtain K key points by starting from (0, 0) and choosing the next K − 1 points by successively adding a random vector with a uniformly chosen direction, i.e., U[0, 360], and a length chosen uniformly from U[0, 100/(K − 1)]. Next, the set of key points is converted to a continuous smooth trajectory using bicubic interpolation. Then, we move a point source image through the given trajectory using the warpPerspective function in OpenCV and average the resulting frames. Using the process defined above, we generate 60,000 blur kernels and their corresponding key point representations. For the Kernel Trajectory Network T(·), we take a U-Net-like network with the first half replaced by 3 fully connected layers and train it with the generated data.

Proposed Iterative Scheme

We described in the previous subsection how to obtain a low-dimensional and differentiable representation T(·) for the blur kernel, and now we are ready to present the full iterative scheme in detail. The proposed iterative scheme can be divided into three stages, summarized as follows. We first generate an initial estimate of the direction and magnitude of the blur. This is used as the initialization for a gradient-based scheme in Stage I, which searches for the appropriate kernel representation in the latent space z. This is followed by Stage II, where we fine-tune the kernel obtained from Stage I using a similar process.

Initialization

Before starting the iterative scheme, we need a light-weight initialization method. This is important because of the multiple local minima in the kernel estimation process. We choose to initialize the method with a rectilinear motion kernel, parameterized by length ρ and orientation θ. To determine the length and orientation of the kernel, we use a minor variation of the kernel estimation in PolyBlur [9].
In this variation, the "blur-only image" G(y) is used as the input, and ρ, θ for the initial kernel are estimated using the minimum of the directional gradients. We refer the reader to Section II in the supplementary document for further details on the initialization. The "blur-only image" is explained when we describe Stage I of the scheme.

Stage I: Kernel Estimation in Latent Space

Given an initial kernel, we choose the initial latent z^0 by dividing the rectilinear kernel into K key points. Following the framework in [30], we run a gradient-descent-based scheme which optimizes the following reblurring loss:

    L(z) := ‖G(y) − h_z ⊛ F(y, h_z)‖_2^2,    (2)

where h_z := T(z) is the kernel output of the Kernel Trajectory Network T(·) given the vectorized key-point representation z. F(·) is the Poisson non-blind deconvolution solver, which takes both the noisy-blurred image and a blur kernel as input. G(·) is a denoiser trained to remove only the noise from the noisy-blurred image. The overall cost function represents the reblurring loss, i.e., how well the kernel estimate and the corresponding image estimate h_z ⊛ F(y, h_z) match the blur-only image G(y). To minimize the cost function in (2), we use a simple gradient-descent-based iterative update for z:

    z^{k+1} = z^k − δ ∇_z L(z^k),    (3)

where δ > 0 is the step size and ∇_z L(z^k) is the gradient of the cost function L with respect to z, evaluated at z^k and obtained by backpropagation. It should be noted that the cost function is evaluated using the non-blind solver F(·) and the Kernel Trajectory Network T(·), two neural network computations. Therefore, we can compute the gradient ∇_z L(z^k) using the auto-differentiation tools provided in PyTorch, by backpropagating the gradients through F(·) and then T(·).

Stage II: Kernel Fine-Tuning

In the second stage, using the kernel estimate of Stage I, we fine-tune the kernel by "opening up" the search space to the entire kernel vector instead of parametrizing it by T(·). Specifically, we optimize the following loss function:

    L(h) := ‖G(y) − h ⊛ F(y, h)‖_2^2 + γ ‖h‖_1.    (4)

Note the presence of the second term, which acts as an ℓ1-norm sparsity prior. Also, the kernel vector h is being optimized instead of the latent key-point vector z. Using variable splitting, as in Half-Quadratic Splitting (HQS), we convert the optimization problem in (4) into

    L(h, v) = ‖G(y) − h ⊛ F(y, h)‖_2^2 + γ ‖v‖_1 + (μ/2) ‖h − v‖_2^2    (5)

for some hyperparameter μ > 0. This leads to the following iterative updates:

    h^{k+1} = h^k − δ (∇_h L(h^k) + μ (h^k − v^k)),    (6)
    v^{k+1} = max(|h^{k+1}| − γ/μ, 0) · sign(h^{k+1}) =: S_{γ/μ}(h^{k+1}).    (7)

Algorithm 1: Iterative Poisson Deconvolution Scheme
 1: Input: noisy-blurry y, photon level α, denoiser G(·), non-blind solver F(·), Kernel Trajectory Network T(·)
 2: Initialize z^0 using the method described in Algorithm 1 of the supplementary document
 3: for k = 0, 1, 2, ... do                    % Stage I begins here
 4:   h_z^k ← T(z^k)
 5:   L(z) ← ‖G(y) − h_z^k ⊛ F(y, h_z^k)‖_2^2
 6:   compute ∇_z L(z^k) using automatic differentiation
 7:   z^{k+1} ← z^k − δ ∇_z L(z^k)
 8: end for
 9: h^0, v^0 ← T(z^∞);  μ ← 2.0;  γ ← 10^{−4}
10: for k = 0, 1, 2, ... do                    % Stage II begins here
11:   L(h) ← ‖G(y) − h ⊛ F(y, h)‖_2^2
12:   compute ∇_h L(h^k) using automatic differentiation
13:   h^{k+1} ← h^k − δ (∇_h L(h^k) + μ (h^k − v^k))
14:   v^{k+1} ← S_{γ/μ}(h^{k+1})
15:   μ ← 1.01 μ
16: end for
17: return h^(∞) and x^(∞) = F(y, h^(∞))

Experiments

Training

While our overall method is not end-to-end trained, it contains pre-trained components, namely the non-blind solver F(·) and the denoiser G(·). The architectures of F(·) and G(·) are inherited from PhD-Net [29], which takes as input a noisy-blurred image and a kernel. For the denoiser G(·), we fix the kernel input to the identity operator, since it is trained to remove only the noise from the noisy-blurred image y. F(·) and G(·) are trained using synthetic data as follows.
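Before the training details, here is a toy, self-contained sketch of the scheme's numerical ingredients: the rectilinear key-point initialization described in the supplement, the gradient update with the step-halving backtracking rule from the implementation details, and the soft-thresholding operator S_{γ/μ} of Eq. (7). The quadratic toy loss is an assumption standing in for the reblurring loss, which in the paper requires the networks F(·) and T(·).

```python
import math
import numpy as np

def rectilinear_keypoints(rho, theta_deg, K):
    # Initialization: K key points evenly spaced along a line of length
    # rho at orientation theta, vectorized as [x0, y0, ..., x_{K-1}, y_{K-1}]
    th = math.radians(theta_deg)
    pts = [(k * rho / (K - 1) * math.cos(th),
            k * rho / (K - 1) * math.sin(th)) for k in range(K)]
    return np.array([c for p in pts for c in p])

def soft_threshold(h, tau):
    # S_tau(h) = max(|h| - tau, 0) * sign(h): the v-update of Eq. (7)
    return np.maximum(np.abs(h) - tau, 0.0) * np.sign(h)

def descend_with_backtracking(loss, grad, z0, step, iters=150):
    # z <- z - delta * grad L(z); halve delta whenever the cost increases,
    # mirroring the step-size rule described in the implementation details
    z = np.asarray(z0, dtype=float)
    delta, val = step, loss(np.asarray(z0, dtype=float))
    for _ in range(iters):
        cand = z - delta * grad(z)
        v = loss(cand)
        if v > val:
            delta *= 0.5      # reject the step and shrink the step size
            continue
        z, val = cand, v
    return z
```

In the real scheme the loss is the reblurring loss of Eqs. (2) and (4) and its gradient comes from autodifferentiation through F(·) and T(·); here any differentiable loss/gradient pair can be plugged in.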
We take clean images from the Flickr2K dataset [20] and blur kernels generated with the code of Boracchi and Foi [4]. The blurred images are also corrupted by Poisson shot noise with photon levels α uniformly sampled from [1, 60]. The non-blind solver F(·) is trained using the kernel and the noisy-blurred image as input and the clean image as the target. The denoiser G(·) is trained with a similar procedure, but with the blurred-noisy image as the only input and the blur-only image as the target. The training processes, along with the other experiments described in this paper, are implemented in PyTorch on an NVIDIA Titan Xp GPU.

For quantitative comparison, we retrain the following state-of-the-art networks for Poisson noise: Scale Recurrent Network (SRN) [34], DeepDeblur [24], DHMPN [43], MPR-Net [42], and MIMO-UNet+ [8]. We perform this retraining in two different ways. First, we use synthetic-data training as described for F(·) and G(·). Second, for testing on realistic blur, we retrain the networks using the GoPro dataset [24], as it is often used to train neural networks in the contemporary deblurring literature. We add Poisson noise with the same distribution as in the synthetic datasets to the blurred images. While retraining the networks, we use the respective loss functions from the original papers for the sake of a fair comparison.

Quantitative Comparison

We quantitatively evaluate the proposed method on three different datasets and compare it with state-of-the-art deblurring methods. In addition to the end-to-end trained methods described previously, we also compare our approach with the following Poisson deblurring methods: Poisson Plug-and-Play [28] and PURE-LET [18]. Even though these methods assume the blur kernel to be known, we include them in the quantitative comparison since they are specifically designed for Poisson noise.
For all of the methods described above, we compare the restored image quality using PSNR, SSIM, and the Learned Perceptual Image Patch Similarity (LPIPS-Alex, LPIPS-VGG) [45]. We include the latter as another metric in our evaluation since the failure of MSE/SSIM to assess image quality has been well documented in [36, 45].

BSD100: First, we evaluate our method on synthetic blur as follows. We collect 100 random images from the BSD-500 dataset, blur them synthetically with motion kernels from the Levin dataset [17], and add Poisson noise at photon levels α = 10, 20, and 40. The results of the quantitative evaluation are provided in Table 1. Since the blur is synthetic, the ground-truth kernel is known and can hence be used to simultaneously evaluate Poisson non-blind deblurring methods, i.e., Poisson Plug-and-Play, PURE-LET, and PhD-Net. The last method is the non-blind solver F(·) and serves as an upper bound on performance.

Levin Dataset: Next, we evaluate our method on the Levin dataset [17], which contains 32 real blurred images along with the ground-truth kernels, as measured through a point source. We evaluate our method on this dataset with the addition of Poisson noise at photon levels α = 10, 20, and 40; the results are shown in Table 2. For a fair comparison, the end-to-end trained methods are retrained using synthetically blurred data (as described in Section IV-A) for evaluation on the BSD100 and Levin datasets.

RealBlur-J [27]: To demonstrate that our method is able to handle realistic blur, we evaluate our performance on 50 randomly selected patches of size 256 × 256 from the RealBlur-J [27] dataset. Note that we reduce the size of the tested image because our method is based on a single-blur convolutional model. Such a model may not be applicable to a large image with spatially varying blur and local motion of objects. However, for a smaller patch of a larger image, the single-blur-kernel model of deconvolution is a much more valid assumption.
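For reference, the PSNR fidelity metric reported in these tables can be computed as below. This is the standard definition over flattened pixel lists, stated here as a convenience rather than code from the paper.

```python
import math

def psnr(x, y, peak=1.0):
    # PSNR = 10 log10(peak^2 / MSE) between reference x and estimate y,
    # both given as flattened pixel sequences on the same scale
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher is better; note that, as the Table 3 caption points out, pixelwise metrics like this one tend to disadvantage methods that are not trained end-to-end, which is why LPIPS is reported alongside it.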
To ensure a fair comparison, we evaluate the end-to-end networks after retraining on both the synthetic and the GoPro datasets. As shown in Table 3, we find that the end-to-end networks perform consistently better on the RealBlur dataset when trained on the GoPro dataset instead of synthetic blur. This can be explained by the fact that both GoPro and RealBlur contain realistic blur, which is not necessarily captured by a single-blur convolutional model.

Table 3. Performance on the RealBlur-J dataset with realistic blur [27]: Bold and underline refer to the overall best-performing method and the best method trained on synthetic data, respectively. It should be noted that methods that are not trained end-to-end are usually at a disadvantage when compared on metrics like PSNR. However, it can be seen that our reconstruction is generally preferred by the other, perceptual metrics.

Qualitative Comparison

Color Reconstruction: We show reconstructions on examples from the RealBlur dataset in Figure 4. While our method is grayscale, we perform colour reconstruction by estimating the kernel from the luminance channel. Given the estimated kernel, we deblur each channel of the image using the non-blind solver and then combine the different channels into a single RGB image. Note that all qualitative examples in this paper for the end-to-end trained networks use models trained on the GoPro dataset, since they provide the better visual result.

Photon-Limited Deblurring Dataset: We also show qualitative examples from the Photon-Limited Deblurring Dataset [29], which contains the raw sensor data of 30 images blurred by camera shake and taken in extremely low illumination. For reconstructing these images, we average the R, G, B channels of the Bayer-pattern image and then reconstruct the result using the given method. The qualitative results for this dataset can be found in Figure 5. We also show the estimated kernels, along with the estimated kernels from [30, 40], in Figure 6. However, instead of using the reblurring loss directly, we find that the scheme is more numerically stable if we take the gradients of the image first and then estimate the reblurring loss. This can be explained by the fact that, unlike for simulated data, the photon level is not known exactly and is estimated from the sensor data itself by a simple heuristic. For further details on how to use the sensor data, we refer the reader to [29].

Table 2. Performance on the Levin dataset with realistic camera shake blur [16]. The best-performing blind deconvolution method for each metric and photon level is shown in bold; non-blind deconvolution methods are shown for reference in grey columns.

Ablation Study

In Table 4, we provide an ablation study by running the scheme with different numbers of key points, i.e., K = 4, 6, and 8, and without the KTN (K = 0), on the RealBlur dataset. Through this study, we demonstrate the effect the Kernel Trajectory Network has on the iterative scheme. As expected, changing the search space for kernel estimation improves the performance significantly across all metrics. Increasing the number of key points used for representing kernels also steadily improves the performance of the scheme, which can be explained by the larger number of degrees of freedom.

Conclusion

In this paper, we use an iterative framework for the photon-limited blind deconvolution problem. More specifically, we use a non-blind solver which can deconvolve Poisson-corrupted and blurred images given a blur kernel. To mitigate the ill-posedness of kernel estimation in such high-noise regimes, we propose a novel low-dimensional
By using this novel low-dimensional and differentiable representation as a search space, we show state-of-the-art deconvolution performance and outperform end-to-end trained image restoration networks by a significant margin. We believe this is a promising direction of research for both deblurring and general blind inverse problems i.e., inverse problems where the forward operator is not fully known. Future work could involve a supervised version of this scheme which does not involve backpropagation through a network as it would greatly reduce the computational cost. Another possible line of research could be to apply this framework to the problem of spatially varying blur. Input SRN DHMPN MPR-Net Ours Non-Blind Ground-Truth Figure 5. Visual comparisons on Photon-Limited Deblurring Dataset. Qualitative results on realistic blurred and photon-limited images from the Photon-Limited Deblurring dataset [29].The inset image for "Ours" and "Non-Blind" represent the estimated and ground-truth kernel respectively. For a more extensive set of qualitative results, we refer the reader to the supplementary document. Two-Phase [40] Sanghvi et. al [30] Ours Ground-Truth Overview of Camera Noise Model In this section, we present an overview of different camera noise sources followed by a justification of the Poisson noise model used for photon-limited settings. Consider the sensor output of i th pixel of camera, denoted as Y i . From [10], we model Y i as the following random variable: Y i ∼ K d K a (P(I i ) + η a ) + η d + η q(1) Here I i represents the average number of incident photons during the exposure. K a , K d represent the analog and digital gain respectively. η a represents noise sources before the analog gain (dark current shot noise, flicker noise etc.) and η d represents noise sources before the digital gain (thermal noise, fixed pattern noise). η q represent the quantization noise. From Eq. (1), we can see view Y i is a noisy measurement of the parameter I i . Eq. 
(1) can be simplified in the following way:

    Ỹ_i ∼ P(I_i) + η_a + (1/K_a)(η_d + η_q),    (2)

where Ỹ_i := Y_i / (K_a K_d) is the normalized pixel measurement. The noise sources can thus be decoupled into the signal-dependent Poisson term and the signal-independent terms. The latter are often approximated as zero-mean Gaussian noise in the literature [2, 3, 6], leading to the following Poisson-Gaussian mixture model:

    Y_i ∼ P(I_i) + Z_i,    Z_i ∼ N(0, σ^2).    (3)

We can further break down the average number of incident photons as I_i := α x_i, where α is a function of camera parameters such as exposure time, quantum efficiency, and sensor area, and x_i is the scene radiance corresponding to the i-th pixel. This helps us decouple the scene and camera characteristics in the signal I_i. Therefore, we arrive at the following camera model, in which Y_i is the measurement of the true signal x_i:

    Y_i ∼ P(α x_i) + N(0, σ^2).    (4)

Poisson Noise SNR

For this subsection, we ignore the Gaussian term in Eq. (4), i.e., σ = 0, to focus on the nature of the Poisson noise. An interesting property of a Poisson random variable is that its mean and variance are equal. Thus, the signal-to-noise ratio of Ỹ_i, in the absence of Gaussian noise, is given by

    SNR(Ỹ_i) := E[Ỹ_i] / sqrt(Var[Ỹ_i]) = sqrt(α x_i).    (5)

This implies that our measurements get noisier with a decreasing number of incident photons. If the scene is not well illuminated (low x_i) or the exposure is short (small α), the number of photons incident on the sensor is low, leading to noisier images.

Photon-Limited Scenes

Scenes where the Poisson noise dominates the other sources of noise in the measurements are called photon-limited [4]. For the random variable Ỹ_i defined in (4), this occurs when the variance due to Poisson noise is greater than the variance due to Gaussian noise, i.e., α x_i ≥ σ^2. However, for the purpose of this paper, not all photon-limited scenes are equally significant.
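The square-root law of Eq. (5) is easy to sanity-check numerically; the helper below is a direct transcription of the definition, with the sample values chosen only for illustration.

```python
import math

def pure_shot_noise_snr(alpha, x):
    # Eq. (5): a Poisson measurement has mean = variance = alpha * x,
    # so SNR = E[Y] / sqrt(Var[Y]) = sqrt(alpha * x)
    return (alpha * x) / math.sqrt(alpha * x)
```

Quadrupling the photon count therefore only doubles the SNR, and the Gaussian read noise of the full mixture model adds σ^2 to the variance, lowering the ratio further.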
To emphasize this point, we inspect the SNR of the Poisson-Gaussian mixture Ỹ_i for different levels of α. The SNR of Ỹ_i in the presence of both Poisson and Gaussian noise is given by

    SNR(Ỹ_i) := E[Ỹ_i] / sqrt(Var[Ỹ_i]) = α x_i / sqrt(α x_i + σ^2).    (6)

Consider the case of read noise σ = 1.6 e− [7]. We assume x_i = 1 and inspect the random variable Y_i for different α in the photon-limited regime, as shown in Table 1. From the table we can conclude the following: images taken in well-illuminated scenes with good exposure can be approximated as noiseless for the purpose of deblurring. However, at the other end of the photon-limited regime, i.e., α = σ^2, there is too much noise in the image for any meaningful recovery from a single frame. Therefore, in this paper, we explore photon-limited deconvolution for α ∈ [10, 40], where shot noise dominates read noise but there is still a possibility of extracting the clean image.

Initialization Algorithm

In Algorithm 1 below, we describe the kernel initialization method for the proposed scheme. This scheme is a minor variation of the kernel estimation method from [1] and is used to estimate a rectilinear kernel with parameters {ρ, θ} from the blur-only image G(y). We would like to reiterate that the scheme in Algorithm 1 is not the kernel estimation process in its entirety and only represents the initialization; the kernel estimated at the end of this algorithm is further refined in Stages I and II of the main iterative scheme.

Algorithm 1: Kernel Initialization
 1: Input: blur-only image G(y), number of key points K
 2: Estimate gradient images D_x, D_y from G(y)
 3: for θ = 1, 2, ..., 180 do
 4:   D_θ ← D_x cos(θ) − D_y sin(θ)
 5:   f_θ ← max(|D_θ|)
 6: end for
 7: f_θ̂, θ̂ ← min_θ f_θ
 9: (x_0, y_0) ← (0, 0)
10: for k = 1, 2, ..., K − 1 do
11:   (x_k, y_k) ← (kρ/(K−1) cos(θ), kρ/(K−1) sin(θ))
12: end for
13: z^0 ← [x_0, y_0, x_1, y_1, ..., x_{K−1}, y_{K−1}]^T
14: return z^0

Qualitative Comparison

Extended qualitative results comparing the end-to-end trained methods to our approach are provided in the supplementary document.
Figure 3 contains examples from the PLDD dataset [9], which contains real shot-noise-corrupted and blurred image sensor data along with the ground-truth kernel, as measured using a point source. Finally, Figure 4 provides reconstruction results on synthetically blurred images using motion kernels from the Levin dataset [5].

KTN Architecture and Training

The KTN architecture can be summarized as follows: the vectorized control points of dimension $2 \times (K-1)$ are passed through 3 fully-connected layers followed by reshaping into an image. The reshaped image is then passed through the decoding half of a U-Net to give the kernel output. The final output, when used in the iterative scheme, is clipped at zero and then normalized to one. Architectural details of the KTN are provided in Figure 5.

Implementation Details

Boundary Conditions: When blurring an image synthetically, the boundary conditions are important to take into account. While circular boundary conditions allow the blur operator to be written in terms of FFTs, making it computationally inexpensive, they are not a realistic assumption for natural blur. A more appropriate boundary condition to assume is the symmetric one. This has major implications for our inverse-problem scheme. First, since PhD-Net assumes circular boundary conditions, we need to pad the image symmetrically, pass it through PhD-Net, and crop out the relevant portion to deblur the image without any artifacts. Second, when calculating the reblurring loss, $h_z * F(y, h_z)$ needs to be calculated using symmetric boundary conditions.

Step Size and Backtracking: For Stage I, we set the initial step size to $\delta = 10^5$, and for Stage II we set $\delta = 2.0$. At every iteration, we check whether the current choice of step size decreases the cost function or not. If it doesn't, the step size is reduced by half for the rest of the iterative scheme, until the next time the cost function increases instead of decreasing.
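The pad-then-crop workaround described above can be sketched as follows; here a plain FFT-based circular convolution stands in for the circular-boundary operator assumed by PhD-Net, and all image and kernel sizes are illustrative.

```python
import numpy as np

def circ_conv(img, k):
    """'Same'-centered circular (FFT) convolution, i.e. the circular-boundary operator."""
    kp = np.zeros_like(img)
    kh, kw = k.shape
    kp[:kh, :kw] = k
    # shift the kernel so its center sits at the origin of the FFT grid
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def symm_conv(img, k):
    """Symmetric-boundary blur: pad symmetrically, apply the circular operator, crop."""
    m = k.shape[0] // 2
    pad = np.pad(img, m, mode='symmetric')
    return circ_conv(pad, k)[m:-m, m:-m]

def naive_symm_conv(img, k):
    """Direct convolution with symmetric boundary handling, used only as a check."""
    H, W = img.shape
    m = k.shape[0] // 2
    refl = lambda i, n: -1 - i if i < 0 else (2 * n - 1 - i if i >= n else i)
    out = np.zeros_like(img)
    for y in range(H):
        for xx in range(W):
            out[y, xx] = sum(k[dy + m, dx + m] * img[refl(y - dy, H), refl(xx - dx, W)]
                             for dy in range(-m, m + 1) for dx in range(-m, m + 1))
    return out

rng = np.random.default_rng(1)
img = rng.random((12, 12))
ker = rng.random((5, 5)); ker /= ker.sum()
print(np.allclose(symm_conv(img, ker), naive_symm_conv(img, ker)))
```

Because the crop discards every output pixel whose support touched the circular wrap-around, the padded-circular result coincides exactly with a true symmetric-boundary convolution.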
Note that $\delta$ is set very large in the first stage compared to the second. This is because the gradients are backpropagated through two networks, i.e. $F(\cdot)$ and $T(\cdot)$, instead of one, leading to the vanishing-gradient problem and hence justifying the larger step size.

Computational Time: For each stage of the iterative scheme, we limit the number of iterations to 150. The experiments in the main document are performed on an Nvidia TitanX GPU, and take approximately 0.35 seconds per iteration.

Figure 1. Qualitative examples on synthetic blur: "Non-Blind" is provided for reference and serves as an upper bound on the deconvolution performance. It is obtained through PhD-Net with the noisy-blurred image and ground-truth kernel as inputs. The kernels in the insets of "Ours" and "Non-Blind" represent the estimated and true blur kernels, respectively.

Figure 2. Blur as structured motion estimation: in our formulation, we view the blur kernel as the continuous camera trajectory reduced to K key points, as shown in the top half of the figure.

Figure 3. Flowchart describing the first stage of the proposed scheme. We estimate the motion kernel of the blurry, noisy image in a lower-dimensional latent space z, where the blur kernel is represented by T(z), by minimizing the reblurring loss L as defined in Equation 2.

Figure 4. Qualitative example on the RealBlur dataset: for a more extensive set of results, we refer the reader to the supplementary document.

Figure 6. Estimated kernels for different methods: we show the estimated kernels from two examples from the PLDD dataset. Two-Phase [40] uses the blur-only image G(y) as input, and the ground-truth kernel is estimated using a point source.

Algorithm 1 (initialization, steps 1-11):
 1: Input: blur-only image $G(y)$, number of key points $K$
 2: Estimate gradient images $D_x, D_y$ from $G(y)$
 3: for $\theta = 1, 2, \ldots, 180$ do
 4:   $D_\theta \leftarrow D_x \cos(\theta) - D_y \sin(\theta)$
 5:   $f_\theta \leftarrow \max(|D_\theta|)$
 6: end for
 7: $f_{\hat{\theta}}, \hat{\theta} \leftarrow \min_\theta f_\theta$
 9: $(x_0, y_0) \leftarrow (0, 0)$
10: for $k = 1, 2, \ldots, K-1$ do
11: ...
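The initialization steps above can be sketched in a few lines; this is an illustrative reimplementation, not the authors' code. We assume the second term of step 4 uses $D_y$ (the printed listing repeats $D_x$, likely a typo), and we treat the line length $\rho$ as a given parameter since its estimation is not shown in the recovered listing.

```python
import numpy as np

def init_key_points(G, K, rho):
    """Sketch of Algorithm 1: pick the blur angle theta from gradient images,
    then place K key points on a line of length rho at that angle."""
    Dy, Dx = np.gradient(G)                      # gradient images of G(y)
    thetas = np.deg2rad(np.arange(1, 181))
    # f_theta = max |Dx cos(theta) - Dy sin(theta)|; the blur direction minimizes it
    f = [np.max(np.abs(Dx * np.cos(t) - Dy * np.sin(t))) for t in thetas]
    t_hat = thetas[int(np.argmin(f))]
    ks = np.arange(K)
    xs = ks * rho / (K - 1) * np.cos(t_hat)
    ys = ks * rho / (K - 1) * np.sin(t_hat)
    z0 = np.stack([xs, ys], axis=1).ravel()      # [x0, y0, ..., x_{K-1}, y_{K-1}]
    return z0, t_hat

G = np.add.outer(np.arange(32.0), np.zeros(32))  # toy "blur-only" image
z0, t_hat = init_key_points(G, K=10, rho=8.0)
print(z0.shape)
```

The returned vector $z_0$ is the latent variable that Stages I and II subsequently refine through the Kernel Trajectory Network.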
Figure 2. More qualitative examples on the RealBlur dataset.

Figure 3. More comparisons on the Photon-Limited Deblurring Dataset [9].

Figure 4. Qualitative examples on synthetic blur: "Non-Blind" is provided for reference and serves as an upper bound on the deconvolution performance. It is obtained through PhD-Net with the noisy-blurred image and ground-truth kernel as inputs. The kernels in the insets of "Ours" and "Non-Blind" represent the estimated and true blur kernels, respectively.

Figure 5. Kernel Trajectory Network architecture: 3 fully connected layers followed by a U-Net decoder.

Table 1. Performance on the BSD100 dataset with synthetic blur. ↑ marks metrics where higher means better, and vice versa for ↓. LPIPS-Alex and LPIPS-VGG represent the perceptual measures from [45]. The best-performing blind deconvolution method for each metric and photon level is shown in bold.
The non-blind deconvolution methods are shown for reference in grey columns.

Table 1 (BSD100, synthetic blur):

| Photon level | Metric | SRN [34] | DHMPN [43] | DeepDeblur [24] | MIMO-UNet+ [8] | MPRNet [42] | Ours | P4IP [28] | PURE-LET [18] | PhD-Net [29] |
| α = 10 | PSNR ↑ | 20.71 | 20.89 | 21.17 | 21.04 | 21.09 | 21.57 | 19.26 | 22.49 | 23.00 |
| | SSIM ↑ | 0.386 | 0.391 | 0.401 | 0.356 | 0.393 | 0.471 | 0.348 | 0.485 | 0.500 |
| | LPIPS-Alex ↓ | 0.681 | 0.702 | 0.656 | 0.733 | 0.678 | 0.560 | 0.733 | 0.588 | 0.544 |
| | LPIPS-VGG ↓ | 0.646 | 0.652 | 0.627 | 0.683 | 0.641 | 0.587 | 0.674 | 0.607 | 0.567 |
| α = 20 | PSNR ↑ | 20.79 | 21.03 | 21.30 | 21.36 | 21.25 | 21.93 | 19.45 | 22.94 | 23.63 |
| | SSIM ↑ | 0.392 | 0.401 | 0.410 | 0.396 | 0.405 | 0.483 | 0.353 | 0.516 | 0.540 |
| | LPIPS-Alex ↓ | 0.683 | 0.688 | 0.666 | 0.660 | 0.667 | 0.542 | 0.726 | 0.526 | 0.500 |
| | LPIPS-VGG ↓ | 0.639 | 0.640 | 0.621 | 0.663 | 0.631 | 0.578 | 0.668 | 0.584 | 0.539 |
| α = 40 | PSNR ↑ | 20.89 | 21.15 | 21.43 | 21.63 | 21.41 | 21.62 | 20.18 | 23.48 | 24.38 |
| | SSIM ↑ | 0.409 | 0.418 | 0.425 | 0.441 | 0.428 | 0.527 | 0.372 | 0.561 | 0.593 |
| | LPIPS-Alex ↓ | 0.677 | 0.673 | 0.673 | 0.586 | 0.647 | 0.488 | 0.706 | 0.467 | 0.446 |
| | LPIPS-VGG ↓ | 0.629 | 0.626 | 0.612 | 0.639 | 0.613 | 0.549 | 0.660 | 0.557 | 0.503 |

Second results table (caption not recovered in extraction; the "Blind?" / "End-To-End Trained?" attribute rows are omitted), same column layout as above:

| Photon level | Metric | SRN [34] | DHMPN [43] | DeepDeblur [24] | MIMO-UNet+ [8] | MPRNet [42] | Ours | P4IP [28] | PURE-LET [18] | PhD-Net [29] |
| α = 10 | PSNR ↑ | 20.26 | 20.50 | 20.93 | 21.25 | 21.04 | 22.01 | 19.92 | 21.63 | 22.41 |
| | SSIM ↑ | 0.510 | 0.509 | 0.524 | 0.516 | 0.533 | 0.611 | 0.463 | 0.590 | 0.638 |
| | LPIPS-Alex ↓ | 0.507 | 0.521 | 0.496 | 0.594 | 0.479 | 0.340 | 0.546 | 0.371 | 0.341 |
| | LPIPS-VGG ↓ | 0.531 | 0.526 | 0.518 | 0.661 | 0.511 | 0.477 | 0.555 | 0.522 | 0.466 |
| α = 20 | PSNR ↑ | 20.49 | 20.39 | 21.11 | 21.64 | 21.33 | 22.72 | 19.53 | 21.79 | 22.78 |
| | SSIM ↑ | 0.523 | 0.521 | 0.536 | 0.554 | 0.551 | 0.641 | 0.442 | 0.607 | 0.667 |
| | LPIPS-Alex ↓ | 0.496 | 0.502 | 0.492 | 0.485 | 0.459 | 0.304 | 0.533 | 0.339 | 0.304 |
| | LPIPS-VGG ↓ | 0.515 | 0.514 | 0.501 | 0.610 | 0.493 | 0.448 | 0.554 | 0.510 | 0.447 |
| α = 40 | PSNR ↑ | 20.59 | 20.50 | 21.20 | 21.88 | 21.54 | 22.32 | 17.32 | 21.78 | 22.96 |
| | SSIM ↑ | 0.535 | 0.532 | 0.545 | 0.583 | 0.567 | 0.647 | 0.362 | 0.614 | 0.687 |
| | LPIPS-Alex ↓ | 0.491 | 0.494 | 0.494 | 0.428 | 0.447 | 0.273 | 0.487 | 0.324 | 0.263 |
| | LPIPS-VGG ↓ | 0.506 | 0.506 | 0.493 | 0.557 | 0.479 | 0.444 | 0.560 | 0.507 | 0.432 |
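Of the metrics reported in the tables above, PSNR has the simplest closed form, $\mathrm{PSNR} = 10\log_{10}(\text{peak}^2/\text{MSE})$; a minimal sketch (the peak value of 1.0 and the toy images are illustrative):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
est = np.full((8, 8), 0.1)   # constant error of 0.1 -> MSE = 0.01
print(psnr(ref, est))        # 10 * log10(1 / 0.01) = 20.0 dB
```

SSIM and LPIPS, by contrast, are structural and learned perceptual measures respectively, which is why the rankings they induce can differ from PSNR's.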
Table 4. Ablation study for the effect of the Kernel Trajectory Network T(.): reconstruction metrics for different numbers of key points. K = 0 represents the variant of the scheme which does not use the Kernel Trajectory Network and estimates the kernel directly from Stage II.

Table 1. Signal-to-noise ratio (SNR) for different photon levels in the photon-limited regime.

| Photon level | SNR (in dB) |
| α = 1000 | 29.98 dB |
| α = 40 | 15.75 dB |
| α = 20 | 12.48 dB |
| α = σ² | 1.07 dB |

Figure 5 (diagram labels): key-point vector → affine-transform + ReLU (3 fully connected layers) → reshape → U-Net decoder (upsampling blocks: transposed convolution + residual blocks; convolutional block: Conv. Layer 1 with output channels = input channels, Conv. Layer 2 with output channels = 1) → kernel.

...-loss. For further architectural details on the KTN, we refer the reader to the supplementary document.

Acknowledgement

The work is supported, in part, by the United States National Science Foundation under the grants IIS-2133032 and ECCS-2030570.

References

Chirag Agarwal, Shahin Khobahi, Arindam Bose, Mojtaba Soltanalian, and Dan Schonfeld. Deep-URL: A model-aware approach to blind deconvolution based on deep unfolded Richardson-Lucy network. In IEEE International Conference on Image Processing (ICIP), pages 3299-3303, 2020.

Jérémy Anger, Mauricio Delbracio, and Gabriele Facciolo. Efficient blind deblurring under high noise levels. In 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pages 123-128. IEEE, 2019.

Muhammad Asim, Fahad Shamshad, and Ali Ahmed. Blind image deconvolution using deep generative priors. IEEE Transactions on Computational Imaging, 6:1493-1506, 2020.

Giacomo Boracchi and Alessandro Foi. Modeling the performance of image restoration from motion blur. IEEE Transactions on Image Processing, 21(8):3502-3517, 2012.

Ayan Chakrabarti. A neural approach to blind motion deblurring. In European Conference on Computer Vision (ECCV), pages 221-235. Springer, 2016.

Tony F Chan and Chiu-Kwong Wong. Total variation blind deconvolution. IEEE Transactions on Image Processing, 7(3):370-375, 1998.

Sunghyun Cho and Seungyong Lee. Fast motion deblurring. ACM Transactions on Graphics, pages 1-8, 2009.

Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4641-4650, 2021.

Mauricio Delbracio, Ignacio Garcia-Dorado, Sungjoon Choi, Damien Kelly, and Peyman Milanfar. Polyblur: Removing mild blur by polynomial reblurring. IEEE Transactions on Computational Imaging, 7:837-848, 2021.

Mario AT Figueiredo and Jose M Bioucas-Dias. Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization. In Proceedings of the IEEE/SP Workshop on Statistical Signal Processing, pages 733-736, 2009.

Dong Gong, Jie Yang, Lingqiao Liu, Yanning Zhang, Ian Reid, Chunhua Shen, Anton van den Hengel, and Qinfeng Shi. From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2319-2328, 2017.

Ankit Gupta, Neel Joshi, C Lawrence Zitnick, Michael Cohen, and Brian Curless. Single image deblurring using motion density functions. In European Conference on Computer Vision (ECCV), pages 171-184. Springer, 2010.

Zachary T Harmany, Roummel F Marcia, and Rebecca M Willett. This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms - theory and practice. IEEE Transactions on Image Processing, 21(3):1084-1096, 2011.

Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. DeblurGAN: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8183-8192, 2018.

Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In The IEEE/CVF International Conference on Computer Vision (ICCV), Oct 2019.

Anat Levin. Blind motion deblurring using image statistics. Advances in Neural Information Processing Systems (NeurIPS), 19, 2006.

Anat Levin, Yair Weiss, Fredo Durand, and William T Freeman. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1964-1971, 2009.

Jizhou Li, Florian Luisier, and Thierry Blu. PURE-LET image deconvolution. IEEE Transactions on Image Processing, 27(1):92-105, 2017.

Yuelong Li, Mohammad Tofighi, Vishal Monga, and Yonina C Eldar. An algorithm unrolling approach to deep image deblurring. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7675-7679. IEEE, 2019.

Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.

Leon B Lucy. An iterative technique for the rectification of observed distributions. The Astronomical Journal, 79:745, 1974.

Zhiyuan Mao, Nicholas Chimitt, and Stanley H. Chan. Image reconstruction of static and dynamic scenes through anisoplanatic turbulence. IEEE Transactions on Computational Imaging, 6:1415-1428, 2020.

Vishal Monga, Yuelong Li, and Yonina C Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Processing Magazine, 38(2):18-44, 2021.

Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

Dongwei Ren, Kai Zhang, Qilong Wang, Qinghua Hu, and Wangmeng Zuo. Neural blind deconvolution using deep priors. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3338-3347, 2020.

William Hadley Richardson. Bayesian-based iterative method of image restoration. Journal of the Optical Society of America, 62(1):55-59, 1972.

Jaesung Rim, Haeyun Lee, Jucheol Won, and Sunghyun Cho. Real-world blur dataset for learning and benchmarking deblurring algorithms. In Proceedings of the European Conference on Computer Vision (ECCV), pages 184-201. Springer, 2020.

Arie Rond, Raja Giryes, and Michael Elad. Poisson inverse problems by the plug-and-play scheme. Journal of Visual Communication and Image Representation, 41:96-108, 2016.

Yash Sanghvi, Abhiram Gnanasambandam, and Stanley H Chan. Photon limited non-blind deblurring using algorithm unrolling. IEEE Transactions on Computational Imaging (TCI), 8:851-864, 2022.

Yash Sanghvi, Abhiram Gnanasambandam, Zhiyuan Mao, and Stanley H. Chan. Photon-limited blind deconvolution using unsupervised iterative kernel estimation. IEEE Transactions on Computational Imaging, 8:1051-1062, 2022.

Christian J Schuler, Michael Hirsch, Stefan Harmeling, and Bernhard Schölkopf. Learning to deblur. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1439-1451, 2015.

Qi Shan, Jiaya Jia, and Aseem Agarwala. High-quality motion deblurring from a single image. ACM Transactions on Graphics, 27(3):1-10, 2008.

Jian Sun, Wenfei Cao, Zongben Xu, and Jean Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 769-777, 2015.

Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8174-8182, 2018.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9446-9454, 2018.

Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98-117, 2009.

Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general U-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17683-17693, June 2022.

Oliver Whyte, Josef Sivic, and Andrew Zisserman. Deblurring shaken and partially saturated images. International Journal of Computer Vision, 110(2):185-201, 2014.

Oliver Whyte, Josef Sivic, Andrew Zisserman, and Jean Ponce. Non-uniform deblurring for shaken images. International Journal of Computer Vision, 98(2):168-186, 2012.

Li Xu and Jiaya Jia. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the European Conference on Computer Vision (ECCV), pages 157-170. Springer, 2010.

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14821-14831, 2021.

Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5978-5986, 2019.

Kaihao Zhang, Wenhan Luo, Yiran Zhong, Lin Ma, Bjorn Stenger, Wei Liu, and Hongdong Li. Deblurring by realistic blurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2737-2746, 2020.

Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, 2018.

Youjian Zhang, Chaoyue Wang, Stephen J Maybank, and Dacheng Tao. Exposure trajectory recovery from motion blur. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):7490-7504, 2021.

References (supplementary document)

Mauricio Delbracio, Ignacio Garcia-Dorado, Sungjoon Choi, Damien Kelly, and Peyman Milanfar. Polyblur: Removing mild blur by polynomial reblurring. IEEE Transactions on Computational Imaging, 7:837-848, 2021.

Alessandro Foi. Clipped noisy images: Heteroskedastic modeling and practical denoising. Signal Processing, 89(12):2609-2629, 2009.

Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing, 17(10):1737-1754, 2008.

Samuel W Hasinoff. Photon, Poisson noise. Computer Vision, A Reference Guide, 4:16, 2014.

Anat Levin. Blind motion deblurring using image statistics. Advances in Neural Information Processing Systems (NeurIPS), 19, 2006.

Florian Luisier, Thierry Blu, and Michael Unser. Image denoising in mixed Poisson-Gaussian noise. IEEE Transactions on Image Processing, 20(3):696-708, 2010.

Jiaju Ma, Dexue Zhang, Dakota Robledo, Leo Anzagira, and Saleh Masoodian. Ultra-high-resolution quanta image sensor with reliable photon-number-resolving and high dynamic range capabilities. Scientific Reports, 12(1):1-9, 2022.

Yash Sanghvi, Abhiram Gnanasambandam, and Stanley H Chan. Photon limited non-blind deblurring using algorithm unrolling. IEEE Transactions on Computational Imaging (TCI), 8:851-864, 2022.

Yash Sanghvi, Abhiram Gnanasambandam, Zhiyuan Mao, and Stanley H. Chan. Photon-limited blind deconvolution using unsupervised iterative kernel estimation. IEEE Transactions on Computational Imaging, 8:1051-1062, 2022.

Yi Zhang, Hongwei Qin, Xiaogang Wang, and Hongsheng Li. Rethinking noise synthesis and modeling in raw denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4593-, 2021.
Second-Order Differential Invariants of the Rotation Group O(n) and of its Extensions: E(n), P(1, n), G(1, n)

W.I. Fushchych and Irina Yehorchenko
Institute of Mathematics, NAS of Ukraine, Tereshchenkivska Str., Kyiv 4, Ukraine

arXiv: math-ph/0510042 (https://export.arxiv.org/pdf/math-ph/0510042v1.pdf)

Abstract. Functional bases of second-order differential invariants of the Euclid, Poincaré, Galilei, conformal, and projective algebras are constructed. The results obtained allow us to describe new classes of nonlinear many-dimensional invariant equations.
Definition 1. A function F(x, u, u_1, ..., u_l), where u_k is the set of all kth-order partial derivatives of the function u, is called a differential invariant for the Lie algebra L with basis elements X_i of the form (0.1) (L = ⟨X_i⟩) if it is an invariant of the l-th prolongation of this algebra:

$$\overset{l}{X}_i\, F(x, u, u_1, \ldots, u_l) = \lambda_i(x, u, u_1, \ldots, u_l)\, F,$$

where the λ_i are some functions; when λ_i = 0, F is called an absolute invariant; when λ_i ≠ 0, it is a relative invariant.

Further, we deal mostly with absolute differential invariants, and when writing 'differential invariant' we mean 'absolute differential invariant'.

Definition 2. A maximal set of functionally independent invariants of order r ≤ l of the Lie algebra L is called a functional basis of the lth-order differential invariants for the algebra L.

We consider invariants of order 1 and 2 and need the first and second prolongations of the operator X (0.1) (see, e.g., [8-11]).

Introduction

The concept of the invariant is widely used in various domains of mathematics. In this paper, we investigate differential invariants within the framework of symmetry analysis of differential equations. Differential invariants and the construction of invariant equations were considered by S. Lie [1] and his followers [2, 3]. Tresse [2] proved the theorem on the existence and finiteness of a functional basis of differential invariants.
However, there exist quite a few papers devoted to the construction in explicit form of differential invariants for specific groups involved in mechanics and mathematical physics. Knowledge of differential invariants of a certain algebra or group facilitates classification of equations invariant with respect to this algebra or group. There are also some general methods for the investigation of differential equations which need tide explicit form of differential invariants for these equations' symmetry groups (see e.g. [3,4]). A brief review of our investigation of second-order differential invariants for the Poincaré and Galilei groups is given in [5,6]. Our results on functional bases of differential invariants are founded on the Lemma about functionally independent warrants for the proper orthogonal group and two n-dimensional symmetric tensors of the order 2. We should like to stress that we consider functionally independent invariants of but not irreducible ones, as in the classical theory of invariants. Bases of irreducible invariants for the group O(3) and three-dimensional symmetric tensors and vectors are adduced in [7]. The definitions of differential invariants differ in various domains of mathematics, e.g. in differential geometry and symmetry analysis of differential equations. Thus, we believe that some preliminary notes are necessary, though these formulae and definitions can be found in [8,9,10]. We deal with Lie algebras consisting of the infinitesimal operators X = ξ i (x, u)∂ x i + η r (x, u)∂ u r . (0.1) Here x = (x 1 , x 2 , . . . , x n ), u = (u 1 , . . . , u m ). We usually mean the summation over the repeating indices. X = X + η r i ∂ u r i , 2 X = 1 X +η r ij ∂ u r ij the coefficients η r i and η r ij taking the form η r i = (∂ x i + u s i ∂ u s )η r − u r k (∂ x i + u s i ∂ u s )ξ k , η r ij = (∂ x i + u s j ∂ u s + u s jk ∂ u s k )η r i − u r ik (∂ x j + u s j ∂ u s )ξ k . 
While writing out lists of invariants, we shall use the following designations:

u_a ≡ ∂u/∂x_a,   u_ab ≡ ∂²u/∂x_a ∂x_b,
S_k(u_ab) ≡ u_{a1 a2} u_{a2 a3} ··· u_{a(k−1) ak} u_{ak a1},
S_jk(u_ab, v_ab) ≡ u_{a1 a2} ··· u_{a(j−1) aj} v_{aj a(j+1)} ··· v_{ak a1},
R_k(u_a, u_ab) ≡ u_{a1} u_{ak} u_{a1 a2} u_{a2 a3} ··· u_{a(k−1) ak}.   (0.3)

Here and further we mean summation over the repeated indices from 1 to n. In all the lists of invariants, k takes the values from 1 to n and j takes the values from 0 to k. We shall not discern the upper and lower indices with respect to summation: for all Latin indices

x_a x_a ≡ x_a x^a ≡ x^a x^a = x_1² + x_2² + ··· + x_n².

1. Differential invariants for the Euclid algebra

The Euclid algebra AE(n) is defined by the basis operators

∂_a ≡ ∂/∂x_a,   J_ab = x_a ∂_b − x_b ∂_a.   (1.1)

Here and further, the letters a, b, c, d, when used as indices, take the values from 1 to n, n being the number of space variables (n ≥ 3). The algebra AE(n) is an invariance algebra for a wide class of many-dimensional scalar equations of mathematical physics: the Schrödinger, heat, d'Alembert equations, etc.

In this section, we explain in detail how to construct a functional basis of the second-order differential invariants for the algebra AE(n). This basis will be further used to find bases of invariants for various algebras containing the Euclid algebra as a subalgebra: the Poincaré, Galilei, conformal, projective algebras, etc.

1.1. The main results. Let us first formulate the main results of the section in the form of theorems.

Theorem 1. There is a functional basis of second-order differential invariants for the Euclid algebra AE(n) with the basis operators (1.1) for the scalar function u = u(x_1, ..., x_n) consisting of these 2n + 1 invariants:

u,   S_k(u_ab),   R_k(u_a, u_ab).   (1.2)

Theorem 2. The second-order differential invariants of the algebra AE(n) (1.1) for the set of scalar functions u^r, r = 1, ..., m, can be represented as functions of the following expressions:

u^r,   S_jk(u^1_ab, u^r_ab),   R_k(u^r_a, u^1_ab).   (1.3)

1.2. Proofs of the theorems. Absolute differential invariants are obtained as solutions of a linear system of first-order partial differential equations (PDE). Thus, the number of elements of a functional basis is equal to the number of independent integrals of this system. This number is equal to the difference between the number of variables on which the functions being sought depend and the rank of the corresponding system of PDE (in our case, this rank is equal to the generic rank of the prolonged operator algebra [8, 9]). To prove that N invariants which have been found, F_i = F_i(x, u, u_1, ..., u_l), form a functional basis, it is necessary and sufficient to prove the following statements: (1) the F_i are invariants; (2) the F_i are functionally independent; (3) the set of invariants F_i is complete, i.e. N is equal to the difference of the number of variables (x, u, u_1, ..., u_l) and the rank of the system of defining operators.

We seek second-order differential invariants in the form F = F(x, u, u_1, u_2). It follows from the condition of invariance with respect to the translation operators ∂_a that F does not depend on x_a; evidently, u is an invariant of the operators (1.1). Thus, it is sufficient to seek invariants depending on u_1, u_2 as solutions of the equations

Ĵ_ab F = 0,   (1.4)

where

Ĵ_ab = u^r_a ∂_{u^r_b} − u^r_b ∂_{u^r_a} + 2(u^r_ac ∂_{u^r_bc} − u^r_bc ∂_{u^r_ac}),   (1.5)

the summation over r from 1 to m being implied. In that way, the problem of finding the second-order differential invariants of the algebra AE(n) is reduced to the construction of a functional basis for the rotation algebra AO(n) with the basis operators (1.5) for m vectors and m symmetric tensors of order 2.

Lemma 1. The rank of the algebra AO(n) is equal to n(n − 1)/2.

Proof. It is sufficient to prove the lemma for m = 1. The basis of the algebra (1.5) consists of n(n − 1)/2 operators.
According to the definition [8], its rank is equal to the generic rank of the coefficient matrix of these operators. Let us put u_ab = 0 when a ≠ b and write down the coefficient columns by ∂_{u_ab} of the operators (1.5):

| u_11 − u_22        0             ···   0                        |
| 0                  u_11 − u_33   ···   0                        |
| ···                ···           ···   ···                      |
| 0                  0             ···   u_{n−1,n−1} − u_nn       |   (1.6)

When u_aa ≠ u_bb for a ≠ b, the determinant of the matrix (1.6) does not vanish; therefore its generic rank (that is, the generic rank of the algebra being considered) cannot be less than n(n − 1)/2. The lemma is proved.

Lemma 2. The expressions

S_k(u_ab),   R_k(u_a, u_ab)   (1.7)

are functionally independent.

Proof. To establish the independence of the expressions (1.7), it is sufficient to consider the case when u_ab = 0 if a ≠ b and u_a ≠ 0. Let us write down the Jacobian of the invariants:

| 1                 ···   1                |
| 2u_11             ···   2u_nn            |
| ···               ···   ···              |
| n u_11^{n−1}      ···   n u_nn^{n−1}     |
| 2u_1              ···   2u_n             |
| ···               ···   ···              |
| 2u_1 u_11^{n−1}   ···   2u_n u_nn^{n−1}  |   (1.8)

The Jacobian (1.8) is equal, up to a coefficient, to the product of two Vandermonde determinants and does not vanish if u_aa ≠ u_bb whenever a ≠ b. Thus, the expressions (1.7) are functionally independent.

Proof of Theorem 1. The fact that the expressions (1.2) are invariants of AO(n) can easily be proved by direct substitution of these expressions into the invariance conditions. Nevertheless, it is useful to note that the S_k(u_ab) are the traces of the symmetric matrix (u_ab) = U and of its powers, and the R_k(u_a, u_ab) are the scalar products of the vector (u_a) = (u_1, ..., u_n), the matrix U^{k−1} and the vector (u_a)^T. The invariants for the vector (u_a) and the symmetric tensor (u_ab) depend on their n(n + 3)/2 elements. Thus, it follows from Lemma 1 that a functional basis of the algebra AO(n) for (u_a) and (u_ab) must consist of

n(n + 3)/2 − n(n − 1)/2 = 2n

invariants.
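As a quick numerical sanity check (not part of the original text; it assumes Python with NumPy), the gradient (u_a) and Hessians (u_ab), (v_ab) can be modelled by a random vector and random symmetric matrices, and the expressions S_k = tr U^k, R_k = u U^{k−1} u and S_jk = tr U^j V^{k−j} can be verified to be unchanged under a random rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# stand-ins for the derivative data: a gradient (u_a) and two symmetric
# second-order tensors (u_ab), (v_ab)
g = rng.standard_normal(n)
U = rng.standard_normal((n, n)); U = (U + U.T) / 2
V = rng.standard_normal((n, n)); V = (V + V.T) / 2

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal matrix

# transformation law under x -> Qx: vector and tensor indices rotate
g2, U2, V2 = Q @ g, Q @ U @ Q.T, Q @ V @ Q.T

mp  = np.linalg.matrix_power
S   = lambda M, k: np.trace(mp(M, k))                       # S_k  = tr U^k
Rk  = lambda v, M, k: v @ mp(M, k - 1) @ v                  # R_k  = u U^(k-1) u
Sjk = lambda A, B, j, k: np.trace(mp(A, j) @ mp(B, k - j))  # S_jk = tr U^j V^(k-j)

for k in range(1, n + 1):
    assert np.isclose(S(U, k), S(U2, k))
    assert np.isclose(Rk(g, U, k), Rk(g2, U2, k))
    for j in range(k + 1):
        assert np.isclose(Sjk(U, V, j, k), Sjk(U2, V2, j, k))
```

The check relies only on the fact that orthogonal conjugation preserves traces of matrix products; it illustrates, but of course does not replace, the functional-independence arguments above.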
Therefore the set (1.7) is a complete set of functionally independent invariants of the form F = F(u_1, u_2), and (1.2) represents a functional basis of the second-order invariants for the algebra AE(n). The theorem is proved.

Let us consider the case of two vectors (u_a), (v_a) and two symmetric tensors of the second order (u_ab), (v_ab). The operators of the rotation algebra have the form (1.5), u ≡ u^1, v ≡ u^2. In this case, a functional basis of invariants contains

2 · n(n + 1)/2 + 2n − n(n − 1)/2 = n(n + 7)/2

elements, for which we take the following expressions:

R_k(u_a, u_ab),   R_k(v_a, u_ab),   S_jk(u_ab, v_ab).   (1.9)

The invariance of the expressions (1.9) with respect to the operators (1.5) can easily be proved by their direct substitution into (1.4). To establish their functional independence, we shall use the following lemma.

Lemma 3. The expressions

S_jk(u_ab, v_ab) = tr U^j V^{k−j},   j = 0, ..., k;  k = 1, ..., n,   (1.10)

are functionally independent.

Proof. To prove Lemma 3, it is sufficient to show that the generic rank of the Jacobi matrix of the expressions (1.10) is equal to n(n + 3)/2, that is, the difference between the number of independent elements of U and V and the rank of the operators (1.5). We shall limit ourselves to the case when u_ab = 0 if a ≠ b. Then the equations (1.10) depend on n(n + 3)/2 variables and their independence is equivalent to the nonvanishing of the Jacobian. Let us write down the elements of the Jacobian which are needed for further reasoning:

| 1              ···   1                                          |
| 2u_11          ···   2u_nn                                      |
| ···            ···   ···                                        |
| n u_11^{n−1}   ···   n u_nn^{n−1}                               |
| 1   0 ··· 0    1 ··· 1    ···                                   |
| 2v_11   4v_12 ··· 4v_1n   2v_22 ··· 2v_nn   ···                 |   (1.11)

Since in the first n rows all the elements outside the first n columns are equal to zero, the Jacobian (1.11) is equal to the product of the Jacobian of the elements tr U^k, k = 1, ..., n, and the Jacobian of all other elements. According to Lemma 2, the expressions tr U^k, k = 1, . . .
, n, are independent and their Jacobian is not equal to zero; thus, it remains to show the nonvanishing of the Jacobian and the functional independence only for the elements tr U^j V^{k−j}, j = 0, ..., k − 1; k = 1, ..., n. It follows from (1.11) that it is sufficient to show the nonvanishing of this Jacobian without the (n + 1)th rows and columns. Thus, to prove the lemma it is enough to show that the following expressions are independent:

tr U^j V^{k−j} V,   j = 0, ..., k;  k = 1, ..., n − 1.   (1.12)

The above reasoning allows us to make use of the principle of mathematical induction. When n = 1, u_11 and v_11 are independent and the lemma is true. Let us suppose that it is true for n − 1 and then prove from this that it is valid for n. Let the expressions

tr U^j V^{k−j},   j = 0, ..., k;  k = 1, ..., n − 1,   (1.13)

where U, V are symmetric (n − 1) × (n − 1) matrices, be independent. Then, we shall prove the independence of (1.12) for the same matrices. The sets (1.12) and (1.13) coincide with the exception of the following subsets:

tr U^j V^{n−j},   j = 0, ..., n − 1,   (1.14)

which belong only to (1.12), and

tr U^j,   j = 1, ..., n − 1,   (1.15)

which belong only to (1.13). The assumption of the validity of the lemma for n − 1 means that for two symmetric tensors of order 2 the set (1.13) is a functional basis of invariants of the rotation algebra. Thus, all the invariants of this algebra can be represented as functions of (1.13). To prove the functional independence of (1.12), it is sufficient to prove the nondegeneracy of the Jacobi matrix of the functions expressing the invariants (1.12) through (1.13). This matrix has the form

| 1        0   |
|   ···        |
| 0        1   |
|     W        |
| 0   ∂(tr U^j V^{n−j}) / ∂(tr U^j) |   (1.16)

W being the derivative with respect to tr V of the expression tr V^n = F(tr V^k, k = 1, ..., n − 1) (we know F exists from the Hamilton-Cayley theorem); W ≠ 0.
We have only to prove the nonvanishing of the Jacobian of the expressions

tr(U^j V^{n−j}) = F(tr U^k, k = 1, ..., n − 1, ...).   (1.17)

When V = E, the expressions (1.17) reduce to the traces tr U^j. Functional independence of the expressions (1.12) for (n − 1) × (n − 1) matrices implies their independence for n × n matrices. From the above it follows that the expressions (1.10) are independent; thus Lemma 3 is proved.

Proof of Theorem 2. It is easy to see from the structure of the set (1.3) that the invariants involving (u^1_a), ..., (u^m_a), (u^2_ab), ..., (u^m_ab) depend on the components of (u^1_ab) and of the corresponding vector or tensor; thus it is sufficient to prove the functional independence of each of the following sets: R_k(u^r_a, u^1_ab) for every r = 1, ..., m; S_jk(u^1_ab, u^r_ab) for every r = 2, ..., m. The functional independence of each set R_k(u^r_a, u^1_ab) can be proved similarly to the proof of Lemma 2. The functional independence of the set S_jk(u^1_ab, u^r_ab) easily follows from Lemma 3; the u^r are evidently independent of the other elements of (1.3). To make sure that the expressions (1.3) are invariants of AO(n), it is sufficient to substitute them into the condition (1.4). The set (1.3) consists of

2mn + m + (m − 1) n(n − 1)/2 = m (n(n + 1)/2 + n + 1) − n(n − 1)/2

elements and, thus, it is complete. So we have proved that this set forms a basis of invariants for the algebra AE(n) (1.1).

1.3. Bases of invariants for the extended Euclid algebra and for the conformal algebra. The extended Euclid algebra AE_1(n) for one scalar function is defined by the basis operators ∂_a, J_ab (1.1) and D, depending on a parameter λ:

D = x_a ∂_a + λ u ∂_u   (∂_u = ∂/∂u).   (1.18)

The basis of the conformal algebra AC(n) consists of the operators ∂_a, J_ab (1.1), D (1.18) and

K_a = 2x_a D − x_b x_b ∂_a.   (1.19)

Theorem 3.
There is a functional basis for the extended Euclid algebra that has the following form:

(1) when λ ≠ 0:

R_k(u_a, u_ab) / u^{k(1−2/λ)+1},   S_k(u_ab) / u^{k(1−2/λ)};   (1.20)

(2) when λ = 0:

u,   R_k(u_a, u_ab) / (u_aa)^k,   S_k(u_ab) / (u_aa)^k   (k ≠ 1).   (1.21)

A functional basis for the conformal algebra has the following form:

(1) when λ ≠ 0:

S_k(θ_ab) u^{k(2/λ−1)};   (1.22)

(2) when λ = 0:

u,   S_k(w_ab)(u_a u_a)^{−2k}   (k ≠ n),   (1.23)

where

θ_ab = λ u_ab + (1 − λ) u_a u_b / u − δ_ab u_c u_c / (2u),
w_ab = u_c u_c u_ab + (δ_ab / (2 − n)) u_dd − u_c (u_a u_bc + u_b u_ac).   (1.24)

Proof. To find the absolute differential invariants of the algebra AE_1(n), it is necessary to add to (1.4) the following condition:

D̂ F ≡ x_a F_{x_a} + λu F_u + (λ − 1) u_a F_{u_a} + (λ − 2) u_ab F_{u_ab} = 0.   (1.25)

Solving equation (1.25) for F = F(u, R_k(u_a, u_ab), S_k(u_ab)), we obtain the bases (1.20) and (1.21). To find the invariants of the conformal algebra, we have to add the conditions with the second prolongations of the operators K_a:

K̂_a = 2x_a D̂ + x_b Ĵ_ab + 2λ[u ∂_{u_a} + 2u_b ∂_{u_ab}] + 2u_a ∂_{u_cc} − 4u_b ∂_{u_ab}.   (1.26)

Solving this system for an arbitrary n requires a lot of cumbersome computations. It is simpler to construct conformally covariant tensors from u, u_a, u_ab and then to construct invariants of the rotation algebra. A vector θ_a and a tensor θ_ab are called covariant with respect to the algebra L if

X_i θ_a = σ_i^{ab} θ_b + σ_i θ_a,   X_i θ_ab = ρ_i^{ac} θ_cb + ρ_i^{bc} θ_ac + ρ_i θ_ab,   (1.27)

where the X_i are operators of the form (0.1), ρ_i, σ_i are some functions, and σ_i^{ab}, ρ_i^{ab} are some skew-symmetric tensors. It is easy to show that the expressions S_k(θ_ab), R_k(θ_a, θ_ab), where θ_a, θ_ab are tensors covariant with respect to the algebra L, are relative invariants of this algebra. The fact that θ_ab and w_ab (1.24) are covariant with respect to the conformal algebra AC(n) can be verified by direct substitution of these tensors into the conditions (1.27) for the operators D̂ and K̂_a. The rank of the second prolongation of the algebra AC(n) is equal to the number of its operators,

n(n − 1)/2 + n + n + 1 = n(n + 3)/2 + 1,

and, therefore, a functional basis of second-order differential invariants must contain n invariants.
Functional independence of the expressions (1.22) follows from Lemma 2 if we notice that the transformation u_ab → θ_ab is nondegenerate. The same is true for the set (1.23). The expressions (1.22) and (1.23) satisfy (1.25) and (1.26) for the corresponding λ, and they are invariants of the conformal algebra. All that is stated above leads to the conclusion that (1.22) and (1.23) form functional bases for the conformal algebra AC(n) with λ ≠ 0 and λ = 0, respectively.

Note 1. Using the condition (1.26), it is easy to show that when λ ≠ 0 covariant tensors exist for AC(n) of order 2 only; when λ = 0, the tensors w_ab (1.24) and u_a are conformally covariant, but S_k(w_ab) and R_k(u_a, w_ab) are dependent.

Theorem 4. The second-order differential invariants for a vector function u = (u^1, ..., u^m) and for the algebra AE_1(n) = ⟨∂_a, J_ab, D⟩, the operator D having the form

D = x_a ∂_a + λ u^r ∂_{u^r}   (1.28)

with summation over r from 1 to m, can be represented as functions of the following expressions:

(1) when λ ≠ 0:

u^r / u^1 (r = 2, ..., m),   S_jk(u^1_ab, u^r_ab) / (u^1)^{k(1−2/λ)},   R_k(u^r_a, u^1_ab) / (u^1)^{k(1−2/λ)+1};

(2) when λ = 0:

u^r,   R_k(u^r_a, u^1_ab)(u^1_aa)^{−k},   S_jk(u^1_ab, u^r_ab)(u^1_aa)^{−k}   (when r = 1, then k ≠ 1).

The corresponding basis for the conformal algebra AC(n) = ⟨∂_a, J_ab, D, K_a⟩ (K_a = 2x_a D − x_b x_b ∂_a) has the following form:

(1) when λ ≠ 0:

S_jk(θ^r_ab, θ^1_ab)(u^1)^{k(2/λ−1)},   u^r / u^1,   R_k(θ^r_a, θ^1_ab)(u^1)^{k(2/λ−1)−1}   (r = 2, ..., m);   (1.29a)

(2) when λ = 0:

u^r (r = 1, ..., m),   (u^1_d u^1_d)^{−2k} S_jk(w^1_ab, w^r_ab),   (u^1_d u^1_d)^{1−2k} R_k(u^r_a, w^1_ab).   (1.29b)

Theorem 4 is proved similarly to Theorem 3. Functional independence of the sets of invariants follows from Lemmas 2 and 3, taking into account the fact that the transformations u^r_ab → θ^r_ab, u^r_ab → w^r_ab (r = 1, ..., m) and u^r_a → θ^r_a (r = 2, ..., m) are nondegenerate.

1.4. Differential invariants of the rotation algebra.
The rotation algebra is defined by the basis operators J_ab (1.1). The second-order invariants of this algebra for m scalar functions u^r are constructed from x_a, u^r, u^r_a, u^r_ab similarly to the invariants of the Euclid algebra.

Theorem 5. There is a functional basis of the second-order differential invariants for the algebra AO(n) that has the form

u^r,   S_jk(u^1_ab, u^r_ab),   R_k(u^r_a, u^1_ab),   R_k(x_a, u^1_ab),   r = 1, ..., m.

The corresponding basis of invariants for the algebra ⟨J_ab, D⟩, where D is defined by (1.28), consists of the expressions

u^r / u^1 (r = 2, ..., m),   S_jk(u^1_ab, u^r_ab) / (u^1)^{k(1−2/λ)},   R_k(u^r_a, u^1_ab)(u^1)^{2k/λ−1−k},   R_k(x_a, u^1_ab)(u^1)^{(2/λ)(k−2)−k+1},   when λ ≠ 0;

u^r,   R_k(u^r_a, u^1_ab)(u^1_aa)^{−k},   S_jk(u^1_ab, u^r_ab)(u^1_aa)^{−k}   (k ≠ 1 when r = 1),   R_k(x_a, u^1_ab)(u^1_aa)^{2−k},   when λ = 0.

A basis of invariants for the algebra ⟨J_ab, D, K_a⟩, when λ ≠ 0, consists of the expressions (1.29a) and

R_k(x_a, θ^1_ab) / (x² (u^1)^{(k−1)(1−2/λ)}),   k = 2, ..., n + 1;

when λ = 0 it consists of the expressions (1.29b) and

R_k(x_a, w^1_ab) / (x² (w^1_aa)^{k−1})   (x² = x_a x_a).

The proof of this theorem is similar to the proofs of Theorems 2 and 3; notice that (x_a) is a covariant tensor with respect to the conformal operators.

2. Differential invariants of the Poincaré and conformal algebras

In this section, we consider differential invariants of the second order for a set of m scalar functions u^r = u^r(x_0, x_1, ..., x_n), n ≥ 3. The Poincaré algebra AP(1, n) is defined by the basis operators

p_µ = i g_µν ∂/∂x_ν,   J_µν = x_µ p_ν − x_ν p_µ,   (2.1)

where µ, ν take the values 0, 1, ..., n; summation over repeated small Greek indices is implied in the following way:

x_ν x^ν ≡ x_ν x_ν ≡ x^ν x^ν = x_0² − x_1² − ··· − x_n²,   g_µν = diag(1, −1, ..., −1).
(2.2)

We consider x_ν and x^ν equal with respect to summation, so as not to mix the signs of derivatives and the numbers of functions. The quasilinear second-order invariants of the Poincaré algebra were described in [12].

Theorem 6. There is a functional basis of the second-order differential invariants of the Poincaré algebra AP(1, n) for a set of m scalar functions u^r consisting of the m(2n + 3) + (m − 1) n(n + 1)/2 invariants

u^r,   R_k(u^r_µ, u^1_µν),   S_jk(u^r_µν, u^1_µν).

In this section, everywhere k = 1, ..., n + 1; j = 0, ..., k; r = 1, ..., m.

For the extended Poincaré algebra AP_1(1, n) = ⟨p_µ, J_µν, D⟩, where

D = x_µ p_µ + λ u^r p_{u^r}   (2.3)

(p_{u^r} = i ∂/∂u^r; the summation over r from 1 to m is implied), the corresponding basis has the following form:

(1) when λ = 0:

u^r,   S_jk(u^r_µν, u^1_µν)(u^1_αα)^{−k},   R_k(u^r_µ, u^1_µν)(u^1_αα)^{−k};

(2) when λ ≠ 0:

u^r / u^1,   S_jk(u^r_µν, u^1_µν)(u^1)^{k(2/λ−1)},   R_k(u^r_µ, u^1_µν)(u^1)^{2k/λ−k−1},

where S_jk, R_k are defined similarly to (0.3) and the summation over small Greek indices is of the type (2.2).

For the conformal algebra AC(1, n) = ⟨p_µ, J_µν, D, K_µ⟩, where K_µ = 2x_µ D − x_ν x^ν p_µ (D being the dilation operator (2.3)), the corresponding basis consists of the expressions

S_jk(θ^r_µν, θ^1_µν)(u^1)^{k(2/λ−1)},   u^r / u^1,   R_k(θ^r_µ, θ^1_µν)(u^1)^{k(2/λ−1)−1}

when λ ≠ 0; here r = 2, ..., m, and there is no summation over r; the conformally covariant tensors have the form

θ^r_µ = u^r_µ / u^r − u^1_µ / u^1,
θ^r_µν = λ u^r_µν + (1 − λ) u^r_µ u^r_ν / u^r − g_µν u^r_β u^r_β / (2u^r).

When λ = 0, the corresponding basis of invariants for the conformal algebra has the form

u^r,   S_jk(w^r_µν, w^1_µν)(u^1_α u^1_α)^{−2k},   R_k(u^r_µ, w^1_µν)(u^1_α u^1_α)^{1−2k},   r = 2, ..., m;

the tensors (w^r_µν),

w^r_µν = u^r_α u^r_α u^r_µν − (g_µν / (1 − n)) u^r_ββ − u^r_β (u^r_µ u^r_βν + u^r_ν u^r_βµ),

are conformally covariant (there is no summation over r).
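The Lorentz invariance of the trace expressions in Theorem 6 can be spot-checked numerically. The sketch below (an illustration, not from the paper; Python with NumPy assumed) contracts indices with the metric g_µν = diag(1, −1, ..., −1), transforms a covariant vector and tensor with the inverse-transpose law, and uses a boost in the x0-x1 plane:

```python
import numpy as np

n = 3                                     # number of space dimensions; indices run 0..n
G = np.diag([1.0, -1.0, -1.0, -1.0])      # metric g_munu = diag(1, -1, ..., -1)

a = 0.7                                   # rapidity of a boost in the x0-x1 plane
L = np.eye(n + 1)
L[0, 0] = L[1, 1] = np.cosh(a)
L[0, 1] = L[1, 0] = np.sinh(a)
assert np.allclose(L.T @ G @ L, G)        # L is a Lorentz transformation

rng = np.random.default_rng(1)
u = rng.standard_normal(n + 1)                                # stand-in for u_mu
U = rng.standard_normal((n + 1, n + 1)); U = (U + U.T) / 2    # stand-in for u_munu
Li = np.linalg.inv(L)
u2, U2 = Li.T @ u, Li.T @ U @ Li          # covariant transformation law

mp = np.linalg.matrix_power
S = lambda M, k: np.trace(mp(G @ M, k))               # S_k with metric contraction
R = lambda v, M, k: v @ G @ mp(M @ G, k - 1) @ v      # R_k with metric contraction

for k in range(1, n + 2):
    assert np.isclose(S(U, k), S(U2, k))
    assert np.isclose(R(u, U, k), R(u2, U2, k))
```

The only ingredient is the identity L^T G L = G, which makes G @ U2 similar to G @ U, so all contracted traces agree; this mirrors the substitution x_{n+1} → i x_0 used in the proof of Theorem 6.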
The proof of Theorem 6 follows from the proofs of Theorems 2 and 3 for x = (x_1, ..., x_{n+1}) if we substitute ix_0 for x_{n+1}. Similarly to the results of Paragraph 1.4, it is possible to construct the invariants of the algebras ⟨J_µν⟩, ⟨J_µν, D⟩, ⟨J_µν, D, K_µ⟩.

The obtained results allow us to construct new nonlinear many-dimensional equations; e.g., the equation

(u_α u_α u_νν) / (1 − n) − u_µ u_ν u_µν = (u_ν u_ν)² F(u),

where F is an arbitrary function, is invariant under the algebra AC(1, n), λ = 0. The left-hand side of the above equation is equal to w_µµ.

There is another quasilinear relativistic equation with rich symmetry properties,

(1 − u_α u_α) u_µµ − u_α u_µ u_αµ = 0,

that is, the Born-Infeld equation. The symmetry and solutions of this equation were investigated in [10, 13]. This equation is invariant under the algebra AP(1, n + 1) with the basis operators J_AB = x_A p_B − x_B p_A, A, B = 0, 1, ..., n + 1, x_{n+1} ≡ u.

Let us consider the class of equations

u_µν u_µν = F(u_µµ, u_µ u_ν u_µν, u_µ u_µ, u).

It is evident that they are invariant with respect to the Poincaré algebra AP(1, n), but the straightforward search for the conformally invariant equations from this class with the standard Lie technique requires a lot of cumbersome calculations. The use of differential invariants turns this problem into one of elementary algebra; e.g., if λ ≠ 0,

F − u_µν u_µν = −(1/λ) S_2(θ_µν) + u^{2(1−2/λ)} φ(S_1(θ_µν) u^{2/λ−1}),

where θ_µν is of the form (1.24) and φ is an arbitrary function. Whence

F = u^{2(1−2/λ)} φ(u^{2/λ−1} (u_µµ − ((λ + n)/λ) u_α u_α / u)) − (1/(λ² u²)) (λ² + n²)(u_α u_α)² − (2(1 − λ)/(λu)) u_µ u_ν u_µν + 2 u_µµ u_α u_α / (λu).

It is useful to note that, besides the traces of matrix powers (0.3), one can utilize all possible invariants of the covariant tensors θ^r_µν, w^r_µν to construct conformally invariant equations.
3. Differential invariants of an infinite-dimensional algebra

It is well known that the simplest first-order relativistic equation, the eikonal or Hamilton equation,

u_α u_α ≡ u_0² − u_1² − ··· − u_n² = 0,   (3.1)

is invariant under the infinite-dimensional algebra AP^∞(1, n) generated by the operators [10, 14]

X = (b_µν x_ν + a_µ) ∂_µ + η(u) ∂_u,   (3.2)

where b_µν = −b_νµ, a_µ, η are arbitrary differentiable functions of u. Equation (3.1) is widely used in geometrical optics. In this section, we describe a class of second-order equations invariant under the algebra (3.2).

4. Differential invariants of the Galilei algebra

It is easy to show that the tensor of rank 2

θ_µν = u_µ u_λν u_λ + u_ν u_λµ u_λ − u_µ u_ν u_λλ − u_λ u_λ u_µν

is invariant under the generalized Galilei algebra AG^I_2(1, n) with the basis operators

∂_t = ∂/∂t,   ∂_a = ∂/∂x_a,   J_ab = x_a ∂_b − x_b ∂_a,
G_a = t ∂_a + µ x_a u ∂_u,   u ∂_u,
D = 2t ∂_t + x_a ∂_a + λ u ∂_u,
A = tD − t² ∂_t + (µx²/2) u ∂_u,   λ = −n/2.   (4.2)

The Schrödinger equation

2im ψ_t + ψ_aa = 0,   (4.3)

ψ = ψ(t, x) being a complex-valued function, is also invariant [16] under the generalized Galilei algebra with the basis operators

p_0 = i ∂/∂t,   p_a = −i ∂/∂x_a,   J_ab = x_a p_b − x_b p_a,
J = i(ψ ∂_ψ − ψ* ∂_ψ*),   G_a = t p_a − m x_a J,
D = 2t p_0 − x_a p_a + λI   (I = ψ ∂_ψ + ψ* ∂_ψ*),
A = t² p_0 − t x_a p_a + λtI + (mx²/2) J,   λ = −n/2.   (4.4)

The asterisk means complex conjugation. We shall designate the algebra (4.4) by the symbol AG^II_2(1, n). Besides, AG^I(1, n) = ⟨∂_t, ∂_a, u∂_u, G_a, J_ab⟩, the operators being of the form (4.2). A basis of the algebra AG^I_1(1, n) consists of the basis operators of AG^I(1, n) and of the operator D. Furthermore, AG^II(1, n) = ⟨p_0, p_a, J, J_ab, G_a⟩ (4.4). A basis of the algebra AG^II_1(1, n) consists of the previous operators and also D (4.4).

To simplify the form of the invariants, we introduce the following change of dependent variables:

u = exp ϕ,   ψ = exp φ   (Im φ = arctan(Im ψ / Re ψ)).
(4.5)

All the indices k in the expressions of the type (0.3) here take the values from 1 to n, and the indices j take the values from 0 to k. We seek invariants of the algebra AG^I_2(1, n) in the form

F = F(ϕ_t, ϕ_a, ϕ_tt, ϕ_at, ϕ_ab).   (4.6)

Obviously, they do not include ϕ, x_a and t, because the basis (4.2) contains the operators ∂_ϕ, ∂_a, ∂_t. Using the definition of an absolute differential invariant (0.2), we get the following conditions on the function F (4.6):

Ĵ_ab F = ϕ_a F_{ϕ_b} − ϕ_b F_{ϕ_a} + ϕ_at F_{ϕ_bt} − ϕ_bt F_{ϕ_at} + 2ϕ_ac F_{ϕ_bc} − 2ϕ_bc F_{ϕ_ac} = 0,   (4.7)
Ĝ_a F = −ϕ_a F_{ϕ_t} + µ F_{ϕ_a} − 2ϕ_at F_{ϕ_tt} − ϕ_ab F_{ϕ_bt} = 0,   (4.8)
D̂ F = −2ϕ_t F_{ϕ_t} − ϕ_a F_{ϕ_a} − 4ϕ_tt F_{ϕ_tt} − 3ϕ_at F_{ϕ_at} − 2ϕ_ab F_{ϕ_ab} = 0,   (4.9)
Â F = t D̂ F + x_a Ĝ_a F − λ F_{ϕ_t} − 2ϕ_t F_{ϕ_tt} − ϕ_a F_{ϕ_at} + µ δ_ab F_{ϕ_ab} = 0.   (4.10)

From the equations (4.8) we can see that the tensors

θ_a = µ ϕ_at + ϕ_b ϕ_ab,   ϕ_ab   (4.11)

are covariant with respect to the algebra AG^I(1, n) (µ ≠ 0).

Theorem 9. There is a functional basis of absolute differential invariants for the algebra AG^I(1, n), when µ ≠ 0, consisting of these 2n + 2 invariants:

M_1 = 2µϕ_t + ϕ_a ϕ_a,   M_2 = µ² ϕ_tt + 2µ ϕ_a ϕ_at + ϕ_a ϕ_b ϕ_ab,
R_k = R_k(θ_a, ϕ_ab),   S_k = S_k(ϕ_ab).   (4.12)

For the algebra AG^I_1(1, n) (µ ≠ 0) such a basis has the form

M_2 / M_1²,   R_k / M_1^{2+k},   S_k / M_1^k.   (4.13)

For the algebra AG^I_2(1, n) (µ ≠ 0), there is a basis of the form

N_2 / N_1²,   R̂_k / N_1^{2+k},   Ŝ_k / N_1^k   (k = 2, ..., n),   (4.14)

where

N_1 = 2µϕ_t + ϕ_a ϕ_a + ϕ_aa,
N_2 = µ² ϕ_tt + 2µ ((1/n) ϕ_t ϕ_aa + ϕ_a ϕ_at) + ϕ_a ϕ_b ϕ_ab + (1/n) ϕ_a ϕ_a ϕ_bb + (1/n) ϕ_bb²,
R̂_k = Σ_{l=0}^{k} R_l (ϕ_aa)^{k−l} (−n)^l k! / (l!(k−l)!),
Ŝ_k = Σ_{l=0}^{k} (−n)^l ((k−1)!(k+1) / ((l+1)!(k−l)!)) S_l (ϕ_aa)^{k−l};   (4.15)

S_k, R_k are defined by (4.12) and θ_a has the form (4.11).

The proof of this theorem is similar to the proofs of Theorems 2 and 3. We shall present here only some hints to the proof.
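As an illustration of how the conditions (4.7)-(4.10) are used, the following sketch (not part of the paper; Python with SymPy assumed) implements the operator Ĝ_a of (4.8) acting on functions of (ϕ_t, ϕ_a, ϕ_tt, ϕ_at, ϕ_ab) and checks symbolically that M_1 and M_2 of (4.12) are annihilated by it for n = 3:

```python
import sympy as sp

n = 3
mu = sp.symbols('mu')
pt, ptt = sp.symbols('phi_t phi_tt')
p   = sp.symbols(f'phi_1:{n+1}')       # phi_a,  a = 1..n
pat = sp.symbols(f'phi_t1:{n+1}')      # phi_at
# symmetric array of second derivatives phi_ab (phi_ab == phi_ba)
pab = sp.Matrix(n, n, lambda a, b: sp.Symbol(f'phi_{min(a,b)+1}{max(a,b)+1}'))

def G_hat(c, F):
    """Action of the prolonged boost generator G_c on F, eq. (4.8)."""
    out = (-p[c]*sp.diff(F, pt) + mu*sp.diff(F, p[c])
           - 2*pat[c]*sp.diff(F, ptt)
           - sum(pab[c, b]*sp.diff(F, pat[b]) for b in range(n)))
    return sp.expand(out)

M1 = 2*mu*pt + sum(pi**2 for pi in p)
M2 = (mu**2*ptt + 2*mu*sum(p[a]*pat[a] for a in range(n))
      + sum(p[a]*p[b]*pab[a, b] for a in range(n) for b in range(n)))

assert all(G_hat(c, M1) == 0 for c in range(n))
assert all(G_hat(c, M2) == 0 for c in range(n))
```

For M_1 the cancellation −ϕ_c · 2µ + µ · 2ϕ_c = 0 is immediate; for M_2 the terms 2µ²ϕ_ct and 2µϕ_b ϕ_bc produced by the µ F_{ϕ_c} part cancel against the last two terms of (4.8), which is exactly the computation the symbolic check performs.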
It is evident that the function F must depend on the invariants of the Euclid algebra,

F = F(ϕ_t, ϕ_tt, R_k(ϕ_a, ϕ_ab), R_k(ϕ_at, ϕ_ab), S_k(ϕ_ab)).

First we construct two invariants of AG^I(1, n), M_1 and M_2 (4.12), which depend on ϕ_t and ϕ_tt respectively. The other invariants of the adduced basis (4.12) do not depend on ϕ_t or ϕ_tt, and the sets {M_1, M_2} and {R_k, S_k} are independent. The invariants R_k, S_k are constructed with the covariant tensors θ_a, ϕ_ab (4.11) similarly to the invariants of the conformal algebra investigated above, and it is easy to see that they are independent. The generic ranks of the prolonged algebras AG^I(1, n), AG^I_1(1, n), AG^I_2(1, n) are equal to the numbers of their operators, and from this fact we can compute the number of elements in the bases for these algebras.

Adding to (4.7) and (4.8) the condition (4.9), we obtain from the invariants (4.12) the basis (4.13) for the algebra AG^I_1(1, n). The relative invariants R̂_k, Ŝ_k (4.15) of the algebra AG^I_2(1, n) were found from the equation

λ F_{ϕ_t} − 2ϕ_t F_{ϕ_tt} − ϕ_a F_{ϕ_at} + µ δ_ab F_{ϕ_ab} = 0,   F = F(R_k, S_k),

and then we constructed absolute invariants using (4.9). Besides, it is possible to construct analogues of R̂_k, Ŝ_k with the AG^I_2(1, n)-covariant tensors θ_a (4.11) and

θ_ab = ϕ_ab − (2δ_ab/n)(ϕ_c ϕ_c + µϕ_t).

Considering (ϕ_at), (ϕ_a), (ϕ_ab) as independent vectors and tensors and putting ϕ_ab = 0 whenever a ≠ b, ϕ_a ≠ 0, we see from Lemma 2 that the adduced sets of invariants are independent.

Note 2. A basis of invariants for the Galilei algebra without translations contains the expressions (4.12) and

R_k(h_a, ϕ_ab),   µx²/2 − tϕ,

the Galilei-covariant vector h_a having the form h_a = µx_a − tϕ_a. Let us also adduce an A-covariant tensor depending on x_a,

h_a = µx_a/t − ϕ_a,

and a relative invariant of the operators A and D (4.2),

exp(ϕ − µx²/(2t)),

with which it is possible to construct a basis of invariants for the algebra ⟨G_a, J_ab, D, A⟩.
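The covariance of h_a and the invariance of M_1 under a finite Galilei boost can also be checked directly. The sketch below (illustrative only; Python with SymPy, with an arbitrarily chosen test field ϕ that is not from the paper) applies the finite boost x_1 → x_1 + vt, ϕ → ϕ + µ(vx_1 + v²t/2) generated by G_1 = t∂_1 + µx_1∂_ϕ:

```python
import sympy as sp

t, x1, x2, v, mu = sp.symbols('t x1 x2 v mu')

# an arbitrary concrete field phi(t, x1, x2) used as a witness (any smooth
# expression would do; this particular choice is purely illustrative)
PHI = lambda T, X, Y: sp.sin(X)*sp.exp(T) + Y**2 + T*X

X1 = x1 - v*t                 # pre-boost coordinate of the point (t, x1, x2)
# finite boost generated by G_1 = t*d_x1 + mu*x1*d_phi (u = exp(phi)):
phi_b = PHI(t, X1, x2) + mu*(v*X1 + v**2*t/2)

# Galilei-covariant vector h_1 = mu*x_1 - t*phi_1 (Note 2): its value built
# from the boosted field at the boosted point equals its value for the
# original field at the original point
h1_boosted  = mu*x1 - t*sp.diff(phi_b, x1)
h1_original = mu*X1 - t*sp.diff(PHI(t, x1, x2), x1).subs(x1, X1)
assert sp.simplify(h1_boosted - h1_original) == 0

# the invariant M_1 = 2*mu*phi_t + phi_a*phi_a of (4.12) is unchanged as well
M1 = lambda f: 2*mu*sp.diff(f, t) + sp.diff(f, x1)**2 + sp.diff(f, x2)**2
assert sp.simplify(M1(phi_b) - M1(PHI(t, x1, x2)).subs(x1, X1)) == 0
```

Working with a concrete test field keeps the check elementary: the phase µ(vx_1 + v²t/2) is exactly what the flow of G_1 adds to ϕ, and both assertions reduce to algebraic cancellations.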
We have presented a method to find the bases of invariants for Lie algebras for which the J_ab (1.1) are basis operators. Further, we shall adduce functional bases for the algebra AG^I_2(1, n) with µ = 0, and for the algebra AG^II_2(1, n) with m ≠ 0 or m = 0. We omit the proofs because they are similar to the proofs of the previous theorems.

It is evident from the conditions (4.7)-(4.10) that the case µ = 0 for the algebra AG^I_2(1, n) has to be considered separately. The tensors (ϕ_a) and (ϕ_ab) are covariant with respect to this algebra; the tensor (θ_a) involved in the invariants is defined by the implicit correlation

ϕ_bt = θ_a ϕ_ab.   (4.16)

4.2. Let us proceed to describe the basis of the invariants for the algebra AG^II_2(1, n).

Theorem 11. Any absolute differential invariant of order ≤ 2 for the algebras listed below is a function of the following expressions:

(1) AG^II(1, n), m ≠ 0:

φ + φ*,   M_1 = 2imφ_t + φ_a φ_a,   M_1*,   M_2 = −m² φ_tt + 2im φ_a φ_at + φ_a φ_b φ_ab,   M_2*,
S_jk = S_jk(φ_ab, φ*_ab),   R¹_k = R_k(θ_a, φ_ab),   R²_k = R_k(θ*_a, φ_ab),   R³_k = R_k(φ_a + φ*_a, φ_ab),
N_1 e^{(−4/n)(φ+φ*)},   N_1 / N_1*,   N_2 / N_1²,   N_2* / N_1²,   R̂^l_k / N_1^{2+k} (l = 1, 2),   R̂³_k / N_1^k,   Ŝ_jk / N_1^k,

where

N_1 = 2imφ_t + φ_aa + φ_a φ_a,
N_2 = −m² φ_tt + 2im (φ_a φ_at + (1/n) φ_t φ_aa) + φ_a φ_b φ_ab + (1/n) φ_a φ_a φ_bb + (1/n) φ_aa²,
Ŝ_jk = Σ_{l=0}^{k} Σ_{r=0}^{j} S_rl (−n)^l C^r_j C^{l+1−r}_k (φ_aa)^{j−r} (φ*_aa)^{k−l−j+r} + k (φ_aa)^j (φ*_aa)^{k−j−1},
R̂^l_k = Σ_{j=0}^{k} R^l_j (φ_aa)^{k−j} (−n)^j k! / (j!(k−j)!)   (l = 1, 2, 3).

The invariants for the algebras AG^II(1, n), AG^II_1(1, n) (m = 0) can be constructed similarly to the case of a real function. Let us adduce a functional basis for the algebra AG^II_2(1, n).
(1) When λ = 0, there is a basis consisting of the following expressions:

φ + φ*,   N_1² / N_2²,   (N_1*)² / N_2,   (S_jk)² / N_1^k,   (R^l_k)² N_1^{−k−1}   (l = 1, 2, 4);

(2) when λ ≠ 0:

N_1 e^{(4/λ)(φ+φ*)},   N_1* / N_1,   N_3 e^{(3/λ)(φ+φ*)},   (R^l_k)² / N_1^k (l = 1, 2, 3),   (S_jk)² / N_1^k,

where

N_1 = (φ_t − θ_a φ_a)² + (φ_tt − θ_a φ_at)(λ + φ_a φ_b r_ab)   (with {r_ab} = {φ_ab}^{−1} and θ_a = r_ab φ_bt),
N_2 = (φ_t − φ_c θ_c) φ*_a φ*_b r*_ab − (φ*_t − φ*_c θ*_c) φ_a φ_b r_ab,
N_3 = (φ_t − φ*_t) − τ_a (φ_a − φ*_a)   (τ_a (λφ_ab + φ_a φ_b) = φ_b φ_t + λφ_bt),
R¹_k = R_k(φ_a, φ_ab),   R²_k = R_k(φ*_a, φ_ab),   R³_k = R_k(θ_a − θ*_a, φ_ab),
R⁴_k = R_k(ρ_a, φ_ab)   (ρ_a = (φ_t − θ_b φ_b)(φ*_c r_ac − φ_c r*_ac) − φ_b φ_d r_bd (θ_a − θ*_a)).

The proof of this theorem will be easier if we notice that, by putting µ = im in (4.4), we obtain operators similar to the operators (4.2). The change of variables (4.5) in the adduced invariants allows us to obtain bases for the algebras AG^I_2 and AG^II_2 in the representations (4.2) and (4.4). These results can also be generalized to the case of several scalar functions.

Let us present some examples of new invariant equations:

ϕ_tt + (1/µ²) [2µ ((1/n) ϕ_t ϕ_aa + ϕ_a ϕ_at) + ϕ_a ϕ_b ϕ_ab + (1/n) ϕ_a ϕ_a ϕ_bb + (1/n) ϕ_bb²] = (2µϕ_t + ϕ_a ϕ_a + ϕ_aa)² F,   (4.18)

−m² φ_tt + 2im (φ_a φ_at + (1/n) φ_t φ_aa) + φ_a φ_b φ_ab + (1/n) φ_a φ_a φ_bb + (1/n) φ_aa² = (2imφ_t + φ_a φ_a + φ_aa)² F.   (4.19)

Equations (4.18) and (4.19) are invariant, respectively, under the algebras AG^I_2(1, n), µ ≠ 0 (4.2), and AG^II_2(1, n), m ≠ 0 (4.4). The F's are arbitrary functions of the invariants for the corresponding algebras. Evidently, wide classes of invariant equations can be constructed with the adduced invariants.

Conclusion

It is well known that a mathematical model of physical or other phenomena must obey one of the relativity principles of Galilei or Poincaré.
Speaking the language of mathematics, this means that the equations of the model must be invariant under the Galilei or the Poincaré group. Having bases of differential invariants for these groups (or for the corresponding algebras), we can describe all the invariant scalar equations, or sort the invariant ones out of a given set of equations. The construction of differential invariants for vector and spinor fields presents more complicated problems. The first-order invariants for a four-dimensional vector potential were found in [18]. The cases of spinor and many-dimensional vector Poincaré-invariant equations and the corresponding bases of invariants are still to be investigated.

Note 5. After having prepared the present paper, we became acquainted with the article [19], where realizations of the Poincaré group P(1,1) and the corresponding conformal group were investigated, and all second-order scalar differential equations invariant under these groups were obtained. Reference [19] contains bases of absolute differential invariants of order 2 for the Poincaré, the similitude, and the conformal groups in (1+1)-dimensional Minkowski space for various realizations of the corresponding Lie algebras.

Note 6. It was noticed by the referee that an essential misunderstanding can arise in the calculation of second prolongations of differential operators, e.g. in formulae (1.5) and (1.25). When we calculate such prolongations with the usual Lie technique (see, e.g., [8]), we imply that an operator of the form X_ab ∂_{u_ab}, where the X_ab are some functions, acts as follows:
\[
X_{ab}\partial_{u_{ab}}(u_{cd}u_{cd})=2X_{ab}u_{ab},\qquad
\partial_{u_{ab}}u_{cd}=\delta_{ac}\delta_{bd}.
\]
With this assumption, ∂_{u_ab} u_ba = 0 for a ≠ b. Otherwise, the second prolongation of the operator J_ab (1.1) will be of the form given below.

Note 7. The equations which are conditionally invariant with respect to the Poincaré and Galilei algebras were investigated in [20, 21].

Definition 1. The function F = F(x, u, u_1, ..., u_l), ...

[Footnote 1: Acta Appl. Math., 1992, 28, 69-92; some misprints were corrected.]

Lemma 3. Let U = (u_ab), V = (v_ab), a, b = 1, ..., n, be symmetric matrices. Then the expressions ... we obtain functional bases (1.20), (1.21) for the extended Euclid algebra.

The second-order differential invariants of the algebra AC(n) are defined by the conditions ..., where the K̃_a are the second prolongations of the operators K_a (1.19).

Definition 3. Tensors θ_a and θ_ab of order 1 and 2 are called covariant with respect to some algebra L = ⟨J_ab, X_i⟩ if ... w^1_ab) (r = 2, ..., m) (1.29b) (for the set of invariants (u^1_d u^1_d)^{-2k} S_k(w_ab), k does not take the value n); the tensors θ^r_ab, ...

Theorem 7. The equations of the form
\[
S_k(\theta_{\mu\nu})=0,\qquad k=1,2,\ldots, \tag{3.4}
\]
S_k being defined as in (0.3), are invariant with respect to the algebra AP_∞(1, n) (3.2). The problem of describing all such equations is more difficult, and we do not consider it here. Let us investigate in more detail the quasi-linear second-order equation of the form
\[
u_\mu u_{\mu\nu}u_\nu-u_\mu u_\mu u_{\alpha\alpha}=\ldots \tag{3.5}
\]

Theorem 8. When n ≥ 2, equation (3.5) is invariant with respect to the algebra AP_∞(1, n) with generators of the form X + d(u)x_µ∂_µ, where X is of the form (3.2) and d(u) is an arbitrary function of u.

The proofs of Theorems 7 and 8 can easily be obtained with the Lie technique, using the invariance criterion, where X̃₂ is the second prolongation of the operator X [8-10].

4. Differential invariants of the Galilei algebra

4.1. It is well known that the heat equation
\[
2\mu u_t+\Delta u=0,\qquad \Delta u\equiv u_{aa},\quad u=u(t,x),\ x=(x_1,\ldots,x_n),\ n\ge 3 \tag{4.1}
\]
...

... the corresponding quadrant of the matrix (1.16) is the unit matrix, and its determinant does not vanish identically. This fact proves the nondegeneracy of the matrix (1.16). The expressions (1.17) can be obtained from the Hamilton-Cayley theorem. They are polynomials and, thus, continuous functions of their arguments. ... (1.24), δ_ab being the Kronecker symbol.
\[
\tilde J_{ab}=J_{ab}+\hat J_{ab},\qquad
\hat J_{ab}=u_a\partial_{u_b}-u_b\partial_{u_a}+u_{ac}\partial_{u_{bc}}-u_{bc}\partial_{u_{ac}}+u_{ab}(\partial_{u_{bb}}-\partial_{u_{aa}}).
\]

Acknowledgement. The authors would like to thank the referees for valuable comments.

Theorem 10. There is a functional basis of the second-order differential invariants for the algebra AG^I(1, n), where µ = 0, that has the form (4.17). The corresponding basis for the algebra AG^I_1(1, n), where µ = 0, has the form ...; for the algebra AG^I_2(1, n), when µ = 0, it has the form ..., where R_k, S_k are defined by (4.17) and ...

Here, the matrix {r_ab} = {ϕ_ab}^{-1} and θ_a = r_ab ϕ_bt are the same as in (4.16).

Note 3. It is possible to use, instead of M_1 and M_2, the invariants
\[
\hat M_1=\begin{vmatrix}
\varphi_t&\varphi_1&\cdots&\varphi_n\\
\varphi_{1t}&\varphi_{11}&\cdots&\varphi_{1n}\\
\cdots&\cdots&\cdots&\cdots\\
\varphi_{nt}&\varphi_{n1}&\cdots&\varphi_{nn}
\end{vmatrix},\qquad
\hat M_2=\begin{vmatrix}
\varphi_{tt}&\varphi_{1t}&\cdots&\varphi_{nt}\\
\varphi_{1t}&\varphi_{11}&\cdots&\varphi_{1n}\\
\cdots&\cdots&\cdots&\cdots\\
\varphi_{nt}&\varphi_{n1}&\cdots&\varphi_{nn}
\end{vmatrix},
\]
which were found in [17] as the solution of the problem of finding the equations invariant under the Galilei algebra when µ = 0.

Note 4. The invariants for the algebra ⟨J_ab, G_a, J, D, A⟩ (4.2), where µ = 0, which depend on x_a, t, can be constructed with ϕ_a, ϕ_ab and the following covariant vector ..., where h_a = x_b ϕ_ab + tϕ_at is covariant with respect to the operators G_a when µ = 0.

References

[1] Lie S., Math. Ann., 1884, 24, 52-89.
[2] Tresse A., Acta Math., 1894, 18, 1-88.
[3] Vessiot E., Acta Math., 1904, 28, 307-349.
[4] Michal A.D., Proc. Nat. Acad. Sci., 1951, 37, 623-627.
[5] Fushchych W.I., Yegorchenko I.A., Dokl. AN Ukr. SSR, Ser. A, 1989, 4, 29-32.
[6] Fushchych W.I., Yegorchenko I.A., Dokl. AN Ukr. SSR, Ser. A, 1989, 5, 21-22.
[7] Spencer A.J.M., Theory of invariants, New York, London, Academic Press, 1971.
[8] Ovsyannikov L.V., Group analysis of differential equations, New York, Academic Press, 1982.
[9] Olver P., Application of Lie groups to differential equations, New York, Springer-Verlag, 1987.
[10] Fushchych W.I., Shtelen W.M., Serov N.I., Symmetry analysis and exact solutions of nonlinear equations of mathematical physics, Kiev, Naukova Dumka, 1989 (in Russian).
[11] Bluman G.W., Kumei S., Symmetries and differential equations, New York, Springer-Verlag, 1989.
[12] Fushchych W.I., Yegorchenko I.A., Dokl. AN SSSR, 1988, 298, 347-351.
[13] Fushchych W.I., Serov N.I., Dokl. AN SSSR, 1984, 278, 847.
[14] Fushchych W.I., Shtelen W.M., Lett. Nuovo Cimento, 1982, 34, 498.
[15] Goff J.A., Amer. J. Math., 1927, 49, 117-122.
[16] Niederer U., Helv. Phys. Acta, 1972, 45, 802-810.
[17] Fushchych W.I., Cherniha R.M., J. Phys. A, 1985, 18, 3491-3503.
[18] Yegorchenko I.A., Symmetry properties of nonlinear equations for complex vector fields, Preprint 89.48, Kiev, Institute of Mathematics of the Ukr. Acad. Sci., 1989.
[19] Rideau G., Winternitz P., J. Math. Phys., 1990, 31, 1095-1105.
[20] Fushchych W.I., Nikitin A.G., Symmetries of Maxwell's equations, Dordrecht, D. Reidel, 1987.
[21] Fushchych W.I., Ukrain. Mat. Zh., 1991, 43, 1456.
Super-Eddington Black-Hole Models for SS 433

Toru Okuda ([email protected])
Hakodate College, Hokkaido University of Education, 1-2 Hachiman-cho, Hakodate 040-8567, Japan

Publ. Astron. Soc. Japan, 25 Feb 2002
doi: 10.1093/pasj/54.2.253; arXiv: astro-ph/0202446

Key words: accretion, accretion disks -- black hole physics -- convection -- hydrodynamics -- stars: individual (SS 433)

Abstract

We examine highly super-Eddington black-hole models for SS 433, based on two-dimensional hydrodynamical calculations coupled with radiation transport. The super-Eddington accretion flow with a small viscosity parameter, α = 10⁻³, results in a geometrically and optically thick disk with a large opening angle of ∼ 60° to the equatorial plane and a very rarefied, hot, and optically thin high-velocity jet region around the disk. The thick accretion flow consists of two different zones: an inner advection-dominated zone and an outer convection-dominated zone. The high-velocity region around the disk is divided into two characteristic regions: a very rarefied funnel region along the rotational axis and a moderately rarefied high-velocity region outside of the disk. The temperatures of ∼ 10⁷ K and the densities of ∼ 10⁻⁷ g cm⁻³ in the upper disk vary sharply to ∼ 10⁸ K and 10⁻⁸ g cm⁻³, respectively, across the boundary between the disk and the high-velocity region. X-ray emission in iron lines would be generated only in a confined region between the funnel wall and the photospheric disk boundary, where flows are accelerated to relativistic velocities of ∼ 0.2 c by the dominant radiation-pressure force.
The results are discussed regarding the collimation angle of the jets, the large mass-outflow rate observed in SS 433, and the ADAF and CDAF models.

Introduction

SS 433 is a very unusual and puzzling X-ray source, which exhibits remarkable observational features, such as two oppositely directed relativistic jets moving with a velocity of 0.26 c, its unexpectedly high energy output, and a precessing motion of the jets over a period of 162.5 d. Although a number of papers concerning observational and theoretical data on SS 433 have been published (e.g., Margon 1984; Cherepashchuk 1993), the true nature of SS 433 is still not clear. Before the discovery of SS 433, Shakura and Sunyaev (1973) had already discussed the observational appearance of a super-Eddington accretion disk around black holes. In this connection, many studies have been made concerning the relativistic jets of SS 433 (Katz 1980; Meier 1982; Lipunov, Shakura 1982) and its collimated ejection (Sikora, Wilson 1981; Begelman, Rees 1984). Thus, super-Eddington accretion disks were generally expected to possess vortex funnels and radiation-pressure driven jets from geometrically thick disks (Lynden-Bell 1978; Fukue 1982; Calvani, Nobili 1983). The radiative acceleration of particles near the disk was also studied by several authors (Bisnovatyi-Kogan, Blinnikov 1977; Icke 1980, 1989; Fukue 1996; Tajima, Fukue 1998). A two-dimensional hydrodynamical calculation of a super-Eddington accretion disk around a black hole was first examined by Eggum et al. (1985, 1988) and discussed in relation to SS 433. They showed a relativistic jet forming just outside a conical photosphere of the accretion disk, with a collimation angle of ∼ 30°, where the dominant radiation-pressure force accelerates the jet to about 1/3 c. However, the jet's mass flux is only 0.4% of the input accretion rate, which is too small for SS 433. Kotani et al.
(1996) strongly suggested that SS 433 is a binary system in a highly super-Eddington regime of accretion, in which the mass-outflow rate of the SS 433 jets is constrained to be ≥ 5 × 10⁻⁷ M⊙ yr⁻¹, by combining spectral X-ray observations of iron emission lines with hydrodynamical modeling. Brinkmann and Kawai (2000) modeled the two-dimensional hydrodynamical outflow of the SS 433 jets, developing their previous model of conically out-flowing jets, and discussed the relationship between the physical parameters of the SS 433 jets and observationally accessible data. Blandford and Begelman (1999) investigated an adiabatic inflow-outflow solution of the accretion disk, in which a super-Eddington flow leads to a powerful wind. King and Begelman (1999) discussed super-Eddington mass transfer in binary systems and how accretion proceeds in such super-Eddington accretion disks. Furthermore, recent theoretical studies and two-dimensional simulations of black-hole accretion flows (Igumenshchev, Abramowicz 1999; Stone et al. 1999; Narayan et al. 2000; Quataert, Gruzinov 2000; Stone, Pringle 2001) showed a new development in the advection-dominated accretion flows (ADAFs) and the convection-dominated accretion flows (CDAFs), which may be deeply related to super-Eddington accretion flows. Okuda and Fujita (2000) examined a super-Eddington accretion disk model of a neutron star for SS 433 by two-dimensional hydrodynamical calculations and discussed the characteristics of super-Eddington flows. In this paper, following the neutron-star model, we examine super-Eddington black-hole models and discuss the results regarding SS 433 and also ADAFs and CDAFs.

Model Equations

A set of relevant equations consists of six partial differential equations for density, momentum, and thermal and radiation energy. These equations include the full viscous stress tensor, heating and cooling of the gas, and radiation transport.
The radiation transport is treated in the gray, flux-limited diffusion approximation (Levermore, Pomraning 1981). We use spherical polar coordinates (r, ζ, ϕ), where r is the radial distance, ζ is the polar angle measured from the equatorial plane of the disk, and ϕ is the azimuthal angle. The gas flow is assumed to be axisymmetric with respect to the Z-axis (∂/∂ϕ = 0) and the equatorial plane. In this coordinate system, the basic equations for mass, momentum, gas energy, and radiation energy are written in the following conservative form (Kley 1989):
\[
\frac{\partial\rho}{\partial t}+\mathrm{div}(\rho\boldsymbol{v})=0, \tag{1}
\]
\[
\frac{\partial(\rho v)}{\partial t}+\mathrm{div}(\rho v\boldsymbol{v})
=\rho\left[\frac{w^2+v_\varphi^2}{r}-\frac{GM_*}{(r-r_{\rm g})^2}\right]
-\frac{\partial p}{\partial r}+f_r+\mathrm{div}\,\boldsymbol{S}_r+\frac{1}{r}S_{rr}, \tag{2}
\]
\[
\frac{\partial(\rho rw)}{\partial t}+\mathrm{div}(\rho rw\boldsymbol{v})
=-\rho v_\varphi^2\tan\zeta-\frac{\partial p}{\partial\zeta}
+\mathrm{div}(r\boldsymbol{S}_\zeta)+S_{\varphi\varphi}\tan\zeta+f_\zeta, \tag{3}
\]
\[
\frac{\partial(\rho r\cos\zeta\,v_\varphi)}{\partial t}+\mathrm{div}(\rho r\cos\zeta\,v_\varphi\boldsymbol{v})
=\mathrm{div}(r\cos\zeta\,\boldsymbol{S}_\varphi), \tag{4}
\]
\[
\frac{\partial(\rho\varepsilon)}{\partial t}+\mathrm{div}(\rho\varepsilon\boldsymbol{v})
=-p\,\mathrm{div}\,\boldsymbol{v}+\Phi-\Lambda, \tag{5}
\]
\[
\frac{\partial E_0}{\partial t}+\mathrm{div}\,\boldsymbol{F}_0+\mathrm{div}(\boldsymbol{v}E_0+\boldsymbol{v}\cdot P_0)
=\Lambda-\rho\frac{\kappa+\sigma}{c}\,\boldsymbol{v}\cdot\boldsymbol{F}_0, \tag{6}
\]
where ρ is the density, v = (v, w, v_ϕ) are the three velocity components, G is the gravitational constant, M* is the central mass, p is the gas pressure, ε is the specific internal energy of the gas, E₀ is the radiation energy density per unit volume, and P₀ is the radiative stress tensor. It should be noticed that the subscript "0" denotes a value in the comoving frame and that the equations are correct to first order in v/c (Kato et al. 1998). We adopt the pseudo-Newtonian potential (Paczyńsky, Wiita 1980) in equation (2), where r_g is the Schwarzschild radius. The force density f_R = (f_r, f_ζ) exerted by the radiation field is given by
\[
\boldsymbol{f}_R=\rho\frac{\kappa+\sigma}{c}\boldsymbol{F}_0, \tag{7}
\]
where κ and σ denote the absorption and scattering coefficients and F₀ is the radiative flux in the comoving frame. S = (S_r, S_ζ, S_ϕ) denotes the viscous stress tensor, and Φ = (S∇)v is the viscous dissipation rate per unit mass.
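As a minimal illustration of how equations in this conservative form are advanced in time, the following sketch applies one explicit first-order upwind step to the 1-D continuity equation on a periodic grid. This is only an assumption-laden toy (the paper solves the full 2-D set (1)-(6) with the explicit-implicit scheme of Kley 1989), but it shows the defining property of a conservative discretization: interface fluxes telescope, so total mass is conserved to machine precision.

```python
import numpy as np

def advect_density(rho, v, dx, dt):
    """One explicit first-order upwind step of d(rho)/dt + d(rho*v)/dx = 0
    on a periodic 1-D grid (illustrative sketch, not the paper's 2-D scheme)."""
    flux = rho * v                              # mass flux at cell centres
    # donor-cell interface flux: pick the upstream cell by the sign of v
    v_face = 0.5 * (v + np.roll(v, -1))
    f_face = np.where(v_face > 0, flux, np.roll(flux, -1))
    # conservative update: difference of right- and left-face fluxes
    return rho - dt / dx * (f_face - np.roll(f_face, 1))
```

Because the update subtracts the same interface flux from one cell that it adds to its neighbour, the sum of rho over the grid is unchanged by construction.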
The quantity Λ describes the cooling and heating of the gas, i.e., the energy exchange between the radiation field and the gas due to absorption and emission processes,
\[
\Lambda=\rho c\kappa(S_*-E_0), \tag{8}
\]
where S_* is the source function and c is the speed of light. For this source function, we assume local thermal equilibrium, S_* = aT⁴, where T is the gas temperature and a is the radiation constant. For the equation of state, the gas pressure is given by the ideal-gas law, p = R_G ρT/µ, where µ is the mean molecular weight and R_G is the gas constant. The temperature T is related to the specific internal energy ε by p = (γ − 1)ρε = R_G ρT/µ, where γ is the specific-heat ratio. To close the system of equations, we use the flux-limited diffusion approximation (Levermore, Pomraning 1981) for the radiative flux,
\[
\boldsymbol{F}_0=-\frac{\lambda c}{\rho(\kappa+\sigma)}\,\mathrm{grad}\,E_0, \tag{9}
\]
and
\[
P_0=E_0\cdot T_{\rm Edd}, \tag{10}
\]
where λ and T_Edd are the flux limiter and the Eddington tensor, respectively, for which we use the approximate formulas given in Kley (1989). These formulas fulfill the correct limiting conditions in the optically thick diffusion limit and the optically thin streaming limit, respectively. For the kinematic viscosity ν we adopt a modified version (Papaloizou, Stanley 1986; Kley, Lin 1996) of the standard α-model,
\[
\nu=\alpha c_{\rm s}\,\min[H_p,H], \tag{11}
\]
where α is a dimensionless parameter, usually α = 0.001-1.0, c_s is the local sound speed, H is the disk height, and H_p = p/|grad p| is the pressure scale height on the equatorial plane.

Numerical Methods

The set of partial differential equations (1)-(6) is numerically solved by a finite-difference method under initial and boundary conditions. The numerical schemes used are basically the same as those described by Kley (1989) and Okuda et al. (1997). The methods are based on an explicit-implicit finite-difference scheme.
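The text does not reproduce Kley's (1989) approximate formulas for λ and T_Edd, so as a stand-in the sketch below uses the widely quoted Levermore-Pomraning limiter λ(R) = (2 + R)/(6 + 3R + R²), with R = |grad E₀|/[ρ(κ + σ)E₀], and the associated Eddington factor f_E = λ + λ²R². This is an assumed closure with the same limiting behaviour described in the text: λ → 1/3 in the optically thick diffusion limit, and λR → 1 (so |F₀| → cE₀) in the optically thin streaming limit.

```python
def flux_limiter(R):
    """Levermore-Pomraning flux limiter lambda(R) (assumed closure; the
    paper itself uses Kley's 1989 approximate formulas)."""
    return (2.0 + R) / (6.0 + 3.0 * R + R * R)

def eddington_factor(R):
    """Eddington factor f_E = lambda + lambda**2 * R**2 for the same closure."""
    lam = flux_limiter(R)
    return lam + lam * lam * R * R
```

In the diffusion limit (R → 0) both quantities tend to 1/3, recovering F₀ = −(c/3) grad E₀ / [ρ(κ + σ)]; in the streaming limit (R → ∞) f_E → 1 and the flux saturates at cE₀.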
N_r grid points (= 150) in the radial direction are spaced logarithmically, while N_ζ grid points (= 100) in the angular direction are equally spaced, but more refined near the equatorial plane: typically ∆ζ = π/150 for π/2 ≥ ζ ≥ π/6 and ∆ζ = π/300 for π/6 ≥ ζ ≥ 0.

Model Parameters

For the central star of SS 433, we assume a Schwarzschild black hole with mass M* = 10 M⊙ and examine the structure and dynamics of an accretion disk around the black hole and its surrounding atmosphere. From the observational constraint that the mass-outflow rate Ṁ_loss in SS 433 is ≥ 5 × 10⁻⁷ M⊙ yr⁻¹ (3 × 10¹⁹ g s⁻¹) (Kotani et al. 1996), we adopt a mass accretion rate Ṁ* of 8 × 10¹⁹ g s⁻¹, which corresponds to ∼ 4 Ṁ_E, where Ṁ_E is the Eddington critical accretion rate for the black hole, given by
\[
\dot M_{\rm E}=\frac{48\pi GM_*}{\kappa_{\rm e}c}, \tag{12}
\]
where κ_e is the electron-scattering opacity. For the viscosity parameter α we consider two cases, 10⁻³ (model BH-1) and 0.1 (model BH-2). The inner-boundary radius R* of the computational domain is taken to be 2 r_g, and the outer-boundary radius R_max is selected so that the radiation pressure is comparable to the gas pressure at the outer boundary, where the Shakura-Sunyaev instability never occurs. The model parameters used are listed in table 1.

Boundary and Initial Conditions

Physical variables at the inner boundary, except for the velocities, are given by extrapolation of the variables near the boundary. For the velocities, however, we impose the conditions that the radial velocities are always negative and the angular velocities are zero. If the radial velocity obtained by extrapolation is positive, it is set to zero; that is, outflow at the inner boundary is prohibited. On the rotational axis and the equatorial plane, the meridional tangential velocity w is zero and all scalar variables must be symmetric relative to these axes. The outer boundary at r = R_max is divided into two parts.
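A quick numerical check of equation (12) supports the quoted ratio Ṁ*/Ṁ_E ∼ 4. The electron-scattering opacity is not stated in the text; κ_e ≈ 0.34 cm² g⁻¹ (solar composition) is assumed here.

```python
import math

# cgs constants; kappa_e is an assumption (solar-composition electron scattering)
G = 6.674e-8          # gravitational constant
c = 2.998e10          # speed of light
M_sun = 1.989e33      # solar mass, g
kappa_e = 0.34        # cm^2 g^-1, assumed

M_star = 10.0 * M_sun                                   # 10 M_sun black hole
Mdot_E = 48.0 * math.pi * G * M_star / (kappa_e * c)    # eq. (12)
Mdot_star = 8.0e19                                      # adopted rate, g s^-1

print(Mdot_E, Mdot_star / Mdot_E)   # ~2e19 g s^-1 and a ratio close to 4
```

With this κ_e the adopted accretion rate of 8 × 10¹⁹ g s⁻¹ indeed comes out at about 4 Ṁ_E.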
One is the disk boundary, through which matter enters from the outer disk. At the outer-disk boundary we assume a continuous inflow of matter at a constant accretion rate Ṁ*. The other is the outer boundary region above the accretion disk. We impose free-floating conditions on this outer boundary (i.e., all gradients vanish) and allow for outflow of matter, whereas any inflow is prohibited here. We also assume the outer boundary region above the disk to be in the optically thin limit, |F₀| → cE₀. This imposes a boundary condition on the radiation energy density E₀. The initial configuration consists of a cold, dense, and optically thick disk and a hot, rarefied, and optically thin atmosphere around the disk. The initial disk is approximated by the Shakura-Sunyaev standard model. The initial hot, rarefied atmosphere around the disk is constructed to be approximately in hydrostatic equilibrium.

Results

4.1. Model BH-1 with α = 10⁻³

The initial disk thickness H/r, based on the standard disk model, is ∼ 1 in the inner region and ∼ 0.1 at the outer boundary. The ratio β of the gas pressure to the total pressure at the initial outer disk boundary is ∼ 0.27. We followed the time evolution of the disk until t = 4 × 10³ P_d for model BH-1, where P_d is the Keplerian orbital period at the inner boundary. Figure 1 shows the time evolution of the total luminosity L for model BH-1. After an initial sharp rise, the luminosity curve descends gradually toward a steady-state value. The luminosity L at the final phase is 1.6 × 10³⁹ erg s⁻¹ and L/L_E is ∼ 1, which is far smaller than Ṁ*/Ṁ_E (= 4), where L_E is the Eddington luminosity. At the initial stage of the evolution, matter in the inner region of the disk is ejected strongly outward. Some of the ejected gas hits the rotational axis, while the rest propagates outward.
The gas hitting the axis leads to a high-temperature region along the axis and anisotropic radiation fields, in which the equi-contour lines of the radiation energy density are concentric. This results in an outward radiation-pressure force, which dominates the gravitational force, and a high-velocity jet region is formed along the axis. At the beginning of the evolution, the high-velocity region is confined to a narrow zone along the rotational axis, but it gradually spreads outward from the axis. The spreading jet gas interacts with the surrounding medium and finally settles down to a quasi-steady state. On the other hand, the initially optically thin atmosphere above the initial disk is filled by the dense gas ejected from the inner disk as time increases, and becomes optically thick at the final phase, forming a cone-like shape with a large opening angle of ∼ 60° to the equatorial plane. We regard this optically thick and dense region as an accretion disk. The boundary between the high-velocity region and the disk is characterized by sharp gradients of density and temperature. In order to understand the evolution of the disk and the high-velocity region well, we animated the density and temperature evolutions. Figure 2 shows nine snapshots of the density evolution, where the right-hand number in each snapshot gives the evolutionary time in units of P_d; a color legend for log ρ (g cm⁻³) is also shown. The green and orange parts in the later snapshots represent the rarefied, hot, and optically thin high-velocity jet region, while the blue and dark-blue parts denote the optically thick and dense disk region. In the high-velocity region, we distinguish the orange region (which we call the funnel region) from the green region, because the densities in the orange region decrease sharply towards the rotational axis and are as low as 10⁻¹⁸-10⁻²⁶ g cm⁻³, compared with 10⁻⁸-10⁻¹⁸ g cm⁻³ in the green region. Therefore, we have three characteristic regions here: (A) the disk region (blue), (B) the high-velocity region (green), and (C) the very rarefied funnel region (orange). As discussed later, the centrifugal barrier and the funnel wall lie near the boundaries between (A) and (B) and between (B) and (C) at the final phase, respectively. Figure 3 shows the density contours with velocity vectors (a) and the temperature contours (b) on the meridional plane at t = 3577 P_d. In figure 3a, the density contours are labeled log ρ = −5, −6, −7, −8, −10, and −18. The velocity vectors show the maximum velocity, 0.4 c, at ζ ∼ 80° and r/R* ∼ 200. The thick dashed and thick dot-dashed lines in (a) show the funnel wall and the disk boundary between the disk and the high-velocity region, respectively. Here, we approximately define the disk boundary as the interface across which the temperature of ∼ 10⁷ K in the disk jumps to ∼ 10⁸ K. The disk boundary lies roughly between the density labels −7 and −8. In the funnel region (C) (ζ ≳ 80°), the flow velocities are very large, but rather chaotic near the rotational axis. In the high-velocity
Therefore, we have three characteristic regions here: (A) the disk region (blue), (B) the high-velocity region (green), and (C) the very rarefied funnel region (orange). As discussed later, the centrifugal barrier and the funnel wall lie near the boundaries, between (A) and (B), and between (B) and (C) at the final phase, respectively. Figure 3 shows the density contours with velocity vectors (a) and the temperature contours (b) on the meridional plane at t = 3577P d . In figure 3a, the density contours are denoted by the labels log ρ = −5,−6,−7,−8,−10, and −18. The velocity vectors show the maximum velocity, 0.4 c, at ζ ∼ 80 • and r/R * ∼ 200. The thick dashed and the thick dot-dashed lines in (a) show the funnel wall and the disk boundary between the disk and the high-velocity region, respectively. Here, we approximately define the disk boundary as an interface through which a temperature of ∼ 10 7 K in the disk jumps to ∼ 10 8 K. The disk boundary lies roughly between the density labels −7 and −8. In the funnel region (C) ( ζ > ∼ 80 • ), the flow velocities are very large, but rather chaotic near the rotational axis. In the high-velocity T.Okuda [Vol. , region (B) ( 80 • > ∼ ζ > ∼ 60 • ) , the flows are relativistic to be ∼ 0.1-0.2 c, and the gas streams run radially. Figure 3b shows a bird's-eye view of the temperature, where the disk boundary is clearly recognized by a sharp wall with high temperatures. The temperatures range from 2 × 10 6 to ∼ 10 7 K in the disk region, jump to ∼ 10 8 K just across the disk boundary, and again distribute gradually between ∼ 10 8 and 10 9 K in the high-velocity region. In the funnel region near the rotational axis, the temperatures are as very high as ∼ 10 9 -10 11 K. On the mid-plane, the densities and the temperatures are not largely different from the initial ones, except for the innermost region of r/R * < ∼ 10, where the flow is more rarefied and hotter than the initial flow. 
Figure 4 shows the contours of the radiation energy density E₀ in an arbitrary unit (a) and of the angular velocity Ω (b) on the meridional plane at t = 3577 P_d, where the thick dot-dashed line denotes the disk boundary. In figure 4a, E₀ is almost independent of ζ in the optically thin high-velocity region and weakly dependent on ζ in the upper disk region, so the radiation-pressure gradient forces act only radially in the high-velocity region (B). In figure 4b, the angular velocities Ω, in units of the Keplerian angular velocity at the inner boundary, are denoted by contours of log Ω. The angular velocities show two characteristic distributions. Near the equatorial plane, they are nearly Keplerian. However, in the upper disk and the high-velocity region, the equi-contours of Ω are parallel to the Z-axis, and Ω behaves approximately as K/x^{2.1}, where x is the distance from the Z-axis in units of 2 r_g and K is a constant, although the constant K seems to change slightly at the disk boundary and the funnel wall. As a result, the specific angular momentum λ_a (= x²Ω) is approximately conserved in these regions. From the numerical data, we find Ω = 5.6/x^{2.1} in our non-dimensional units in the high-velocity region. The funnel wall, the barrier where the effective potential due to the gravitational and centrifugal potentials vanishes, is described by the surface (x_f, z_f),
\[
(\Phi)_{\rm eff}=-\frac{1}{r_{\rm f}-1/2}+\frac{\lambda_a^2}{2x_{\rm f}^3}=0, \tag{13}
\]
where r_f = (x_f² + z_f²)^{1/2} is given in units of 2 r_g (Molteni et al. 1996). From equation (13), together with the above Ω, the funnel wall is shown by the thick dashed line in figure 4b. The funnel region (C) and the high-velocity region (B) are roughly separated by the funnel wall. The acceleration of the relativistic flows depends on the distribution of the radiation pressure P_r (= f_E E₀), where f_E is the Eddington factor.
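Equation (13) can be solved directly for the wall height z_f at a given cylindrical radius x_f, since it fixes r_f = 1/2 + 2x_f³/λ_a². In the sketch below λ_a is left as an input: the fitted rotation law Ω = 5.6/x^{2.1} would give λ_a = x²Ω, but its normalization is tied to the code's non-dimensional units, which are not restated here, so the numbers used are illustrative only.

```python
import math

def funnel_wall_height(x, lam_a):
    """Height z_f of the funnel wall at cylindrical radius x (units of 2 r_g),
    from eq. (13): -1/(r_f - 1/2) + lam_a**2 / (2 x**3) = 0.
    lam_a is the dimensionless specific angular momentum at x (an input here,
    since the paper's normalization of Omega is tied to its code units)."""
    r_f = 0.5 + 2.0 * x**3 / lam_a**2
    if r_f < x:
        return None            # no real z_f: the wall does not reach this radius
    return math.sqrt(r_f * r_f - x * x)
```

Any returned point (x, z_f) satisfies (Φ)_eff = 0 of equation (13) by construction, which is easy to verify numerically.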
The flux limiter λ almost equals 1/3 in the optically thick disk, while in the optically thin high-velocity region λ is very small and its spatial variation is almost identical to that of the density ρ. In regions (B) and (C), the radiation temperatures T_r = (E₀/a)^{1/4} depend almost only on r and are small compared with the gas temperature T; also, the gas temperatures in region (C) are much higher than those in region (B). However, the very high temperatures (∼ 10⁹-10¹¹ K) in region (C) may be unreliable, because we did not take account of other physical processes, such as Compton processes and pair production-annihilation, which may be important at such high temperatures. On the other hand, we consider that the densities would not be drastically altered by these processes. The radiation pressure is very dominant everywhere, especially in the inner region of the disk. In the disk region (A), the centrifugal forces balance the gravitational forces near the disk mid-plane, whereas in the upper disk region a balance among the centrifugal, gravitational, and radiation-pressure gradient forces is maintained. In regions (B) and (C), the dominant radial radiation-pressure forces are one order of magnitude larger than the gravitational forces, because the densities in these regions are very low. As a result, the radiation-pressure gradient forces accelerate the flows radially to relativistic velocities. The azimuthal velocities in the high-velocity region are in the range 0.1-0.2 c, and the high-velocity gas is blown off spirally and relativistically. At the disk boundary between regions (A) and (B), the gravitational force roughly balances the radial component of the centrifugal force; i.e., the boundary region corresponds to what is called a centrifugal barrier.
Figure 5 shows magnified velocity fields and density contours near the upper disk boundary at t = 3577 P_d, where the maximum velocity is 0.22 c and the contours of log ρ (g cm⁻³) are shown by lines with labels −6, −7, −8, and −10. The disk boundary between regions (A) and (B) lies between the density labels −7 and −8. As mentioned later, this high-velocity region above the disk boundary would correspond to a typical X-ray emitting region of iron lines with velocities of ∼ 0.2 c. Figure 6 shows the flow features in the inner disk and the surrounding high-velocity region, where the velocities are indicated by unit vectors and the thick dot-dashed line shows the disk boundary. One of the remarkable features of luminous accretion disks is convective phenomena in the inner region of the disk (Eggum et al. 1985, 1988; Milsom, Taam 1997; Fujita, Okuda 1998). The convective motions are clearly found in this figure, and more than a dozen convective cells appear. It has previously been shown that a luminous accretion disk with Ṁ* ∼ Ṁ_E is unstable against convection, which is induced by a large negative entropy gradient vertical to the equatorial plane (Bisnovatyi-Kogan, Blinnikov 1977; Fujita, Okuda 1998). The accretion flow is severely disrupted by the convective eddies. Recent theoretical studies and two-dimensional hydrodynamical simulations of radiatively inefficient black-hole accretion flows (Stone et al. 1999; Igumenshchev, Abramowicz 1999; Narayan et al. 2000; Quataert, Gruzinov 2000) showed that accretion flows with low viscosities are generally convection-dominated flows (ADAFs and CDAFs) and have characteristic self-similar solutions in which the disk variables follow power-law profiles with radius. This was also confirmed in recent magnetohydrodynamical simulations of accretion flows around black holes (Stone, Pringle 2001).
Model BH-1 corresponds to a typical ADAF or CDAF case because of its small value of the viscosity parameter, α. Figure 7 shows the radial and angular profiles of the time-averaged flow variables near the equatorial plane for model BH-1: (a) density, ρ, in units of ρ_0 (= 10^-8 g cm^-3); (b) radial velocity, v, in units of the Keplerian velocity v_K* at r = R_*; (c) total pressure, P_t, in units of ρ_0 v_K*^2; (d) rotational velocity, v_φ (solid line), in units of v_K*, and the Keplerian one (dotted line); (e) radial mass-inflow rate, Ṁ_in(r) (g s^-1) (solid line), mass-outflow rate, Ṁ_out(r) (dashed line), and net mass accretion rate, Ṁ_a(r) (= Ṁ_in + Ṁ_out) (dotted line), in the whole region, and mass-outflow rate Ṁ_jet(r) (⋄) in the jet region; (f) convective luminosity, L_c = ∫ F_c(r) dS (erg s^-1); (g) entropy, S, at ζ ∼ 0° (solid line) and 42° (dotted line) in an arbitrary unit; and (h) angular profiles of the mass-inflow rate, Ṁ_in(ζ) (g s^-1) (solid line), mass-outflow rate, Ṁ_out(ζ) (dashed line), and net mass-flow rate, Ṁ_p (= Ṁ_in + Ṁ_out) (⋄). Here, F_c in (f) is the convective energy flux given by Narayan et al. (2000), its integral is taken within the disk region, and the ordinate scale in (h) shows Sign(Ṁ) log |Ṁ|. The time-averaged and angle-integrated mass inflow and outflow rates (Ṁ_in and Ṁ_out, respectively) in figure 7e are defined as

Ṁ_in(r) = 4πr^2 ∫_0^{π/2} ρ min(v, 0) cos ζ dζ,  (14)

and

Ṁ_out(r) = 4πr^2 ∫_0^{π/2} ρ max(v, 0) cos ζ dζ.  (15)

For these rates, the net mass accretion rate is Ṁ_a = Ṁ_in + Ṁ_out. These mass-flux rates are actually regarded as those in the disk region, since Ṁ_jet is a few orders of magnitude smaller than Ṁ_in, Ṁ_out, and Ṁ_a in figure 7(e). Similarly, Ṁ_in(ζ) and Ṁ_out(ζ) are defined as

Ṁ_in(ζ) = 4π cos ζ ∫_{R_*}^{R_max} r ρ min(w, 0) dr,  (16)

and

Ṁ_out(ζ) = 4π cos ζ ∫_{R_*}^{R_max} r ρ max(w, 0) dr.  (17)
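The angle-integrated mass-flux rates above are simple quadratures of ρv cos ζ over the ζ grid at fixed radius, with the min/max operations separating inflow from outflow. A minimal numerical sketch, with a toy uniform-outflow field standing in for the simulation arrays:

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal rule, written out to stay NumPy-version independent."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def mass_flux_rates(r, zeta, rho, v):
    """Angle-integrated inflow/outflow rates through radius r:
    Mdot_out(r) = 4*pi*r^2 * Int_0^{pi/2} rho * max(v,0) * cos(zeta) dzeta,
    and the inflow rate analogously with min(v,0).  rho and v are the
    values on the zeta grid at this radius (toy arrays here)."""
    w = np.cos(zeta)
    mdot_in = 4.0 * np.pi * r**2 * trapezoid(rho * np.minimum(v, 0.0) * w, zeta)
    mdot_out = 4.0 * np.pi * r**2 * trapezoid(rho * np.maximum(v, 0.0) * w, zeta)
    return mdot_in, mdot_out

# Toy check: uniform density and a pure outflow v = +v0, for which
# Int_0^{pi/2} cos(zeta) dzeta = 1, so Mdot_out = 4*pi*r^2*rho0*v0, Mdot_in = 0.
zeta = np.linspace(0.0, np.pi / 2.0, 2001)
rho0, v0, r = 1e-8, 1e9, 1e8
m_in, m_out = mass_flux_rates(r, zeta, np.full_like(zeta, rho0),
                              np.full_like(zeta, v0))
print(m_in, m_out)
```

The sign convention (v < 0 for inflow) makes Ṁ_a = Ṁ_in + Ṁ_out the net accretion rate directly, as in the text.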
suggests that low-viscosity accretion flows around black holes consist of two zones: an inner advection-dominated zone, in which the net mass-inflow rate, Ṁ_in, is small, and an outer convection-dominated zone, in which Ṁ_in increases with increasing radii. From figures 6 and 7, we can see that the accretion flow apparently consists of two such zones. The transition radius, r_tr, between the zones is ∼ 10R_* (= 20r_g), which is larger than 10r_g in Stone and Pringle (2001), but smaller than 30-50r_g in . At r <∼ r_tr, convection is absent and the mass-inflow rate is very small. On the other hand, at r ≥ r_tr the flow is convectively turbulent and accretes slowly. From figure 7, we obtain rough approximations of ρ ∝ r, P_t ∝ r^-0.85, v_φ ∝ r^-1/2, and Ṁ_in ∝ r. The angle-averaged variables are averaged over twenty mesh points between 0 ≤ ζ ≤ 12°. However, it should be noticed that some variables, such as the entropy, may depend considerably on ζ, as is found in figure 7g, which shows S(ζ ∼ 0) ∝ r^-1.6 at r/R_* >∼ 12, but S(ζ = 42°) ∼ constant at r/R_* >∼ 40. The entropies at ζ = 0 and 42° increase sharply at r/R_* <∼ 12 and 40, respectively, which lie almost at the disk surface, because S ∝ T^3/ρ in the radiation-dominated region. The entropy profiles within the disk show equi-contours vertical to the equator and considerable radial gradients near the disk mid-plane, but radial equi-contours in the upper disk. Other radial profiles are compared with those of (1) ρ ∝ r^-1/2, P_t ∝ r^-3/2, v ∝ r^-1/2, and Ṁ_in ∝ r for a self-similar solution (Igumenshchev, Abramowicz 1999; Narayan et al. 2000), (2) ρ ∝ r^-1/2, P_t ∝ r^-3/2, v = 0, v_φ ∝ r^-1/2, and Ṁ_in = 0 for a non-accreting convective envelope solution (Narayan et al. 2000; Quataert, Gruzinov 2000), and (3) ρ ∝ r^0, P_t ∝ r^-1, v ∝ r^-1, and Ṁ_in ∝ r for hydrodynamical simulations with ν = 10^-2 ρ (Stone et al. 1999).
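Power-law approximations such as P_t ∝ r^-0.85 are presumably read off from the time-averaged profiles; a straight-line least-squares fit in log-log space recovers such exponents. A sketch on synthetic data (the fitting routine is an illustration, not the paper's actual procedure):

```python
import numpy as np

def power_law_exponent(r, q):
    """Least-squares slope of log q against log r, i.e. p in q ~ r^p."""
    slope, _intercept = np.polyfit(np.log(r), np.log(q), 1)
    return slope

# Synthetic profile with the exponent quoted for the total pressure, P_t ~ r^-0.85.
r = np.logspace(0.5, 2.0, 50)
P_t = 3.0 * r**-0.85
print(round(power_law_exponent(r, P_t), 2))  # -0.85
```

On real simulation output the fit would be restricted to the radial range where the profile is actually self-similar (here, r >~ r_tr).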
The main difference between our profiles and those results is the density profile, ρ ∝ r, in model BH-1. However, it is natural that a very luminous disk like model BH-1 would have such a density inversion profile, because the initial Shakura-Sunyaev disk has ρ ∝ r^3/2 and P_t ∝ T^4 ∝ r^-3/2 in the inner region of the disk, where the radiation pressure and electron scattering are dominant (Shakura, Sunyaev 1973). In contrast, the self-similar solutions and other simulations were considered under a negligible radiation-pressure condition. In figure 6, we remark on the existence of an "accretion zone" just below the disk boundary, but above the top of the convective zones. This accretion zone is also found in Eggum et al. (1985, 1988). At r > r_tr, roughly half the mass at any time at any radius will be flowing in and flowing out, respectively, and the mass-inflow rate, Ṁ_in, balances the mass-outflow rate, Ṁ_out. Near the transition region, by way of the convective zones and the accretion zone, matter is accreted towards the equatorial plane. The accreting matter, which is carried to the transition region by convection, partly diverts into the high-velocity jets and partly flows into the inner advection-dominated zone. Thus, the mass-flow rate swallowed into the black hole is very small, ∼ 5 × 10^14 g s^-1, because the densities near the inner boundary are as small as ∼ 10^-14 to 10^-26 g cm^-3, although the velocities are as large as −0.3c to −0.6c. On the other hand, the mass-loss rate of the jets at the outermost boundary is ∼ 4 × 10^18 g s^-1, which is one order of magnitude smaller than the input accretion rate, Ṁ_*, whereas the mass-outflow rate, Ṁ_out, through the outermost disk boundary is as large as the input accretion rate. We notice that the radial mass-outflow rate in the jets region, Ṁ_jet
, increases with radii, roughly as Ṁ_jet ∝ r in figure 7e, analogously with the other mass-flow rates Ṁ_in(r) and Ṁ_out(r), and furthermore that the ζ-direction net mass-outflow rate Ṁ_p in figure 7h changes from a negative value to a positive value of ∼ 10^18 g s^-1 at ζ ∼ 56°, where the angle position corresponds to the boundary between the disk and the jets region. After crossing the boundary, Ṁ_p decreases abruptly, because the outflow gas from the disk surface is strongly bent toward the radial direction by the dominant radial radiation-pressure force in the high-velocity region. This shows that a considerable disk wind is generated in the upper disk and is incorporated into the high-velocity flow. Therefore, a part of the convective flow escapes from the whole system as the disk wind, and the rest may always remain in convective circulations through the disk. We thus expect that the mass-loss rate of the jets at very large radii will become much larger than the 4 × 10^18 g s^-1 calculated here.

Model BH-2 with α = 0.1

The shape of the initial disk is very similar to that of model BH-1, but the densities and the temperatures on the disk mid-plane in model BH-2 are, respectively, more than one order of magnitude and a few times larger than those in model BH-1. Figure 8 shows the time evolution of the luminosity for model BH-2. The luminosity, 2.5 × 10^38 erg s^-1, at the final phase, t = 4365P_d, is one order of magnitude smaller than that in model BH-1, and L/L_E is ∼ 0.16. This model shows very different time evolutions from model BH-1. Mass ejection from the disk surface first begins at the innermost region of the disk, and subsequently occurs at the outer part of the disk. Although the ejected gas is blown off partly through the outer boundary, it accumulates in the outer part of the disk with increasing time. Finally, the disk becomes fatter at the outer part than the initial disk.
The obtained accretion disk is geometrically thick, with H/r ∼ 0.2, and has an opening angle of ∼ 10° to the equatorial plane. Figure 9 shows the density contours with velocity vectors (a) and bird's-eye views of the temperature contours (b) on the meridional plane at the final phase. The thick dot-dashed line in figure 9a shows the boundary between the disk and the surrounding atmosphere. In this figure we find no relativistic jets along the rotational axis such as are found in model BH-1. The temperatures and densities in the atmospheric region around the disk are as hot as ∼ 10^8-10^10 K and as low as ∼ 10^-9-10^-12 g cm^-3, respectively, but the outflow velocities are not very large in model BH-2, where the maximum velocity is ∼ 0.07 c near the rotational axis. Large gradients of the density and the temperature across the disk boundary are also found in figures 9a and b. Figure 10 shows contours of the radiation energy density E_0 (a) and the angular velocity Ω (b) on the meridional plane at t = 4365P_d, with the same units as in figure 4. The disk is nearly Keplerian throughout its whole shape, but the atmospheric region above the disk shows non-Keplerian angular velocities. There also appear convective cells at 15 <∼ r/R_* <∼ 50. The densities and temperatures at r/R_* <∼ 10 on the disk mid-plane become more rarefied and much hotter, respectively, than those in the initial disk, and a spherical high-temperature region is formed around here. The densities near the inner boundary are much larger, ∼ 10^-9-10^-10 g cm^-3, than those in model BH-1. The resultant mass-inflow rate swallowed into the black hole is much larger, ∼ −1.4 × 10^16 g s^-1, than the ∼ −5 × 10^14 g s^-1 in model BH-1, but is negligibly small compared with the input accretion rate. The disk features obtained here are better interpreted within the category of the standard disk model, though the disk is geometrically thick.
The result of model BH-2 indicates that the concept of relativistic jets expected for super-Eddington accretion disks around black holes must be modified under the α-model of viscosity; that is, for the case of strong kinematic viscosity, there exists no relativistic and well-collimated outflow jet, even in super-Eddington accretion disks. Though the densities and temperatures in the atmosphere around the disk are in the range of those required for the X-ray jets of SS 433, the maximum outflow velocity is at most ∼ 0.07c and the flows are not well collimated anywhere. Therefore, we cannot expect the X-ray emission lines with a definite Doppler shift of 0.26 c observed in SS 433, and conclude that model BH-2 is unfavorable for SS 433.

X-ray Emitting Region of Iron Lines

For a Maxwellian distribution of the electron velocities, the power emitted per unit volume due to excitations of level n′ of ion Z in the ground state n by electron collision is given by

dP/dV = 1.9 × 10^-16 T^-1/2 Ω (ΔE/I_H) e^{-ΔE/kT} N_e N_Z erg cm^-3 s^-1,  (18)

where ΔE, I_H, k, N_e, N_Z, and Ω are the excitation energy between the n and n′ levels, the ionization potential of hydrogen, the Boltzmann constant, the number density of electrons, the number density of ion Z, and the effective average of the collision strength, respectively (Blumenthal, Tucker 1974). The red- and blue-shifted highly ionized iron lines, Fe XXV Kα and Fe XXVI Kα, are clearly seen in all of the observed X-ray spectra of SS 433 (Kotani et al. 1996). Focusing typically on the Fe XXV Kα line, we calculated the total power emitted in the iron lines throughout the optically thin region.
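The strong temperature sensitivity of this collisional-excitation emissivity can be checked directly. The sketch below evaluates the formula with placeholder values for Ω and the ion density N_Z (not given in the paper), taking ΔE ≈ 6.70 keV for Fe XXV Kα:

```python
import math

def iron_line_emissivity(T, N_e, N_Z, Omega=1.0, dE_keV=6.70):
    """dP/dV (erg cm^-3 s^-1) from the collisional-excitation formula
    1.9e-16 * T^(-1/2) * Omega * (dE/I_H) * exp(-dE/kT) * N_e * N_Z.
    Omega and N_Z are placeholder values; dE = 6.70 keV is assumed
    for the Fe XXV K-alpha excitation energy."""
    I_H_keV = 13.6e-3          # ionization potential of hydrogen, keV
    k_keV = 8.617e-8           # Boltzmann constant, keV per K
    return (1.9e-16 * T**-0.5 * Omega * (dE_keV / I_H_keV)
            * math.exp(-dE_keV / (k_keV * T)) * N_e * N_Z)

# The Boltzmann factor exp(-dE/kT) is why the hot (~4e8 K) region just
# outside the disk boundary emits the lines, while the cool (~1e7 K)
# disk interior contributes little despite its higher density:
ratio = iron_line_emissivity(4e8, 1e13, 1e9) / iron_line_emissivity(1e7, 1e13, 1e9)
print(ratio)  # several hundred at equal densities
```

At equal densities the hot region out-emits the disk by orders of magnitude; the even hotter funnel gas still loses by many more orders of magnitude in density, as the text argues.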
Figure 11 shows the mesh points which contributed most to the total X-ray power of the iron lines at t = 3577P_d for model BH-1, where only mesh points with emissivity of more than a hundredth of the maximum emissivity in the optically thin region are indicated, by the triangles, squares, filled triangles, and filled squares, which show different flow velocities of ∼ 0.10 c, 0.13 c, 0.17 c, and 0.20 c at the mesh points, respectively. The influences of photoionization and photoexcitation by the intense radiation field in the innermost region were not taken into account here. If these effects are considerable, the results in figure 11 may be invalid. In this respect, we mention that Brinkmann and Kawai (2000) showed the effect of photoionization on the iron-line flux ratio to be relatively small, using their jet model of SS 433 and assuming that the emission at the base of the jets is given by a black body, where the temperatures and densities (4 × 10^8 K and 10^-9-10^-10 g cm^-3, respectively) used in their model are of the same orders as those in the marked region in figure 11. We notice that the emitting region of the iron lines in figure 11 is confined to a narrow region at 63° <∼ ζ <∼ 68°, just outside the disk boundary, and that the high-velocity gas in the funnel region would never contribute to the X-ray line emission, because the densities in that region are more than ten orders of magnitude smaller than those in the narrow region, regardless of the temperatures in the funnel region. The accretion disk itself does not contribute to the X-ray line emission, because the temperatures are as low as ∼ 10^6-10^7 K and the emissivity decreases exponentially as e^{-ΔE/kT}, in spite of the high densities in the disk. Therefore, we consider that the unique velocity, 0.26 c, of SS 433 should be attributed to the effective velocity in the above emitting region of the iron lines.
Discussion

Comparison with Other Results

Our present results are compared with those of a super-Eddington black-hole model by Eggum et al. (1985, 1988), who considered a black hole of M_* = 3M_⊙ and Ṁ_* = 4Ṁ_E and used the flux-limited diffusion approximation for the radiation transport. The black-hole mass is a factor of 3 smaller than ours, but the input accretion rate is the same as ours. They used a constant kinematic viscosity, ν = 1.5 × 10^15 cm^2 s^-1, instead of the usual α-model for viscosity. There exist some qualitatively important and common results between our model BH-1 and their model, such as the relativistic funnel jets, the geometrically thick disk, the accretion zone, and the remarkable convective phenomena in the disk. In their results, the flows between the disk and the rotational axis are accelerated to relativistic velocities by the radiation-pressure force. The collimation angle of the jets to the rotational axis is also as large as ∼ 30° in model BH-1. The mass-outflow rate obtained by Eggum et al. (1985, 1988) is three orders of magnitude smaller than the input accretion rate, Ṁ_*, and accordingly than the Ṁ_loss estimated for SS 433. On the other hand, the mass-outflow rate from the high-velocity jets in model BH-1 is much larger than that in Eggum et al., although it is one order of magnitude smaller than the value estimated for SS 433. Most of the accreting matter in Eggum et al. (1985, 1988) is swallowed into the central black hole through the inner boundary, but the mass-flow rate into the black hole in our model is negligibly small compared with the input accretion rate. The differences in the numerical results may be attributed to the high kinematic viscosity, ν, of 1.5 × 10^15 cm^2 s^-1 in Eggum et al. (1985, 1988), which is more than two orders of magnitude larger than our values at r/R_* <∼ 20 and at r/R_* >∼ 50, respectively, on the disk mid-plane in model BH-1.
Convection-Dominated Accretion Flows with Small α

As an alternative to the ADAF models developed since Narayan and Yi (1994), Blandford and Begelman (1999) proposed advection-dominated inflow-outflow solutions (ADIOS), which include a powerful wind that carries away mass, angular momentum, and energy from the accreting gas. A typical case with appropriate parameters shows that only a small fraction of the mass supplied will reach the black hole, accompanied by a disk wind. In relation to these ADAFs, Narayan et al. (2000) and Quataert and Gruzinov (2000), among others, developed the convection-dominated accretion flows (CDAFs), or a 'convective envelope solution'. The CDAF models assume that the convective energy transport is primarily directed radially outward through the disk, with no outflows, whereas in the ADIOS models the convective energy transport has a large ζ-direction component toward the disk surface at high latitudes, which will drive strong outflows. The initial Shakura-Sunyaev disk for model BH-1 is originally unstable to convection, since the entropy, S ∝ T^3/ρ, in the radiation-pressure dominant inner disk falls abruptly with increasing z. Convection tends to establish an isentropic structure along the Z-axis, which results in equi-contours of entropy vertical to the equator near the disk mid-plane in model BH-1. The convective flows accompanied by a large mass-outflow from the disk surface may be relevant to the ADIOS models. On the other hand, large negative radial gradients of the entropy near the mid-plane in model BH-1 also induce a large convective energy transport directed radially outward through the disk, as can be seen in figures 7f and g. As a result, the accretion flows in model BH-1 may be dominated by an ADIOS-type flow and also by a CDAF-type flow. The convective envelope solution, which is a special case of the CDAFs, was addressed by Narayan et al. (2000) and Quataert and Gruzinov (2000).
In the convective envelope solution, v ∼ 0 and the viscous dissipation rate, Q^+, is ∼ 0, because the net shear stress, via viscosity and via convection, vanishes. This leads to no advected entropy, and thus the divergence of the convective energy flux vanishes in the energy equation; that is, F_c(r) ∝ r^-2. The relation F_c ∝ r^-2 is also derived from the definition F_c = −3α(c_s^2/Ω_K)ρT dS/dr (Narayan et al. 2000), using the radial profiles of ρ ∝ r and P_t ∝ r^-0.85 in figure 7, where c_s is the sound speed. Figure 7f shows roughly L_c ∼ constant in the outer region; accordingly, F_c ∝ r^-2 if F_c is independent of ζ. Even for net zero stress, the negative radial gradient of the entropy would be maintained if the high-entropy gas is lost in a considerable disk wind, as is found in model BH-1. Accordingly, model BH-1 may be in a circumstance near the convective envelope solution, although the mean radial velocity is not exactly zero. In this respect, further knowledge of the actual mechanism of the angular momentum transport by convection is required. It is suggested that CDAFs have a significant outward energy flux carried by convection, with a luminosity of L_c = ε_c Ṁ c^2, where the efficiency, ε_c, is ∼ 3 × 10^-3-10^-2, independently of the accretion rate, and that the radiative output comes mostly from the convective part. In model BH-1, the convective luminosity, L_c, near the outer boundary is ∼ 5 × 10^37 erg s^-1, which is far smaller than the radiative luminosity, L (∼ 1.6 × 10^39 erg s^-1 at t = 3577 P_d in model BH-1); this corresponds to ε_c ∼ 10^-3, which agrees with that by . However, the luminosity, L, comes mostly from the innermost hot region at r ∼ r_tr, through the optically thin high-velocity region, instead of from the convection-dominated disk. Indeed, Ṁ_a GM_*/r_tr gives 2 × 10^39 erg s^-1, which corresponds well to L, roughly using Ṁ_a(r_tr) ∼ 10^20 g s^-1 and r_tr ∼ 20r_g from figure 7.
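The closing order-of-magnitude estimate is easy to reproduce: the gravitational energy released by an accretion rate Ṁ_a down to the transition radius r_tr is Ṁ_a GM_*/r_tr. A quick check with the model parameters (M_* = 10 M_⊙, r_tr ∼ 20 r_g):

```python
G = 6.674e-8           # gravitational constant, cgs
c = 2.998e10           # speed of light, cm/s
M_sun = 1.989e33       # solar mass, g

M = 10.0 * M_sun                 # black-hole mass of the models
r_g = 2.0 * G * M / c**2         # Schwarzschild radius
r_tr = 20.0 * r_g                # transition radius ~ 20 r_g
Mdot_a = 1e20                    # net accretion rate at r_tr, g/s (figure 7)

L_est = Mdot_a * G * M / r_tr    # energy released inside r_tr per second
print(f"{L_est:.1e} erg/s")      # ~2e39 erg/s, close to L ~ 1.6e39 erg/s
```

The estimate landing within a factor of order unity of the computed radiative luminosity supports attributing L to the innermost hot region rather than to the convective disk.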
Finally, we emphasize that most of the simulations and theoretical models so far have been done for thin accretion disks without radiation, and that it is still not clear whether these results are valid for much thicker and radiation-pressure dominant disks. Further theoretical studies of radiation-pressure dominant ADAFs and CDAFs are required.

Relativistic Jets, Collimation, and Mass-Outflow Rate

Observations of SS 433 show many emission lines of heavy elements, such as Fe and Ni, which show the unique red and blue Doppler shifts corresponding to 0.26 c in the X-ray region. The collimation angle of the SS 433 jets seems to be as small as 0.1 radian (several degrees) (Margon 1984). Any model for SS 433 must explain the characteristics of these emission lines. The unique velocity, 0.26 c, may be reasonably explained in terms of the relativistic velocities at a confined region just outside of the upper disk boundary, which lies between the funnel wall and the disk, as is found in model BH-1. The velocity 0.26 c is not an inherent velocity of SS 433, like an orbital velocity in the binary system, but is a result of the super-Eddington accretion disk. The collimation angle, ∼ 30°, of the high-velocity jets in model BH-1 is rather large compared with the value of ∼ 0.1 radian expected for the SS 433 jets. If the calculated high-velocity jets should truly be confined to a small angle of ∼ 0.1 radian from the rotating axis, we need a collimation mechanism operating outside the present computational grid, or some mechanism, such as magnetohydrodynamical collimation, which is not considered here. This problem remains open, although there exist some discussions that the jets of SS 433 may not be well collimated, as is inferred from the line profiles, and that a 0.1 radian jet may not be consistent with the X-ray observations of SS 433 (Eggum et al. 1988).
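The connection between the jet speed 0.26 c and the observed line shifts goes through the standard relativistic Doppler formula, which the paper does not reproduce; a minimal sketch:

```python
import math

def doppler_factor(beta, theta_deg):
    """Relativistic Doppler factor E_obs/E_rest = 1/(gamma*(1 - beta*cos(theta))),
    for emission at angle theta to the flow velocity."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    mu = math.cos(math.radians(theta_deg))
    return 1.0 / (gamma * (1.0 - beta * mu))

beta = 0.26  # jet speed inferred for SS 433
# Averaged over the precession cycle, the first-order cos(theta) terms of the
# two oppositely directed jets cancel, leaving the transverse redshift
# z = gamma - 1, the well-known mean redshift of the SS 433 moving lines:
z_mean = 1.0 / math.sqrt(1.0 - beta * beta) - 1.0
print(f"{z_mean:.4f}")  # ~0.0356
```

The purely special-relativistic mean redshift of ∼ 0.035 is one reason the 0.26 c velocity is regarded as a bulk flow speed rather than, say, an orbital velocity.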
Hydrodynamical modeling of the jets combined with recent X-ray observations reveals temperatures of 6-8 × 10^8 K, particle densities of 5 × 10^11-5 × 10^13 cm^-3 at the base of the jets, and a length of the X-ray jets of >∼ 10^10 cm (Brinkmann et al. 1991; Brinkmann 1993; Kotani et al. 1996). These temperatures and densities required for the X-ray jets qualitatively agree with those in the high-velocity region between the funnel wall and the disk boundary in model BH-1. The X-ray emitting region observed in SS 433 would be in the range of r = 10^10-10^12 cm (Kotani et al. 1996), which far exceeds the present computational domain. The high-velocity jets region in model BH-1 would probably extend up to r ∼ 10^12 cm. Additionally, from the characteristics of the results given in figure 3, we expect that such an extended high-velocity region would have temperatures and densities of ∼ 10^8 K and ∼ 10^-12-10^-10 g cm^-3. Although the mass-loss rate (4 × 10^18 g s^-1) of the jets in model BH-1 is one order of magnitude smaller than the mass-loss rate (≥ 3 × 10^19 g s^-1) estimated for SS 433, we speculate that the actual mass-loss rate at large radii may become comparable to the observed one, because the jet mass-flux would increase due to the wind gas from the disk at large radii. In such a case, a problem may arise in that the jet gas may slow down below 0.26 c if the enhanced region of the jet mass-flux is too far from the present computational domain. Alternatively, super-Eddington models with a much larger input accretion rate, Ṁ_*, than the present value, 4Ṁ_E, may be responsible for a mass-loss rate comparable to ∼ 10^20 g s^-1.

Central Object of SS 433

Whether the central object of SS 433 is a black hole or a neutron star remains unsolved even at present, because we have not yet obtained any decisive evidence of the central source mass.
From some observational and theoretical estimates of the large mass and the very high energetics of SS 433, many astrophysicists seem to favor a black-hole hypothesis for SS 433 (Leibowitz et al. 1984; Fabrika, Bychkova 1990; Cherepashchuk 1993; Fukue et al. 1998; Hirai, Fukue 2001), while some suggest that the compact object is a neutron star, from the viewpoint of a theoretical model and observations of the He II line (Begelman, Rees 1984; D'Odorico et al. 1991; Zwitter et al. 1993). In this respect, our present results are also compared with those of a neutron-star model by Okuda and Fujita (2000) with M_* = 1.4M_⊙, R_* = 10^6 cm, and a mass accretion rate (Ṁ_*) of 10^20 g s^-1, which corresponds to ∼ 100Ṁ_E for a neutron star. The viscosity parameter used, α = 10^-3, is the same as that in model BH-1. The characteristic features of model BH-1 are also found in this neutron-star model, where there appear a high-velocity jet region with a collimation angle of ∼ 10° to the rotational axis, a confined X-ray emitting region of iron lines in the high-velocity region, a geometrically thick disk with an opening angle of 80° to the equatorial plane, a mass-loss rate far less than 10^20 g s^-1, and convective motions in the inner disk. Regarding the collimation angle, the neutron-star model, with its smaller angle of ∼ 10°, may be favorable for SS 433. However, except for this point, there is no intrinsic or large quantitative difference between the neutron-star model and the black-hole model; that is, we could not find any decisive proof for either of these objects as the candidate for SS 433.

Assumption of the Flux-Limited Diffusion Approximation

The radiation transport was treated here in the flux-limited diffusion approximation. The flux limiter, λ, and the Eddington factor, f_E, used in the approximation are given by empirical formulas fitted to some stellar atmosphere models (Kley 1989).
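The quoted "∼ 100 Ṁ_E" for the neutron-star model is easy to check, assuming Ṁ_E is normalized by the surface accretion efficiency GM/R (the paper does not spell out its definition, so this normalization is an assumption of the sketch):

```python
import math

G = 6.674e-8; c = 2.998e10              # cgs constants
M_sun = 1.989e33; m_p = 1.673e-24
sigma_T = 6.652e-25                     # Thomson cross-section, cm^2

def eddington_rate(M, R):
    """Critical rate Mdot_E = L_E * R / (G * M): the accretion rate whose
    liberated luminosity G*M*Mdot/R equals the Eddington luminosity
    L_E = 4*pi*G*M*m_p*c/sigma_T.  The surface-efficiency normalization
    is assumed, not taken from the paper."""
    L_E = 4.0 * math.pi * G * M * m_p * c / sigma_T
    return L_E * R / (G * M)

# Okuda & Fujita (2000) neutron-star parameters: M = 1.4 M_sun, R = 1e6 cm.
Mdot_E = eddington_rate(1.4 * M_sun, 1e6)
print(f"{1e20 / Mdot_E:.0f}")  # ~100, consistent with '10^20 g/s ~ 100 Mdot_E'
```

With this normalization the 10^20 g s^-1 input rate indeed comes out at roughly a hundred times the critical rate for a 1.4 M_⊙, 10 km neutron star.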
Recently, Turner and Stone (2001) pointed out that the flux-limited diffusion approximation (FLD) is less accurate when the flux has a component perpendicular to the gradient in radiation energy density, and in optically thin regions when the radiation field depends strongly on the angle. Our calculations typically involve two optical regimes: the optically thick disk region and the optically thin high-velocity region. The flux limiter, λ, is almost 1/3 in the disk and, by contrast, very small everywhere in the high-velocity region. In an optically thick disk, FLD is sufficient. On the other hand, in the optically thin high-velocity region, the radiation fields are highly anisotropic and FLD may be less accurate. The radiation fields in the high-velocity region originate in radiation from the central hot region and the outer disk surface. However, the contours of radiation energy density are almost concentric around the central object. Actually, the radial radiation flux, which is attributed to the central hot region, is one order of magnitude larger than the radiation flux normal to the disk surface. As a result, we consider that FLD in our code would not seriously influence the whole dynamics of the flows and the total luminosity; still, this problem should be checked with a more accurate method in future improvements of the numerical code.

Conclusions

We examined highly super-Eddington black-hole models for SS 433, based on two-dimensional hydrodynamical calculations coupled with radiation transport. A model with a small viscosity parameter, α = 10^-3, shows that a geometrically and optically thick convection-dominated disk with a large opening angle of ∼ 60° to the equatorial plane, and a rarefied, very hot, and optically thin high-velocity jets region around the disk, are formed.
The thick accretion flow near the equatorial plane consists of two different zones: an inner advection-dominated zone, in which the net mass-inflow rate, Ṁ_in, is very small, and an outer convection-dominated zone, in which Ṁ_in increases with increasing radii. The high-velocity region along the rotating axis is divided into two characteristic regions by the funnel wall, a barrier where the effective potential due to the gravitational potential and the centrifugal one vanishes. A confined region in the high-velocity jet region, just outside of the photospheric disk boundary, would be responsible for the observed X-ray iron emission lines with a Doppler shift of 0.26 c. However, from this model, we cannot obtain the small collimation angle of ∼ 0.1 radian of the jets or a mass-outflow rate of the jets comparable to the ∼ 10^20 g s^-1 expected for SS 433. These problems still remain open. On the other hand, from a model with a large α = 0.1, we find a geometrically and optically thick quasi-Keplerian disk with an opening angle of ∼ 10° to the equatorial plane, a disk far thinner than that with α = 10^-3. This model may be unfavorable for SS 433, because we never find relativistic jets with ∼ 0.2 c here. Further investigations of the super-Eddington models with other model parameters and over a wider range of the computational domain (r ∼ 10^12 cm) may give some key to the open problems.

The author gratefully thanks Dr. D. Molteni for his useful discussions of the numerical method and the results, and also the referee for many useful comments concerning the advection-dominated and the convection-dominated accretion flows. Numerical computations were carried out at the Information Processing Center of Hokkaido University of Education.

Fig. 1. Time evolution of the luminosity, L, for model BH-1, where P_d is the Keplerian orbital period at the inner boundary 2r_g.

Fig. 2.
Snapshots of the density evolution for model BH-1, where the right-hand number in each panel shows the evolutionary time in units of P_d; a color legend for log ρ (g cm^-3) is also shown. The first snapshot denotes the initial density configuration, which consists of the Shakura-Sunyaev disk and the surrounding rarefied atmosphere. The green and orange parts in the later snapshots show the hot, rarefied, and optically thin high-velocity jets region, while the blue and dark-blue parts denote the cold, dense, and optically thick disk region. The flow velocities at the final phase are ∼ 0.1-0.2 c in the deep-green part and ∼ 0.2-0.4 c in the light-green and orange parts. The observed X-ray emission of iron lines in SS 433 would be formed in the deep-green high-velocity region just outside the disk boundary.

Fig. 3. Velocity vectors and contours of the density, ρ (g cm^-3), in logarithmic scale (a) and bird's-eye views of the gas temperature T (K) (b), on the meridional plane at the evolutionary time t = 3577P_d for model BH-1. The density contours are denoted by the labels log ρ = −5, −6, −7, −8, −10, and −18, and the velocity vectors show the maximum velocity 0.4 c at ζ ∼ 80° and r/R_* ∼ 200. The thick dashed and the thick dot-dashed lines in (a) show the funnel wall and the disk boundary between the disk and the high-velocity region, respectively.

Fig. 4. Contours of the logarithmic radiation energy density log E_0 in an arbitrary unit (a) and the logarithmic angular velocity log Ω normalized by the Keplerian angular velocity at the inner boundary, with intervals of 0.25 (b), on the meridional plane at t = 3577P_d for model BH-1, where the thick dashed and the thick dot-dashed lines denote the funnel wall and the disk boundary between the disk and the high-velocity jets region, respectively.

Fig. 5.
Magnified velocity fields and density contours near the upper disk boundary at t = 3577P_d for model BH-1, where the maximum velocity is 0.22 c and the contours of log ρ (g cm^-3) are shown by lines with labels −6, −7, −8, and −10. This high-velocity region above the disk boundary is a typical X-ray emitting region of iron lines with velocities of ∼ 0.2 c.

Fig. 6. Unit-velocity vectors and density contours of log ρ = −5, −6, −7, −8, −10, and −18 in the inner disk and the surrounding high-velocity jets region at t = 3577P_d for model BH-1, where the dot-dashed line denotes the disk boundary between the disk and the high-velocity jets region. A transition between the inner advection-dominated zone and the outer convection-dominated zone occurs at r ∼ 10R_* (= 20r_g).

Fig. 7. Radial and angular profiles of the time-averaged flow variables near the equatorial plane for model BH-1: (a) density ρ, (b) radial velocity v, (c) total pressure P_t, (d) rotational velocity v_φ, (e) four radial mass-flux rates Ṁ(r), (f) convective luminosity L_c, (g) entropy S, and (h) angular profiles of three mass-flux rates Ṁ(ζ) (see text for details).

Fig. 8. Time evolution of the luminosity L for model BH-2.

Fig. 9. Velocity vectors and contours of the density ρ (g cm^-3) in logarithmic scale (a) and bird's-eye views of the gas temperature T (K) (b) on the meridional plane at t = 4365P_d for model BH-2, where the thick dot-dashed line in (a) shows the boundary between the disk and the surrounding atmosphere, and the flow vectors are denoted in the same units as in figure 3a.

Fig. 10. Contours of the radiation energy density (a) and the angular velocity Ω (b) on the meridional plane at t = 4365P_d for model BH-2, with the same units as in figure 4.

Fig. 11.
11Distribution of mesh points contributed mostly to the total X-ray emission of iron lines at t = 3577P d for model BH-1, where the disk boundary (thick dot-dashed line) and the temperature contours of log T = 7,8, 9, and 11 (lines) are also plotted. The diamonds, triangles, filled triangles, and filled squares denote the mesh points with different flow velocities of ∼ 0.10 c, 0.13 c, 0.17 c, and 0.20 c, respectively. Only mesh points with emissivity of more than a hundredth of the maximum emissivity in the optically thin region are plotted here. Table 1 . 1Model parameterModel M * /M ⊙Ṁ * /Ṁ EṀ * ( g s −1 ) α R * R max /R * BH-1 10 4 8 × 10 19 10 −3 2r g 220 BH-2 10 4 8 × 10 19 0.1 2r g 220 . M A Abramowicz, I V Igumenshchev, ApJ. 55453Abramowicz, M. A., & Igumenshchev, I. V. 2001, ApJ, 554, L53 . M A Abramowicz, I V Igumenshchev, E Quataert, R Narayan, preprint (astro-ph/0110371Abramowicz, M. A., Igumenshchev, I. V., Quataert, E., & Narayan, R. 2001, preprint (astro-ph/0110371) . M C Begelman, M J Rees, MNRAS. 206209Begelman, M. C., & Rees, M. J. 1984, MNRAS, 206, 209 . G S Bisnovatyi-Kogan, S I Blinnikov, A&A. 59111Bisnovatyi-Kogan, G. S., & Blinnikov, S. I. 1977, A&A, 59, 111 . R D Blandford, M C Begelman, MNRAS. 3031Blandford, R. D., & Begelman, M. C. 1999, MNRAS, 303, L1 G R Blumenthal, W H Tucker, Reidel). X-Ray Astronomy, ed. R. Giacconi & H. GurskyDordrecht99Blumenthal, G. R., & Tucker, W. H. 1974, in X-Ray Astronomy, ed. R. Giacconi & H. Gursky ( Dordrecht: Reidel), 99 W Brinkmann, Stellar Jets and Bipolar Outflows. L. Errico & A. A. VittoneDordrechtKluwer193Brinkmann, W. 1993, in Stellar Jets and Bipolar Outflows, ed. L. Errico & A. A. Vittone ( Dordrecht: Kluwer), 193 . W Brinkmann, N Kawai, A&A. 363640Brinkmann, W, & Kawai, N. 2000, A&A, 363, 640 . W Brinkmann, N Kawai, M Matsuoka, H H Fink, A&A. 112Brinkmann, W., Kawai, N., Matsuoka, M., & Fink, H. H. 1991, A&A, 241, 112 M Calvani, L Nobili, Astrophysical jets. A. Ferrari & A. G. 
PacholczykDordrecht189: Reidel)Calvani, M., & Nobili, L. 1983, in Astrophysical jets, ed. A. Ferrari & A. G. Pacholczyk ( Dordrecht: Reidel), 189 A M Cherepashchuk, Stellar Jets and Bipolar Outflows. L. Errico & A. A. VittoneDordrechtKluwer179Cherepashchuk, A. M. 1993, in Stellar Jets and Bipolar Outflows, ed. L. Errico & A. A. Vittone (Dordrecht: Kluwer), 179 . S D&apos;odorico, T Oosterloo, T Zwitter, M Calvani, Nature. 353329D'Odorico, S., Oosterloo, T., Zwitter, T., & Calvani, M. 1991, Nature, 353, 329 . G E Eggum, F V Coroniti, J I Katz, ApJ. 29841Eggum, G. E., Coroniti, F. V., & Katz, J. I. 1985, ApJ, 298, L41 . G E Eggum, F V Coroniti, J I Katz, ApJ. 330142Eggum, G. E., Coroniti, F. V., & Katz, J. I. 1988, ApJ, 330, 142 . S N Fabrika, L V Bychkova, A&A. 2405Fabrika, S. N., & Bychkova, L. V. 1990, A&A, 240, L5 . M Fujita, T Okuda, PASJ. 50639Fujita, M., & Okuda, T. 1998, PASJ, 50, 639 . J Fukue, PASJ. 34163Fukue, J. 1982, PASJ, 34, 163 . J Fukue, PASJ. 48631Fukue, J. 1996, PASJ, 48, 631 . J Fukue, Y Obana, M Okugami, PASJ. 5081Fukue, J., Obana, Y., & Okugami, M. 1998, PASJ, 50, 81 . Y Hirai, J Fukue, PASJ. 53679Hirai, Y., & Fukue, J. 2001, PASJ, 53, 679 . V Icke, AJ. 85329Icke, V. 1980, AJ, 85, 329 . V Icke, A&A. 216294Icke, V. 1989, A&A, 216, 294 . I V Igumenshchev, M A Abramowicz, MNRAS. 303309Igumenshchev, I. V., & Abramowicz, M. A. 1999, MNRAS, 303, 309 S Kato, J Fukue, S Mineshige, Black Hole Accretion Disks. KyotoKyoto University PressKato, S., Fukue, J., & Mineshige, S. 1998, Black Hole Accretion Disks (Kyoto: Kyoto University Press) . J I Katz, ApJ. 236127Katz, J. I. 1980, ApJ, 236, L127 . A R King, M C Begelman, ApJ. 519169King, A. R., & Begelman, M. C. 1999, ApJ, 519, L169 . W Kley, A&A. 20898Kley, W. 1989, A&A, 208, 98 . W Kley, D N C Lin, ApJ. 461933Kley, W., & Lin, D. N. C. 1996, ApJ, 461, 933 . T Kotani, N Kawai, M Matsuoka, W Brinkmann, PASJ. 48619Kotani, T., Kawai, N., Matsuoka, M., & Brinkmann W. 1996, PASJ, 48, 619 . 
E M Leibowitz, T Mazeh, H Mendelson, Nature. 307341Leibowitz, E. M., Mazeh, T., & Mendelson, H. 1984, Nature, 307, 341 . C D Levermore, G C Pomraning, ApJ. 248321Levermore, C. D., & Pomraning, G. C. 1981, ApJ, 248, 321 . D Lynden-Bell, Phys. Scr. 17185Lynden-Bell, D. 1978, Phys. Scr., 17, 185 . V M Lipunov, N I Shakura, SvA. 26386Lipunov, V. M., & Shakura, N. I. 1982, SvA 26, 386 . B Margon, ARA&A. 22507Margon, B. 1984, ARA&A, 22, 507 . D L Meier, ApJ. 256706Meier, D. L. 1982, ApJ, 256, 706 . J A Milsom, R E Taam, MNRAS. 286358Milsom, J. A., & Taam, R. E. 1997, MNRAS, 286, 358 . D Molteni, D Ryu, S K Chakrabati, ApJ. 470460Molteni, D., Ryu, D., & Chakrabati, S. K. 1996, ApJ, 470, 460 . R Narayan, I Yi, ApJ. 42813Narayan, R., & Yi, I. 1994, ApJ, 428, L13 . R Narayan, I V Igumenshchev, M A Abramowicz, ApJ. 539798Narayan, R., Igumenshchev, I. V., & Abramowicz, M. A. 2000, ApJ, 539, 798 . T Okuda, M Fujita, PASJ. 525Okuda, T., & Fujita, M. 2000, PASJ, 52, L5 . T Okuda, M Fujita, S Sakashita, PASJ. 49679Okuda, T., Fujita, M., & Sakashita, S. 1997, PASJ, 49, 679 . B Paczyńsky, P J Wiita, A&A. 8823Paczyńsky, B., & Wiita, P. J. 1980, A&A, 88, 23. . J C B Papaloizou, G Q Stanley, MNRAS. 220593Papaloizou, J. C. B., & Stanley, G. Q. G. 1986, MNRAS, 220, 593 . E Quataert, A Gruzinov, ApJ. 539809Quataert, E., & Gruzinov, A. 2000, ApJ, 539, 809 . N I Shakura, R A Sunyaev, A&A. 24337Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337 . M Sikora, D B Wilson, MNRAS. 197529Sikora, M., & Wilson, D. B. 1981, MNRAS, 197, 529 . J M Stone, J E Pringle, MNRAS. 322461Stone, J. M., & Pringle, J. E. 2001, MNRAS, 322, 461 . J M Stone, J E Pringle, M C Begelman, MNRAS. 3101002Stone, J. M., Pringle, J. E., & Begelman, M. C. 1999, MNRAS, 310, 1002 . Y Tajima, J Fukue, PASJ. 50483Tajima, Y., & Fukue, J. 1998, PASJ, 50, 483 . N J Turner, J M Stone, ApJS. 13595Turner, N. J., & Stone, J. M. 2001, ApJS, 135, 95 T Zwitter, S D&apos;odorico, T Oosterloo, M Calvani, Stellar Jets and Bipolar Outflows. L. 
Errico & A. A. VittoneDordrechtKluwer209Zwitter, T., D'Odorico, S., Oosterloo, T., & Calvani, M. 1993, in Stellar Jets and Bipolar Outflows, ed. L. Errico & A. A. Vittone ( Dordrecht: Kluwer), 209
[]
[ "RECOVERING ORTHOGONALITY FROM QUASI-TYPE KERNEL POLYNOMIALS USING SPECIFIC SPECTRAL TRANSFORMATIONS" ]
[ "Vikash Kumar ", "A Swaminathan " ]
[]
[]
In this work, the concept of quasi-type Kernel polynomials with respect to a moment functional is introduced. Difference equation satisfied by these polynomials along with the criterion for orthogonality conditions are discussed. The process of recovering orthogonality for the linear combination of a quasi-type kernel polynomial with another orthogonal polynomial, which is identified by involving linear spectral transformation, is provided. This process involves an expression of ratio of iterated kernel polynomials. This lead to considering the limiting case of ratio of kernel polynomials involving continued fractions. Special cases of such ratios in terms of certain continued fractions are exhibited.2020 Mathematics Subject Classification. Primary 33C45, 33C05; Secondary 42C05.
null
[ "https://export.arxiv.org/pdf/2211.10704v3.pdf" ]
256,389,928
2211.10704
a06791598701a382841c9816b5b0fa1818b78c9b
RECOVERING ORTHOGONALITY FROM QUASI-TYPE KERNEL POLYNOMIALS USING SPECIFIC SPECTRAL TRANSFORMATIONS

Vikash Kumar and A. Swaminathan

29 Jan 2023

Abstract. In this work, the concept of quasi-type kernel polynomials with respect to a moment functional is introduced. The difference equation satisfied by these polynomials, along with criteria for orthogonality, is discussed. The process of recovering orthogonality for the linear combination of a quasi-type kernel polynomial with another orthogonal polynomial, identified by involving linear spectral transformations, is provided. This process involves an expression for the ratio of iterated kernel polynomials, which leads to the limiting case of ratios of kernel polynomials involving continued fractions. Special cases of such ratios in terms of certain continued fractions are exhibited.

2020 Mathematics Subject Classification. Primary 33C45, 33C05; Secondary 42C05.

1. Introduction

Let µ be a non-trivial positive Borel measure whose support contains infinitely many points. If the support of µ has only finitely many points, the monomials become linearly dependent in L²(dµ); such a measure is called trivial. We therefore work with a measure µ having infinitely many points in its support. The monomials {x^j}_{j=0}^∞ are then linearly independent in L²(dµ), and applying the Gram-Schmidt process to {x^j}_{j=0}^∞ yields polynomials {P_n}_{n≥0} satisfying

  L(P_n(x)P_m(x)) = ∫ P_n(x)P_m(x) dµ = δ_{nm}.   (1.1)

Alternatively, given two sequences of complex constants {λ_n} and {c_n} related to the moment functional L, the three-term recurrence relation (TTRR) [13]

  x P_n(x) = P_{n+1}(x) + c_{n+1} P_n(x) + λ_{n+1} P_{n-1}(x),   (1.2)

with P_{-1}(x) = 0 and P_0(x) = 1, can be used to generate the sequence of orthogonal polynomials {P_n}_{n≥0} recursively.
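As a quick illustration of (1.1) and (1.2), the following sketch (not part of the paper; plain Python, using the monic Chebyshev data c_n = 0, λ_2 = 1/2, λ_n = 1/4 that also appears later in Example 2.1) generates polynomials from the TTRR and checks their orthogonality against the moment functional of dµ = (1 − x²)^{-1/2} dx on [−1, 1]:

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def chebyshev_moment(j):
    """j-th moment of dmu = (1 - x^2)^(-1/2) dx on [-1, 1]."""
    return 0.0 if j % 2 else math.pi * math.comb(j, j // 2) / 2**j

def L(p):
    """Moment functional L applied to a polynomial p."""
    return sum(c * chebyshev_moment(j) for j, c in enumerate(p))

def monic_ops(N):
    """P_0, ..., P_N from the TTRR (1.2) with c_n = 0, lambda_2 = 1/2,
    lambda_n = 1/4 for n >= 3: the monic Chebyshev case."""
    P = [[1.0], [0.0, 1.0]]
    for n in range(1, N):
        lam = 0.5 if n == 1 else 0.25
        xPn = [0.0] + P[n]                       # coefficients of x * P_n
        prev = P[n - 1] + [0.0] * (len(xPn) - len(P[n - 1]))
        P.append([a - lam * b for a, b in zip(xPn, prev)])
    return P

P = monic_ops(6)
# Gram matrix: L(P_n P_m) should vanish off the diagonal, as in (1.1)
# (up to the positive normalization L(P_n^2) of the monic family).
gram = [[L(poly_mul(P[n], P[m])) for m in range(7)] for n in range(7)]
```

Running this gives, e.g., P_2 = x² − 1/2 and P_3 = x³ − (3/4)x, with the Gram matrix diagonal (positive) and off-diagonal entries at rounding level.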
Favard's theorem [13] guarantees that there exists a unique linear moment functional L such that the orthogonality condition (1.1) is satisfied with respect to L. Moreover, the definiteness of the moment functional depends on the parameters λ_n and c_n.

The concept of a linear combination of two consecutive members of a sequence of orthogonal polynomials was first studied by Riesz [35] in 1923 in his solution of the Hamburger moment problem. Later, in 1937, Fejér [21] studied linear combinations of three consecutive members of a sequence of orthogonal polynomials. Finally, Shohat [39] generalized the concept to finite linear combinations of orthogonal polynomials in the study of mechanical quadrature. Quasi-orthogonal polynomials on the unit circle were studied by Alfaro and Moral [4]; however, quasi-orthogonality on the unit circle, with respect to a linear functional defined on the space of Laurent polynomials, is not as strong as in the real-line case. For further study of quasi-orthogonal polynomials, we encourage the reader to see [1, 11, 13, 16-18].

In [25], Grinshpun studied necessary and sufficient conditions for the orthogonality of linear combinations of polynomials, which he called special linear combinations of orthogonal polynomials with respect to a weight function supported on an interval. Such orthogonal families appear in the solution of the Peebles-Korous problem [34], in the approximate solution of the Cauchy problem for ordinary differential equations [19], and in Gelfond's problem on polynomials [22]. Grinshpun also proved that the Bernstein-Szegő orthogonal polynomials of any kind can be written as a special linear combination of the Chebyshev polynomials of the same kind; the special feature of this representation is that the coefficients are independent of n.
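The linear combinations discussed above can be checked numerically. The sketch below (illustrative, not from the paper) takes the monic Chebyshev polynomials of degrees 3 and 4 and verifies that Q_4 = P_4 + b P_3 annihilates x^m for m ≤ 2 but not for m = 3 — the order-one quasi-orthogonality treated in Section 2:

```python
import math

def moment(j):  # moments of the Chebyshev weight (1 - x^2)^(-1/2) on [-1, 1]
    return 0.0 if j % 2 else math.pi * math.comb(j, j // 2) / 2**j

def L(p):  # moment functional, p a coefficient list (index = power)
    return sum(c * moment(j) for j, c in enumerate(p))

def x_pow_times(m, p):  # coefficients of x^m * p(x)
    return [0.0] * m + list(p)

# Monic Chebyshev polynomials of degrees 3 and 4 (from the TTRR (1.2)).
P3 = [0.0, -0.75, 0.0, 1.0]        # x^3 - (3/4) x
P4 = [0.125, 0.0, -1.0, 0.0, 1.0]  # x^4 - x^2 + 1/8

b = 2.5                            # any nonzero constant
Q4 = [p + b * q for p, q in zip(P4, P3 + [0.0])]   # Q_4 = P_4 + b P_3

# L(x^m Q_4) = 0 for m = 0, 1, 2, while L(x^3 Q_4) = b * ||P_3||^2 != 0,
# so Q_4 is quasi-orthogonal of order one but not orthogonal of degree 4.
vals = [L(x_pow_times(m, Q4)) for m in range(4)]
```

Here ‖P_3‖² = π/32, so the last value equals bπ/32 ≠ 0.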
The orthogonality of linear combinations of orthogonal polynomials with constant coefficients is also discussed in [2, 3]. Furthermore, the TTRR-type relation satisfied by a quasi-orthogonal polynomial of order one, along with the orthogonality of quasi-orthogonal polynomials, is discussed in [28], where the second-order differential equations for quasi-orthogonal polynomials of order one are also addressed. When we deal with a measure dµ* of the form dµ* = (x − k) dµ, where k does not belong to the support of dµ, we obtain a sequence of orthogonal polynomials called kernel polynomials; we refer to [6, 12, 13, 31] and the references therein for further details in this direction.

In this article, we define the linear combination of two consecutive terms of a sequence of kernel polynomials, which we call a quasi-type kernel polynomial of order one. The orthogonality of these quasi-type kernel polynomials does not arise naturally; hence the objective of this manuscript is to recover orthogonality from a given quasi-type kernel polynomial. In particular, given a quasi-type kernel polynomial, we identify a related orthogonal polynomial whose linear combination with the quasi-type kernel polynomial restores orthogonality — the same orthogonality as that of the sequence {P_n} from which the quasi-type kernel polynomial arises.

1.1. Organization. In Section 2, we discuss the necessary and sufficient condition for a quasi-type kernel polynomial of order one, together with the criterion for the orthogonality of quasi-type kernel polynomials. Section 3 describes the recovery of orthogonality from a quasi-type kernel polynomial of order one and specific linear spectral transformations; the recovery of orthogonal polynomials from quasi-type kernel polynomials of order two and from iterated kernel polynomials is also addressed.
In Section 4, we compute the limiting case of the ratio of kernel polynomials. As specific cases, the ratios of certain kernel polynomials, namely the Laguerre and Jacobi polynomials, are exhibited in terms of continued fractions.

2. Quasi-type kernel polynomials and orthogonality

In this section, we recall known results about linear combinations of orthogonal polynomials, known as quasi-orthogonal polynomials, and about the polynomials generated by the Christoffel transformation, known as kernel polynomials. Motivated by these, we define quasi-type kernel polynomials of order one, give an example satisfying the defining condition, and conclude the section with a discussion of the orthogonality of quasi-type kernel polynomials.

Definition 2.1 ([13]). A non-zero polynomial p is called a quasi-orthogonal polynomial of order one if it is of degree at most n + 1 and

  L(x^m p(x)) = ∫ x^m p(x) dµ = 0 for m = 0, 1, 2, ..., n − 1.

Remark 2.1. Note that
(1) L(x^m P_{n+1}(x)) = 0 for m = 0, 1, 2, ..., n − 1, and
(2) L(x^m P_n(x)) = 0 for m = 0, 1, 2, ..., n − 1.

This shows that P_{n+1}(x) and P_n(x) are both quasi-orthogonal polynomials of order one, so one may think of p(x) as a linear combination of P_{n+1}(x) and P_n(x). Note that this linear functional L will be employed throughout this manuscript whenever we discuss quasi-type kernel polynomials. We now state the result which justifies the above remark.

Theorem 2.1. A polynomial Q_{n+1} of degree at most n + 1 is a quasi-orthogonal polynomial of order one if, and only if, there exist constants a and b, not both zero, such that

  Q_{n+1}(x) = a P_{n+1}(x) + b P_n(x).

For given k ∈ C, we can define a new linear functional L* by

  L*(p(x)) = L((x − k) p(x)) for every polynomial p(x).

This new linear functional is called the Christoffel transformation of L at k. The corresponding kernel polynomials are defined by the formula [13]

  P*_n(k; x) = (x − k)^{-1} [ P_{n+1}(x) − (P_{n+1}(k)/P_n(k)) P_n(x) ] for n ≥ 0 and P_n(k) ≠ 0.
(2.1)

The sequence {P*_n(k; x)}_{n=0}^∞ is a monic orthogonal polynomial sequence with respect to L* [13], and hence, by Favard's theorem, it satisfies the TTRR

  x P*_n(k; x) = P*_{n+1}(k; x) + c*_{n+1} P*_n(k; x) + λ*_{n+1} P*_{n-1}(k; x),   (2.2)

where

  λ*_n = λ_n P_n(k) P_{n-2}(k) / P²_{n-1}(k),   c*_n = c_{n+1} − [P²_n(k) − P_{n-1}(k) P_{n+1}(k)] / [P_{n-1}(k) P_n(k)].   (2.3)

The study of kernel polynomials with respect to a non-trivial probability measure on the unit circle is also an active area of research. When k = 1, the sequence of kernel polynomials satisfies a three-term recurrence relation whose recurrence coefficients are related to positive chain sequences. Kernel polynomials on the unit circle are closely related to para-orthogonal polynomials. For more information concerning kernel polynomials (known as Christoffel-Darboux kernels) and their asymptotic behavior, we refer to [9, 14, 36, 37] and the references therein.

The Christoffel-Darboux identity [13, eq. 4.9] is given by

  λ_1 λ_2 ··· λ_{n+1} Σ_{j=0}^n P_j(x) P_j(x′) / (λ_1 λ_2 ··· λ_{j+1}) = [P_{n+1}(x) P_n(x′) − P_{n+1}(x′) P_n(x)] / (x − x′),   (2.4)

where {P_n(x)}_{n≥0} is the orthogonal polynomial sequence with respect to dµ. Using (2.4), the kernel polynomials can be expressed as

  P*_n(k; x) = λ_1 λ_2 ··· λ_{n+1} (P_n(k))^{-1} K_n(x, k),   (2.5)

where

  K_n(x, k) = Σ_{j=0}^n p_j(x) p_j(k),   p_n(x) = (λ_1 λ_2 ··· λ_{n+1})^{-1/2} P_n(x).   (2.6)

For fixed k, one easily deduces from the Christoffel-Darboux identity that (x − k) K_n(x, k) is a quasi-orthonormal polynomial of order one. If instead the fixed point k is replaced by a variable u, one can discuss the orthogonality of the polynomials {(x − u) K_n(x, u)}_{n≥0} in L²(R², dµ(R × R)).

Proposition 1. The polynomials {(x − u) K_n(x, u)}_{n≥0} form an orthogonal system in L²(R², dµ(R × R)); that is,

  ∬ |x − u|² K_n(x, u) K_m(x, u) dµ(u) dµ(x) = 0 for m ≠ n, and 2λ²_{n+1} for m = n.   (2.7)

Proof.
We compute

  ∬ (x − u)² K_n(x, u) K_m(x, u) dµ(u) dµ(x)
    = λ²_{n+1} ∬ [p_{n+1}(x) p_n(u) − p_{n+1}(u) p_n(x)] [p_{m+1}(x) p_m(u) − p_{m+1}(u) p_m(x)] dµ(u) dµ(x)
    = λ²_{n+1} ∫ [ p_{n+1}(x) p_{m+1}(x) ∫ p_n(u) p_m(u) dµ(u) − p_{m+1}(x) p_n(x) ∫ p_{n+1}(u) p_m(u) dµ(u)
        − p_{n+1}(x) p_m(x) ∫ p_{m+1}(u) p_n(u) dµ(u) + p_n(x) p_m(x) ∫ p_{m+1}(u) p_{n+1}(u) dµ(u) ] dµ(x).

For m ≤ n − 2, every inner integral vanishes, so

  ∬ (x − u)² K_n(x, u) K_m(x, u) dµ(u) dµ(x) = 0.

For m = n − 1, the orthonormality of the p_n gives

  ∬ (x − u)² K_n(x, u) K_m(x, u) dµ(u) dµ(x) = −λ²_{n+1} ∫ p_{n+1}(x) p_{n−1}(x) dµ(x) = 0.

For m = n,

  ∬ (x − u)² K_n(x, u) K_n(x, u) dµ(u) dµ(x) = λ²_{n+1} [ ∫ p²_{n+1}(x) dµ(x) + ∫ p²_n(x) dµ(x) ] = 2λ²_{n+1}.

This completes the proof.

Now, we give the following definition.

Definition 2.2. Let {P*_n(k; x)}_{n=0}^∞ be the sequence of kernel polynomials, which exists for some k ∈ C and forms an orthogonal polynomial sequence with respect to L*. A non-zero polynomial Q*_{n+1}(k; ·) is called a quasi-type kernel polynomial of order one if it is of degree at most n + 1 and L*(x^m Q*_{n+1}(k; x)) = 0 for m = 0, 1, ..., n − 1.

Remark 2.2. Note that
(1) L*(x^m P*_{n+1}(k; x)) = 0 for m = 0, 1, 2, ..., n − 1, and
(2) L*(x^m P*_n(k; x)) = 0 for m = 0, 1, 2, ..., n − 1,
so that both P*_{n+1}(k; x) and P*_n(k; x) are quasi-type kernel polynomials of order one.

Remark 2.3. In general, a polynomial Q*_{n+1}(k; ·) ≠ 0 is called a quasi-type kernel polynomial of order l ≥ 1 if, and only if, it is of degree at most n + 1, n ≥ l + 1, and L*(x^m Q*_{n+1}(k; x)) = 0 for m = 0, 1, ..., n − l.

Theorem 2.2. Q*_{n+1}(k; ·) is a quasi-type kernel polynomial of order one if, and only if, there exist constants a and b, not both zero, such that

  Q*_{n+1}(k; x) = a P*_{n+1}(k; x) + b P*_n(k; x).

Proof. If Q*_{n+1}(k; x) is a quasi-type kernel polynomial of order one, then for some constants c_0, c_1, ..., c_{n+1} we can write Q*_{n+1}(k; x) = Σ_{m=0}^{n+1} c_m P*_m(k; x) with c_m = L*[Q*_{n+1}(k; x) P*_m(k; x)] / L*[P*_m²(k; x)], and hence c_m = 0 for m ∈ {0, 1, ..., n − 1}. Thus we get Q*_{n+1}(k; x) = a P*_{n+1}(k; x) + b P*_n(k; x).
Conversely, if a and b are not simultaneously zero, then

  L*(x^m Q*_{n+1}(k; x)) = L*(x^m (a P*_{n+1}(k; x) + b P*_n(k; x)))
    = a L*(x^m P*_{n+1}(k; x)) + b L*(x^m P*_n(k; x)) = 0 for m = 0, 1, ..., n − 1.

This completes the proof.

Remark 2.4. In general, the above theorem extends to order l: Q*_{n+1}(k; x) is a monic quasi-type kernel polynomial of order l if Q*_{n+1}(k; x) = P*_{n+1}(k; x) + Σ_{m=1}^{l} α_m P*_{n−m+1}(k; x) for n ≥ l + 1.

Next, we consider an example which supports Theorem 2.2. For this, we first show that the polynomial P_{n+1}(x) is a quasi-type kernel polynomial of order one with respect to L*. Indeed, P_{n+1}(x) can be written as a linear combination of P*_{n+1}(k; x) and P*_n(k; x) with constant coefficients, using the TTRR (1.2) satisfied by the orthogonal polynomials P_n(x). Note that the same identity was established in [31, eq. 2.5] using the Christoffel-Darboux kernel (2.4).

Proposition 2. Let {P*_n(k; x)}_{n=0}^∞ be a sequence of monic orthogonal polynomials with respect to L*, which exists for some k ∈ C. Then P_{n+1}(x) can be written as a linear combination of kernel polynomials as follows:

  P_{n+1}(x) = P*_{n+1}(k; x) − λ_{n+2} (P_n(k)/P_{n+1}(k)) P*_n(k; x),   (2.8)

where λ_{n+2} is the strictly positive constant in the TTRR (1.2).

Proof. Using equation (2.1), we can write

  P*_{n+1}(k; x) + D_{n+1} P*_n(k; x)
    = (x − k)^{-1} [ P_{n+2}(x) − (P_{n+2}(k)/P_{n+1}(k)) P_{n+1}(x) + D_{n+1} P_{n+1}(x) − D_{n+1} (P_{n+1}(k)/P_n(k)) P_n(x) ]
    = (x − k)^{-1} [ P_{n+2}(x) − ( x − k − D_{n+1} + P_{n+2}(k)/P_{n+1}(k) ) P_{n+1}(x) − D_{n+1} (P_{n+1}(k)/P_n(k)) P_n(x) ] + P_{n+1}(x).

Substituting D_{n+1} = −λ_{n+2} P_n(k)/P_{n+1}(k), the above equation becomes

  P*_{n+1}(k; x) + D_{n+1} P*_n(k; x) = (x − k)^{-1} [ P_{n+2}(x) − (x − c_{n+2}) P_{n+1}(x) + λ_{n+2} P_n(x) ] + P_{n+1}(x),

where λ_{n+2} = −D_{n+1} P_{n+1}(k)/P_n(k) and c_{n+2} = k + D_{n+1} − P_{n+2}(k)/P_{n+1}(k). Using the TTRR (1.2), we get the desired result.

Example 2.1.
Let {C_n(x)}_{n=0}^∞ be the sequence of polynomials orthogonal with respect to the Chebyshev measure dµ = (1 − x²)^{-1/2} dx, with compact support [−1, 1]; these are the Chebyshev polynomials of the first kind. The corresponding monic Chebyshev polynomials are

  Ĉ_0(x) = C_0(x),   Ĉ_{n+1}(x) = 2^{-n} C_{n+1}(x), n ≥ 0.

The monic polynomials Ĉ_n(x) satisfy the TTRR

  Ĉ_{n+1}(x) = x Ĉ_n(x) − (1/4) Ĉ_{n−1}(x), n ≥ 2,   Ĉ_2(x) = x Ĉ_1(x) − (1/2) Ĉ_0(x),

with initial data Ĉ_0(x) = C_0(x) = 1 and Ĉ_1(x) = C_1(x) = x. The kernel of the Chebyshev polynomials for k ≤ −1 and k ≥ 1 is given by [13, eq. 7.5]

  Ĉ*_{n+1}(k; x) = (x − k)^{-1} [ Ĉ_{n+2}(x) − (Ĉ_{n+2}(k)/Ĉ_{n+1}(k)) Ĉ_{n+1}(x) ].

Here {Ĉ*_n(k; x)}_{n=0}^∞ is a monic orthogonal polynomial sequence with respect to the quasi-definite linear functional L*. Then, by (2.8), Ĉ_{n+1}(x) is a quasi-type kernel polynomial of order one; indeed,

  Ĉ_{n+1}(x) = Ĉ*_{n+1}(k; x) − (1/4) (Ĉ_n(k)/Ĉ_{n+1}(k)) Ĉ*_n(k; x).

In addition, this identity makes it natural to ask for the behavior of the ratio of a Chebyshev polynomial to a Chebyshev kernel polynomial. In particular, for k = 1 we have

  Ĉ_{n+1}(x) = Ĉ*_{n+1}(1; x) − (1/2) Ĉ*_n(1; x),

and by Corollary 4.1 we get

  lim_{x→1} Ĉ_{n+1}(x)/Ĉ*_{n+1}(1; x) = 2/(3 + 2n).

2.1. Recurrence relation and orthogonality.

It is known that quasi-orthogonal polynomials are not orthogonal with respect to L; it is nevertheless interesting to obtain a difference equation analogous to the TTRR for them. In [28], Ismail and Wang discussed a TTRR-type relation for quasi-orthogonal polynomials. In the next result, we generalize their result to a difference equation with variable coefficients for quasi-type kernel polynomials.

Theorem 2.3. Let Q*_{n+1}(k; x) be a monic quasi-type kernel polynomial of order one.
Then Q*_{n+1}(k; x) satisfies the difference equation

  J_n(x) Q*_{n+2}(k; x) = [D_{n+1}(x) J_n(x) − b J_{n+1}(x)] Q*_{n+1}(k; x) − λ*_{n+1} J_{n+1}(x) Q*_n(k; x),

where D_{n+1}(x) = x − c*_{n+2} + b and J_{n+1}(x) = b D_n(x) + λ*_{n+1}.

Proof. By the definition of Q*_{n+1}(k; x), we have

  Q*_{n+1}(k; x) = P*_{n+1}(k; x) + b P*_n(k; x).   (2.9)

Using (2.2), we can also write

  Q*_n(k; x) = P*_n(k; x) + b P*_{n−1}(k; x) = −(b/λ*_{n+1}) P*_{n+1}(k; x) + [ (x − c*_{n+1}) b/λ*_{n+1} + 1 ] P*_n(k; x).   (2.10)

Equations (2.9) and (2.10) can be written in matrix form as

  [ Q*_{n+1}(k; x) ; Q*_n(k; x) ] = [ 1, b ; −b/λ*_{n+1}, (x − c*_{n+1}) b/λ*_{n+1} + 1 ] [ P*_{n+1}(k; x) ; P*_n(k; x) ].

Since the coefficient matrix is invertible, we have

  [ P*_{n+1}(k; x) ; P*_n(k; x) ] = λ*_{n+1} / [ b² + λ*_{n+1} + (x − c*_{n+1}) b ] · [ (x − c*_{n+1}) b/λ*_{n+1} + 1, −b ; b/λ*_{n+1}, 1 ] [ Q*_{n+1}(k; x) ; Q*_n(k; x) ].   (2.11)

Further, using (2.2), we write

  Q*_{n+2}(k; x) = (x − c*_{n+2} + b) P*_{n+1}(k; x) − λ*_{n+2} P*_n(k; x).

Applying (2.11) to express Q*_{n+2}(k; x) in terms of Q*_{n+1}(k; x) and Q*_n(k; x) gives

  Q*_{n+2}(k; x) = λ*_{n+1} / [ b² + λ*_{n+1} + (x − c*_{n+1}) b ] · { (x − c*_{n+2} + b) [ ( (x − c*_{n+1}) b/λ*_{n+1} + 1 ) Q*_{n+1}(k; x) − b Q*_n(k; x) ] − λ*_{n+2} [ (b/λ*_{n+1}) Q*_{n+1}(k; x) + Q*_n(k; x) ] }.

Simplifying the above equation, we obtain the desired result

  J_n(x) Q*_{n+2}(k; x) = [D_{n+1}(x) J_n(x) − b J_{n+1}(x)] Q*_{n+1}(k; x) − λ*_{n+1} J_{n+1}(x) Q*_n(k; x),

where D_{n+1}(x) = x − c*_{n+2} + b and J_{n+1}(x) = b D_n(x) + λ*_{n+1}. This completes the proof.

Next, we discuss necessary and sufficient conditions for the orthogonality of quasi-type kernel polynomials of order l. Theorem 2.4 can be proved along the same lines as [2, Theorem 1], and hence we omit the proof.

Theorem 2.4.
Let {P_n(x)}_{n=0}^∞ be a sequence of monic orthogonal polynomials with respect to a quasi-definite linear functional L, and let {P*_n(k; x)}_{n=0}^∞ be the sequence of kernel polynomials generated by the Christoffel transformation L* at k, which satisfy the TTRR (2.2) with recurrence parameters c*_{n+1}, λ*_{n+1} given by (2.3). Further, let {Q*_n(k; x)}_{n=0}^∞ be a sequence of quasi-type kernel polynomials

  Q*_n(k; x) = P*_n(k; x) + Σ_{m=1}^{l} α_m P*_{n−m}(k; x) for n ≥ l + 1,

where {α_m}_{m=1}^{l} are scalars with α_l ≠ 0. Then {Q*_n(k; x)}_{n=0}^∞ is monic orthogonal with respect to a linear functional if, and only if, the following conditions hold:

(i) The polynomials Q*_m(k; x) satisfy a TTRR given by

  Q*_{m+1}(k; x) − (x − c̃*_{m+1}) Q*_m(k; x) + λ̃*_{m+1} Q*_{m−1}(k; x) = 0,

with λ̃*_{m+1} ≠ 0 for m ∈ {0, 1, 2, ..., l}.

(ii) For n > l + 1,

  λ*_{n+1} − λ*_{n−l+1} = α_1 (c*_{n+1} − c*_n) ≠ 0,
  α_m (c*_{n−m+1} − c*_{n+1}) + α_{m−1} [ λ*_{n−m+2} − λ*_{n+1} − α_1 (c*_n − c*_{n+1}) ] = 0, m ∈ {1, 2, ..., l}.

(iii) For m ∈ {1, ..., l − 1},

  λ*_{l+2} = α_1 (c*_{l+2} − c*_{l+1}),
  α_{m+1} (c*_{l−m+1} − c*_{l+2}) + α_m λ*_{l−m+2} = α^{(l)}_m [ λ*_{l+2} − α_1 (c*_{l+1} − c*_{l+2}) ],
  α^{(l)}_l λ*_{l+2} + α_1 α^{(l)}_l (c*_{l+1} − c*_{l+2}) = α_l λ*_2,

where α^{(l)}_m, m ∈ {1, 2, ..., l}, denotes the constant coefficient of P*_{l−m}(k; ·) in the Fourier representation of Q*_l(k; ·).

Moreover, for n ≥ l + 1, we have

  c̃*_{n+1} = c*_{n+1},   λ̃*_{n+1} = λ*_{n+1} + α_1 (c*_n − c*_{n+1}),

where c̃*_{n+1} and λ̃*_{n+1} are the recurrence coefficients in the TTRR for Q*_n(k; ·).

3. Recovery of orthogonal polynomials

In this section, our primary goal is to recover the orthogonality of polynomials that are linear combinations, with suitable coefficients, of polynomials generated by Darboux transformations and quasi-type kernel polynomials of orders one and two. In this process, we identify the unique sequences of constants needed to recover such orthogonal polynomials.

3.1.
Christoffel transformation. The relations among quasi-orthogonal polynomials, monic orthogonal polynomial sequences, and kernel polynomials are discussed in [6]. In Theorem 3.1 we recover the polynomials P_n(x) from a linear combination, with rational coefficients, of a polynomial generated by the Christoffel transformation and a quasi-type kernel polynomial of order one, and we identify the two sequences of parameters responsible for obtaining P_n(x). We work with the monic quasi-type kernel polynomials of order one for some k ∈ C, defined as

  T*_n(k; x) = P*_n(k; x) + B_n P*_{n−1}(k; x).

Theorem 3.1. Let {P_n(x)}_{n=0}^∞ be a monic orthogonal polynomial sequence with respect to the positive definite linear functional L. Let T*_n(k_1; x) be a monic quasi-type kernel polynomial of order one for some k_1 ∈ C. Suppose also that the sequence {P*_n(k_2; x)}_{n=0}^∞ of kernel polynomials generated by the Christoffel transformation exists for some k_2 ∈ C. Then there exist unique sequences of constants {γ_n} and {η_n}, with explicit expressions, such that the sequence of polynomials {Q^C_n(k_1, k_2; x)} given by

  Q^C_n(k_1, k_2; x) := [(x − k_1)/(x − γ_{n−1})] T*_n(k_1; x) + η_{n−1} [(x − k_2)/(x − γ_{n−1})] P*_{n−1}(k_2; x)   (3.1)

satisfies the same orthogonality as that of {P_n(x)}. In particular, if k = k_1 = k_2 ∈ C, then

  T̃*_n(k; x) = P*_n(k; x) + B̃_n P*_{n−1}(k; x) = [(x − γ_{n−1})/(x − k)] P_n(x),

and if supp(dµ) ⊂ R is compact, then T̃*_n(k; x) ∈ L¹(dµ).

Proof. If the sequence {Q^C_n(k_1, k_2; x)} is orthogonal with respect to the linear functional L, then by the uniqueness theorem for orthogonal polynomials with respect to a linear functional, {Q^C_n(k_1, k_2; x)} and {P_n(x)} are the same system of orthogonal polynomials, and vice versa. Consider

  Q^C_{n+1}(k_1, k_2; x) = [(x − k_1)/(x − γ_n)] T*_{n+1}(k_1; x) + η_n [(x − k_2)/(x − γ_n)] P*_n(k_2; x)
    = (x − γ_n)^{-1} [ (x − k_1) T*_{n+1}(k_1; x) − (x − γ_n) P_{n+1}(x) + η_n (x − k_2) P*_n(k_2; x) ] + P_{n+1}(x).
Using the definitions of the kernel polynomials and of the quasi-type kernel polynomial of order one, we have

  (x − k_1) T*_{n+1}(k_1; x) − (x − γ_n) P_{n+1}(x) + η_n (x − k_2) P*_n(k_2; x)
    = (x − k_1) [ P*_{n+1}(k_1; x) + B_{n+1} P*_n(k_1; x) ] − (x − γ_n) P_{n+1}(x) + η_n (x − k_2) P*_n(k_2; x)
    = P_{n+2}(x) − (P_{n+2}(k_1)/P_{n+1}(k_1)) P_{n+1}(x) + B_{n+1} P_{n+1}(x) − B_{n+1} (P_{n+1}(k_1)/P_n(k_1)) P_n(x)
        − (x − γ_n) P_{n+1}(x) + η_n P_{n+1}(x) − η_n (P_{n+1}(k_2)/P_n(k_2)) P_n(x).

Collecting the coefficients of P_{n+2}(x), P_{n+1}(x), and P_n(x), we can write the above expression as

  (x − k_1) T*_{n+1}(k_1; x) − (x − γ_n) P_{n+1}(x) + η_n (x − k_2) P*_n(k_2; x)
    = P_{n+2}(x) − [ x − γ_n + P_{n+2}(k_1)/P_{n+1}(k_1) − B_{n+1} − η_n ] P_{n+1}(x)
        − [ η_n P_{n+1}(k_2)/P_n(k_2) + B_{n+1} P_{n+1}(k_1)/P_n(k_1) ] P_n(x).   (3.2)

Consider

  η_n = − [ λ_{n+2} + B_{n+1} P_{n+1}(k_1)/P_n(k_1) ] P_n(k_2)/P_{n+1}(k_2),
  γ_n = c_{n+2} + P_{n+2}(k_1)/P_{n+1}(k_1) − B_{n+1} + [ λ_{n+2} + B_{n+1} P_{n+1}(k_1)/P_n(k_1) ] P_n(k_2)/P_{n+1}(k_2).

Then (3.2) becomes

  (x − k_1) T*_{n+1}(k_1; x) − (x − γ_n) P_{n+1}(x) + η_n (x − k_2) P*_n(k_2; x) = P_{n+2}(x) − (x − c_{n+2}) P_{n+1}(x) + λ_{n+2} P_n(x),   (3.3)

where c_{n+2} = γ_n − P_{n+2}(k_1)/P_{n+1}(k_1) + B_{n+1} + η_n and λ_{n+2} = −η_n P_{n+1}(k_2)/P_n(k_2) − B_{n+1} P_{n+1}(k_1)/P_n(k_1). The expression (3.3) must equal zero, because {P_n(x)} is a monic orthogonal polynomial sequence with respect to the measure dµ and hence, by Favard's theorem, satisfies the TTRR; this gives the desired result.

If both the quasi-type kernel polynomial of order one and the kernel polynomials exist for some k = k_1 = k_2 ∈ C, then (3.1) can be written as

  (x − k) [ P*_{n+1}(k; x) + B̃_{n+1} P*_n(k; x) ] − (x − γ_n) P_{n+1}(x) = 0,

where B̃_{n+1} = B_{n+1} + η_n. This implies (x − k) T̃*_{n+1}(k; x) − (x − γ_n) P_{n+1}(x) = 0, which further gives

  T̃*_{n+1}(k; x) = [(x − γ_n)/(x − k)] P_{n+1}(x).
(3.4)

If the support of the measure µ is a compact subset of the real line and k ∉ supp(dµ), then

  ‖T̃*_{n+1}(k; ·)‖_{L¹(dµ)} = ∫ |(x − γ_n)/(x − k)| |P_{n+1}(x)| dµ
    ≤ ∫ |x/(x − k)| |P_{n+1}(x)| dµ + |γ_n| ∫ |P_{n+1}(x)/(x − k)| dµ
    ≤ ( ∫ |x − k|^{-2} dµ )^{1/2} [ ( ∫ |x P_{n+1}|² dµ )^{1/2} + |γ_n| ( ∫ |P_{n+1}|² dµ )^{1/2} ] < ∞.

In the above, we used the triangle inequality and Hölder's inequality to obtain the first and second inequalities, respectively. Moreover, finiteness follows directly from the fact that multiplication by x maps into L²(dµ) and k ∉ supp(dµ).

3.2. Geronimus transformation. Let L be a linear functional. For given k ∈ C, define L̂ by

  L̂((x − k) p(x)) = L(p(x)) for every polynomial p(x).

This transformation L̂ is known as the Geronimus transformation of L at k ∈ C; it can be regarded as the inverse of the Christoffel transformation at k [10]. For any polynomial p(x), we can write

  L̂(p(x)) = L̂( [ (p(x) − p(k))/(x − k) ] (x − k) + p(k) ) = L( (p(x) − p(k))/(x − k) ) + p(k) L̂(1),

where L̂(1) is not uniquely determined and is hence an arbitrary constant; however, L̂(1) ≠ 0, since no orthogonal polynomial sequence exists with L̂(1) = 0 [6].

Next we state a result in which a sequence of quasi-orthogonal polynomials of order one, with a suitable choice of A_n, forms an orthogonal polynomial sequence with respect to the Geronimus transformation at k.

Theorem 3.2 ([10, 24]). Let {P_n(x)}_{n=0}^∞ be the sequence of orthogonal polynomials with respect to the positive definite linear functional L. If k ∈ C \ supp(µ), then the sequence of monic polynomials

  P̃_n(k; x) = P_n(x) + A_n P_{n−1}(x),   (3.5)

where

  A_n = − [ ∫ P_n(x)/(k − x) dµ(x) ] / [ ∫ P_{n−1}(x)/(k − x) dµ(x) ],

is an orthogonal polynomial sequence for the corresponding Geronimus transformation L̂ at k.

Theorem 3.2 shows that one can find the explicit form of the polynomials generated by the Geronimus transformation in terms of the orthogonal polynomials P_n(x). In Proposition 3, we give the converse expression for the orthogonal polynomials P_n(x) in terms of the polynomials generated by L̂, using the TTRR (1.2).
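Theorem 3.2 can be checked numerically. The sketch below (illustrative, not from the paper) uses the Chebyshev measure, the point k = 2 outside supp(µ), Gauss-Chebyshev quadrature for the integrals defining A_n, and one particular admissible normalization of the Geronimus functional, L̂(p) = ∫ p(x)/(x − k) dµ; it verifies that P̃_n = P_n + A_n P_{n−1} satisfies L̂(x^m P̃_n) = 0 for m < n:

```python
import math

# Gauss-Chebyshev nodes: integrates f against (1 - x^2)^(-1/2) on [-1, 1].
N = 4000
NODES = [math.cos((2 * i - 1) * math.pi / (2 * N)) for i in range(1, N + 1)]

def integrate(f):
    return math.pi / N * sum(f(x) for x in NODES)

def peval(p, x):  # evaluate a coefficient list p at x
    return sum(c * x**j for j, c in enumerate(p))

# Monic Chebyshev polynomials P_0..P_4
P = [[1.0], [0.0, 1.0], [-0.5, 0.0, 1.0],
     [0.0, -0.75, 0.0, 1.0], [0.125, 0.0, -1.0, 0.0, 1.0]]

k = 2.0  # a point outside supp(mu) = [-1, 1]

def Lhat(f):  # one admissible Geronimus functional: Lhat(p) = int p/(x-k) dmu
    return integrate(lambda x: f(x) / (x - k))

def A(n):  # A_n from Theorem 3.2
    num = integrate(lambda x: peval(P[n], x) / (k - x))
    den = integrate(lambda x: peval(P[n - 1], x) / (k - x))
    return -num / den

def P_tilde(n):  # perturbed polynomial P~_n = P_n + A_n P_{n-1}
    An = A(n)
    return lambda x: peval(P[n], x) + An * peval(P[n - 1], x)

# Orthogonality of P~_n with respect to Lhat: Lhat(x^m P~_n) = 0 for m < n.
checks = [Lhat(lambda x, q=P_tilde(n), m=m: x**m * q(x))
          for n in (2, 3, 4) for m in range(n)]
```

Since the same quadrature rule defines both A_n and L̂, the vanishing is exact up to rounding, while L̂(x^n P̃_n) stays bounded away from zero.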
Note that the similar expression for P n (x) with different approach was given in [24] and references therein. Proposition 3. Let { P n (x)} ∞ n=0 be the sequence of orthogonal polynomials with respect to the Geronimus transformation which exists for some k ∈ C. Then we can write P n (x) in terms of linear combinations of P n (k; x) and P n+1 (k; x) as follows: P n (x) = 1 x − k P n+1 (k; x) − 1 x − k λ n+1 A n P n (k; x). Proof. Using equation (3.5), we can write 1 x − k P n+1 (k; x) + 1 x − k B n P n (k; x) = 1 x − k [P n+1 (x) − (x − k − A n+1 + B n )P n (x) − B n A n P n−1 (x)] + P n (x). Since A n = 0, by putting B n = − λ n+1 A n , we can write the above equation as 1 x − k P n+1 (k; x) + 1 x − k B n P n (k; x) = 1 x − k [P n+1 (x) − (x − c n+1 )P n (x) + λ n+1 P n−1 (x)] + P n (x) where c n+1 = k + A n+1 − B n , λ n+1 = −B n A n . Using TTRR (1.2), we get the desired result. In the next theorem, we recover the orthogonality for Q G n (k 1 , k 2 ; x) by obtaining three sequences of parameters. Theorem 3.3. Let {P n (x)} ∞ n=0 be a monic orthogonal polynomial sequence with respect to the positive definite linear functional L. Let T * n (k 2 , x) be a quasi-type kernel polynomial of order one for some k 2 ∈ C. Further, suppose that the sequence { P n (k 1 ; x)} ∞ n=0 of the polynomials corresponding to Geronimus transformation exist for some k 1 ∈ C. Then there exist unique sequences of constants {α n }, {γ n } and {η n } such that the sequence of polynomials {Q G n (k 1 , k 2 ; x)} given by Q G n (k 1 , k 2 ; x) := 1 α n x − γ n P n+1 (k 1 ; x) + η n x − k 2 α n x − γ n T * n (k 2 ; x) (3.6) satisfies the same orthogonality as that of {P n (x)}. Proof. If the sequence {Q G n (k 1 , k 2 ; x)} is orthogonal with respect to the linear functional L, then by uniqueness theorem of orthogonal polynomials, {Q G n (k 1 , k 2 ; x)} and {P n (x)} are the same system of orthogonal polynomials and vice-versa. 
We can write (3.6) as Q G n (k 1 , k 2 ; x) = 1 α n x − γ n P n+1 (k 1 ; x) + η n x − k 2 α n x − γ n T * n (k 2 ; x) = 1 α n x − γ n P n+1 (k 1 ; x) − (α n x − γ n )P n (x) + η n (x − k 2 )T * n (k 2 ; x) + P n (x). Considering (3.5) together with the definition of kernel polynomials and quasi-type kernel polynomial of order one gives P n+1 (k 1 ; x) − (α n x − γ n )P n (x) + η n (x − k 2 )T * n (k 2 ; x) = P n+1 (x) + A n+1 P n (x) − (α n x − γ n )P n (x) + η n (x − k 2 )(P * n (k 2 ; x) +B n P * n−1 (k 2 ; x)) = P n+1 (x) + A n+1 P n (x) − (α n x − γ n )P n (x) + η n P n+1 (x) − η n P n+1 (k 2 ) P n (k 2 ) P n (x) + η nBn P n (x) − η nBn P n (k 2 ) P n−1 (k 2 ) P n−1 (x). Combining the coefficients of P n+1 (x), P n (x) and P n−1 (x), we get P n+1 (k 1 ; x) − (α n x − γ n )P n (x) + η n (x − k 2 )T * n (k 2 ; x) = (1 + η n ) P n+1 (x) − α n 1 + η n x − γ n + A n+1 − η n P n+1 (k 2 ) Pn(k 2 ) + η nBn 1 + η n P n (x) − η nBn P n (k 2 ) (1 + η n )P n−1 (k 2 ) P n−1 (x) . (3.7) Since P n (k 2 ) = 0, P n−1 (k 2 ) = 0, by substituting η n = − λ n+1 λ n+1 +B n Pn(k 2 ) P n−1 (k 2 ) , α n = 1 − λ n+1 λ n+1 +B n Pn(k 2 ) P n−1 (k 2 ) , and γ n = c n+1 (1 + η n ) − A n+1 + η n P n+1 (k 2 ) P n (k 2 ) − η nBn , we can write the right side of the expression (3.7) as (1 + η n ) [P n+1 (x) − (x − c n+1 ) P n (x) + λ n+1 P n−1 (x)] ,(3.8) where c n+1 = γ n + A n+1 − η n P n+1 (k 2 ) Pn(k 2 ) + η nBn 1 + η n and λ n+1 = − η nBn P n (k 2 ) (1 + η n )P n−1 (k 2 ) . The above expression (3.8) must be equal to zero. Since P n (x) is a monic orthogonal polynomial sequence, by Favard's theorem it satisfies TTRR. This completes the proof. Uvarov Transformation. Linear spectral transformations play a significant role in the study of perturbation of orthogonal polynomials. We can obtain one of the main transformations by adding point mass to the original measure. 
In other words, if L is a quasi-definite linear functional, then we can defineL bŷ L = L + R o δ(x − k), where δ(·) is a mass point at k and R o is a non zero constant. The new linear functional L is known as canonical Uvarov transformation [40] of L. To study the structure of polynomials corresponding to Uvarov transformation, it is essential that the Uvarov transformation has at least the property of quasi definiteness. In this regard, the necessary and sufficient conditions for preserving the quasi definite property of the linear functional are given in [32] . In addition, the condition for preserving the positive definite property of Uvarov transformation from the original positive definite linear functional is given in [27]. where T n = R o P 2 n (k) λ 1 ...λ n+1 1 + RoP * n−1 (k;k)Pn(k) λ 1 ...λ n+1 . The following result shows that one can recover the original sequence of orthogonal polynomials from the linear combination of quasi-type kernel polynomials of order one and polynomials generated by Uvarov transformation with rational coefficients and by suitably identifying three sequences of constants. Q U n (k 1 , k 2 ; x) := x − k 1 α n x − γ nP n (x) + η n x − k 2 α n x − γ n T * n (k 2 ; x) (3.10) satisfies the same orthogonality given by {P n (x)}. Proof. If the sequence {Q U n (k 1 , k 2 ; x)} is orthogonal with respect to the linear functional L, then by uniqueness theorem of orthogonal polynomials, {Q U n (k 1 , k 2 ; x)} and {P n (x)} are the same system of orthogonal polynomials and vice-versa. We can write the expression (3.10) as Q U n (k 1 , k 2 ; x) = x − k 1 α n x − γ nP n (x) + η n x − k 2 α n x − γ n T * n (k 2 ; x) = 1 α n x − γ n (x − k 1 )P n (x) − (α n x − γ n )P n (x) + η n (x − k 2 )T * n (k 2 ; x) + P n (x). First, we simplify the bracketed portion of the above equation. 
For this, consider (x − k 1 )P n (x) − (α n x − γ n )P n (x) + η n (x − k 2 )T * n (k 2 ; x) = (x − k 1 )P n (x) − T n P n (x) + T n P n (k 1 ) P n−1 (k 1 ) P n−1 (x) − α n xP n (x) + β n P n (x) + η n (x − k 2 )P * n (k 2 ; x) +B n η n (x − k 2 )P * n−1 (k 2 ; x) = (1 − α n )xP n (x) + (β n − k 1 − T n )P n (x) + T n P n (k 1 ) P n−1 (k 1 ) P n−1 (x) + η n P n+1 (x) − η n P n+1 (k 2 ) P n (k 2 ) P n (x) + η nBn P n (x) − η nB n P n (k 2 ) P n−1 (k 2 ) P n−1 (x) = (1 − α n ) [P n+1 (x) + c n+1 P n (x) + λ n+1 P n−1 (x)] + (β n − k 1 − T n )P n (x) + T n P n (k 1 ) P n−1 (k 1 ) P n−1 (x) + η n P n+1 (x) − η n P n+1 (k 2 ) P n (k 2 ) P n (x) + η nBn P n (x) − η nB n P n (k 2 ) P n−1 (k 2 ) P n−1 (x) = (1 − α n + η n )P n+1 (x) + β n − k 1 − T n − η n P n+1 (k 2 ) P n (k 2 ) + η nBn + c n+1 − α n c n+1 P n (x) + T n P n (k 1 ) P n−1 (k 1 ) − η nB n P n (k 2 ) P n−1 (k 2 ) + λ n+1 − α n λ n+1 P n−1 (x). (3.11) Here to obtain (3.11), we have used the expression for multiplication by x with P n (x) and combining the coefficients of P n+1 (x), P n (x) and P n−1 (x) to obtain equation (3.11). Next, setting the right side of (3.11) equal to zero and by using the fact that P n+1 (x), P n (x) and P n−1 (x) are linearly independent, we get that the coefficients must be equal to zero. So we obtain the unique sequence of constants {α n }, {γ n } and {η n } as α n = 1 + T n Pn(k 1 ) P n−1 (k 1 ) B n Pn(k 2 ) P n−1 (k 2 ) + λ n+1 β n = k 1 + T n + c n+1 −B n + P n+1 (k 2 ) P n (k 2 ) T n Pn(k 1 ) P n−1 (k 1 ) B n Pn(k 2 ) P n−1 (k 2 ) + λ n+1 η n = T n Pn(k 1 ) P n−1 (k 1 ) B n Pn(k 2 ) P n−1 (k 2 ) + λ n+1 , this completes the proof. 3.4. Quasi-type kernel polynomials of order two. Now we define the monic quasiorthogonal polynomials of order two. Define S n (x) [6] as follows: S n (x) = P n (x) + L n P n−1 (x) + M n P n−2 (x), (3.12) where {P n (x)} ∞ n=0 is a monic orthogonal polynomial sequence with respect to the linear functional L for any choice of L n , M n ∈ C. 
If L(x m S n (x)) = 0 for m = 0, 1, 2, ..., n − 3, for any choice of L n , M n ∈ C, then S n (x) is called quasi-orthogonal polynomial of order two. Similarly, we can extend this definition to the quasi-type kernel polynomial of order two with respect to L * . Define S * n (k; x) as follows: S * n (k; x) = P * n (k; x) +L n P * n−1 (k; x) +M n P * n−2 (k; x), where {P * n (k; x)} ∞ n=0 is a sequence of kernel polynomials which exist for some k ∈ C and form a monic orthogonal polynomial system with respect to the quasi-definite linear functional L * . If L * (x m S * n (k; x)) = 0 for m = 0, 1, 2, ..., n − 3 and for any choice ofL n ,M n ∈ C, then S * n (k; x) is called quasi-type kernel polynomial of order two. In the next theorem, we recover the polynomials P n (x) from the linear combination of iterated kernel polynomials [6, page 9] with two parameters and quasi-type kernel polynomials of order two with rational coefficients. We obtain two sequences of constants responsible for obtaining P n (x). Theorem 3.6. Let {P n (x)} ∞ n=0 be a monic orthogonal polynomial sequence with respect to the positive definite linear functional L. Let S * n+1 (k 1 , x) be a quasi-type kernel polynomial of order two for some k 1 ∈ C with suitable choice ofL n ,M n which satisfỹ L n +M n P n (k 1 ) λ n+1 P n−1 (k 1 ) = P n+2 (k 1 ) P n+1 (k 1 ) − P n+2 (k 2 ) P n+1 (k 2 ) − P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) . (3.13) Further, suppose that the sequence {P * * n (k 2 , k 3 ; x)} ∞ n=0 of iterated kernel polynomials exists for some k 2 ∈ C ∓ , k 3 ∈ C ± . Then, there exist unique sequences of constants {α n }, {β n } such that the sequence of polynomials {Q S n (k 1 , k 2 , k 3 ; x)} given by Q S n (k 1 , k 2 , k 3 ; x) := x − k 1 α n x − β n S * n+1 (k 1 ; x) − (x − k 2 )(x − k 3 ) α n x − β n P * n (k 2 , k 3 ; x) (3.14) satisfies the same orthogonality given by the polynomials {P n (x)}. Remark 3.1. 
The iterated kernel polynomials sequence for some k 2 ∈ C ∓ , k 3 ∈ C ± are given in [6, eq. 3.5], where C ∓ = {z : Imz ≶ 0}. Proof. If the sequence {Q S n (k 1 , k 2 , k 3 ; x)} is orthogonal with respect to the linear functional L, then by uniqueness theorem of orthogonal polynomials, {Q S n (k 1 , k 2 , k 3 ; x)} and {P n (x)} are the same system of orthogonal polynomials and vice-versa. We can write the expression (3.14) as Q S n (k 1 , k 2 , k 3 ; x) = x − k 1 α n x − β n S * n+1 (k 1 ; x) − (x − k 2 )(x − k 3 ) α n x − β n P * n (k 2 , k 3 ; x) = 1 α n x − β n (x − k 1 )S * n+1 (k 1 ; x) − (α n x − β n )P n (x) − (x − k 2 )(x − k 3 )P * n (k 2 , k 3 ; x) + P n (x). Using the definition of quasi-type kernel polynomial of order two and the expression of kernel and iterated kernel polynomials, we have (x − k 1 )S * n+1 (k 1 ; x) − (α n x − β n )P n (x) − (x − k 2 )(x − k 3 )P * n (k 2 , k 3 ; x) = P n+2 (x) − P n+2 (k 1 ) P n+1 (k 1 ) P n+1 (x) +L n P n+1 (x) −L n P n+1 (k 1 ) P n (k 1 ) P n (x) +M n P n (x) −M n P n (k 1 ) P n−1 (k 1 ) P n−1 (x) − α n xP n (x) + β n P n (x) − P n+2 (x) + P n+2 (k 2 ) P n+1 (k 2 ) P n+1 (x) + P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (x) − P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (k 2 ) P n (k 2 ) P n (x). Since {P n (x)} ∞ n=0 is a monic orthogonal polynomial sequence, we can use (1.2) to write the expression for xP n (x), which gives (x − k 1 )S * n+1 (k 1 ; x) − (α n x − β n )P n (x) − (x − k 2 )(x − k 3 )P * n (k 2 , k 3 ; x) = − P n+2 (k 1 ) P n+1 (k 1 ) P n+1 (x) +L n P n+1 (x) −L n P n+1 (k 1 ) P n (k 1 ) P n (x) +M n P n (x) −M n P n (k 1 ) P n−1 (k 1 ) P n−1 (x) − α n [P n+1 (x) + c n+1 P n (x) + λ n+1 P n−1 (x)] + β n P n (x) + P n+2 (k 2 ) P n+1 (k 2 ) P n+1 (x) + P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (x) − P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (k 2 ) P n (k 2 ) P n (x). 
Combining the coefficients of P n+1 (x), P n (x) and P n−1 (x), we get (x − k 1 )S * n+1 (k 1 ; x) − (α n x − β n )P n (x) − (x − k 2 )(x − k 3 )P * n (k 2 , k 3 ; x) = − P n+2 (k 1 ) P n+1 (k 1 ) + P n+2 (k 2 ) P n+1 (k 2 ) +L n − α n + P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (x) + −M n P n (k 1 ) P n−1 (k 1 ) − α n λ n+1 P n−1 (x) + −L n P n+1 (k 1 ) P n (k 1 ) +M n + β n − α n c n+1 − P * n+1 (k 2 , k 3 ) P * n (k 2 , k 3 ) P n+1 (k 2 ) P n (k 2 ) P n (x). Setting the left side of the above equation equal to zero and by using the fact that P n+1 (x), P n (x) and P n−1 (x) are linearly independent we get that the coefficients must be equal to zero. This gives α n = − 1 λ n+1M n P n (k 1 ) P n−1 (k 1 ) , β n =L n P n+1 (k 1 ) P n (k 1 ) −M n + λ n+2        n+1 j=0 P j (k 3 )P j (k 2 ) λ 1 λ 2 ...λ j+1 n j=0 P j (k 3 )P j (k 2 ) λ 1 λ 2 ...λ j+1        − 1 λ n+1 P n (k 1 ) P n−1 (k 1 ) c n+1 . We used the Christoffel-Darboux formula [13] and the fact that zeros [38] of P n (x) lie on the real line for n = 1, 2, ... , for k 2 , k 3 ∈ C ± to write the expression for β n . This completes the proof. Ratio of kernel polynomials and continued fractions While considering the quasi-type kernel polynomials of order two, (3.13) provides the ratio of iterated kernel polynomials. Further, in Example 2.1, we are interested to find the behavior of the ratio of Chebyshev polynomial and Chebyshev kernel polynomial. To answer the above problem, we need the ratio of kernel polynomials. In particular, in this section, we are interested in the limiting case of ratio of kernel polynomials, which is addressed in Theorem 4.2. For this, we require the confluent form of Christoffel-Darboux formula which we recall in Theorem 4.1. Then we discuss the ratio of kernel polynomials in terms of infinite continued fractions. 
As specific cases we exhibit the ratio of kernel of Laguerre polynomials and Jacobi polynomials, in terms of, Confluent and Gaussian hypergeometric functions, respectively. P 2 j (x) λ 1 λ 2 ...λ j+1 = P ′ n+1 (x)P n (x) − P n+1 (x)P ′ n (x) λ 1 λ 2 ...λ n+1 · Now we will compute the ratio of kernel polynomials as x approaches k. Theorem 4.2. Let {P * n (k; x)} ∞ n=0 be a sequence of kernel polynomials that exists for some k ∈ C. Then lim x→k P * n+1 (k; x) P * n (k; x) = P n (k) P n+1 (k) λ n+2       1 + 1 λ 1 λ 2 ...λ n+2 P 2 n+1 (k) n j=0 P 2 j (k) λ 1 λ 2 ...λ j+1       (4.1) and lim x→k P * n (k; x) P * n+1 (k; x) = P n+1 (k) P n (k) 1 λ n+2        1 − P 2 n+1 (k) λ 1 λ 2 ...λ n+2 n+1 j=0 P 2 j (k) λ 1 λ 2 ...λ j+1        . (4.2) Proof. Using the Definition 2.1 and Theorem 4.1, we have lim x→k P * n+1 (k; x) P * n (k; x) = P n (k) P n+1 (k) lim x→k P n+2 (x)P n+1 (k) − P n+2 (k)P n+1 (x) P n+1 (x)P n (k) − P n+1 (k)P n (x) = P n (k) P n+1 (k) P ′ n+2 (k)P n+1 (k) − P n+2 (k)P ′ n+1 (k) P ′ n+1 (k)P n (k) − P n+1 (k)P ′ n (k) = P n (k) P n+1 (k) λ n+2        n+1 j=0 P 2 j (k) λ 1 λ 2 ...λ j+1 n j=0 P 2 j (k) λ 1 λ 2 ...λ j+1        = P n (k) P n+1 (k) λ n+2       1 + P 2 n+1 (k) λ 1 λ 2 ...λ n+2 n j=0 P 2 j (k) λ 1 λ 2 ...λ j+1       . In similar lines, we can obtain (4.2). This completes the proof. Proof. By using Theorem 4.2, we have lim x→1Ĉ * n+1 (1; x) C * n (1; x) = 1 2 1 + 4 2n + 1 . Allowing n → ∞, we get the desired result. Next, we will discuss the link between the ratio of kernel polynomials and infinite continued fractions. For this, we first need the definition of the hypergeometric functions. The Gauss hypergeometric function 2 F 1 (p, q; r; z) is given by F (p, q; r; z) := 2 F 1 (p, q; r; z) = ∞ n=0 (p) n (q) n (r) n n! z n for r ∈ {0, −1, −2, ...},(4.3) where the symbol (·) n is known as Pochhammer symbol and is defined as (p) n = p(p + 1)(p + 2)...(p + n − 1) = Γ(p + n) Γp , with (p) 0 = 1. 
The above series converges absolutely in {z ∈ C : |z| < 1}. Further we can analytically continue the series as a single valued function everywhere except any path joining the branch points 1 and infinity [5]. Note that if we take either p or q to be a negative integer, the terms of the series will vanish after some stage and we will be left with a finite linear combination of monomials. If this happens, the convergence of the hypergeometric series is not an issue. If we replace z by z/q and allow q → ∞, then by using lim n→∞ (q) n q n = 1, we obtain the Kummer or confluent hypergeometric function φ(p; r; z) := 1 F 1 (p; r; z) = lim q→∞ F (p, q; r; z/q) = ∞ n=0 (p) n (r) n n! z n for r ∈ {0, −1, −2, ...}. We shall use the following contiguous relation satisfied by the Gaussian hypergeometric function to obtain the continued fraction of ratio of hypergeometric functions. F (p + 1, q; r; z) = F (p, q; r; z) − q r zF (p + 1, q + 1; r + 1; z), (4.4) F (p, q; r; z) = F (p, q + 1; r + 1; z) − p(r − q) r(r + 1) zF (p + 1, q + 1; r + 2; z), (4.5) F (p, q + 1; r + 1; z) = F (p + 1, q + 1; r + 2; z) − (q + 1)(r − p + 1) (r + 1)(r + 2) zF (p + 1, q + 2; r + 3; z). (4.6) We can use (4.4)-(4.6) to get the ratio of Gauss hypergeometric functions [41, p. 337](see also [15,30]). F (p + 1, q; r; z) F (p, q; r; z) = 1 1 − (1 − g 0 ) g 1 z 1 − (1 − g 1 ) g 2 z 1 − (1 − g 2 ) g 3 z 1 − · · · (4.7) with g j = g j (p, q, r) :=      0 for j = 0, p+k r+2k−1 for j = 2k ≥ 2, k ≥ 1, q+k−1 r+2k−2 for j = 2k − 1 ≥ 1, k ≥ 1. Hence, we can write the ratio of Kummer hypergeometric functions as a limit of the ratio of Gauss hypergeometric functions by φ(p + 1; r; z) φ(p; r; z) = lim q→∞ F (p + 1, q; r; z/q) F (p, q; r; z/q) = 1 1 − d 1 z 1 − d 2 z 1 − d 3 z 1 − · · · (4.8) with d j = lim q→∞ (1−g j−1 )g j q for all j ≥ 1. So d j = d j (p, r) :=      1 r for j = 1, −(p+k) (r+2k−1)(r+2k−2) for j = 2k, k ≥ 1, r−p+k−1 (r+2k−1)(r+2k−2) for j = 2k − 1, k ≥ 2. 
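The continued fraction (4.7) is easy to test numerically. The following sketch is an illustration (the parameter values p, q, r, z are arbitrary choices): it evaluates a truncation of (4.7) from the bottom up and compares it with the ratio of hypergeometric series computed directly from the definition (4.3).

```python
def hyp2f1(p, q, r, z, terms=300):
    # Partial sum of the Gauss series (4.3); adequate for |z| < 1.
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= (p + n) * (q + n) / ((r + n) * (n + 1.0)) * z
    return s

def g(j, p, q, r):
    # Coefficients of (4.7): g_0 = 0, g_{2k} = (p+k)/(r+2k-1), g_{2k-1} = (q+k-1)/(r+2k-2).
    if j == 0:
        return 0.0
    if j % 2 == 0:
        k = j // 2
        return (p + k) / (r + 2.0 * k - 1.0)
    k = (j + 1) // 2
    return (q + k - 1.0) / (r + 2.0 * k - 2.0)

def cf_ratio(p, q, r, z, depth=60):
    # Bottom-up evaluation of the continued fraction in (4.7).
    tail = 1.0
    for j in range(depth, 0, -1):
        tail = 1.0 - (1.0 - g(j - 1, p, q, r)) * g(j, p, q, r) * z / tail
    return 1.0 / tail

p, q, r, z = 0.5, 1.5, 2.5, 0.3
print(cf_ratio(p, q, r, z) - hyp2f1(p + 1, q, r, z) / hyp2f1(p, q, r, z))  # ≈ 0
```

A depth of 60 already reproduces F(p + 1, q; r; z)/F(p, q; r; z) to machine precision for |z| well inside the unit disc.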
If we put p = −n, r = γ + 2 and z = −x in (4.8), we obtain φ(−n + 1; γ + 2; −x) φ(−n; γ + 2; −x) = 1 1 +d {L n (x)} ∞ n=0 forms an orthonormal system on (−∞, 0] with respect to the weight function (−x) γ e x [38]. Considering (2.5) for the Laguerre polynomials with the particular value k = 0, we get L γ * n (0; x) = λ 1 λ 2 ...λ n+1 (L γ n (0)) −1 L n (γ + 1; x), where λ n+1 = n(n + γ) [13, p. 154]. So, the ratio of the kernel of Laguerre polynomials for k = 0 is given by 1 x 1 +d 2 x 1 +d 3 x 1 + · · · (4.9) withd j =d j (−n, γ + 2) :=      1 γ+2 for j = 1, (n−k) (γ+2k)(γ+2k+1) for j = 2k, k ≥ 1, γ+n+k+1 (γ+2k)(γ+2k+1) for j = 2k − 1, k ≥ 2.L γ * n−1 (0; x) L γ * n (0; x) = 1 λ n+1 L γ n (0)L n−1 (γ + 1; x) L γ n−1 (0)L n (γ + 1; x) . We can write the above expression as L γ * n−1 (0; x) L γ * n (0; x) = 1 n 2 B(n, γ + 2) nB(n, γ + 1) φ(−n + 1; γ + 2; −x) φ(−n; γ + 2; −x) , where B(·, ·) denotes the well-known Beta function. Hence, we can use (4.9) to obtain the ratio of kernel of Laguerre polynomials for k = 0 in terms of the continued fractions as L γ * n−1 (0; x) L γ * n (0; x) = 1 n 2 B(n, γ + 2) nB(n, γ + 1) 1 1 +d 1 x 1 +d 2 x 1 +d 3 x 1 + · · · , whered j 's are given by (4.10). Similarly we can express the ratio of the kernels of Laguerre polynomials with different parameteric value of γ > 0 for k = 0 as L γ * n−1 (0; x) L (γ−1) * n (0; x) = λ 1 λ 2 ...λ ñ λ 1λ2 ...λ n+1 L (γ−1) n (0)L n−1 (γ + 1; x) L γ n−1 (0)L n (γ; x) , whereλ n+1 = n(n + γ − 1). The above ratio can be simplified as L γ * n−1 (0; x) L (γ−1) * n (0; x) = γ 2 n 3/2 (n + γ)(γ + 1)(n + γ − 1) φ(−n + 1; γ + 2; −x) φ(−n; γ + 1; −x) . (4.11) Now, we can use [41, eq. 91.1] to obtain L γ * n−1 (0; x) L (γ−1) * n (0; x) = γ 2 n 3/2 (n + γ)(γ + 1)(n + γ − 1) 1 1 + d ′ 1 x 1 − d ′ 2 x 1 + d ′ 3 x 1 − · · · , (4.12) where d ′ j = d ′ j (n, r) := n+k+γ+1 (γ+2k+1)(γ+2k+2) for j = 2k + 1, k ≥ 0, 1−n+k (γ+2k+1)(γ+2k+2) for j = 2k + 2, k ≥ 0. 4.2. Kernel of Jacobi polynomials. 
We know that the Jacobi polynomials with parameter (γ, δ) can be written in the form of Gauss hypergeometric functions [33] P (γ,δ) n (x) = n + γ n F −n, n + γ + δ + 1; γ + 1; 1 − x 2 , n ∈ Z + . Note that {P (γ,δ) n (x)} ∞ n=0 forms an orthogonal system on [−1, 1] with respect to the weight function w(x) = (1 − x) γ (1 + x) δ , γ, δ > −1. The normalisatioñ P (γ,δ) n (x) = (2n + γ + δ + 1)Γ(n + 1)Γ(n + γ + δ + 1) 2 γ+δ+1 Γ(n + γ + 1)Γ(n + δ + 1) P (γ,δ) n (x) (4.13) provides the sequence {P (γ,δ) n (x)} ∞ n=0 as an orthonormal system on [−1, 1] with respect to the weight function w(x) = (1 − x) γ (1 + x) δ , γ, δ > −1. Considering (2.5) for the Jacobi polynomials with the particular value k = 1, we get P (γ,δ) * n (1; x) = λ 1 λ 2 ...λ n+1 (P (γ,δ) n (1)) −1P (γ+1,δ) n (x), where [13, p. 153] λ n+1 = 4n(n + γ)(n + δ)(n + γ + δ) (2n + γ + δ) 2 (2n + γ + δ + 1)(2n + γ + δ − 1) . Hence, the ratio of the kernel of Jacobi polynomials with parameters γ > −1, δ > 0 for k = 1 is given by P (γ+1,δ) * n−1 (1; x) P (γ+1,δ−1) * n (1; x) = λ 1 λ 2 ...λ ñ λ 1λ2 ...λ n+1 P (γ+1,δ−1) n (1)P (γ+1,δ) n−1 (x) P (γ+1,δ) n−1 (1)P (γ+1,δ−1) n (x) . The above ratio can be simplified as where C(n, γ, δ) = (γ + δ + 2) 2 (2n + γ + δ + 1)(2n + γ + δ) 3 32n 3 (n + γ + 1)(γ + 1) 2 δ 2 . Hence, we can use (4.7) to write the ratio of kernel of Jacobi polynomials in terms of continued fractions as for j = 2k − 1, k ≥ 1. P (γ+1,δ) * n−1 (1; x) P (γ+1,δ−1) * n (1; x) = C(n, γ, δ) 1 1 − (1 − e 0 ) e 1 ( 1−x 2 ) 1 − (1 − e 1 ) e 2 ( 1−x 2 ) 1 − (1 − e 2 ) e 3 ( 1−x 2 ) 1 − · · · ,(4. (4.16) Concluding remarks In this work the quasi-type kernel polynomials are introduced and one of our main objective that was established was the following. Given a quasi-type kernel polynomial, to find a suitable orthogonal polynomial which is an outcome of specific spectral transformation, whose linear combination with the quasi-type kernel polynomial recovers the orthogonality property. 
Besides this, several observations were made which are useful for future research; we outline them in this section. The identity (2.8) in Proposition 2 was proved using the TTRR (1.2), while the same identity was established using the Christoffel-Darboux kernel (2.4) in [31, eq. 2.5]. Hence, it would be interesting to revisit other results in the literature that were proved using the Christoffel-Darboux kernel (2.4) and attempt to prove them using the TTRR.

In the hypothesis of Theorem 3.6, we required the coefficients L̃_n and M̃_n of the quasi-type kernel polynomial of order two to satisfy the expression (3.13). As a result, we obtained two unique sequences of constants α_n and β_n that recover the orthogonality given by the polynomials P_n(x). It would be interesting to remove the hypothesis of this particular choice of the coefficients L̃_n and M̃_n. More specifically, we end this point of discussion with the following question.

Problem 1. Is it possible to obtain three sequences of constants so that relaxation of the hypothesis (3.13) is permissible?

In Theorems 3.1, 3.3, 3.5 and 3.6, a quasi-type kernel polynomial is written in combination with a specific spectrally transformed polynomial, and it is established that the resulting polynomial has the same orthogonality given by the moment functional L. This leads to the question of decomposing the original orthogonal polynomials {P_n}, given by the moment functional L, into a linear combination of a quasi-type kernel polynomial and another orthogonal polynomial related to the given one, and to the question of the relation between such decompositions. Hence we propose the following problem.

Problem 2. To find the conditions under which an orthogonal polynomial can be decomposed into two parts, viz., a quasi-type kernel polynomial and a specific orthogonal polynomial related to the given polynomial.
It is expected, in view of the results proved here, that the decomposed part of the orthogonal polynomial is a specific spectral transformation of the given orthogonal polynomial. However, it may be some other orthogonal polynomial with different properties, other than a spectral transformation of the given polynomial. Further, it is possible that the decomposed orthogonal polynomial and the quasi-type kernel polynomial are orthogonal to each other, leading to the biorthogonality property in the sense of Konhauser [29]. For details of this biorthogonality, we refer to [7,29]. We formulate this as another problem.

Problem 3. Given the decomposition of an orthogonal polynomial into its quasi-type kernel polynomial and another orthogonal polynomial, is there any biorthogonality relation between these two decomposed polynomials?

The g_n's given by (4.7) while finding the ratio of Gaussian hypergeometric functions constitute the g-sequence and hence the g-fraction; see [13]. Hence, the d̃_n's given by (4.10) for the ratio related to the Laguerre polynomials and the e_n's given by (4.16) for the ratio related to the Jacobi polynomials lead to the study of chain sequences [13]. In fact, the sequence {λ_{n+1}/(c_n c_{n+1})} obtained from the TTRR (1.2) is a chain sequence when c_n > 0, n ≥ 1. A sequence {l_n} that satisfies l_n = (1 − g_{n−1})g_n, n ≥ 1, is a positive chain sequence, where the g_n's are called a parameter sequence, with 0 ≤ g_0 < 1 and 0 < g_n < 1 for n ≥ 1 [13]. Hence, given c_n, using such a parameter sequence we can find λ_n, so that the TTRR (1.2) can be formed and the sequence of orthogonal polynomials can be extracted for the given moment functional L. Further, the parameter sequence {g_n} is called the minimal parameter sequence, denoted by {m_n}, if g_0 := m_0 = 0. In fact, every chain sequence has a minimal parameter sequence [13, p. 91-92].
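As a small illustration of these notions (not from the paper): the parameters of a chain sequence can be recovered from m_0 = 0 via m_n = l_n/(1 − m_{n−1}), and for the constant chain sequence l_n ≡ 1/4 this yields the well-known minimal parameter sequence m_n = n/(2(n + 1)).

```python
def minimal_params(l, N):
    # m_0 = 0; l_n = (1 - m_{n-1}) m_n  =>  m_n = l_n / (1 - m_{n-1}).
    m = [0.0]
    for n in range(1, N):
        m.append(l(n) / (1.0 - m[-1]))
    return m

m = minimal_params(lambda n: 0.25, 12)
print(m[:5])  # 0, 1/4, 1/3, 3/8, 2/5, i.e. m_n = n/(2(n+1))
print(all(abs(m[n] - n / (2.0 * (n + 1))) < 1e-12 for n in range(12)))  # True
```

Here m_n increases to 1/2, so it is not the maximal parameter sequence, reflecting that l_n ≡ 1/4 is not a single-parameter chain sequence.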
The sequence {M_n} is called the maximal parameter sequence for the fixed chain sequence {l_n}, where M_n = inf{g_n : {g_k} ∈ G}, with G the set of all parameter sequences {g_k} of {l_n}. If m_n = M_n, then the parameter sequence is unique and the chain sequence {l_n} is called a Single Parameter Positive Chain Sequence, or SPPCS in short. For the details of this terminology we refer to [13,14], and for recent results in this direction we refer to [26]. Note that chain sequences are useful in studying various properties of the corresponding orthogonal polynomials, including the moment problems; in this context it is useful if the chain sequence is an SPPCS. In case the chain sequence is not an SPPCS, there are several ways of finding one; one such method is given in [8], where, given a chain sequence {l_n}, the complementary chain sequence {k_n} is defined as k_n := 1 − l_n. It was established in [8] that either {l_n} or {k_n} must be an SPPCS. Hence we end this manuscript with the following problem, which would provide interesting future research in this direction.

Problem 4. To find the nature of the SPPCS related to the sequences given by (4.10) and (4.16) and its significance in studying the properties of the corresponding orthogonal polynomials.

Theorem 3.4 (cf. [23, page 256]). Let {P̂_n(x)}_{n=0}^∞ be a monic orthogonal polynomial sequence corresponding to the quasi-definite linear functional L̂. Suppose that the sequence {P*_n(k; x)}_{n=0}^∞ of kernel polynomials generated by the Christoffel transformation exists for some k ∈ C. Then we have P̂_n(x) = P_n(x) − T_n P*_{n−1}(k; x),  (3.9)

Theorem 3.5. Let T*_n(k_2; x) be a quasi-type kernel polynomial of order one for some k_2 ∈ C. Further, suppose that the sequence {P̂_n(x)}_{n=0}^∞ of polynomials corresponding to the Uvarov transformation exists. Then there exist unique sequences of constants {α_n}, {γ_n} and {η_n} such that the sequence of polynomials {Q^U_n(k_1, k_2; x)} given by (3.10) satisfies the same orthogonality given by {P_n(x)}.

Theorem 4.1.
[13] Let {P_n(x)}_{n=1}^∞ be a sequence of monic orthogonal polynomials and λ_n ≠ 0. Then

Σ_{j=0}^n P_j^2(x)/(λ_1 λ_2 ⋯ λ_{j+1}) = (P'_{n+1}(x) P_n(x) − P_{n+1}(x) P'_n(x))/(λ_1 λ_2 ⋯ λ_{n+1}).

Corollary 4.1. Let {Ĉ_n(x)}_{n=0}^∞ be a sequence of monic Chebyshev polynomials of the first kind. Then lim_{n→∞} lim_{x→1} Ĉ*_{n+1}(1; x)/Ĉ*_n(1; x) = 1/2.

4.1. Kernel of Laguerre polynomials. We know that Laguerre polynomials with parameter γ can be written in the form of Kummer hypergeometric functions [20]:

L^γ_n(x) = C(n + γ, n) ₁F₁(−n; γ + 1; x), n = 0, 1, 2, ...,

where C(n + γ, n) denotes the binomial coefficient. {L^γ_n(x)}_{n=0}^∞ forms an orthogonal system on [0, +∞) with respect to the weight function w(x) = x^γ e^{−x}, γ > −1. We can normalize the Laguerre polynomials and define

L_n(x) = L_n(γ; x) := (1/Γ(γ + 1)) C(n + γ, n) L^γ_n(−x), n = 0, 1, 2, ....

P^{(γ+1,δ)*}_{n−1}(1; x)/P^{(γ+1,δ−1)*}_n(1; x) = C(n, γ, δ) F(−n + 1, n + γ + δ + 1; γ + 2; (1 − x)/2)/F(−n, n + γ + δ + 1; γ + 2; (1 − x)/2),

e_j = e_j(n, γ, δ) := (k − n)/(γ + 2k + 1) for j = 2k, k ≥ 1, and (n + γ + δ + k)/(γ + 2k) for j = 2k − 1, k ≥ 1.

Theorem 2.1. [13] Q_{n+1}(x) is a quasi-orthogonal polynomial of order one if, and only if, there are constants a and b, not both zero simultaneously, such that Q_{n+1}(x) = aP_{n+1}(x) + bP_n(x).

Proposition 1. Let dµ be a positive Borel measure on R with finite moments. Then the sequence {(x − u)K_n(x, u)}_{n≥0} forms an orthogonal polynomial sequence with respect to the product measure dµ.

References

N. I. Akhiezer, The classical moment problem and some related questions in analysis, translated by N. Kemmer, Hafner Publishing Co., New York, 1965.

M. Alfaro, F. Marcellán, A. Peña and M. Luisa Rezola, When do linear combinations of orthogonal polynomials yield new sequences of orthogonal polynomials?, J. Comput. Appl. Math. 233 (2010), no. 6, 1446-1452.

M. Alfaro, A. Peña, M. Luisa Rezola and F.
Marcellán, Orthogonal polynomials associated with an inverse quadratic spectral transform, Comput. Math. Appl. 61 (2011), no. 4, 888-900.

M. Alfaro and L. Moral, Quasi-orthogonality on the unit circle and semi-classical forms, Portugal. Math. 51 (1994), no. 1, 47-62.

G. E. Andrews, R. Askey and R. Roy, Special functions, Encyclopedia of Mathematics and its Applications, 71, Cambridge University Press, Cambridge, 1999.

R. Bailey and M. Derevyagin, Complex Jacobi matrices generated by Darboux transformations, arXiv preprint arXiv:2107.09824, (2021), 32 pages.

Kiran Kumar Behera and A. Swaminathan, Biorthogonal rational functions of R_II type, Proc. Amer. Math. Soc. 147 (2019), no. 7, 3061-3073.

K. K. Behera, A. Sri Ranga and A. Swaminathan, Orthogonal polynomials associated with complementary chain sequences, SIGMA Symmetry Integrability Geom. Methods Appl. 12 (2016), Paper No. 075, 17 pp.

C. F. Bracciali, A. Martínez-Finkelshtein, A. Sri Ranga and D. O. Veronese, Christoffel formula for kernel polynomials on the unit circle, J. Approx. Theory 235 (2018), 46-73.

K. Castillo and M. N. Rebocho, On linear spectral transformations and the Laguerre-Hahn class, Integral Transforms Spec. Funct. 28 (2017), no. 11, 859-875.

T. S. Chihara, On quasi-orthogonal polynomials, Proc. Amer. Math. Soc. 8 (1957), 765-767.

T. S. Chihara, On kernel polynomials and related systems, Boll. Un. Mat. Ital. (3) 19 (1964), 451-459.

T. S. Chihara, An introduction to orthogonal polynomials, Mathematics and its Applications, Vol. 13, Gordon and Breach Science Publishers, New York, 1978.

M. S. Costa, H. M. Felix and A. Sri Ranga, Orthogonal polynomials on the unit circle and chain sequences, J. Approx. Theory 173 (2013), 14-32.

M. Derevyagin, Jacobi matrices generated by ratios of hypergeometric functions, J. Difference Equ. Appl. 24 (2018), no. 2, 267-276.

D. Dickinson, On quasi-orthogonal polynomials, Proc. Amer. Math. Soc. 12 (1961), 185-194.

A. Draux, On quasi-orthogonal polynomials, J. Approx. Theory 62 (1990), no. 1, 1-14.

A. Draux, On quasi-orthogonal polynomials of order r, Integral Transforms Spec. Funct. 27 (2016), no. 9, 747-765.

V. K. Dzyadyk and L. A. Ostrovetskiĭ, Approximation by polynomials of boundary value problems for ordinary linear differential equations, Akad. Nauk Ukrain. SSR Inst. Mat. Preprint 1985, no. 11, 31 pp.

A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher transcendental functions, Vols. I, II, McGraw-Hill Book Company, Inc., New York, 1953.

L. Fejér, Mechanische Quadraturen mit positiven Cotesschen Zahlen, Math. Z. 37 (1933), no. 1, 287-309.

A. Gelfond, On polynomials deviating least from zero along with their derivatives, Doklady Akad. Nauk SSSR (N.S.) 96 (1954), 689-691.

J. C. García-Ardila, F. Marcellán and M. Marriaga, From standard orthogonal polynomials to Sobolev orthogonal polynomials: the role of semiclassical linear functionals, in Orthogonal Polynomials: 2nd AIMS-Volkswagen Stiftung Workshop, Douala, Cameroon, 5-12 October 2018 (M. Foupouagnigni and W. Koepf, eds.), Tutorials, Schools, and Workshops in the Mathematical Sciences, Birkhäuser, Cham, 2020, 245-292.

J. C. García-Ardila, F. Marcellán and P. H. Villamil-Hernández, Associated orthogonal polynomials of the first kind and Darboux transformations, J. Math. Anal. Appl. 508 (2022), no. 2, 26 pp, https://doi.org/10.1016/j.jmaa.2021.125883.

Z. Grinshpun, Special linear combinations of orthogonal polynomials, J. Math. Anal. Appl. 299 (2004), no. 1, 1-18.

G. A. Marcato, A. Sri Ranga and Y. C. Lun, Parameters of a positive chain sequence associated with orthogonal polynomials, Proc. Amer. Math. Soc. 150 (2022), no. 6, 2553-2567.

M. Humet and M. Van Barel, When is the Uvarov transformation positive definite?, Numer. Algorithms 59 (2012), no. 1, 51-62.

M. E. H. Ismail and X.-S. Wang, On quasi-orthogonal polynomials: their differential equations, discriminants and electrostatics, J. Math. Anal. Appl. 474 (2019), no. 2, 1178-1197.

J. D. E. Konhauser, Some properties of biorthogonal polynomials, J. Math. Anal. Appl. 11 (1965), 242-260.

R. Küstner, Mapping properties of hypergeometric functions and convolutions of starlike or convex functions of order α, Comput. Methods Funct. Theory 2 (2002), 597-610.

K. H. Kwon, D. W. Lee, F. Marcellán and S. B. Park, On kernel polynomials and self-perturbation of orthogonal polynomials, Ann. Mat. Pura Appl. (4) 180 (2001), no. 2, 127-146.
Sur l'adjonction d'une masse de Diracà une forme régulière et semiclassique. F Marcellán, P Maroni, Ann. Mat. Pura Appl. 4F. Marcellán and P. Maroni, Sur l'adjonction d'une masse de Diracà une forme régulière et semi- classique, Ann. Mat. Pura Appl. (4) 162 (1992), 1-22. The differential equation for Jacobi-Sobolev orthogonal polynomials with two linear perturbutions. C Markett, 10.1016/j.jat.2022.105782J. Approx. Theory. 280C. Markett, The differential equation for Jacobi-Sobolev orthogonal polynomials with two linear perturbutions, J. Approx. Theory 280 (2022), 24 pp, https://doi.org/10.1016/j.jat.2022.105782. I P Natanson, Constructive function theory. New YorkFrederick Ungar Publishing CoIItranslated from the Russian by John R. SchulenbergerI. P. Natanson, Constructive function theory. Vol. II, translated from the Russian by John R. Schu- lenberger, Frederick Ungar Publishing Co., New York, 1965. Sur le probleme des moments, Troisième note. M Riesz, Ark. Mat. Astr. Fys. 17M. Riesz, Sur le probleme des moments, Troisième note, Ark. Mat. Astr. Fys., 17 (1923), 1-52. Asymptotic behavior of Christoffel-Darboux kernel via three-term recurrence relation II. G Świderski, B Trojan, 10.1016/j.jat.2020.105496J. Approx. Theory. 261G.Świderski and B. Trojan, Asymptotic behavior of Christoffel-Darboux kernel via three-term recur- rence relation II, J. Approx. Theory 261 (2021), 48 pp., https://doi.org/10.1016/j.jat.2020.105496. Christoffel functions for multiple orthogonal polynomials. G Świderski, W Van Assche, 10.1016/j.jat.2022.105820J. Approx. Theory. 283G.Świderski and W. Van Assche, Christoffel functions for multiple orthogonal polynomials, J. Ap- prox. Theory 283 (2022), 22 pp., https://doi.org/10.1016/j.jat.2022.105820. Orthogonal polynomials. G Szegö, American Mathematical Society23Providence, RIG. Szegö, Orthogonal polynomials, American Mathematical Society Colloquium Publications, Vol. 23, American Mathematical Society, Providence, RI, 1959. 
On mechanical quadratures, in particular, with positive coefficients. J Shohat, Trans. Amer. Math. Soc. 423J. Shohat, On mechanical quadratures, in particular, with positive coefficients, Trans. Amer. Math. Soc. 42 (1937), no. 3, 461-496. The connection between systems of polynomials that are orthogonal with respect to different distribution functions. V B Uvarov, Ž. Vyčisl. Mat i Mat. Fiz. 9V. B. Uvarov, The connection between systems of polynomials that are orthogonal with respect to different distribution functions,Ž. Vyčisl. Mat i Mat. Fiz. 9 (1969), 1253-1262. Analytic Theory of Continued Fractions. H S Wall, Van Nostrand Company, IncNew York, NYH. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., New York, NY, 1948.
Title: Multiplicative deformations of spectral triples associated to left invariant metrics on Lie groups

Authors: Amine Bahayou, Mohamed Boucetta

arXiv: 0906.2887 (https://arxiv.org/pdf/0906.2887v1.pdf)

Abstract. We study the triple (G, π, ⟨,⟩) where G is a connected and simply connected Lie group, and π and ⟨,⟩ are, respectively, a multiplicative Poisson tensor and a left invariant Riemannian metric on G such that the necessary conditions, introduced by Hawkins, for the existence of a noncommutative deformation (in the direction of π) of the spectral triple associated to ⟨,⟩ are satisfied. We show that the geometric problem of the classification of such triples (G, π, ⟨,⟩) is equivalent to an algebraic one. We solve this algebraic problem in low dimensions and give the list of all (G, π, ⟨,⟩) satisfying Hawkins's conditions, up to dimension four.
16 Jun 2009

MSC classification: 58B34; Secondary 46L65, 53D17.
Keywords: Poisson-Lie groups, contravariant connections, metacurvature, unimodularity, spectral triples.

1 Introduction

In [8] and [9], Hawkins showed that if a deformation of the graded algebra of differential forms on a Riemannian manifold (M, ⟨,⟩) comes from a deformation of the spectral triple describing the Riemannian manifold M, then the Poisson tensor π (which characterizes the deformation) and the Riemannian metric satisfy the following conditions:

1. The associated metric contravariant connection D is flat.
The metacurvature, introduced by Hawkins in [9], is a (2, 3)-tensor field (symmetric in the contravariant indices and antisymmetric in the covariant indices) associated naturally to any torsion-free and flat contravariant connection. In [9], Hawkins studied completely the geometry of the triples (M, , , π) satisfying 1-3 when M is compact and , is Riemannian. In [4], the second author gave a method which permit the construction of a large class of triples (M, , , π) satisfying 1-3. We call the conditions 1-3 Hawkins's conditions and a couple (π, , ) satisfying 1-2 will be called flat and metaflat. In this paper, we study the triples (G, π, , ) satisfying Hawkins's conditions, where G is a connected and simply connected Lie group endowed with a multiplicative Poisson tensor π and a left invariant Riemannian metric , . We reduce the geometric problem of classifying such triples to an algebraic one and we solve it when the dimension of the Lie group is ≤ 4. In [1], the authors gave the complete description of the triples (G, π, , ) satisfying Hawkins's conditions when G is the 2n+1-dimensional Heisenberg group. To state our main results, let us introduce the notion of Milnor Lie algebra which will be central in this paper and recall briefly some classical facts about Poisson-Lie groups. The notion of Poisson-Lie group was first introduced by Drinfel'd [5] and studied by Semenov-Tian-Shansky [13] (see also [11]). A Milnor Lie algebra is a finite dimensional real Lie algebra G endowed with a scalar product , such that: (a) the Lie subalgebra S = {u ∈ G, ad u + ad t u = 0} is abelian (ad t u denotes the adjoint of ad u w.r.t. , ), (b) the derived ideal [G, G] is abelian and S ⊥ = [G, G] (S ⊥ is the orthogonal of S). This terminology is justified by a classical result of Milnor. 
Indeed, in [12], Milnor showed that a left invariant Riemannian metric on a Lie group is flat if and only if its Lie algebra is a semi-direct product of an abelian algebra b with an abelian ideal u and, for any u ∈ b, ad_u is skew-symmetric. This result can be formulated in a more precise way: in Proposition 2.1, we will show that a left invariant Riemannian metric on a Lie group is flat if and only if its Lie algebra is a Milnor Lie algebra.

A Poisson tensor π on a Lie group G is called multiplicative if, for any a, b ∈ G,

π(ab) = (L_a)_* π(b) + (R_b)_* π(a),

where (L_a)_* (resp. (R_b)_*) denotes the tangent map of the left translation of G by a (resp. the right translation of G by b). Pulling π back to the identity element e of G by left translations, we get a map π^ℓ : G → G ∧ G defined by π^ℓ(g) = (L_{g^{-1}})_* π(g). Let ξ := d_e π^ℓ : G → G ∧ G be the intrinsic derivative of π^ℓ at e. It is well-known that (G, [,], ξ) is a Lie bialgebra, i.e., ξ is a 1-cocycle relative to the adjoint representation of G on G ∧ G, and the dual map of ξ, [,]_* : G* × G* → G*, is a Lie bracket on G*. It is also well-known that (G*, [,]_*, ρ) is a Lie bialgebra, where ρ : G* → G* ∧ G* is the dual of the Lie bracket on G. Note that ρ = −d, where d is the restriction of the differential to left invariant 1-forms.

A Poisson-Lie group endowed with a left invariant Riemannian metric will be called a Riemannian Poisson-Lie group. For any scalar product ⟨,⟩ on a Lie algebra G, we denote by ⟨,⟩* the associated scalar product on G*. Let us state our main results:

1. (G*, [,]_*) is an unimodular Lie algebra;

2. for any u ∈ G,

ρ(i_{ξ(u)} µ_e) = 0,   (2)

where ξ is the 1-cocycle associated to π and ρ = −d is the dual 1-cocycle, extended as a differential to ∧^{dim G−2} G*.

We will see (cf. Proposition 3.1) that, for a general connected Poisson-Lie group, the condition d(i_π µ) = 0 implies (2). If G is abelian then ρ = 0, and one can easily deduce the following result from Theorems 1.1-1.2.
The paper is organized as follows. In Section 2, we present a reformulation of a classical result of Milnor and we recall some standard facts about Levi-Civita contravariant connections and about the metacurvature of flat and torsion-free contravariant connections. In Section 3, we prove Theorems 1.1-1.2 and, finally, Section 4 is devoted to the determination of Riemannian Poisson-Lie groups satisfying Hawkins's conditions in dimension 2, 3 and 4.

2 Preliminaries

Milnor Lie algebras

The following lemma, Lemma 2.1, is interesting in itself.

Proof (of Lemma 2.1). For any u ∈ G, we denote by u^+ the left invariant vector field associated to u. Remark that S^+ = {u^+ : u ∈ S} is the Lie algebra of left invariant Killing vector fields. Now, since ⟨u^+, u^+⟩ is constant for any u ∈ S, for any left invariant vector field X we have

⟨∇_X ∇_X u^+, u^+⟩ + ⟨∇_X u^+, ∇_X u^+⟩ = 0,   (3)

where ∇ is the Levi-Civita connection associated to ⟨,⟩. The vector field u^+ is Killing, thus we have the well-known formula (see [2], Theorem 1.81)

∇_X ∇_X u^+ − ∇_{∇_X X} u^+ = R(u^+, X)X,

where R(X, Y) = ∇_{[X,Y]} − [∇_X, ∇_Y] is the curvature tensor. Moreover ⟨∇_{∇_X X} u^+, u^+⟩ = 0, hence formula (3) becomes

⟨R(u^+, X)X, u^+⟩ + ⟨∇_X u^+, ∇_X u^+⟩ = 0.

This implies, since the curvature is nonpositive, that ⟨∇_X u^+, ∇_X u^+⟩ = 0. So u^+ is a parallel vector field and the lemma follows.

The following proposition, Proposition 2.1, is a reformulation of a classical result of Milnor (see [12], Theorem 1.5).

Proof (of Proposition 2.1). Note first that the Levi-Civita connection of ⟨,⟩ is entirely determined by the product A : G × G → G given by

2⟨A_u v, w⟩_e = ⟨[u, v], w⟩_e + ⟨[w, u], v⟩_e + ⟨[w, v], u⟩_e,   (4)

and the curvature vanishes if and only if, for any u, v ∈ G, A_{[u,v]} = [A_u, A_v]. If G = S ⊕ [G, G] is a Milnor Lie algebra, then one can deduce easily from (4) that A_u = 0 if u ∈ [G, G] and A_u = ad_u if u ∈ S, and hence the curvature vanishes identically. Suppose now that the curvature vanishes.
In the proof of his result, Milnor considered u = {u ∈ G : A_u = 0} and showed that u is an abelian ideal, its orthogonal b is an abelian subalgebra, and for all u ∈ b, ad_u is skew-symmetric. Hence b ⊂ S and [G, G] = [b, u]. Now, for any u ∈ u, v ∈ b and w ∈ G, we have A_u = 0 and then ⟨w, [u, v]⟩ + ⟨ad_w u, v⟩ + ⟨u, ad_w v⟩ = 0. This relation implies that S = [G, G]^⊥. We deduce that [G, G] ⊂ u and [G, G] is abelian. From Lemma 2.1, S is abelian, which completes the proof.

Proposition 2.2. Let G be a Milnor Lie algebra. If dim S ≥ 1 then the derived ideal [G, G] is of even dimension.

Proof. Let (s_1, ..., s_p) be a basis of S. The restriction of ad_{s_1} to [G, G] is a skew-symmetric endomorphism, thus its kernel K_1 is of even codimension in [G, G]. Now, ad_{s_2} commutes with ad_{s_1} and leaves K_1 invariant, hence K_1 ∩ ker ad_{s_2} is of even codimension in K_1 and hence of even codimension in [G, G]. Thus, by induction, we show that K_p = [G, G] ∩ (∩_{i=1}^p ker ad_{s_i}) is an even codimensional subspace of [G, G]. Now, from its definition, K_p is contained in the center of G, which is contained in S, and then K_p = {0}; the result follows.

Contravariant connections and metacurvature

Contravariant connections associated to a Poisson structure have recently turned out to be useful in several areas of Poisson geometry. Contravariant connections were defined by Vaisman [14] and were analyzed in detail by Fernandes [7]. This notion appears extensively in the context of noncommutative deformations (see [8, 9]). Let (P, π) be a Poisson manifold. We consider the anchor map π^# : T*P → TP given by β(π^#(α)) = π(α, β), and we denote by [,]_π the Koszul bracket on differential 1-forms, given by

[α, β]_π = L_{π^#(α)} β − L_{π^#(β)} α − d(π(α, β)).   (5)

This bracket can be extended naturally to Ω*(P) and gives rise to a bracket which we also denote by [,]_π.
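The flatness criterion A_{[u,v]} = [A_u, A_v] from the proof of Proposition 2.1, together with the explicit form of A on a Milnor Lie algebra, can be verified numerically. The sketch below uses a hypothetical 3-dimensional algebra with basis (s, f1, f2), brackets [s, f1] = f2, [s, f2] = −f1 and the Euclidean scalar product (an illustration, not data from the paper):

```python
import numpy as np

# Hypothetical Milnor Lie algebra: (s, f1, f2), [s, f1] = f2, [s, f2] = -f1,
# orthonormal basis, so <e_i, e_j> = delta_ij.
n = 3
C = np.zeros((n, n, n))  # C[i, j, k] = coefficient of e_k in [e_i, e_j]
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0
C[0, 2, 1], C[2, 0, 1] = -1.0, 1.0

def A(u):
    """Matrix of A_u from (4): 2<A_u v, w> = <[u,v],w> + <[w,u],v> + <[w,v],u>."""
    M = np.zeros((n, n))
    for v in range(n):
        for w in range(n):
            M[w, v] = 0.5 * (C[u, v, w] + C[w, u, v] + C[w, v, u])
    return M

ad = lambda i: C[i].T  # column j of ad_{e_i} holds the components of [e_i, e_j]

# A vanishes on the derived ideal and equals ad on S:
assert np.allclose(A(1), 0) and np.allclose(A(2), 0)
assert np.allclose(A(0), ad(0))

# Zero curvature: A_{[u,v]} = [A_u, A_v] for all basis vectors.
for u in range(n):
    for v in range(n):
        A_bracket = sum(C[u, v, k] * A(k) for k in range(n))
        assert np.allclose(A_bracket, A(u) @ A(v) - A(v) @ A(u))
print("flat: A_[u,v] = [A_u, A_v] holds on this example")
```

The check exercises exactly the two facts the proof of Proposition 2.1 rests on: the explicit piecewise form of A on S ⊕ [G, G], and the vanishing of the curvature expressed through A.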
A contravariant connection on P, with respect to π, is an R-bilinear map D : Ω^1(P) × Ω^1(P) → Ω^1(P), (α, β) ↦ D_α β, satisfying the following properties:

1. α ↦ D_α β is C^∞(P)-linear, that is, D_{fα} β = f D_α β for all f ∈ C^∞(P).

2. β ↦ D_α β is a derivation, in the sense that D_α(fβ) = f D_α β + π^#(α)(f) β for all f ∈ C^∞(P).

The torsion and the curvature of a contravariant connection D are formally identical to the usual definitions:

T(α, β) = D_α β − D_β α − [α, β]_π  and  K(α, β) = D_α D_β − D_β D_α − D_{[α,β]_π}.

The connection D is called flat if K vanishes identically. Let us now define an interesting class of contravariant connections, namely Levi-Civita contravariant connections. Let (P, π) be a Poisson manifold and ⟨,⟩ a pseudo-Riemannian scalar product on T*P. The metric contravariant connection associated to (π, ⟨,⟩) is the unique contravariant connection D such that D is torsion-free and the metric ⟨,⟩ is parallel with respect to D, i.e.,

π^#(α).⟨β, γ⟩ = ⟨D_α β, γ⟩ + ⟨β, D_α γ⟩.

The connection D is the contravariant analogue of the Levi-Civita connection and can be defined by the Koszul formula:

2⟨D_α β, γ⟩ = π^#(α).⟨β, γ⟩ + π^#(β).⟨α, γ⟩ − π^#(γ).⟨α, β⟩ + ⟨[γ, α]_π, β⟩ + ⟨[γ, β]_π, α⟩ + ⟨[α, β]_π, γ⟩.   (6)

We call D the Levi-Civita contravariant connection associated to (π, ⟨,⟩).

The metacurvature

We now recall the definition of the metacurvature introduced by Hawkins in [9]. Let (P, π) be a Poisson manifold and D a torsion-free and flat contravariant connection with respect to π. In [9], Hawkins showed that such a connection defines a bracket {,} on the space of differential forms Ω*(P) such that:

1. {,} is R-bilinear, of degree 0 and antisymmetric, i.e., {σ, ρ} = −(−1)^{deg σ deg ρ} {ρ, σ}.

2. The differential d is a derivation with respect to {,}, i.e., d{σ, ρ} = {dσ, ρ} + (−1)^{deg σ} {σ, dρ}.

3. {,} satisfies the product rule {σ, ρ ∧ λ} = {σ, ρ} ∧ λ + (−1)^{deg σ deg ρ} ρ ∧ {σ, λ}.
4. For any f, g ∈ C^∞(P) and for any σ ∈ Ω*(P), the bracket {f, g} coincides with the initial Poisson bracket and {f, σ} = D_{df} σ.

Hawkins called this bracket a generalized Poisson bracket and showed that there exists a (2,3)-tensor M (symmetric in the contravariant indices and antisymmetric in the covariant indices) such that the following assertions are equivalent: the bracket {,} satisfies the graded Jacobi identity, and M vanishes identically. M is called the metacurvature and is given by

M(df, α, β) = {f, {α, β}} − {{f, α}, β} − {{f, β}, α}.   (7)

The connection D is called metaflat if M vanishes identically. The following formulas, due to Hawkins, will be useful later. Indeed, Hawkins pointed out in [9], p. 394, that for any 1-form α parallel with respect to D and any 1-form β, the generalized Poisson bracket of α and β is given by

{α, β} = −D_β dα.   (8)

Thus, one can deduce from (7) that for any parallel 1-forms α, γ and for any 1-form β,

M(α, β, γ) = −D_β D_γ dα.   (9)

To finish this section, we give a useful fully global formula for Hawkins's generalized Poisson bracket of two 1-forms. Let α and β be two 1-forms on a Poisson manifold P endowed with a torsion-free and flat contravariant connection D. One can suppose that β = f dg, where f, g ∈ C^∞(P). Then we have

{α, f dg} = {α, f} ∧ dg + f{α, dg}
= −D_{df} α ∧ dg + f(dD_{dg} α − D_{dg} dα)
= −D_{f dg} dα + dD_{f dg} α − D_{df} α ∧ dg − df ∧ D_{dg} α
= −D_{f dg} dα + dD_{f dg} α − D_α(df ∧ dg) − [df, α]_π ∧ dg − df ∧ [dg, α]_π
= −D_{f dg} dα − D_α(d(f dg)) + dD_{f dg} α − [df, α]_π ∧ dg − df ∧ [dg, α]_π
= −D_{f dg} dα − D_α(d(f dg)) + dD_{f dg} α + [α, d(f dg)]_π.

Thus, for any α, β ∈ Ω^1(P), we have

{α, β} = −D_α dβ − D_β dα + dD_β α + [α, dβ]_π.   (10)

3 Proofs of Theorems 1.1-1.2

Theorem 3.1 (second assertion). 2. If (π, ⟨,⟩) is flat then, identifying G* with the space of left invariant 1-forms, the metacurvature M is given by

M(α, β, γ) = ad_α ad_β ρ(γ) for all α, β, γ ∈ S, and M(α, β, γ) = 0 otherwise,   (11)

where S = {α ∈ G* : ad_α + ad_α^t = 0} and ρ : G* → G* ∧ G* is the dual 1-cocycle.

Proof (of Theorem 3.1).
Note first that in a Poisson-Lie group the Koszul bracket of two left invariant 1-forms is a left invariant 1-form (see [15]) and, if one identifies G* with the space of left invariant 1-forms, the Koszul bracket coincides with the Lie bracket of G*. Throughout this proof, we identify G* with the space of left invariant 1-forms on G.

1. Denote by ⟨,⟩* the left invariant metric on T*G associated to ⟨,⟩ and denote by D the Levi-Civita contravariant connection associated to (π, ⟨,⟩*). Since the Riemannian metric is left invariant, for any α, β, γ ∈ G*, (6) becomes

2⟨D_α β, γ⟩* = ⟨[γ, α]_π, β⟩* + ⟨[γ, β]_π, α⟩* + ⟨[α, β]_π, γ⟩*.   (12)

Hence the restriction of D to G* × G* defines a product on G*. The vanishing of the curvature of D is equivalent to the vanishing of the restriction of the curvature of D to G*. Now, one can deduce from (12) that the vanishing of the restriction of the curvature of D to G* is equivalent to the flatness of the left invariant Riemannian metric associated to ⟨,⟩*_e on any Lie group with G* as Lie algebra, and one can conclude by using Proposition 2.1.

2. Suppose now that (π, ⟨,⟩) is flat and, according to the first part, let G* = S ⊕ [G*, G*] be the orthogonal decomposition, where S = {α ∈ G* : ad_α + ad_α^t = 0} and both S and [G*, G*] are abelian. Let us establish (11). First, one can deduce from (12) that, for any γ ∈ G*,

D_α γ = 0 if α ∈ [G*, G*], and D_α γ = [α, γ]_π = ad_α γ if α ∈ S,   (13)

and moreover, for any α ∈ S, Dα = 0 (i.e., every α ∈ S is parallel).

(a) If α, β, γ ∈ S: since Dα = Dβ = Dγ = 0, we deduce from (9) that M(α, β, γ) = −D_α D_β dγ, which by (13) and ρ = −d equals ad_α ad_β ρ(γ).

(b) If α, γ ∈ S and β ∈ [G*, G*]: since Dα = Dγ = 0, we deduce from (9) and (13) that M(α, β, γ) = −D_α D_β dγ = 0.

(c) If α, β ∈ [G*, G*] and γ ∈ S: at least locally, we have α = Σ_i f_i dg_i, and we deduce from (7) that

M(α, β, γ) = Σ_i f_i {g_i, {β, γ}} − Σ_i f_i {{g_i, β}, γ} − Σ_i f_i {{g_i, γ}, β}.
From (8), we have {β, γ} = −D_γ dβ = 0 and, from (13), {g_i, γ} = D_{dg_i} γ = 0, thus

M(α, β, γ) = −Σ_i f_i {{g_i, β}, γ} = −Σ_i f_i D_{D_{dg_i} β} γ = −D_{D_α β} γ = 0.

(d) For α, β ∈ [G*, G*], the computation of M(α, β, β) is more difficult. First, by comparing M(α, β, β) and [β, [β, dα]_π]_π, we will show that they agree up to sign; next, we will show that [β, [β, dα]_π]_π = 0, and we get the result. Put α = Σ_i f_i dg_i. By using (7), we get

M(α, β, β) = Σ_i f_i {g_i, {α, β}} − 2Σ_i f_i {{g_i, β}, β}
= Σ_i f_i D_{dg_i} {α, β} − 2Σ_i f_i {D_{dg_i} β, β}
= D_α {α, β} − 2Σ_i f_i {D_{dg_i} β, β}
(*) = −2Σ_i f_i {D_{dg_i} β, β}
= −2Σ_i ({f_i D_{dg_i} β, β} + D_{df_i} β ∧ D_{dg_i} β)
= −2{D_α β, β} − 2Σ_i D_{df_i} β ∧ D_{dg_i} β
= −2Σ_i D_{df_i} β ∧ D_{dg_i} β.

In (*) we have used (13) and the fact that {α, β} ∈ ∧²G*, which can be deduced from (10). On the other hand,

[β, [β, dα]_π]_π = Σ_i [β, [β, df_i ∧ dg_i]_π]_π = Σ_i ([β, [β, df_i]_π ∧ dg_i]_π + [β, df_i ∧ [β, dg_i]_π]_π)
= Σ_i ([β, [β, df_i]_π]_π ∧ dg_i + 2[β, df_i]_π ∧ [β, dg_i]_π + df_i ∧ [β, [β, dg_i]_π]_π).

Now, choose an orthonormal basis {α_1, ..., α_n} of G*. For any γ ∈ Ω^1(G), we have γ = Σ_i ⟨γ, α_i⟩* α_i, and

[β, γ]_π = Σ_i π^#(β)·⟨γ, α_i⟩* α_i + Σ_i ⟨γ, α_i⟩* [β, α_i]_π = D_β γ + Σ_i ⟨γ, α_i⟩* [β, α_i]_π.

Hence

[β, [β, γ]_π]_π = [β, D_β γ]_π + Σ_i π^#(β)·⟨γ, α_i⟩* [β, α_i]_π + Σ_i ⟨γ, α_i⟩* [β, [β, α_i]_π]_π
= [β, D_β γ]_π + Σ_i ⟨D_β γ, α_i⟩* [β, α_i]_π
= 2[β, D_β γ]_π − D_β D_β γ
= D_β D_β γ − 2D_{D_β γ} β
= D_β D_β γ − 2D_{[β,γ]_π} β + 2Σ_i ⟨γ, α_i⟩* D_{[β,α_i]_π} β
= D_β D_β γ − 2D_{[β,γ]_π} β
= D_β D_β γ − 2(K(β, γ)β + D_β D_γ β − D_γ D_β β)
= D_β D_β γ − 2D_β D_γ β,

where we used that [β, α_i]_π ∈ [G*, G*], so that (13) gives D_{[β,α_i]_π} β = 0, together with the flatness of D and D_β β = 0. By using this formula, we get

[β, [β, df_i]_π]_π ∧ dg_i = D_β D_β df_i ∧ dg_i − 2D_β D_{df_i} β ∧ dg_i
= D_β(D_β df_i ∧ dg_i) − D_β df_i ∧ D_β dg_i − 2D_β(D_{df_i} β ∧ dg_i) + 2D_{df_i} β ∧ D_β dg_i,

df_i ∧ [β, [β, dg_i]_π]_π = −D_β(D_β dg_i ∧ df_i) + D_β dg_i ∧ D_β df_i + 2D_β(D_{dg_i} β ∧ df_i) − 2D_{dg_i} β ∧ D_β df_i.
On the other hand,

2[β, df_i]_π ∧ [β, dg_i]_π = 2D_β df_i ∧ D_β dg_i − 2D_β df_i ∧ D_{dg_i} β − 2D_{df_i} β ∧ D_β dg_i + 2D_{df_i} β ∧ D_{dg_i} β.

Thus

[β, [β, dα]_π]_π = D_β D_β dα + 2Σ_i D_{df_i} β ∧ D_{dg_i} β − 2Σ_i D_β(D_{df_i} β ∧ dg_i) − 2Σ_i D_β(df_i ∧ D_{dg_i} β)
= D_β D_β dα + 2Σ_i D_{df_i} β ∧ D_{dg_i} β + 2Σ_i D_β([β, df_i]_π ∧ dg_i) − 2Σ_i D_β(D_β df_i ∧ dg_i) + 2Σ_i D_β(df_i ∧ [β, dg_i]_π) − 2Σ_i D_β(df_i ∧ D_β dg_i)
= −D_β D_β dα − M(α, β, β) + 2D_β [β, dα]_π
= −M(α, β, β).

Now, since [G*, G*] is abelian and β ∈ [G*, G*], we have [β, [β, dα]_π]_π = 0. This completes the proof.

Before giving a proof of Theorem 1.2, let us first show that, in the general case, condition (2) is a necessary condition; this is the content of Proposition 3.1.

Proof (of Proposition 3.1). The proof is based on the Koszul formula [10], valid for any vector field X and any multivector Q:

i_{[X,Q]} µ = i_X d(i_Q µ) + (−1)^{deg Q} d(i_X i_Q µ) − i_Q d(i_X µ).   (14)

Indeed, if d(i_π µ) = 0 then, for any left invariant vector field X, we get

i_{[X,π]} µ = d(i_X i_π µ) − i_π d(i_X µ).

Now, d(i_X µ) = L_X µ = c µ, where c is a constant, and hence d(i_{[X,π]} µ) = 0. One can conclude by using the fact that [X, π] is left invariant and [X, π](e) = ξ(X_e).

Proof of Theorem 1.2

Proof. Let (G, π) be a connected unimodular Poisson-Lie group and let µ be a left invariant volume form on G. Let ξ be the 1-cocycle associated to π and let (G*, [,]_*, ρ) be the dual Lie bialgebra. For any tensor T on G, we denote by T^+ the corresponding left invariant tensor field on G. Recall that the divergence of a vector field X with respect to µ is the function div_µ X given by L_X µ = (div_µ X)µ. Before giving the proof of the theorem, we need to state some properties of the modular vector field on a Poisson-Lie group. As shown in [16], the operator X_µ : f ↦ div_µ X_f (X_f being the Hamiltonian vector field associated to f) is a derivation and hence a vector field, called the modular vector field of (G, π) with respect to the volume form µ.
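Condition 1 of Theorem 1.2 requires the dual Lie algebra to be unimodular; infinitesimally, unimodularity is the vanishing of all traces tr ad_u, which is also how the modular form κ(α) = tr ad_α enters the proof below. A small numerical sketch contrasting two standard textbook algebras (used purely as an illustration, not data from the paper):

```python
import numpy as np

def ad_matrices(C):
    """ad_{e_i} built from structure constants C[i, j, k]
    (C[i, j, k] = coefficient of e_k in [e_i, e_j])."""
    return [C[i].T for i in range(C.shape[0])]

def is_unimodular(C):
    # A Lie algebra is unimodular iff tr(ad_u) = 0 for every u,
    # which it suffices to test on a basis.
    return all(abs(np.trace(M)) < 1e-12 for M in ad_matrices(C))

# Heisenberg algebra: [e1, e2] = e3 -> all traces vanish, unimodular.
Heis = np.zeros((3, 3, 3))
Heis[0, 1, 2], Heis[1, 0, 2] = 1.0, -1.0
assert is_unimodular(Heis)

# Affine algebra of the line: [e1, e2] = e2 -> tr ad_{e1} = 1, not unimodular.
Aff = np.zeros((2, 2, 2))
Aff[0, 1, 1], Aff[1, 0, 1] = 1.0, -1.0
assert not is_unimodular(Aff)
print("Heisenberg: unimodular; affine line: not unimodular")
```

The same trace test reappears in the 4-dimensional classification below, where tr ad_{e*_1} = 2b and tr ad_{e*_2} = 2d separate the unimodular and non-unimodular cases.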
It is well-known (see [16]) that X_µ is given by

d(i_π µ) = i_{X_µ} µ.   (15)

We define the modular form κ : G* → R by

κ(α) = tr ad_α,   (16)

where ad_α β = [α, β]_*. The modular form κ, which lies in G**, defines a vector in G, also denoted by κ. We have

X_µ(e) = κ.   (17)

Indeed, choose a scalar product ⟨,⟩ on G and an orthonormal basis (u_1, ..., u_n) of (G, ⟨,⟩), and denote by (α_1, ..., α_n) its dual basis. We have π = Σ_{i<j} π_{ij} u_i^+ ∧ u_j^+, and the Hamiltonian vector field associated to f ∈ C^∞(G) is given by

X_f = Σ_{j=1}^n Σ_{i=1}^n π_{ij} ⟨df, α_i^+⟩* u_j^+.

We have

X_µ(f) = div_µ(Σ_{j=1}^n Σ_{i=1}^n π_{ij} ⟨df, α_i^+⟩* u_j^+) = Σ_{j=1}^n Σ_{i=1}^n π_{ij} u_i^+(f) div_µ u_j^+ + Σ_{j=1}^n Σ_{i=1}^n u_j^+(π_{ij} u_i^+(f)).

Now, since π_{ij}(e) = 0 for all i, j = 1, ..., n and, because G is unimodular, div_µ u_j^+ = 0 for j = 1, ..., n, we get

X_µ(e) = Σ_{i=1}^n (Σ_{j=1}^n (L_{u_j^+} π)(α_i^+, α_j^+)(e)) u_i,

and

⟨α_i, X_µ(e)⟩ = Σ_{j=1}^n (L_{u_j^+} π)(α_i^+, α_j^+)(e) = Σ_{j=1}^n ⟨α_i ∧ α_j, ξ(u_j)⟩ = Σ_{j=1}^n [α_i, α_j]_*(u_j)
= Σ_{j=1}^n Σ_{k=1}^n [α_i, α_j]_*(u_k) ⟨α_k, α_j⟩* = Σ_{j=1}^n ⟨[α_i, α_j]_*, α_j⟩* = tr ad_{α_i} = κ(α_i),

and (17) is established.

Now we show that X_µ − κ^+ is a multiplicative vector field. Indeed, by applying (14) and using L_X µ = 0, we get

i_{[X, X_µ]} µ = i_X d(i_{X_µ} µ) − d(i_X i_{X_µ} µ) − i_{X_µ} d(i_X µ) = −d(i_X i_{X_µ} µ),
i_{[X, π]} µ = i_X d(i_π µ) + d(i_X i_π µ) − i_π d(i_X µ) = i_X i_{X_µ} µ + d(i_X i_π µ).

Thus

d(i_{[X,π]} µ) = −i_{[X, X_µ]} µ.   (18)

Since [X, π] and µ are left invariant, we deduce from (18) that [X, X_µ] is also left invariant. Moreover, [X, X_µ − κ^+] = [X, X_µ] − [X, κ^+] is left invariant and, since X_µ(e) = κ^+(e), we deduce that X_µ − κ^+ is a multiplicative vector field. Thus X_µ = X_m + κ^+, where X_m is a multiplicative vector field. To complete the proof, note that X_µ = 0 if and only if κ = 0 and X_m = 0; the first condition means that (G*, [,]_*) is unimodular, and the second is equivalent to [X, X_m](e) = 0 for every left invariant vector field X.
Now, the last condition is equivalent, according to (18), to ρ(i_{ξ(u)} µ) = 0 for any u ∈ G.

4 Examples

This section is devoted to the determination of Riemannian Poisson-Lie groups satisfying Hawkins's conditions in the linear case and in dimensions 2, 3 and 4. According to Corollary 1.1, the triple (G*, π^ℓ, ⟨,⟩*) satisfies Hawkins's conditions. It is easy to show that there exists a family of constants (a_{ij})_{1≤i,j≤q} such that (G*, π^ℓ, ⟨,⟩*) is isomorphic to (R^{q+2r}, π_0, ⟨,⟩_0), where ⟨,⟩_0 is the canonical Euclidean metric and

π_0 = Σ_{i=1}^r (a_{1i} ∂_{x_1} + ... + a_{qi} ∂_{x_q}) ∧ (y_{2i} ∂_{y_{2i−1}} − y_{2i−1} ∂_{y_{2i}}).

We are looking for the 1-cocycles ρ : H → H ∧ H defining a Lie bialgebra structure on H and satisfying (1) and (2). Put

ρ(e_1) = a e_1 ∧ e_2 + b e_1 ∧ e_3 + c e_2 ∧ e_3.

Condition (1) is equivalent to ad_{e_1} ∘ ad_{e_1} ρ(e_1) = 0. We have

ad_{e_1} ρ(e_1) = aλ e_1 ∧ e_3 − bλ e_1 ∧ e_2,

and hence

ad_{e_1} ∘ ad_{e_1} ρ(e_1) = −aλ² e_1 ∧ e_2 − bλ² e_1 ∧ e_3.

Thus ρ satisfies (1) if and only if a = b = 0, that is, ρ(e_1) = c e_2 ∧ e_3. We now consider H* endowed with the bracket associated to ρ, the dual scalar product, and the dual of the bracket on H, ξ : H* → H* ∧ H*, given by

ξ(e*_1) = 0, ξ(e*_2) = −λ e*_1 ∧ e*_3 and ξ(e*_3) = λ e*_1 ∧ e*_2,   (21)

where (e*_1, e*_2, e*_3) is the dual basis of (e_1, e_2, e_3). The bracket on H* associated to ρ is given by (20).
Let (G, π, , ) be a connected and simply connected 3dimensional Riemannian Poisson-Lie group. If (π, , ) satisfies Hawkins's conditions then (G, π, , ) is isomorphic to: 1. (R 3 , π, , ) where R 3 is endowed with its abelian Lie group structure, , is the canonical Euclidian metric and π = λ∂ x ∧ (z∂ y − y∂ z ), where λ ∈ R * or, 2. (H 3 , π, , ) where H 3 =      1 x z 0 1 y 0 0 1   , x, y, z, ∈ R 3    and π = λ(x∂ y − y∂ x ) ∧ ∂ z , , = dx 2 + dy 2 + a(dz − xdy) 2 , where λ ∈ R * and a > 0. The 4-dimensional case In this paragraph we will determine, up to isomorphism, all the 4-dimensional Riemannian Poisson-Lie groups satisfying Hawkins's conditions. According to Theorems 1.1-1.2 Proposition 3.1, the first step is to determine all the Lie bialgebra structures on 4-dimensional Milnor Lie algebras satisfying (1) and (2). Let H be a 4-dimensional Milnor Lie algebra. By virtue of (19), there exists non nul real numbers λ 1 , λ 2 and an orthonormal basis (s 1 , s 2 , f 1 , f 2 ) of H such that We are looking for the 1-cocycles ρ : H −→ H∧H defining a Lie bialgebra structure on H and satisfying (1) and (2). Put ρ(e i ) = a i e 1 ∧ e 2 + b i e 1 ∧ f 1 + c i e 1 ∧ f 2 + d i e 2 ∧ f 1 + f i e 2 ∧ f 2 + g i f 1 ∧ f 2 . We have ad e 2 ρ(e i ) = b i e 1 ∧ f 2 − c i e 1 ∧ f 1 + d i e 2 ∧ f 2 − f i e 2 ∧ f 1 , ad e 2 • ad e 2 ρ(e i ) = −b i e 1 ∧ f 1 − c i e 1 ∧ f 2 − d i e 2 ∧ f 1 − f i e 2 ∧ f 2 . Thus ρ satisfies (1) if and only if, for i = 1, 2, ρ(e i ) = α i e 1 ∧ e 2 + β i f 1 ∧ f 2 . Now, put ρ(f i ) = a i e 1 ∧ e 2 + b i e 1 ∧ f 1 + c i e 1 ∧ f 2 + d i e 2 ∧ f 1 + g i e 2 ∧ f 2 + h i f 1 ∧ f 2 , and write down the cocycle condition ρ( [u, v] ) = ad u ρ(v) − ad v ρ(u). First, we get ρ([f 1 , f 2 ]) = −a 2 e 1 ∧ f 2 − d 2 f 2 ∧ f 1 − a 1 e 1 ∧ f 1 − g 1 f 1 ∧ f 2 = 0, thus a 1 = a 2 = 0 and d 2 − g 1 = 0. 
On the other hand,

ρ([e_1, f_1]) = α_1 e_1∧f_2 = 0,
ρ([e_2, f_1]) = b_1 e_1∧f_2 − c_1 e_1∧f_1 + d_1 e_2∧f_2 − g_1 e_2∧f_1 + α_2 e_1∧f_2 = ρ(f_2),
ρ([e_1, f_2]) = −α_1 e_1∧f_1 = 0,
ρ([e_2, f_2]) = b_2 e_1∧f_2 − c_2 e_1∧f_1 + d_2 e_2∧f_2 − g_2 e_2∧f_1 − α_2 e_1∧f_1 = −ρ(f_1).

These relations are equivalent to b_2 = −c_1, c_2 = b_1, d_2 = −g_1, g_2 = d_1 = α_i = h_i = 0. Hence, ρ is a 1-cocycle satisfying (1) if and only if

ρ(e_i) = β_i f_1∧f_2, ρ(f_1) = b e_1∧f_1 + c e_1∧f_2 + d e_2∧f_1, ρ(f_2) = −c e_1∧f_1 + b e_1∧f_2 + d e_2∧f_2.

We consider now H* endowed with the bracket associated to ρ, the dual scalar product and the dual of the bracket on H, ξ : H* → H* ∧ H*, given by

ξ(e*_1) = ξ(e*_2) = 0, ξ(f*_1) = −e*_2∧f*_2, ξ(f*_2) = e*_2∧f*_1, (25)

where (e*_1, e*_2, f*_1, f*_2) is the dual basis of (e_1, e_2, f_1, f_2). The bracket on H* associated to ρ is given by (24). Note that tr ad_{e*_1} = 2b, tr ad_{e*_2} = 2d, tr ad_{f*_1} = tr ad_{f*_2} = 0.

...when u = (x, y, z, t) and v = (x′, y′, z′, t′), and π_0 = ∂_x ∧ (z∂_t − t∂_z) and ⟨ , ⟩_0 = dx² + a dy² + dz² + dt², where a > 0.

(e) If c = 0, β_2 = 0 and β_1 ≠ 0, then (G, π, ⟨ , ⟩) is isomorphic to (R² × C, π_0, ⟨ , ⟩_0), where R² × C is endowed with the structure of oscillator group given by

(t, s, z).(t′, s′, z′) = (t + t′, s + s′ + ½ Im(z̄ exp(it) z′), z + exp(it) z′),

and π_0 = ∂_s ∧ (x∂_y − y∂_x), ⟨ , ⟩_0 = a dt² + b ds² + ds(y dx − x dy) + ¼(y dx − x dy)², where a > 0 and b > 0.

(f) If c = 0 and β_2 ≠ 0, then (G, π, ⟨ , ⟩) is isomorphic to (R × G_0, π_0, ⟨ , ⟩_0), where R × G_0 is the direct product of the abelian group R with G_0, where G_0 is either SU(2) or SL(2, R); then π = ∂_t ∧ (E_1^+ − E_1^−), where E_1^+ (resp. E_1^−) is the left invariant (resp. right invariant) vector field associated to E_1.
On the other hand, ⟨ , ⟩_0 is the left invariant Riemannian metric on R × G_0 whose value at the identity has the following matrix in the basis {E_0, E_1, E_2, E_3}.

2. The non-unimodular case: b ≠ 0. In this case (G, π, ⟨ , ⟩) is isomorphic to (R⁴, π_0, ⟨ , ⟩_0), where R⁴ is endowed with the Lie group structure given by

uv = (x + x′, y + y′, z + e^{xb}(z′ cos(xc) + t′ sin(xc)), t + e^{xb}(−z′ sin(xc) + t′ cos(xc)))

when u = (x, y, z, t) and v = (x′, y′, z′, t′), π_0 = ∂_y ∧ (z∂_t − t∂_z) and ⟨ , ⟩_0 = dx² + dy² + e^{−2bx}(dz² + dt²). The Riemannian volume is given by µ = e^{−2bx} dx∧dy∧dz∧dt, and i_π µ = −e^{−2bx}(z dx∧dz + t dx∧dt). Thus d(i_π µ) = 0, and the third Hawkins's condition is satisfied.

Lemma 2.1. Let (G, ⟨ , ⟩) be a Lie group with a left invariant Riemannian metric. If the sectional curvature of ⟨ , ⟩ is nonpositive, then the Lie subalgebra S = {u ∈ G : ad_u + ad_u^t = 0} is abelian.

Proposition 2.1. Let (G, ⟨ , ⟩) be a Lie group endowed with a left invariant Riemannian metric. Then the curvature of ⟨ , ⟩ vanishes if and only if the Lie algebra G of G endowed with the scalar product ⟨ , ⟩_e is a Milnor Lie algebra.

Theorem 1.1 is an immediate consequence of the following result.

Theorem 3.1. Let (G, π, ⟨ , ⟩) be a Riemannian Poisson-Lie group. Then: 1. (π, ⟨ , ⟩) is flat if and only if the dual Lie algebra (G*, ⟨ , ⟩*) is a Milnor Lie algebra.

Proposition 3.1. Let (G, π) be a Poisson-Lie group and let µ be a left invariant volume form on G. If d(i_π µ) = 0 then (2) holds.

The linear case. Let G = S ⊕ [G, G] be a Milnor Lie algebra. Since S is abelian and acts on [G, G] by skew-symmetric endomorphisms, there exist a family of nonzero vectors u_1, ..., u_r ∈ S and an orthonormal basis (f_1, ..., f_{2r}) of [G, G] such that, for any j = 1, ..., r and for all s ∈ S, [s, f_{2j−1}] = ⟨s, u_j⟩ f_{2j} and [s, f_{2j}] = −⟨s, u_j⟩ f_{2j−1}.
The 2-dimensional case. According to Theorems 1.1-1.2, and since any 2-dimensional Milnor Lie algebra is abelian, a 2-dimensional connected and simply connected Riemannian Poisson-Lie group (G, π, ⟨ , ⟩) satisfies Hawkins's conditions if and only if the Poisson tensor is trivial.

The 3-dimensional case. In this paragraph we determine, up to isomorphism, all the 3-dimensional connected and simply connected Riemannian Poisson-Lie groups satisfying Hawkins's conditions. According to Theorems 1.1-1.2 and Proposition 3.1, the first step is to determine all the Lie bialgebra structures on 3-dimensional Milnor Lie algebras satisfying (1) and (2). Let H be a 3-dimensional Milnor Lie algebra. By virtue of (19), there exist a real number λ ≠ 0 and an orthonormal basis (e_1, e_2, e_3) of H such that

[e_2, e_3] = 0, [e_1, e_2] = λe_3 and [e_1, e_3] = −λe_2.

As seen above, ρ satisfies (1) if and only if ρ(e_1) = c e_2∧e_3. Now put

ρ(e_2) = a_1 e_1∧e_2 + b_1 e_1∧e_3 + c_1 e_2∧e_3, ρ(e_3) = a_2 e_1∧e_2 + b_2 e_1∧e_3 + c_2 e_2∧e_3,

and write down the cocycle condition ρ([u, v]) = ad_u ρ(v) − ad_v ρ(u). We get

ρ([e_2, e_3]) = −λa_2 e_3∧e_2 − λb_1 e_2∧e_3 = λ(a_2 − b_1) e_2∧e_3 = 0,
ρ([e_1, e_2]) = λ(a_1 e_1∧e_3 − b_1 e_1∧e_2) = λρ(e_3),
ρ([e_1, e_3]) = λ(a_2 e_1∧e_3 − b_2 e_1∧e_2) = −λρ(e_2).

These relations are equivalent to b_1 = a_2 = c_1 = c_2 = 0 and a_1 = b_2. Thus ρ is a 1-cocycle satisfying (1) if and only if ρ(e_1) = c e_2∧e_3, ρ(e_2) = a e_1∧e_2 and ρ(e_3) = a e_1∧e_3.

Proposition 4.1. Let (G, π, ⟨ , ⟩) be a 3-dimensional connected and simply connected Riemannian Poisson-Lie group and let (G, ξ, ⟨ , ⟩_e) be its Lie algebra endowed with the cocycle ξ associated to π and the value of the Riemannian metric at the identity. Then (G, π, ⟨ , ⟩) satisfies Hawkins's conditions if and only if the triple (G, ξ, ⟨ , ⟩_e) is isomorphic to one of the following triples: 1.
(R³, ξ_0, ⟨ , ⟩_0), where R³ is endowed with its abelian Lie algebra structure, ξ_0 is given by ξ_0(e_1) = 0, ξ_0(e_2) = −λ e_1∧e_3 and ξ_0(e_3) = λ e_1∧e_2, λ ≠ 0, and ⟨ , ⟩_0 is the canonical Euclidean scalar product on R³. 2. (H_3, ξ_0, ⟨ , ⟩_0), where H_3 is the Heisenberg Lie algebra, ξ_0 is given by ξ_0(e_3) = 0, ξ_0(e_1) = −λ e_3∧e_2 and ξ_0(e_2) = λ e_3∧e_1, λ ≠ 0, and ⟨ , ⟩_0 is the scalar product on H_3 given by its matrix in the basis (e_1, e_2, e_3).

[s_1, s_2] = [f_1, f_2] = 0, [s_i, f_1] = λ_i f_2 and [s_i, f_2] = −λ_i f_1. Put e_1 = (λ_2 s_1 − λ_1 s_2)/‖λ_2 s_1 − λ_1 s_2‖. Then there exists e_2 ∈ S such that (e_1, e_2, f_1, f_2) is an orthogonal basis, [e_2, f_1] = f_2, [e_2, f_2] = −f_1, and all the other brackets vanish. Note that ‖e_1‖ = ‖f_1‖ = ‖f_2‖ = 1.

Theorem 1.1. Let (G, π, ⟨ , ⟩) be a Riemannian Poisson-Lie group and (G*, [ , ]*, ρ) its dual Lie bialgebra. Then (π, ⟨ , ⟩) is flat and metaflat if and only if:

Theorem 1.2. Let (G, π) be a connected and unimodular Poisson-Lie group and let µ be a left invariant volume form on G. Then d(i_π µ) = 0 if and only if:

1. (G*, [ , ]*, ⟨ , ⟩*_e) is a Milnor Lie algebra,
2. for any α, β, γ ∈ S = {α ∈ G* : ad_α + ad_α^t = 0}, ad_α ad_β ρ(γ) = 0. (1)

Corollary 1.1. Let (G, ⟨ , ⟩) be a Lie algebra endowed with a scalar product and denote by π_ℓ the canonical linear Poisson structure on G*. Then (π_ℓ, ⟨ , ⟩*) satisfies Hawkins's conditions if and only if (G, ⟨ , ⟩) is a Milnor Lie algebra.

There are some interesting consequences of Theorems 1.1-1.2:

1. The classification of connected and simply connected Riemannian Poisson-Lie groups which are flat and metaflat is equivalent to the classification of the Lie bialgebra structures on Milnor Lie algebras for which (1) holds.
2. The classification of unimodular connected and simply connected Riemannian Poisson-Lie groups satisfying Hawkins's conditions is equivalent to the classification of the Lie bialgebra structures on Milnor Lie algebras for which (1) and (2) hold.
3.
The Lie bialgebra structures on Milnor Lie algebras of dimension ≤ 4 can be computed (see Section 4), and hence the Riemannian Poisson-Lie groups of dimension ≤ 4 satisfying Hawkins's conditions can be deduced (see Theorem 4.1 and the paragraph devoted to the 4-dimensional case in Section 4).

1. The generalized Poisson bracket satisfies the graded Jacobi identity
{{σ, ρ}, λ} = {σ, {ρ, λ}} − (−1)^{deg σ deg ρ} {ρ, {σ, λ}}.
2. The tensor M vanishes identically.

...or SL(2, R), and if {E_1, E_2, E_3} is a basis of the Lie algebra of G_0 satisfying [E_1, E_2] = E_3, [E_3, E_1] = E_2 and [E_2, E_3] = ±E_1.

Acknowledgement. Amine Bahayou would like to thank Philippe Monnier for very useful discussions and the Emile Picard Laboratory at Paul Sabatier University of Toulouse (France) for hospitality, where a part of this work was done.

The Jacobi identities are given by: Let us write down (2). Since µ = e_1∧e_2∧f_1∧f_2 and by virtue of (25), a straightforward computation using (24) gives the corresponding conditions. The following proposition summarizes all the computation above. The task now is the construction of the triples (G, π, ⟨ , ⟩) associated to the different models isomorphic to the triple (R⁴, ξ_0, ⟨ , ⟩_0) given in Proposition 4.2. The computation is very long so we omit it. Note that the determination of the Lie groups is easy since all the models of Lie algebras are products or semi-direct products.
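The 1-cocycle computations above, e.g., for the 3-dimensional Milnor algebra with [e_1, e_2] = λe_3, [e_1, e_3] = −λe_2 and [e_2, e_3] = 0, lend themselves to a numerical sanity check. The sketch below is our own illustration (the values of λ, a, c are arbitrary nonzero choices, not from the paper): it represents H∧H in coordinates and confirms that ρ(e_1) = c e_2∧e_3, ρ(e_2) = a e_1∧e_2, ρ(e_3) = a e_1∧e_3 satisfies the cocycle identity ρ([u, v]) = ad_u ρ(v) − ad_v ρ(u).

```python
import numpy as np

lam = 2.0          # the nonzero structure constant lambda (any nonzero value; our choice)
a, c = 0.7, 1.3    # free parameters of the cocycle family (our choice)

def bracket(x, y):
    """Lie bracket of two vectors in the basis (e1, e2, e3):
    [e1,e2] = lam*e3, [e1,e3] = -lam*e2, [e2,e3] = 0."""
    B = np.zeros((3, 3, 3))
    B[0, 1] = [0.0, 0.0, lam]
    B[0, 2] = [0.0, -lam, 0.0]
    B[1, 0] = -B[0, 1]
    B[2, 0] = -B[0, 2]
    return np.einsum('i,j,ijk->k', x, y, B)

PAIRS = [(0, 1), (0, 2), (1, 2)]   # basis e1^e2, e1^e3, e2^e3 of H^H

def wedge(x, y):
    """Coordinates of x ^ y in the wedge basis above."""
    return np.array([x[i] * y[j] - x[j] * y[i] for i, j in PAIRS])

def ad_wedge(u, w):
    """Extension of ad_u to H^H by the Leibniz rule."""
    e = np.eye(3)
    out = np.zeros(3)
    for k, (i, j) in enumerate(PAIRS):
        out += w[k] * (wedge(bracket(u, e[i]), e[j]) + wedge(e[i], bracket(u, e[j])))
    return out

# rho(e1) = c e2^e3, rho(e2) = a e1^e2, rho(e3) = a e1^e3, rows in wedge coordinates
rho = np.array([[0.0, 0.0, c],
                [a,   0.0, 0.0],
                [0.0, a,   0.0]])

def rho_of(x):
    return x @ rho

# check rho([u,v]) = ad_u rho(v) - ad_v rho(u) on all basis pairs
e = np.eye(3)
max_err = max(
    np.abs(rho_of(bracket(e[i], e[j]))
           - ad_wedge(e[i], rho_of(e[j])) + ad_wedge(e[j], rho_of(e[i]))).max()
    for i, j in PAIRS)
```

The check reduces to the three bracket relations computed in the text; `max_err` vanishes up to floating-point error for every choice of λ, a, c.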
The determination of the multiplicative Poisson tensor from the 1-cocycle is a direct calculation using the method exposed in [6], Theorem 5.1.3.

(a) (G, π, ⟨ , ⟩) is isomorphic to (R⁴, π_0, ⟨ , ⟩_0), where R⁴ is endowed with its abelian Lie group structure and π_0 = ∂_x ∧ (z∂_t − t∂_z) and ⟨ , ⟩_0 = dx² + dy² + dz² + dt².

(b) If c = 0 and β_1 = 0, then (G, π, ⟨ , ⟩) is isomorphic to (H_0, π_0, ⟨ , ⟩_0), with ⟨ , ⟩_0 = (x^{−1}dx + β_2 dt − β_2 y dz)² + dy² + dz² + β_1²(dt − y dz)².

(c) If c = 0, β_1 = 0 and β_2 = 0, then (G, π, ⟨ , ⟩) is isomorphic to (H_0, π_0, ⟨ , ⟩_0), with ⟨ , ⟩_0 = (1/x²) dx² + dy² + dz² + β_2²(dt − y dz)².

(d) If c = 0 and (β_1, β_2) = (0, 0), then (G, π, ⟨ , ⟩) is isomorphic to (R⁴, π_0, ⟨ , ⟩_0), where R⁴ is endowed with the Lie group structure given by

u.v = (x + x′, y + y′, z + z′ cos y + t′ sin y, t − z′ sin y + t′ cos y).

References

[1] A. Bahayou and M. Boucetta, Multiplicative noncommutative deformations of left invariant Riemannian metrics on Heisenberg groups, C. R. Acad. Sci. Paris, Série I 347 (2009), 791-796.
[2] A. L. Besse, Einstein Manifolds, Springer-Verlag, 2002.
[3] M. Boucetta, Compatibilité des structures pseudo-riemanniennes et des structures de Poisson, C. R. Acad. Sci. Paris, Série I 333 (2001), 763-768.
[4] M. Boucetta, Solutions of the classical Yang-Baxter equation and noncommutative deformations, Letters in Mathematical Physics 83 (2008), 69-81.
[5] V. G. Drinfel'd, Hamiltonian structures on Lie groups, Lie bialgebras and the geometric meaning of the classical Yang-Baxter equations, Sov. Math. Dokl. 27 (1) (1983), 68-71.
[6] J. P. Dufour and N. T. Zung, Poisson Structures and Their Normal Forms, Progress in Mathematics, vol. 242, Birkhäuser, Basel, 2005.
[7] R. L. Fernandes, Connections in Poisson geometry. I. Holonomy and invariants, J. Differential Geom. 54 (2000), no. 2, 303-365.
[8] E. Hawkins, Noncommutative rigidity, Commun. Math. Phys. 246 (2004), 211-235.
[9] E. Hawkins, The structure of noncommutative deformations, J. Diff. Geom. 77 (2007), 385-424.
[10] J. L. Koszul, Crochet de Schouten-Nijenhuis et cohomologie, Astérisque (1985), Numéro Hors Série, 257-271.
[11] J. H. Lu and A. Weinstein, Poisson Lie groups, dressing transformations and Bruhat decompositions, J. Diff. Geom. 31 (1990), 501-526.
[12] J. Milnor, Curvature of left invariant metrics on Lie groups, Adv. in Math. 21 (1976), 283-329.
[13] M. A. Semenov-Tian-Shansky, Dressing transformations and Poisson Lie group actions, Publ. RIMS, Kyoto University 21 (1985), 1237-1260.
[14] I. Vaisman, Lectures on the Geometry of Poisson Manifolds, Progress in Mathematics, vol. 118, Birkhäuser, Berlin, 1994.
[15] A. Weinstein, Some remarks on dressing transformations, J. Fac. Sci. Univ. Tokyo, Sect. 1A, Math. 36 (1988), 163-167.
[16] A. Weinstein, The modular automorphism group of a Poisson manifold, J. Geom. Phys. 23 (1997), 379-394.

Amine BAHAYOU, Université Kasdi Merbah, B.P. 511 Route de Ghardaïa, 30000 Ouargla, Algeria. [email protected]
Mohamed BOUCETTA, Faculté des Sciences et Techniques Gueliz, B.P. 549, Marrakech, Maroc. [email protected]
Efficient Approximation Algorithms for Spanning Centrality

Shiqi Zhang ([email protected]), Renchi Yang ([email protected]), Jing Tang ([email protected]), Xiaokui Xiao ([email protected]), Bo Tang ([email protected])

Affiliations: National University of Singapore, Singapore; Hong Kong Baptist University, China; The Hong Kong University of Science and Technology (Guangzhou), China; The Hong Kong University of Science and Technology, China; Southern University of Science and Technology, China

Keywords: spanning centrality, graph traversal, random walk, eigenvector

Venue: KDD

Abstract: Given a graph G, the spanning centrality (SC) of an edge e measures the importance of e for G to be connected. In practice, SC has seen extensive applications in computational biology, electrical networks, and combinatorial optimization. However, it is highly challenging to compute the SC of all edges (AESC) on large graphs. Existing techniques fail to deal with such graphs, as they either suffer from expensive matrix operations or require sampling numerous long random walks. To circumvent these issues, this paper proposes TGT and its enhanced version TGT+, two algorithms for AESC computation that offer rigorous theoretical approximation guarantees. In particular, TGT remedies the deficiencies of previous solutions by conducting deterministic graph traversals with carefully-crafted truncated lengths. TGT+ further advances TGT in terms of both empirical efficiency and asymptotic performance while retaining result quality, based on the combination of TGT with random walks and several additional heuristic optimizations. We experimentally evaluate TGT+ against recent competitors for AESC on a variety of real datasets. The experimental outcomes authenticate that TGT+ outperforms the state of the art, often by over one order of magnitude speedup, without degrading the accuracy.

DOI: 10.48550/arxiv.2305.16086
PDF: https://export.arxiv.org/pdf/2305.16086v2.pdf
Corpus ID: 258888070
arXiv: 2305.16086
KDD '23, August 6-August 10, 2023, Long Beach, USA

* This work was partially done while at National University of Singapore.
† Corresponding authors: Xiaokui Xiao and Bo Tang.

ACM ISBN 978-1-4503-XXXX-X/18/06. $15.00. https://doi.org/XXXXXXX.XXXXXXX

CCS CONCEPTS: • Mathematics of computing → Approximation algorithms; Markov-chain Monte Carlo methods; Graph algorithms.

INTRODUCTION

Edge centrality is a graph-theoretic notion measuring the importance of each edge in a graph, which plays a vital role in analyzing social, sensor, and transportation networks [5, 11, 32, 37]. As pinpointed by Mavroforakis et al. [29], compared to the classic edge betweenness [9] based on shortest paths, spanning centrality (SC) [40] is a more suitable centrality measure for edges, as it accommodates information from longer paths. In particular, given a connected undirected graph G, the SC c(e) of an edge e is defined as the fraction of spanning trees of G (tree-structured subgraphs of G that include all nodes) that contain e. In simpler terms, c(e) measures how crucial the edge e is for G to remain connected, and hence can be used to identify vulnerable edges in G. Such a definition renders SC useful in infrastructure networks like electrical grids that require maintaining connectivity, i.e., stability and robustness against failures [3, 12]. In addition, SC also finds extensive applications in both practical and theoretical fields, including phylogenetics [40], graph sparsification [39], electric circuit analysis [10, 36], and combinatorial optimization [4, 18], to name a few. Despite its usefulness, the problem of computing the SC values of all edges (AESC) in G remains challenging.
To explain, let n and m be the number of nodes and edges in the graph G, respectively. The graph G can have exponentially many spanning trees in the worst case; hence, the exact AESC computation by enumerating all spanning trees is infeasible. The best-known algorithm [40] for exact AESC computation is based on Kirchhoff's matrix-tree theory [13, 42], which bears more-than-quadratic time and is thus prohibitive for massive graphs. To cope with this challenge, a series of approximation algorithms [14, 29, 34, 39] for AESC have been developed in recent years. Given an absolute error threshold ε, existing solutions focus on calculating an estimated SC ĉ(u,v) for each edge (u,v), with at most ε absolute error in it. Although these methods allow us to trade result accuracy for execution time, they are still rather computationally expensive when G is sizable and ε is small. Spielman and Srivastava [39] propose to approximate AESC via its equivalent matrix-based definition, leading to Õ(m/ε²) time in total. In the follow-up work [29], Mavroforakis et al. develop a fast implementation by incorporating a suite of heuristic optimizations that considerably elevate its empirical efficiency without compromising its asymptotic performance. However, both methods become impractical when the matrices are high-dimensional and dense (i.e., n and m are large). To sidestep the shortcomings of matrices, Hayashi et al. [14] and Peng et al. [34] capitalize on the idea of using random walks for fast SC estimation, whereas these random walk-based techniques remain Õ(m/ε²) time. Motivated by the deficiencies of existing solutions, this paper presents two approximation algorithms for AESC: TGT and TGT+. At their hearts lie our improved bounds for random walk truncation, which are obtained through a rigorous theoretical analysis and a novel exploitation of the eigenvalues/eigenvectors pertaining to G. Notably, compared to Peng et al.'s bound [34], our bound can achieve orders-of-magnitude reduction in random walk length.
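The matrix-tree route to exact AESC mentioned above can be illustrated on a toy graph. The sketch below is our own illustration (the "kite" graph and the function names are our choices, not the authors' code): it counts spanning trees via a cofactor of the Laplacian and obtains the SC of each edge e as 1 − t(G∖e)/t(G), since the spanning trees not containing e are exactly the spanning trees of G with e removed.

```python
import numpy as np

def num_spanning_trees(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))   # delete row/column 0

# toy "kite" graph: triangle {0,1,2} plus a pendant (bridge) edge (2,3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
t_G = num_spanning_trees(n, edges)

# SC(e) = (# spanning trees containing e) / t(G) = 1 - t(G \ e) / t(G)
sc = {e: 1 - num_spanning_trees(n, [f for f in edges if f != e]) / t_G
      for e in edges}
```

On this graph t(G) = 3; each triangle edge lies in 2 of the 3 spanning trees (SC = 2/3), while the bridge (2,3) lies in all of them (SC = 1), matching the intuition that SC flags edges vital for connectivity.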
Based thereon, TGT (Truncated Graph Traversal) conducts graph traversals, i.e., the deterministic version of random walks, from each node to probe the nodes within the truncated length. In doing so, TGT outperforms the state of the art in the case where the number of random walks needed by those methods exceeds the graph size. To overcome the limitations of TGT on large graphs with high degrees, we further devise TGT+, whose idea is to derive rough estimations of AESC by the graph traversals in TGT and then refine the results using merely a handful of random walks. By including a greedy trade-off strategy and additional optimizations, we orchestrate and optimize the entire TGT+ algorithm for enhanced practical efficiency. On the theoretical side, TGT+ propels approximate AESC computation by improving the asymptotic performance to Õ(n/ε² + m). Our extensive experiments on multiple benchmark graph datasets exhibit that TGT+ is often more than one order of magnitude faster than the state-of-the-art solutions while offering uncompromised or even superior result quality. Notably, on the Twitch dataset with 6.8 million edges, TGT+ can achieve 10⁻⁵ empirical error on average within 17 minutes for AESC, using a single CPU core, whereas the best competitor takes over 10 hours. To summarize, we make the following contributions in this work:

• We derive an improved lower bound for the truncated random walk length and propose a first-cut solution TGT, which estimates AESC using graph traversal operations. (Section 3)
• We develop an optimized solution TGT+, which integrates random walk sampling into TGT in an adaptive manner and improves over TGT in terms of practical efficiency. (Section 4)
• We compare our proposed solutions with 3 competitors on 5 real datasets and demonstrate the superiority of TGT+. (Section 5)

Table 1: Frequently used notations.
n, m: the numbers of nodes and edges in G.
N(u), d(u): the neighbor set and degree of node u.
D, P: the degree and transition matrices of G, respectively.
p_ℓ(u,v): the ℓ-hop TP, i.e., P^ℓ[u,v].
c(u,v), ĉ(u,v): the exact and estimated SC of edge (u,v), respectively.
τ_{u,v}: the truncated length for edge (u,v), defined by Eq. (5).
ε, δ: the absolute error threshold and failure probability.
the number of eigenvectors and candidate nodes, respectively.

PRELIMINARIES

This section sets up the stage for our study by introducing basic notations, the formal problem definition of ε-approximate AESC computation, and the main competitors for AESC approximation.

Notations. Let G = (V, E) be an undirected graph, where V is a set of nodes and E is a set of edges. For each edge (u,v) ∈ E, we say u and v are neighbors to each other, and we use N(u) to denote the set of neighbors of u, where the degree is d(u) = |N(u)|. Throughout this paper, we use a boldface lower-case (resp. upper-case) letter x (resp. M) to represent a vector (resp. matrix), with its i-th element (resp. element at the i-th row and j-th column) denoted as x[i] (resp. M[i,j]). Given G, we denote by A the adjacency matrix of G, where A[u,v] = 1 if (u,v) ∈ E and A[u,v] = 0 otherwise. In addition, we let D be the degree diagonal matrix of G, with diagonal entry D[u,u] = d(u) for each node u ∈ V. Let P = D⁻¹A be the random walk matrix (i.e., transition matrix) of G, in which P[u,v] = 1/d(u) if (u,v) ∈ E and P[u,v] = 0 otherwise. Correspondingly, we denote p_ℓ(u,v) = P^ℓ[u,v], which can be interpreted as the probability that a random walk from node u visits node v at the ℓ-th hop, reflecting the proximity of nodes u and v. We refer to p_ℓ(u,v) as the ℓ-hop TP (transition probability) of v w.r.t. u. In this paper, we assume G is connected and not bipartite. According to [31], the random walks over G are then ergodic, i.e., lim_{ℓ→∞} P^ℓ[u,v] = d(v)/(2m) for all u, v ∈ V. Table 1 lists the notations that are frequently used in this paper.

Problem Definition.

Definition 2.1 (Spanning Centrality [40]). Given an undirected and connected graph G, the SC c(u,v) ∈ (0, 1] of an edge (u,v) is defined as the fraction of spanning trees of G that contain (u,v).
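The notations above, i.e., the adjacency matrix A, the transition matrix P = D⁻¹A, and the ergodic limit, can be checked numerically. The following sketch is our own illustration (the small "kite" graph is our choice; it is connected and non-bipartite, as the paper assumes): it builds P and verifies that P^ℓ[u,v] approaches d(v)/(2m).

```python
import numpy as np

# kite graph: triangle {0,1,2} plus pendant edge (2,3); connected, non-bipartite
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(edges)
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)
P = A / d[:, None]              # P = D^{-1} A, row-stochastic

# l-hop transition probabilities p_l(u, v) = P^l [u, v]
P_l = np.linalg.matrix_power(P, 200)

# ergodicity: P^l [u, v] -> d(v) / (2m) for every pair (u, v)
stationary = d / (2 * m)
gap = np.abs(P_l - stationary[None, :]).max()
```

After a few hundred hops the rows of P^ℓ are numerically indistinguishable from the stationary distribution d(v)/(2m), so `gap` is essentially zero; on a bipartite graph the powers would oscillate instead, which is why the non-bipartiteness assumption matters.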
Definition 2.1 presents the formal definition of SC. Recall that a spanning tree of graph G is a tree that spans all nodes of G. Intuitively, a high SC c(u,v) quantifies how crucial edge (u,v) is for G to ensure connectedness. Since an edge (u,v) with a high SC appears in most spanning trees, all of them fall apart once (u,v) is removed from G. In the extreme case where c(u,v) = 1, G becomes disconnected when (u,v) is excluded. To our knowledge, the state-of-the-art algorithm [40] for computing the exact AESC entails more-than-quadratic time, which is prohibitive for large graphs. Following previous works [14, 34], we focus on ε-approximate all-edge SC (AESC) computation, defined as follows. Particularly, we say an estimated SC ĉ(u,v) is ε-approximate if it satisfies Eq. (1).

Definition 2.2 (ε-Approximate AESC). Given an undirected and connected graph G = (V, E) and an absolute error threshold ε ∈ (0, 1), the ε-approximate AESC computation returns an estimate ĉ(u,v) for every edge (u,v) ∈ E such that

|ĉ(u,v) − c(u,v)| ≤ ε.  (1)

State of the Arts. We briefly revisit four recent techniques for AESC computation: Fast-Tree [29], ST-Edge [14], MonteCarlo and MonteCarlo-C [34]. Other related works on SC will be reviewed later in Section 6.

Fast-Tree. Mavroforakis et al. [29] develop a fast implementation of [39] on the basis of the equivalence between SC and effective resistance (ER) [6] when node pairs are edges. To be more specific, as per [39], the ER values of all edges are the diagonal elements of the matrix R = BL†B⊤, where B and L† are the incidence matrix and the pseudoinverse of the Laplacian matrix of G, respectively. Fast-Tree first employs random projections [1] to reduce the high matrix dimensions and then deploys an SDD solver to solve the linear systems in the low-dimensional space, resulting in a near-linear time complexity.

ST-Edge. Based on Definition 2.1, Hayashi et al.
[14] first sample a sufficient number of random spanning trees by Wilson's algorithm [48], and record the fraction of trees in which edge (u,v) appears as the estimate ĉ(u,v). As proved, the expected time to draw a spanning tree rooted at a random node is governed by the commute times κ(u,v) between pairs of nodes and is O(mn) [31]. Hence, to ensure the ε-approximation of the estimated AESC values, ST-Edge runs in O(mn/ε² · log(m/δ)) time by sampling O(1/ε² · log(m/δ)) spanning trees, rendering it costly when ε is small.

MonteCarlo and MonteCarlo-C. Very recently, Peng et al. [34] theoretically establish another equivalent definition of SC c(u,v):

c(u,v) = Σ_{ℓ=0}^{∞} ( p_ℓ(u,u)/d(u) + p_ℓ(v,v)/d(v) − p_ℓ(u,v)/d(v) − p_ℓ(v,u)/d(u) ).

Thus, the problem is transformed into computing the ℓ-hop TP values of every two nodes in {u, v} for 0 ≤ ℓ ≤ ∞. The crux of MonteCarlo and MonteCarlo-C involves finding a truncated length τ for random walks which ensures |ĉ_τ(u,v) − c(u,v)| ≤ ε/2, where

ĉ_τ(u,v) = Σ_{ℓ=0}^{τ} ( p_ℓ(u,u)/d(u) + p_ℓ(v,v)/d(v) − p_ℓ(u,v)/d(v) − p_ℓ(v,u)/d(u) ).  (2)

Based thereon, MonteCarlo and MonteCarlo-C simulate random walks of length at most τ from u, v to approximate the ℓ-hop TP values such that |ĉ(u,v) − ĉ_τ(u,v)| ≤ ε/2 holds, connoting that ĉ(u,v) is ε-approximate. In particular, Peng et al. [34] provide the following bound for τ to ensure the ε-approximation:

τ ≥ log( 4/(ε(1−λ)) ) / log(1/|λ|) − 1,  (3)

where λ is matrix P's second largest eigenvalue in absolute value. The major distinction between MonteCarlo and MonteCarlo-C lies in the approach to computing the ℓ-hop TP values. Specifically, MonteCarlo simply conducts random walks of length ℓ (1 ≤ ℓ ≤ τ) to approximate the ℓ-hop TP values before aggregating them as the estimated SC. According to the Chernoff-Hoeffding bound, a total time complexity of O(τ³/ε² · log(8τ/δ)) is needed to obtain an ε-approximate SC ĉ(u,v) with success probability at least 1 − δ.
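To see the series definition of SC at work, the sketch below (our own illustration on a toy graph, not code from [34]) evaluates the truncated sum ĉ_τ(u,v) by iterating powers of P and compares it against the exact SC obtained from the Laplacian pseudoinverse, using the fact that the SC of an edge equals its effective resistance.

```python
import numpy as np

# kite graph: triangle {0,1,2} plus pendant edge (2,3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)
P = A / d[:, None]

def truncated_sc(u, v, tau):
    """c_tau(u,v) = sum_{l=0}^{tau} p_l(u,u)/d(u) + p_l(v,v)/d(v)
                                  - p_l(u,v)/d(v) - p_l(v,u)/d(u)."""
    Pl, total = np.eye(n), 0.0
    for _ in range(tau + 1):
        total += (Pl[u, u] / d[u] + Pl[v, v] / d[v]
                  - Pl[u, v] / d[v] - Pl[v, u] / d[u])
        Pl = Pl @ P
    return total

# ground truth: SC of an edge equals its effective resistance,
# r(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)
L = np.diag(d) - A
L_pinv = np.linalg.pinv(L)
def exact_sc(u, v):
    x = np.zeros(n); x[u], x[v] = 1.0, -1.0
    return x @ L_pinv @ x

err = max(abs(truncated_sc(u, v, 200) - exact_sc(u, v)) for u, v in edges)
```

Since the per-hop terms shrink geometrically at rate |λ|, the truncated sum converges quickly; on this graph the triangle edges come out at 2/3 and the bridge at 1, agreeing with the spanning-tree definition.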
By contrast, MonteCarlo-C regards the ℓ-hop TP p_ℓ(u,v) (1 ≤ ℓ ≤ τ) as the collision probability of two random walks of length ℓ/2 from u and v, respectively, and then samples 40000 × (√(ρ_ℓ)/ε + 3ρ_ℓ^{3/2}/ε²) length-(ℓ/2) random walks from the respective nodes, where the parameter ρ_ℓ is a constant depending on the graph structure, which is hard to compute in practice. Notice that both algorithms are originally designed for computing the ER of any node pair in G; they overlook the unique property of edges and thus are not optimized for AESC computation. Moreover, they require an exorbitant number of random walks due to the large τ (up to thousands when ε is small), significantly exacerbating the efficiency issues.

THE TGT ALGORITHM

In this section, we propose TGT, an iterative deterministic graph traversal approach to AESC processing, based on the idea of computing the truncated SC (Eq. (2)) as in MonteCarlo. Particularly, TGT improves over MonteCarlo in two aspects. First and foremost, TGT offers significantly superior edge-wise lower bounds for the truncated lengths by leveraging the well-celebrated theory of Markov chains [47] (Section 3.1). Further, TGT develops a deterministic graph traversal method to remedy the efficiency issue caused by the substantial number of random walks needed in MonteCarlo (Section 3.2).

Improved Bounds for Truncated Lengths

Lemma 3.1. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of the matrix D^{1/2}PD^{−1/2} and let v_1, v_2, ..., v_n be their corresponding normalized eigenvectors. Then, for any two nodes u, v ∈ V and any integer ℓ ≥ 0, we have

p_ℓ(u,v)/d(v) = p_ℓ(v,u)/d(u) = (1/2m) · Σ_{i=1}^{n} f_i[u] · f_i[v] · λ_i^ℓ,  (4)

where f_i = √(2m) · D^{−1/2} v_i for i = 1, 2, ..., n, and f_1 is taken to be the all-one vector 1.

By Lemma 3.1, the ℓ-hop TP p_ℓ(u,v) can be computed based on the eigenvectors and eigenvalues of the matrix D^{1/2}PD^{−1/2}, and hence, the difference between ĉ_τ(u,v) and c(u,v) can be quantified via

|ĉ_τ(u,v) − c(u,v)| = | Σ_{ℓ=τ+1}^{∞} (1/2m) · Σ_{i=1}^{n} (f_i[u] − f_i[v])² · λ_i^ℓ |.

This suggests that we can utilize these eigenvectors and eigenvalues to determine a truncated length τ_{u,v} for edge (u,v) so that |ĉ_τ(u,v) − c(u,v)| ≤ ε/2.
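Eq. (4) can be verified numerically: diagonalizing the symmetric matrix D^{1/2}PD^{−1/2} and forming f_i = √(2m)·D^{−1/2}v_i reproduces the scaled ℓ-hop TPs. The snippet below is our own sanity check on a toy graph, not the authors' implementation.

```python
import numpy as np

# kite graph: triangle {0,1,2} plus pendant edge (2,3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n, m = 4, len(edges)
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)
P = A / d[:, None]

# eigendecomposition of the symmetric matrix N = D^{1/2} P D^{-1/2}
N = A / np.sqrt(np.outer(d, d))
lams, V = np.linalg.eigh(N)                    # columns of V: orthonormal eigenvectors
F = np.sqrt(2 * m) * V / np.sqrt(d)[:, None]   # f_i = sqrt(2m) * D^{-1/2} v_i

def tp_by_spectrum(u, v, l):
    """(1/2m) * sum_i f_i[u] * f_i[v] * lam_i^l, which should equal p_l(u,v)/d(v)."""
    return (F[u] * F[v] * lams ** l).sum() / (2 * m)

l = 5
Pl = np.linalg.matrix_power(P, l)
err = max(abs(tp_by_spectrum(u, v, l) - Pl[u, v] / d[v])
          for u in range(n) for v in range(n))
```

The check also exercises the reversibility identity p_ℓ(u,v)/d(v) = p_ℓ(v,u)/d(u), which is what lets a traversal from a single endpoint of an edge supply both cross terms of Eq. (2).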
Additionally, when ℓ = 1 and , ∈ E, we have Algorithm 1: CalTau Data: Graph G, { 1 , . . . , }, { ì f 1 , . . . , ì f } Parameters : , , Result: , 1 , ← Eq. (6) with 2 and Δ = Υ = 0; 2 Υ ← Eq. (7); ← 1; 3 while true do 4 Δ ← Eq. (8); ′ ← Eq. (6); 5 if ≤ ′ then , ← ′ ; ← + 2 ; 6 else break; 7 return , ← ; ( , , ) =            log 1 ( ) + 1 ( ) − 2 ( ) · ( ) −Υ ( −Δ ) · (1− 2 ) log 1 | | − 1            ,(6) where Υ = 1 2 −1 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · (1 + ),(7)Δ = 1 2 −1 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · +1 1 − ,(8) and is an odd number ensuring ≤ , . Figure 1. Note that the eigenvalues and eigenvectors can be efficiently computed in the preprocessing stage (see Figure 4). Algorithm 1 presents the pseudo-code of CalTau, an algorithm realizing the computation of , on the basis of Theorem 3.2. Given graph G, eigenvalues { 1 , . . . , }, eigenvectors { ì f 1 , . . . , ì f }, and parameters , , as inputs, CalTau initializes , by Eq. (6) with 2 and Υ = Δ = 0 at Line 1, followed by setting = 1 and calculating Υ according to Eq. (7) at Line 2. After that, CalTau increases iteratively to search for the optimal such that it is closest to but does not exceed Eq. (6), ensuring the validity of Theorem 3.2 (Lines 3-6). To be more precise, in each iteration, CalTau calculates a candidate truncated length ′ using Eq. (6), wherein Δ is obtained by Eq. (8) with current . Next, if ≤ ′ , we update , as ′ and increase by 2 (Line 5). CalTau repeats the above procedure until the condition at Line 5 does not hold and returns as , at Line 7. Complete Algorithm and Analysis In light of Theorem 3.2, the problem of AESC computation in Definition 2.2 is reduced to computing the approximate SCˆ( , ) = ( , ) as per Eq. (2) for each edge , ∈ E. 
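The CalTau search described above can be mimicked by a brute-force analogue: given the non-principal eigenvalues \lambda_i and the per-edge weights w_i = (\vec{f}_i[u] - \vec{f}_i[v])^2 / (2m), find the smallest \tau whose geometric tail bound drops below \epsilon/2. This sketch ignores the odd-parity constraint of Theorem 3.2, and the eigen data below (\lambda_2 = \lambda_3 = -1/2, w = [1, 0] for a triangle edge) are hardcoded purely for illustration.

```python
def truncation_tail(lams, w, tau):
    # Upper bound on |s(u,v) - s_tau(u,v)| from the spectral expansion:
    # sum_i w_i * |lambda_i|^(tau+1) / (1 - |lambda_i|)
    return sum(wi * abs(l) ** (tau + 1) / (1.0 - abs(l))
               for l, wi in zip(lams, w))

def find_tau(lams, w, eps):
    """Smallest tau with tail bound <= eps/2 (brute-force CalTau analogue)."""
    tau = 0
    while truncation_tail(lams, w, tau) > eps / 2:
        tau += 1
    return tau
```

For the triangle edge the tail bound equals (1/2)^tau, so eps = 0.01 yields tau = 8, far smaller than a graph-global bound driven only by the second eigenvalue would give on larger graphs.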
Unlike prior methods, TGT conducts a deterministic graph traversal from each node ∈ V to compute ℓ-hop TP values ℓ ( , ) and ℓ ( , ) for 1 ≤ ℓ ≤ , Algorithm 2: TGT Data: Graph G Parameters : Result: ( , ) ∀ , ∈ E 1 for ∈ V do 2 0 ( , ) ← 0 ∀ ∈ V \ ; 0 ( , ) ← 1; 3 ( , ) ← 1 ( ) ∀ ∈ N ( ); 4 ← max ∈ N ( ) CalTau( , , ); 5 for ℓ ← 1 to do 6 ℓ ( , ) ← 0 ∀ ∈ V; 7 for ∈ V with ℓ −1 ( , ) > 0 do 8 for ∈ N ( ) do 9 ℓ ( , ) ← ℓ ( , ) + ℓ −1 ( , ) ( ) ; 10 for ∈ N ( ) do 11 ( , ) ← ( , ) + ℓ ( , ) ( ) − ℓ ( , ) ( ) ; 12 for , ∈ E do ( , ) ← ( , ) + ( , ); 13 return ( , ) ∀ , ∈ E; in an iterative manner, and aggregates them as ( , ) = ∑︁ ℓ=0 ℓ ( , ) ( ) − ∑︁ ℓ=0 ℓ ( , ) ( )(9) to further derive ( , ) by ( , ) + ( , ). The pseudo-code of TGT is illustrated in Algorithm 2. In the course of graph traversal from each node ∈ V (Lines 2-11), 0 ( , ) and ( , ) ∀ ∈ V are initialized as Lines 2-3. Afterward, at Line 4, TGT invokes Algorithm 1 with absolute error and each edge , that is adjacent to . Let be the largest truncated length , for all ∈ N ( ). Then, Algorithm 2 performs a -hop graph traversal originating from (Lines 5-11). Specifically, at ℓ-th hop, TGT first sets ℓ ( , ) = 0 ∀ ∈ V. Subsequently, for each node with non-zero (ℓ − 1)hop TP ℓ −1 ( , ), we scatter its value to its neighbors, i.e., visit each neighbor ∈ N ( ) by adding ℓ −1 ( , ) ( ) to ℓ ( , ). This operation essentially performs a sparse matrix-vector multiplica- tion ℓ (·, ) = P · ℓ −1 (·, ). With ℓ ( , ) ∀ ∈ V, TGT injects an increment of ℓ ( , ) ( ) − ℓ ( , ) ( ) to ( , ) for each neighbor of . After the completion of all graph traversal operations, Algorithm 2 computes ( , ) for each edge , (Line 12) and returns them as the answers. The following theorem states the correctness and the worst-case time complexity of TGT. Theorem 3.3. Algorithm 2 returns -approximate SC values ( , ) ∀ , ∈ E using log ( 1 ) time in the worst case. 
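A compact sketch of TGT's traversal loop (illustrative, not the paper's C++ implementation): from every source u, iterate the sparse matrix-vector product p_\ell(·,u) = P · p_{\ell-1}(·,u), accumulate a(v,u) = \sum_\ell (p_\ell(u,u) - p_\ell(v,u))/d(u) for each neighbor v, and finally combine s(u,v) = a(u,v) + a(v,u). For simplicity a single global tau replaces the per-edge \tau_{u,v}.

```python
def tgt_edge_sc(adj, deg, tau):
    """Deterministic-traversal sketch of TGT (Algorithm 2)."""
    a = {}
    for u in adj:
        p = {u: 1.0}                              # p_0(., u)
        contrib = {v: 0.0 for v in adj[u]}        # a(v, u) accumulators
        for v in adj[u]:                          # l = 0 term
            contrib[v] += (p.get(u, 0.0) - p.get(v, 0.0)) / deg[u]
        for _ in range(tau):
            nxt = {}
            for x, mass in p.items():             # scatter: p_l = P p_{l-1}
                for y in adj[x]:
                    nxt[y] = nxt.get(y, 0.0) + mass / deg[y]
            p = nxt
            for v in adj[u]:
                contrib[v] += (p.get(u, 0.0) - p.get(v, 0.0)) / deg[u]
        for v in adj[u]:
            a[(v, u)] = contrib[v]
    return {(u, v): a[(u, v)] + a[(v, u)]
            for u in adj for v in adj[u] if u < v}
```

On the triangle graph, each edge's value converges to the effective resistance 2/3 as tau grows.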
Notwithstanding its unsatisfying worst-case time complexity, by virtue of our improved lower bounds for truncated lengths in Section 3.1, the actual number of graph traversal operations from each node in Algorithm 2 (Lines 7-9) is far less than ( ) when is non-diminutive, strengthening the superiority of TGT over MonteCarlo in empirical efficiency. THE TGT+ ALGORITHM Although TGT advances MonteCarlo in practical performance, we observe in our experiments that its cost is intolerable for massive graphs with high degrees. The reason is that the number of nonzero ℓ-hop TP values grows explosively at an astonishing rate till (Lines 7-9 in Algorithm 2) on such graphs as ℓ increases, causing a quadratic computational complexity of ( ). The severity of the efficiency issue is accentuated in high-precision AESC computation, i.e., is small. To alleviate this issue, we propose TGT+, an algorithm that significantly improves TGT in terms of both practical efficiency and asymptotic performance. The rest of this section proceeds as follows: Section 4.1 delineates the basic idea of TGT+, followed by several optimization techniques in Section 4.2. Finally, Section 4.3 describes the complete algorithm and analysis. High-level Idea Considering the sheer number of non-zero ℓ-hop TP values in TGT when ℓ is increased, we propose to calculate the TP values within˜(a small number) hops using TGT and harness random walks for the estimation of ℓ-hop TP with ℓ >˜. The rationale is that the amount of nodes in the vicinity of a given node is often limited, and hence, can be efficiently covered by a graph traversal from . On the contrary, far-reaching nodes from can be multitudinous (up to millions in large graphs), where random walks suit the demand better by focusing on probing important nodes (i.e., with high TP values) in lieu of all of them. To fulfill the above-said idea, we first derive a truncated length , such that | ( , ) − ( , )| ≤ 2 for each edge , ∈ E. 
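The hybrid idea above can be sketched as follows (a simplified, hypothetical rendering, not the exact estimator the paper defines): compute the first t_small hops exactly by traversal, then correct with paired random walks from u and v whose visited nodes are scored against the exact t_small-hop probabilities. Pairing the two walks keeps the correction's variance small because their scores largely cancel on overlapping neighborhoods.

```python
import random

def hybrid_a(adj, deg, u, v, tau, t_small, pairs=20000, seed=3):
    """Estimate a(v,u) = sum_{l=0}^{tau} (p_l(u,u) - p_l(v,u)) / d(u):
    exact traversal for l <= t_small, paired random walks for the rest."""
    # Exact part: iterate p_l(., u) = P p_{l-1}(., u).
    p = {u: 1.0}
    a = (p.get(u, 0.0) - p.get(v, 0.0)) / deg[u]
    for _ in range(t_small):
        nxt = {}
        for x, mass in p.items():
            for y in adj[x]:
                nxt[y] = nxt.get(y, 0.0) + mass / deg[y]
        p = nxt
        a += (p.get(u, 0.0) - p.get(v, 0.0)) / deg[u]
    # Residual: E over paired walks of length tau - t_small equals
    # sum_{l=t_small+1}^{tau} (p_l(u,u) - p_l(v,u)) / d(u).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(pairs):
        su = sv = 0.0
        cu, cv = u, v
        for _ in range(tau - t_small):
            cu = rng.choice(adj[cu])
            cv = rng.choice(adj[cv])
            su += p.get(cu, 0.0)
            sv += p.get(cv, 0.0)
        total += (su - sv) / deg[u]
    return a + total / pairs
```

On the triangle graph with u = 0, v = 1, the true a(v,u) is 1/3, and the hybrid estimate lands close to it.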
Next, the problem is computing an estimated SCˆ( , ) of each edge , to ensure |ˆ( , ) − ( , )| ≤ 2 using graph traversals and random walks. To facilitate the seamless integration of random walks into TGT, we leverage the following crucial property of ( , ), a constituent part of SC ( , ) as defined in Eq. (9). Lemma 4.1. For any integer and˜with 0 ≤˜≤ , ( , ) =˜( , ) +˜→ ( , ), wherẽ → ( , ) = ∑︁ ∈˜( , ) ( ) , −∑︁ ℓ =1 ℓ ( , ) − ℓ ( , ) . (10) More concretely, given a cherry-picked length˜(1 ≤˜≤ , ), Lemma 4.1 implies that we can estimate˜→ ( , ) by simulating random walks of lengths from 1 to , −˜after obtaining˜( , ) and˜(·, ) with TGT. Mathematically, if we conduct two length-( , −˜) random walks and containing visited nodes from nodes , , respectively, we can define a random variable as = 1 ( ) · ∑︁ ∈˜( , ) − ∑︁ ∈˜( , ) .(11) By definition, the expectation E[ ] of is exactly˜→ ( , ) in Eq. (10), indicating that is an unbiased estimator of˜→ ( , ). Suppose that the range of is bounded by Lemma 4.2 (Hoeffding's ineqality [15]). Let 1 , 2 , . . . , be independent random variables with (∀1 ≤ ≤ ) is strictly bounded by the interval [ , ]. We define the empirical mean of these variables by = 1 =1 . Then, | | ≤ ( ) .(12)P[| − E[ ]| ≥ ] ≤ 2 exp − 2 2 2 =1 ( − ) 2 . It is straightforward to apply Hoeffding's inequality in Lemma 4.2 to derive the total number of random walks needed for the accurate estimation of˜→ ( , ), i.e., ( , , , −˜) = 8 2 log ( 2 ) ( ) 2 · 2 .(13) In the subsequent section, we elucidate the determination ofã nd so as to strike a good balance between graph traversal and random walks for optimized performance and meanwhile reduce the number ( , , , −˜) of samples required. Optimizations 4.2.1 Adaptive determination of˜. 
Since the length of the random walks for estimating a_{\tilde{\tau} \to \tau}(v,u) \forall v \in N(u) is \tau_{u,v} - \tilde{\tau}, the computational overhead incurred by the random walks from the given node u and its neighbors is hence bounded by

\sum_{v \in N(u)} \eta(u, v, \tau_{u,v} - \tilde{\tau}) \cdot (\tau_{u,v} - \tilde{\tau}),

which increases as \tilde{\tau} decreases. Conversely, the graph traversal operations in TGT reduce considerably when \tilde{\tau} is lowered, as explained at the beginning of Section 4.1. In short, the length \tilde{\tau} controls the trade-off between the deterministic graph traversal and random walks for each node u \in V. Since it is hard to accurately quantify the graph traversal cost as a function of \tilde{\tau} due to the complex graph structure, we make use of an adaptive strategy to determine \tilde{\tau}. More precisely, in the \ell-th iteration of the deterministic graph traversal (Lines 6-9 in Algorithm 2) originating from u, we set \tilde{\tau} = \ell and switch from graph traversal to random walk simulations if the following inequality holds:

\sum_{x \in V, p_\ell(x,u) \neq 0} d(x) > \sum_{v \in N(u)} \eta(u, v, \tau_{u,v} - \ell),    (14)

where the l.h.s. and r.h.s. represent the respective costs of computing the (\ell+1)-hop TP values in the next iteration. The rationale of Eq. (14) is that we choose random walks over graph traversal once the cost of the latter would outstrip that of the former.

4.2.2 Effective refinement of b. By the definition of X in Eq. (11), one may simply set b as follows:

b = 2 \cdot (\tau_{u,v} - \tilde{\tau}) \cdot \left( \max_{x \in V} p_{\tilde{\tau}}(x,u) - \min_{x \in V} p_{\tilde{\tau}}(x,u) \right),    (15)

where p_{\tilde{\tau}}(x,u) \forall x \in V is known from TGT. Unfortunately, the empirical values of the r.h.s. of Eq. (15) are usually non-negligible on real graphs, resulting in a considerable number of random samples according to Eq. (13). Intuitively, given that W_u and W_v are the random walks from two adjacent nodes u, v (i.e., (u,v) \in E), respectively, the nodes on W_u and W_v are highly overlapped. As a consequence, the difference between \sum_{x \in W_u} p_{\tilde{\tau}}(x,u) and \sum_{x \in W_v} p_{\tilde{\tau}}(x,u) in Eq. (11) (i.e., X) is insignificant in practice. Inspired by the aforementioned observation, we can establish the following lower and upper bounds for \sum_{x \in W} p_{\tilde{\tau}}(x,u).
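The adaptive switch can be sketched as a loop that advances the traversal hop by hop and stops once the frontier's total degree exceeds the remaining random-walk budget. The sample count assumes a reconstructed Hoeffding-style form, eta = ceil(8 b^2 log(2/delta) / (d(u)^2 eps^2)); the graph, parameter values, and return convention are illustrative assumptions.

```python
import math

def num_samples(b, d_u, eps, delta):
    """Hoeffding-style sample count (assumed form of Eq. (13))."""
    return math.ceil(8 * b * b * math.log(2 / delta) / (d_u * d_u * eps * eps))

def choose_tilde_tau(adj, deg, u, tau, eps=0.1, delta=0.01, b=1.0):
    """Adaptive switch sketch (Eq. (14)): return (tilde_tau, frontier)."""
    p = {u: 1.0}
    for l in range(tau):
        frontier_cost = sum(deg[x] for x in p)            # l.h.s. of Eq. (14)
        walk_cost = sum(num_samples(b, deg[u], eps, delta) * (tau - l)
                        for _ in adj[u])                  # r.h.s. (simplified)
        if frontier_cost > walk_cost:
            return l, p
        nxt = {}
        for x, mass in p.items():
            for y in adj[x]:
                nxt[y] = nxt.get(y, 0.0) + mass / deg[y]
        p = nxt
    return tau, p
```

With a loose bound b the walk budget is huge and the traversal runs to completion; with a tight b the switch fires early, which is exactly the motivation for refining b in Section 4.2.2.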
It is worth mentioning that ( , ℓ) and the first two terms in ( , ℓ) can be efficiently computed without sorting all the nodes, since the actual number of non-zero entries in˜(·, ) is limited due to our fine-tuned˜, as remarked earlier. Therefore, the critical challenge to realize the derivation of the improved in Eq. (17) arises from the computation of in Eq. (16), which incurs a high cost of ( log ) if we search for the optimal edge , ensuring Eq. (16) from E in a brute-force fashion. To tackle this problem, we propose a subroutine CalChi in Algorithm 3, which computes for in a cost-effective manner, without jeopardizing its correctness. More specifically, instead of inspecting all the edges in G, CalChi first identifies a set C = { 1 , 2 , · · · , } of nodes from V with ( is a small constant) largest˜-hop TP values to , in other words ( 1 , ) ≥˜( 2 , ) ≥ · · · ≥˜( , ) ≥˜( , ) ∀ ∈ V \ C (Line 2). After that, CalChi checks if any two nodes in C form an edge. If C does not contain such two nodes ( , ) ∈ E, we set 's upper boundˆto˜( 1 , ) +˜( , ), otherwise we use the largest˜( , ) +˜( , ) among all edges of C × C, which Complete Algorithm and Analysis Algorithm 4 summarizes the procedure of TGT+, which begins with computing ( , ) for each node ∈ V and each of its neighbors as in TGT. Specifically, for each node ∈ V, TGT+ first computes , for each neighbor of by taking /2 as input (Line 2). Subsequently, TGT+ carries out graph traversals as illustrated in TGT for the computation of˜( , ) ∀ ∈ N ( ) and˜(·, ) (Lines 3-10). The iterative process of the graph traversal terminates when Eq. (14) holds and Algorithm 4 then proceeds to sampling random walks for each neighboring node of whose˜( , ) is insufficiently accurate, i.e., , >˜(Line 12). In particular, TGT+ first invokes Algorithm 3 to obtain the refined (Line 13) before determining the number of random walks at Line 14. 
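CalChi's shortcut can be sketched as: inspect only the c nodes with the largest p-values; if two of them form an edge, the largest such pair sum is the exact maximum, otherwise the sum of the 1st and c-th largest values is a valid upper bound, since any edge then has at least one endpoint outside the top-c set. The mini-example below is hypothetical.

```python
def cal_chi(adj, p_vals, c):
    """Upper-bound chi = max over edges (x,y) of p(x) + p(y),
    inspecting only the c nodes with largest p values (CalChi sketch)."""
    top = sorted(p_vals, key=lambda x: p_vals[x], reverse=True)[:c]
    tops = set(top)
    best = None
    for x in top:
        for y in adj[x]:
            if y in tops:
                cand = p_vals[x] + p_vals[y]
                if best is None or cand > best:
                    best = cand
    if best is None:
        # No edge inside the top-c set: one endpoint of the maximizing edge
        # is outside it, so p(w_1) + p(w_c) upper-bounds chi.
        best = p_vals[top[0]] + p_vals[top[-1]]
    return best
```

With c = 2 on the example below the bound is tight (the heaviest pair is an edge); with c = 1 it degrades to a safe but looser upper bound.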
Afterwards, TGT+ generates length-( , −˜) random walks , from nodes , , respectively (Lines 15-16). After each sampling, it increaseŝ ( , ) by , where is a random variable based on Eq. (11) (Line 17). In the end, TGT+ computesˆ( , ) =ˆ( , ) +ˆ( , ) for each edge , ∈ E and outputs them as the SC estimations. The following theorem expresses the correctness and complexity of it. ( ) can be simplified as ( ) or even ( /log ) using Kantorovich inequality on scale-free graphs with / = (log ), manifesting the superiority of TGT+ over existing solutions. EXPERIMENTS In this section, we introduce the experimental settings, followed by evaluating our truncation bound and showing the performance of the proposed TGT+. At last, we analyze the sensitivity of constants and in TGT+. All experiments are conducted on a Linux machine with Intel Xeon(R) Gold [email protected] CPU and 377GB RAM in single-thread mode. None of the experiments need anywhere near all the memory. Due to space limitations, we refer interested readers to Appendix A for the scalability test. Experimental Setups Datasets and groundtruths. We include 5 different types of real undirected graphs at different scales, whose statistics are shown in Table 3. All datasets are collected from SNAP [21] and used as datasets in previous works [14,29,34]. For each graph, we generate groundtruth AESC by first computing P with 0 ≤ ≤ 1000 in parallel and then assembling them into SC by Eq.(2). Methods and parameters. We compare TGT and TGT+ with three recent algorithms for AESC: ST-Edge [14], MonteCarlo [34] and MonteCarlo-C [34], as introduced in Section 2.3. We exclude Fast-Tree [29] from the competitors since it mainly offers relative approximation guarantees and is empirically shown significantly inferior to ST-Edge in [14]. Amid them, MonteCarlo and MonteCarlo-C are adapted for AESC computation, and the detailed modification is explained in Section 5.2. 
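The ground-truth procedure above (computing powers of P up to a large cap and assembling SC via Eq. (2)) can be sketched in a few lines of pure Python. The 4-node example graph is hypothetical; its bridge edge has SC exactly 1 (the effective resistance of a bridge) and its triangle edges have SC 2/3.

```python
def exact_sc(adj, deg, L):
    """Ground-truth sketch: s(u,v) = sum_{l=0}^{L} of
    p_l(u,u)/d(u) + p_l(v,v)/d(v) - p_l(u,v)/d(v) - p_l(v,u)/d(u),
    with p_l taken from dense powers of P (nodes keyed 0..n-1)."""
    n = len(adj)
    P = [[1.0 / deg[x] if y in adj[x] else 0.0 for y in range(n)]
         for x in range(n)]
    Pl = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # P^0
    edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
    s = {e: 0.0 for e in edges}
    for _ in range(L + 1):
        for (u, v) in edges:
            s[(u, v)] += (Pl[u][u] / deg[u] + Pl[v][v] / deg[v]
                          - Pl[u][v] / deg[v] - Pl[v][u] / deg[u])
        Pl = [[sum(Pl[i][k] * P[k][j] for k in range(n))
               for j in range(n)] for i in range(n)]
    return s
```

The series converges geometrically on connected non-bipartite graphs, so a few hundred terms suffice on this example.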
For the randomized algorithms TGT+, ST-Edge, MonteCarlo, and MonteCarlo-C, we follow [14] and set the failure probability \delta = 1/n. Regarding MonteCarlo-C, we adopt the heuristic settings of its parameters as suggested in [34], since the exact values are unknown. For the proposed TGT and TGT+, we set the constants to 10 (the number of candidates in Algorithm 3) and 128 (the number of eigenvalues and eigenvectors), unless otherwise specified. For a fair comparison, all tested algorithms are implemented in C++ and compiled by g++ 7.5 with -O3 optimization. For reproducibility, the source code is available at: https://github.com/jeremyzhangsq/AESC.

Table 3 (continued): HepPh [20] 34,401 420,784; Slashdot [22] 77,360 469,180; Twitch [35] 168,114 6,797,557; Orkut [50] 3...

Empirical Study of \tau_{u,v} and \tau
In the first set of experiments, we evaluate the performance of the proposed truncated length in Section 3.1 (see Figure 1). We take MonteCarlo as an example to demonstrate its superiority. Figure 1(b) reports the average number of random walks, from which the major overhead of MonteCarlo stems, for estimating each SC. Akin to the observation from Figure 1(a), MonteCarlo with our truncated lengths \tau_{u,v} requires at most 3 orders of magnitude fewer random walks than with Peng et al.'s \tau. It is worth noting that MonteCarlo and MonteCarlo-C are designed for computing the SC of a single node pair. Although our \tau_{u,v} can remarkably cut down the number of random walks for an edge, there remain redundant random walks if invoking them for all edges individually. Hence, we further adapt MonteCarlo and MonteCarlo-C for efficient AESC computation by following the idea in TGT and TGT+ of iterating over each node. To summarize, for each node u, the adapted MonteCarlo and MonteCarlo-C first compute the largest \tau among u's local neighborhood as in Algorithm 2, and then compute the number of samplings based on it. In the end, these extensions generate the corresponding random walks from u and estimate s(u,v) for each v \in N(u).

Performance Evaluation
In the second set of experiments, we evaluate the performance of each approach in terms of efficiency and accuracy.
For efficiency, we report the average running times (measured in wall-clock time) after all input data are loaded into memory. For accuracy, we measure the actual average absolute error of the estimated all-edge SC returned by each algorithm on each dataset. We run each algorithm with \epsilon varied over {0.05, 0.02, 0.01, 0.005}, and report the average evaluation score over 3 trials. A method is excluded if it fails to report a result within 120 hours.

Running time. We first compare TGT+ with TGT and the other competitors in terms of efficiency. Figure 2 reports each solution's running time for solving AESC under the various \epsilon settings. Benefiting from the truncation bound and the seamless combination of TGT and random walk sampling, the proposed TGT+ outperforms all competitors on all tested graphs and settings. Most notably, TGT+ improves over the best competitor ST-Edge by at least one order of magnitude on Facebook and Twitch. We find that the improvement achieved by TGT+ becomes more remarkable as \epsilon decreases. For example, TGT+ is 10.8x (resp. 23.5x) faster than ST-Edge on HepPh (resp. Slashdot) when \epsilon = 0.005. In addition, on the Orkut graph with 117 million edges, TGT+ is the only algorithm that can finish under all settings, demonstrating its scalability. To evaluate the performance of the combination in TGT+, we next compare TGT+, TGT, MonteCarlo, and MonteCarlo-C, as all of them employ the edge-wise \tau_{u,v}, for the sake of fairness. As shown in Figure 2, MonteCarlo and MonteCarlo-C fail to return results within the allowed time limit in most cases. In particular, MonteCarlo can only terminate within 120 hours on Facebook, and on Twitch when \epsilon = 0.05. The running time of MonteCarlo-C is even worse: it is only feasible on Facebook with \epsilon = 0.02 and 0.05. In contrast, TGT speeds up MonteCarlo and MonteCarlo-C by at least 2 orders of magnitude, demonstrating the superiority of the graph traversal in Section 3.2.
However, TGT is still rather costly in comparison to TGT+. Specifically, TGT is only comparable to TGT+ on Facebook and is inferior to TGT+ on the remaining graphs. For instance, TGT costs at least 1 and 2 orders of magnitude more time than TGT+ on Slashdot and Twitch, respectively, demonstrating the effectiveness of integrating deterministic traversal with randomized simulations in TGT+. To explain, the truncated length \tau_{u,v} on the remaining graphs (e.g., HepPh and Slashdot) is longer than that on Facebook, substantially increasing the overhead incurred by the graph traversal.

5.3.2 Accuracy. We next report the trade-offs between average absolute error (y-axis) and running time (x-axis) in Figure 3. The results are sorted in ascending order of \epsilon, and an error-time curve closer to the lower-left corner indicates better performance. As shown, the overall observation is that TGT+ outperforms all competitors by achieving lower errors with less running time on all graphs. In particular, TGT+ achieves an average absolute error of 1.37E-05 in 532 seconds on Twitch, while the closest solution, TGT, achieves an average absolute error of 1.56E-05 using over 20,000 seconds (about 5.6 hours). Regarding TGT, we observe that, under the same setting, its actual absolute error is slightly smaller than that of TGT+. This is as expected, since TGT leverages the largest \tau = \max_{v \in N(u)} \tau_{u,v} as the maximal iteration count for u; in other words, the SC value of the edge (u,v) is overestimated if \tau_{u,v} < \tau. Furthermore, we notice that the absolute error of MonteCarlo-C is an order of magnitude larger than that of the closest competitor ST-Edge on Facebook. This is because the heuristic settings [34] for its input parameters do not ensure that the returned values are \epsilon-approximate.

Preprocessing time. Recall that TGT, TGT+, MonteCarlo, and MonteCarlo-C rely on the eigen decomposition of the matrices pertaining to G, e.g., P and D^{1/2}PD^{-1/2}, in the preprocessing stage.
In particular, MonteCarlo and MonteCarlo-C require the second largest eigenvalue, while TGT and TGT+ need the 128 largest eigenvalues and eigenvectors. Fortunately, by virtue of well-established techniques [33] and tools [19] for large-scale eigen decomposition, we can quickly obtain the desired eigenvalues and eigenvectors. Figure 4 reports the preprocessing time for TGT+ and vanilla MonteCarlo. As expected, the preprocessing time of TGT+ is comparable to that of MonteCarlo. In addition, compared to the running times for AESC displayed in Figure 2, the preprocessing costs are insignificant. For instance, the preprocessing time of TGT+ is about 45 minutes for the Orkut (OK) graph with 117 million edges, whereas the running time is at least 7 hours. Notice that this preprocessing step only needs to be conducted once per graph.

Parameter Analysis
In the last set of experiments, we study the effects of TGT+'s constants: (i) the number of largest eigenvalues and eigenvectors of D^{1/2}PD^{-1/2};

Varying the number of candidates. Figure 5(b) reports the running time of TGT+ by fixing the number of eigen-pairs to 128 and picking the number of candidates in {0, 10^1, 10^2, 10^3, 10^4} for the computation of \chi on HP, SD and TW. We observe that the running time of TGT+ first decreases and then increases as more candidates are considered. To explain, when the candidate count is too small, the upper bound for |X| is too loose, causing more random walks to be generated; when it is too large, Algorithm 3 incurs more computational overhead. For example, TGT+ with 0 candidates costs about 2x more time than with 10 on HP and SD. Meanwhile, TGT+ with 10,000 candidates costs over 3x more time than with 10 on TW.

ADDITIONAL RELATED WORK
In the sequel, we review existing studies germane to our work. Spanning centrality. Apart from the methods discussed in Section 2.3, there exist several techniques for estimating SC (i.e., effective resistance (ER)). Fouss et al.
[8] propose to calculate the exact ER values for all pairs of nodes in the input graph by first computing the Moore-Penrose pseudoinverse L^+ of the Laplacian matrix L = D - A, and then taking L^+[u,u] + L^+[v,v] - L^+[u,v] - L^+[v,u] as the ER for any node pair u, v \in V. Teixeira et al. [40] and Mavroforakis et al. [29] utilize random projection and symmetric diagonally dominant solvers to approximate the SC of all edges. After that, Jambulapati and Sidford [17] aim to compute sketches of L and its pseudoinverse L^+, and propose an algorithm for estimating ER values of all possible node pairs in O(n^2/\epsilon) time. Besides MonteCarlo and MonteCarlo-C, Peng et al. [34] also propose two solutions by leveraging the connection between ER and the commute time [31]. These works all focus on the \epsilon-multiplicative approximation and are beyond the scope of this paper.

Personalized PageRank. Another line of related work is personalized PageRank (PPR). In past decades, the efficient computation of PPR has been extensively studied in a plethora of works [2, 7, 16, 23-28, 38, 43-46, 49]. Among them, some recent approaches [16, 23, 24, 27, 28, 38, 43-46] also leverage the idea of combining deterministic graph traversal [2, 7, 26] with random walk simulations. At first glance, it seems that we could simply adapt and extend these techniques to compute \epsilon-approximate all-edge SC. However, SC is much more sophisticated than PPR, because the two are defined over inherently different types of random walks. More concretely, PPR relies on the random walk with restart (RWR) [41], which stops at each visited node with a certain probability during the walk. In contrast, SC relies on simple random walks of various fixed lengths (from 1 to \infty), meaning that a walk for SC does not terminate as early as an RWR does. Motivated by this, a linchpin of this work is a personalized truncation for the maximum random walk length.
Correspondingly, the combination of graph traversal and random walks becomes more challenging. CONCLUSION In this paper, we propose two approximation algorithms for AESC computation. Our contributions consist of (i) enhanced lower bounds for truncating random walks, (ii) an algorithmic framework integrating the deterministic graph traversal with random walk sampling, and (iii) several carefully-designed optimization techniques for increasing efficiency. Our experiments on five real datasets demonstrate that our proposed algorithm significantly outperforms existing solutions in terms of practical efficiency without compromising theoretical and empirical accuracy. In the future, we plan to study AESC computation with relative error guarantees as well as under multithreading environments. A APPENDIX A.1 Proofs Proof of Theorem 3.2. By Eq. (4) in Lemma 3.1 and the fact ì f 1 = 1, ℓ ( , ) ( ) + ℓ ( , ) ( ) − 2 ℓ ( , ) ( ) = ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · ℓ 2 . (18) Consider ℓ = 0. From Eq. (18), we have 0 ( , ) ( ) + 0 ( , ) ( ) − 2 0 ( , ) ( ) = 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 = 1 ( ) + 1 ( ) . Consider ℓ = 1. Since , ∈ E, we have 1 ( , ) ( ) + 1 ( , ) ( ) − 2 1 ( , ) ( ) = 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · = −2 ( ) · ( ) . Combining these two equations yields ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · 1 + 2 = 1 ( ) + 1 ( ) − 2 ( ) · ( )(19) Note that 1 = | 1 | > | 2 | > · · · > | | > 0. For simplicity, we let = , here. With Eq. (18) and Eq. (19), suppose and are odd numbers and ≤ , we obtain ( , ) − ( , ) = ∞ ∑︁ ℓ= +1 ℓ ( , ) ( ) + ℓ ( , ) ( ) − 2 ℓ ( , ) ( ) = 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 ∞ ∑︁ ℓ= +1 ℓ = 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · +1 1 − = 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · (1 + ) · +1 1 − 2 = Δ + 1 2 ∑︁ = ( ì f [ ] − ì f [ ]) 2 · (1 + ) · +1 1 − 2 ≤ Δ + 1 2 ∑︁ = ( ì f [ ] − ì f [ ]) 2 · (1 + ) · +1 1 − 2 ≤ Δ + +1 1 − 2 · 1 2 ∑︁ =2 ( ì f [ ] − ì f [ ]) 2 · (1 + ) − Υ = Δ + +1 1 − 2 · 1 ( ) + 1 ( ) − 2 ( ) · ( ) − Υ . Plugging Eq. 
(5) into the above inequality yields | ( , ) − ( , )| ≤ 2 , which proves the theorem. Proof of Theorem 3.3. In the ℓ-th iteration of the source ∈ V, the graph traversal operation (Lines 6-9) is equivalent to the sparse matrix-vector multiplication ℓ (·, ) = P · ℓ −1 (·, ). Note that = max ∈ N ( ) , , where , ensures | ( , ) − ( , )| ≤ in terms of Theorem 3.2. Correspondingly, when iterations terminate for each ∈ V, the truncated ( , ) computed by Eq. (2) is -approximate. The worst-case complexity of Algorithm 2 is ( · · ), since steps of graph traversals are conducted from all nodes and each invocation of graph traversal costs ( ). For any ∈ V, as Line 1 of Algorithm 1, ≤ log 1 | 2 | 1 ( ) + 1 ( ) − 2 ( ) · ( ) 2 · (1 − | 2 | 2 ) = log( 1 ) , the worst-case complexity turns to · · log( 1 ) , which completes the proof. The lemma is then proved. Proof of Theorem 4.4. For a fixed edge , ∈ E, denote estimation of ( , ) asˆ( , ) =˜( , ) + , which is an unbiased estimator as mentioned in Section 4.1. As per Lemma 4.2 and Eq. (12), log 3 ( 1 ) ( ) 2 = + log ( ) 2 ∑︁ ∈ V log 3 ( 1 ) ( ) = + 1 2 · log 3 ( 1 ) · log ( ) · ∑︁ ∈ V 1 ( ) , which completes the proof. A.2 Scalability Test Besides the evaluation in Section 5, we also test the scalability of TGT+ on synthetic graphs of varying sizes generated by the Erdos Renyi random graph model. To evaluate scalability, we fix the number of nodes as 10 4 (resp. the number of edges as 10 6 ) and vary the number of edges from 0.2, 0.5, 1, 2, 5×10 6 (resp. the number of nodes from 2, 5, 10, 20, 50 × 10 3 ). We have included the results in Table 4 and Table 5. Our results show that the running time grows linearly with the number of nodes and edges, confirming the time complexity of TGT+ and demonstrating its scalability. 3.1 ([47]). Given an undirected graph G, let 1 = | 1 | ≥ | 2 | ≥ · · · ≥ | | ≥ 0 be the sorted absolute eigenvalues of D 1 , 2 , . . . , [ ] · ì f [ ] · as per Eq. (4). 
Given the above observations, we can establish an improved lower bound for the truncated length , of each edge , , as shown in Theorem 3.2. For ease of exposition, we defer all proofs to Appendix A.Theorem 3.2. Given G = (V, E), | ( , ) − ( , )| ≤ holds for any edge , ∈ E when , satisfies , ≥ ( , , ) and , ≡ 1 (mod 2) Compared to Peng et al. 's in Eq. (3), our truncated length , of edge , in Theorem 3.2 is dependent to the degrees of nodes , , the -largest (typically = 128) eigenvalues in absolute value and their corresponding eigenvectors of D 1 2 PD − 1 2 , enabling up to orders of magnitude improvement in practice, as reported in Algorithm 3 : 6 ← 36CalChi Data: Graph G, edge ( , C = { 1 , 2 , . . . , } such that˜( 1 , ) ≥ ( 2 , ) ≥ · · · ≥˜( , ) ≥˜( , ) ∀ ∈ V \ C; 3 if ∃ ,∈ C and ( , ) ∈ E then 4ˆ← max ( , ) ∈ E,∀ , ∈ C˜( , ) +˜( , ); 5 elseˆ←˜( 1 , ) +˜( , ) ; Eq. (17) with =ˆ; 7 else ← Eq. (15); 8 return ; Lemma 4. 3 .. 3Let be any length-ℓ (ℓ ≥ 1) random walk over G starting from node and be = max, ∈ E {˜( , ) +˜( , )}.(16)Then, we have ( , ℓ) ≤ ∈˜( , ) ≤ ( , ℓ), where the lower and upper bounds ( , ℓ), ( , ℓ) are defined by ( , ℓ) = min Using Lemma 4.3, a refined is at hand: = ( , , −˜) + ( , , −˜) − ( , , −˜) − ( , , −˜). length-( , −˜) random walks , from , , respectively; 17ˆ( , ) ←ˆ( , ) + with = Eq. (11); 18 for , ∈ E doˆ( , ) ←ˆ( , ) +ˆ( , ); 19 returnˆ( , ) ∀ , ∈ E;is itself (Lines 3-5). The rationale is that when no edges exists in C × C, at least an endpoint of the desired edge , is outside C, meaning˜( , ) ≤˜( , ). In the meantime, another endpoint of , satisfies˜( , ) ≤˜( 1 , ). Accordingly, ( 1 , ) +˜( , ) can serve as an upper bound of in this case. Eventually, CalChi calculates according to Eq. (17) by replacing by its upper boundˆ(Line 6). Particularly, when = 0, CalChi degrades to computing by Eq.(15). Theorem 4 . 4 . 
44For any , ∈ (0, 1), Algorithm 4 returns theapproximate SCˆ( , ) ∀ , ∈ E with the probability at least 1 − ,using 1 2 · log 3 ( 1 ) · log ( ) · ∈ V 1 ( ) + expected time. The rationale of TGT+'s correctness has been explained in Section 4.1. For the time complexity, it comes from (i) the graph traversal in Lines 2-11, (ii) the random walk in Lines 12-17, and (iii) accessing each neighbor of each node in Line 18 with a total time of ( ).With the adaptive switch condition in Eq.(14), TGT+ ensures that the cost of the first part does not exceed the second part, resulting in the cost of both is∈ V ∈ N ( ) 2 · , · ( , , , ) , where ( , , , . (13) and , = log( 1 ) as Line 1 of Algorithm 1. Hence, the time complexity of TGT+ turns to the formula in Theorem 4.4. Figure 1 : 1Our , vs. Peng et al.'s . (a) reports the average of each edge , 's , derived from Algorithm 1 and Peng et al.'s computed by Eq. (3) when = 0.01 on Facebook (FB), HepPh (HP), Slashdot (SD) and Twitch (TW). We observe that our , can significantly improve Peng et al.'s on all tested graphs. Notably, our , is up to 3 orders of magnitude better than Peng et al.'s . Correspondingly, the computational overhead of MonteCarlo can be reduced by replacing Peng et al. 's with our , . Figure 2 : 2Running time of each algorithm by varying . Figure 3 : 3Tradeoffs between running time and absolute error. Figure 4 :Figure 5 : 45Preprocessing time of TGT+ and vanilla MonteCarlo. Varying constants in TGT+. Algorithm 1; (ii) , the number of candidates in Algorithm 3. In the sequel, we set = 0.05 unless otherwise specified. Varying .Figure 5(a) reports the running time of TGT+ by setting = 10 and varying ∈ {2, 8, 32, 128} on HepPh (HP), Slashdot (SD) and Twitch (TW). As expected, TGT+ costs less running times as more eigenvalues and eigenvectors are exploited. Specifically, the improvement of is more remarkable on HepPh, where the running time of = 128 is about 126× faster than = 2. 
Besides, the running time of TGT+ achieves about 8× and 17× improvements by varying from 2 to 128 on SD and TW, respectively. ℓ Based on Eq.(9), ( , ) turns to ( , ) = ℎ ( , ) − ℎ ( , ) = ℎ˜( , ) − ℎ˜( , ) +˜→ ( , ). ( , ) − ℓ ( , ) , which completes the proof.Proof of Lemma 4.3. Let be the -th node on . Note that for any two adjacency nodes , +1 on , ( , +1 ) is an edge in G. 2 | ( , ) −ˆ( , )| ≥ ] ≤ since Theorem 3.2 ensures that Line 2 of Algorithm 4 satisfying | ( , ) − ( , )| ≤ /2 and ( , ) = ( , ) + ( , ). Based on union bound, we can derive that Algorithm 4 returns -approximate SC values ( , ) ∀ , ∈ E with the probability at least 1 − .Regarding the time complexity, notice that, by Eq.(14), we ensure that the cost of the deterministic part does not exceed that of using random walk samplings. Hence, the overall time complexity of TGT+ is upper-· , · ( , , , )As per Eq. (15), we can obtain that ( , , , Table 1 : 1Frequently used notations. An undirected graph G with node set V and edge set E.Notation Description G = ( V, E ) Table 2 : 2Algorithms for -approximate AESC computation.Algorithm Time Complexity Fast-Tree [29] 2 log ( 1 ) log ( ) ST-Edge [14] 2 log MonteCarlo [34] 2 log 4 1 log MonteCarlo-C [34] 2 log 4 1 log Our TGT+ Table 2 2compares the expected time of the randomized algorithm for -approximate AESC computation. Notably, TGT+ eliminates an term in its bound, where the term∈ V 1 Table 3 : 3Datasets.Name #nodes #edges Facebook [30] 4,039 88,234 HepPh Table 4 : 4The running time of TGT+ by varying the number of edges. time (seconds) 0.357 0.921 2.400 5.163 12.106#edges (×10 6 ) 0.2 0.5 1 2 5 Table 5 : 5The running time of TGT+ by varying the number of nodes.#nodes (×10 3 ) 2 5 10 20 50 time (seconds) 2.293 2.328 2.400 2.593 2.946 by setting ( , , , −˜) = log 2 log ( 1 ) . However, its practical efficiency is less than satisfactory on large graphs, as revealed by the experiments in[14]. 
KDD'23, August 6 - August 10, 2023, Long Beach, USA. Shiqi Zhang et al.

References
[1] Dimitris Achlioptas. 2001. Database-friendly random projections. In PODS. 274-281.
[2] Reid Andersen, Fan Chung, and Kevin Lang. 2006. Local graph partitioning using pagerank vectors. In FOCS. 475-486.
[3] Andrey Bernstein, Daniel Bienstock, David Hay, Meric Uzunoglu, and Gil Zussman. 2014. Power grid vulnerability to geographically correlated failures: Analysis and control implications. In INFOCOM. 2634-2642.
[4] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel A. Spielman, and Shang-Hua Teng. 2011. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In STOC. 273-282.
[5] Alfredo Cuzzocrea, Alexis Papadimitriou, Dimitrios Katsaros, and Yannis Manolopoulos. 2012. Edge betweenness centrality: A novel algorithm for QoS-based topology control over wireless sensor networks. Journal of Network and Computer Applications 35, 4 (2012), 1210-1217.
[6] Peter G. Doyle and J. Laurie Snell. 1984. Random walks and electric networks. Vol. 22. American Mathematical Soc.
[7] Dániel Fogaras, Balázs Rácz, Károly Csalogány, and Tamás Sarlós. 2005. Towards scaling fully personalized pagerank: Algorithms, lower bounds, and experiments. Internet Math. 2, 3 (2005), 333-358.
[8] Francois Fouss, Alain Pirotte, Jean-Michel Renders, and Marco Saerens. 2007. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. TKDE 19, 3 (2007), 355-369.
[9] Linton C. Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry (1977), 35-41.
[10] Arpita Ghosh, Stephen Boyd, and Amin Saberi. 2008. Minimizing effective resistance of a graph. SIAM Review 50, 1 (2008), 37-66.
[11] Michelle Girvan and Mark E. J. Newman. 2002. Community structure in social and biological networks. PNAS 99, 12 (2002), 7821-7826.
[12] Linqi Guo, Chen Liang, and Steven H. Low. 2017. Monotonicity properties and spectral characterization of power redistribution in cascading failures. SIGMETRICS 45, 2 (2017), 103-106.
[13] John M. Harris, Jeffry L. Hirst, and Michael J. Mossinghoff. 2000. Combinatorics and graph theory. Springer Science & Business Media.
[14] Takanori Hayashi, Takuya Akiba, and Yuichi Yoshida. 2016. Efficient Algorithms for Spanning Tree Centrality. In IJCAI, Vol. 16. 3733-3739.
[15] Wassily Hoeffding. 1963. Probability Inequalities for Sums of Bounded Random Variables. J. Amer. Statist. Assoc. 58, 301 (1963), 13-30.
[16] Guanhao Hou, Xingguang Chen, Sibo Wang, and Zhewei Wei. 2021. Massively parallel algorithms for personalized PageRank. PVLDB 14, 9 (2021), 1668-1680.
[17] Arun Jambulapati and Aaron Sidford. 2018. Efficient Õ(n/ε) Spectral Sketches for the Laplacian and its Pseudoinverse. In SODA. 2487-2503.
[18] Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. 2014. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations. In SODA. 217-226.
[19] Richard B. Lehoucq, Danny C. Sorensen, and Chao Yang. 1998. ARPACK users' guide: solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods. SIAM.
[20] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. 2005. Graphs over time: densification laws, shrinking diameters and possible explanations. In SIGKDD. 177-187.
[21] Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data.
[22] Jure Leskovec, Kevin J. Lang, Anirban Dasgupta, and Michael W. Mahoney. 2009. Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics 6, 1 (2009), 29-123.
[23] Dandan Lin, Raymond Chi-Wing Wong, Min Xie, and Victor Junqiu Wei. 2020. Index-free approach with theoretical guarantee for efficient random walk with restart query. In ICDE. 913-924.
[24] Wenqing Lin. 2019. Distributed algorithms for fully personalized pagerank on large graphs. In WWW. 1084-1094.
[25] Peter Lofgren, Siddhartha Banerjee, and Ashish Goel. 2016. Personalized pagerank estimation and search: A bidirectional approach. In WSDM. 163-172.
[26] Peter Lofgren and Ashish Goel. 2013. Personalized pagerank to a target node. arXiv (2013).
[27] Siqiang Luo, Xiaokui Xiao, Wenqing Lin, and Ben Kao. 2019. Baton: Batch one-hop personalized pageranks with efficiency and accuracy. TKDE 32, 10 (2019), 1897-1908.
[28] Siqiang Luo, Xiaokui Xiao, Wenqing Lin, and Ben Kao. 2019. Efficient batch one-hop personalized pageranks. In ICDE. 1562-1565.
[29] Charalampos Mavroforakis, Richard Garcia-Lebron, Ioannis Koutis, and Evimaria Terzi. 2015. Spanning edge centrality: Large-scale computation and applications. In WWW. 732-742.
[30] Julian J. McAuley and Jure Leskovec. 2012. Learning to discover social circles in ego networks. In NeurIPS. 548-56.
[31] Rajeev Motwani and Prabhakar Raghavan. 1995. Randomized algorithms. Cambridge University Press.
[32] Mark E. J. Newman and Michelle Girvan. 2004. Finding and evaluating community structure in networks. Physical Review E 69, 2 (2004), 026113.
[33] Beresford N. Parlett. 1998. The symmetric eigenvalue problem. SIAM.
[34] Pan Peng, Daniel Lopatta, Yuichi Yoshida, and Gramoz Goranci. 2021. Local Algorithms for Estimating Effective Resistance. In SIGKDD. 1329-1338.
[35] Benedek Rozemberczki and Rik Sarkar. 2021. Twitch Gamers: a Dataset for Evaluating Proximity Preserving and Structural Role-based Node Embeddings. arXiv:2101.03091 [cs.SI].
[36] Gerta Rücker. 2012. Network meta-analysis, electrical networks and graph theory. Research Synthesis Methods 3, 4 (2012), 312-324.
[37] Jan Scheurer and Sergio Porta. 2006. Centrality and connectivity in public transport networks and their significance for transport sustainability in cities. In World Planning Schools Congress, Global Planning Association Education Network.
[38] Jieming Shi, Renchi Yang, Tianyuan Jin, Xiaokui Xiao, and Yin Yang. 2019. Real-time top-k personalized pagerank over large graphs on gpus. PVLDB 13, 1 (2019), 15-28.
[39] Daniel A. Spielman and Nikhil Srivastava. 2008. Graph sparsification by effective resistances. In STOC. 563-568.
[40] Andreia Sofia Teixeira, Pedro T. Monteiro, João A. Carriço, Mário Ramirez, and Alexandre P. Francisco. 2013. Spanning edge betweenness. In MLG, Vol. 24. 27-31.
[41] Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan. 2006. Fast random walk with restart and its applications. In ICDM. 613-622.
[42] William Thomas Tutte. 2001. Graph theory. Vol. 21. Cambridge University Press.
[43] Hanzhi Wang, Zhewei Wei, Junhao Gan, Sibo Wang, and Zengfeng Huang. 2020. Personalized PageRank to a Target Node, Revisited. In SIGKDD. 657-667.
[44] Sibo Wang, Youze Tang, Xiaokui Xiao, Yin Yang, and Zengxiang Li. 2016. HubPPR: effective indexing for approximate personalized pagerank. PVLDB 10, 3 (2016), 205-216.
[45] Sibo Wang, Renchi Yang, Runhui Wang, Xiaokui Xiao, Zhewei Wei, Wenqing Lin, Yin Yang, and Nan Tang. 2019. Efficient algorithms for approximate single-source personalized pagerank queries. TODS 44, 4 (2019), 1-37.
[46] Sibo Wang, Renchi Yang, Xiaokui Xiao, Zhewei Wei, and Yin Yang. 2017. FORA: simple and effective approximate single-source personalized pagerank. In SIGKDD. 505-514.
[47] E. L. Wilmer, David A. Levin, and Yuval Peres. 2009. Markov chains and mixing times. American Mathematical Soc., Providence.
[48] David Bruce Wilson. 1996. Generating random spanning trees more quickly than the cover time. In STOC. 296-303.
[49] Hao Wu, Junhao Gan, Zhewei Wei, and Rui Zhang. 2021. Unifying the Global and Local Approaches: An Efficient Power Iteration with Forward Push. In SIGMOD. 1996-2008.
[50] Jaewon Yang and Jure Leskovec. 2012. Defining and Evaluating Network Communities Based on Ground-Truth. In ICDM. 745-754.
[ "https://github.com/jeremyzhangsq/AESC." ]
[ "Quantum Otto heat engines on XYZ spin working medium with DM and KSEA interactions: Operating modes and efficiency at maximal work output", "Quantum Otto heat engines on XYZ spin working medium with DM and KSEA interactions: Operating modes and efficiency at maximal work output" ]
[ "Elena I Kuznetsova ", "M A Yurischev ", "Saeed Haddadi " ]
[]
[]
The magnetic Otto thermal machine based on a two-spin-1/2 XYZ working fluid in the presence of an inhomogeneous magnetic field and antisymmetric Dzyaloshinsky-Moriya (DM) and symmetric Kaplan-Shekhtman-Entin-Wohlman-Aharony (KSEA) interactions is considered. Its possible modes of operation are found and classified. The efficiencies of engines at maximum power are estimated for various choices of model parameters. There are cases when these efficiencies exceed the Novikov value. New additional points of local minima of the total work are revealed and the mechanism of their occurrence is analyzed.
10.1007/s11128-023-03944-z
[ "https://export.arxiv.org/pdf/2301.07987v1.pdf" ]
255,999,949
2301.07987
6db23f8bfbf063fee4e6b716aeeb275f5db79bd3
Quantum Otto heat engines on XYZ spin working medium with DM and KSEA interactions: Operating modes and efficiency at maximal work output

19 Jan 2023

Elena I Kuznetsova, M A Yurischev, Saeed Haddadi

Keywords: Quantum thermodynamics · Quantum adiabaticity · Otto cycle · Carnot and Novikov efficiencies · Nonclassical correlations

Introduction

In 1955 Prokhorov and Basov [1,2,3], and later Bloembergen [4], proposed a three-level maser scheme with electromagnetic pump to obtain population inversion, which could lead to negative absorption. This scheme turned out to be very effective and was successfully implemented in masers [5,6,7] (and then in the laser [8]). Shortly after, Scovil and Schulz-DuBois came to the conclusion that "three-level masers can be regarded as heat engines", and showed that "the limiting efficiency of a 3-level maser is that of a Carnot engine" [9] (see also [10,11,12,13,14,15]). The induced (stimulated) emission in such a picture plays the role of the work output of a heat engine, which operates between a hot pump temperature and a low relaxation bath temperature.
So a three-level maser, as interpreted by Scovil and Schulz-DuBois, was the first example of a quantum heat engine and became an important step in the development of quantum thermodynamics.

Quantum thermodynamics, which grew out of the classical Carnot theory [16], is based on quantum-mechanical principles and deals with the conditions of conservation and conversion of such forms of energy as heat and mechanical work [17,18,19,20,21] (for a historical review see, e.g., [22]). An important branch of it is the study of quantum cyclic heat engines that produce work using quantum matter as a working medium.

There are many thermodynamic cycles [23,24]. One of the best known among them is the Carnot cycle. It consists of isentropic compression and expansion and isothermal heat addition and rejection. All the processes that compose the ideal Carnot engine can be reversed, in which case it becomes a heat pump or refrigerator. The Carnot cycle provides an upper limit to the efficiency that any thermodynamic engine can achieve when converting heat to work, or vice versa.

Another important cycle is the Otto one. It is an idealized thermodynamic cycle that describes the operation of a spark-ignited piston engine in automobiles. The Otto cycle consists of four processes (strokes): two adiabatic ones, where there is no heat exchange, and two isochoric ones, where there is no work exchange. Below, we will study magnetic quantum Otto cycles in which the "expansion" and "compression" of the energy levels of the thermally isolated working fluid are performed during the adiabatic processes, where work is the change in the average energy due to a change in the external control parameters of the Hamiltonian of the system.

The magnetic Otto cycles and heat machines operating on a spin quantum fluid have been studied by many researchers (see Ref. [25] and references therein). As a working substance, one takes spin magnetic systems with Heisenberg pair and multi-spin non-local collective interactions.
Much attention has been paid to cases where the spin working medium involves Dzyaloshinsky-Moriya (DM) couplings [26,27,28]. However, we are motivated to extend such studies by also including Kaplan-Shekhtman-Entin-Wohlman-Aharony (KSEA) interactions [29,30], which, in contrast to the DM ones, are symmetric.

The structure of the paper is as follows. In Sect. 2, we briefly review quantum Otto cycles composed of two quantum adiabatic stages and two isochoric couplings to thermal baths (reservoirs). In Sect. 3, we describe the model of the working medium used. Sect. 4 is devoted to the description of the results obtained and their discussion. Finally, our main results are summarized in Sect. 5.

Preliminaries

Before we start presenting our results, we should provide some necessary definitions and expressions used in this paper. Let there be a system with Hamiltonian H, and let its density operator ρ satisfy, say, the quantum Liouville-von Neumann or Lindblad master equation, or have a thermal equilibrium Gibbs form. Here we will consider the latter case, that is,

  ρ = (1/Z) exp(−βH),    (1)

where Z = Tr exp(−βH) is the partition function and β = 1/k_B T, wherein T is the temperature, and the Boltzmann constant k_B is assumed to be equal to one for simplicity. The operator ρ satisfies the following conditions: ρ† = ρ, ρ ≥ 0, and Tr ρ = 1. Next, F = −T ln Z is the Helmholtz free energy and S = −∂F/∂T denotes the entropy of the system. The internal energy of the system is given as (see, e.g., [19])

  U = ⟨H⟩ = Σ_n p_n E_n,    (2)

where E_n are the energy levels and the density-matrix eigenvalues

  p_n(T) = (1/Z(T)) exp(−E_n/T)    (3)

represent the occupation probabilities of the energy levels at the temperature T. From here, in accord with the first law of thermodynamics, it follows that during an infinitesimal process the energy change equals [31]

  dU = Σ_n (E_n dp_n + p_n dE_n) = δQ + δW,    (4)

where

  δQ = Σ_n E_n dp_n    (5)

equals the heat transferred and

  δW = Σ_n p_n dE_n    (6)

is the work done.
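As a quick numerical illustration of Eqs. (1)-(6) (a sketch, not from the paper; the spectrum values and function names are mine), the Gibbs populations and the internal energy for an arbitrary spectrum can be computed directly:

```python
import math

def gibbs_populations(levels, T):
    """Occupation probabilities p_n = exp(-E_n/T)/Z of Eq. (3) (k_B = 1)."""
    w = [math.exp(-e / T) for e in levels]
    Z = sum(w)                          # partition function, as in Eq. (1)
    return [x / Z for x in w]

def internal_energy(levels, T):
    """U = sum_n p_n E_n, Eq. (2)."""
    return sum(p * e for p, e in zip(gibbs_populations(levels, T), levels))

levels = [1.5, 0.5, -0.5, -1.5]        # hypothetical four-level spectrum
p = gibbs_populations(levels, T=1.0)
U = internal_energy(levels, T=1.0)
```

For a spectrum symmetric about zero, the lowest levels are the most populated at finite T, so U is negative, and U tends to the plain average of the levels in the high-temperature limit.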
Note that positive heat, Q > 0, means that heat is transferred to the working body, while its negative sign, Q < 0, means that heat, on the contrary, leaves the body. Similarly for the work: a positive amount of work, δW > 0, corresponds to work done on a given body by external forces, while negative work, δW < 0, means that the body itself does work on some external object.

A cycle of the quantum Otto heat machine consists of four steps (see Fig. 1), namely, two adiabatic processes, where there is no heat exchange, and two so-called isochoric (isomagnetic) ones, where there is no work exchange [17,19]. All processes are assumed to be sufficiently slow (quasistatic), so that the quantum adiabatic theorem holds [32,33,34,35,36,37]. According to this theorem, the level populations are invariant during the course of a quantum adiabatic process and, consequently, the von Neumann entropy remains unchanged. On the other hand, the classical adiabatic process is in equilibrium and characterized by a certain temperature at any given time, but it does not necessarily require that the occupation probabilities remain constant. The corresponding temperatures can be found, for example, from the Gibbs entropy invariance condition [25,38]. Notice that the classical adiabatic theorem is a consequence of the quantum adiabatic theorem, but the converse is not true in general.

Next, the cycle node with the lowest temperature T_c can naturally be associated with the cold bath, and the node with the highest temperature T_h with the hot one. Accordingly, the heat taken from or given to the cold and hot baths will be denoted as Q_c and Q_h, respectively.

Let us now consider in detail the Otto cycle shown in Fig. 1. It includes the following four strokes.

First stroke (AB). The working medium, at thermal equilibrium with the cold bath at A at the temperature T_A = T_c, is isolated from the thermal reservoir and undergoes an adiabatic compression (magnetization).
Energy level parameters (spacings between energy levels) are increased, but the occupation probabilities stay unchanged. The work W_{A→B} = W_in is performed on the working medium during this step:

  W_{A→B} = Σ_n ∫_A^B p_n dE_n = Σ_n p_n^A (E_n^f − E_n^i),    (7)

where E_n^i and E_n^f are the initial and final values of the energy levels, respectively, and p_n^A = p_n(T_A) is the occupation probability at the temperature of point A.

Second stroke (BC). The system is brought into thermal contact with the hot bath at C with its energy structure unchanged. This process is irreversible, and the occupation probabilities change to new equilibrium values. Only heat Q_{B→C} = Q_h is transferred in this step:

  Q_{B→C} = Σ_n ∫_B^C E_n^f dp_n = Σ_n E_n^f (p_n^C − p_n^B),    (8)

where p_n^B = p_n(T_A) and p_n^C = p_n(T_C) are the initial and final values of the occupation probabilities.

Third stroke (CD). This is another adiabatic (demagnetization) process, reducing the energy gaps to their initial values. Here, the external control parameters of the system are changed back to the initial values and the occupation probabilities remain fixed. Only work W_{C→D} = W_out is performed by the working medium and no heat is exchanged:

  W_{C→D} = Σ_n ∫_C^D p_n dE_n = Σ_n p_n^C (E_n^i − E_n^f).    (9)

Fourth stroke (DA). The system is brought into thermal contact with the cold bath at node A. Again, no work is done; only heat Q_c is rejected during this isochoric process:

  Q_{D→A} = Σ_n ∫_D^A E_n^i dp_n = Σ_n E_n^i (p_n^A − p_n^D).    (10)

Since energy is conserved in a cyclic process, the balance condition is satisfied:

  W_{A→B} + Q_{B→C} + W_{C→D} + Q_{D→A} = 0    (11)

or

  W + Q_h + Q_c = 0,    (12)

where

  W = W_{A→B} + W_{C→D}    (13)

is the total work.
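The four-stroke bookkeeping of Eqs. (7)-(13) can be sketched for an arbitrary spectrum as follows (an illustration under my own choice of hypothetical spectra and bath temperatures, not the paper's model yet; note p^D = p^C and p^B = p^A on the adiabats):

```python
import math

def populations(levels, T):
    """Gibbs occupation probabilities at temperature T (k_B = 1)."""
    w = [math.exp(-e / T) for e in levels]
    Z = sum(w)
    return [x / Z for x in w]

def otto_strokes(E_i, E_f, Tc, Th):
    """Work and heat of the four strokes, Eqs. (7)-(10)."""
    pA = populations(E_i, Tc)   # node A: thermalized with the cold bath
    pC = populations(E_f, Th)   # node C: thermalized with the hot bath
    W_in = sum(p * (ef - ei) for p, ei, ef in zip(pA, E_i, E_f))    # Eq. (7)
    Q_h = sum(ef * (pc - pa) for ef, pc, pa in zip(E_f, pC, pA))    # Eq. (8)
    W_out = sum(p * (ei - ef) for p, ei, ef in zip(pC, E_i, E_f))   # Eq. (9)
    Q_c = sum(ei * (pa - pc) for ei, pa, pc in zip(E_i, pA, pC))    # Eq. (10)
    return W_in, Q_h, W_out, Q_c

E_i = [-1.0, -0.2, 0.2, 1.0]          # hypothetical initial spectrum
E_f = [1.8 * e for e in E_i]          # gaps uniformly enlarged (q = 1.8)
W_in, Q_h, W_out, Q_c = otto_strokes(E_i, E_f, Tc=1.0, Th=2.0)
W = W_in + W_out                      # total work, Eq. (13)
```

For a uniform gap scaling q below T_h/T_c, this choice lands in the engine regime (W < 0, Q_h > 0, Q_c < 0), and the energy balance of Eqs. (11)-(12) holds identically.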
If W < 0, the thermodynamical machine produces mechanical work |W| = Q_h + Q_c with energy absorption Q_h > 0 and energy release Q_c < 0, i.e., it corresponds to a heat engine. The efficiency of a heat engine is defined as

  η = |W|/Q_h.    (14)

In particular, the efficiency of an ideal Carnot cycle is given by the well-known expression

  η_C = 1 − T_c/T_h,    (15)

which is the upper bound for any thermodynamic cycle.

A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. Then the system may be worked upon by an external force and, in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump ({→↓→}) rather than a heat engine. In this case, the operation of the heat machine is characterized by a coefficient of performance (CoP), which is defined as

  CoP = Q_c/W.    (16)

This completes the preliminary section, and now we move on to the description of the working substance.

Working medium

As a working medium, we consider a two-site spin-1/2 system with the Hamiltonian

  H = J_x σ_1^x σ_2^x + J_y σ_1^y σ_2^y + J_z σ_1^z σ_2^z + D_z (σ_1^x σ_2^y − σ_1^y σ_2^x) + Γ_z (σ_1^x σ_2^y + σ_1^y σ_2^x) + B_1 σ_1^z + B_2 σ_2^z,    (17)

where σ_i^α (i = 1, 2; α = x, y, z) are the Pauli matrices, B_1 and B_2 the z-components of the external magnetic fields applied at the first and second qubits, respectively, (J_x, J_y, J_z) the vector of interaction constants of the Heisenberg part of the interaction, D_z the z-component of the Dzyaloshinsky vector, and Γ_z the strength of the KSEA interaction. Thus, this model contains seven real independent parameters: B_1, B_2, J_x, J_y, J_z, D_z, and Γ_z. In open matrix form, the Hamiltonian (17) reads

  H = [ J_z + B_1 + B_2      .                    .                    J_x − J_y − 2iΓ_z
        .                   −J_z + B_1 − B_2     J_x + J_y + 2iD_z    .
        .                    J_x + J_y − 2iD_z  −J_z − B_1 + B_2      .
        J_x − J_y + 2iΓ_z    .                    .                    J_z − B_1 − B_2 ]    (18)

with the dots put instead of zero entries. This Hermitian matrix has X form. Its eigenvalues are equal to

  E_{1,2} = J_z ± R_1,  E_{3,4} = −J_z ± R_2,    (19)

where

  R_1 = [(B_1 + B_2)^2 + (J_x − J_y)^2 + 4Γ_z^2]^{1/2},  R_2 = [(B_1 − B_2)^2 + (J_x + J_y)^2 + 4D_z^2]^{1/2}.    (20)

Thus, the energy spectrum of the working medium consists of two pairs of levels with energy shifts R_1 and R_2. Therefore, instead of seven parameters, the spectrum is determined by only three quantities: J_z, R_1, and R_2. Note that Γ_z occurs only in R_1, while D_z occurs only in R_2; i.e., R_1 is the effective Γ_z-parameter, and R_2, on the contrary, is the parameter determined by D_z.

The Gibbs density matrix is given by Eq. (1), and the partition function Z = Σ_n exp(−βE_n) for the considered model is expressed as

  Z = 2[e^{−βJ_z} cosh(βR_1) + e^{βJ_z} cosh(βR_2)].    (21)

Therefore, the Gibbs entropy equals

  S(T; J_z, R_1, R_2) = −(1/Z) { [(R_1 − J_z)/T] e^{(R_1 − J_z)/T} − [(R_1 + J_z)/T] e^{−(R_1 + J_z)/T} + [(R_2 + J_z)/T] e^{(R_2 + J_z)/T} − [(R_2 − J_z)/T] e^{−(R_2 − J_z)/T} } + ln Z.    (22)

On the other hand, using Eq. (3), the von Neumann entropy

  S = −⟨ln ρ⟩ = −Tr ρ ln ρ = −Σ_n p_n ln p_n    (23)

again leads to the Gibbs entropy expression (22). Finally, using the general relations (7)-(10) together with (3) and (19), and taking into account the quantum adiabatic theorem, we arrive at the equations for a heat engine with the considered working medium.
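Because the X-shaped matrix (18) decouples into two Hermitian 2x2 blocks, the spectrum (19)-(20) can be checked numerically with nothing beyond the 2x2 eigenvalue formula (a sketch; the parameter values below are hypothetical and the function name is mine):

```python
import math

# hypothetical parameter values for illustration
Jx, Jy, Jz, Dz, Gz, B1, B2 = 0.7, 0.4, 0.3, 0.5, 0.2, 0.9, 0.6

# the 4x4 matrix of Eq. (18); zeros where the paper prints dots
H = [
    [Jz + B1 + B2, 0, 0, Jx - Jy - 2j * Gz],
    [0, -Jz + B1 - B2, Jx + Jy + 2j * Dz, 0],
    [0, Jx + Jy - 2j * Dz, -Jz - B1 + B2, 0],
    [Jx - Jy + 2j * Gz, 0, 0, Jz - B1 - B2],
]

def herm2_eigs(a, z, d):
    """Eigenvalues of the Hermitian 2x2 block [[a, z], [conj(z), d]]."""
    mean, half = (a + d) / 2.0, (a - d) / 2.0
    r = math.sqrt(half * half + abs(z) ** 2)
    return mean + r, mean - r

# outer block: rows/columns 1 and 4; inner block: rows/columns 2 and 3
outer = herm2_eigs(H[0][0], H[0][3], H[3][3])
inner = herm2_eigs(H[1][1], H[1][2], H[2][2])

# closed-form level shifts of Eq. (20)
R1 = math.sqrt((B1 + B2) ** 2 + (Jx - Jy) ** 2 + 4 * Gz ** 2)
R2 = math.sqrt((B1 - B2) ** 2 + (Jx + Jy) ** 2 + 4 * Dz ** 2)
```

The outer block reproduces J_z ± R_1 and the inner block reproduces −J_z ± R_2, in agreement with Eq. (19).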
For the adiabatic (isentropic) strokes, the equations are given as

  W_in = { (J_z^f − J_z^i) [cosh(R_1^i/T_c) e^{−J_z^i/T_c} − cosh(R_2^i/T_c) e^{J_z^i/T_c}] − (R_1^f − R_1^i) sinh(R_1^i/T_c) e^{−J_z^i/T_c} − (R_2^f − R_2^i) sinh(R_2^i/T_c) e^{J_z^i/T_c} } / { cosh(R_1^i/T_c) e^{−J_z^i/T_c} + cosh(R_2^i/T_c) e^{J_z^i/T_c} }    (24)

and

  W_out = { (J_z^i − J_z^f) [cosh(R_1^f/T_h) e^{−J_z^f/T_h} − cosh(R_2^f/T_h) e^{J_z^f/T_h}] − (R_1^i − R_1^f) sinh(R_1^f/T_h) e^{−J_z^f/T_h} − (R_2^i − R_2^f) sinh(R_2^f/T_h) e^{J_z^f/T_h} } / { cosh(R_1^f/T_h) e^{−J_z^f/T_h} + cosh(R_2^f/T_h) e^{J_z^f/T_h} }.    (25)

The net work done during a cycle is W = W_in + W_out.

Similarly for the isochoric strokes. The quantities of heat exchanged between the working agent and the hot and cold reservoirs, respectively, are

  Q_h = { [J_z^f cosh(R_1^f/T_h) − R_1^f sinh(R_1^f/T_h)] e^{−J_z^f/T_h} − [J_z^f cosh(R_2^f/T_h) + R_2^f sinh(R_2^f/T_h)] e^{J_z^f/T_h} } / { cosh(R_1^f/T_h) e^{−J_z^f/T_h} + cosh(R_2^f/T_h) e^{J_z^f/T_h} } − { [J_z^f cosh(R_1^i/T_c) − R_1^f sinh(R_1^i/T_c)] e^{−J_z^i/T_c} − [J_z^f cosh(R_2^i/T_c) + R_2^f sinh(R_2^i/T_c)] e^{J_z^i/T_c} } / { cosh(R_1^i/T_c) e^{−J_z^i/T_c} + cosh(R_2^i/T_c) e^{J_z^i/T_c} }    (26)

and

  Q_c = { [J_z^i cosh(R_1^i/T_c) − R_1^i sinh(R_1^i/T_c)] e^{−J_z^i/T_c} − [J_z^i cosh(R_2^i/T_c) + R_2^i sinh(R_2^i/T_c)] e^{J_z^i/T_c} } / { cosh(R_1^i/T_c) e^{−J_z^i/T_c} + cosh(R_2^i/T_c) e^{J_z^i/T_c} } − { [J_z^i cosh(R_1^f/T_h) − R_1^i sinh(R_1^f/T_h)] e^{−J_z^f/T_h} − [J_z^i cosh(R_2^f/T_h) + R_2^i sinh(R_2^f/T_h)] e^{J_z^f/T_h} } / { cosh(R_1^f/T_h) e^{−J_z^f/T_h} + cosh(R_2^f/T_h) e^{J_z^f/T_h} }.    (27)

The presented equations make it possible to investigate the quantum Otto heat engine in general and in various interesting special cases. Although nonclassical correlations are initially present in the quantum state ρ with Hamiltonian (17), the transition to the diagonal energy representation, where the heat engine is analyzed, completely destroys any quantumness of correlations.

Results and discussion

Using the above equations, we will now study the operation of the quantum Otto machine in different modes.
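Each of the closed forms (24)-(27) is a difference of thermal averages of one spectrum over the Gibbs populations of another, so they can be implemented with a single helper and cross-checked against the energy balance (12) (a sketch with hypothetical parameter values; the function names are mine):

```python
import math

def spin_otto(Jzi, Jzf, R1i, R1f, R2i, R2f, Tc, Th):
    """Per-cycle work and heat from the closed forms (24)-(27)."""
    def avg(JzL, R1L, R2L, JzP, R1P, R2P, T):
        # thermal average of the spectrum (JzL, R1L, R2L) over Gibbs
        # populations of the spectrum (JzP, R1P, R2P) at temperature T
        wm, wp = math.exp(-JzP / T), math.exp(JzP / T)
        Z = wm * math.cosh(R1P / T) + wp * math.cosh(R2P / T)
        num = (wm * (JzL * math.cosh(R1P / T) - R1L * math.sinh(R1P / T))
               - wp * (JzL * math.cosh(R2P / T) + R2L * math.sinh(R2P / T)))
        return num / Z
    Win = (avg(Jzf, R1f, R2f, Jzi, R1i, R2i, Tc)       # Eq. (24)
           - avg(Jzi, R1i, R2i, Jzi, R1i, R2i, Tc))
    Wout = (avg(Jzi, R1i, R2i, Jzf, R1f, R2f, Th)      # Eq. (25)
            - avg(Jzf, R1f, R2f, Jzf, R1f, R2f, Th))
    Qh = (avg(Jzf, R1f, R2f, Jzf, R1f, R2f, Th)        # Eq. (26)
          - avg(Jzf, R1f, R2f, Jzi, R1i, R2i, Tc))
    Qc = (avg(Jzi, R1i, R2i, Jzi, R1i, R2i, Tc)        # Eq. (27)
          - avg(Jzi, R1i, R2i, Jzf, R1f, R2f, Th))
    return Win, Wout, Qh, Qc

# hypothetical parameters: all gaps enlarged on the cold-to-hot adiabat
Win, Wout, Qh, Qc = spin_otto(0.3, 0.5, 1.0, 1.7, 0.6, 0.9, Tc=1.0, Th=2.0)
balance = Win + Wout + Qh + Qc        # Eq. (11): must vanish
```

Setting J_z = R_2 = 0 reduces these expressions to the three-level formulas discussed in the next section.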
Fig. 3 Otto cycles in the plane R_1-T at J_z = R_2 = 0 for R_1^f ≥ R_1^i (a) and R_1^f ≤ R_1^i (b). The green trapezoids correspond to engine cycles, and the blue ones represent refrigerator cycles. Other details are described in the text.

A three-level system

Let us start with a simple case, namely, when J_z = 0 and one of R_i (i = 1, 2) is also equal to zero. Without loss of generality, we set R_2 = 0. In this case, the energy spectrum consists of three levels: the doublet E_{1,2} = ±R_1 and the doubly degenerate zero-energy level E_{3,4} = 0. It is important to note that here the form of the spectrum is invariant under the scale transformation E_n^f = q E_n^i, where q is independent of n. This property is a necessary and sufficient condition for the quantum adiabatic theorem to reduce to its classical counterpart [36]. Thus, this is the case when the system is quantum but the adiabaticity condition is classical, i.e., the quantum state at each point of the quantum adiabatic process is the state of thermal equilibrium with respect to the Hamiltonian at the given point. On the other hand, the Gibbs entropy (22) for the case under discussion reduces to

  S(T/R_1) = 2 { ln[2 cosh(R_1/2T)] − (R_1/2T) tanh(R_1/2T) },    (28)

i.e., it is a function of only one variable. Then the adiabaticity condition is R_1/T = const and, therefore, the adiabatic curves in the plane R_1-T are straight lines passing through the origin of the coordinate system.

Otto cycles in the plane R_1-temperature are shown in Fig. 3. If the final value of R_1 is equal to the initial value, R_1^f = R_1^i, then the cycle contracts into a segment of a horizontal straight line from temperature T_c to T_h (AE in Fig. 3a and ED in Fig. 3b). When R_1^f starts to increase, the cycle ABCD appears that goes clockwise and has adiabatic AB and CD and isochoric BC and DA strokes (green trapezoid in Fig. 3a). The nodes A and C correspond here to the cold and hot reservoirs: T_A = T_c and T_C = T_h.
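The closed form (28) and its dependence on the single variable R_1/T can be checked directly against the definition S = −Σ_n p_n ln p_n for the levels {+R_1, −R_1, 0, 0} (a sketch; function names mine):

```python
import math

def entropy_three_level(R1, T):
    """Gibbs entropy -sum_n p_n ln p_n for levels {+R1, -R1, 0, 0} (k_B = 1)."""
    levels = [R1, -R1, 0.0, 0.0]
    w = [math.exp(-e / T) for e in levels]
    Z = sum(w)
    return -sum((x / Z) * math.log(x / Z) for x in w)

def entropy_closed_form(R1, T):
    """Eq. (28): S depends on R1 and T only through the ratio R1/T."""
    y = R1 / (2.0 * T)
    return 2.0 * (math.log(2.0 * math.cosh(y)) - y * math.tanh(y))
```

Scale invariance, S(R_1, T) = S(qR_1, qT), is exactly the statement that the adiabats R_1/T = const are straight lines through the origin.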
The adiabaticity conditions allow us to express the temperatures of the other two nodes through the temperatures of the cold and hot reservoirs: $T_B = T_c R_1^f/R_1^i$ and $T_D = T_h R_1^i/R_1^f$. Because of this, the net work performed during the whole cycle is given as

$$W = (R_1^f - R_1^i)\left[\tanh\frac{R_1^f}{2T_h} - \tanh\frac{R_1^i}{2T_c}\right]. \qquad (29)$$

This work equals zero if $R_1^f = R_1^i$ or $R_1^f = (T_h/T_c)R_1^i$. On the other hand, $Q_h$ and $Q_c$ are given as

$$Q_h = R_1^f\left[\tanh\frac{R_1^i}{2T_c} - \tanh\frac{R_1^f}{2T_h}\right] \qquad (30)$$

and

$$Q_c = R_1^i\left[\tanh\frac{R_1^f}{2T_h} - \tanh\frac{R_1^i}{2T_c}\right]. \qquad (31)$$

Both $Q_h$ and $Q_c$ vanish at the same boundary $R_1^f = (T_h/T_c)R_1^i$ as $W$. Below this line, $Q_h > 0$ and $Q_c < 0$. As a result, the region $0 < R_1^f < (T_h/T_c)R_1^i$ corresponds to the heat-engine regime; see the green domain II in Fig. 4. The structure of the isolines of the net work $W$ in this domain is depicted in Fig. 5. Taking into account the definition (14) and Eqs. (29)-(30), we get the efficiency of the given Otto heat engine:

$$\eta = 1 - \frac{R_1^i}{R_1^f}. \qquad (32)$$

Since $R_1^f < (T_h/T_c)R_1^i$, the value found is less than the Carnot efficiency (15). This agrees with the Carnot theorem (principle) known from classical thermodynamics. According to this theorem, no heat engine operating on a cycle between two heat reservoirs can be more efficient than a reversible heat engine operating between the same two reservoirs, regardless of the working substance employed or the operation details; the Carnot efficiency (15) is the upper limit that does not depend on the design of the engine (see, e.g., [39], Chapt. 44).

The efficiency (32) is zero at $R_1^f = R_1^i$. When $R_1^f$ reaches the value $(T_h/T_c)R_1^i$, the Otto cycle is reduced to a section of a straight line between the points $A$ and $F$, as shown in Fig. 3a. The efficiency of such a "cycle" reaches the Carnot efficiency of 50%; however, the total work $W$, Eq. (29), vanishes here.

Further, if $R_1^f > (T_h/T_c)R_1^i$, the cycle transforms into the trapezoid $AB'C'D'$, shown in Fig. 3a as the blue region II. Note first of all that the direction of such a cycle is changed to the opposite one. Moreover, the minimum temperature is now at the node $D'$ and equals $T_c' = (R_1^i/R_1^f)T_h$, while the maximum one is at the node $B'$ and equals $T_h' = (R_1^f/R_1^i)T_c$. It is clear that $T_c' < T_c$, $T_h' > T_h$ and $T_h'/T_c' > T_h/T_c$. Hence, the total work

$$W = (R_1^f - R_1^i)\left[\tanh\frac{R_1^f}{2T_h} - \tanh\frac{R_1^i}{2T_c}\right] = (R_1^f - R_1^i)\left[\tanh\frac{R_1^i}{2T_c'} - \tanh\frac{R_1^f}{2T_h'}\right] > 0. \qquad (33)$$

The heats of the cold and hot strokes are given as

$$Q_c \equiv Q_{D' \to A} = R_1^i\left[\tanh\frac{R_1^f}{2T_h} - \tanh\frac{R_1^i}{2T_c}\right] = R_1^i\left[\tanh\frac{R_1^i}{2T_c'} - \tanh\frac{R_1^f}{2T_h'}\right] > 0 \qquad (34)$$

and

$$Q_h \equiv Q_{B' \to C'} = R_1^f\left[\tanh\frac{R_1^i}{2T_c} - \tanh\frac{R_1^f}{2T_h}\right] = R_1^f\left[\tanh\frac{R_1^f}{2T_h'} - \tanh\frac{R_1^i}{2T_c'}\right] < 0. \qquad (35)$$

This regime corresponds to the refrigerator mode (the blue region I in Fig. 4).

We now discuss the cases when $R_1^f$ is less than $R_1^i$. If $(T_c/T_h)R_1^i < R_1^f < R_1^i$, the typical cycle can be represented by the trapezoid $ABCD$ shown in Fig. 3b as the blue region I. The cycle runs counterclockwise, and the cold and hot nodes are $B$ and $D$, respectively. The total work and heats are given by the expressions

$$W = (R_1^i - R_1^f)\left[\tanh\frac{R_1^f}{2T_c} - \tanh\frac{R_1^i}{2T_h}\right] > 0, \qquad (36)$$

$$Q_c = R_1^f\left[\tanh\frac{R_1^f}{2T_c} - \tanh\frac{R_1^i}{2T_h}\right] > 0 \qquad (37)$$

and

$$Q_h = R_1^i\left[\tanh\frac{R_1^i}{2T_h} - \tanh\frac{R_1^f}{2T_c}\right] < 0. \qquad (38)$$

This is again the cooling mode of the Otto thermal machine: $\{\to\downarrow\to\}$. For example, the coefficient of performance at the point of maximum total work in this case (see Fig. 4, the point marked with the symbol "×") reaches the value CoP = 2.37. When $R_1^f = (T_c/T_h)R_1^i$, the "cycle" is the straight-line section $DF$; here $W = Q_c = Q_h = 0$. Finally, if $R_1^f < (T_c/T_h)R_1^i$, the cycle is $DA'B'C'$, shown as the green trapezoid II in Fig. 3b. In this case $W < 0$, $Q_c < 0$ and $Q_h > 0$, and therefore the heat-engine regime is realized here. In Fig. 4, the corresponding area is labeled IV and shown in green.

As mentioned above, the efficiency of the discussed Otto engine can reach the upper limit, namely the Carnot efficiency. However, in this case the total work performed is zero, and therefore such an "engine" is useless.
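These expressions can be checked numerically (a sketch of ours, not part of the original text): Eqs. (29)-(31) satisfy the first law identically, and the point (2.86075, 4.06548), marked '+' in Fig. 4, reproduces the quoted work minimum and the 29.6% efficiency of Eq. (32). Here `Ri` and `Rf` stand for $R_1^i$ and $R_1^f$.

```python
from math import tanh

def work(Ri, Rf, Tc, Th):
    """Eq. (29): net work per Otto cycle for Jz = R2 = 0."""
    return (Rf - Ri) * (tanh(Rf / (2 * Th)) - tanh(Ri / (2 * Tc)))

def heat_hot(Ri, Rf, Tc, Th):
    """Eq. (30): heat exchanged with the hot bath."""
    return Rf * (tanh(Ri / (2 * Tc)) - tanh(Rf / (2 * Th)))

def heat_cold(Ri, Rf, Tc, Th):
    """Eq. (31): heat exchanged with the cold bath."""
    return Ri * (tanh(Rf / (2 * Th)) - tanh(Ri / (2 * Tc)))

Tc, Th = 1.0, 2.0
Ri, Rf = 2.86075, 4.06548          # the '+' point of Fig. 4

W, Qh, Qc = (f(Ri, Rf, Tc, Th) for f in (work, heat_hot, heat_cold))

assert abs(W + Qh + Qc) < 1e-12    # first law: the flows balance over a cycle
assert Qh > 0 > Qc and W < 0       # engine regime, since Rf < (Th/Tc)*Ri

print(round(W, 4), round(1 - Ri / Rf, 3))   # about -0.1486 and 0.296 (Eq. (32): 29.6%)
```

The first-law identity holds because the same tanh bracket multiplies $(R_1^f - R_1^i)$, $-R_1^f$ and $R_1^i$ in Eqs. (29)-(31).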
It is of interest to find the efficiency of engines at their maximum power (work per cycle). In 1957, Novikov [40] considered a generalized Carnot engine taking into account the heat loss from the hot bath to the working fluid ($\{\leftarrow\uparrow\leftarrow\,\lhd\}$, where the triangle $\lhd$ denotes a lossy heat conductor) and derived a remarkable formula for the efficiency at maximum power of such an engine (Eq. (7) in Ref. [40] and Eq. (6) in Ref. [41]):

$$\eta_N = 1 - \sqrt{T_c/T_h}. \qquad (39)$$

(In connection with the problem of optimal efficiency of engines, see Ref. [42].) Eighteen years later, Curzon and Ahlborn [43,44] (see also [45,46]) considered a Carnot engine with losses both from the hot bath to the working fluid and from the working fluid to the cold bath, $\{\lhd\,\leftarrow\uparrow\leftarrow\,\lhd\}$, and obtained the same result for the efficiency. This gave an impetus to the development of endoreversible thermodynamics [19,47]. It turned out that the efficiency (39) is the benchmark for the efficiency $\eta_{mp}$ of any real running engine at maximum power. Therefore, it is interesting to compare the efficiency at maximal power of the Otto engine with the Novikov efficiency.

The efficiency of the engine operating between the bath temperatures $T_c = 1$ and $T_h = 2$ at the point of maximum work done ($|W| = 0.148615$, see Figs. 4 and 5) equals 29.6%. This is less than the Carnot efficiency of 50%, but larger than the Novikov efficiency, which equals 29.3%. A similar picture also holds for the other temperatures presented in Table 1. As seen from Table 1, both the useful work and the efficiency grow with increasing temperature difference between the reservoirs. Note that these $\eta_{mp}$ values are well reproduced by Novikov's formula and, moreover, are somewhat greater than it provides. A similar increase in the efficiency at maximum power has recently been obtained for a photonic engine [48].

Thus, the Otto thermal machine with a spin working substance with $J_z = 0$ and one of the two parameters $R_1$ and $R_2$ equal to zero can operate either as an engine or as a refrigerator.
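The $\eta_N$ column of Table 1 can be reproduced directly from Eq. (39) (a quick check of ours, not from the paper); the value 29.3% for $T_h/T_c = 2$, used repeatedly in the text, is $1 - \sqrt{1/2}$.

```python
from math import sqrt, isclose

def eta_novikov(Tc, Th):
    """Eq. (39): efficiency at maximum power of the Novikov (Curzon-Ahlborn) engine."""
    return 1 - sqrt(Tc / Th)

Tc = 1.0
# Th -> eta_N as quoted in Table 1
table1 = {3.0: 0.423, 2.5: 0.368, 2.0: 0.293, 1.5: 0.1835}
for Th, eta_tab in table1.items():
    assert isclose(eta_novikov(Tc, Th), eta_tab, abs_tol=1e-3)
    # Eq. (39) always lies below the Carnot bound, Eq. (15)
    assert eta_novikov(Tc, Th) < 1 - Tc / Th
```

The square root in Eq. (39) is essential: without it the formula would coincide with the Carnot efficiency and would not match the 29.3% and 18.35% entries of Table 1.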
The efficiency at maximum output power is limited from above by the Carnot bound and from below by the Novikov efficiency.

Two local minima of the net work done

In this section, we extend the case described above and consider the parameter $R_2$ as a constant not equal to zero. So $J_z = 0$, $R_2 = \mathrm{const}$, and $R_1 \in [R_1^i, R_1^f]$. From Eqs. (24) and (25), it follows that the net work done during a cycle is given as

$$W = (R_1^f - R_1^i)\left[\frac{\sinh(R_1^f/T_h)}{\cosh(R_1^f/T_h) + \cosh(R_2/T_h)} - \frac{\sinh(R_1^i/T_c)}{\cosh(R_1^i/T_c) + \cosh(R_2/T_c)}\right]. \qquad (40)$$

Next, in accordance with Eqs. (26) and (27), the heat $Q_h$ is

$$Q_h = \frac{R_1^f\sinh(R_1^i/T_c) + R_2\sinh(R_2/T_c)}{\cosh(R_1^i/T_c) + \cosh(R_2/T_c)} - \frac{R_1^f\sinh(R_1^f/T_h) + R_2\sinh(R_2/T_h)}{\cosh(R_1^f/T_h) + \cosh(R_2/T_h)} \qquad (41)$$

and similarly for $Q_c$:

$$Q_c = \frac{R_1^i\sinh(R_1^f/T_h) + R_2\sinh(R_2/T_h)}{\cosh(R_1^f/T_h) + \cosh(R_2/T_h)} - \frac{R_1^i\sinh(R_1^i/T_c) + R_2\sinh(R_2/T_c)}{\cosh(R_1^i/T_c) + \cosh(R_2/T_c)}. \qquad (42)$$

The boundaries separating the regions with $W > 0$ and $W < 0$ are found from the condition $W = 0$. It is obvious from (40) that one boundary is again the diagonal

$$R_1^f = R_1^i, \qquad (43)$$

while the other boundary is determined by the relation

$$R_1^f = T_h \ln\left[\frac{1}{1-\gamma}\left(\gamma\cosh\frac{R_2}{T_h} + \sqrt{1 + \gamma^2\sinh^2\frac{R_2}{T_h}}\right)\right], \qquad (44)$$

where

$$\gamma = \frac{\sinh(R_1^i/T_c)}{\cosh(R_1^i/T_c) + \cosh(R_2/T_c)}. \qquad (45)$$

It is clear that $R_1^f = 0$ at $R_1^i = 0$. For small $R_1^i$, the dependence (44) behaves like

$$R_1^f \approx \kappa R_1^i, \qquad (46)$$

where

$$\kappa = \frac{T_h}{T_c}\left[\frac{\cosh(R_2/(2T_h))}{\cosh(R_2/(2T_c))}\right]^2. \qquad (47)$$

For $\kappa = 1$, these two boundaries touch near small $R_1^i$. On the other hand, when $R_1^i \to \infty$, the function $R_1^f$ of $R_1^i$, Eq. (44), satisfies the linear asymptotic law

$$R_1^f \approx T_h \ln\frac{\cosh(R_2/T_h)}{\cosh(R_2/T_c)} + \frac{T_h}{T_c}\,R_1^i. \qquad (48)$$

Thus, for large $R_1^i$, the values of $R_1^f$ again follow, as in the previous subsection, the linear dependence $R_1^f = (T_h/T_c)R_1^i$, but now shifted.
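Equations (40), (44), (45) and (47) fit together consistently; a small numerical check (ours, not the paper's) confirms that the boundary (44) indeed zeroes the work (40), that $R_1^f = 0$ at $R_1^i = 0$, and that the small-$R_1^i$ slope of the boundary matches $\kappa$ of Eq. (47).

```python
from math import sinh, cosh, sqrt, log

def work_R2(Ri, Rf, R2, Tc, Th):
    """Eq. (40): net work for Jz = 0 and fixed nonzero R2."""
    hot = sinh(Rf / Th) / (cosh(Rf / Th) + cosh(R2 / Th))
    cold = sinh(Ri / Tc) / (cosh(Ri / Tc) + cosh(R2 / Tc))
    return (Rf - Ri) * (hot - cold)

def boundary(Ri, R2, Tc, Th):
    """Eq. (44): the non-trivial W = 0 boundary, with gamma from Eq. (45)."""
    g = sinh(Ri / Tc) / (cosh(Ri / Tc) + cosh(R2 / Tc))
    ch = cosh(R2 / Th)
    return Th * log((g * ch + sqrt(1 + g * g * (ch * ch - 1))) / (1 - g))

def kappa(R2, Tc, Th):
    """Eq. (47): small-Ri slope of the boundary (44)."""
    return (Th / Tc) * (cosh(R2 / (2 * Th)) / cosh(R2 / (2 * Tc))) ** 2

Tc, Th, R2 = 1.0, 2.0, 1.8
for Ri in (0.5, 2.0, 5.0):
    # substituting the boundary back into Eq. (40) must give zero work
    assert abs(work_R2(Ri, boundary(Ri, R2, Tc, Th), R2, Tc, Th)) < 1e-10
# the slope near the origin approaches kappa, Eq. (46)
assert abs(boundary(1e-6, R2, Tc, Th) / 1e-6 - kappa(R2, Tc, Th)) < 1e-3
```

At $R_2 = 0$ the slope (47) reduces to $T_h/T_c$, recovering the boundary $R_1^f = (T_h/T_c)R_1^i$ of the previous subsection.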
For bath temperatures $T_c = 1$ and $T_h = 2$, the slope coefficient (47) reaches the critical value $\kappa_c = 1$ at

$$R_2^{(c)} = 4\ln\left[\frac{1}{2}\left(\frac{1+\sqrt{5}}{\sqrt{2}} + \sqrt{\sqrt{5}-1}\right)\right] \simeq 2.12255.$$

When $R_2 < R_2^{(c)}$, the engine mode has only one local minimum of the work $W$ (see Fig. 6a). Here, both $R_1^i$ and $R_1^f$ are greater than $R_2$, and therefore there is no energy-level crossing. However, for $R_2 > R_2^{(c)}$, the curve 2 forms a loop, inside which a second minimum appears (see Fig. 6b). In it, $R_1^i$ and $R_1^f$ are less than $R_2$, which means that again there is no energy-level crossing.

The engine efficiencies, defined by Eq. (14), at the two minima shown in Fig. 6b are $\eta_\bullet = 26.3\%$ and $\eta_+ = 0.58\%$. Both of these values are less than the Novikov efficiency of 29.3%. So the presence of two minima of optimal engine operating modes at once is a rather interesting situation, but it is not yet clear where and how it can be used in practice.

Three-parameter energy spectrum

We now turn to the quantum Otto machine whose working body is described by the Hamiltonian (17) with all interactions. The energy levels are characterized by the three parameters $J_z$, $R_1$, and $R_2$. Let the longitudinal exchange coupling $J_z$ vary within $J_z^i$ and $J_z^f$, while the parameters $R_1$ and $R_2$ remain unchanged during a cycle, that is, $R_1^i = R_1^f = R_1$ and $R_2^i = R_2^f = R_2$. Equations (24) and (25) for the net work done take the form

$$W = (J_z^f - J_z^i)\,[\cdots]. \qquad (49)$$

Setting $Q_c = 0$, we obtain an explicit expression for the fourth boundary:

$$J_z^f = \frac{1}{2}\,T_h\ln\left[\frac{\cosh(R_1/T_h)}{\cosh(R_2/T_h)}\cdot\frac{J_z^i - R_1\tanh(R_1/T_h) - A/2}{J_z^i + R_2\tanh(R_2/T_h) + A/2}\right]. \qquad (57)$$

Thus, the mathematical tools are ready, and we can proceed to study the operating modes of the heat engine. Consider, for instance, a spin working medium with the parameters $R_1 = 0.7$ and $R_2 = 2$, located between thermal reservoirs at temperatures $T_c = 1$ and $T_h = 1.5$. Lines 1, 2, 3 and 4, defined by Eqs. (50), (51), (54) and (57), divide the plane $J_z^i$-$J_z^f$ into several regions, as drawn in Fig. 7.
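The quoted critical value $R_2^{(c)}$ can be verified directly (a check of ours, not from the paper): evaluating the closed form above and substituting it into the slope (47) for $T_c = 1$, $T_h = 2$ gives $\kappa = 1$, with $\kappa > 1$ below and $\kappa < 1$ above the critical point.

```python
from math import sqrt, log, cosh

Tc, Th = 1.0, 2.0

def kappa(R2):
    """Eq. (47) specialized to Tc = 1, Th = 2."""
    return (Th / Tc) * (cosh(R2 / (2 * Th)) / cosh(R2 / (2 * Tc))) ** 2

# Closed form of R2^(c) quoted in the text
R2c = 4 * log(((1 + sqrt(5)) / sqrt(2) + sqrt(sqrt(5) - 1)) / 2)

assert abs(R2c - 2.12255) < 1e-5   # matches the quoted numerical value
assert abs(kappa(R2c) - 1) < 1e-10 # the slope is exactly critical there
```

Since $\kappa$ decreases monotonically in $R_2$ for $T_c < T_h$, the boundaries (43) and (44) touch near the origin only at this single value of $R_2$.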
The regions corresponding to different modes of operation are marked here with Roman numerals and additionally colored. The boundaries separating the regions are marked with the Arabic numerals 1-4. It is noteworthy that the curves 3 and 4 do not intersect each other and do not intersect the lines 1 and 2. Finding the signs of $Q_c$, $W$ and $Q_h$ in each such region made it possible to determine that there are only four different types of regions; see again Fig. 7. Firstly, the region I with $Q_c < 0$, $W < 0$ and $Q_h > 0$ naturally corresponds to the engine mode, which is denoted as $\{\leftarrow\uparrow\leftarrow\}$. Secondly, there is the region II with $Q_c > 0$, $W > 0$ and $Q_h < 0$, which is identified with a refrigerator or heat pump; for clarity, we depict it in the form $\{\to\downarrow\to\}$. Then comes the region III, where $W > 0$ and both $Q_h$ and $Q_c$ are less than zero; this is a heater, represented as $\{\leftarrow\downarrow\to\}$. Finally, there is the region IV, in which $W$ and $Q_h$ are greater than zero while $Q_c < 0$, that is, $\{\leftarrow\downarrow\leftarrow\}$; this is the so-called accelerator or cold-bath heater [12,49,50].

The total work output $W(J_z^i, J_z^f)$, Eq. (49), has a local minimum $W_{\min} = -0.030259$ at the point $(0.659225, 0.976325)$. The hot heat (52) at this point is $Q_h = 0.343863$. Therefore, in accord with Eq. (14), the efficiency at maximal power equals $\eta_{mp} = 8.8\%$. This value is less than the Novikov efficiency, equal to 18.4%.

A similar scheme of regions for the operating modes is shown in Fig. 8. It corresponds to the following parameter values: $R_1 = 3$, $R_2 = 0.05$, $T_c = 1$, and $T_h = 2$. According to Eqs. (50) and (51), the boundaries 1 and 2 intersect at the point $(1.45295, 1.45295)$. Above this point, the total work output has a minimal value of $W = -0.044432$ at the point $(2.79285, 3.35601)$; the heat $Q_h$ at this point equals 0.168719, and therefore the efficiency of the heat engine is 26.3%. Moreover, below the intersection point, the work $W(J_z^i, J_z^f)$ has a second local minimum. It is located at $(-0.104884, -0.762864)$ and equals $W = -0.119575$.
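The sign bookkeeping for the four regions above can be sketched as a small classifier (an illustrative reconstruction, not code from the paper). It encodes the first law (12) as $Q_c + W + Q_h = 0$, so all three flows cannot share one sign, and the second-law exclusion of regimes that extract work while drawing heat from the cold bath; all flows are counted as energy gained by the working substance.

```python
from itertools import product

def mode(qc, w, qh):
    """Classify the regime from the signs of (Qc, W, Qh)."""
    if (qc > 0) == (w > 0) == (qh > 0):
        return None               # all flows of one sign: forbidden by the first law
    if qc > 0 and w < 0:
        return None               # work out while heat comes from the cold bath:
                                  # forbidden by the second law
    if w < 0:
        return "engine"           # Qh > 0, W < 0, Qc < 0
    if qc > 0:
        return "refrigerator"     # Qc > 0, W > 0, Qh < 0
    return "accelerator" if qh > 0 else "heater"

allowed = {m for s in product((1, -1), repeat=3) if (m := mode(*s))}
print(sorted(allowed))            # exactly four regimes survive
```

Running the enumeration over all $2^3 = 8$ sign combinations leaves precisely the engine, refrigerator, heater and accelerator modes.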
Here $Q_h = 0.63495$ and hence $\eta_{mp} = 18.8\%$. Both of these efficiencies are less than $\eta_N = 29.3\%$.

Next, Fig. 9 shows the operating-mode areas for the following parameters: $R_1 = 1.3$, $R_2 = 0.8$, $T_c = 1$, and $T_h = 2.5$. The picture here is similar to the previous two cases.

Concluding this subsection, we can state the following. Only four different operating modes were observed for the thermal machine under study; they are listed in Table 2. Although there are eight ($2^3 = 8$) different combinations of the signs of $Q_c$, $W$ and $Q_h$, the regimes $\{\to\downarrow\leftarrow\}$ and $\{\leftarrow\uparrow\to\}$ are prohibited by the first law of thermodynamics (12). Moreover, as noted in Ref. [51], the variants ($Q_c > 0$, $W < 0$, $Q_h < 0$) and ($Q_c > 0$, $W < 0$, $Q_h > 0$), or in our notation $\{\to\uparrow\to\}$ and $\{\to\uparrow\leftarrow\}$, contradict the second law of thermodynamics ($\oint \delta Q/T \geq 0$, or $dS \geq 0$). As seen from Figs. 7-9, the operating-mode regions alternate in the following order: engine, accelerator, heater, refrigerator.

Concluding remarks

In the present paper, we have examined a two-qubit Heisenberg XYZ model with DM and KSEA interactions under a non-uniform external magnetic field as the working substance of a quantum Otto thermal machine. Equations (19) and (20) show, firstly, that the KSEA interaction affects the operation of the machine only through the collective parameter $R_1$, and the DM interaction only through $R_2$, and, secondly, that the roles of the DM and KSEA interactions trade places when the longitudinal exchange constant $J_z$ changes from antiferromagnetic to ferromagnetic behavior. Combining analytical and numerical analysis, we have found the regions in parameter space for the possible operating modes of the thermal machine. Only four modes are admissible: a heat engine, a refrigerator (heat pump), a heater or dissipator (when work is converted into the heat of both baths at once), and a thermal accelerator or cold-bath heater (fast-defrost regime). The engine and refrigerator mode regions can either directly border each other (Fig. 4) or be separated by areas with the accelerator and heater regimes (Figs. 7 and 8). We have found and investigated the efficiency of the heat engine at maximum output power. Remarkably, cases have been discovered where there are two local extrema of the total work; their appearance is due to the splitting of the engine-mode region into two subregions. The optimal efficiency has been observed to be not only less than the Novikov efficiency but also greater than it for certain choices of the model parameters. However, the Carnot efficiency was never exceeded.

Fig. 1 (Color online) Quantum Otto cycle in the $E_n$-$p_n$ plane. The cycle consists of two adiabatic ($AB$ and $CD$) and two isochoric ($BC$ and $DA$) processes.

The resulting work $W$ is performed during the exchange of the heats $Q_h$ and $Q_c$ between the working fluid and the hot and cold baths (Fig. 2). Instead of a drawing, we will further depict the engine as $\{\circ \leftarrow\uparrow\leftarrow \bullet\}$ (where the filled circle represents the hot bath and the open circle the cold bath) or simply as $\{\leftarrow\uparrow\leftarrow\}$ (left: cold bath; right: hot bath).

Fig. 2 (Color online) Schematic layout of a heat engine. The arrows show the energy flows.

Fig. 3 (Color online) Otto cycles of the thermal machine in the case $J_z = R_2 = 0$.

Fig. 4 (Color online) Regions of different operating modes (regimes) in the plane $(R_1^i, R_1^f)$ for the quantum Otto thermal machine with nonzero $R_1$ and $J_z = R_2 = 0$ at $T_c = 1$ and $T_h = 2$. Here, 1 is the diagonal straight line $R_1^f = R_1^i$; 2 and 3 are the boundaries $R_1^f = (T_h/T_c)R_1^i$ and $R_1^f = (T_c/T_h)R_1^i$, respectively. The regions I and III (blue) correspond to the refrigeration regime, while the regions II and IV (green) represent the heat engine. The '+' symbol has coordinates (2.86075, 4.06548) and marks the position of the minimum of $W$ ($= -0.148615$), and the '×' symbol has the mirror coordinates (4.06548, 2.86075), which mark the position of the maximum of $W$ ($= 0.148615$) in the region III.
Fig. 5 (Color online) Qualitative structure of the isolines of the total work $W(R_1^i, R_1^f)$; here $J_z = R_2 = 0$, and the bath temperatures are $T_c = 1$ and $T_h = 2$. The region between the red ($R_1^f = 2R_1^i$) and green ($R_1^f = R_1^i$) lines corresponds to the engine mode. The symbol "+" indicates the position of the local minimum of the function $W(R_1^i, R_1^f)$.

Fig. 6 (Color online) Regions with $W < 0$ (between the lines 1 and 2) and with $W > 0$ (outside the previous region) in the plane $(R_1^i, R_1^f)$. The bath temperatures are $T_c = 1$ and $T_h = 2$. The dotted line $R_1^f = (T_h/T_c)R_1^i$ is shown for comparison with the case $R_2 = 0$. (a) $R_2 = 1.8$; the black circle (•) has coordinates (4.32922, 5.51837) and indicates the local minimum of the work ($W = -0.09977$). (b) $R_2 = 2.9$; the black circle (•) has coordinates (5.62759, 6.82585) and shows the minimum $W = -0.08299$, while the plus symbol (+) marks the additional local minimum ($W = -0.00366$) at the point (1.31298, 1.15942).

Fig. 7 (Color online) Regions of the operation modes in the $J_z^i$-$J_z^f$ plane for the quantum Otto thermal machine with $R_1 = 0.7$, $R_2 = 2$ and bath temperatures $T_c = 1$ and $T_h = 1.5$: I (green), engine; II (blue), refrigerator; III (yellow), heater; IV (violet), accelerator. The lines 1-4 are the boundaries separating the listed regions.

Fig. 8 (Color online) Operation modes for a quantum Otto cycle in the plane $J_z^i$-$J_z^f$. The regions corresponding to each operation mode are marked as I, heat engine (green); II, refrigerator (blue); III, heater (yellow); and IV, accelerator (violet). The lines 1 and 2 are the boundaries $W = 0$, while the curves 3 and 4 result from the conditions $Q_h = 0$ and $Q_c = 0$, respectively.
The parameters are $R_1 = 3$ and $R_2 = 0.05$, and the temperatures of the thermal reservoirs are $T_c = 1$ and $T_h = 2$.

Fig. 9 (Color online) The same as in Figs. 7 and 8, but for $R_1 = 1.3$, $R_2 = 0.8$, $T_c = 1$ and $T_h = 2.5$.

Table 1  Coordinates ($R_1^i$ and $R_1^f$) of the minimum of the work $W$, its value at the minimum, the efficiency at maximum power, and the Carnot and Novikov efficiencies for $T_c = 1$ and different values of $T_h$

  T_h    R_1^i      R_1^f      W            η_mp      η_C      η_N
  3      3.16836    5.59152    -0.454983    43.3%     66.7%    42.3%
  2.5    3.02699    4.83933    -0.289598    37.5%     60%      36.8%
  2      2.86075    4.06548    -0.148615    29.6%     50%      29.3%
  1.5    2.65857    3.25929    -0.044155    18.43%    33.3%    18.35%

Table 2  Operating modes of the Otto machine depending on the signs of $Q_c$, $W$ and $Q_h$

  mode           Q_c    W     Q_h    scheme
  engine         −      −     +      {←↑←}
  refrigerator   +      +     −      {→↓→}
  heater         −      +     −      {←↓→}
  accelerator    −      +     +      {←↓←}

Acknowledgment  Two of us, E. K. and M. Yu., were supported by the program CITIS #AAAA-A19-119071190017-7.

The boundaries separating the regions with $W > 0$ and $W < 0$ are given by Eqs. (50) and (51); these straight lines intersect at the point defined by these equations. Taking Eq. (26) into account, the heat $Q_h$ for the case under consideration is reduced to Eq. (52), with its accompanying definition (53). Putting $Q_h = 0$, we get the expression for the boundary in an explicit form, Eq. (54). The other heat, $Q_c$, is given by Eq. (55), with the definition (56).

References

1. Basov, N.G., Prokhorov, A.M.: Possible methods of obtaining active molecules for a molecular oscillator. ZhETF 28, 249 (1955) [in Russian]
2. Basov, N.G., Prokhorov, A.M.: Possible methods of obtaining active molecules for a molecular oscillator. Sov. Phys. JETP 1, 184 (1955) [in English]
3. Basov, N.G., Prokhorov, A.M.: Molecular generator and amplifier. Usp. Fiz. Nauk 57, 485 (1955) [in Russian]
4. Bloembergen, N.: Proposal for a new type solid state maser. Phys. Rev. 104, 324 (1956)
5. Scovil, H.E.D., Feher, G., Seidel, H.: Operation of a solid state maser. Phys. Rev. 105, 762 (1957)
6. Zverev, G.M., Kornienko, L.S., Manenkov, A.A., Prokhorov, A.M.: A chromium corundum paramagnetic amplifier and generator. ZhETF 34, 1660 (1958) [in Russian]
7. Zverev, G.M., Kornienko, L.S., Manenkov, A.A., Prokhorov, A.M.: A chromium corundum paramagnetic amplifier and generator. Sov. Phys. JETP 7, 1141 (1958) [in English]
8. Maiman, T.H.: Stimulated optical radiation in ruby. Nature 187, 493 (1960)
9. Scovil, H.E.D., Schulz-DuBois, E.O.: Three-level masers as heat engines. Phys. Rev. Lett. 2, 262 (1959)
10. Geusic, J.E., Schulz-DuBios, E.O., Scovil, H.E.D.: Quantum equivalent of the Carnot cycle. Phys. Rev. 156, 343 (1967)
11. Geva, E., Kosloff, R.: Three-level quantum amplifier as a heat engine: A study in finite-time thermodynamics. Phys. Rev. E 49, 3903 (1994)
12. Geva, E., Kosloff, R.: The quantum heat engine and heat pump: An irreversible thermodynamic analysis of the three-level amplifier. J. Chem. Phys. 104, 7681 (1996)
13. Li, S.-W., Kim, M.B., Agarwal, G.S., Scully, M.O.: Quantum statistics of a single-atom Scovil-Schulz-DuBois heat engine. Phys. Rev. A 96, 063806 (2017)
14. Ghosh, A., Gelbwaser-Klimovsky, D., Niedenzu, W., Lvovsky, A.I., Mazets, I., Scully, M.O., Kurizki, G.: Two-level masers as heat-to-work converters. Proc. Natl. Acad. Sci. U.S.A. 115, 9941 (2018)
15. Singh, V.: Optimal operation of a three-level quantum heat engine and universal nature of efficiency. Phys. Rev. Res. 2, 043187 (2020)
16. Carnot, S.: Réflexions sur la Puissance Motrice du Feu et sur les Machines propres à Développer cette Puissance. Bachelier, Paris (1824)
17. Vinjanampathy, S., Anders, J.: Quantum thermodynamics. Contemp. Phys. 57, 545 (2016)
18. Binder, F., Correa, L.A., Gogolin, C., Anders, J., Adesso, G. (eds.): Thermodynamics in the Quantum Regime. Fundamental Aspects and New Directions. Springer, Berlin (2018)
19. Deffner, S., Campbell, S.: Quantum Thermodynamics. An introduction to the thermodynamics of quantum information. Morgan & Claypool, San Rafael, CA, USA (2019); arXiv:1907.01596v1 [quant-ph]
20. Ghosh, A., Mukherjee, V., Niedenzu, W., Kurizki, G.: Are quantum thermodynamic machines better than their classical counterparts? Eur. Phys. J. Spec. Top. 227, 2043 (2019)
21. Myers, N.M., Abah, O., Deffner, S.: Quantum thermodynamic devices: from theoretical proposals to experimental reality. AVS Quantum Sci. 4, 027101 (2022)
22. Alicki, R., Kosloff, R.: Introduction to quantum thermodynamics: History and prospects. In: Thermodynamics in the Quantum Regime. Fundamental Aspects and New Directions. Binder, F., Correa, L.A., Gogolin, C., Anders, J., Adesso, G. (eds.). Springer, Berlin (2018)
23. Quan, H.T., Liu, Y.-x., Sun, C.P., Nori, F.: Quantum thermodynamic cycles and quantum heat engines. Phys. Rev. E 76, 031105 (2007)
24. Quan, H.T.: Quantum thermodynamic cycles and quantum heat engines. II. Phys. Rev. E 79, 041129 (2009)
25. Peña, F.J., Negrete, O., Cortés, N., Vargas, P.: Otto engine: classical and quantum approach. Entropy 22, 755 (2020)
26. Zhang, G.-F.: Entangled quantum heat engines based on two two-spin systems with Dzyaloshinski-Moriya anisotropic antisymmetric interaction. Eur. Phys. J. D 49, 123 (2008)
27. Zhao, L.-M., Zhang, G.-F.: Entangled quantum Otto heat engines based on two-spin systems with the Dzyaloshinski-Moriya interaction. Quantum Inf. Process. 16, 216 (2017)
28. Ahadpour, S., Mirmasoudi, F.: Coupled two-qubit engine and refrigerator in Heisenberg model. Quantum Inf. Process. 20, 63 (2021)
29. Yurischev, M.A.: On the quantum correlations in two-qubit XYZ spin chains with Dzyaloshinsky-Moriya and Kaplan-Shekhtman-Entin-Wohlman-Aharony interactions. Quantum Inf. Process. 19, 336 (2020)
30. Fedorova, A.V., Yurischev, M.A.: Quantum entanglement in the anisotropic Heisenberg model with multicomponent DM and KSEA interactions. Quantum Inf. Process. 20, 169 (2021)
31. Kieu, D.: The second law, Maxwell's demon, and work derivable from quantum heat engines. Phys. Rev. Lett. 93, 140403 (2004)
32. Born, M.: Das Adiabatenprinzip in der Quantenmechanik. Z. Phys. 40, 167 (1927)
33. Born, M., Fock, V.: Beweis des Adiabatensatzes. Z. Phys. 51, 165 (1928)
34. Messiah, A.: Quantum Mechanics. Dover, New York (1999)
35. Quan, H.T., Zhang, P., Sun, C.P.: Quantum heat engine with multi-level quantum systems. Phys. Rev. E 72, 056110 (2005)
36. Levy, A., Gelbwaser-Klimovsky, D.: Quantum features and signatures of quantum thermal machines. In: Thermodynamics in the Quantum Regime. Fundamental Aspects and New Directions. Binder, F., Correa, L.A., Gogolin, C., Anders, J., Adesso, G. (eds.). Springer, Berlin (2018)
37. Il'in, N., Aristova, A., Lychkovskiy, O.: Adiabatic theorem for closed quantum systems initialized at finite temperature. Phys. Rev. A 104, 030202 (2021)
38. Singh, A., Benjamin, C.: Magic angle twisted bilayer graphene as a highly efficient quantum Otto engine. Phys. Rev. B 104, 125445 (2021)
39. Feynman, R.P., Leighton, R.B., Sands, M.: The Feynman Lectures on Physics, Vol. 1. Addison-Wesley, Reading, Mass. (1964, second printing)
40. Novikov, I.I.: The efficiency of atomic power stations. Atomnaya Energiya 3, 409 (1957) [in Russian]
41. Novikov, I.I.: The efficiency of atomic power stations. J. Nucl. Energy 7, 125 (1958)
42. Vukalovich, M.P., Novikov, I.I.: Termodinamika. Mashinostroenie, Moskva (1972) [in Russian]
43. Curzon, F.L., Ahlborn, B.: Efficiency of a Carnot engine at maximum power output. Am. J. Phys. 43, 22 (1975)
44. Ahlborn, B., Curzon, F.L.: Time scales for energy transfer. J. Non-Equilib. Thermodyn. 29, 301 (2004)
45. Vaudrey, A., Lanzetta, F., Feidt, M.: H. B. Reitlinger and the origins of the efficiency at maximum power formula for heat engines. J. Non-Equilib. Thermodyn. 39, 199 (2014)
46. Feidt, M.: The history and perspectives of efficiency at maximum power of the Carnot engine. Entropy 19, 369 (2017)
47. Hoffmann, K.H.: An introduction to endoreversible thermodynamics. Atti dell'Accademia Peloritana dei Pericolanti, Classe di Scienze Fisiche, Matematiche e Naturali, Vol. LXXXVI, Suppl. 1, C1S0801011 (2008); DOI: 10.1478/C1S0801011
48. Smith, Z., Pal, P.S., Deffner, S.: Endoreversible Otto engines at maximal power. J. Non-Equilib. Thermodyn. 45, 305 (2020)
49. Sacchi, M.F.: Multilevel quantum thermodynamic swap engines. Phys. Rev. A 104, 012217 (2021)
50. Cruz, C., Rastegar-Sedehi, H.-R., Anka, M.F., de Oliveira, T.R., Reis, M.: Quantum Stirling engine based on dinuclear metal complexes. arXiv:2208.14548v1 (2022)
51. Zhang, T., Liu, W.-T., Chen, P.-X., Li, C.-Z.: Four-level entangled quantum heat engines. Phys. Rev. A 75, 062102 (2007)
[]
NUMERICAL SOLUTION AND BIFURCATION ANALYSIS OF NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS WITH EXTREME LEARNING MACHINES

A PREPRINT

Gianluca Fabiani ([email protected]), Francesco Calabrò ([email protected]), Lucia Russo ([email protected]), Constantinos Siettos ([email protected])

Dipartimento di Matematica e Applicazioni "Renato Caccioppoli", Università degli Studi di Napoli "Federico II", Italy · Institute of Sustainable Mobility and Energy, Consiglio Nazionale delle Ricerche, Italy · Scuola Superiore Meridionale, Università degli Studi di Napoli Federico II, Italy
Abstract. We address a new numerical scheme based on a class of machine learning methods, the so-called Extreme Learning Machines (ELM) with both sigmoidal and radial basis functions, for the computation of steady-state solutions and the construction of (one-dimensional) bifurcation diagrams of nonlinear partial differential equations (PDEs). For our illustrations, we considered two benchmark problems, namely (a) the one-dimensional viscous Burgers equation with both homogeneous (Dirichlet) and nonhomogeneous boundary conditions, and (b) the one- and two-dimensional Liouville-Bratu-Gelfand PDEs with homogeneous Dirichlet boundary conditions. For the one-dimensional Burgers and Bratu PDEs, exact analytical solutions are available and are used for comparison against the numerically derived solutions. Furthermore, the numerical efficiency (in terms of accuracy and size of the grid) of the proposed machine-learning scheme is compared against central finite differences (FD) and Galerkin weighted-residuals finite-element (FEM) methods. We show that the proposed ELM numerical method outperforms both FD and FEM methods for medium to large sized grids, while providing results equivalent to the FEM for low to medium sized grids; both methods (ELM and FEM) outperform the FD scheme.
DOI: 10.1007/s10915-021-01650-5 · arXiv: 2104.06116 · PDF: https://export.arxiv.org/pdf/2104.06116v1.pdf
13 Apr 2021 · December 21, 2021
Keywords: Extreme Learning Machines · Machine Learning · Numerical Analysis · Nonlinear Partial Differential Equations · Numerical Bifurcation Analysis

Introduction

The solution of partial differential equations (PDEs) with the aid of machine learning, as an alternative to conventional numerical analysis methods, can be traced back to the early '90s. For example, Lagaris et al. [37] presented a method based on feedforward neural networks (FNN) that can be used for the numerical solution of linear and nonlinear PDEs.
The method is based on the construction of appropriate trial functions, the analytical derivation of the gradient of the error with respect to the network parameters, and collocation. The training of the FNN was achieved iteratively with the quasi-Newton BFGS method. Gonzalez-Garcia et al. [22] proposed a multilayer neural network scheme that resembles a Runge-Kutta integrator for the identification of dynamical systems described by nonlinear PDEs.

Nowadays, the computational power, which has increased exponentially over the last decades, together with recent theoretical advances, has allowed further developments at the intersection between machine learning and numerical analysis. In particular, on the side of the numerical solution of PDEs, the development of systematic and robust machine-learning methodologies targeting the solution of large-scale systems of nonlinear problems with steep gradients constitutes an open and challenging problem in the area. Very recently, [42, 43] addressed the use of numerical Gaussian Processes and Deep Neural Networks (DNNs) with collocation to solve time-dependent nonlinear PDEs, circumventing the need for spatial discretization of the differential operators. The proposed approach was demonstrated through the one-dimensional nonlinear Burgers, the Schrödinger and the Allen-Cahn equations. In [26], DNNs were used to solve high-dimensional nonlinear parabolic PDEs including the Black-Scholes, the Hamilton-Jacobi-Bellman and the Allen-Cahn equations. In [45], DNNs were used to approximate the solution of PDEs arising in engineering problems by exploiting the variational structure that may arise in some of these problems. In [10, 20, 24], DNNs were used to solve high-dimensional semi-linear PDEs; the efficiency of the method was compared against other deep learning schemes.
In [51], the authors used FNNs to solve modified high-dimensional diffusion equations; the training of the FNN is achieved iteratively using an unsupervised universal machine-learning solver. Most recently, in [19], the authors used DNNs to construct nonlinear reduced-order models of time-dependent parametrized PDEs.

Over the last few years, extreme learning machines (ELMs) have been used as an alternative to other machine-learning schemes, providing good generalization at a low computational cost [32]. The idea behind ELMs is to randomly set the values of the weights between the input and hidden layer, the biases and the parameters of the activation/transfer functions, and to determine the weights between the last hidden and output layer by solving a least-squares problem. The solution of this least-squares problem is the whole "training" procedure; hence, no iterative training is needed for ELMs, in contrast with the other machine-learning methods mentioned above. Extensions of this basic scheme include multilayer ELMs [14, 28, 48] and deep ELMs [49]. As with conventional neural networks, convolutional networks and deep learning, ELMs have mainly been used for classification purposes [4, 11, 12, 30, 48, 50]. On the other hand, the use of ELMs for "traditional" numerical analysis tasks, and in particular for the numerical solution of PDEs, is still widely unexplored. To the best of our knowledge, the only study on the subject is that of [18], where the authors however report a failure of ELMs to deal, for example, with PDEs whose solutions exhibit steep gradients. Recently, we proposed an ELM scheme to deal with such steep gradients appearing in linear PDEs [8], demonstrating through several benchmark problems that the approach is efficient. Here, we propose a problem-independent new numerical scheme based on ELMs for the solution of nonlinear PDEs that may exhibit sharp gradients.
As nonlinear PDEs may also exhibit non-uniqueness and/or non-existence of solutions, we also show how ELMs can be used for the construction of (one-dimensional) bifurcation diagrams of PDEs. The efficiency of the proposed numerical scheme is demonstrated and discussed through two well-studied benchmark problems: the one-dimensional viscous Burgers equation, a representative of the class of advection-diffusion problems, and the one- and two-dimensional Liouville-Bratu-Gelfand PDE, a representative of the class of reaction-diffusion problems. The numerical accuracy of the proposed scheme is compared against the analytical solutions and the exact locations of the limit points that are known for the one-dimensional PDEs, but also against the corresponding numerical approximations obtained with central finite differences (FD) and Galerkin finite element methods (FEM).

Extreme Learning Machines

Let ψ be a sufficiently regular activation (transfer) function, A ∈ R^{N×d} the matrix of internal weights with rows α_j, and β = (β_1, . . . , β_N) the vector of biases. Then, we say that v is an ELM function with a single hidden layer if there exists a choice of w ∈ R^N (the vector of external weights between the hidden layer and the output layer) such that:

v(x; A, β; w) = Σ_{j=1}^{N} w_j ψ(α_j · x + β_j),    (1)

where x = (x_1, x_2, . . . , x_d) ∈ R^d is the input vector. We remark that the regularity assumption in the above definition is not mandatory for the approximation properties; in our case, however, some regularity is needed to write the collocation method, so we also briefly present the necessary theory. It is well known that for ANNs, where A and β are not fixed a priori, the universal approximation theorem holds if ψ is a non-polynomial function: the functional space spanned by the basis functions {ψ(α · x + β), α ∈ R^d, β ∈ R} is dense in L². Moreover, with some regularity assumptions on the activation function(s), the approximation holds true also for the derivatives (see e.g. Theorem 3.1 and Theorem 4.1 in [40]).
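The density statement above can be illustrated with a short numerical sketch: even with the internal weights and biases fixed at random (as in the ELM setting), solving only a linear least-squares problem for the external weights drives the approximation error of a smooth target down as the number of neurons N grows. The target function, slope range, centers and seed below are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid_features(x, alpha, beta):
    # collocation matrix S_ij = sigma(alpha_j * x_i + beta_j)
    return 1.0 / (1.0 + np.exp(-(np.outer(x, alpha) + beta)))

f = lambda t: np.sin(2.0 * np.pi * t) + t   # smooth target on [0, 1]
x = np.linspace(0.0, 1.0, 200)
y = f(x)

errs = []
for N in (10, 40, 160):
    # random slopes (bounded away from zero), centers equispaced in [0, 1]
    alpha = rng.uniform(0.5, 25.0, N) * rng.choice((-1.0, 1.0), N)
    c = np.linspace(0.0, 1.0, N)
    beta = -alpha * c
    S = sigmoid_features(x, alpha, beta)
    # the whole "training": one linear least-squares solve for w
    w, *_ = np.linalg.lstsq(S, y, rcond=None)
    errs.append(np.max(np.abs(S @ w - y)))

print(errs)  # maximum error shrinks as N grows
```

With the external weights as the only unknowns, no iterative training is involved; the accuracy is limited only by the expressiveness of the random basis and by floating-point conditioning.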
Besides, fixing A and β a priori is not a limitation, because the universal approximation is still valid in the setting of ELMs (see Theorem 2 in [27]):

Theorem 2.1 (Universal approximation). Let the coefficients α, β in the function sequence {ψ(α_j · x + β_j)}_{j=1}^{N} be randomly generated according to any continuous sampling distribution, and call ṽ_N ∈ span{ψ(α_j · x + β_j), j = 1 . . . N} the ELM function determined by the ordinary least-squares solution minimizing ||f(x) − ṽ_N(x)||, where f is a continuous function. Then, with probability one, lim_{N→∞} ||f − ṽ_N|| = 0.

We remark that in the ANN framework, the classical way is to optimize the parameters of the network (internal and external weights and biases) iteratively, e.g. by stochastic gradient descent algorithms, which have a high computational cost and ensure only local, not global, convergence. On the other hand, ELM networks are advantageous because the solution of an interpolation problem leads to a system of linear equations in which the only unknowns are the external weights w. For example, consider M points x_i such that y_i = v(x_i) for i = 1, . . . , M. In the ELM framework (1), the interpolation problem becomes:

Σ_{j=1}^{N} w_j ψ_j(x_i) = y_i,   i = 1, . . . , M,

where N is the number of neurons and ψ_j(x) denotes ψ(α_j · x + β_j). Thus, this is a system of M equations in N unknowns that in matrix form can be written as:

S w = y,    (2)

where y = (y_1, . . . , y_M) ∈ R^M and S ∈ R^{M×N} is the matrix with elements (S)_{ij} = ψ_j(x_i). If the problem is square (N = M) and the parameters α_j and β_j are chosen randomly, it can be proved that the matrix S is invertible with probability 1 (see e.g. Theorem 1 in [27]), and so there is a unique solution, which can be found numerically; if one has to deal with an ill-conditioned matrix, one can still attempt to find a numerically robust solution by applying established numerical analysis methods suitable for such a case (e.g. by constructing the Moore-Penrose pseudoinverse using QR factorization or SVD). If the problem is under-determined (N > M), the linear system has infinitely many solutions and can be solved by applying regularization in order to pick the solution with, e.g., the minimal L² norm. Such an approach provides the best solution to the optimization problem related to the magnitude of the calculated weights (see [31]).

Thus, in ELM networks, one has to choose the type of the activation/transfer function and the values of the internal weights and biases. Since the only limitation is that ψ is a non-polynomial function, there are infinitely many choices. The most common choices are the sigmoidal functions (SF) (also referred to as ridge functions or plane waves) and the radial basis functions (RBF) [2, 40]. Below, we describe the construction procedure and the main features of the proposed ELM scheme based on these two transfer functions. In the case of the logistic sigmoid transfer function, this investigation was made in our work on one-dimensional linear PDEs [8]. Here, we report the fundamental arguments and extend them to include RBFs and two-dimensional nonlinear problems.

ELM with sigmoidal functions

For the SF case, we select the logistic sigmoid, defined by

ψ_j(x) ≡ σ_j(x) = 1 / (1 + exp(−α_j · x − β_j)).    (3)

For this function, it is straightforward to compute the derivatives. In particular, the derivatives with respect to the component x_k are given by:

∂σ_j(x)/∂x_k = α_{j,k} exp(z_j) / (1 + exp(z_j))²,
∂²σ_j(x)/∂x_k² = α_{j,k}² exp(z_j) (1 − exp(z_j)) / (1 + exp(z_j))³,    (4)

where z_j = α_j · x + β_j. A crucial point in the ELM framework is how to fix the values of the internal weights and biases in a proper way.
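The closed-form derivatives in Eq. (4) are easy to sanity-check against central finite differences; note that the second derivative carries the factor (1 − exp(z_j)), so it is negative where the sigmoid is concave (z_j > 0). The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def sigma(x, a, b):
    # logistic sigmoid, Eq. (3), one-dimensional input
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

def dsigma(x, a, b):
    # first derivative, Eq. (4)
    z = a * x + b
    return a * np.exp(z) / (1.0 + np.exp(z)) ** 2

def d2sigma(x, a, b):
    # second derivative, Eq. (4): the factor (1 - exp(z)) makes it
    # negative for z > 0 and positive for z < 0
    z = a * x + b
    return a**2 * np.exp(z) * (1.0 - np.exp(z)) / (1.0 + np.exp(z)) ** 3

# compare against central finite differences at an arbitrary point
a, b, x, h = 3.0, -1.2, 0.7, 1e-4
fd1 = (sigma(x + h, a, b) - sigma(x - h, a, b)) / (2.0 * h)
fd2 = (sigma(x + h, a, b) - 2.0 * sigma(x, a, b) + sigma(x - h, a, b)) / h**2
```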
Indeed, despite the fact that theoretically any random choice should be good enough, in practice it is convenient to define an appropriate range of values for the parameters α_{j,k} and β_j, strictly related to the selected activation function. For the one-dimensional case, σ_j is a monotonic function such that:

α_j > 0 ⇒ lim_{x→+∞} σ_j(x) = 1, lim_{x→−∞} σ_j(x) = 0;
α_j < 0 ⇒ lim_{x→+∞} σ_j(x) = 0, lim_{x→−∞} σ_j(x) = 1.

This function has an inflection point, which we call the center c_j, defined by the property:

σ(α_j c_j + β_j) = 1/2.    (5)

Now, since σ(0) = 1/2, the following relation between the parameters holds: c_j = −β_j / α_j. Finally, σ_j has a steep transition that is governed by the amplitude of α_j: if |α_j| → +∞, then σ_j approximates the Heaviside function, while if |α_j| → 0, then σ_j becomes a constant function. Since in the ELM framework these parameters are fixed a priori, what one needs to avoid is having functions that are "useless" in the domain, say I = [a, b]. Therefore, for the one-dimensional case, our suggestion is to choose α_j uniformly distributed as:

α_j ∼ U( −(N − 55)/(10|I|), (N + 35)/(10|I|) ),

where N is the number of neurons in the hidden layer and |I| = b − a is the domain length. Moreover, we also suggest avoiding coefficients α_j that are too small in modulus, by setting |α_j| > 1/(2|I|). Then, for the centers c_j, we select equispaced points in the domain I, obtained by imposing the β_j to be β_j = −α_j · c_j.

In the two-dimensional case, we denote by x = (x_1, x_2) ∈ R² the input and by A ∈ R^{N×2} the matrix with rows α_j = (α_{j,1}, α_{j,2}). Then, condition (5) becomes:

σ_j(x_1, x_2) = σ(α_{j,1} x_1 + α_{j,2} x_2 + β_j) = 1/2,

so we now have:

s ≡ x_2 = −(α_{j,1}/α_{j,2}) x_1 − β_j/α_{j,2},

where s is a straight line of inflection points that we call the central direction.
Along the central direction s, σ_j is constant, while along the direction orthogonal to s, σ_j is exactly the one-dimensional logistic sigmoid. So, considering a point c_j = (c_{j,1}, c_{j,2}) of the straight line s, we get the following relation between the parameters: β_j = −α_{j,1} c_{j,1} − α_{j,2} c_{j,2}. The difference with respect to the one-dimensional case is that in a domain I² = [a, b]² discretized by a grid of n × n points, the number of neurons N = n² grows quadratically, while the distance between two adjacent points decreases linearly, i.e. it is given by |I|/(n − 1). Thus, for the two-dimensional case, we take α_{j,k} uniformly distributed as:

α_{j,k} ∼ U( −(√N − 60)/(20|I|), (√N + 40)/(20|I|) ),   k = 1, 2,

where N is the number of neurons in the network and |I| = b − a.

ELM with radial basis functions

For the RBF case, we select the Gaussian kernel, defined as follows:

ψ_j(x) ≡ φ_j(x) = exp(−ε_j² ||x − c_j||₂²) = exp( −ε_j² Σ_{k=1}^{d} (x_k − c_{j,k})² ),    (6)

where c_j ∈ R^d is the center point and ε_j ∈ R is the inverse of the standard deviation. For such functions, we have:

∂φ_j(x)/∂x_k = −2ε_j² (x_k − c_{j,k}) exp(−ε_j² r_j²),
∂²φ_j(x)/∂x_k² = −2ε_j² (1 − 2ε_j² (x_k − c_{j,k})²) exp(−ε_j² r_j²),    (7)

where r_j = ||x − c_j||₂. In all directions, the Gaussian kernel is a classical bell-shaped function such that:

lim_{||x − c_j|| → +∞} φ_j(x) = 0,   φ_j(c_j) = 1.

Moreover, the parameter ε_j² controls the steepness of the bell function: if ε_j → +∞, then φ_j approximates the Dirac delta function, while if ε_j → 0, φ_j approximates a constant function. Thus, in the case of RBFs, the role of ε_j is analogous to that of α_{j,k} in the SF case. For RBFs, it is well known that the center has to be chosen as a point internal to the domain, preferably exactly a grid point, while the steepness parameter ε is usually chosen to be the same for each function.
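As for the sigmoidal case, the Gaussian-kernel derivatives in Eq. (7) can be sanity-checked against central finite differences; the values of ε_j, c_j and the evaluation point below are arbitrary illustrative choices.

```python
import numpy as np

def phi(x, eps, c):
    # Gaussian kernel, Eq. (6), one-dimensional input
    return np.exp(-eps**2 * (x - c) ** 2)

def dphi(x, eps, c):
    # first derivative, Eq. (7)
    return -2.0 * eps**2 * (x - c) * phi(x, eps, c)

def d2phi(x, eps, c):
    # second derivative, Eq. (7)
    return -2.0 * eps**2 * (1.0 - 2.0 * eps**2 * (x - c) ** 2) * phi(x, eps, c)

# compare against central finite differences at an arbitrary point
eps, c, x, h = 2.5, 0.3, 0.55, 1e-4
fd1 = (phi(x + h, eps, c) - phi(x - h, eps, c)) / (2.0 * h)
fd2 = (phi(x + h, eps, c) - 2.0 * phi(x, eps, c) + phi(x - h, eps, c)) / h**2
```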
Here, since we are embedding RBFs in the ELM framework, we take the centers c_j and the steepness parameters ε_j randomly, in order to have more variability in the functional space. Thus, as for the SF case, we set the parameters ε_j² randomly, uniformly distributed as:

ε_j² ∼ U( 1/|I|, (N + 65)/(15|I|) ),

where N denotes the number of neurons in the hidden layer and |I| = b − a is the domain length; for the centers c_j, we select equispaced points in the domain. Besides, note that for the RBF case it is trivial to extend the above to the multidimensional case, since φ_j is already expressed with respect to the center. For the two-dimensional case, we follow the same reasoning as for the SF, taking:

ε_j² ∼ U( 1/(2|I|), (√N + 50)/(30|I|) ).

Numerical Bifurcation Analysis of Nonlinear Partial Differential Equations with Extreme Learning Machines

In this section, we introduce the general setting for the numerical solution and bifurcation analysis of nonlinear PDEs with ELMs, based on basic numerical analysis concepts and tools (see e.g. [7, 9, 13, 21, 41]). Let us start from a nonlinear PDE of the general form:

L u = f(u, λ) in Ω,    (8)

B_l u = g_l on ∂Ω_l,   l = 1, . . . , m,    (9)

where L is the partial differential operator acting on u, f(u, λ) is a nonlinear function of u, λ ∈ R^p is the vector of model parameters, the B_l are boundary operators with boundary data g_l, and {∂Ω_l}_l denotes a partition of the boundary. A numerical solution ũ = ũ(λ) of the above problem at particular values of the parameters λ is typically found iteratively, by applying e.g. Newton-Raphson or matrix-free Krylov-subspace methods (Newton-GMRES) (see e.g. [34]) to a finite system of M nonlinear algebraic equations. In general, these equations reflect some zero-residual condition, or exactness equation, and thus the numerical solution that is sought is the optimal solution with respect to this condition in the finite-dimensional space. Assuming that ũ is fixed via the degrees of freedom w ∈ R^N (we use the notation ũ = ũ(w)), these degrees of freedom are sought by solving:

F_k(w_1, w_2, . . . , w_j, . . . , w_N; λ) = 0,   k = 1, 2, . . . , M.    (10)

Many methods for the numerical solution of Eqs. (8)-(9) are written in the above form after the application of an approximation and discretization technique such as Finite Differences (FD), Finite Elements (FE) or Spectral Expansion (SE), as we detail next. The system of M algebraic equations (10) is solved iteratively (e.g. by Newton's method), that is, by solving until a convergence criterion is satisfied the following linearized system:

∇_w F(w^(n), λ) · dw^(n) = −F(w^(n), λ),   w^(n+1) = w^(n) + dw^(n),    (11)

where ∇_w F is the M × N Jacobian matrix with elements

( ∇_w F(w^(n), λ) )_{kj} = ∂F_k/∂w_j evaluated at (w^(n), λ),   k = 1, . . . , M,   j = 1, . . . , N.    (12)

If the system is not square (i.e. when M ≠ N), then at each iteration one can perform e.g. a QR factorization of the Jacobian matrix:

∇_w F(w^(n), λ) = R^T Q^T = [R_1^T  0] [Q_1^T ; Q_2^T],    (13)

where Q = [Q_1 Q_2] ∈ R^{N×N} is an orthogonal matrix and R = [R_1 ; 0] ∈ R^{N×M} is upper triangular, with R_1 ∈ R^{M×M}. Then, the (minimal-norm) solution of Eq. (11) is given by:

dw^(n) = −Q_1 R_1^{−T} · F(w^(n), λ).

Branches of solutions in the parameter space, past critical points at which the Jacobian matrix ∇F with elements ∂F_k/∂w_j becomes singular, can be traced with the aid of numerical bifurcation analysis theory (see e.g. [15, 16, 17, 23, 35, 36, 46]). For example, solution branches past saddle-node bifurcations (limit/turning points) can be traced by applying the so-called "pseudo" arc-length continuation method [9]. This involves the parametrization of both ũ(w) and λ by the arc-length s on the solution branch.
The solution is sought in terms of both ũ(w; s) and λ(s) in an iterative manner, by solving until convergence the following augmented system:

[ ∇_w F  ∇_λ F ; ∇_w N  ∇_λ N ] · [ dw^(n)(s) ; dλ^(n)(s) ] = − [ F(w^(n)(s), λ(s)) ; N(ũ(w^(n); s), λ^(n)(s)) ],    (14)

where ∇_λ F = (∂F_1/∂λ, ∂F_2/∂λ, . . . , ∂F_M/∂λ)^T, and

N(ũ(w^(n); s), λ^(n)(s)) = (ũ(w^(n); s) − ũ(w)_{−2})^T · (ũ(w)_{−2} − ũ(w)_{−1}) / ds + (λ^(n)(s) − λ_{−1}) · (λ_{−2} − λ_{−1}) / ds − ds

is one of the choices for the so-called "pseudo arc-length condition" (for more details see e.g. [9, 16, 21, 23, 36]); ũ(w)_{−2} and ũ(w)_{−1} are two already-found consecutive solutions for λ_{−2} and λ_{−1}, respectively, and ds is the arc-length step for which a new solution is sought, around the previous solution (ũ(w)_{−2}, λ_{−2}), along the arc-length of the solution branch.

Finite Differences and Finite Elements cases: the application of Newton's method

In FD methods, one aims to find the values of the solution per se (i.e. u_j = w_j) at a finite number of nodes within the domain. The operator in the differential problem (8) and the boundary conditions (9) are approximated by means of finite difference operators L_h ≈ L and B_{h,l} ≈ B_l: the finite operator results in a linear combination of function evaluations for the differential part, while the nonlinear requirement remains to be satisfied due to the presence of the nonlinearity. Then, the approximated equations are collocated at internal and boundary points x_k, giving equations that can be written as the residual equations (10).

With FE and SE methods, the aim is to find the coefficients of a properly chosen basis-function expansion of the solution within the domain, such that the boundary conditions are satisfied precisely. In the Galerkin-FEM with a Lagrangian basis (see e.g. [39, 41]), the discrete counterpart seeks a solution of Eqs. (8)-(9) at N points x_j of the domain Ω according to:

u = Σ_{j=1}^{N} w_j φ_j,    (15)

where the basis functions φ_j are defined so that they satisfy the completeness requirement and are such that φ_j(x_k) = δ_{jk}. This, together with the choice of the nodal variables as the function values at those points, gives that the u(x_j) = w_j are exactly the degrees of freedom of the method. The scheme can be written as the satisfaction of the zero of the weighted residuals R_k, k = 1, 2, . . . , N, defined as:

R_k = ∫_Ω (L u − f(u, λ)) φ_k dΩ + Σ_{l=1}^{m} ∫_{∂Ω_l} (B_l u − g_l) φ_k dσ,    (16)

where the weighting functions φ_k are the same basis functions used in Eq. (15) for the approximation of u. The above constitutes a nonlinear system of N algebraic equations that, for a given set of values of λ, is solved by Newton-Raphson, thus solving until convergence the linearized system (11), where R_k plays the role of F_k. Notice that the border rows and columns of the Jacobian matrix (12) are appropriately changed so that Eq. (11) satisfies the boundary conditions. Due to the construction of the basis functions, the Jacobian matrix is sparse, allowing a significant reduction of the computational cost of solving (11) at each Newton iteration.

Extreme Learning Machine Collocation: the application of Newton's method

In an analogous manner to FE methods, Extreme Learning Machines aim at solving the problem (8)-(9) using an approximation ũ_N of u with N neurons as an ansatz. The difference is that, similarly to FD methods, the equations are constructed by collocating the solution at M_Ω points x_i ∈ Ω and M_l points x_k ∈ ∂Ω_l, where the ∂Ω_l are the parts of the boundary on which boundary conditions are posed (see e.g. [3, 41]):

L ũ_N(x_i; w) = f(ũ_N(x_i; w), λ),   i = 1, . . . , M_Ω,
B_l ũ_N(x_k; w) = g_l(x_k),   k = 1, . . . , M_l,   l = 1, . . . , m.
Then, if we denote M = M_Ω + Σ_{l=1}^{m} M_l, we have a system of M nonlinear equations in N unknowns that can be rewritten in a compact way as F_k(w, λ) = 0, k = 1, . . . , M, where for k = 1, . . . , M_Ω we have:

F_k(w, λ) = L Σ_{j=1}^{N} w_j ψ(α_j · x_k + β_j) − f( Σ_{j=1}^{N} w_j ψ(α_j · x_k + β_j), λ ) = 0,

while for the l-th boundary condition, for k = 1, . . . , M_l, we have:

F_k(w, λ) = B_l Σ_{j=1}^{N} w_j ψ(α_j · x_k + β_j) − g_l(x_k) = 0.

To this system of nonlinear algebraic equations we apply Newton's method (11). Notice that the application of the method requires the explicit knowledge of the derivatives of the functions ψ; in the ELM case as described, we have explicit formulae for these (see Eqs. (4), (7)).

Remark 3.1. In our case, Newton's method is applied to non-square systems. When the rank of the Jacobian is small, we have chosen to solve the problem with the Moore-Penrose pseudoinverse of ∇_w F, computed by the SVD decomposition; as discussed above, another choice would be the QR decomposition (13). This means that we cut off all the singular vectors associated with small singular values, so:

∇_w F = U Σ V^T,   (∇_w F)^+ = V Σ^+ U^T,

where U ∈ R^{M×M} and V ∈ R^{N×N} are the unitary matrices of left and right singular vectors, respectively, and Σ ∈ R^{M×N} is the diagonal matrix of singular values. Finally, we can select q ≤ rank(∇_w F) to get:

∇_w F ≈ U_q Σ_q V_q^T,   (∇_w F)^+ ≈ V_q Σ_q^+ U_q^T,    (17)

where U_q ∈ R^{M×q}, V_q ∈ R^{N×q} and Σ_q ∈ R^{q×q}. Thus, the solution of Eq. (11) is given by:

dw^(n) = −V_q Σ_q^+ U_q^T · F(w^(n), λ).

Branches of solutions past turning points can be traced by solving the augmented problem (14) with the pseudo arc-length condition. In particular, in (14), for the ELM framework (1), the term ∇_w N becomes:

∇_w N = S^T (ũ(w)_{−2} − ũ(w)_{−1}) / ds,

where S is the collocation matrix defined in equation (2).
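The Newton iteration of Remark 3.1 on a non-square system can be sketched on a toy problem: M = 12 equations F_k(w) = w_1 exp(w_2 x_k) − y_k in N = 2 unknowns, with the update dw = −(∇_w F)^+ F computed through the Moore-Penrose pseudoinverse (NumPy's `pinv` uses the SVD, discarding small singular values as in Eq. (17)). The exponential model, data and starting guess below are illustrative choices, not taken from the paper.

```python
import numpy as np

# data generated exactly from the model, so the system has the root w* = (2.0, 1.2)
xk = np.linspace(0.0, 1.0, 12)
y = 2.0 * np.exp(1.2 * xk)

def F(w):
    # M = 12 residual equations in N = 2 unknowns
    return w[0] * np.exp(w[1] * xk) - y

def J(w):
    # M x N Jacobian of F
    e = np.exp(w[1] * xk)
    return np.column_stack((e, w[0] * xk * e))

w = np.array([1.8, 1.1])  # initial guess near the root
for _ in range(30):
    dw = -np.linalg.pinv(J(w)) @ F(w)  # pseudoinverse Newton step
    w = w + dw
    if np.linalg.norm(F(w)) < 1e-12:
        break
```

For M > N and a zero-residual problem such as this one, the iteration coincides with the Gauss-Newton method and converges locally.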
Numerical Analysis Results: the Case Studies

The efficiency of the proposed numerical scheme is demonstrated through two benchmark nonlinear PDEs, namely (a) the one-dimensional nonlinear Burgers equation with Dirichlet boundary conditions and also mixed boundary conditions, and (b) the one- and two-dimensional Liouville-Bratu-Gelfand problem. These problems have been widely studied, as they have been used to model and analyse the behaviour of many physical and chemical systems (see e.g. [1, 6, 9, 21, 25, 33, 44]). In this section, we present some known properties of the proposed problems and provide details on their numerical solution with FD, FEM and ELM with both logistic sigmoid and Gaussian RBF transfer functions.

The Nonlinear Viscous Burgers Equation

Here, we consider the one-dimensional steady-state viscous Burgers problem:

ν ∂²u/∂x² − u ∂u/∂x = 0    (18)

in the unit interval [0, 1], where ν > 0 denotes the viscosity. For our analysis, we considered two different sets of boundary conditions:

• Dirichlet boundary conditions:

u(0) = γ,   u(1) = 0,   γ > 0;    (19)

• Mixed boundary conditions, i.e. a Neumann condition on the left boundary and a zero Dirichlet condition on the right boundary:

∂u/∂x(0) = −ϑ,   u(1) = 0,   ϑ > 0.    (20)

The two sets of boundary conditions result in different behaviours (see [1, 5]). We summarize in the next two lemmas some of the main results.

Lemma 4.1 (Dirichlet case). Consider Eq. (18) with boundary conditions given by (19), and take

γ = 2 / (1 + exp(−1/ν)) − 1

(notice that γ → 1 as ν → 0). Then, the problem (18)-(19) has a unique solution, given by:

u(x) = 2 / (1 + exp((x − 1)/ν)) − 1.    (21)

We will use this test problem because the solution has a boundary layer and, for this simple case, we can also implement and discuss the efficiency of a fixed-point iteration by linearization, while in the mixed-boundary case we implement only Newton's iterative procedure.

Lemma 4.2 (Mixed case). Consider Eq. (18) with boundary conditions given by (20).
The solution of the problem can be written as [1]:

u(x) = √(2c) tanh( (√(2c)/(2ν)) (1 − x) ),    (22)

where c is a constant value determined by the imposed Neumann condition. Then, for ϑ sufficiently small, the viscous Burgers problem with mixed boundary conditions admits two solutions:

(a) a stable lower solution such that, for all x ∈ (0, 1), u(x) → 0 and ∂u(x)/∂x → 0 as ϑ → 0;

(b) an unstable upper solution u(x) > 0 for all x ∈ (0, 1) such that, as ϑ → 0, ∂u(0)/∂x → 0, ∂u(1)/∂x → −∞, and u(x) → ∞ for all x ∈ (0, 1).

Proof. The spatial derivative of (22) is given by:

∂u(x)/∂x = −(c/ν) sech²( (√(2c)/(2ν)) (1 − x) ).    (23)

(a) When c → 0, then from Eq. (22) we get asymptotically the zero solution, i.e. u(x) → 0 for all x ∈ (0, 1), and from Eq. (23) we get ∂u(x)/∂x → 0 for all x ∈ (0, 1). At x = 1, the Dirichlet boundary condition u(1) = 0 is satisfied exactly (see Eq. (22)), while at the left boundary x = 0 the Neumann boundary condition is also satisfied, since, due to Eq. (23) and our assumption (ϑ → 0), ∂u(0)/∂x = −ϑ → 0 when c → 0.

(b) When ∂u(1)/∂x → −∞, then (23) is satisfied for all x ∈ (0, 1) when c → ∞. In that case, at x = 0 the Neumann boundary condition is satisfied, since from Eq. (23) it is easy to prove that ∂u(0)/∂x → 0. Indeed, from Eq. (23), using sech²(y) ≈ 4 exp(−2y) for large y:

lim_{c→∞} ∂u(0)/∂x = − lim_{c→∞} (4c/ν) exp(−√(2c)/ν) = 0.    (24)

Finally, Eq. (22) gives u(x) → ∞ for all x ∈ (0, 1).

To better understand the behaviour of the unstable solution with respect to the left boundary condition, we can prove the following result for the problem with boundary conditions (20). For the non-zero solution, when ϑ = ε → 0, the solution at x = 0 goes to infinity with values:

u(0) = ν log(ν/ε) tanh( (1/2) log(ν/ε) ).    (25)

Proof. Setting the value of ϑ in the Neumann boundary condition to be a very small number, ϑ = ε ≪ 1, we get that the slope of the analytical solution given by Eq. (23) at x = 0 equals −ε when

c = (1/2) ν² log²(ν/ε).    (26)

Plugging the above into the analytical solution given by Eq. (22), we get Eq. (25).
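The closed-form solutions above can be verified numerically: the sketch below checks that Eq. (21) satisfies both boundary conditions and the Burgers equation (18) (with derivatives estimated by central differences), and that the value in Eq. (25) is consistent with Eqs. (22) and (26). The choices ν = 0.1, ϑ = 10⁻⁴ and the tolerances are illustrative.

```python
import numpy as np

nu = 0.1
gamma = 2.0 / (1.0 + np.exp(-1.0 / nu)) - 1.0

def u(x):
    # exact Dirichlet solution, Eq. (21)
    return 2.0 / (1.0 + np.exp((x - 1.0) / nu)) - 1.0

# boundary conditions of (19): u(0) = gamma, u(1) = 0
b0, b1 = u(0.0) - gamma, u(1.0)

# residual of nu*u'' - u*u' at interior points, via central differences
x = np.linspace(0.05, 0.95, 19)
h = 1e-5
up = (u(x + h) - u(x - h)) / (2.0 * h)
upp = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
res = np.max(np.abs(nu * upp - u(x) * up))

# consistency of Eq. (25) with Eqs. (22) and (26) for theta = 1e-4
theta = 1e-4
c = 0.5 * nu**2 * np.log(nu / theta) ** 2                      # Eq. (26)
u0_from_22 = np.sqrt(2.0 * c) * np.tanh(np.sqrt(2.0 * c) / (2.0 * nu))
u0_from_25 = nu * np.log(nu / theta) * np.tanh(0.5 * np.log(nu / theta))
```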
The above findings also imply the existence of a limit point bifurcation with respect to ϑ that depends on the viscosity. For example, as shown in [1], for ϑ > 0 and ν = 1/10, there are two equilibria arising due to a turning point at ϑ* = 0.087845767978.

Numerical Solution of the Burgers equation with Finite Differences and Finite Elements

The discretization of the one-dimensional viscous Burgers problem in N points with second-order central finite differences in the unit interval 0 ≤ x ≤ 1 leads to the following system of N − 2 algebraic equations ∀x_j = (j − 1)h, j = 2, . . . , N − 1, h = 1/(N − 1):

F_j(u) = (ν/h²)(u_{j+1} − 2u_j + u_{j−1}) − u_j (u_{j+1} − u_{j−1})/(2h) = 0.

At the boundaries x₁ = 0, x_N = 1, we have u₁ = γ, u_N = 0 for the Dirichlet boundary conditions (19), and u₁ = (2hϑ + 4u₂ − u₃)/3, u_N = 0 for the mixed boundary conditions (20). The above N − 2 nonlinear algebraic equations are the residual equations (10) that are solved iteratively using Newton's method (11). The Jacobian (12) is now tridiagonal; at each iteration, the non-null elements are given by:

∂F_j/∂u_{j−1} = ν/h² + u_j/(2h);  ∂F_j/∂u_j = −2ν/h² − (u_{j+1} − u_{j−1})/(2h);  ∂F_j/∂u_{j+1} = ν/h² − u_j/(2h).

The Galerkin residuals (16) in the case of the one-dimensional Burgers equation are:

R_k = ∫₀¹ [ν ∂²u(x)/∂x² − u ∂u(x)/∂x] φ_k(x) dx.  (27)

By inserting the numerical solution (15) into Eq. (27) and integrating by parts (Green's formula), we get:

R_k = ν φ_k(x) du/dx |₀¹ − ν Σ_{j=1}^N u_j ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx − ∫₀¹ (Σ_{j=1}^N u_j φ_j(x)) (Σ_{j=1}^N u_j dφ_j/dx) φ_k(x) dx.  (28)

On the above residuals, we have to impose the boundary conditions. If the Dirichlet boundary conditions (19) are imposed, Eq.
(28) becomes:

R_k = −ν Σ_{j=1}^N u_j ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx − ∫₀¹ (Σ_{j=1}^N u_j φ_j(x)) (Σ_{j=1}^N u_j dφ_j/dx) φ_k(x) dx.  (29)

In the case of the mixed boundary conditions (20), Eq. (28) becomes:

R_k = νϑφ_k(0) − ν Σ_{j=1}^N u_j ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx − ∫₀¹ (Σ_{j=1}^N u_j φ_j(x)) (Σ_{j=1}^N u_j dφ_j/dx) φ_k(x) dx.  (30)

In this paper, we use a P2 finite element space, i.e. quadratic basis functions with an affine element mapping in the interval [0, 1]^d. For the computation of the integrals, we used Gauss quadrature; in the one-dimensional case, we used the three-point Gaussian rule with nodes 1/2 − √(3/20), 1/2, 1/2 + √(3/20) and corresponding weights 5/18, 8/18, 5/18. When writing Newton's method (11), the elements of the Jacobian matrix for both (29) and (30) are given by:

∂R_i/∂u_j = −ν ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx − 2 ∫₀¹ (Σ_{j=1}^N u_j φ_j(x)) (dφ_j/dx) φ_k(x) dx.  (31)

Finally, with all the above, Newton's method (11) involves the iterative solution of a linear system. For the Dirichlet problem, the first and last rows of the Jacobian are replaced by the corresponding unit rows of the identity and the first and last entries of the right-hand side are set to zero:

J(u⁽ⁿ⁾) · [du₁⁽ⁿ⁾, du₂⁽ⁿ⁾, . . . , du_N⁽ⁿ⁾]ᵀ = −[0, R₂, . . . , R_k, . . . , 0]ᵀ |_{u⁽ⁿ⁾},  (32)

while for the problem with the mixed boundary conditions, the first row of the Jacobian contains the derivatives ∂R₁/∂u_j of the Neumann residual and the right-hand side starts with −R₁:

J(u⁽ⁿ⁾) · [du₁⁽ⁿ⁾, du₂⁽ⁿ⁾, . . . , du_N⁽ⁿ⁾]ᵀ = −[R₁, R₂, . . . , R_k, . . . , 0]ᵀ |_{u⁽ⁿ⁾}.  (33)

Numerical Solution of the Burgers equation with Extreme Learning Machine Collocation

Collocating the ELM network function for the one-dimensional Burgers equation leads to the following nonlinear algebraic system for i = 2, . . .
, M − 1:

F_i(w, ν) = ν Σ_{j=1}^N w_j α_j² ψ_j″(x_i) − (Σ_{j=1}^N w_j ψ_j(x_i)) · (Σ_{j=1}^N w_j α_j ψ_j′(x_i)) = 0.  (34)

Then, the imposition of the boundary conditions (19) gives:

F₁(w, ν) = Σ_{j=1}^N w_j ψ_j(0) − γ = 0,  F_M(w, ν) = Σ_{j=1}^N w_j ψ_j(1) = 0,  (35)

while the boundary conditions (20) lead to:

F₁(w, ν) = Σ_{j=1}^N w_j α_j ψ_j′(0) + ϑ = 0,  F_M(w, ν) = Σ_{j=1}^N w_j ψ_j(1) = 0.  (36)

These equations are the residual equations (10) that we solve by Newton's method (11). The elements of the Jacobian matrix ∇_w F are:

∂F_i/∂w_j = ν α_j² ψ_j″(x_i) − ψ_j(x_i) · Σ_{j=1}^N w_j α_j ψ_j′(x_i) − (Σ_{j=1}^N w_j ψ_j(x_i)) · α_j ψ_j′(x_i)

for i = 2, . . . , M − 1, and, due to the Dirichlet boundary conditions (35), we have:

∂F₁/∂w_j(w, λ) = ψ_j(0),  ∂F_M/∂w_j(w, λ) = ψ_j(1).

On the other hand, due to the mixed boundary conditions given by (36), we get:

∂F₁/∂w_j(w, λ) = α_j ψ_j′(0),  ∂F_M/∂w_j(w, λ) = ψ_j(1).

At this point, the application of Newton's method (11), using the exact computation of the derivatives of the basis functions, is straightforward (see (4) and (7)).

Numerical Results

In all the computations with FD, FEM and ELMs, the convergence criterion for Newton's iterations was the L²-norm of the relative error between the solutions resulting from successive iterations⁴; the convergence tolerance was set to 10⁻⁶. In fact, for all methods, Newton's method converged quadratically, even down to tolerances of the order of 10⁻¹⁰, except when the bifurcation parameter was close to zero, where the solutions of both the Burgers problem with mixed boundary conditions and the Bratu problem go asymptotically to infinity. The exact solutions that are available for the one-dimensional Burgers and Bratu problems are derived using Newton's method with a convergence tolerance of 10⁻¹².

First, we present the numerical results for the Burgers equation (18) with Dirichlet boundary conditions (19). Recall that for this case the exact solution is available (see equation (21)).
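To make the FD-Newton loop concrete, here is a minimal self-contained sketch for the Dirichlet case (our own illustration, not the authors' code; the grid size is an arbitrary choice). It uses the residuals F_j and the tridiagonal Jacobian entries given earlier, starting from a linear initial guess that satisfies the boundary conditions:

```python
import numpy as np

nu, N = 0.1, 201
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
gamma = 2.0 / (1.0 + np.exp(-1.0 / nu)) - 1.0   # Lemma 4.1

def residual(u):
    F = np.zeros(N)
    F[0] = u[0] - gamma                          # left Dirichlet BC
    F[-1] = u[-1]                                # right Dirichlet BC
    F[1:-1] = (nu * (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
               - u[1:-1] * (u[2:] - u[:-2]) / (2*h))
    return F

def jacobian(u):
    J = np.zeros((N, N))
    J[0, 0] = J[-1, -1] = 1.0
    for j in range(1, N - 1):
        J[j, j-1] = nu/h**2 + u[j]/(2*h)
        J[j, j]   = -2*nu/h**2 - (u[j+1] - u[j-1])/(2*h)
        J[j, j+1] = nu/h**2 - u[j]/(2*h)
    return J

u = gamma * (1.0 - x)                            # linear initial guess
for it in range(50):
    du = np.linalg.solve(jacobian(u), -residual(u))
    u += du
    if np.linalg.norm(du) < 1e-10:
        break

u_exact = 2.0 / (1.0 + np.exp((x - 1.0) / nu)) - 1.0
print("iterations:", it + 1, "max error vs (21):", np.max(np.abs(u - u_exact)))
```

The remaining error is the O(h²) discretization error of the central-difference scheme; it shrinks accordingly when N is increased.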
For our illustrations, we have selected two different values for the viscosity, namely ν = 0.1 and ν = 0.007. Results were obtained with Newton's iterations starting from an initial guess that is a linear segment satisfying the boundary conditions. Figure 1 shows the corresponding computed solutions for a fixed size N = 40, as well as the relative errors with respect to the exact solution. As shown, the proposed ELM scheme outperforms both the FD and FEM schemes for medium to large sizes of the grid; from low to medium sizes of the grid, all methods perform equivalently. However, as shown in Figure 1(c), for ν = 0.007 and the particular choice of the size (N = 40), the FD scheme fails to approximate sufficiently the steep gradient appearing at the right boundary.

Then, we considered the case of the non-homogeneous Neumann condition on the left boundary (18)-(20); here, we have set ν = 1/10. In this case, the solution is not unique and the resulting bifurcation diagram obtained with FD, FEM and ELM is depicted in Fig. 2. In Table 1, we report the error between the value of the bifurcation point as computed with FD, FEM and ELM for various problem sizes N, with respect to the exact value of the bifurcation point (occurring, for the particular choice of viscosity, at ϑ* = 0.087845767978). The location of the bifurcation point for all numerical methods was estimated by fitting a parabola around the four points (two on the lower and two on the upper branch) with the largest values of λ as obtained by the pseudo-arc-length continuation. As shown, the proposed ELM scheme performs equivalently to FEM for low to medium sizes of the grid and outperforms FEM for medium to large grid sizes; both methods (FEM and ELM) outperform FD for all sizes of the grid.
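The pseudo-arc-length continuation behind these bifurcation diagrams can be illustrated on a scalar toy fold (a sketch of ours, unrelated to the PDE code): for f(x, λ) = x² + λ − 1 the branch folds at (x, λ) = (0, 1), and the predictor-corrector scheme walks around the turning point where naive continuation in λ would fail.

```python
import numpy as np

def f(v):                       # toy residual, v = (x, lam); fold at (0, 1)
    return v[0]**2 + v[1] - 1.0

def grad(v):                    # (df/dx, df/dlam)
    return np.array([2.0 * v[0], 1.0])

ds = 0.05
v = np.array([-1.0, 0.0])       # start on the branch x = -sqrt(1 - lam)
t = np.array([-grad(v)[1], grad(v)[0]])   # tangent: orthogonal to the gradient
t /= np.linalg.norm(t)
if t[1] < 0:
    t = -t                      # walk towards increasing lambda first

lams = [v[1]]
for _ in range(80):
    w = v + ds * t              # predictor step along the tangent
    for _ in range(25):         # corrector: Newton on (f, arc-length constraint)
        F = np.array([f(w), t @ (w - v) - ds])
        J = np.vstack([grad(w), t])
        w = w - np.linalg.solve(J, F)
        if abs(f(w)) < 1e-13:
            break
    t = (w - v) / np.linalg.norm(w - v)   # secant tangent for the next step
    v = w
    lams.append(v[1])

print("max lambda reached:", max(lams), "final (x, lam):", v)
```

The augmented system stays nonsingular at the fold (where ∂f/∂x = 0), which is exactly why arc-length continuation can trace both branches through the turning point.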
N | FD | FEM | ELM (logistic SF) | ELM (Gaussian RBF)
– | -5.3487e-05 | -7.6969e-09 | -2.0571e-09 | -2.1431e-09
100 | -1.3370e-05 | -2.1575e-09 | -9.8439e-09 | -9.8483e-09
200 | -3.3420e-06 | -5.9262e-09 | -9.6156e-09 | -9.6095e-09
400 | -8.3473e-07 | 4.1474e-09 | 9.3882e-10 | 9.3338e-10

Table 1: One-dimensional Burgers equation (18) with mixed boundary conditions (20). Comparative results with respect to the error between the estimated value of the turning point as obtained with the FD, FEM and ELM schemes and the exact value of the turning point at ϑ* = 0.087845767978 for ν = 1/10. The value of the turning point was estimated by fitting a parabola around the four points with the largest λ values as obtained by the arc-length continuation.

In this case, steep gradients arise at the right boundary, related to the presence of the upper unstable solution, as discussed in Lemma 4.2 and Corollary 4.2.1. In Table 2, we report the error between the numerically computed value and the exact analytically obtained value (see Eq. (22)) at x = 0 when the value of the boundary condition ϑ at the left boundary is ϑ = 10⁻⁶. Again, as shown, near the left boundary the proposed ELM scheme outperforms both FEM and FD for medium to large sizes of the grid.

N | FD | FEM | ELM (logistic SF) | ELM (Gaussian RBF)
– | -1.8099e-01 | 2.0532e-02 | -6.5492e-01 | -6.1366e-01
50 | -2.6632e-02 | 7.6660e-04 | -5.8353e-01 | -6.0850e-01
100 | -6.5179e-03 | 1.5752e-04 | -1.9976e-01 | -1.0504e-01
200 | -1.6105e-03 | 8.9850e-05 | -2.4956e-06 | -5.0483e-06
400 | -3.9992e-04 | 6.2798e-05 | -3.4737e-06 | -9.5189e-06

Table 2: One-dimensional Burgers equation (18) with mixed boundary conditions (20): error of the computed solution at x = 0 with respect to the exact solution for ϑ = 10⁻⁶.

Remark 4.1 (Linearization of the Burgers equation for its numerical solution). For the numerical solution of the Burgers equation (18) with boundary conditions given by (19), one can also consider the following simple iterative procedure that linearizes the equation:

Given u⁽⁰⁾, do until convergence: find u⁽ᵏ⁾ such that ν ∂²u⁽ᵏ⁾/∂x² − u⁽ᵏ⁻¹⁾ ∂u⁽ᵏ⁾/∂x = 0.

In this way, the nonlinear term becomes a linear advection term with a non-constant coefficient given by the evaluation of u at the previous iteration. This results in a fixed-point scheme.
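The fixed-point scheme of Remark 4.1 can be sketched in a few lines (our own illustration, not the authors' code). We deliberately use a mild viscosity, ν = 0.5, at which the linearized iteration contracts even from a crude linear initial guess; each step solves a linear advection-diffusion problem with frozen coefficient u⁽ᵏ⁻¹⁾:

```python
import numpy as np

nu, N = 0.5, 101                 # mild viscosity chosen so the iteration is well behaved
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
gamma = 2.0 / (1.0 + np.exp(-1.0 / nu)) - 1.0

u = gamma * (1.0 - x)            # initial guess
for k in range(1000):
    # assemble nu*u'' - u_old*u' = 0 with Dirichlet BCs
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = gamma
    for j in range(1, N - 1):
        A[j, j-1] = nu/h**2 + u[j]/(2*h)
        A[j, j]   = -2.0*nu/h**2
        A[j, j+1] = nu/h**2 - u[j]/(2*h)
    u_new = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

u_exact = 2.0 / (1.0 + np.exp((x - 1.0) / nu)) - 1.0
print("fixed-point iterations:", k + 1, "max error vs (21):", np.max(np.abs(u - u_exact)))
```

Compared with Newton's quadratic convergence, this Picard-type iteration converges only linearly, which is consistent with the slower convergence reported for the scheme.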
Such linearized equations can be easily solved, being linear elliptic equations, and in this case one can also perform the analysis for linear systems presented in [8]. The results of this procedure are depicted in Figure 3. We point out that such iterations generally converge very slowly and, most importantly from our point of view, convergence is obtained only for a very "good" initial guess of the solution.

The one- and two-dimensional Liouville-Bratu-Gelfand Problem

The Liouville-Bratu-Gelfand model arises in many physical and chemical systems. It is an elliptic partial differential equation which, in its general form, is given by [6]:

∆u(x) + λe^{u(x)} = 0, x ∈ Ω,  (37)

with homogeneous Dirichlet conditions

u(x) = 0, x ∈ ∂Ω.  (38)

The domain that we consider here is Ω = [0, 1]^d in R^d, d = 1, 2. The one-dimensional problem admits an analytical solution given by [38]:

u(x) = 2 ln [cosh θ / cosh(θ(1 − 2x))],  (39)

where θ is such that cosh θ = 4θ/√(2λ). It can be shown that when 0 < λ < λ_c the problem admits two branches of solutions that meet at λ_c ∼ 3.513830719, a limit point (saddle-node bifurcation) that marks the onset of two branches of solutions with different stability; beyond that point no solutions exist. For the two-dimensional problem, to the best of our knowledge, no exact analytical solution (as in the one-dimensional case) exists; the value of the turning point reported in the literature (e.g. [9,25]) is λ_c ∼ 6.808124.

Numerical Solution with Finite Differences and Finite Elements

The discretization of the one-dimensional problem in N points with central finite differences in the unit interval 0 ≤ x ≤ 1 leads to the following system of N − 2 algebraic equations ∀x_j = (j − 1)h, j = 2, . . . , N − 1, h = 1/(N − 1):

F_j(u) = (1/h²)(u_{j+1} − 2u_j + u_{j−1}) + λe^{u_j} = 0,

where, at the boundaries x₁ = 0, x_N = 1, we have u₁ = u_N = 0. The solution of the above N − 2 nonlinear algebraic equations is obtained iteratively using the Newton-Raphson method.
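The implicit exact solution (39) can be explored numerically before any discretization (a sketch of ours): parameterizing by θ, the relation cosh θ = 4θ/√(2λ) gives λ(θ) = 8θ²/cosh²θ, whose maximum is the limit point λ_c, and for λ < λ_c the two θ roots give the two branches.

```python
import numpy as np

# From cosh(theta) = 4*theta/sqrt(2*lambda):  lambda(theta) = 8*theta^2 / cosh(theta)^2
theta = np.linspace(1e-6, 6.0, 2_000_001)
lam = 8.0 * theta**2 / np.cosh(theta)**2

lam_c = lam.max()                 # limit point of the one-dimensional problem
print("lambda_c =", lam_c)        # ~3.513830719

# for lambda = 2 < lambda_c there are two theta roots, hence two solutions;
# their sup-norms are u(1/2) = 2*ln(cosh(theta))
roots = theta[np.where(np.diff(np.sign(lam - 2.0)))[0]]
print("theta roots:", roots, "||u||_inf:", 2.0 * np.log(np.cosh(roots)))
```

The small-θ root corresponds to the stable lower branch and the large-θ root to the unstable upper branch; the maximum of λ(θ) reproduces the turning point value quoted above.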
The Jacobian is now tridiagonal; at each n-th iteration, the elements on the main diagonal are given by ∂F_j/∂u_j⁽ⁿ⁾ = −2/h² + λe^{u_j⁽ⁿ⁾}, and the elements on the first diagonals above and below are given by ∂F_{j+1}/∂u_j⁽ⁿ⁾ = ∂F_j/∂u_{j+1}⁽ⁿ⁾ = 1/h².

The discretization of the two-dimensional Bratu problem in N × N points with central finite differences on the square grid 0 ≤ x, y ≤ 1 with zero boundary conditions leads to the following system of (N − 2) × (N − 2) algebraic equations ∀(x_i = (i − 1)h, y_j = (j − 1)h), i, j = 2, . . . , N − 1, h = 1/(N − 1):

F_{i,j}(u) = (1/h²)(u_{i+1,j} + u_{i,j+1} − 4u_{i,j} + u_{i,j−1} + u_{i−1,j}) + λe^{u_{i,j}} = 0.

The Jacobian is now a (N − 2)² × (N − 2)² block tridiagonal matrix of the form ∇F = (1/h²) tridiag(I, T_i, I), where I is the (N − 2) × (N − 2) identity matrix and T_i is the (N − 2) × (N − 2) tridiagonal matrix with non-null elements on the j-th row given by 1, −4 + h²λe^{u_{i,j}}, 1.

Regarding the FEM solution, for the one-dimensional Bratu problem, Eq. (16) gives:

R_k = ∫_Ω [∂²u/∂x² + λe^{u(x)}] φ_k(x) dx.  (40)

By inserting Eq. (15) into Eq. (40) and integrating by parts (Green's formula), we get:

R_k = φ_k(x) du/dx |₀¹ − Σ_{j=1}^N u_j ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx + λ ∫₀¹ e^{Σ_{j=1}^N u_j φ_j(x)} φ_k(x) dx,  (41)

and, because of the zero Dirichlet boundary conditions, Eq. (41) becomes:

R_k = −Σ_{j=1}^N u_j ∫₀¹ (dφ_j/dx)(dφ_k/dx) dx + λ ∫₀¹ e^{Σ_{j=1}^N u_j φ_j(x)} φ_k(x) dx.

The elements of the Jacobian matrix are given by:

∂R_i/∂u_j = −∫₀¹ (dφ_j/dx)(dφ_k/dx) dx + λ ∫₀¹ e^{Σ_{j=1}^N u_j φ_j(x)} φ_j(x)φ_k(x) dx.  (42)

Due to the Dirichlet boundary conditions, the resulting Newton system (43) has the same bordered structure as (32), with unit rows of the identity in the first and last equations and zero first and last entries in the right-hand side.

For the two-dimensional Bratu problem, the residuals are given by:

R_k = ∫_Ω [∂²u(x, y)/∂x² + ∂²u(x, y)/∂y² + λe^{u(x,y)}] φ_k(x, y) dx dy.
By applying the Green's formula for integration, we get:

R_k = ∮_{∂Ω} φ_k(x, y) ∇u(x, y) · n dℓ − ∫_Ω ∇u(x, y) · ∇φ_k(x, y) dx dy + ∫_Ω λe^{u(x,y)} φ_k(x, y) dx dy.

By inserting Eq. (15) and the zero Dirichlet boundary conditions, we get:

R_k = −Σ_{j=1}^N u_j ∫_Ω ∇φ_j(x, y) · ∇φ_k(x, y) dx dy + ∫_Ω λe^{Σ_{j=1}^N u_j φ_j(x,y)} φ_k(x, y) dx dy.

Thus, the elements of the Jacobian matrix for the two-dimensional Bratu problem are given by:

∂R_k/∂u_j = −∫_Ω ∇φ_j(x, y) · ∇φ_k(x, y) dx dy + ∫_Ω λe^{Σ_{j=1}^N u_j φ_j(x,y)} φ_j(x, y)φ_k(x, y) dx dy.

As before, for our computations we have used quadratic basis functions with an affine element mapping in the domain [0, 1]².

Numerical Solution with Extreme Learning Machine Collocation

Collocating the ELM network function (1) in the 1D Bratu problem (37) leads to the following system:

F_i(w, λ) = Σ_{j=1}^N w_j α_j² ψ_j″(x_i) + λ exp(Σ_{j=1}^N w_j ψ_j(x_i)) = 0, i = 2, . . . , M − 1,

with boundary conditions:

F₁(w, λ) = Σ_{j=1}^N w_j ψ_j(0) = 0,  F_M(w, λ) = Σ_{j=1}^N w_j ψ_j(1) = 0.

Thus, the elements of the Jacobian matrix ∇_w F are given by:

∂F_i/∂w_j = α_j² ψ_j″(x_i) + λψ_j(x_i) exp(Σ_{j=1}^N w_j ψ_j(x_i)), i = 2, . . . , M − 1,

and

∂F₁/∂w_j(w, λ) = ψ_j(0),  ∂F_M/∂w_j(w, λ) = ψ_j(1).

The application of Newton's method (11) is straightforward, using the exact computation of the derivatives of the basis functions (see (4) and (7)). For the two-dimensional Bratu problem (37), we have:

F_i(w, λ) = Σ_{j=1}^N w_j α_{j,1}² ψ_j″(x_i, y_i) + Σ_{j=1}^N w_j α_{j,2}² ψ_j″(x_i, y_i) + λ exp(Σ_{j=1}^N w_j ψ_j(x_i, y_i)) = 0, i = 1, . . . , M_Ω,

with boundary conditions:

F_k(w, λ) = Σ_{j=1}^N w_j ψ_j(x_k, y_k) = 0, k = 1, . . . , M₁.

Thus, the elements of the Jacobian matrix ∇_w F read:

∂F_i/∂w_j = α_{j,1}² ψ_j″(x_i, y_i) + α_{j,2}² ψ_j″(x_i, y_i) + λψ_j(x_i, y_i) exp(Σ_{j=1}^N w_j ψ_j(x_i, y_i)), i = 1, . . . , M_Ω,

and, for the boundary residuals,

∂F_k/∂w_j(w, λ) = ψ_j(x_k, y_k), k = 1, . . . , M₁.

Also in this case, with the above computations, the application of Newton's method (11) is straightforward.
Numerical results for the one-dimensional problem

First, we show the numerical results for the one-dimensional Liouville-Bratu-Gelfand equation (37) with homogeneous Dirichlet boundary conditions (38). Recall that an exact solution, although in implicit form, is available in this case (see equation (39)); thus, as discussed, the exact solutions are derived using Newton's method with a convergence tolerance of 10⁻¹². Figure 4 depicts the comparative results between the exact, FD, FEM and ELM solutions on the upper branch as obtained by applying Newton's iterations, for two values of the parameter λ and a fixed N = 40, namely for λ = 3, close to the turning point (occurring at λ_c ∼ 3.513830719), and for λ = 0.2. For our illustrations, we have set as initial guess u₀(x) a parabola that satisfies the homogeneous boundary conditions, namely:

u₀(x) = 4l₀(x − x²),

with a fixed L∞-norm ||u||∞ = l₀ close to the one obtained from the exact solution. In particular, for λ = 3 we used as initial guess a parabola with l₀ = 2.2; in all cases Newton's iterations converged to the correct unstable upper-branch solution. For λ = 0.2, we used as initial guess a parabola with l₀ = 6.4 (the exact solution has l₀ ∼ 6.5); again, in all cases, Newton's iterations converged to the correct unstable upper-branch solution. To further clarify the behaviour of the convergence, in Figure 5 we report the regimes of convergence for a grid of L∞-norms of the initial guesses (parabolas) and values of λ.

Remark 4.2 (Linearization of the equation for the numerical solution of the Liouville-Bratu-Gelfand problem). For the solution of equation (37) with boundary conditions given by (38), one can consider the following iterative procedure that linearizes the equation:

Given u⁽⁰⁾, do until convergence: find u⁽ᵏ⁾ such that ∆u⁽ᵏ⁾ + λe^{u⁽ᵏ⁻¹⁾}u⁽ᵏ⁾ = λ(u⁽ᵏ⁻¹⁾ − 1)e^{u⁽ᵏ⁻¹⁾}.

In this way, the nonlinear term becomes a linear reaction term with a non-constant coefficient given by the evaluation of the nonlinearity at the previous step. We then implemented fixed-point iterations until convergence. Such a linearization procedure is used, for example, in [33].
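The branch selection just described can be reproduced with a small FD-Newton sketch (ours, not the authors' code; the grid size and the parabola amplitude are illustrative choices): at λ = 2, a zero initial guess leads to the stable lower branch, while a parabola with l₀ close to the upper-branch sup-norm (≈ 2.88 from the exact solution (39)) leads to the unstable upper branch.

```python
import numpy as np

lam, N = 2.0, 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def solve_bratu(u0, iters=50):
    u = u0.copy()
    for _ in range(iters):
        F = np.zeros(N)
        F[0], F[-1] = u[0], u[-1]                         # homogeneous Dirichlet BCs
        F[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + lam * np.exp(u[1:-1])
        J = np.zeros((N, N))
        J[0, 0] = J[-1, -1] = 1.0
        idx = np.arange(1, N - 1)
        J[idx, idx] = -2.0/h**2 + lam * np.exp(u[idx])    # main diagonal
        J[idx, idx - 1] = 1.0/h**2                        # sub-diagonal
        J[idx, idx + 1] = 1.0/h**2                        # super-diagonal
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < 1e-12:
            break
    return u

u_low = solve_bratu(np.zeros(N))              # zero guess -> stable lower branch
u_up = solve_bratu(4 * 2.9 * (x - x**2))      # parabola with l0 = 2.9 -> upper branch
print("lower ||u||_inf:", u_low.max(), "upper ||u||_inf:", u_up.max())
```

This mirrors the basin-of-attraction picture of Figure 5: the initial amplitude l₀ decides which of the two coexisting solutions Newton's method converges to.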
In Figure 6, we report some results on the application of this method. We note that this scheme converges more slowly and is not as robust as Newton's method.

Bifurcation diagram and numerical accuracy

In this section, we report the numerical results obtained by the bifurcation analysis of the one-dimensional Bratu problem (37). Figure 7 shows the constructed bifurcation diagram with respect to the parameter λ, and in Table 3 we report the accuracy of the computed value of the turning point as obtained with FD, FEM and ELMs, versus its exact value. As shown, the ELMs provide higher numerical accuracy for the value of the turning point for medium to large sizes of the grid, and results equivalent to FEM (for the ELM with SF); both outperform the FD scheme. In Figures 8 and 9, we report the contour plots of the L∞-norms of the differences between the solutions computed by FD, FEM and ELMs and the exact solutions, for the lower (8) and upper branch (9), respectively, with respect to N and λ. As shown, the ELM schemes outperform both the FD and FEM methods for medium to large problem sizes N and provide results equivalent to FEM for low to medium problem sizes, with both (FEM and ELMs) outperforming the FD scheme.

N | FD | FEM | ELM (logistic SF) | ELM (Gaussian RBF)
– | -7.3137e-04 | 8.4422e-07 | 2.9818e-07 | 6.6092e-05
100 | -1.8282e-04 | 5.0597e-08 | -3.7086e-08 | 6.1302e-08
200 | -4.5683e-05 | 2.3606e-08 | -4.5484e-09 | -2.6770e-09
400 | -1.1412e-05 | 1.3557e-08 | 2.0169e-09 | 2.0275e-09

Table 3: One-dimensional Bratu problem (37). Accuracy of FD, FEM and ELMs in the approximation of the value of the turning point with respect to the exact value λ = 3.513830719125162. Values express the difference between the computed turning point and the exact one. The value of the turning point was estimated by fitting a parabola around the four points with the largest λ values as obtained with arc-length continuation.
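The turning-point estimator used throughout (fitting a parabola through the branch points with the largest λ and taking its vertex) reduces to a few lines; here is a sketch with synthetic points on the toy fold λ = 1 − x² (our illustration, not the paper's data):

```python
import numpy as np

# four branch points (two per branch) near the fold of the toy curve lam = 1 - x^2
xs = np.array([-0.15, -0.05, 0.05, 0.15])
lams = 1.0 - xs**2

a, b, c = np.polyfit(xs, lams, 2)      # fit the parabola lam(x) = a*x^2 + b*x + c
lam_star = c - b**2 / (4.0 * a)        # vertex of the parabola = estimated turning point
print(lam_star)                        # -> 1.0
```

On exact quadratic data the vertex recovers the fold exactly; on continuation data, the accuracy of the estimate is limited by the arc-length step near the fold and by the local quadratic approximation of the branch.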
Numerical results for the two-dimensional problem

For the two-dimensional problem (37)-(38), no exact analytical solution is available. Thus, for comparing the numerical accuracy of the FD, FEM and ELM schemes, we considered the value of the bifurcation point that has been reported in key works, as discussed in Section 4.2. Figure 10 depicts the bifurcation diagram as computed via pseudo-arc-length continuation (see Section 3).

We also considered the radial Gelfand-Bratu problem:

u″(r) + ((d − 1)/r) u′(r) + λe^{u(r)} = 0, 0 < r < 1, u(1) = u′(0) = 0.  (44)

In the case d = 2 this equation admits multiple solutions if λ < λ_c = 2. In [44], the authors used Mathematica to obtain analytical solutions at various values of λ; for our tests we consider λ = 1/2 and λ = 1, whose closed-form solutions u(r) are given in (45). Figure 11 depicts the numerical accuracy of the ELM collocation schemes with respect to the exact solutions for these two values of λ. Because no meshing procedure is involved, the implementation of Newton's method with ELM collocation is straightforward when the geometry of the domain changes.

N | grid | FD | FEM | ELM (logistic SF) | ELM (Gaussian RBF)
– | – | 6.783434 | 7.083742 | 6.845015 | 7.207203
100 | 10x10 | 6.792626 | 6.984260 | 6.723902 | 6.930798
196 | 14x14 | 6.800361 | 6.900313 | 6.855055 | 6.882435
400 | 20x20 | 6.804392 | 6.856401 | 6.799440 | 6.829754
784 | 28x28 | 6.806235 | 6.835771 | 6.801689 | 6.806149
1600 | 40x40 | 6.807220 | 6.824770 | 6.806899 | 6.804600

Table 4: Turning point estimation for the two-dimensional Bratu problem. The value reported in the literature in key works (see e.g. [9]) is λ* = 6.808124. The value of the turning point was estimated by fitting a parabola around the four points with the largest λ values as obtained by the arc-length continuation.

Conclusions

We proposed a numerical approach based on Extreme Learning Machines (ELMs) and collocation for the approximation of steady-state solutions of nonlinear PDEs.
The proposed scheme takes advantage of the ELMs' property of being universal function approximators, bypassing the computationally very expensive training phase of other types of machine learning, such as single- or multi-layer ANNs and deep learning networks, which most of the time also comes without any guarantee of convergence. A solution of the PDE is sought in an approximation subspace whose coefficients are the (unknown) weights from the hidden to the output layer. For linear PDEs, these can be computed by solving a linear regularization problem in one step. In our previous work [8], we demonstrated that ELMs can provide robust and accurate approximations of the solution of benchmark linear PDEs with steep gradients, for which analytical solutions were available. Here, building on this work, we make a step change by showing how ELMs can be used to solve nonlinear PDEs and, by bridging them with continuation methods, how one can exploit the arsenal of numerical bifurcation theory to trace branches of solutions past critical points. For our demonstrations, we considered two celebrated classes of nonlinear PDEs whose solutions bifurcate as parameter values change: the one-dimensional viscous Burgers equation (a fundamental representative of advection-diffusion PDEs) and the one- and two-dimensional Liouville-Bratu-Gelfand equation (a fundamental representative of reaction-diffusion PDEs). By coupling the proposed numerical scheme with Newton-Raphson iterations and the "pseudo" arc-length continuation method, we constructed the corresponding bifurcation diagrams past turning points. The efficiency of the proposed numerical ELM collocation scheme was compared against two of the most established numerical solution methods, namely central Finite Differences and Galerkin Finite Elements.
By doing so, we showed that (for the same problem size) the proposed machine-learning approach outperforms the FD and FEM schemes for medium to large sizes of the grid, both with respect to the accuracy of the computed solutions over a wide range of bifurcation parameter values and with respect to the approximation accuracy of the turning points. Hence, the proposed approach arises as an alternative and powerful new numerical technique for the approximation of steady-state solutions of nonlinear PDEs. Furthermore, its implementation is far simpler than that of FEM while providing equivalent or even better numerical accuracy, and in all cases it is shown to outperform the simple FD scheme, which fails to approximate the steep gradients that arise here near the boundaries. Of course, there are many open problems linked to the implementation of the proposed scheme that call for further and deeper investigation, such as the theoretical analysis of the impact of the type of transfer functions, and of the probability distribution of their parameter values, on the approximation of the solutions. Further directions could be the extension of the scheme to the solution of time-dependent nonlinear PDEs, as well as to the solution of inverse problems in PDEs.

Remark 3.2. The three numerical methods (FD, FEM and ELM) are compared with respect to the dimension of the Jacobian matrix J, which in the case of FD and FEM is square and related to the number N of nodes, i.e. J ∈ R^{N×N}, and in the ELM case is rectangular and related to both the number M of collocation nodes and the number N of neurons, i.e. J ∈ R^{M×N}. In all cases, N is the parameter governing the computational cost: the inversion of J is O(N³), and the same holds in the ELM case for the inversion of the matrix JᵀJ ∈ R^{N×N}.
Figure 1: Numerical solution and accuracy of the FD, FEM and ELM schemes for the one-dimensional viscous Burgers problem with Dirichlet boundary conditions (18), (19). (a,b) Viscosity ν = 0.1: (a) solutions for a fixed problem size N = 40; (b) L²-norm of the differences with respect to the exact solution (21) for various problem sizes. (c,d) Viscosity ν = 0.007: (c) solutions for a fixed problem size N = 40; (d) L²-norm errors with respect to the exact solution for various problem sizes.

Figure 2: (a) One-dimensional Burgers equation (18) with mixed boundary conditions (20): bifurcation diagram with respect to the Neumann boundary value ϑ as obtained for ν = 1/10 with the FD, FEM and ELM schemes with a fixed problem size N = 400; (b) zoom near the turning point.

Table 2: One-dimensional Burgers equation (18) with mixed boundary conditions (20). Comparative results with respect to the error between the computed solution (at x = 0) with FD, FEM and ELMs (with both sigmoidal and radial basis functions) and the exact solution u(0) = 1.798516682636303 (see Eq. (22)) for ϑ = 1e−6 (the value of the Neumann condition at the left boundary).

Figure 3: Numerical accuracy of the FD and ELM schemes with respect to the exact solution, for the one-dimensional Burgers equation (18) with Dirichlet boundary conditions (19), as obtained by the fixed-point scheme described in Remark 4.1, for (a) ν = 0.1 and (b) ν = 0.007. We depict the L²-norm of the difference between the solutions obtained with FD and ELMs and the exact solution (21).

Figure 4: Numerical solutions and accuracy of the FD, FEM and ELM schemes for the one-dimensional Bratu problem (37). (a) Computed upper-branch unstable solutions at λ = 3 for a fixed problem size N = 40. (b) L²-norm of the differences with respect to the exact unstable solution (39) at λ = 3 for various values of N. (c) Computed upper-branch unstable solutions at λ = 0.2 for a fixed problem size N = 40. (d) L²-norm of the differences with respect to the exact unstable solution (39) at λ = 0.2 for various values of N. The initial guess was a parabola satisfying the homogeneous boundary conditions with a fixed L∞-norm ||u||∞ = l₀ close to that of the exact solution.

Figure 5: Convergence regimes (basins of attraction) of Newton's method with the (a) FD, (b) FEM, and (c,d) ELM numerical schemes for the one-dimensional Bratu problem (37), for a grid of initial guesses (L∞-norms of parabolas that satisfy the boundary conditions (38)) and values of λ. Green points indicate convergence to the lower-branch solutions; red points indicate convergence to the upper-branch solutions; blue points indicate divergence. (c) ELM with logistic SF (3); (d) ELM with Gaussian RBF (6).

Figure 6: Fixed-point iterations: L²-norm of the difference errors for the lower- and upper-branch Liouville-Bratu-Gelfand solutions (39) for λ = 2: (a) L² errors with respect to N for the lower-branch solution; (b) L² errors with respect to N for the upper branch.

Figure 7: (a) Bifurcation diagram for the one-dimensional Bratu problem (37), with a fixed problem size N = 400. (b) Zoom near the turning point.

Figure 8: One-dimensional Bratu problem (37). Contour plots of the L∞-norms of the differences between the computed and exact (39) solutions for the lower stable branch: (a) FD, (b) FEM, (c) ELM with logistic SF (3), (d) ELM with Gaussian RBF (6).

Figure 9: One-dimensional Bratu problem (37). Contour plots of the L∞-norms of the differences between the computed and exact (39) solutions for the upper unstable branch: (a) FD, (b) FEM, (c) ELM with logistic SF (3), (d) ELM with Gaussian RBF (6).

Remark 4.3 (The Gelfand-Bratu model). The Liouville-Bratu-Gelfand equation (37) in a unitary ball B ⊂ R^d with homogeneous Dirichlet boundary conditions is usually referred to as the Gelfand-Bratu model. This equation possesses radial solutions u(r) of the one-dimensional nonlinear boundary-value problem [47] given in Eq. (44).

Figure 10: (a) Computed bifurcation diagram for the two-dimensional Bratu problem (37), with a grid of 40 × 40 points. (b) Zoom near the turning point.

Figure 11: Numerical accuracy of ELMs for the radial two-dimensional Gelfand-Bratu problem (44). L²-norm of the differences from the analytical solutions (45) with respect to the number of neurons N in ELMs with both logistic SF (3) and Gaussian RBF (6): (a) λ = 1/2, (b) λ = 1.

Finally, we make explicit that in all the rest of this work, for the ELM case, we use a number M of collocation points that is half the number N of neurons. This choice is justified by our previous work [8], where it performed better for linear PDEs with steep gradients. In general, we point out that by increasing M to 2N/3, 3N/4, etc.³, one gets even better results (see e.g. our previous work [8] on the solution of linear PDEs). Table 5 summarizes the computed values of the turning point as estimated with the FD, FEM and ELM schemes for various sizes N of the grid.
Footnotes:

1. In Huang [29] it is suggested to take, in I = [-1, 1], the alpha_j randomly generated in the interval [-1, 1] and the beta_j randomly generated in [0, 1]. This construction leads to functions that are not well suited for our purposes: for example, if alpha_j = 0.1 and beta_j = 0.9, the center is c_j = -9. Moreover, if alpha_j is small, the function phi_j is very similar to a constant function on [-1, 1] and is therefore useless for our purposes.

2. The usual algorithm implemented in Matlab treats any singular value less than a tolerance as zero; by default, this tolerance is set to max(size(A)) * eps(norm(A)).

3. The case M = N can be solved only by the use of a (Moore-Penrose) pseudo-inverse (17), because the invertibility of the Jacobian of the nonlinear PDE operator cannot be guaranteed in advance.

4. The relative error is the L2-norm of the difference between two successive solutions, ||u(w_{-2}) - u(w_{-1})||_2. In particular, in the ELM framework it is given by ||S^T (w_{-2} - w_{-1})||_2, where S is the collocation matrix defined in eq. (2).

Acknowledgments

Francesco Calabrò and Constantinos Siettos were partially supported by INdAM, through GNCS research projects. This support is gratefully acknowledged.

References

[1] Allen, E.J., Burns, J.A., Gilliam, D.S.: Numerical approximations of the dynamical system generated by Burgers' equation with Neumann-Dirichlet boundary conditions. ESAIM: Mathematical Modelling and Numerical Analysis 47(5), 1465-1492 (2013)
[2] Asprone, D., Auricchio, F., Manfredi, G., Prota, A., Reali, A., Sangalli, G.: Particle methods for a 1D elastic model problem: Error analysis and development of a second-order accurate formulation. Computer Modeling in Engineering & Sciences (CMES) 62(1), 1-21 (2010)
[3] Auricchio, F., Da Veiga, L.B., Hughes, T.J., Reali, A., Sangalli, G.: Isogeometric collocation for elastostatics and explicit dynamics. Computer Methods in Applied Mechanics and Engineering 249, 2-14 (2012)
[4] Bai, Z., Huang, G.B., Wang, D., Wang, H., Westover, M.B.: Sparse extreme learning machine for classification. IEEE Transactions on Cybernetics 44(10), 1858-1870 (2014)
[5] Benton, E.R., Platzman, G.W.: A table of solutions of the one-dimensional Burgers equation. Quarterly of Applied Mathematics 30(2), 195-212 (1972)
[6] Boyd, J.P.: An analytical and numerical study of the two-dimensional Bratu equation. Journal of Scientific Computing 1(2), 183-206 (1986)
[7] Brezzi, F., Rappaz, J., Raviart, P.A.: Finite dimensional approximation of nonlinear problems. Numerische Mathematik 38(1), 1-30 (1982)
[8] Calabrò, F., Fabiani, G., Siettos, C.: Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients. arXiv preprint arXiv:2012.05871 (2020)
[9] Chan, T.F., Keller, H.: Arc-length continuation and multigrid techniques for nonlinear elliptic eigenvalue problems. SIAM Journal on Scientific and Statistical Computing 3(2), 173-194 (1982)
[10] Chan-Wai-Nam, Q., Mikael, J., Warin, X.: Machine learning for semi linear PDEs. Journal of Scientific Computing 79(3), 1667-1712 (2019)
[11] Chaturvedi, I., Ragusa, E., Gastaldo, P., Zunino, R., Cambria, E.: Bayesian network based extreme learning machine for subjectivity detection. Journal of The Franklin Institute 355(4), 1780-1797 (2018)
[12] Chen, J., Zeng, Y., Li, Y., Huang, G.B.: Unsupervised feature selection based extreme learning machine for clustering. Neurocomputing 386, 198-207 (2020)
[13] Cliffe, K., Spence, A., Tavener, S.: The numerical analysis of bifurcation problems with application to fluid mechanics. Acta Numerica 9, 39-131 (2000)
[14] Dai, H., Cao, J., Wang, T., Deng, M., Yang, Z.: Multilayer one-class extreme learning machine. Neural Networks 115, 11-22 (2019)
[15] Dhooge, A., Govaerts, W., Kuznetsov, Y.A., Meijer, H.G.E., Sautois, B.: New features of the software MatCont for bifurcation analysis of dynamical systems. Mathematical and Computer Modelling of Dynamical Systems 14(2), 147-175 (2008)
[16] Doedel, E., Tuckerman, L.S.: Numerical methods for bifurcation problems and large-scale dynamical systems, vol. 119. Springer Science & Business Media (2012)
[17] Doedel, E.J., Champneys, A.R., Dercole, F., Fairgrieve, T.F., Kuznetsov, Y.A., Oldeman, B., Paffenroth, R., Sandstede, B., Wang, X., Zhang, C.: AUTO-07p: Continuation and bifurcation software for ordinary differential equations (2007)
[18] Dwivedi, V., Srinivasan, B.: Physics informed extreme learning machine (PIELM) - A rapid method for the numerical solution of partial differential equations. Neurocomputing 391, 96-118 (2020)
[19] Fresca, S., Dede, L., Manzoni, A.: A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs. Journal of Scientific Computing 87(61) (2021)
[20] Gebhardt, C.G., Steinbach, M.C., Schillinger, D., Rolfes, R.: A framework for data-driven structural analysis in general elasticity based on nonlinear optimization: The dynamic case. International Journal for Numerical Methods in Engineering 121(24), 5447-5468 (2020)
[21] Glowinski, R., Keller, H.B., Reinhart, L.: Continuation-conjugate gradient methods for the least squares solution of nonlinear boundary value problems. SIAM Journal on Scientific and Statistical Computing 6(4), 793-832 (1985)
[22] González-García, R., Rico-Martìnez, R., Kevrekidis, I.G.: Identification of distributed parameter systems: A neural net based approach. Computers & Chemical Engineering 22, S965-S968 (1998)
[23] Govaerts, W.J.: Numerical methods for bifurcations of dynamical equilibria. SIAM (2000)
[24] Hadash, G., Kermany, E., Carmeli, B., Lavi, O., Kour, G., Jacovi, A.: Estimate and replace: A novel approach to integrating deep neural networks with existing applications. arXiv preprint arXiv:1804.09028 (2018)
[25] Hajipour, M., Jajarmi, A., Baleanu, D.: On the accurate discretization of a highly nonlinear boundary value problem. Numerical Algorithms 79(3), 679-695 (2018)
[26] Han, J., Jentzen, A., Weinan, E.: Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences 115(34), 8505-8510 (2018)
[27] Huang, G., Huang, G.B., Song, S., You, K.: Trends in extreme learning machines: A review. Neural Networks 61, 32-48 (2015)
[28] Huang, G., Kasun, L., Zhou, H., Vong, C.: Representational learning with extreme learning machine for big data. IEEE Intelligent Systems 28(6), 31-34 (2013)
[29] Huang, G., Zhou, H., Ding, X., Zhang, R.: Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42(2), 513-529 (2012). DOI 10.1109/TSMCB.2011.2168604
[30] Huang, G.B., Ding, X., Zhou, H.: Optimization method based extreme learning machine for classification. Neurocomputing 74(1-3), 155-163 (2010)
[31] Huang, G.B., Zhou, H., Ding, X., Zhang, R.: Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42(2), 513-529 (2011)
[32] Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing 70(1-3), 489-501 (2006)
[33] Iqbal, S., Zegeling, P.A.: A numerical study of the higher-dimensional Gelfand-Bratu model. Computers & Mathematics with Applications 79(6), 1619-1633 (2020)
[34] Kelley, C.T.: Numerical methods for nonlinear equations. Acta Numerica 27, 207-287 (2018). DOI 10.1017/S0962492917000113
[35] Krauskopf, B., Osinga, H.M., Galán-Vioque, J.: Numerical continuation methods for dynamical systems, vol. 2. Springer (2007)
[36] Kuznetsov, Y.A.: Elements of applied bifurcation theory, vol. 112. Springer Science & Business Media (2013)
[37] Lagaris, I.E., Likas, A., Fotiadis, D.I.: Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks 9(5), 987-1000 (1998)
[38] Mohsen, A.: A simple solution of the Bratu problem. Computers & Mathematics with Applications 67(1), 26-33 (2014)
[39] Olson, L.G., Georgiou, G.C., Schultz, W.W.: An efficient finite element method for treating singularities in Laplace's equation. Journal of Computational Physics 96(2), 391-410 (1991)
[40] Pinkus, A.: Approximation theory of the MLP model. Acta Numerica 8, 143-195 (1999)
[41] Quarteroni, A., Valli, A.: Numerical approximation of partial differential equations, vol. 23. Springer Science & Business Media (2008)
[42] Raissi, M., Perdikaris, P., Karniadakis, G.E.: Numerical Gaussian processes for time-dependent and nonlinear partial differential equations. SIAM Journal on Scientific Computing 40(1), A172-A198 (2018)
[43] Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, 686-707 (2019)
[44] Raja, M.A.Z., Samar, R., et al.: Neural network optimized with evolutionary computing technique for solving the 2-dimensional Bratu problem. Neural Computing and Applications 23(7), 2199-2210 (2013)
[45] Samaniego, E., Anitescu, C., Goswami, S., Nguyen-Thanh, V.M., Guo, H., Hamdia, K., Zhuang, X., Rabczuk, T.: An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Computer Methods in Applied Mechanics and Engineering 362, 112790 (2020)
[46] Schilder, F., Dankowicz, H.: Continuation core and toolboxes (COCO). SourceForge.net, project cocotools (2017)
[47] Syam, M.I.: The modified Broyden-variational method for solving nonlinear elliptic differential equations. Chaos, Solitons & Fractals 32(2), 392-404 (2007)
[48] Tang, J., Deng, C., Huang, G.B.: Extreme learning machine for multilayer perceptron. IEEE Transactions on Neural Networks and Learning Systems 27(4), 809-821 (2015)
[49] Tissera, M.D., McDonnell, M.D.: Deep extreme learning machines: supervised autoencoding architecture for classification. Neurocomputing 174, 42-49 (2016)
[50] Wang, Y., Cao, F., Yuan, Y.: A study on effectiveness of extreme learning machine. Neurocomputing 74(16), 2483-2490 (2011)
[51] Wei, Q., Jiang, Y., Chen, J.Z.: Machine-learning solver for modified diffusion equations. Physical Review E 98(5), 053304 (2018)
Sources of performance variability in deep learning-based polyp detection

T. N. Tran, T. J. Adler, A. Yamlahi, E. Christodoulou, P. Godau, A. Reinke, M. D. Tizabi, P. Sauer, T. Persicke, J. G. Albert, L. Maier-Hein

Affiliations: Div. Intelligent Medical Systems, DKFZ, Germany; Interdisciplinary Endoscopy Center (IEZ), University Hospital Heidelberg, Germany; Department of Gastroenterology, Hepatology and Endocrinology, Robert-Bosch Hospital (RBK), Germany; Clinic for General Internal Medicine, Gastroenterology, Hepatology, Pneumology and Infectiology, Klinikum Stuttgart, Germany

Abstract: Validation metrics are a key prerequisite for the reliable tracking of scientific progress and for deciding on the potential clinical translation of methods. While recent initiatives aim to develop comprehensive theoretical frameworks for understanding metric-related pitfalls in image analysis problems, there is a lack of experimental evidence on the concrete effects of common and rare pitfalls on specific applications. We address this gap in the literature in the context of colon cancer screening. Our contribution is twofold. Firstly, we present the winning solution of the Endoscopy Computer Vision Challenge (EndoCV) on colon cancer detection, conducted in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2022. Secondly, we demonstrate the sensitivity of commonly used metrics to a range of hyperparameters as well as the consequences of poor metric choices. Based on comprehensive validation studies performed with patient data from six clinical centers, we found all commonly applied object detection metrics to be subject to high inter-center variability. Furthermore, our results clearly demonstrate that the adaptation of standard hyperparameters used in the computer vision community does not generally lead to the clinically most plausible results. Finally, we present localization criteria that correspond well to clinical relevance. Our work could be a first step towards reconsidering common validation strategies in automatic colon cancer screening applications.

DOI: 10.1007/s11548-023-02936-9
arXiv: 2211.09708 (PDF: https://export.arxiv.org/pdf/2211.09708v1.pdf)
Keywords: Validation · Evaluation · Metrics · Object detection · Surgical data science · Variability
1 Introduction

Colorectal cancer is one of the most common cancer types, ranking second in females and third in males [1]. By detecting and subsequently resecting neoplastic polyps during screening colonoscopy, the risk of developing the disease can be reduced significantly. Research focuses on developing deep learning (DL) solutions for automated detection of polyps in colonoscopy videos [2-6]. However, to date, the metrics with which algorithms are validated receive far too little attention. These metrics are not only important for measuring scientific progress, but also for gauging a method's potential for clinical translation. While previous work has highlighted general metric pitfalls in the broader context of classification, segmentation and detection [7], we are not aware of any prior studies systematically analyzing common metrics in the context of polyp detection. Our underlying hypothesis was that reported performance values of polyp detection methods are largely misleading, as they are sensitive to many validation design choices, including (1) the choice of test set and (2) the chosen metric configurations (e.g. the threshold for the localization criterion).
Our contribution is twofold: Firstly, we present the winning solution of the Endoscopy Computer Vision Challenge (EndoCV) on colon cancer detection, conducted in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2022. Secondly, based on publicly available challenge data, we demonstrate the sensitivity of commonly used metrics to a range of hyperparameters as well as the consequences of poor metric choices.

2 Methods

Here, we present the winning method of the EndoCV challenge on colon cancer detection, conducted in conjunction with ISBI 2022 (Sec. 2.1), and revisit common detection metrics including their hyperparameters (Sec. 2.2).

2.1 Object detection algorithm

We base our study on a state-of-the-art detection method, namely the winning entry [8] of the EndoCV 2022 polyp detection challenge [4].

Method overview: The method is illustrated in Fig. 1. A heterogeneous ensemble of YOLOv5-based models was trained. To this end, we split the training data into subsets. To avoid data leakage, the split was performed along each sequence ID. Originally, we created four folds for stratified four-fold cross-validation, but the final models were trained on only two of the four folds due to training and inference time restrictions. Furthermore, we trained each model either with light augmentation on EndoCV data only, heavy augmentation on EndoCV data only, or light augmentation on EndoCV data and external data (see [8] for details). Overall, this led to six ensemble members. The individual member predictions were merged using the weighted boxes fusion algorithm. As we observed a tendency towards oversegmentation, we added a postprocessing step to shrink the bounding boxes.

Implementation details: The models were trained for 20 epochs using a stochastic gradient descent optimizer, a learning rate of 0.1, and a complete Intersection over Union (CIoU) loss.
The non-maximum suppression algorithm (NMS) was applied to each ensemble member individually with an Intersection over Union (IoU) threshold of 0.5. For the weighted boxes fusion algorithm hyperparameters, we chose an IoU threshold of 0.5, a skip box threshold of 0.02, and all models were weighted equally. During postprocessing, we shrank all bounding boxes with a confidence score higher than 0.4 by 2% of their size. We evaluated the ensemble a single time on our test data set.

Fig. 1: Winning submission of the Endoscopy Computer Vision Challenge (EndoCV) on colon cancer detection. An ensemble of six YOLOv5-based models, each trained with different data and/or augmentation strategies, predicts a set of bounding box candidates. These are merged using weighted boxes fusion and postprocessed to yield the final prediction.

2.2 Object detection metrics

Three metric-related design decisions are important when assessing the performance of object detection algorithms [9]:

(1) Localization criterion: The localization criterion determines whether a predicted object spatially corresponds to one of the reference objects and vice versa by measuring the spatial similarity between the prediction (represented by a bounding box, pixel mask, center point or similar) and the reference object. It defines whether the prediction hit/detected (true positive) or missed (false positive) the reference. Any reference object not detected by the algorithm is defined as a false negative. The localization criteria that were applied in this work comprise two groups, namely the point-based criteria and the overlap-based criteria (Fig. 2).

(2) Assignment strategy: As applying the localization criterion might lead to ambiguous matchings, such as two predictions being assigned to the same reference object, an assignment strategy needs to be chosen that determines how potential ambiguities are resolved.
As multiple polyps in the same image are rather rare, an assignment strategy is not as relevant as in other applications. With respect to the metric configuration, we therefore focus on the localization criterion and the classification metrics.

(3) Classification metric: Based on the choice of localization criterion and assignment strategy, standard classification metrics can be computed at object level [7]. The most popular multi-threshold metric in object detection is Average Precision (AP) (Fig. 3).

As a foundation of this work, we determined common metrics in object detection challenges, along with their respective localization criterion and classification metric (Tab. 1).

3 Experiments and Results

In this section, we investigate the sensitivity of popular classification metrics to the test set composition (Sec. 3.1) and the localization criterion (Sec. 3.2). We further assess the clinical value of commonly used metric configurations (Sec. 3.3).

3.1 Effect of test set

In the following, we quantitatively assess the performance variability resulting from the chosen test set, specifically from the target domain (i.e. the clinical validation center) and the distribution of polyp size.

Sensitivity to center: To show the variability of performance resulting from different test sets, we used data from six validation centers [11]. Fig. 4 shows the performance of our object detection method (Sec. 2.1) according to commonly used metrics. These exhibit high variability between centers. For example, the AP ranges from [0.38, 0.65], which is notable, given that the AP of the top three submissions for EndoCV 2022 ranged from [0.12, 0.33].

Sensitivity to polyp size: Using the polyp size definitions introduced by the EndoCV 2021 challenge [3], we further calculated the AP scores from all six validation centers, stratified by polyp size (Tab. 2). A high variability can be observed, indicating that algorithm performance is highly affected by the distribution of polyp sizes.
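As a rough illustration of how AP aggregates ranked predictions into a single score, the following minimal sketch computes the area under the interpolated precision-recall curve, assuming the TP/FP decisions have already been made by a localization criterion. It is a generic textbook-style implementation with invented example numbers, not the challenge's evaluation code.

```python
import numpy as np

def average_precision(scores, is_tp, n_reference):
    """Area under the (all-point interpolated) precision-recall curve.

    scores:      confidence score of each prediction
    is_tp:       True if the prediction matched a reference object
                 according to the chosen localization criterion
    n_reference: total number of reference objects (for recall)
    """
    order = np.argsort(scores)[::-1]                       # rank by confidence
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / n_reference
    precision = tp / (tp + fp)
    # Make precision monotonically decreasing (standard interpolation) ...
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # ... and integrate it over the recall steps.
    recall = np.concatenate([[0.0], recall])
    return float(np.sum((recall[1:] - recall[:-1]) * precision))

# Illustrative case: three predictions, two of which hit one of three
# reference polyps.
ap = average_precision(scores=[0.9, 0.8, 0.3],
                       is_tp=[True, False, True],
                       n_reference=3)
```

In practice this computation is repeated for every IoU threshold in the chosen range and the results are averaged, which is exactly where the sensitivity to the localization criterion enters.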
Figure caption (fragment): "... [12] (center). We provide additional information on the number of frames (n) and polyp prevalence (φ) per center (right). *SD: standard deviation"

To further evaluate how the IoU values relate to polyp size and polyp type, while simultaneously accounting for the hierarchical structure of the data set, we fit a linear mixed effects model (R version 4.1.3, package lme4). In this model, polyp size (small, medium, or large) and polyp type (flat or protruded) were fixed effects, while data center, patient identifier (ID), and image ID were random effects. The results suggest that there are strong effects of polyp type and polyp size on the IoU values. In particular, when the polyp is of a protruded as opposed to a flat type, the IoU values are on average higher by a difference of 0.08 (with the other predictors held constant). When the polyp is of a medium or small size compared to a large size, the IoU values are lower by a difference of 0.05 and 0.28, respectively (with the remaining predictors held constant).

3.2 Effect of metric configuration

In the case of polyp detection, the goal of high sensitivity (not missing a polyp) is an indispensable priority. We therefore assess the effect of design choices related to the localization criterion on the decision whether a prediction is determined to be a true or false positive. Figures 5 and 6 showcase the effect of the reference shape in point-based and overlap-based localization criteria, respectively, while Fig. 7 demonstrates the sensitivity of overlap-based criteria to different localization thresholds. In the following, we provide experimental evidence for the showcased phenomena.

Sensitivity of the AP to the specific choice of overlap-based localization criterion: In this experiment, we investigated the AP scores using the Box IoU, Mask IoU and Hull IoU criteria over a range of IoU thresholds [0.05:0.95]. The resulting curves are shown in Fig. 8a).
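The dependence of the TP/FP decision on the overlap-based localization criterion and its threshold can be sketched as follows. The boxes, masks and thresholds are invented for illustration and are not taken from the challenge data.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def mask_iou(a, b):
    """IoU of two boolean pixel masks (Mask IoU)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

# A prediction with moderate overlap (Box IoU ~ 0.21) flips between hit and
# miss depending on the chosen threshold:
pred, ref = (10, 10, 50, 50), (25, 25, 70, 70)
iou = box_iou(pred, ref)
decisions = {t: iou >= t for t in (0.1, 0.3, 0.5)}

# Mask IoU on two partially overlapping square masks (4 shared pixels,
# union of 28 pixels):
m1 = np.zeros((6, 6), dtype=bool); m1[0:4, 0:4] = True
m2 = np.zeros((6, 6), dtype=bool); m2[2:6, 2:6] = True
m_iou = mask_iou(m1, m2)
```

The same prediction is thus counted as a true positive at a threshold of 0.1 and as a false positive at 0.3 or 0.5, which is the effect driving the AP curves discussed above.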
We observe that the Mask IoU and Hull IoU based AP scores are very similar; conversely, using Box IoU yielded overall higher AP, even at lower thresholds.

Sensitivity of the AP to the IoU range: We investigated the AP scores, using Box IoU as a criterion, over different IoU threshold ranges, including the commonly used range of [0.5:0.95]. As shown in Fig. 8b), the AP scores depend strongly on the chosen threshold range.

Table 3: Point-based versus overlap-based localization criteria applied to the set of all six centers. Point-based criteria give rise to similar results, while the Box Intersection over Union (IoU) criterion consistently yields lower values.

Fig. 9: Agreement of common localization criteria with clinicians' ratings. Predictions rated as "not useful" by clinicians were rejected by all criteria without exception. However, especially overlap-based localization criteria yielded a high proportion of false negatives that clinicians would have classified as "useful". Almost perfect agreement was achieved by the metric Mask IoU > 0.

3.3 Reflection of domain interests

In the presence of many sources of variability depending on the metric configuration, we conducted an experiment to determine which configuration aligns most with the clinical goal. We presented colonoscopy images of over 300 patients with their predicted bounding boxes to three gastroenterologists, one with over five years and two with over ten years of experience, who rated the predicted boxes as (clinically) "useful" or "not useful". Each clinician was responsible for one third of the images and each image was rated only once. In order to assess the agreement of certain metric configurations with the clinician score, we plotted the number of predictions that met the criterion as a fraction of the predictions rated as "useful", as well as the number of predictions not meeting the criterion as a fraction of the predictions rated as "not useful".
We applied overlap-based and point-based criteria and highlighted the localization granularity that they focus on (rough outline or only position). The result can be seen as a bar plot in Fig. 9. All predictions clinically rated as "not useful" were rejected by all localization criteria. Criteria that focus only on position yielded a higher agreement with the "useful" score than those that localize based on overlap with a rough outline.

Discussion

To our knowledge, we were the first to systematically investigate the variability of polyp detection performance resulting from various validation design choices. The following key insights can be derived from our experiments:

(1) Performance results are highly sensitive to various design choices: Our experiments clearly demonstrate that various validation design choices have a substantial effect on the performance computed for object detection algorithms according to popular metrics. These range from the choice of test set to the specific metric configuration used. While the effect of using different classification metrics may be increasingly well understood [9], we believe that common metrics, such as AP, are often regarded as black boxes and that the effect of the various hyperparameters remains poorly understood. Our findings clearly suggest that hyperparameters, specifically the localization criterion and the corresponding threshold, should not be adopted indiscriminately from other work, but should be chosen carefully to match the domain need.

(2) Common metric configurations do not reflect the clinical need: According to a usefulness assessment of polyp predictions from over 300 patients by three clinicians from different hospitals, commonly used localization criteria that are popular in the computer vision community do not reflect the clinical domain interest when deciding whether a prediction should be assigned a true positive or false positive.
The community should therefore revisit the question of whether a good object detection method must necessarily yield a good outline of a polyp. Restricting the requirement to localizing a polyp via its position (reflected by the criterion IoU > 0, for example) might better approximate the clinical need and at the same time overcome problems resulting from suboptimal IoU thresholds.

(3) Common hyperparameters may be too restrictive: Our visual examples (Fig. 7) demonstrate that even fairly well-localized polyps can feature an IoU below the commonly used threshold of 0.5, resulting in them being counted as misses even though a clinician might find the prediction useful. The community may therefore want to reconsider commonly used threshold ranges and use a broader range (see Fig. 8b)).

(4) Comparison of performance across datasets can be largely misleading: Our work finds that detection performance depends crucially on polyp size. Hence, even if the prevalences of polyps across centers are similar, comparison of algorithm results can be largely misleading in the case of different polyp size distributions.

The closest work to ours was recently presented by Ismail et al. [13] outside the field of deep learning. They provide anecdotal evidence of the non-comparability of confusion matrices between different methods, but do not analyze common multi-threshold metrics such as AP or the popular localization criteria that serve as the basis for popular classification metrics. Other related work focused on providing benchmarking data sets [2] or showing limitations of metrics for clinical use cases outside the field of polyp detection [7, 14, 15]. A limitation of our study can be seen in the fact that we only used one object detection model. As a consequence, we are restricted to bounding boxes as predicted instances. On the other hand, the applied model was the winner of a very recent polyp detection challenge and can therefore be regarded as representative of the state of the art.
Furthermore, almost all common object detection algorithms are based on predicting bounding boxes. Another limitation could be seen in the fact that we reported our findings only on a single data set [11]. However, this data set comprises images from six centers and can therefore be seen as sufficiently representative for the scope of our research question. Finally, there are several other factors related to performance assessment that we did not prioritize in this work. These include the assignment strategy, the prevalence, as well as the confidence threshold in the case of counting metrics. Future work could hence explore the impact of these factors. In conclusion, our study is the first to systematically demonstrate the sensitivity of commonly used performance metrics in deep learning-based colon cancer screening to a range of validation design choices. In showing clear evidence for the disparity between commonly used metric configurations and clinical needs, we hope to raise awareness for the importance of adapting validation in machine learning to clinical relevance in general, and spark the careful reconsideration of common validation strategies in automatic cancer screening applications in particular.

Declarations

Funding: This project was supported by a Twinning Grant of the German Cancer Research Center (DKFZ) and the Robert Bosch Center for Tumor Diseases (RBCT).
Competing interests: The authors have no relevant financial or non-financial interests to disclose.
Ethics approval: This work was conducted using public datasets of human subject data made available by [3].
Consent to participate: Not applicable.
Consent for publication: Not applicable.
Availability of data and materials: Not applicable.
Code availability: Not applicable.
Authors' contributions: All authors contributed to and commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Fig. 2 Localization criteria can be point-based or overlap-based, depending on whether the user is mainly interested in the position or in the rough outline of an object. Point in mask returns a true positive (TP) if the center point of the predicted bounding box lies within the respective reference mask. The reference can be the segmentation mask, convex hull or bounding box. The center distance criterion determines a TP if the distance d between prediction and reference centers is within a range τ. For overlap-based criteria, the result is a TP if the overlap lies above a certain threshold. Depending on whether the Intersection over Union (IoU) is computed for a reference mask or an approximating bounding box, we refer to it as Mask or Box IoU.

Fig. 3 Pictorial representation of the Average Precision (AP) metric.

Fig. 4 Performance variability resulting from the chosen validation center. All commonly used classification metrics (cf. Tab. 1) show a substantial sensitivity to the center. The dot-and-box plots contain aggregated values per center.

Fig. 5 Effect of the reference shape in point-based localization criteria (a) on the confusion matrix (CM) (b). In the case of non-convex polyps, Mask IoU leads to substantially more false positives.

Fig. 6 Effect of the reference shape (here: reference mask or its bounding box or convex hull) in boundary-based localization criteria. For two different (blue) predictions (a) and (b), the Intersection over Union (IoU) results are shown. These vary substantially in the case of the inferior prediction (b).

Fig. 7 Effect of the Intersection over Union (IoU) threshold on the confusion matrix for three different overlap-based localization criteria. The same predictions produce substantially different confusion matrices for the commonly used thresholds 0.5 and 0.75.

Fig. 8 (a) Effect of different localization criteria on the most common object detection metric, Average Precision (AP).
Three common overlap-based criteria using different references (box, mask and hull) are plotted as a function of the Intersection over Union (IoU) cutoff threshold in the range [0.05:0.95]. Box IoU scores are higher across all thresholds, while Mask IoU and Hull IoU do not differ substantially. (b) Average Precision (AP) with Box Intersection over Union (IoU) threshold for three different ranges of IoU thresholds. Note that the range [0.5:0.95] (orange) is the most common one in the computer vision community. Note that the corresponding AP for a point in mask criterion would be 0.73.

IoU vs. Point in Mask: Considering the clinical goal of prioritizing the localization of polyps over their boundaries, we compared the values of the aggregated metrics Sensitivity, Positive Predictive Value (PPV), F1-Score, F2-Score, and Average Precision using point-based localization criteria to the values obtained using Box IoU. The result is shown in Tab. 3. Point-inside-reference criteria yield higher scores across all metrics compared to Box IoU over most IoU thresholds. This especially holds true for detection Sensitivity.

Table 2 Average Precision (AP) stratified by polyp size. The results are shown for a fixed Intersection over Union (IoU) threshold of 0.5 (left) as well as for a range of thresholds following the COCO benchmark evaluation standard.

References

[1] Haggar, F., et al.: Colorectal cancer epidemiology: incidence, mortality, survival, and risk factors. Clin Colon Rect Surg (2009). https://doi.org/10.1055/s-0029-1242458
[2] Fitting, D., et al.: A video based benchmark data set (ENDOTEST) to evaluate computer-aided polyp detection systems. Scand J Gastroentero (2022). https://doi.org/10.1080/00365521.2022.2085059
[3] Ali, S., et al.: Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge. arXiv (2022). https://doi.org/10.48550/arXiv.2202.12031
[4] Ali, S., et al.: Endoscopic computer vision challenges 2.0 (2022). https://endocv2022.grand-challenge.org/ Accessed 2022-11-14
[5] Bernal, J., et al.: Gastrointestinal Image ANAlysis (GIANA) (2021). https://giana.grand-challenge.org/ Accessed 2022-11-15
[6] Bernal, J., et al.: Polyp Detection in Colonoscopy Videos. Computer-Aided Analysis of Gastrointestinal Videos, pp. 163-169 (2021)
[7] Reinke, A., et al.: Common Limitations of Image Processing Metrics: A Picture Story. arXiv (2021). https://doi.org/10.48550/arXiv.2104.05642
[8] Yamlahi, A., et al.: Heterogeneous model ensemble for polyp detection and tracking in colonoscopy. EndoCV@ISBI (2022)
[9] Maier-Hein, L., et al.: Metrics reloaded: Pitfalls and recommendations for image analysis validation. arXiv (2022). https://doi.org/10.48550/ARXIV.2206.01653
[10] Bernal, J., et al.: Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge. IEEE T Med Imaging (2017). https://doi.org/10.1109/TMI.2017.2664042
[11] Ali, S., et al.: PolypGen: A multi-center polyp detection and segmentation dataset for generalisability assessment. arXiv (2021). https://doi.org/10.48550/arXiv.2106.04463
[12] Lin, T., et al.: Microsoft COCO: Common objects in context. European Conference on Computer Vision, pp. 740-755 (2014)
[13] Ismail, R., et al.: On Metrics Used in Colonoscopy Image Processing for Detection of Colorectal Polyps. New Approaches for Multidimensional Signal Processing, pp. 137-151 (2021)
[14] Kofler, F., et al.: Are we using appropriate segmentation metrics? Identifying correlates of human expert perception for CNN training beyond rolling the DICE coefficient. arXiv (2021). https://doi.org/10.48550/arXiv.2103.06205
[15] Gooding, M., et al.: Comparative evaluation of autocontouring in clinical practice: A practical method using the Turing test. Med Phys (2018)
[]
[ "Mechanical Theory of Nonequilibrium Coexistence and Motility-Induced Phase Separation The Mechanics of Nonequilibrium Coexistence", "Mechanical Theory of Nonequilibrium Coexistence and Motility-Induced Phase Separation The Mechanics of Nonequilibrium Coexistence" ]
[ "Ahmad K Omar [email protected]. \nDepartment of Materials Science and Engineering\nUniversity of California\n94720BerkeleyCA\n\nMaterials Sciences Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA\n", "Hyeongjoo Row \nDivision of Chemistry and Chemical Engineering\nCalifornia Institute of Technology\n91125PasadenaCA\n", "Stewart A Mallory \nDepartment of Chemistry\nThe Pennsylvania State University\n16802University ParkPA\n", "John F Brady \nDivision of Chemistry and Chemical Engineering\nCalifornia Institute of Technology\n91125PasadenaCA\n" ]
[ "Department of Materials Science and Engineering\nUniversity of California\n94720BerkeleyCA", "Materials Sciences Division\nLawrence Berkeley National Laboratory\n94720BerkeleyCA", "Division of Chemistry and Chemical Engineering\nCalifornia Institute of Technology\n91125PasadenaCA", "Department of Chemistry\nThe Pennsylvania State University\n16802University ParkPA", "Division of Chemistry and Chemical Engineering\nCalifornia Institute of Technology\n91125PasadenaCA" ]
[]
Nonequilibrium phase transitions are routinely observed in both natural and synthetic systems. The ubiquity of these transitions highlights the conspicuous absence of a general theory of phase coexistence that is broadly applicable to both nonequilibrium and equilibrium systems. Here, we present a general mechanical theory for phase separation rooted in ideas explored nearly a half-century ago in the study of inhomogeneous fluids. The core idea is that the mechanical forces within the interface separating two coexisting phases uniquely determine coexistence criteria, regardless of whether a system is in equilibrium or not. We demonstrate the power and utility of this theory by applying it to active Brownian particles, predicting a quantitative phase diagram for motility-induced phase separation in both two and three dimensions. This formulation additionally allows for the prediction of novel interfacial phenomena, such as an increasing interface width while moving deeper into the two-phase region, a uniquely nonequilibrium effect confirmed by computer simulations. The self-consistent determination of bulk phase behavior and interfacial phenomena offered by this mechanical perspective provide a concrete path forward towards a general theory for nonequilibrium phase transitions.
10.1073/pnas.2219900120
[ "https://export.arxiv.org/pdf/2211.12673v1.pdf" ]
253,801,766
2211.12673
8b349cfbfa21ced36d8bf14189be2296667ddc98
Mechanical Theory of Nonequilibrium Coexistence and Motility-Induced Phase Separation

Ahmad K. Omar (Department of Materials Science and Engineering, University of California, Berkeley, CA 94720; Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720), Hyeongjoo Row (Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, CA 91125), Stewart A. Mallory (Department of Chemistry, The Pennsylvania State University, University Park, PA 16802), John F. Brady (Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, CA 91125)

Author contributions: A.K.O., H.R., S.A.M., and J.F.B. designed research, performed research, analyzed data, and wrote the paper. The authors declare no conflict of interest. 1 A.K.O., H.R., and S.A.M. contributed equally to this work. 2 To whom correspondence should be addressed ([email protected]).

Abstract: Nonequilibrium phase transitions are routinely observed in both natural and synthetic systems. The ubiquity of these transitions highlights the conspicuous absence of a general theory of phase coexistence that is broadly applicable to both nonequilibrium and equilibrium systems. Here, we present a general mechanical theory for phase separation rooted in ideas explored nearly a half-century ago in the study of inhomogeneous fluids. The core idea is that the mechanical forces within the interface separating two coexisting phases uniquely determine coexistence criteria, regardless of whether a system is in equilibrium or not. We demonstrate the power and utility of this theory by applying it to active Brownian particles, predicting a quantitative phase diagram for motility-induced phase separation in both two and three dimensions.
This formulation additionally allows for the prediction of novel interfacial phenomena, such as an increasing interface width while moving deeper into the two-phase region, a uniquely nonequilibrium effect confirmed by computer simulations. The self-consistent determination of bulk phase behavior and interfacial phenomena offered by this mechanical perspective provides a concrete path forward towards a general theory for nonequilibrium phase transitions.

The diversity of phase behavior and pattern formation found in far-from-equilibrium systems has brought renewed focus to the theory of nonequilibrium phase transitions. Intracellular phase separation resulting in membraneless organelles (1, 2) and pattern formation on cell surfaces (3) are just a few instances in which nonequilibrium phase transitions are implicated in biological function. Colloids (4) and polymers (5-8) subject to boundary-driven flow can experience shear-induced phase transitions and patterns that profoundly alter their transport properties. Microscopic self-driven particles, such as catalytic Janus particles, motile bacteria, or field-directed synthetic colloids, exhibit phase transitions eerily similar to those of equilibrium fluids despite the absence of traditional equilibrium driving forces (9-14). A general predictive framework for constructing phase diagrams for these driven systems is notably absent. For equilibrium systems, the formulation of a theory for phase coexistence was among the earliest accomplishments in thermodynamics. Maxwell (15), building on the work of van der Waals, derived what are now familiar criteria for phase equilibria for a one-component system: equality of temperature, chemical potential, and pressure. These criteria are rooted in the fundamental equilibrium requirements that free energy be extensive and convex for any unconstrained degrees of freedom within a system.
The lack of such a variational principle for nonequilibrium systems has limited the theoretical description of out-of-equilibrium phase transitions. The absence of a general theory for nonequilibrium coexistence has been particularly evident in the field of active matter. The phenomenon of motility-induced phase separation (MIPS), the occurrence of liquid-gas phase separation among repulsive active Brownian particles (ABPs), has motivated a variety of perspectives (16-26) in pursuit of a theory for active coexistence. These perspectives include kinetic models (27), continuum and generalized Cahn-Hilliard approaches (16, 18, 28), large deviation theory (29, 30), and power functional theory (24, 25). Some of these approaches appeal to equilibrium notions such as free energy and chemical potential (19), concepts which lack a rigorous basis for active systems. Without a first-principles nonequilibrium coexistence theory, one cannot compare or assess the various perspectives. Despite significant progress, a closed-form theory for the coexistence criteria of MIPS, which makes no appeal to equilibrium ideas, remains an outstanding challenge in the field. Mechanics is a natural choice for describing the behavior of both equilibrium and nonequilibrium systems, as it is agnostic to the underlying distribution of microstates. In this Article, we construct an entirely mechanical description of liquid-gas coexistence, relying only on notions such as forces and stresses. This formulation is an extension of the mechanical perspective developed decades ago to describe coexistence and interfacial phenomena for equilibrium systems (31-33).
Significance Statement: Phase separation, the coexistence between distinct macroscopic phases (such as oil coexisting with water), is ubiquitous in everyday life and motivated the development of the equilibrium theory of coexistence by Maxwell, van der Waals, and Gibbs. However, phase separation is increasingly observed in both synthetic and living nonequilibrium systems, where thermodynamic principles are strictly inapplicable. Here, we develop a mechanical description of phase separation, offering a route for constructing phase diagrams without presuming equilibrium Boltzmann statistics. We highlight the utility of our approach by developing a first-principles theory for motility-induced phase separation and the uniquely nonequilibrium interfacial phenomena that accompany this transition.

We highlight the utility of this framework by developing a theory for the coexistence criteria of MIPS and comparing our theory's predictions to results from computer simulation. Our formulation further allows for the prediction of novel nonequilibrium interfacial behavior, such as a nonmonotonic interfacial width, as the system is taken deeper into the coexistence region.

The Mechanics of Nonequilibrium Coexistence

We briefly review the thermodynamics of phase separation for a one-component system undergoing a liquid-gas phase transition. The order parameter distinguishing the liquid and gas phases is the number density ρ ≡ N/V, where N and V are the number of particles and the volume, respectively. For simple substances at a uniform temperature T below a critical temperature T_c, the mean-field Helmholtz free energy F(N, V, T) becomes concave over a range of densities, in violation of thermodynamic stability. The system resolves this instability by separating into coexisting macroscopic domains of liquid and gas with densities ρ^liq and ρ^gas, respectively.
The free energy of the phase-separated system (neglecting interfacial free energy) is now V^liq f(ρ^liq, T) + V^gas f(ρ^gas, T), where we have defined the free energy density f(ρ, T) ≡ F(N, V, T)/V. The volumes occupied by the liquid (V^liq) and gas (V^gas) phases sum to the total system volume V. We now obtain the coexistence criteria by minimizing the total free energy with respect to ρ^liq and ρ^gas subject to the conservation of particle number (i.e., V^liq ρ^liq + V^gas ρ^gas = Vρ). This results in the familiar coexistence criteria:

$$\mu(\rho^{\rm liq}, T) = \mu(\rho^{\rm gas}, T) = \mu^{\rm coexist}(T), \qquad p(\rho^{\rm liq}, T) = p(\rho^{\rm gas}, T) = p^{\rm coexist}(T), \tag{1a}$$

where μ(ρ, T) = ∂f(ρ, T)/∂ρ is the chemical potential, p(ρ, T) = −f(ρ, T) + ρμ(ρ, T) is the pressure, and μ^coexist(T) and p^coexist(T) are the coexistence values of the chemical potential and pressure, respectively, at the temperature of interest. It is straightforward to show that Eq. (1a) can be equivalently expressed as:

$$\mu(\rho^{\rm liq}) = \mu(\rho^{\rm gas}) = \mu^{\rm coexist}, \qquad \int_{\rho^{\rm gas}}^{\rho^{\rm liq}} \left[ \mu(\rho) - \mu^{\rm coexist} \right] d\rho = 0, \tag{1b}$$

or similarly:

$$p(\upsilon^{\rm liq}) = p(\upsilon^{\rm gas}) = p^{\rm coexist}, \qquad \int_{\upsilon^{\rm gas}}^{\upsilon^{\rm liq}} \left[ p(\upsilon) - p^{\rm coexist} \right] d\upsilon = 0, \tag{1c}$$

where we have defined the inverse density υ ≡ 1/ρ and have dropped the dependence on T in Eqs. (1b) and (1c) for convenience. The integral expressions in Eqs. (1b) and (1c) are often referred to as equal-area or Maxwell constructions (15) in the μ-ρ and p-υ planes, respectively. These expressions are equivalent to Eq. (1a) and can be used to compute the coexistence curve or binodal as a function of T. The spinodal boundaries enclose the region of the phase diagram in which thermodynamic stability is violated, i.e., (∂²f/∂ρ²)_T < 0 or, equivalently, (∂p/∂ρ)_T < 0 or (∂μ/∂ρ)_T < 0. These boundaries can thus be determined by finding the densities at which (∂p/∂ρ)_T = 0 or (∂μ/∂ρ)_T = 0 for a specified temperature.

Interestingly, the coexistence criteria presented in Eq. (1c) contain only the mechanical equation of state, a quantity which is readily defined for nonequilibrium systems (unlike, for example, the chemical potential). In fact, Eq. (1c) has been used in previous studies (19, 34) to obtain the phase diagram of active systems. However, its validity for nonequilibrium systems is questionable, as its origins are clearly rooted in a variational principle that only holds in equilibrium. We are now poised to construct a theory of coexistence based purely on mechanics. As previously noted, the order parameter for liquid-gas phase separation is the density. The evolution equation for the order parameter is therefore simply the continuity equation:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j}_\rho = 0, \tag{2}$$

where we now consider a density field ρ(x; t) that is continuous in spatial position x (with ∇ = ∂/∂x), and j_ρ(x; t) is the number density flux. A constitutive equation for the number density flux follows directly from linear momentum conservation. This connection can be appreciated by noting that j_ρ(x; t) ≡ ρ(x; t)u(x; t) (where u(x; t) is the number-averaged velocity of the particles) and is therefore proportional to the momentum density by a factor of the particle mass m. Expressing linear momentum conservation in terms of j_ρ (rather than the more traditional u):

$$\frac{\partial (m\mathbf{j}_\rho)}{\partial t} + \nabla \cdot \left( m\mathbf{j}_\rho \mathbf{j}_\rho / \rho \right) = \nabla \cdot \boldsymbol{\sigma} + \mathbf{b}, \tag{3}$$

where σ(x; t) is the stress tensor and b(x; t) is the body force acting on the particles. In simple systems, Eqs. (2) and (3) may constitute a closed set of coupled equations describing the temporal and spatial evolution of the density profile. However, the precise form of the stresses and body forces may depend on other fields, which will require additional conservation equations to furnish a closed set of equations. As we are interested in scenarios in which phase separation reaches a stationary state of coexistence, the continuity equation reduces to ∇ · j_ρ = 0 and linear momentum conservation becomes ∇ · (m j_ρ j_ρ / ρ) = ∇ · σ + b.
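The equal-area construction of Eq. (1c) reduces the binodal to a root-finding problem on the mechanical equation of state alone, and it is easy to carry out numerically. The sketch below is our own illustration, not the paper's: it applies the construction to a textbook van der Waals fluid in reduced units (critical point at T = v = p = 1), bisecting on the trial coexistence pressure until the tie line encloses equal areas.

```python
import numpy as np

def p_vdw(v, T):
    """van der Waals pressure in reduced units (critical point at T = v = p = 1)."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v ** 2

def p_antiderivative(v, T):
    """Antiderivative of p_vdw with respect to v, used for the equal-area integral."""
    return (8.0 * T / 3.0) * np.log(3.0 * v - 1.0) + 3.0 / v

def real_roots(coeffs):
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

def coexistence(T, tol=1e-12):
    """Equal-area (Maxwell) construction at reduced temperature T < 1:
    returns (p_coexist, v_liq, v_gas) satisfying both conditions of Eq. (1c)."""
    # bracket the coexistence pressure by the extrema of the van der Waals loop:
    # dp/dv = 0  <=>  4 T v^3 - 9 v^2 + 6 v - 1 = 0
    s = real_roots([4.0 * T, -9.0, 6.0, -1.0])
    s = s[s > 1.0 / 3.0]                      # physically allowed volumes only
    p_lo = max(p_vdw(s[0], T), 1e-12)         # pressure at the loop minimum
    p_hi = p_vdw(s[-1], T)                    # pressure at the loop maximum

    def equal_area(p):
        # p_vdw(v) = p  <=>  3 p v^3 - (p + 8T) v^2 + 9 v - 3 = 0
        v = real_roots([3.0 * p, -(p + 8.0 * T), 9.0, -3.0])
        v_l, v_g = v[0], v[-1]
        area = (p_antiderivative(v_g, T) - p_antiderivative(v_l, T)
                - p * (v_g - v_l))
        return area, v_l, v_g

    # the signed area decreases monotonically with p, so bisect on it
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        area, _, _ = equal_area(p_mid)
        if area > 0.0:
            p_lo = p_mid
        else:
            p_hi = p_mid
    p_c = 0.5 * (p_lo + p_hi)
    _, v_liq, v_gas = equal_area(p_c)
    return p_c, v_liq, v_gas

p_c, v_liq, v_gas = coexistence(T=0.9)
```

At T = 0.9 this yields a coexistence pressure of roughly 0.65 in reduced units with v_liq < 1 < v_gas, and the same bisection-on-the-tie-line strategy carries over to any nonmonotonic equation of state.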
While j_ρ = 0 for systems in equilibrium, nonequilibrium steady states may admit nonzero fluxes.* However, a phase-separated system with a planar interface will satisfy j_ρ = 0 due to the quasi-1d geometry and the no-flux boundary condition. We restrict our discussion to macroscopic phase separation. Therefore, both equilibrium and nonequilibrium systems will adopt a density flux-free state, reducing the linear momentum conservation to a static mechanical force balance:

$$0 = \nabla \cdot \boldsymbol{\sigma} + \mathbf{b}. \tag{4}$$

Equation (4) is the mechanical condition for liquid-gas coexistence and can be used to solve for ρ(x) with constitutive equations for σ and b. The nature of these constitutive equations will also determine whether other conservation equations are required. Let us now demonstrate that the equilibrium coexistence criteria are recovered from this mechanical perspective.

*Phase-separated nonequilibrium systems with interfaces of finite curvature (i.e., if the domain of one of the coexisting phases is of non-macroscopic spatial extent) may exhibit non-zero density fluxes (35).

Fig. 1 Force balance on the particles within a control volume at steady state. Application of an external force field F^ext (top) to a passive system with conservative reciprocal interaction forces F^C, and a system with no external forces but with active forces F^A in addition to F^C (bottom).

In principle, for any system, whether it is in or out of equilibrium, microscopic expressions for Eqs. (2) and (3) can be obtained precisely through the N-body distribution function and its evolution equation. It will later be necessary to follow such an approach to obtain stresses and body forces when considering the phase coexistence of active particles. However, in equilibrium, the stresses and body forces can also be obtained variationally through a free energy functional. Consider the following free energy functional:

$$\mathcal{F}[\rho] = \int_V \left[ f(\rho) + \rho U^{\rm ext} + \frac{\kappa}{2} |\nabla \rho|^2 \right] d\mathbf{x}, \tag{5}$$

where f(ρ) is the mean-field free energy density, κ(ρ) is a (positive) coefficient such that the square-gradient term penalizes density gradients (36), and U^ext(x) represents all externally applied potential fields. Minimizing F[ρ] with respect to ρ(x) (36-38) results in Eq. (4), allowing us to identify the reversible stress and body forces as:

$$\boldsymbol{\sigma} = -p\mathbf{I} + \left[ \frac{1}{2} \frac{\partial (\kappa\rho)}{\partial \rho} |\nabla \rho|^2 + \kappa\rho \nabla^2 \rho \right] \mathbf{I} - \kappa \nabla\rho \nabla\rho, \tag{6a}$$

$$\mathbf{b} = -\rho \nabla U^{\rm ext}, \tag{6b}$$

where the pressure is again p(ρ) = −f(ρ) + ρ∂f/∂ρ and I is the second-rank identity tensor. Note that the gradient terms appearing in Eq. (6a) are the so-called Korteweg stresses (39). The equilibrium coexistence criteria can now be obtained from Eqs. (4) and (6). Without loss of generality, we take the z-direction to be normal to the planar interface and neglect any external potential (i.e., b = 0). In this case, the static force balance [Eq. (4)] reduces to dσ_zz/dz = 0, where we have exploited the spatial invariance tangential to the interface. The stress is therefore constant across the interface, resulting in:

$$-\sigma_{zz} = p - \frac{1}{2}\left( \rho \frac{\partial \kappa}{\partial \rho} - \kappa \right) \left( \frac{d\rho}{dz} \right)^2 - \kappa\rho \frac{d^2\rho}{dz^2} = C, \tag{7}$$

where C is a to-be-determined constant. The complete density profile ρ(z) can now be determined by solving Eq. (7) with the appropriate boundary conditions. For a macroscopically phase-separated system, the density profile approaches constant values ρ^liq and ρ^gas as z → ±∞. In these regions of constant density, the gradient terms in Eq. (7) vanish and the pressures in the two phases are equal: p(ρ^liq) = p(ρ^gas) = C. We now recognize the constant C as the coexistence pressure p^coexist and recover the first of the two expected coexistence criteria in Eq. (1c). Before proceeding to the second coexistence criterion, we rearrange Eq. (7):

$$p(\rho) - p^{\rm coexist} = a(\rho) \frac{d^2\rho}{dz^2} + b(\rho) \left( \frac{d\rho}{dz} \right)^2, \tag{8}$$

where, by comparison with Eq. (7), a(ρ) = κρ and b(ρ) = (ρ ∂κ/∂ρ − κ)/2. The second criterion can then be obtained by multiplying Eq. (8) by an integrating factor E(ρ)dρ/dz, where

$$E(\rho) = \frac{1}{a(\rho)} \exp\left[ \int \frac{2b(\rho)}{a(\rho)} d\rho \right], \tag{9}$$
This operation eliminates the gradient terms, resulting in a coexistence criteria purely in terms of equations-of-state: ρ liq ρ gas p(ρ) − p coexist E(ρ) dρ = 0 .[10] Aifantis and Serrin further established that Eq. (10) has a unique coexistence solution, provided a(ρ) > 0 and p(ρ) is nonmonotonic in ρ (32). Equation (10) is no longer an equal-area construction, but such a form can be readily obtained through a simple change of variables (21,22) E(ρ) ≡ ∂E/∂ρ resulting in: E liq E gas p(E) − p coexist dE = 0 . [11] Equation (11) now has the form of an an equal-area construction in the p − E plane. For the equilibrium system of interest, one finds E(ρ) = 1/ρ 2 = υ 2 and E(ρ) = υ (multiplicative and additive constants in E(ρ) and E(ρ) do not affect the coexistence criteria), recovering the expected equilibrium coexistence criteria [Eq. (1c)] from our mechanical perspective. We further note that in order to define the spinodal without invoking thermodynamic stability, a linear stability analysis on Eqs. (2) and (3) [using the reversible stress Eq. (6a)] can be performed to determine if small density perturbations to a homogeneous base state will grow in time. In doing so (see Supporting Information (SI) for details), we recover the mechanical spinodal criteria (∂p/∂ρ) < 0. This completes our discussion of the mechanics of equilibrium coexistence and stability. For a nonequilibrium system, an additional complexity arises: the possibility of spontaneously generated internal body forces. The absence of applied external fields does not exclude the possibility of body forces for nonequilibrium systems. A general nonequilibrium coexistence criteria for liquid-gas phase separation must therefore account for these internal body forces. To understand this physically, let us consider a steady state force balance on a collection of particles in a control volume [see Fig. 1]. 
Application of an external force field on the particles results in a net volumetric force acting on the particles: a body force. By Newton's third law, interparticle interactions do not give rise to a net volumetric force within the volume interior. It is only at the surface of the control volume that interparticle forces (exerted by particles outside the volume on the interior particles) are nonvanishing, resulting in stresses. The polarization of active forces (see bottom of Fig. 1) results in a net active force within the volume, behaving similarly to an external force field (40). At steady state, the self-generated body force density due to nonequilibrium forces must balance a stress difference across the volume. In this case, the steady-state one-dimensional (1d) mechanical balance is $d\sigma_{zz}/dz + b_z = 0$. For a one-dimensional system, the body force can always be expressed as $b_z = d\sigma^b/dz$ and the mechanical balance can now be expressed as $d(\sigma_{zz} + \sigma^b)/dz = 0$. This newly defined effective stress $\Sigma \equiv \sigma_{zz} + \sigma^b$ is, just as before, spatially constant. Expressing Σ as a second-order gradient expansion in density:

$$-\Sigma = P(\rho) - a(\rho)\frac{d^2\rho}{dz^2} - b(\rho)\left(\frac{d\rho}{dz}\right)^2 = C, \quad [12]$$

where P(ρ) is a dynamic or effective pressure. We again recognize that, as the gradients must vanish in the bulk phases, $P(\rho_{\rm liq}) = P(\rho_{\rm gas}) = C$, where we identify the constant as the coexistence effective pressure $P^{\rm coexist}$. The second coexistence criterion can be found analogously as before through the use of an integrating factor $E(\rho)\,d\rho/dz$, where $E(\rho)$ is defined in Eq. (9). The two coexistence criteria are then:

$$P(\mathcal{E}_{\rm liq}) = P(\mathcal{E}_{\rm gas}) = P^{\rm coexist}, \quad [13a]$$

$$\int_{\mathcal{E}_{\rm gas}}^{\mathcal{E}_{\rm liq}} \left[P(\mathcal{E}) - P^{\rm coexist}\right] d\mathcal{E} = 0, \quad [13b]$$

where

$$\frac{\partial\mathcal{E}}{\partial\rho} = \frac{1}{a(\rho)}\exp\left(\int \frac{2b(\rho)}{a(\rho)}\,d\rho\right). \quad [13c]$$

Equation (13) is the general nonequilibrium coexistence criteria for liquid-gas phase separation.
Application of these criteria to determine the phase diagram will require expressing the dynamic pressure P(ρ) as a second-order density-gradient expansion in order to identify the equal-area construction variable $\mathcal{E}(\rho)$. Furthermore, provided that a timescale exists such that this dynamic pressure can also be defined for time-dependent states, the spinodal criterion is now $(\partial P/\partial\rho) < 0$, as shown in the SI. We now proceed to obtain the dynamic pressure of active Brownian particles and apply this nonequilibrium coexistence criteria.

The Mechanical Theory of MIPS

For a theoretical prediction of the phase diagram of active Brownian particles, our mechanical perspective requires expressions for the dynamic pressure P(ρ) and the coefficients of the leading gradient terms, a(ρ) and b(ρ). These quantities are needed to calculate the appropriate integration variable $\mathcal{E}(\rho)$ such that Eq. (13) is satisfied. To derive these quantities, we require expressions for the stress σ and body forces b without invoking a variational principle. These constitutive equations can be obtained systematically, beginning with the equations-of-motion describing the motion of the microscopic degrees of freedom. We consider active Brownian particles with overdamped translational and rotational equations-of-motion describing the position $\mathbf{r}_\alpha$ and orientation $\mathbf{q}_\alpha$ ($|\mathbf{q}_\alpha| = 1$) of particle α as:

$$\dot{\mathbf{r}}_\alpha = U_0\mathbf{q}_\alpha + \frac{1}{\zeta}\mathbf{F}^C_\alpha, \quad [14a]$$

$$\dot{\mathbf{q}}_\alpha = \boldsymbol{\Omega}^R_\alpha \times \mathbf{q}_\alpha, \quad [14b]$$

where ζ is the translational drag coefficient and $\mathbf{F}^C_\alpha$ is the interparticle force on particle α. The orientation of a particle evolves under the influence of a stochastic angular velocity $\boldsymbol{\Omega}^R_\alpha$, which follows the usual white-noise statistics with a mean of $\langle\boldsymbol{\Omega}^R_\alpha(t)\rangle = 0$ and a variance of $\langle\boldsymbol{\Omega}^R_\alpha(t)\boldsymbol{\Omega}^R_\beta(t')\rangle = (2/\tau_R)\,\delta_{\alpha\beta}\delta(t - t')\mathbf{I}$, where $\tau_R$ is the reorientation time and $\delta_{\alpha\beta}$ is the Kronecker delta.
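As a concrete (and purely illustrative) realization of Eq. (14), a minimal Euler-Maruyama integrator for a single non-interacting ABP in 2d could look like the following; in 2d the orientation reduces to $\mathbf{q} = (\cos\theta, \sin\theta)$ and the rotational noise to a scalar angular diffusion with diffusivity taken as $1/\tau_R$. All names are our own, and no interparticle forces are included.

```python
import math
import random

def simulate_abp_2d(n_steps, dt, U0, tau_R, seed=1):
    # Euler-Maruyama integration of an overdamped 2d active Brownian
    # particle with no interparticle forces:
    #   dx/dt = U0 cos(theta),  dy/dt = U0 sin(theta),
    #   dtheta = sqrt(2 dt / tau_R) * N(0, 1)
    rng = random.Random(seed)
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        x += U0 * math.cos(theta) * dt
        y += U0 * math.sin(theta) * dt
        theta += math.sqrt(2.0 * dt / tau_R) * rng.gauss(0.0, 1.0)
    return x, y
```

In the persistent limit $\tau_R \to \infty$ the trajectory is ballistic with speed $U_0$; at times long compared to $\tau_R$ it crosses over to diffusive motion with an effective diffusivity set by the run length $\ell_0 = U_0\tau_R$.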
We aim to describe the strongly active (athermal) limit of hard active disks and spheres where the phase diagram for these systems are fully described by two geometric parameters: the volume (or area) fraction φ ≡ vpρ (where vp is the area (d = 2) or volume (d = 3) of a particle) and the dimensionless intrinsic run length 0/D, where 0 ≡ U0τR, D being the particle diameter and U0 is the intrinsic active speed. We therefore choose a conservative force F C α that results in hard-particle interactions, as further detailed in the Materials and Methods. The probability density fN (Γ; t) of finding the system in a microstate Γ = (r N , q N ) at time t satisfies a conservation equation ∂fN /∂t = LfN , where L is the relevant dynamical operator specific to the microscopic equations-of-motion [e.g., Eq. (14)]. Conservation equations needed to describe the density-field (at a minimum, the continuity equation and linear momentum conservation) can be directly obtained through this dynamical operator and distribution function. For example, the continuity equation for the ensemble-averaged microscopic density ρ(x; t) = ρ(x) = N α=1 δ(x − rα) is given by ∂ρ/∂t = γρ LfN dΓ where γ is the phase-space volume. An expression for linear momentum conservation and all other required conservation equations can be similarly obtained. In the case of ABPs, L is the Fokker-Planck (or Smoluchowski) operator. For brevity, this operator and the conservation equations resulting from it are provided in the Materials and Methods and a complete derivation can be found in the SI. Here, we only include the necessary results to obtain the MIPS phase diagram. The linear momentum balance for overdamped ABPs is found to simply be 0 = ∇ · σ + b, where the inertial terms [the left-hand-side of Eq. (3)] are identically zero. The stress is identified as σ = σ C , where σ C is the stress generated by the conservative interparticle forces. 
The body forces are given by $\mathbf{b} = -\zeta\mathbf{j}^\rho + \zeta U_0\mathbf{m}$, where $-\zeta\mathbf{j}^\rho$ is the drag force density and $\zeta U_0\mathbf{m}$ is the active force density arising from the polarization density field $\mathbf{m}(\mathbf{x}; t) = \langle\sum_{\alpha=1}^N \mathbf{q}_\alpha\delta(\mathbf{x} - \mathbf{r}_\alpha)\rangle$. For the quasi-1d system, the active force density is the sole body force as $\mathbf{j}^\rho = 0$, reducing the linear momentum balance to:

$$0 = \nabla\cdot\boldsymbol{\sigma}^C + \zeta U_0\mathbf{m}. \quad [15]$$

Activity thus manifests as a body force (40-43) rather than a true stress. An added complexity for ABP coexistence is that we now require an additional conservation equation for the polarization density field m, as it appears in Eq. (15). This is given by:

$$\mathbf{m} = -\frac{\tau_R}{d-1}\nabla\cdot\mathbf{j}^m. \quad [16]$$

The form of Eq. (16) allows us to write an effective stress for the system as:

$$\boldsymbol{\Sigma} = \boldsymbol{\sigma}^C + \boldsymbol{\sigma}^{\rm act}, \quad [17]$$

where we have defined the active or "swim" (44) stress as $\boldsymbol{\sigma}^{\rm act} = -\zeta U_0\tau_R\,\mathbf{j}^m/(d-1)$ (43). It is important to note that the effective stress we define here is not a true stress [just as the Maxwell stress tensor is not a true stress tensor (45)]. This distinction between true stresses ($\boldsymbol{\sigma}^C$) and effective stresses ($\boldsymbol{\Sigma}$) was found to be crucial (43) in computing the surface tension of ABPs (46-48), which requires the true stress tensor (43, 49). In our derivation of the effective stress [Eq. (17)] we have made no approximations. However, to utilize our nonequilibrium coexistence criteria, we must be able to express $\Sigma = \sigma^C_{zz} + \sigma^{\rm act}_{zz}$ in terms of bulk equations-of-state and density gradients. A gradient expansion of the conservative interparticle stress $\sigma^C_{zz}$ results in the bulk interaction pressure $p_C(\rho)$ and Korteweg-like terms with coefficients related to the pair-interaction potential and pair-distribution function (38). In the SI, we show that the coefficients on the gradient terms associated with $\sigma^C_{zz}$ scale as $\zeta U_0 D$, the stress scale for active hard-particle collisions, while, as we demonstrate next, the gradient terms in the active stress scale as $\zeta U_0\ell_0$.
As MIPS occurs at $\ell_0/D \gg 1$, we can safely discard the Korteweg-like terms and approximate the conservative interparticle stress as $\sigma^C_{zz} \approx -p_C(\rho)$. We now turn our focus to an expression for the active stress $\sigma^{\rm act}_{zz}$ in terms of bulk equations-of-state and density gradients. Deriving a constitutive equation for the polarization flux $\mathbf{j}^m$ results in $\sigma^{\rm act}_{zz}$ taking the following form:

$$\sigma^{\rm act}_{zz}(z) = -\frac{\zeta\ell_0 U_0 U(\rho)}{d(d-1)}\left[\rho(z) + d\,Q_{zz}(z)\right], \quad [18]$$

where $Q_{zz}$ is the normal component of the traceless nematic density field $\mathbf{Q}(\mathbf{x}; t) = \langle\sum_{\alpha=1}^N (\mathbf{q}_\alpha\mathbf{q}_\alpha - \mathbf{I}/d)\,\delta(\mathbf{x} - \mathbf{r}_\alpha)\rangle$. $U_0 U(\rho)$ is the density-dependent average speed of the particles. In the absence of interparticle interactions, the normalized speed U(ρ) = 1 as particle motion is unencumbered. An equation-of-state for U(ρ) is required to describe this bulk contribution of the active stress. The nematic field satisfies its own conservation equation, which takes the following form at steady state:

$$Q_{zz}(z) = -\frac{\tau_R}{2d}\frac{d}{dz}j^Q_{zzz}, \quad [19a]$$

$$j^Q_{zzz} = U_0 U(\rho)B_{zzz}(z) + \left[\frac{3U(\rho)}{d+2} - \frac{1}{d}\right]U_0 m_z(z) + \frac{1}{d\zeta}\frac{dp_C}{dz}, \quad [19b]$$

where $B_{zzz}$ is the relevant component of the traceless third orientational moment $\mathbf{B} = \langle\sum_{\alpha=1}^N (\mathbf{q}_\alpha\mathbf{q}_\alpha\mathbf{q}_\alpha - \boldsymbol{\alpha}\cdot\mathbf{q}_\alpha/(d+2))\,\delta(\mathbf{x} - \mathbf{r}_\alpha)\rangle$, where $\boldsymbol{\alpha}$ is a fourth-rank isotropic tensor (see Materials and Methods or SI). As we are interested in density gradients up to second order, we can safely close the hierarchy of orientational moments by setting $\mathbf{B} = 0$. We also recognize from linear momentum conservation [Eq. (15)] that $\zeta U_0 m_z - dp_C/dz = 0$, allowing us to substitute $p_C$ in place of $m_z$ in Eq. (19b). Our expression for the effective stress is now given by Eq. (20), where $p_{\rm act} = \rho\zeta\ell_0 U_0 U(\rho)/d(d-1)$ is the active pressure (42-44, 50-53), an effective pressure emerging from the active body force density.

[Fig. 2 caption (fragment): the equal-area construction in the $P - \upsilon$ plane overestimates the coexistence pressure as predicted from (b) the equal-area construction in the $P - p_C$ plane established by our nonequilibrium theory. $P$ and $p_C$ are made dimensionless by $\zeta U_0 D/v_p$.]
$$-\Sigma = p_C + p_{\rm act} - \frac{3\ell_0^2}{2d(d-1)(d+2)}\,U(\rho)\frac{d}{dz}\left[U(\rho)\frac{dp_C}{dz}\right], \quad [20]$$

The mechanical terms needed to apply our nonequilibrium coexistence criteria, for a given activity $\ell_0$, can now be identified as:

$$P(\rho) = p_C + p_{\rm act}, \quad [21a]$$

$$a(\rho) = \frac{3\ell_0^2}{2d(d-1)(d+2)}\,U^2\frac{\partial p_C}{\partial\rho}, \quad [21b]$$

$$b(\rho) = \frac{3\ell_0^2}{2d(d-1)(d+2)}\,U\frac{\partial}{\partial\rho}\left(U\frac{\partial p_C}{\partial\rho}\right). \quad [21c]$$

Equations (13c), (21a), and (21b) allow us to identify $\mathcal{E}(\rho) = p_C(\rho)$. The coexistence criteria for MIPS are therefore:

$$P(p_C^{\rm liq}) = P(p_C^{\rm gas}) = P^{\rm coexist}, \quad [22a]$$

$$\int_{p_C^{\rm gas}}^{p_C^{\rm liq}} \left[P(p_C) - P^{\rm coexist}\right] dp_C = 0. \quad [22b]$$

Furthermore, the spinodal criterion is indeed found to be $(\partial P/\partial\rho) < 0$ (see SI for details). To apply this coexistence criteria, we need to know the functional forms of $p_C(\rho, \ell_0)$ and $p_{\rm act}(\rho, \ell_0)$ (or, equivalently, U) as a function of volume fraction φ (in place of ρ) and activity $\ell_0/D$. A detailed theoretical treatment for these equations-of-state will require a theory for the pair-distribution function $g(\mathbf{r}, \mathbf{q})$, where $\mathbf{r}$ and $\mathbf{q}$ are the separation vector and relative orientation vector between particle pairs, respectively. The description of nonequilibrium pair-correlations is an active area of investigation. Theories applicable in the dilute limit have been proposed (54), and recent developments have been made towards our understanding of strongly interacting systems (55, 56). An alternative approach is to obtain these equations-of-state directly from particle-based simulations in regions of the $\phi-\ell_0$ plane where the system remains homogeneous. This measured behavior can then be extrapolated to regions of the $\phi-\ell_0$ plane where the equations-of-state cannot be directly obtained by leveraging a number of physical considerations (e.g., $p_C$ is a monotonically increasing function of both φ and $\ell_0$), as detailed in Ref. (57). In two dimensions (2d), we utilize the equations-of-state developed in Ref. (57) and follow a similar procedure to develop three-dimensional (3d) versions, provided in the SI.
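To make Eq. (22) concrete, the sketch below carries out the equal-area construction in the $P - p_C$ plane for toy equations-of-state of our own choosing (a monotonic $p_C(\rho) = \rho^2/(1-\rho)$ and a linear speed $U(\rho) = 1 - \rho$, not the fitted forms of Ref. (57)), with ζ = U0 = 1, d = 2, and run length $\ell_0 = 60$. The integration variable is $p_C$, per Eq. (22b); all function names and parameter values are our own.

```python
L0, D_DIM = 60.0, 2  # run length (with zeta = U0 = 1) and spatial dimension

def p_C(rho):
    # Toy conservative interaction pressure (monotonic in rho, rho < 1).
    return rho**2 / (1.0 - rho)

def dpC(rho):
    # d(p_C)/d(rho), used to integrate over p_C by substitution.
    return (2.0 * rho - rho**2) / (1.0 - rho) ** 2

def P_dyn(rho):
    # Dynamic pressure P = p_C + p_act, with p_act = rho*L0*U(rho)/(d(d-1)).
    U = 1.0 - rho
    return p_C(rho) + rho * L0 * U / (D_DIM * (D_DIM - 1))

def density_roots(Pstar, lo=0.05, hi=0.92, n=20000):
    # All densities with P(rho) = Pstar (sign-change scan + bisection).
    roots, r_prev = [], lo
    f_prev = P_dyn(lo) - Pstar
    h = (hi - lo) / n
    for i in range(1, n + 1):
        r = lo + i * h
        f = P_dyn(r) - Pstar
        if f_prev * f < 0.0:
            a, b = r_prev, r
            for _ in range(60):
                m = 0.5 * (a + b)
                if (P_dyn(a) - Pstar) * (P_dyn(m) - Pstar) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        r_prev, f_prev = r, f
    return roots

def coexistence_pressure(Plo, Phi, iters=60):
    # Equal-area construction in the P - p_C plane [Eq. (22b)]:
    # integral of (P - Pstar) dp_C = (P - Pstar) * p_C'(rho) drho.
    def area(Pstar):
        r = density_roots(Pstar)
        rg, rl = r[0], r[-1]   # outermost roots: gas and liquid densities
        n, s = 4000, 0.0
        h = (rl - rg) / n
        for i in range(n):
            rm = rg + (i + 0.5) * h   # midpoint rule
            s += (P_dyn(rm) - Pstar) * dpC(rm) * h
        return s
    for _ in range(iters):
        mid = 0.5 * (Plo + Phi)
        if area(mid) > 0.0:
            Plo = mid
        else:
            Phi = mid
    return 0.5 * (Plo + Phi)
```

For these parameters P(ρ) is nonmonotonic between its local extrema near ρ ≈ 0.58 and ρ ≈ 0.75, so trial bounds of (7.90, 8.08) bracket the coexistence effective pressure, and the outer roots at the converged value are the binodal densities of this toy model.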
We note that in both 2d (58) and 3d (59), ABPs can exhibit an order-disorder transition. The theory presented here applies only to scenarios where the sole order parameter is density. We therefore limit our focus to polydisperse ABPs in 2d (eliminating any potential ordered phase) and, in 3d, recognize that the liquid-gas transition is metastable with respect to a fluid-crystal transition for much of the phase diagram (59). Figure 2 compares the results of performing the equal-area construction in the $P - p_C$ plane with the naive application of the Maxwell (equilibrium) equal-area construction in the $P - \upsilon$ plane (where υ ~ 1/φ). The equilibrium construction overestimates the coexistence pressure in comparison to our nonequilibrium theory, resulting in less disparate coexisting densities. This trend holds in both two and three dimensions (see the binodals presented in Fig. 3) and is exacerbated with increasing activity. We now compare our theory with extensive simulations of polydisperse hard disks (2d) performed in this study [see Fig. 3(a)] and simulations of monodisperse hard spheres (3d) conducted in Ref. (59) [see Fig. 3(b)]. The agreement between our theory and simulation data is nearly perfect in 2d and, while there is less agreement in 3d, the nonequilibrium theory provides a substantially improved binodal in comparison to that predicted by the equilibrium Maxwell construction. We note that, just as in equilibrium theories for coexistence, the quantitative accuracy of any theory for nonequilibrium coexistence will of course depend on the quality of the equations-of-state, a potential source of the discrepancy in 3d.

Nonequilibrium Interfacial Phenomena

At this point, let us now consider physically why our nonequilibrium mechanical theory consistently predicts a wider binodal when compared to the equilibrium Maxwell construction in the $p - \upsilon$ plane. We first note that Eq. (1c) has a clear mechanical interpretation.
The integrand $p(\upsilon) - p_{\rm coexist}$ isolates the contribution to the pressure arising solely due to interfacial forces. The integral can thus be interpreted as the mechanical work exerted by the interfacial forces on a particle as it moves from one phase to the other. In equilibrium, this (reversible) work is identically zero: moving a particle from liquid to gas (or gas to liquid) requires no work. In the case of ABPs, performing the equilibrium Maxwell construction in the $p - \upsilon$ plane [with the coexistence pressure $P^{\rm coexist}$ determined from the nonequilibrium theory, see Fig. 2(a)] reveals that this work no longer vanishes: the interface works against particle removal from the liquid phase,

$$W^{\rm liq\to gas}_{\rm interf} = \int_{\upsilon_{\rm liq}}^{\upsilon_{\rm gas}} \left[P(\upsilon) - P^{\rm coexist}\right] d\upsilon \ge 0, \quad [23]$$

where the equality only holds at the critical point. This physical picture is consistent with the unique interfacial structure of MIPS, where ABPs within the interface are polarized, facing into the liquid phase. As activity increases, this interfacial polarization intensifies and so too does the departure from the equilibrium Maxwell construction. The above discussion makes clear that nonequilibrium interfacial forces play a determining role in the phase behavior of driven systems. We can investigate this interfacial structure in greater detail, as our mechanical theory, by its very nature, makes predictions about the structure of the interface that can be compared with simulation. A solution of Eq. (20) is shown in Fig. 4, where we find good agreement between our mechanical theory and simulation results for the density φ, polarization $m_z$, and nematic order $Q_{zz}$ profiles. Additionally, we observe that the polar order is proportional to dφ/dz and the nematic order is proportional to $dm_z/dz$, as predicted by their conservation equations. The polarization density, implicated above in the violation of the equilibrium Maxwell construction, can be understood as follows.
From the momentum balance, the difference in $p_C$ between the two phases is balanced by the integral of the active force density:

$$p_C(\rho_{\rm liq}) - p_C(\rho_{\rm gas}) = \int_{z_{\rm gas}}^{z_{\rm liq}} \zeta U_0 m_z\, dz.$$

Particles at the interface are oriented and exert active forces towards the phase with the higher interaction pressure (or density), suppressing the removal of particles from the liquid phase. In the absence of these interfacial active forces (and in the absence of attractive cohesive forces keeping the liquid intact), there would be nothing to prevent the complete dissolution of the liquid phase. The internally generated active force density engenders a unique non-monotonic trend in the interfacial width, predicted by our theory (see Fig. 5). This behavior was first observed in the simulations of Lauersdorf et al. (49) and is reproduced here in our simulations of active spheres (Fig. 5, inset). This trend is in stark contrast to interfaces in equilibrium systems, where the width of the interface decreases monotonically as the system is taken deeper into the coexistence region. We emphasize that the width obtained from our theory is the intrinsic width, $w_0$, as, unlike the width w from simulations, it does not include capillary fluctuations. These two widths, however, are expected and indeed found to be correlated. To illustrate that the origins of this unique nonequilibrium effect are again rooted in the interfacial active force density, consider the following. As one moves deeper into the two-phase region, the difference in interaction pressures (or densities) between coexisting phases increases, and so must the total active force provided by the particles at the interface to maintain this density difference. For sufficiently low activities, the required active force can be achieved by amplifying the active force density, $\zeta U_0 m_z = \zeta U_0\rho\langle q_z\rangle$, through better alignment of particle orientations $\langle q_z\rangle$ towards the liquid phase, which results in a more compact and thinner interface.
However, this reinforcement mode is limited due to the upper bound on the magnitude of the active force density imposed by perfect alignment, $\langle q_z\rangle = 1$. To supply the large active force needed at high activity, the width of the interface must increase with activity: once a packed layer of particles is fully aligned, more layers are necessary to produce the required active force.

Discussion and Conclusions

The nonequilibrium mechanical theory presented in this work allows for the determination of phase diagrams from bulk equations-of-state without making any assumptions regarding the distribution of microstates. Our theory identifies the effective pressure P, which includes the pressure arising from conservative interactions and those arising from nonequilibrium body forces, as the critical mechanical quantity in determining the phase behavior of nonequilibrium systems. Using MIPS as a case study, we find that using a true nonequilibrium coexistence theory results in a significantly more accurate binodal than that obtained through the naive use of the equilibrium coexistence criteria. In equilibrium, the coexistence criteria for phase separation are independent of the system details. All that is required is the equation-of-state (the pressure or chemical potential) to determine the phase diagram. For nonequilibrium systems, the interfacial stresses must be determined to derive the coexistence criteria, which will generally result in system-specific coexistence criteria [i.e., a system-specific $\mathcal{E}(\rho)$]. Moreover, while the order at which the density-gradient expansion is truncated for equilibrium systems will not affect $\mathcal{E}(\rho)$, there is no such guarantee for nonequilibrium systems. This is a result of the coefficients for a nonequilibrium system generally not emerging from a variational principle as in equilibrium.
These considerations might suggest that the equilibrium coexistence criteria, while both rigorously and quantitatively incorrect, might at least provide a rough pragmatic estimate for the binodal of a nonequilibrium material (19,34). However, any departure from the equilibrium Maxwell construction likely indicates the significance of nonequilibrium interfacial forces. Indeed, our theory reveals that the internally generated active force density -present only within the interface -dictates the interface's structure and, in turn, the appropriate coexistence criteria. Finally, the mechanical theory for nonequilibrium phase separation presented in this work applies to scenarios where density is the sole order parameter. A myriad of other nonequilibrium phase transitions have been observed in recent years, including symmetry-breaking transitions [such as active crystallization (59)], transitions with non-conserved order parameters (14), and transitions with multiple order parameters, including traveling states (62). A general mechanical theory, such as that developed here, for these and other phase transitions would provide a much-needed framework for constructing and characterizing nonequilibrium coexistence. Materials and Methods Here, we briefly summarize the simulation and theoretical details while a detailed derivation of the ABP conservation equations is provided in the SI. Simulations. Particle-based simulations were conducted to determine the binodal for 2d polydisperse disks [equations-of-state for this system were exhaustively determined in Ref. (57)] and the equationsof-state for monodisperse 3d hard spheres [the binodal of this system was determined in Ref. (59)]. In all simulations, particles follow the equations-of-motion provided in the main text [Eqs. (14a) and (14b)] and the interparticle force F C [r N ; ε, σ] is taken to result from a Weeks-Chandler-Anderson (WCA) potential (63) (characterized by a Lennard-Jones diameter σ LJ and energy scale ε). 
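The WCA interaction used in the simulations is the Lennard-Jones potential truncated at its minimum and shifted to zero, so that only the repulsive branch survives. A minimal sketch of the resulting pair-force magnitude (our own helper function, with ε and σ_LJ as in the text) is:

```python
def wca_force(r, eps, sigma):
    # Magnitude of the purely repulsive WCA pair force:
    #   F(r) = 24 eps [2 (sigma/r)^12 - (sigma/r)^6] / r   for r < 2^(1/6) sigma,
    # and zero beyond the cutoff, where the LJ potential has its minimum.
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r
```

The force vanishes continuously at $r_c = 2^{1/6}\sigma_{\rm LJ}$ and grows steeply at smaller separations; for a sufficiently large stiffness $S = \varepsilon/(\zeta U_0\sigma_{\rm LJ})$, collisions therefore approximate hard-particle exclusion at an effective diameter $D \approx 2^{1/6}\sigma_{\rm LJ}$.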
Despite the use of a continuous potential, hard-particle statistics can be effectively achieved through careful consideration of the different force scales, as discussed in Ref. (59). Lacking translational Brownian motion, which simply attenuates the influence of activity on the phase behavior, these particles strictly exclude volume with a diameter D set by the potential stiffness S ≡ ε/(ζU 0 σ LJ ) as a measure of the relative strength of conservative and active forces. Continuous repulsions act only at distances between D and 2 1/6 σ LJ , a range that quickly becomes negligible as the stiffness S increases. We use a stiffness S = 50 for which D/(2 1/6 σ LJ ) = 0.9997, effectively achieving hard-sphere statistics. We therefore take the diameter to simply be D = 2 1/6 σ LJ . Holding S fixed to remain in this hardsphere limit, the system state is independent of the active force magnitude and is fully described by two geometric parameters: the volume fraction φ = N πD 3 /6V (or area fraction φ = N πD 2 /4A) and the dimensionless intrinsic run length 0 /D. All simulations were conducted with a minimum of 54,000 particles using the GPU-enabled HOOMD-blue software package (64). Additional details for the construction of the 3d equations-of-state are provided in the SI. Fokker-Planck Equation. The Fokker-Planck (or Smoluchowski) describing the N -body distribution of particle positions and orientations has the following form: ∂f N ∂t + α ∇α · j T α + α ∇ R α · j R α = 0 . [24a] Here, f N (Γ, t) is the probability density of observing a configuration Γ ≡ (r 1 , r 2 , ..., r N , q 1 , q 2 , ..., q N ) at time t, rα and qα (|qα| = 1) are the position and orientation vectors of particle α, j T α and j R α are translational and rotational fluxes of particle α, and ∇α = ∂/∂rα and ∇ R α = qα × ∂/∂qα are translational and rotational gradient operators. The fluxes are given by j T α = U 0 qαf N + 1 ζ F C α f N , [24b] j R α = −τ −1 R ∇ R α f N . 
[24c]

The application of our nonequilibrium coexistence theory requires the steady-state (and density flux-free) linear momentum balance and the conservation equations of any field variable appearing in the momentum balance. Equation (24) and the microscopic definitions of the field variables can be used to obtain these conservation equations (see SI for details), which are summarized next.

Conservation Equations. Conservation of number density is simply the continuity equation:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j}^\rho = 0, \quad [25]$$

which is coupled to linear momentum conservation:

$$0 = \nabla\cdot\boldsymbol{\sigma}^C + \zeta U_0\mathbf{m} - \zeta\mathbf{j}^\rho. \quad [26]$$

The polar order field $\mathbf{m}(\mathbf{x}, t)$ satisfies its own conservation equation:

$$\frac{\partial\mathbf{m}}{\partial t} + \nabla\cdot\mathbf{j}^m + \frac{d-1}{\tau_R}\mathbf{m} = 0, \quad [27a]$$

where the polarization flux follows:

$$\mathbf{j}^m = U_0 U\left(\mathbf{Q} + \frac{1}{d}\rho\mathbf{I}\right). \quad [27b]$$

A microscopic expression for the dimensionless average active speed U is provided in the SI. An additional term, not included in Eq. (27b), also appears but is found to have only a negligible quantitative effect on our findings, as detailed in the SI. The nematic order conservation and constitutive equations are found to be:

$$\frac{\partial\mathbf{Q}}{\partial t} + \nabla\cdot\mathbf{j}^Q + \frac{2d}{\tau_R}\mathbf{Q} = 0, \quad [28a]$$

$$\mathbf{j}^Q = U_0 U\,\mathbf{B} + U_0\,\mathbf{m}\cdot\left[\frac{U}{d+2}\boldsymbol{\alpha} - \frac{1}{d}\mathbf{I}\mathbf{I}\right] - \frac{1}{d\zeta}\left(\nabla\cdot\boldsymbol{\sigma}^C\right)\mathbf{I}, \quad [28b]$$

where $\boldsymbol{\alpha}$ is an isotropic fourth-rank tensor. (In indicial notation, $\alpha_{ijkl} = \delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}$, where $\delta_{ij}$ is the Kronecker delta, the second-rank identity tensor.) In Eq. (28b), the microscopic expression for U differs from that in Eq. (27b). However, to good approximation, these speeds can be taken to be the same, allowing us to express the steady-state equations with only two equations-of-state: $p_C$ and U.

ACTIVE BROWNIAN PARTICLE CONSERVATION EQUATIONS AND CLOSURES

We provide a derivation of the equations presented in the main text: the conservation equations for density, linear momentum, and the other orientational moment fields for a collection of N interacting active Brownian particles (ABPs).
We emphasize that these equations for interacting ABPs have appeared in various forms throughout the literature [1][2][3]. However, in addition to deriving the equations, we will introduce the closures and assumptions that allow us to determine the stationary states of coexistence. The Fokker-Planck (or Smoluchowski) equation describing the N -body distribution of particle positions and orientations follows from the particle equations-of-motion and is given by: ∂f N (Γ, t) ∂t + α ∇ α · j T α + α ∇ R α · j R α = 0 .(S1a) Here, f N (Γ, t) is the probability density of observing a configuration Γ ≡ (r 1 , . . . , r N , q 1 , . . . , q N ) at time t, r α and q α (|q α | = 1) are the position and orientation vectors of particle α, j T α and j R α are the translational and rotational fluxes of particle α, and ∇ α = ∂/∂r α and ∇ R α = q α × ∂/∂q α are translational and rotational gradient operators. The fluxes are given by: j T α = U 0 q α f N + 1 ζ F α f N − D T ∇ α f N ,(S1b)j R α = − 1 τ R ∇ R α f N , ,(S1c) where U 0 is the intrinsic active speed, ζ is the translational drag coefficient, D T is the translational Brownian diffusivity (neglected in the main text but provided here for completeness), τ R is the reorientation time scale, and F α is the conservative force on particle α arising from a the potential energy U (r 1 , . . . , r N ). We consider pairwise additive isotropic interparticle interactions such that U (r 1 , . . . , r N ) = α U ext (r α ) + α β =α U 2 (r αβ )/2. Here, U ext (x) is the externally imposed potential at position x, U 2 (r) is the pair interaction potential, and r αβ = r α −r β (r αβ = |r αβ |). The force on particle α arising from the conservative potentials thus has two contributions F α = F ext α + F C α , where F ext α = −∇ α U ext (r α ), F C α = β =α F C αβ , and F C αβ = −∇ α U 2 (r αβ ). From Eq. (S1) the dynamical operator is defined as L ≡ α [∇ α ·(−U 0 q α −ζ −1 F α +D T ∇ α )+∇ R α ·(τ −1 R ∇ R α )] such that ∂f N /∂t = Lf N . 
The full N-body distribution function $f_N$ allows for the determination of the ensemble average of any observable O: $O = \langle\hat{O}(\Gamma)\rangle$, where $\langle\cdot\rangle \equiv \int (\cdot)\, f_N(\Gamma, t)\, d\Gamma$ is the ensemble average and $\hat{O}(\Gamma)$ is the microscopic definition of the observable O. The time evolution of an observable is then:

$$\frac{\partial O}{\partial t} = \int \hat{O}\,\frac{\partial f_N}{\partial t}\, d\Gamma = \int \hat{O}\,(\mathcal{L}f_N)\, d\Gamma = \int (\mathcal{L}^\dagger\hat{O})\, f_N\, d\Gamma = \langle\mathcal{L}^\dagger\hat{O}\rangle, \quad (S2)$$

where, in the case of Eq. (S1), the adjoint of $\mathcal{L}$ is $\mathcal{L}^\dagger \equiv \sum_\alpha \left[(U_0\mathbf{q}_\alpha + \zeta^{-1}\mathbf{F}_\alpha + D_T\nabla_\alpha)\cdot\nabla_\alpha + \tau_R^{-1}\nabla^R_\alpha\cdot\nabla^R_\alpha\right]$. We first derive an evolution equation for the (number) density field $\rho(\mathbf{x}, t) = \langle\hat{\rho}(\mathbf{x})\rangle$, where $\hat{\rho}(\mathbf{x}) = \sum_\alpha \delta(\mathbf{x} - \mathbf{r}_\alpha)$ is the microscopic density of particles. The continuity equation directly follows from this procedure:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot\mathbf{j}^\rho = 0, \quad (S3a)$$

where:

$$\mathbf{j}^\rho = U_0\mathbf{m}(\mathbf{x}, t) + \frac{1}{\zeta}\mathbf{F}^{\rm ext}(\mathbf{x})\rho + \frac{1}{\zeta}\nabla\cdot\boldsymbol{\sigma}^C(\mathbf{x}, t) - D_T\nabla\rho. \quad (S3b)$$

Here, $\nabla \equiv \partial/\partial\mathbf{x}$ is the spatial gradient operator, $\mathbf{j}^\rho$ is the particle flux, $\mathbf{m}(\mathbf{x}, t) \equiv \langle\hat{\mathbf{m}}(\mathbf{x})\rangle$ is the polarization density field, $\hat{\mathbf{m}}(\mathbf{x}) = \sum_\alpha \mathbf{q}_\alpha\delta(\mathbf{x} - \mathbf{r}_\alpha)$ is the microscopic density of polarization, and:

$$\boldsymbol{\sigma}^C(\mathbf{x}, t) = -\frac{1}{2}\left\langle\sum_\alpha\sum_{\beta\neq\alpha}\mathbf{r}_{\alpha\beta}\mathbf{F}^C_{\alpha\beta}\, b_{\alpha\beta}\right\rangle \quad (S4)$$

is the stress generated by the pairwise interparticle forces, where $b_{\alpha\beta}(\mathbf{x}; \mathbf{r}_\alpha, \mathbf{r}_\beta) = \int_0^1 \delta(\mathbf{x} - \mathbf{r}_\beta - \lambda\mathbf{r}_{\alpha\beta})\, d\lambda$ is the bond function [4, 5]. The four terms in the particle flux (S3b) correspond to the four modes of particle transport: transport driven by the active force, external forcing, interparticle forces, and Brownian motion. Equation (S3b) is also a statement of linear momentum conservation. To see this, we rearrange and find:

$$0 = \nabla\cdot\boldsymbol{\sigma}(\mathbf{x}, t) + \mathbf{b}(\mathbf{x}, t), \quad (S5a)$$

where the body forces are the terms in Eq. (S3b) that generally may not be expressed as divergences of a tensor:

$$\mathbf{b}(\mathbf{x}, t) = -\zeta\mathbf{j}^\rho(\mathbf{x}, t) + \zeta U_0\mathbf{m}(\mathbf{x}, t) + \mathbf{F}^{\rm ext}(\mathbf{x})\rho(\mathbf{x}, t), \quad (S5b)$$

and the stresses are:

$$\boldsymbol{\sigma}(\mathbf{x}, t) = \boldsymbol{\sigma}^C(\mathbf{x}, t) - \zeta D_T\rho(\mathbf{x}, t)\mathbf{I}, \quad (S5c)$$

where $-\zeta D_T\rho\mathbf{I}$ represents the Brownian "ideal gas" stress and $\mathbf{I}$ is the second-rank identity tensor. In order to solve Eq. (S3), we require the polarization density m and the stress $\boldsymbol{\sigma}^C$. The polarization density has its own evolution equation.
By lettingÔ =m in Eq. (S2) and following procedures similar to what were applied in order to obtain Eq. (S3), we find: ∂m ∂t + ∇ · j m + d − 1 τ R m = 0 ,(S6a)j m = U 0Q (x, t) + 1 ζ F ext m + 1 ζ κ m (x, t) + 1 ζ ∇ · Σ m (x, t) − D T ∇m ,(S6b) where j m is the polarization flux, d is the spatial dimension, andQ(x, t) ≡ Q (x) is the nematic order density field (wherê Q(x) = α q α q α δ(x − r α ) is the microscopic nematic density). The interparticle forces contribute to the transport of the polarization in body-force-like (i.e., no divergence) κ m and stress-like Σ m manners, which are defined as: κ m (x, t) = 1 2 α α =β F C αβ (q α − q β )δ(x − r α ) ,(S7)Σ m (x, t) = − 1 2 α α =β r αβ F C αβ q α b αβ .(S8) Equation (S8) makes clear that dS(x) · Σ m /ζ is the average transport of polarization due to the interparticle forces acting across the infinitesimal area dS from the direction of dS. From the definition [Eq. (S7)] of the body-force-like term κ m , we observe that configurations with q α = q β do not contribute to κ m and configurations with q α = −q β contribute the most to κ m in magnitude. This observation is indicative that κ m is correlated with the reduction in the effective active speed U eff due to interparticle interactions -a pair of particles slow down when they collide head to head but active motion is largely unaffected when interacting particles are oriented in the same direction. From the scaling analysis of the active pressure p act ∼ ρζU 0 τ R U eff [6] and recognizing that the active pressure is proportional to the trace of the polarization flux (with j m ∼ ρ ẋq ) such that p act ∼ ρζU 0 τ R ẋ · q [2, 7-10], we can indeed identify that κ m is directly related to the reduction in the effective speed of active transport of polarization. 
This motivates a constitutive equation κ m = −ζ(U 0 − U m eff )Q ,(S9) which leads to j m = U 0 U mQ + 1 ζ F ext m + 1 ζ ∇ · Σ m − D T ∇m ,(S10) where U m eff = U 0 U m is the effective speed of active polarization transport. The dimensionless quantity U m (∈ [0, 1]) represents the effective speed relative to the intrinsic speed U 0 and is an equation-of-state depending on the system volume (area) fraction φ and activity 0 /D. U m ≈ 1 when particles move nearly freely at low (φ 1) densities and U m ≈ 0 when particles mobility is limited due to interparticle interactions. To close our equations, expressions for σ C ,Q, U m , and Σ m are required. The nematic order densityQ follows its own evolution equation which can again be derived from Eq. (S2) withÔ =Q: ∂Q ∂t + ∇ · jQ + 2d τ R Q − 1 d ρI = 0 ,(S11a)jQ = U 0B (x, t) + 1 ζ F extQ + 1 ζ κQ(x, t) + 1 ζ ∇ · ΣQ(x, t) − D T ∇Q .(S11b) Here, jQ is the nematic order flux andB(x, t) ≡ α q α q α q α δ(x − r α ) . Interparticle interactions again result in body-forceand stress-like terms in the nematic order flux: κQ(x, t) = 1 2 α α =β F C αβ (q α q α − q β q β )δ(x − r α ) , (S12) ΣQ(x, t) = − 1 2 α α =β r αβ F C αβ q α q α b αβ .(S13) Again, the stress-like term ΣQ(x, t) represents the average transport of nematic order due to interparticle forces acting across x and κQ is related to the reduction in the effective speed of active transport of the nematic order. We again propose a constitutive relation: κQ = −ζ(U 0 − UQ eff )B .(S14) Consequently, the nematic order flux becomes: jQ = U 0 UQB(x, t) + 1 ζ F extQ + 1 ζ ∇ · ΣQ(x, t) − D T ∇Q ,(S15) where UQ eff = U 0 UQ is the effective speed of active nematic order transport. The dimensionless quantity UQ is again an equation-of-state depending on φ and 0 /D. It proves convenient to rewrite our equations with the traceless tensorial orientational moments to exclude the portions that are dependent on the lower order orientational moments (i.e., ρ and m). 
The traceless nematic order is defined as $Q = \tilde{Q} - \rho I/d$, while $B = \tilde{B} - \alpha\cdot m/(d+2)$, where $\alpha$ is a fourth-rank isotropic tensor. (In indicial notation, $\alpha_{ijkl} = \delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}$, where $\delta_{ij}$ is the second-rank identity tensor.) The polarization flux [Eq. (S10)] and nematic order evolution equation [Eqs. (S11a) and (S15)] become:

$$j_m = U_0 U^m\left(Q + \frac{1}{d}\rho I\right) + \frac{1}{\zeta}F^{\rm ext} m + \frac{1}{\zeta}\nabla\cdot\Sigma_m - D_T\nabla m\,, \tag{S16}$$

$$\frac{\partial Q}{\partial t} + \nabla\cdot j_Q + \frac{2d}{\tau_R}Q = 0\,, \tag{S17a}$$

where the traceless nematic flux $j_Q = j_{\tilde{Q}} - \frac{1}{d}j_\rho I$ is:

$$j_Q = U_0 U^{\tilde{Q}}\left(B + \frac{1}{d+2}\alpha\cdot m\right) - \frac{1}{d}U_0 m I + \frac{1}{\zeta}F^{\rm ext} Q + \frac{1}{\zeta}\nabla\cdot\Sigma_{\tilde{Q}} - \frac{1}{\zeta d}\left(\nabla\cdot\sigma^C\right)I - D_T\nabla Q\,. \tag{S17b}$$

Importantly, since our mechanical theory for phase coexistence requires density gradients only up to second order, $B$ and $\Sigma_{\tilde{Q}}$ can be safely discarded, as they contribute third-order [$\mathcal{O}(k^3)$ in Fourier space] gradient terms. We postulate that the effective speeds of active transport for polar and nematic orders are identical, i.e. $U(\ell_0/D,\phi) \equiv U^m = U^{\tilde{Q}}$. The conservative interaction stress $\sigma^C$ is a familiar quantity. In the absence of spatial gradients, its sole contribution is the isotropic pressure arising from interparticle interactions, $\sigma^C = -p_C(\ell_0/D,\phi)I$, which again simply depends on activity and the particle density. The stress will of course also have gradient terms (the Korteweg stresses). However, these Korteweg stresses, which arise from the distortion of the pair-distribution function in the presence of density gradients, are significantly smaller than the gradient terms generated by the active stress. Indeed, the nonisotropic contribution to the Korteweg stress (which has the same scale as the isotropic contributions, see main text) was measured in Ref. [9] in order to compute the surface tension of phase-separated ABPs. These Korteweg stresses were found to be negligibly small, scaling as $\sim\zeta U_0 D$, while the active stress gradient terms scale as $\sim\zeta U_0\ell_0$. As we are interested in scenarios where MIPS occurs (i.e., $\ell_0/D \gg 1$), we neglect the gradient terms arising from the conservative stress, such that $\sigma^C = -p_C(\ell_0/D,\phi)I$.

Finally, we show that including the stress-like contribution of the interparticle forces to the polar order (i.e., $\Sigma_m$ in the polar order flux [Eq. (S16)]) broadens the binodals predicted by our mechanical theory, yet only inconsequentially. We therefore close our equations by neglecting this stress-like contribution. We first note that our first coexistence criterion for MIPS, $\mathcal{P}(p_C^{\rm liq}) = \mathcal{P}(p_C^{\rm gas}) = \mathcal{P}^{\rm coexist}$, holds regardless of whether we include $\Sigma_m$, as $\Sigma_m$ vanishes in regions of homogeneous density. However, since density gradients generate $\Sigma_m$, it can in principle alter the second coexistence criterion, the equal-area construction, by altering $\mathcal{E}$. From the definition Eq. (S8), it is seen that $\Sigma_m$ represents the correlation between the interaction stress $\sigma^C$ and orientation $m/\rho$. We examine the effect of this correlation by considering a constitutive relation $\Sigma_m = \xi\sigma^C m/\rho$. Here, a parameter $\xi\,(>0)$ is introduced in order to investigate the effects of the magnitude of this term systematically. The effect of $\Sigma_m$ on the binodal is most clearly seen by considering the following integral:

$$\mathcal{I} \equiv \int_{p_C^{\rm gas}}^{p_C^{\rm liq}}\left[\mathcal{P}(p_C) - \mathcal{P}^{\rm coexist}\right]dp_C = -\frac{\ell_0}{d-1}\int_{p_C^{\rm gas}}^{p_C^{\rm liq}}\frac{d\Sigma_{m,zzz}}{dz}\,dp_C\,. \tag{S18}$$

When the correlation term is neglected, the integral $\mathcal{I}$ trivially vanishes and the corresponding equal-area construction variable is $\mathcal{E}(\rho) = p_C$, as discussed in the main text. With the simple model $\Sigma_m = \xi\sigma^C m/\rho$, we find that:

$$\mathcal{I} = \frac{\xi\tau_R}{2(d-1)\zeta}\int_{p_C^{\rm gas}}^{p_C^{\rm liq}}\left(\frac{dp_C}{dz}\right)^2\frac{d}{dp_C}\left(\frac{p_C}{\rho}\right)dp_C\,. \tag{S19}$$

It can easily be seen that the integrand in Eq. (S19) is always non-negative. Consequently, the integral $\mathcal{I} > 0$, which implies that the coexistence effective pressure $\mathcal{P}^{\rm coexist}$ is reduced upon including $\Sigma_m$ [see Fig. S1(a)]. Accordingly, the difference between the coexisting densities $\rho^{\rm liq}$ and $\rho^{\rm gas}$ increases, as shown in Fig. S1(b). This is because $\rho^{\rm gas}$ decreases with $\mathcal{P}^{\rm coexist}$ more rapidly than $\rho^{\rm liq}$ does, owing to the larger $dp_C/d\rho$ in the liquid phase. Figure S1(b) shows that the coexistence curve is not modified significantly by $\Sigma_m$ even when its magnitude ($\xi$) is large, which allows us to close the equations by discarding this term.

With our closures, only two quantities are required to describe athermal ABP phase coexistence: the effective active speed $U_0 U(\ell_0/D,\phi)$ (or, equivalently, the active pressure) and the conservative interaction pressure $p_C(\ell_0/D,\phi)$. Accurate equations-of-state in 2d were developed in Ref. [10], and a detailed derivation of those expressions and a comparison to simulation data can be found in the main text and supplementary material of that work. A similar procedure to that utilized in Ref. [10] can be used to obtain equations-of-state for active Brownian spheres in three dimensions. The functional forms of these expressions are:

$$\frac{p^{\rm act}}{\zeta U_0/(\pi D^2)} = \phi\,\frac{\ell_0}{D}\,U = \phi\,\frac{\ell_0}{D}\left[1 + \left(1 - \exp\left(-2^{7/6}\,\frac{\ell_0}{D}\,\frac{\phi}{1-\phi/\phi_{\rm max}}\right)\right)\right]^{-1}\,, \tag{S20a}$$

$$\frac{p_C}{\zeta U_0/(\pi D^2)} = \frac{6\times 2^{-7/6}\,\phi^2}{1-\phi/\phi_{\rm max}}\,, \tag{S20b}$$

where $\phi_{\rm max} = 0.645$ is the maximum random packing fraction achieved in 3d from the simulations, $D = 2^{1/6}\sigma_{\rm LJ}$ is the hard-sphere-like diameter, and $\sigma_{\rm LJ}$ is the Lennard-Jones diameter. Figure S2 provides a comparison between simulation data obtained using Brownian dynamics and the equations-of-state above.

MECHANICAL DERIVATION OF THE SPINODAL CONDITION

The primary focus of the main text is to obtain the coexistence criteria for mechanically determining the binodal without invoking thermodynamic arguments. Here, we show that the spinodal (the region of a phase diagram in which a homogeneous density profile is unstable) can also be determined mechanically, without invoking thermodynamic stability arguments.
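Before moving to the spinodal derivation, the interaction-pressure closure above is easy to sanity-check numerically. The sketch below implements only Eq. (S20b), whose grouping is unambiguous in this transcription (the nested exponent structure of Eq. (S20a) is harder to transcribe reliably, so it is omitted); $\phi_{\rm max} = 0.645$ as quoted.

```python
PHI_MAX = 0.645  # maximum random packing fraction quoted for 3d

def p_C_reduced(phi):
    # Interaction pressure of Eq. (S20b) in units of zeta*U0/(pi*D^2):
    # 6 * 2**(-7/6) * phi**2 / (1 - phi/phi_max).
    return 6.0 * 2.0 ** (-7.0 / 6.0) * phi ** 2 / (1.0 - phi / PHI_MAX)

# The interaction pressure is positive, increases monotonically with phi,
# and diverges as phi -> phi_max (the random-close-packing constraint).
phis = [0.1, 0.3, 0.5, 0.6]
vals = [p_C_reduced(p) for p in phis]
print(vals)
print(p_C_reduced(0.644) > 1e2)  # near-divergence close to phi_max
```

The divergence as $\phi \to \phi_{\rm max}$ encodes the incompressibility of the dense phase, which is what makes $dp_C/d\rho$ large in the liquid branch, as used in the argument above.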
The temporal evolution of a density profile is governed by the continuity equation:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot j_\rho = 0\,, \tag{S21}$$

where we now require an expression for the number density flux $j_\rho(x;t) = \rho(x;t)\,u(x,t)$, which follows from conservation of linear momentum:

$$\frac{\partial(m j_\rho)}{\partial t} + \nabla\cdot\left(m j_\rho j_\rho/\rho\right) = \nabla\cdot\sigma + b\,, \tag{S22}$$

where $m$ is the particle mass. Let us consider a passive system initially at rest, $u(x,t_0) = 0$, with an initially homogeneous density profile $\rho(x,t_0) = \rho_0$. We now consider small perturbations to the density and velocity fields, such that $\rho(x) = \rho_0 + \delta\rho(x)$ and $u(x) = \delta u(x)$. Substituting the perturbed density and velocity fields into Eqs. (S21) and (S22) and retaining only terms linear in these perturbations results in:

$$\frac{\partial\delta\rho}{\partial t} + \rho_0\nabla\cdot\delta u = 0\,, \tag{S23a}$$

$$m\rho_0\frac{\partial\delta u}{\partial t} = \nabla\cdot\sigma\,, \tag{S23b}$$

where we have, for now, neglected body forces. We require an expression for $\sigma$ to describe the evolution of the density perturbations. As our focus is on the behavior of long-wavelength perturbations, we omit the spatial gradient terms (e.g., the Korteweg stress or viscous stresses), resulting in $\sigma = -p(\rho)I$. The divergence of the stress can now be expressed as $\nabla\cdot\sigma = -(\partial p/\partial\rho)\nabla\delta\rho$, where the compressibility $(\partial p/\partial\rho)$ is evaluated at $\rho_0$. Differentiating Eq. (S23a) with respect to time and substituting in Eq. (S23b), we arrive at:

$$\frac{\partial^2\delta\rho}{\partial t^2} = \frac{1}{m}\left.\frac{\partial p}{\partial\rho}\right|_{\rho=\rho_0}\nabla^2\delta\rho\,, \tag{S24}$$

which we recognize as a wave equation for $\delta\rho$ with wave speed $c$ given by $c^2 = (\partial p/\partial\rho)/m$. Spatially Fourier transforming Eq. (S24), we arrive at:

$$\frac{\partial^2\delta\rho_k}{\partial t^2} = -(ck)^2\,\delta\rho_k\,, \tag{S25}$$

where $k$ is the magnitude of the wavevector and $\delta\rho_k(k,t)$ is the Fourier-transformed density perturbation. Equation (S25) admits a plane-wave solution:

$$\delta\rho_k = A_k\exp[-ickt] + B_k\exp[ickt]\,, \tag{S26}$$

where $A_k$ and $B_k$ are to-be-determined constants. Clearly, if $c$ is imaginary, the Fourier modes of the density perturbations will grow in time and a homogeneous density is linearly unstable.
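The stability statement following Eq. (S26) can be made concrete with a few lines of arithmetic: for real $c$ the mode amplitude stays bounded by $|A_k| + |B_k|$, while an imaginary $c$ produces exponential growth. The parameter values below are illustrative.

```python
import cmath

def mode_amplitude(t, dpdrho, m=1.0, k=1.0, A=1.0, B=1.0):
    # Plane-wave solution of Eqs. (S25)-(S26) for a single Fourier mode,
    # with the wave speed c = sqrt((dp/drho)/m) taken as a complex number.
    c = cmath.sqrt(complex(dpdrho / m))
    return A * cmath.exp(-1j * c * k * t) + B * cmath.exp(1j * c * k * t)

t = 5.0
stable = abs(mode_amplitude(t, dpdrho=+1.0))   # real c: bounded oscillation
unstable = abs(mode_amplitude(t, dpdrho=-1.0)) # imaginary c: exponential growth

print(stable <= 2.0)   # never exceeds |A| + |B|
print(unstable > 2.0)  # grows beyond the initial bound
```

A negative compressibility makes $c^2 < 0$, so one of the two exponentials in Eq. (S26) becomes a real growing exponential, which is exactly the instability invoked in the text.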
This condition occurs only when $(\partial p/\partial\rho) < 0$, thus recovering the same spinodal condition as expected from thermodynamic stability while using only mechanical arguments. In the case of driven systems, internal body forces $b(x,t)$ may be generated. These internal body forces often have their own evolution equations. However, there is often a separation of timescales between the relaxation dynamics of the density field and the dynamics of the active body forces. Indeed, in the case of ABPs, the body force arises from the polarization density of the active force, which relaxes on a timescale proportional to $\tau_R$ [see Eq. (S6)]. With such a separation of timescales (e.g., for timescales large compared to $\tau_R$ in the case of ABPs), we can ignore the dynamics of the body force and, just as in the case of statics (see main text), define an effective stress $\Sigma$ which incorporates the effects of the body force (for ABPs, $\Sigma = \sigma^C + \sigma^{\rm act}$). Provided that the dynamics of the body force permit the use of this dynamic stress in nonstationary conditions, the analysis applied above for passive systems can be repeated and results in the spinodal condition $(\partial\mathcal{P}/\partial\rho) < 0$, with the dynamic pressure ($\Sigma = -\mathcal{P}I$) now playing the determining role. In unsteady conditions, the active force density is not the only body force in our model. The drag force density also acts as a body force, altering the equation of motion of the velocity field from Eq. (S23b) to:

$$m\rho_0\frac{\partial\delta u}{\partial t} = \nabla\cdot\Sigma - \zeta\rho_0\,\delta u\,, \tag{S27}$$

where $\zeta$ is the translational drag coefficient and we now use the dynamic stress. We can again take a time derivative of Eq. (S23a) and, using Eq. (S27), obtain:

$$\frac{\partial^2\delta\rho}{\partial t^2} + \frac{1}{\tau_p}\frac{\partial\delta\rho}{\partial t} = \frac{1}{m}\left.\frac{\partial\mathcal{P}}{\partial\rho}\right|_{\rho=\rho_0}\nabla^2\delta\rho\,, \tag{S28}$$

where $\tau_p = m/\zeta$ is the momentum relaxation time; note that the drag enters as a positive damping term, as required for Eq. (S28) to reduce to the diffusion equation in the overdamped limit. Equation (S28) is a telegraph equation.
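For a single Fourier mode, the telegraph equation becomes a damped oscillator, and its least-negative eigenvalue should approach the diffusive rate $-\mathcal{D}k^2$ (with $\mathcal{D} = (\partial\mathcal{P}/\partial\rho)/\zeta$) when $\tau_p = m/\zeta$ is small. The quick check below uses illustrative parameter values.

```python
import math

def telegraph_slow_rate(dPdrho, k=1.0, m=1.0, tau_p=1e-3):
    # Slow eigenvalue of y'' + y'/tau_p + (k^2 * dPdrho / m) y = 0,
    # i.e. a single Fourier mode of the damped density-wave equation.
    b = 1.0 / tau_p
    c = k ** 2 * dPdrho / m
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0  # least-negative root

dPdrho, k, m, tau_p = 2.0, 1.5, 1.0, 1e-4
zeta = m / tau_p
D = dPdrho / zeta                      # diffusivity of the overdamped limit
lam = telegraph_slow_rate(dPdrho, k=k, m=m, tau_p=tau_p)
print(abs(lam + D * k ** 2) / (D * k ** 2) < 1e-2)  # lam ≈ -D k^2
```

The same expansion shows that the sign of the slow rate is set by the sign of $\partial\mathcal{P}/\partial\rho$, so the overdamped and underdamped analyses agree on the spinodal condition.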
Considering timescales much larger than the momentum relaxation time (i.e., $\tau_p \to 0$) results in a diffusion equation:

$$\frac{\partial\delta\rho}{\partial t} = \frac{1}{\zeta}\left.\frac{\partial\mathcal{P}}{\partial\rho}\right|_{\rho=\rho_0}\nabla^2\delta\rho\,. \tag{S29}$$

We identify the diffusion coefficient as $\mathcal{D} = (\partial\mathcal{P}/\partial\rho)/\zeta$. The Fourier-space diffusion equation follows as:

$$\frac{\partial\delta\rho_k}{\partial t} = -\mathcal{D}k^2\,\delta\rho_k\,, \tag{S30}$$

with solution:

$$\delta\rho_k = A_k\exp[-\mathcal{D}k^2 t]\,. \tag{S31}$$

Thus, density perturbations will be linearly unstable only when $\mathcal{D} < 0$, again resulting in the same spinodal condition, $(\partial\mathcal{P}/\partial\rho) < 0$. Finally, we note that it is also straightforward to show that the same spinodal condition is recovered using Eq. (S28) without taking the overdamped limit.

Fig. 1. Force balance on the particles within a control volume at steady state. Application of an external force field $F^{\rm ext}$ (top) to a passive system with conservative reciprocal interaction forces $F^C$, and a system with no external forces but with active forces $F^A$ in addition to $F^C$ (bottom).

Fig. 2. Predicted homogeneous equation-of-state for 2d athermal ABPs (57) with $\ell_0/D \approx 31.2$. (a) The equal-area Maxwell construction in the $\mathcal{P}$-$\phi^{-1}$ plane.

Fig. 3. Coexistence curves for athermal active Brownian (a) disks (2d) and (b) spheres (3d). Coexisting densities were obtained from slab simulation data collected in this work (2d) and from Ref. (59) (3d). Critical points displayed were estimated from simulations in Refs. (60) (2d) and (59) (3d). Regions of coexistence and homogeneity are shaded on the basis of our theoretical predictions.

Fig. 4. Comparison of the one-body orientational moments obtained from simulation and theory for 3d ABPs with $\ell_0/D \approx 44.5$. Snapshot represents an instantaneous system configuration. Only a narrow slice (in the out-of-plane direction) of particles is shown for clarity. Polar and nematic order profiles are made dimensionless by the particle volume.
Spatial integral of $m_z$ (shaded) is directly proportional to the difference in liquid- and gas-phase pressures, coupling the interfacial structure to the bulk phase behavior.

Fig. 5. Theoretical "intrinsic" interfacial width $w_0$ of 3d ABPs as a function of the critical parameter (where $\ell_0^c$ is the critical activity), with simulations (inset) corroborating the predicted nonmonotonicity. In both theory and simulation, the width is defined as the "10-90 thickness" (61), which does not presume a particular functional form for the density profile. The qualitative results were found to be insensitive to the specific definition of interfacial width. Stars denote local minima.

FIG. S1. (a) Schematic illustrating the effect of $\Sigma_m$ on the coexistence effective pressure $\mathcal{P}^{\rm coexist}$. On the $\mathcal{P}$-$p_C$ diagram, the integral $\mathcal{I}$ [Eq. (S18)] represents the area between the constant-activity curve (black) and the horizontal line $\mathcal{P} = \mathcal{P}^{\rm coexist}$. When the stress-like contribution $\Sigma_m$ is neglected, $\mathcal{I} = 0$, since the equal-area construction variable is $\mathcal{E}(\rho) = p_C$. However, $\mathcal{I} > 0$ when $\Sigma_m$ is considered, and the corresponding coexistence effective pressure (red) should be lower than the coexistence effective pressure predicted by $\mathcal{I} = 0$ (blue). (b) Coexistence curves for athermal active Brownian spheres (3d) obtained from the mechanical theory with the stress-like contribution of the interparticle forces to the polar order, $\Sigma_m$. The parameter $\xi$ represents the magnitude of $\Sigma_m$. The reduced $\mathcal{P}^{\rm coexist}$ resulting from $\Sigma_m$ increases the difference between the coexisting densities. However, this difference is not significant even for extreme values of $\xi$.

Now, we have coupled evolution equations for the density [Eq. (S3)], polar order [Eqs. (S6a) and (S16)], and nematic order [Eq. (S17)]. Closing these equations requires expressions for the following unknowns: $B$, $\Sigma_{\tilde{Q}}$, $\Sigma_m$, $\sigma^C$, $U^m$, and $U^{\tilde{Q}}$.

FIG. S2. Comparison between simulation data for active Brownian spheres and derived equations-of-state [Eq.
(S20)] as a function of the volume fraction $\phi$. We observe excellent agreement with simulation data for the (a) dynamical pressure, (b) interaction pressure, and (c) active pressure for a range of run lengths $\ell_0/D$ approaching the critical activity $\ell_0^c$. All pressures have been made dimensionless using the active energy scale $\zeta U_0\ell_0/D^3$ to highlight the collapse and the linear behavior of $p^{\rm act}$ and $\mathcal{P}$ at low volume fractions.

ACKNOWLEDGMENTS. A.K.O. is deeply indebted to Phill Geissler for his numerous insights regarding this work. We thank Katie Klymko, Karol Makuch, Zhiwei Peng and Andy Ylitalo for helpful discussions. We gratefully acknowledge support from the Schmidt Science Fellowship in partnership with the Rhodes Trust.

REFERENCES

1. Berry J, Brangwynne CP, Haataja M (2018) Physical principles of intracellular organization via active and passive phase transitions.
2. Lee CF (2020) Formation of liquid-like cellular organelles depends on their composition.
3. Radja A, Horsley EM, Lavrentovich MO, Sweeney AM (2019) Pollen Cell Wall Patterns Form from Modulated Phases. Cell 176:856-868.
4. Besseling R, et al. (2010) Shear Banding and Flow-concentration Coupling in Colloidal Glasses. Phys. Rev. Lett. 105(26).
5. Helfand E, Fredrickson GH (1989) Large Fluctuations in Polymer Solutions under Shear. Phys. Rev. Lett. 62(21):2468-2471.
6. Fielding SM, Olmsted PD (2003) Flow Phase Diagrams for Concentration-coupled Shear Banding. Euro. Phys. J. E 11(1):65-83.
7. Helgeson ME, Porcar L, Lopez-Barron CR, Wagner NJ (2010) Direct Observation of Flow-concentration Coupling in a Shear-banding Fluid. Phys. Rev. Lett. 105(8).
8. Omar AK, Wang ZG (2017) Shear-induced Heterogeneity in Associating Polymer Gels: Role of Network Structure and Dilatancy. Phys. Rev. Lett. 119(11):117801.
9. Cates ME, Tailleur J (2015) Motility-induced Phase Separation. Annu. Rev. Condens. Matter Phys. 6(1):219-244.
10. Ivlev AV, et al. (2015) Statistical mechanics where Newton's third law is broken. Phys. Rev. X 5(1):011035.
11. Klymko K, Geissler PL, Whitelam S (2016) Microscopic origin and macroscopic implications of lane formation in mixtures of oppositely driven particles. Phys. Rev. E 94(2):022608.
12. Han M, Yan J, Granick S, Luijten E (2017) Effective Temperature Concept Evaluated in an Active Colloid Mixture. Proc. Natl. Acad. Sci. USA 114(29):7513-7518.
13. del Junco C, Tociu L, Vaikuntanathan S (2018) Energy dissipation and fluctuations in a driven liquid. Proc. Natl. Acad. Sci. USA 115(14):3569-3574.
14. Fruchart M, Hanai R, Littlewood PB, Vitelli V (2020) Non-reciprocal phase transitions. Nature 592(7854):363.
15. Clerk-Maxwell J (1875) On the dynamical evidence of the molecular constitution of bodies. Nature 11(279):357-359.
16. Fily Y, Marchetti MC (2012) Athermal Phase Separation of Self-propelled Particles with No Alignment. Phys. Rev. Lett. 108(23):235702.
17. Redner GS, Hagan MF, Baskaran A (2013) Structure and dynamics of a phase-separating active colloidal fluid. Phys. Rev. Lett. 110(5):055701.
18. Wittkowski R, et al. (2014) Scalar φ⁴ Field Theory for Active-particle Phase Separation. Nat. Commun. 5:4351.
19. Takatori SC, Brady JF (2015) Towards a thermodynamics of active matter. Phys. Rev. E 91(3):032117.
20. Chakraborti S, Mishra S, Pradhan P (2016) Additivity, density fluctuations, and nonequilibrium thermodynamics for active Brownian particles. Phys. Rev. E 93(5):052606.
21. Solon AP, Stenhammar J, Cates ME, Kafri Y, Tailleur J (2018) Generalized thermodynamics of motility-induced phase separation: phase equilibria, Laplace pressure, and change of ensembles. New J. Phys. 20(7):75001.
22. Solon AP, Stenhammar J, Cates ME, Kafri Y, Tailleur J (2018) Generalized Thermodynamics of Phase Equilibria in Scalar Active Matter. Phys. Rev. E 97(2):020602(R).
23. Paliwal S, Rodenburg J, Roij Rv, Dijkstra M (2018) Chemical potential in active systems: predicting phase equilibrium from bulk equations of state? New J. Phys. 20(1):015003.
24. Hermann S, Krinninger P, de las Heras D, Schmidt M (2019) Phase coexistence of active Brownian particles. Phys. Rev. E 100(5):52604.
25. Hermann S, de las Heras D, Schmidt M (2021) Phase separation of active Brownian particles in two dimensions: anything for a quiet life. Mol. Phys. p. e1902585.
26. Speck T (2021) Coexistence of active Brownian disks: van der Waals theory and analytical results. Phys. Rev. E 103:12607.
27. Redner GS, Wagner CG, Baskaran A, Hagan MF (2016) Classical Nucleation Theory Description of Active Colloid Assembly. Phys. Rev. Lett. 117(14):148002.
28. Speck T, Bialké J, Menzel AM, Löwen H (2014) Effective Cahn-Hilliard Equation for the Phase Separation of Active Brownian Particles. Phys. Rev. Lett. 112(21):218304.
29. Whitelam S, Klymko K, Mandal D (2018) Phase Separation and Large Deviations of Lattice Active Matter. J. Chem. Phys. 148(15):154902.
30. GrandPre T, Klymko K, Mandadapu KK, Limmer DT (2021) Entropy production fluctuations encode collective behavior in active matter. Phys. Rev. E 103(1):012613.
31. Davis HT, Scriven LE (1982) Stress and Structure in Fluid Interfaces. In Adv. Chem. Phys. (John Wiley & Sons, Ltd), pp. 357-454.
32. Aifantis EC, Serrin JB (1983) Equilibrium solutions in the mechanical theory of fluid microstructures. J. Colloid Interf. Sci. 96(2):530-547.
33. Aifantis EC, Serrin JB (1983) The mechanical theory of fluid interfaces and Maxwell's rule. J. Colloid Interf. Sci. 96(2):517-529.
34. Zhang J, Alert R, Yan J, Wingreen NS, Granick S (2021) Active phase separation by turning towards regions of higher density. Nat. Phys. 17(8):961-967.
35. Tjhung E, Nardini C, Cates ME (2018) Cluster Phases and Bubbly Phase Separation in Active Fluids: Reversal of the Ostwald Process. Phys. Rev. X 8(3):031080.
36. Cahn JW, Hilliard JE (1958) Free energy of a nonuniform system. I. Interfacial free energy. J. Chem. Phys. 28(2):258-267.
37. van der Waals JD (1893) Thermodynamische theorie der capillariteit in de onderstelling van continue dichtheidsverandering [Thermodynamic theory of capillarity under the hypothesis of a continuous variation of density]. Verhand. Kon. Akad. Wetensch. Amsterdam Sect. 1.
38. Yang AJM, Fleming PD, Gibbs JH (1976) Molecular theory of surface tension. J. Chem. Phys. 64(9):3732-3747.
39. Korteweg DJ (1904) Archives Neerl. Sci. Exacts. Nat. 6(1).
40. Yan W, Brady JF (2015) The Swim Force As a Body Force. Soft Matter 11(31):6235-6244.
41. Rodenburg J, Dijkstra M, Van Roij R (2017) Van't Hoff's law for active suspensions: The role of the solvent chemical potential. Soft Matter 13(47):8957-8963.
42. Epstein JM, Klymko K, Mandadapu KK (2019) Statistical mechanics of transport processes in active fluids. II. Equations of hydrodynamics for active Brownian particles. J. Chem. Phys. 150(16):164111.
43. Omar AK, Wang ZG, Brady JF (2020) Microscopic origins of the swim pressure and the anomalous surface tension of active matter. Phys. Rev. E 101(1):012604.
44. Takatori SC, Yan W, Brady JF (2014) Swim Pressure: Stress Generation in Active Matter. Phys. Rev. Lett. 113(2):028103.
45. Rinaldi C, Brenner H (2002) Body versus surface forces in continuum mechanics: Is the Maxwell stress tensor a physically objective Cauchy stress? Phys. Rev. E 65(3):036615.
46. Bialké J, Siebert JT, Löwen H, Speck T (2015) Negative Interfacial Tension in Phase-separated Active Brownian Particles. Phys. Rev. Lett. 115(9):98301.
47. Patch A, Sussman DM, Yllanes D, Marchetti MC (2018) Curvature-dependent Tension and Tangential Flows at the Interface of Motility-induced Phases. Soft Matter 14(36):7435-7445.
48. Hermann S, De Las Heras D, Schmidt M (2019) Non-negative Interfacial Tension in Phase-Separated Active Brownian Particles. Phys. Rev. Lett. 123(26):268002.
49. Lauersdorf N, Kolb T, Moradi M, Nazockdast E, Klotsa D (2021) Phase behavior and surface tension of soft active Brownian particles. Soft Matter 17(26):6337-6351.
50. Fily Y, Henkes S, Marchetti MC (2014) Freezing and phase separation of self-propelled disks. Soft Matter 10(13):2132-2140.
51. Mallory SA, Šarić A, Valeriani C, Cacciuto A (2014) Anomalous thermomechanical properties of a self-propelled colloidal fluid. Phys. Rev. E 89(5):052303.
52. Solon AP, et al. (2015) Pressure and Phase Equilibria in Interacting Active Brownian Spheres. Phys. Rev. Lett. 114(19):198301.
53. Solon AP, et al. (2015) Pressure is not a state function for generic active fluids. Nat. Phys. 11(8):673-678.
54. Squires TM, Brady JF (2005) A simple paradigm for active and nonlinear microrheology. Phys. Fluids 17(7):73101.
55. Tociu L, Fodor É, Nemoto T, Vaikuntanathan S (2019) How Dissipation Constrains Fluctuations in Nonequilibrium Liquids: Diffusion, Structure, and Biased Interactions. Phys. Rev. X 9(4):041026.
56. Tociu L, Rassolov G, Fodor É, Vaikuntanathan S (2022) Mean-field theory for the structure of strongly interacting active liquids. J. Chem. Phys. 157(1):014902.
57. Mallory SA, Omar AK, Brady JF (2021) Dynamic overlap concentration scale of active colloids. Phys. Rev. E 104(4):044612.
58. Digregorio P, et al. (2018) Full Phase Diagram of Active Brownian Disks: From Melting to Motility-Induced Phase Separation. Phys. Rev. Lett. 121(9):098003.
59. Omar AK, Klymko K, GrandPre T, Geissler PL (2021) Phase Diagram of Active Brownian Spheres: Crystallization and the Metastability of Motility-Induced Phase Separation. Phys. Rev. Lett. 126(18):188002.
60. Siebert JT, et al. (2018) Critical behavior of active Brownian particles. Phys. Rev. E 98(3):030601(R).
61. Lekner J, Henderson JR (1978) Theoretical determination of the thickness of a liquid-vapour interface. Physica A 94(3-4):545-558.
62. You Z, Baskaran A, Marchetti MC (2020) Nonreciprocity as a generic route to traveling states. Proc. Natl. Acad. Sci. USA 117(33):19767-19772.
63. Weeks JD, Chandler D, Andersen HC (1971) Role of Repulsive Forces in Determining the Equilibrium Structure of Simple Liquids. J. Chem. Phys. 54(12):5237-5247.
64. Anderson JA, Glaser J, Glotzer SC (2020) HOOMD-blue: A Python package for high-performance molecular dynamics and hard particle Monte Carlo simulations. Comput. Mater. Sci. 173:109363.

* Aomar@berkeley; * [email protected]

[1] S. Paliwal, J. Rodenburg, R. v. Roij, and M. Dijkstra, New J. Phys. 20, 015003 (2018).
[2] A. P. Solon, J. Stenhammar, M. E. Cates, Y. Kafri, and J. Tailleur, New J. Phys. 20, 75001 (2018).
[3] J. M. Epstein, K. Klymko, and K. K. Mandadapu, J. Chem. Phys. 150, 164111 (2019).
[4] W. Noll, Indiana Univ. Math. J. 4, 627 (1955).
[5] R. B. Lehoucq and A. Von Lilienfeld-Toal, J. Elasticity 100, 5 (2010).
[6] S. C. Takatori, W. Yan, and J. F. Brady, Phys. Rev. Lett. 113, 028103 (2014).
[7] A. Patch, D. M. Sussman, D. Yllanes, and M. C. Marchetti, Soft Matter 14, 7435 (2018).
[8] S. Das, G. Gompper, and R. G. Winkler, Sci. Rep. 9 (2019).
[9] A. K. Omar, Z.-G. Wang, and J. F. Brady, Phys. Rev. E 101, 012604 (2020).
[10] S. A. Mallory, A. K. Omar, and J. F. Brady, Phys. Rev. E 104, 044612 (2021).
[]
ABJM flux-tube and scattering amplitudes

Benjamin Basso
Laboratoire de Physique Théorique de l'École Normale Supérieure, CNRS, Université PSL, Sorbonne Universités, Université Pierre et Marie Curie, 24 rue Lhomond, 75005 Paris, France

Andrei V. Belitsky
Department of Physics, Arizona State University, Tempe, AZ 85287-1504, USA

Prepared for submission to JHEP. arXiv:1811.09839 (24 Nov 2018). DOI: 10.1007/jhep09(2019)116

Abstract

There is a number of indications that scattering amplitudes in the Aharony-Bergman-Jafferis-Maldacena theory might have a dual description in terms of a holonomy of a supergauge connection on a null polygonal contour, in a way analogous to the four-dimensional maximally supersymmetric Yang-Mills theory. However, so far its explicit implementations have evaded a successful completion. The difficulty is intimately tied to the lack of the T-self-duality of the sigma model on the string side of the gauge/string correspondence. Unscathed by the last misfortune, we initiate with this study an application of the pentagon paradigm to scattering amplitudes of the theory. With the language being democratic and nondiscriminatory to whether one considers a Wilson loop expectation value or an amplitude, the success in the application of the program points towards a possible unified observable on the field theory side. Our present consideration is focused on two-loop perturbation theory in the planar limit, begging for higher loop data in order to bootstrap the current analysis to all orders in the 't Hooft coupling.
Introduction

Without a doubt, integrability is a blessing in the quest of solving planar maximally supersymmetric SU(N) Yang-Mills (SYM) theory in four-dimensional space-time. The gauge/string correspondence provided a hint for this profound property since it allowed one to view gauge dynamics from the perspective of a two-dimensional world-sheet of the type IIB string theory in the AdS_5 × S^5 target space.
The existence of an infinite number of conserved charges encoding the dynamics of the two-dimensional world-sheet, and thus exact solvability of the string sigma model, implied its manifestation in space-time observables which are non-trivial functions of the 't Hooft coupling g 2 = g 2 YM N/(4π) 2 . The ones which played central roles since the inception of the AdS 5 /CFT 4 correspondence were the scaling dimensions of composite single-trace field operators and their dual string energies; the structure constants in the Operator Product Expansion (OPE) and corresponding string couplings; last but not least, regularized gluon and open string scattering amplitudes. For this last instance, the T-self-duality of the AdS 5 ×S 5 background was crucial since it allowed one to map the open string amplitudes to the string world-sheet bounded by a closed polygonal contour formed by the particles' momenta [1]. From the gauge theory standpoint, this yielded a conjecture that amplitudes are equivalent to the vacuum expectation value of a super-Wilson loop on a null polygonal contour [1][2][3][4][5]. By this virtue, the gauge theory enjoys yet another symmetry, the dual superconformal symmetry [6,7], which is manifest in the Wilson loop representation and closes with traditional superconformal symmetry onto a Yangian algebra [8]. Quantum mechanical anomalies violate the bulk of symmetries but in a manner that can be used to derive predictive Ward identities [6]. This allowed one to fix the four-and five-leg amplitudes completely and, starting from six legs and beyond, up to an additional dual conformal-invariant remainder function [9,10]. These considerations spawned the development of a non-perturbative method to calculate the near-collinear limit of scattering amplitudes at any value of the 't Hooft coupling [11] by decomposing null-polygonal Wilson loops in terms of pentagons [12], which were determined from a set of bootstrap equations. 
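For reference, the 't Hooft coupling whose inline definition is garbled at the start of this paragraph reads, in standard notation:

```latex
g^{2} \;=\; \frac{g_{\rm YM}^{2}\,N}{(4\pi)^{2}}\,.
```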
The formalism is akin to the conventional OPE for correlation functions of local operators. Taking the limit of adjacent segments of the loop's contour to approach the same null line generates curvature field insertions into the Wilson link stretched along this direction. Physically, they are viewed as excitations propagating on top of the vacuum, which is the Faraday color flux tube. Their dynamics is integrable and was explored in the context of the large-spin approximation to single-trace operators. At any finite order of the near-collinear expansion, there is a limited number of contributing flux-tube states, which, however, have to be summed over in order to get an exact representation of the Wilson loop and correspondingly space-time scattering amplitudes in generic kinematics. The pentagon program was completed in recent years [13][14][15][16][17][18][19][20][21] and allowed one to compute the aforementioned remainder function at finite coupling in the collinear limit and successfully confront with various data stemming from other approaches to gauge-theory scattering amplitudes either within perturbation theory [22][23][24][25][26][27][28][29][30][31] or at strong coupling [32,33]. A decade younger AdS 4 /CFT 3 sibling of the original AdS 5 /CFT 4 correspondence has been known for quite some time now. The dual pair involved in this case is a particular three-dimensional superconformal SU(N )×SU(N ) Chern-Simons theory with level ±k, dubbed the Aharony-Bergman-Jafferis-Maldacena (ABJM) theory, and M-theory on AdS 4 ×S 7 /Z k . Furthermore, the double scaling limit k, N → ∞ with the 't Hooft coupling λ = N/k held fixed, yields a correspondence between the planar ABJM theory and free type IIA superstring theory in AdS 4 ×CP 3 . Integrability appears to be ubiquitous in both examples. However, while both instances share similarities there are also significant qualitative differences (at least in the present state of the art). 
The most important deviation from the SYM story, pertinent to our current consideration, is the absence of a well-established duality of scattering amplitudes in ABJM theory to a null-polygonal super-Wilson loop. This can be traced back to the lack of the fermionic T-self-duality of the AdS 4 ×CP 3 background [34], see also [35][36][37][38]. If exists, it would imply by default the dual superconformal symmetry. In spite of the fact that this dual description is not known, the four-and six-leg tree ABJM amplitudes were found to possess a Yangian symmetry [39]. This can be traced back to a hidden OSp(6|4) dual superconformal symmetry [40]. In fact, a Yangian-invariant formula for an arbitrary n-leg tree level amplitude was proposed in [41], see also [42,43], in the form of Grassmannian integrals, mirroring the SYM construction [44]. A BCFW-like recursion in three dimensions, which preserves the dual conformal symmetry, was suggested in [45], where the eight-leg tree amplitude was calculated explicitly as well. Loop-level explicit ABJM analyses are more scarce, but what was found in those considerations is even more encouraging for the applicability of the pentagon OPE. The result of [45] suggested that all cut-constructible loop amplitudes within generalized unitaritybased methods [46] possess the dual symmetry as well. This selection rule for the basis of unregularized momentum integrals was the central point for successful (and relatively) concise calculation of high-order perturbative amplitudes in the SYM theory [47,48]. The explicit result for the four-point ABJM planar amplitude up to two loops confirms this expectation. In particular, the cut-based construction of the amplitude [49] from a set of dual conformal invariant integrals coincides with a direct Feynman diagram computation [50] which does not assume this property from the onset. 
Moreover, the final result, in a fashion analogous to SYM, can be interpreted as a solution to the anomalous dual conformal Ward identities, which fix it up uniquely. This result reaffirmed the putative duality to a Wilson loop expectation value, as after proper identification it is identical to the four-cusp Wilson loop [51] and, in addition, is strikingly similar to its SYM counterpart. The three-loop verification was further provided in [52] as an evidence for absence of contributions to the cusp anomalous dimension in the ABJM theory at odd loop orders, also known from other considerations [53]. In ABJM theory, all multileg amplitudes beyond four external lines correspond to non-MHV ones, in the SYM language. This implies that the duality, if exists, should be to some version of a superloop, see e.g. [54] for a proposal. Currently, the only available higher-loop data is the six-leg amplitude which was computed at one [55][56][57][58] and two [59] loops. It was found that its anomalous part is, again, in agreement with the results of the dual conformal anomaly equations, reproducing the BDS ansatz [10]. However, there is now a non-trivial homogeneous term which is the remainder function of the dual cross ratios, in complete analogy with the SYM theory. Inspired by these observations, in this paper, we apply the pentagon paradigm to ABJM scattering amplitudes and demonstrate that, within the current state of the art, our analysis suggests the existence of a field theoretical observable that encodes both a (super) Wilson loop on a null polygonal contour as well as the scattering amplitudes in a single object. We provide some evidence for this by analyzing the OPE structure of WLs and scattering amplitudes through two loops using the pentagon factorization. Further verifications and confirmations require availability of higher loop perturbative data as well as multileg amplitudes. Our subsequent presentation is organized as follows. 
In the next section, we briefly review the physics of the flux-tube in the ABJM theory. Some preliminary acquaintance is expected with the subject. Next, we turn to the discussion of the pentagon transitions for all types of fundamental excitations of the flux-tube, starting with twist-one, where our results are robust, and then turning to the twist-one-half spinons, where they are more hypothetical. We use them in Section 3 to construct OPEs for the bosonic Wilson loops with six and seven points. Then, we move on to the six-leg ABJM amplitude in Section 4 and accommodate it within the pentagon framework. Finally, we discuss problems that have to be addressed in future studies. Ansätze for ABJM pentagons In this section, we present conjectures for the pentagon transitions between flux-tube excitations in the ABJM theory. We begin with a lightning review of the flux-tube spectrum and S matrices, and of their relations with the N = 4 SYM flux-tube data. The reader is assumed to have some familiarity with the flux-tubology of N = 4 SYM. Prior to starting our exposition, let us point out that throughout this paper we shall use an effective coupling g 2 = h(λ) = λ 2 + . . . where h(λ) is the interpolating function of the integrable spin chain of the ABJM theory. This function relates integrability to perturbation theory. It was computed at NLO in [60,61] and is known, albeit conjecturally, to all orders in the 't Hooft coupling [62]; see also [63] for its computations done at strong coupling via the string theory side of the dual pair. The coupling g 2 is also the most natural one to use for comparison between the ABJM and SYM theories. As an illustration, the cusp anomalous dimension, which is the flux-tube vacuum energy density, can be matched between the ABJM (N = 6) and SYM (N = 4) theory, using integrability [53], at given coupling g, Γ N =4 cusp (g) = 2Γ N =6 cusp (g) . (2.1) In particular, Γ N =6 cusp (g) = 2g 2 + O(g 4 ) to leading order at weak coupling. 
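The inline matching condition (2.1) quoted above, together with its weak-coupling limit, can be transcribed into display form as:

```latex
\Gamma^{\mathcal{N}=4}_{\rm cusp}(g) \;=\; 2\,\Gamma^{\mathcal{N}=6}_{\rm cusp}(g)\,,
\qquad
\Gamma^{\mathcal{N}=6}_{\rm cusp}(g) \;=\; 2g^{2} + O(g^{4})\,.
```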
Finally, note that since g ∼ λ, the coupling g², which is the natural loop expansion parameter in the N = 4 theory, maps to two powers of the loop expansion parameter of the ABJM theory. The integrability formulae that we will shortly put forward all run in powers of g² and as such miss the odd part of physics.

Flux tube spectrum

Let us start by addressing the flux-tube excitations. These are effective particles which are produced when one deforms the contour of a null polygonal Wilson loop [11] and which propagate on top of the electric flux sourced by the loop [64]. In particular, they are produced in the collinear limit when nearby edges are set to be parallel. The idea behind the null Wilson loop OPE [11] is that a null WL can be completely flattened and replaced by multiple sums over the complete states of flux-tube excitations. As alluded to before, flux-tube excitations can be related to field insertions along a light ray [65] or alternatively to the spectrum of large spin local operators, see [66] for the case at hand. The latter picture allows one to obtain all-order information about their dynamics using integrability. In particular, each excitation carries a momentum p for motion through the large-spin background and an energy E(p) which measures its twist. The dispersion relations E(p) are known to any number of loops [67]. The excitations were classified in [66] and come in two types, for the adjoint and bi-fundamental fields, respectively.

[Figure 1: the ABJM quiver, with the gauge fields A_µ and Â_µ of the U(N)_k × U(N)_{−k} factors connected by the bi-fundamental matter fields and their conjugates.]

Adjoints

The adjoint excitations describe gluonic degrees of freedom and their fermionic superpartners. The most relevant bosonic excitation F = F_{11} corresponds to the twist 1 component of the field strength tensor F_{αβ}, with α, β being the spinor indices. It is the bottom representative of an infinite tower of excitations F_a = D^{a−1}_{11} F_{11} with the twist a = 1, 2, 3, ..., where D_{αβ} is a covariant derivative.
In the integrability set up these higher twist excitations are not fully independent and can be seen as bound states of a twist 1 gluons, F a ∼ F a . It might be surprising to talk about gluons in a Chern-Simons-like theory where these are non-dynamical (non-propagating) degrees of freedom. We could, in principle, eliminate them using equations of motion and use products of bi-fundamental matter fields instead. E.g., in the large spin background, one can certainly think of the F excitation as a singlet compound of matter fields, F ∼ φ Aφ A +φ A φ A ,(2.2) where φ A=1,2,3,4 denotes the scalar components of the matter hypermultiplet andφ A is its conjugate, see figure 1. This writing is not very useful however, if not for recalling the fact that whenever an F appears, we should also expect a pair of matter fields as well, see e.g. figure 6. What matters is that these compounds behave like single-particle excitations on the flux tube of the ABJM theory. In particular they are stable, have real dispersion relations and are to a large extent easier to deal with than the bi-fundamentals they are made out off at the microscopic level. They are the 3d counterparts of the gluonic modes that live on the flux tube of the N = 4 SYM theory. In the latter case we had two of them, F a andF a , carrying opposite charges (helicities) w.r.t. the transverse rotation group O(2). In the 3d theory, the transverse plane reduces to a line and we get a single tower of gluonic modes. Also, these 3d gluons are charge-less, since there is no (continuous) helicity group in 3d. Up to this small departure in quantum numbers, the gluons of the 3d theory are essentially the same as those of the SYM theory. Their flux-tube dispersion relations are in fact identical to the ones found in the 4d theory at any coupling. 
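As we read the inline relation (2.2), with the conjugation bars restored (our reconstruction of the garbled extraction), the singlet compound takes the schematic form:

```latex
F \;\sim\; \phi_{A}\,\bar\phi^{A} \;+\; \bar\phi^{A}\,\phi_{A}\,,
\qquad A = 1,2,3,4\,.
```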
In particular, one has for the twist 1 gluon, E F (u) = 1 + 2g 2 ψ( 3 2 + iu) + ψ( 3 2 − iu) − 2ψ(1) + O(g 4 ) ,(2.3) where ψ(z) = ∂ z log Γ(z) is the digamma function and where u is a rapidity for the momentum of the excitation, p F (u) = 2u + O(g 2 ). Its mass starts at 1 at weak coupling, since the field excitation has twist 1, and grows up to √ 2 at strong coupling, where it becomes identifiable with the transverse mode of a fast-rotating string in AdS 4 , see [64,[68][69][70] for discussions. Notice that formula (2.3) is 1 loop in SYM but a 2 loop result in ABJM. The remaining adjoint particles are fermionic, Ψ AB = −Ψ BA , and fill out a vector multiplet under the R symmetry group SU(4) ∼ SO(6), where A, B are SU(4) spinor indices. They have twist 1 and are images of the fermions of the SYM theory -if not for the fact that in the latter theory fermions came in pairs transforming as the 4 and4 of SU(4). The fermions cannot bind on the physical sheet and thus do not produce towers of the type we just discussed for gluons. There is something funny about them however, in a sense that they do have the tendency to attach to other particles at weak coupling. They then carry small momentum and minimal energy and localize on other flux tube excitations to form descendants or strings. The latter are not really stable, but are long lived at weak coupling and can to a large extent be viewed as particles on their own, see [14,71,72] for more details. We will encounter this phenomenon latter on. For the time being, let us just add that the fermions and their funny physics is essentially identical to the one in the SYM theory. In particular, their dispersion relation is the same as in the 4d theory, E Ψ (u) = 1 + 2g 2 (ψ(1 + iu) + ψ(1 − iu) − 2ψ(1)) + O(g 4 ) ,(2.4) with p Ψ (u) = 2u + O(g 2 ). 
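The two dispersion relations quoted inline above, Eqs. (2.3) and (2.4), in display form:

```latex
E_{F}(u) = 1 + 2g^{2}\left[\psi\!\left(\tfrac{3}{2}+iu\right)+\psi\!\left(\tfrac{3}{2}-iu\right)-2\psi(1)\right] + O(g^{4}),
\qquad p_{F}(u) = 2u + O(g^{2}),
\\[6pt]
E_{\Psi}(u) = 1 + 2g^{2}\left[\psi(1+iu)+\psi(1-iu)-2\psi(1)\right] + O(g^{4}),
\qquad p_{\Psi}(u) = 2u + O(g^{2}).
```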
They are (non-relativistic) Goldstone fermions for the SUSY generators that are spontaneously broken by the flux tube and, as a consequence, their mass is 1 at any value of the coupling [64].

Spinons

The remaining flux-tube excitations are bi-fundamentals. They come in conjugate pairs, Z_A and Z̄^A, called spinons and anti-spinons. They are the ABJM counterparts of the scalar excitations in the SYM theory and are the lightest modes on the flux tube at finite coupling. They have twist 1/2 and belong to the 4 and 4̄ of SU(4). They carry the quantum numbers of the field components (φ_A|ψ_A; ψ̄^A|φ̄^A) of the ABJM matter hypermultiplets. Nonetheless, they do not obviously map to either the boson or the fermion in these multiplets. Instead [73] they are solitonic excitations, in the sense that they interpolate between two degenerate flux tube vacua, and they carry a fractional spin 1/4. As such, we do not expect them to be easily written in terms of fundamental fields. At a coarse-grained level, they are mixtures of the two bi-fundamental fields of the ABJM theory; they can be produced by either field. Although a bit mysterious on the field theory side, a lot is known about them on the integrability side [66,73,74]. In particular, the energy and momentum of a spinon Z with rapidity u are just half of those found for a scalar Φ in the SYM theory,

  E_Z(u) = 1/2 + g² [ψ(1/2 + iu) + ψ(1/2 − iu) − 2ψ(1)] + O(g⁴),   (2.5)

and

  p_Z(u) = u − πg² tanh(πu) + O(g⁴),   (2.6)

where the O(g²) correction to the momentum is displayed for later reference.

This is it for the content of the theory. A comparative summary of the spectra of the 3d and 4d theory is shown in table 1 and in figure 2. The arrangement of flux tube excitations shown in figure 2 first appeared in [75] in connection with the embedding in the SYM integrable spin chain.

Table 1: Flux tube spectroscopy in 4d and 3d.

  type \ theory   SYM             ABJM
  vacuum          1               2 (degenerate)
  lightest        Φ_AB            Z_A & Z̄^A
  fermion         Ψ^A & Ψ̄_A      Ψ_AB
  gluon           F_a & F̄_a      F_a

Scattering matrices

The relation between the 3d and 4d theory does not stop at the level of their energy spectra. The scattering matrices between all of these excitations are also deeply connected to one another. We recall these relations below. They will serve as prototypes for the pentagon transitions to be discussed shortly. The simplest relation holds for flux tube S matrices among adjoint excitations. In this case, we have 2 excitations on the SYM side mapping to just 1 in the ABJM theory. The rule of thumb is that we should fold the SYM excitations to obtain the ABJM result. E.g., for the gluon S matrix, we have two 4d choices, corresponding to FF and FF̄ scattering respectively, while we have only one for the ABJM theory. Hence, we write

  S_FF(u,v)|_{N=6} = S_FF(u,v)|_{N=4} × S_FF̄(u,v)|_{N=4}.   (2.7)

Higher twist gluons are bound states of F's and their S matrices can be obtained by fusion. This operation commutes with the folding rule and thus the formula must also apply to them,

  S_{F_a F_b}(u,v)|_{N=6} = S_{F_a F_b}(u,v)|_{N=4} × S_{F_a F̄_b}(u,v)|_{N=4}.   (2.8)

The rule is more general than that since it applies to all adjoint excitations, and thus also to fermions and scattering among gluons and fermions. Fermions carry R charge indices which are different in the 3d and 4d theory. The matrix part of the S matrices that deals with these indices is universal and given by the SU(4) rational R-matrices, in the relevant representations. The folding rule does not apply to them. It applies to the dynamical (a.k.a. abelian) factors of the S matrices. The S matrices between adjoints and spinons obey an even simpler rule since they are identical to their SYM counterparts.
E.g., the S matrix between a gluon F and a spinon Z in the ABJM theory is the same as the S matrix for a gluon F and a scalar Φ in the SYM theory, and more generally

  S_{FZ}(u,v)|_{N=6} = S_{FZ̄}(u,v)|_{N=6} = S_{FΦ}(u,v)|_{N=4} = S_{F̄Φ}(u,v)|_{N=4}.   (2.9)

This sequence of equalities stays true even if F is replaced by any adjoint excitation. In case where F is replaced by a fermion Ψ, we are then referring to the dynamical factors of the S matrices. The rest, the actual matrix in the S matrix, is again given by SU(4) R-matrices. Last but not least, we have to discuss the pure spinon dynamics and its respective two S matrices, i.e., for the ZZ and ZZ̄ scattering. Putting aside the R-matrices, the relation to the SYM S matrix is now reversed since the mapping from 4d to 3d is one-to-two. We get, accordingly,

  S_{ZZ}(u,v) S_{ZZ̄}(u,v) = S_{ΦΦ}(u,v),   (2.10)

where S_{ΦΦ} is the scalar flux tube S matrix of the SYM theory. Hence, in this sector, the knowledge of the SYM S matrix is not enough to unravel S_{ZZ} and S_{ZZ̄} individually. The missing information lies in the ratio of the S matrices, which is coupling independent and given in terms of the minimal SU(2) S matrix [66],

  S_{ZZ}(u,v)/S_{ZZ̄}(u,v) = S_{SU(2)}(u−v) = Γ(½(iu−iv)) Γ(½(1+iv−iu)) / [Γ(½(iv−iu)) Γ(½(1+iu−iv))].   (2.11)

Altogether, these relations fully characterize the flux tube S matrix of the ABJM theory in terms of the SYM one. The latter has been extensively discussed in the literature, at both weak and strong coupling, see e.g. [13,14,16,64,66,76-81].

Pentagon transitions

Next, we proceed with the pentagon transitions. These are the amplitudes for production and annihilation of excitations on the edges of a pentagon null WL [12]. They are building blocks for the OPE decomposition of a generic null WL. The most basic pentagon transition describes a single excitation jumping from a state |u⟩ to a state |v⟩, residing at the bottom and top of a pentagon, respectively, as shown in figure 3.
Their knowledge is usually enough to build all the other pentagon transitions through a factorized ansatz, see [12,82,83]. In this section, we present a series of conjectures for all elementary pentagon transitions in the ABJM theory which relates them to their SYM counterparts, see [12-14, 16, 17, 82] for the full list of transitions in the SYM theory and [19] for a summary. Our conjectures are robust for the adjoint excitations. The guesswork for the spinons appears to be more difficult and features a new ingredient, not present in the context of the SYM theory. We discuss them at the end. Pentagons for adjoints The most natural guess for the gluon pentagon transition in the ABJM theory is P (u|v) = P (u|v) N =4 ×P (u|v) N =4 ,(2.12) where P (u|v) N =4 andP (u|v) N =4 are respectively the helicity preserving and non-preserving gluon transition of the N = 4 SYM theory. This conjecture has all the desired properties and verifies all the axioms imposed on the pentagon transitions. To begin with, it obeys the fundamental relation, namely P (u|v) = S(u, v)P (v|u) ,(2.13) as a result of the relations between the S-matrices of the two theories. Then, it has the right mirror property, upon the analytic continuation −γ : u → u −γ of the bottom excitation to the neighbouring edge of the pentagon, see figure 3, P (u −γ |v) = P (v|u) . (2.14) This property follows from the mirror properties of the SYM pentagon transitions, P (u −γ |v) N =4 =P (v|u) N =4 ,P (u −γ |v) N =4 = P (v|u) N =4 . (2.15) It is also mirror symmetric, P (u γ |v γ ) = P (u|v), since both P N =4 andP N =4 do possess this property. Finally, the above ansatz has a single pole at v = u, which is a kinematical singularity requirement on pentagon transitions involving identical excitations. This pole comes solely from the P N =4 factor in (2.12). It is required to define the flux tube measure µ(u) = lim v→u 1 (iu − iv)P (u|v) = µ N =4 (u) ×P N =4 (u|u) . 
(2.16) It fixes the rule for integrating in rapidity space when considering WLs, see e.g. Eq. (3.10). In the end, we could write the ansatz (2.12) directly in terms of the S-matrix data of the N = 6 theory, with no reference to the SYM theory,

  P²(u|v) = S(u,v)/S(u^γ,v),   (2.17)

and recognise the canonical (and the most simple form of the) ansatz for pentagon transitions. It obeys all requirements thanks to the unitarity, crossing symmetry transformation and mirror invariance of the gluon S matrix,

  S(u,v)S(v,u) = 1,   S(u^γ,v)S(u^{−γ},v) = 1,   S(u^γ,v^γ)S(v,u) = 1,   (2.18)

where γ is the mirror move depicted in figure 3. Plugging the 4d expressions for the transitions [13] inside (2.12) yields the weak coupling expression

  P(u|v) = − Γ(iu−iv)Γ(2+iu−iv) / [g² Γ(−½+iu)Γ(3/2+iu)Γ(−½−iv)Γ(3/2−iv)] + O(1),   (2.19)

and its residue at iu = iv provides the gluon measure

  μ(u) = − π²g²/cosh²(πu) + O(g⁴).   (2.20)

It roughly measures the cost of producing a gluon on top of the flux tube. We see that it starts at two loops, i.e. g², in accord with the intuition that it takes a loop of matter fields to produce it, see figure 6. The ansatz for the lightest gluons also determines expressions for higher twist gluons, through the fusion procedure, alluded to above, see e.g. [16],

  P_{F_a|F_b}(u|v) = P_{F_a|F_b}(u|v)|_{N=4} × P̄_{F_a|F_b}(u|v)|_{N=4},   (2.21)

and the associated measure as

  P_{F_a|F_b}(u|v) ∼ δ_{ab} / [(iu−iv) μ_{F_a}(u)],   (2.22)

with δ_{ab} being the Kronecker delta. To leading order at weak coupling, one finds using formulae in [16],

  P_{F_a|F_b}(u|v) = (−1)^b (u²+a²/4)(v²+b²/4) Γ((a−b)/2+iu−iv) Γ((a+b)/2−iu+iv) Γ(1+(a+b)/2+iu−iv) / [g² Γ²(1+a/2+iu) Γ²(1+b/2−iv) Γ(1+(a−b)/2−iu+iv)],

while the measure takes the form

  μ_{F_a}(u) = (−1)^a g² Γ²(a/2+iu) Γ²(a/2−iu) / [Γ(a)Γ(1+a)] + O(g⁴).   (2.23)

In distinction to what happens in SYM, these measures display infinite towers of double poles for imaginary rapidities.
As we shall see later on, this feature introduces spurious singularities for the vacuum expectation values of the WLs. It indicates the need to have another source of contributions that will cancel them out to leading order at weak coupling, in sharp contrast with the SYM theory. These additions can only emerge from the spinons which we will discuss below. The square ansatz (2.21) works well for all other pentagon transitions P X|Y between two adjoints X and Y , that is F, Ψ and bound states DF, etc. E.g., the transition between fermions reads P Ψ|Ψ (u|v) = P Ψ|Ψ (u|v) N =4 × P Ψ|Ψ (u|v) N =4 . (2.24) It obeys the fundamental axiom P Ψ|Ψ (u|v) = −S ΨΨ (u, v)P Ψ|Ψ (v|u), with the minus sign stemming for the fact that the fermion S matrix is defined such that S ΨΨ (u, u) = 1. It is harder to carry out further consistency tests since the fermions do not mirror cross nicely, see [14]. Nonetheless, as far as we can tell, the properties of the above ansatz are as good as those of the fermion proposals made for in 4d theory. Using the known expressions for the fermion transitions in the 4d theory, see e.g. [19], 3 we obtain to leading order at weak coupling, P Ψ|Ψ (u|v) = Γ(iu − iv)Γ(1 + iu − iv) g 2 Γ(iu)Γ(1 + iu)Γ(−iv)Γ(1 − iv) + O(1) ,(2.25) and, from the pole at iu = iv, we read out its measure µ Ψ (u) = π 2 g 2 sinh 2 (πu) + O(g 4 ) . (2.26) The other set of transitions for which a direct lift from the 4d theory appears naturally are those involving one adjoint excitation and a spinon. These ones do not have a direct bosonic WL interpretation, since they do not conserve the R charge, but they are building blocks for engeneering more complicated pentagon transitions. In the SYM theory it was possible to isolate them by considering suitable component of the super-Wilson loop [15,17,19]. We shall not discuss this issue here as we do not know of a loop that could accommodate for all these excitations. 
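The fermion transition and its measure quoted inline above, Eqs. (2.25) and (2.26), in display form:

```latex
P_{\Psi|\Psi}(u|v) = \frac{\Gamma(iu-iv)\,\Gamma(1+iu-iv)}{g^{2}\,\Gamma(iu)\Gamma(1+iu)\Gamma(-iv)\Gamma(1-iv)} + O(1),
\qquad
\mu_{\Psi}(u) = \frac{\pi^{2}g^{2}}{\sinh^{2}(\pi u)} + O(g^{4}).
```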
(Processes with fermions might be possible to produce using the super loop of [54].) Take as an example a mixed transition between a spinon Z and a gluon F . A naive guess is simply that P Z|F (u|v) = P Φ|F (u|v) N =4 = P Φ|F (u|v) N =4 . (2.27) Here again, all axioms can be easily seen to be satisfied and self-consistent, owing in part to the fact that the RHSs are insensitive to the helicity of the adjoint excitation. We could as well replace F by a bound state or by a fermion. In the following we will also need the pentagon transition connecting spinons and fermions and use for these the following expressions P Z|Ψ (u|v) = P Φ|Ψ (u|v) N =4 = P Φ|Ψ (u|v) N =4 . (2.28) We also set PZ |X = P Z|X for any adjoint X. The mixed transitions were bootstrapped on the SYM side in [15,17]. At weak coupling, in the normalisation of [19], they read P Z|F (u|v) = 1 4 + v 2 Γ(1 + iu − iv) gΓ 1 2 + iu Γ 3 2 − iv + O(g) , P Z|Ψ (u|v) = √ v Γ 1 2 + iu − iv gΓ 1 2 + iu Γ(1 − iv) + O(g) ,(2.29) and P X|Z = (P Z|X ) * with the involution * being merely the complex conjugation. The square roots are harmless in the SYM theory; these transitions never come alone in physical applications and their square roots always get screened by other factors. It is less evident to us whether the same will always happen in the ABJM theory, but they will be of no harm in applications we consider below. Pentagons for spinons Finally, we come to the most elaborate set of transitions, those for the bi-fundamentals. In this case, we should take a square root of sort, since the scalar field in the SYM theory maps to two excitations of the ABJM theory. The situation is now reversed and hence much harder. Below we present reasonable relations and assumptions for these transitions. We shall also present some weak coupling expressions that we will test later on. We clearly need two pentagon transitions to characterize various processes, namely, P (u|v) = P Z|Z (u|v) ,P (u|v) = P Z|Z (u|v) . 
(2.30) It is natural, in light of the relation between the spinon and scalar excitations, to expect that P (u|v)P (u|v) = P (u|v) N =4 ,(2.31) where P (u|v) N =4 is the scalar transition in the SYM theory. We can therefore parameterize the spinon transitions as P 2 (u|v) = f (u, v) × P (u|v) N =4 ,P 2 (u|v) = 1 f (u, v) × P (u|v) N =4 , (2.32) where f (u, v) is an unknown function. We shall insist that it is such that the fundamental relation to the S-matrix is obeyed. Enforcing it, we must have f (u, v)/f (v, u) = S SU(2) (u − v) , (2.33) where the RHS is the minimal SU(2) S-matrix (2.11). However, not every solution to (2.33) is acceptable. The function f must be such that the pentagon transitions have decent singularities at weak coupling. In particular, since both P and P N =4 have a simple pole at u = v, it must be so for f as well, f (u, v) ∼ f (u) iu − iv . (2.34) The residue f (u) = ∂ u f (u, v)| v=u relates to the spinon measure µ(u) = µ Z (u) = µZ(u), canonically defined as the residue of the P -transition, µ 2 (u) = 1 f (u) × µ(u) N =4 . (2.35) Let us now make an educated guess for the missing ingredient, that is, the function f . First, recall the expression for the scalar pentagon in the SYM theory at weak coupling, which is given, in the normalization used in [19], by P (u|v) N =4 = Γ(iu − iv) gΓ( 1 2 + iu)Γ( 1 2 − iv) + O(g) . (2.36) Applying the duplication formula for the Euler Gamma function, Γ(iu − iv) = 2 iu−iv 2 √ π Γ iu−iv 2 Γ 1 2 + iu−iv 2 , (2.37) it can be re-written as P (u|v) N =4 = 2 iu−iv Γ 2 iu−iv 2 2 √ πgΓ 1 2 + iu Γ 1 2 − iv × Γ 1 2 + iu−iv 2 Γ iu−iv 2 + O(g) . (2.38) This representation suggests a simple way of achieving correct analytic behavior for the ABJM transitions by choosing f (u, v) = α 2 Γ iu−iv 2 √ 2Γ 1 2 + iu−iv 2 . (2.39) The choice we will make for α is to assume that it is independent of rapidities, but can in principle be a function of the coupling g 2 . This choice fulfills the property (2.33). 
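The duplication formula (2.37), which underlies the choice (2.39), is easy to confirm numerically (our own check; tested at real argument, the identity extends to complex argument by analyticity):

```python
import math

# Gamma duplication formula, as used in eq. (2.37):
#   Gamma(z) = 2**z / (2*sqrt(pi)) * Gamma(z/2) * Gamma(1/2 + z/2)
for z in (0.3, 1.7, 2.5, 4.0):
    lhs = math.gamma(z)
    rhs = 2 ** z / (2 * math.sqrt(math.pi)) * math.gamma(z / 2) * math.gamma(0.5 + z / 2)
    assert abs(lhs - rhs) < 1e-12 * lhs
```

It is this splitting of Γ(iu − iv) into Γ((iu−iv)/2) and Γ(1/2 + (iu−iv)/2) that distributes the weak coupling poles of the SYM transition evenly between P 2 and P̄ 2 in (2.32).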
Also the pole at u = v, as well as its images at u = v + in, are doubled, as needed for the squares in (2.32) to be well defined. At weak coupling, the transitions read P 2 (u|v) = α 2 2 iu−iv Γ 2 ((iu−iv)/2) / [2 √(2π) g Γ(1/2 + iu) Γ(1/2 − iv)] + O(1) , P̄ 2 (u|v) = 2 iu−iv Γ 2 (1/2 + (iu−iv)/2) / [α 2 √(2π) g Γ(1/2 + iu) Γ(1/2 − iv)] + O(1) . (2.40) From the first line, we also read out the measure µ 2 (u) = π √π g / [α 2 √2 cosh (πu)] + o(g) . (2.41) Equations (2.40) and (2.41) are the expressions that we will put to test later on. (In particular, comparison with the two loop hexagon WL will enforce that α 4 = 1 + O(g).) The choice (2.39) appears quite natural at weak coupling, where the transition P (u|v) should relate to tree level propagators for matter fields inserted along the pentagon WL; see figure 4 for an illustration. The square roots in the transition appear worrisome in this regard. There is however no obvious relation between the implicit normalization implied by our ansätze and the one needed to represent the direct tree-level insertions of fields along the loop. In other words, we can assume that the pentagon transition (2.40) describes a free propagator for suitably smeared insertions. After stripping out conformal weights of the scalar field, see [13,82] for a detailed discussion, its propagator reads ⟨φ(σ 1 )φ(σ 2 )⟩ = 1 / √(e σ 1 −σ 2 + e σ 2 −σ 1 + e σ 1 +σ 2 ) = ∫ dudv/(2π) 2 e −iuσ 1 +ivσ 2 P φ|φ (u − i0|v) , (2.42) where P φ|φ (u|v) = Γ(1/4 − iu/2) Γ((iu−iv)/2) Γ(1/4 + iv/2) . (2.43) Similarly, for the twist 1/2 component of the fermion ψ, we get P ψ|ψ (u|v) = Γ(3/4 − iu/2) Γ((iu−iv)/2) Γ(3/4 + iv/2) . (2.44) Both relations follow from the following general formula ∫ dudv/(2π) 2 Γ(s − iu/2) Γ((iu−iv+0)/2) Γ(s + iv/2) e −iuσ 1 +ivσ 2 = 4Γ(2s) / (e σ 1 −σ 2 + e σ 2 −σ 1 + e σ 1 +σ 2 ) 2s , (2.45) used above for the conformal spins s = 1/4 and s = 3/4 for the φ and ψ fields, respectively.
Now, clearly, one can find smearing factors for the incoming and outgoing flux tube states such that the transition P , dressed with the measures, satisfies N φ (u)µ(u)P (u|v)µ(v)N * φ (v) ∝ P φ|φ (u|v) ,(2.46) up to an irrelevant overall factor, and similarly for P ψ|ψ . For instance, we can choose N 2 φ (u) ∝ Γ 1 4 − iu 2 Γ 3 4 − iu 2 , (2.47) for the smearing factor relating the scalar insertion to our abstract spinon, and N 2 ψ (u) = 1/N 2 φ (u) , (2.48) the one of the fermion. We will re-encounter these smearing factors later on in the flux tube analysis of scattering amplitudes, although combined differently. Smearing factors also showed up in the SYM theory in the study of non MHV amplitudes [83] and were dubbed non MHV form factors [13]. Their structure was simpler and easier to understand thanks to their relation to supersymmetry generators. We do not understand them that well in the current 3d story. It is therefore difficult to make precise the mapping between the integrability based predictions and field theory WLs with insertions at higher loops. However one might be able to learn about the higher loop structure of the pentagon transitions by considering dressed propagators like the one depicted in the right panel of figure 4. In light of this agreement, it is tempting to lift the ansatz (2.39) to an all-order conjecture. Equation (2.33) for f is coupling independent and function of the difference of rapidities only. It is then natural to look for a solution possessing the same properties. There is a problem however with the mirror axiom. Namely, the function (2.39) transforms badly upon the mirror rotation and the weak coupling singularities that it is removing on one sheet eventually re-emerge on its mirror rotated version. The SYM transition itself is mirror symmetric, P Φ|Φ (u −γ |v) = P Φ|Φ (v|u) . (2.49) The problem comes from the function f . 
The inverse mirror rotation −γ : u → u −γ boils down to a shift by −i on any meromorphic function, but f does not map back to itself under this shift, f (u −γ , v) = f (u − i, v) = −i tanh(π(u − v)/2) × f (v, u) ≠ f (v, u) . (2.50) Therefore, it is hard to believe that f will remain the same at any loop order. We could enlarge our ansatz by promoting α in (2.39) to a symmetric function of rapidities α → α(u, v) = α(v, u) and look for a solution with a cut structure permitting both α ∼ 1 at weak coupling and α 2 (u −γ , v) = i coth(π(u − v)/2) α 2 (v, u) at finite coupling. The space of solutions is huge and we do not even know if this factor admits an expansion in integer powers of g 2 , like everything else so far, or if odd loops should be included as well. Odd loop corrections to null polygonal Wilson loops are not excluded, although they were found to cancel out at one loop [51,84,85]. If they exist and if our other conjectures are correct, then they must necessarily sit inside the function α. (Progress with this issue might be accessible without necessarily computing loop corrections to higher polygonal WLs. Investigation of the loop corrections to the pentagon WL shown in figure 4 should already provide some insights into the structure of the extra term in P .) Lacking information on the class of functions we are after, it appears difficult, if not impossible, to pin down the right solution for α. 4 In this paper we shall stick to our naive ansatz and treat α as a constant. Although it is unlikely to be valid at higher loops, it will be sufficient for the weak coupling data that we shall analyze in subsequent sections. Let us add in conclusion that it is possible to find a simple function f that obeys both the fundamental relation and the mirror axiom. For instance, α 2 (u, v) ∝ sech(π(u − v)/2) ⇒ f (u, v) ∝ Γ((iu−iv)/2) Γ(1/2 − (iu−iv)/2) (2.51) does obey both of them.
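Both statements can be verified numerically (our own sketch, with α = 1 and a standard Lanczos routine for the complex Gamma function; note that the hyperbolic tangent carries argument π(u − v)/2, as follows directly from the duplication-formula form of f):

```python
import cmath, math

# Lanczos approximation for Gamma at complex argument (g = 7, 9 terms).
_p = [676.5203681218851, -1259.1392167224028, 771.32342877765313,
      -176.61502916214059, 12.507343278686905, -0.13857109526572012,
      9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:
        return cmath.pi / (cmath.sin(cmath.pi * z) * cgamma(1 - z))
    z -= 1
    x = 0.99999999999980993
    for i, p in enumerate(_p):
        x += p / (z + i + 1)
    t = z + len(_p) - 0.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def f(a, b):  # eq. (2.39) with alpha = 1
    z = 1j * (a - b) / 2
    return cgamma(z) / (math.sqrt(2) * cgamma(0.5 + z))

def f_mirror(a, b):  # the mirror-friendly choice of eq. (2.51), up to normalization
    z = 1j * (a - b) / 2
    return cgamma(z) * cgamma(0.5 - z)

u, v = 0.9, 0.2
# f(u - i, v) = -i tanh(pi (u - v)/2) f(v, u)
lhs = f(u - 1j, v)
rhs = -1j * math.tanh(math.pi * (u - v) / 2) * f(v, u)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
# the choice (2.51) maps back to itself under the mirror shift
assert abs(f_mirror(u - 1j, v) - f_mirror(v, u)) < 1e-10 * abs(f_mirror(v, u))
```

The helper names f and f_mirror are ours; only the two analytic relations they encode are taken from the text.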
This choice is natural at strong coupling and relates to the minimal form factors for twist operators in Bykov model [74]. However, since it is not a perfect square, it yields unwieldy singularities at weak coupling and as such does not appear as a viable option. Wilson loops Equipped with a set of pentagon transitions, we can move on to the actual computation of the null polygonal Wilson loop in the ABJM theory. The latter is defined in the usual fashion as a vacuum expectation value of a path ordered exponential of a gauge field integrated along a contour C n , W n = 1 N tr P e i Cn dx·A = W BDS n × R n . (3.1) Here C n describes a null polygon with n edges and A can be either of the two gauge fields of the ABJM theory, see figure 1. In this paper we shall remain agnostic about which gauge field is running around the loop. To the accuracy that we will be working, there is simply no difference between the two options [51,84,86]. (The difference is odd in the coupling and stays beyond the range of applicability of our conjectures; it could contain important information about higher loop completion of our ansätze, however). In Eq. (3.1), we anticipated a factorization of the Wilson loop into a BDS part and a remainder function, with the former absorbing all the UV divergences and the latter being a finite function of conformal cross ratios. This decomposition, which is a consequence of 4 Another source of inspiration for this problem is strong coupling where the pentagon transitions should map to form factors of twist operators in Bykov model [74]. There are several candidates here again and we could not find a single hint as to how to solve the problem all the way down to weak coupling. the dual conformal Ward identities in the SYM theory [6], was also observed to be true perturbatively in the 3d theory [51,84,86]. 
Moreover, and quite remarkably, the remainder function R n vanishes through two loops for all polygons [51,84,86], R n = 1 + O(g 3 ) ,(3.2) meaning that WLs in the ABJM and SYM theory are the same to leading order at weak coupling, if not for the difference in the cusp anomalous dimension, see (2.1). In this section, we will apply our formulae to the computation of the two-loop hexagonal and heptagonal loops for the lowest two twists in the multi-collinear limit, reproducing available perturbative results. We shall also provide a prediction for logarithmically enhanced terms, or leading OPE discontinuity, of the hexagon loop at four loops. Finally, we shall subject our conjectures to a test at strong coupling, by comparing them with the leading twist corrections to the areas of minimal surfaces in AdS 4 . Our analysis relies on the previously derived expressions for the pentagon transitions. We also assume that the multi-particle integrands take the usual form and factorize into products of pentagon transitions [16,17,82], for the dynamical parts, and rational functions of rapidities [13,21,87], for the matrix parts. More specifically, specializing to the hexagon WL for simplicity, we assume that the OPE integrand for a flux tube state made out of n excitations A i (u i ), with i = 1, . . . , n, takes the form i µ A i (u i ) i =j P A i |A j (u i |u j ) × Π({u i }) ,(3.3) where Π({u i }) is the matrix part. The latter can be obtained using an integral formula [87] or by contracting the matrix pentagons of [21]. We cannot confidently predict the sign of each contribution however. These signs will be fixed through a comparison with perturbative results -and more specifically through the condition that spurious singularities cancel out globally. Hexagon at weak coupling We begin with the hexagonal Wilson loop. It is convenient to use the 4d cross ratios (u 1 , u 2 , u 3 ) to parameterize its geometry. 
The latter can then be converted to the standard OPE parameters (τ, σ, φ) through the map [13,88] u 2 = e −2τ / (1 + e −2τ ) , u 3 = 1 / (1 + e 2σ + 2 cos φ e σ−τ + e −2τ ) , u 1 = e 2σ+2τ u 2 u 3 . (3.4) The collinear limit corresponds to τ → ∞, at fixed flux tube position σ and angle φ; equivalently, u 2 → 0 with u 1 + u 3 = 1. The restriction to the 3d kinematics is obtained by setting φ = 0. 5 The OPE does not compute the vev of a Wilson loop, which is UV divergent, but instead a certain ratio W n of Wilson loops, which is finite. The ratio is defined for a given tessellation of the loop in terms of pentagons, as shown in figure 5. For instance, for the hexagon, it reads W 6 = W 6 × W m 4 / (W b 5 × W t 5 ) , (3.5) where W b/t 5 are the bottom/top pentagon WLs embedded in the hexagon and W m 4 is the middle square Wilson loop on which the above two pentagons overlap. This combination has the effect of subtracting the BDS component of the Wilson loop and replacing it by the abelian OPE ratio function [88]. The latter is a finite function of the cross ratios (3.4), W U(1) 6 = exp[ (Γ cusp /4) r 6 (σ, τ, φ) ] , (3.6) where r 6 = 2ζ(2) − log (1 − u 2 ) log[ u 1 u 2 / (u 3 (1 − u 2 )) ] − log u 1 log u 3 − Σ 3 i=1 Li 2 (1 − u i ) , (3.7) and Γ cusp (g) = 2g 2 + O(g 4 ) is the cusp anomalous dimension. With its help, one can write W 6 = W U(1) 6 × R 6 , (3.8) where R 6 = 1 + O(g 3 ) is the remainder function. So defined, the loop admits a nice expansion in the collinear limit, organized in terms of the twists of the particles which are being exchanged between the bottom and top pentagons. In the following, we will consider the leading twist-1 and twist-2 components only. They follow immediately from the large τ expansion of (3.7), using the cross ratios (3.4) and setting φ = 0. The result is given in (3.9) below. On the flux tube side, the '1' in (3.9) comes from the vacuum state, while the next two terms come from the twist-1 and twist-2 excitations, respectively.
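As a quick sanity check of the map (3.4) (a small script of ours, not from the text), one can verify that the collinear corner τ → ∞ indeed enforces u 2 → 0 and u 1 + u 3 → 1:

```python
import math

# OPE parameterization of the hexagon cross ratios, eq. (3.4).
def cross_ratios(sigma, tau, phi):
    u2 = math.exp(-2 * tau) / (1 + math.exp(-2 * tau))
    u3 = 1.0 / (1 + math.exp(2 * sigma)
                + 2 * math.cos(phi) * math.exp(sigma - tau)
                + math.exp(-2 * tau))
    u1 = math.exp(2 * sigma + 2 * tau) * u2 * u3
    return u1, u2, u3

# deep collinear limit: large tau at fixed sigma, 3d kinematics phi = 0
u1, u2, u3 = cross_ratios(sigma=0.4, tau=20.0, phi=0.0)
assert u2 < 1e-15
assert abs(u1 + u3 - 1) < 1e-7
```

The deviations from u 2 = 0 and u 1 + u 3 = 1 are of order e −2τ and e σ−τ , which is why the corrections in (3.9) organize themselves in powers of e −τ .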
In the SYM theory, there is only one candidate at leading twist, the twist-1 gluon (which comes with two helicities). Everything else is either heavier or carries an R-charge. In the ABJM theory, we have a gluonic twist-1 excitation as well, but we can also form a singlet combination of twist-1/2 spinons. We expect that both will contribute at two loops, since they should both stem from the collinear limit of a gluon propagator dressed by a loop of hypermultiplets, as depicted in figure 6. This is what we set out to show below. W 6 = 1 − g 2 e −τ [e σ log (1 + e −2σ ) + e −σ log (1 + e 2σ )] + g 2 e −2τ [σ − 1/2 + sinh σ (e σ log (1 + e −2σ ) − e −σ log (1 + e 2σ ))] + O(g 2 e −3τ , g 3 ) . (3.9) Twist 1 The gluon contribution to the hexagon is by far the simplest one to evaluate. It is given by the Fourier transform of the gluon measure, W F = ∫ du/(2π) µ F (u) e −τ E F (u)+iσp F (u) . (3.10) To leading order at weak coupling, we have E F (u) = 1, p F (u) = 2u and the measure takes the simple form (2.20). We then evaluate the integral by closing the contour of integration in the upper half plane, summing up the residues. This yields W F = −g 2 e −τ × σ / sinh σ + O(g 4 ) . (3.11) This expression has poles at e 2σ = 1, in conflict with the expected analytical properties of the WL, see e.g. the exact expression (3.9). Similar poles were uncovered in individual flux tube components of the SYM hexagonal WL at higher twists [89,90]. They were observed to cancel out, however, after adding up all contributions at a given twist order. We expect a similar phenomenon to occur in the current situation. Namely, we regard the spurious poles in (3.11) as an indication that other flux tube contributions must be added to the mix. The only candidate is a spinon-anti-spinon pair. The ZZ contribution naturally takes the form of a two-fold integral over the two-spinon phase space, W ZZ = ∫ du 1 du 2 /(2π) 2 µ ZZ (u 1 , u 2 ) e −τ (E Z (u 1 )+E Z̄ (u 2 ))+iσ(p Z (u 1 )+p Z̄ (u 2 ))
(3.12) The energy and momentum are the same for the spinon and the anti-spinon and are given to leading order at weak coupling by Eq. (2.5). The rest of the integrand is assumed to take the factorized form introduced earlier and can be written as µ ZZ (u 1 , u 2 ) = −4µ Z (u 1 )µ Z̄ (u 2 ) / [((u 1 − u 2 ) 2 + 4) P Z|Z̄ (u 1 |u 2 ) P Z̄|Z (u 2 |u 1 )] , (3.13) where µ Z = µ Z̄ and P Z|Z̄ = P Z̄|Z = P̄ are the spinon measure and transitions considered in section 2.3.2. The rational part is needed to project the quantum numbers of the pair onto the SU(4) singlet channel. The overall minus sign was fixed a posteriori, so as to permit a successful comparison with the field theory answer. (The sign might look awkward, but it would also be needed in the SYM theory if we were to consider the fermion-anti-fermion contribution to the hexagon using the normalization of Ref. [19] for the pentagons.) Plugging the ansätze (2.41) and (2.40) for the weak coupling measure and transition into Eq. (3.13), it yields µ ZZ (u 1 , u 2 ) = −4π 2 g 2 cosh(π(u 1 − u 2 )/2) / [((u 1 − u 2 ) 2 + 4) cosh (πu 1 ) cosh (πu 2 )] + o(g 2 ) . (3.14) Notice that 1) the unpleasant square roots predicted by (2.41) and (2.40) combine together such that the resulting integrand is meromorphic and 2) the undetermined factor α cancels out between the µ's in the numerator and the P̄ 's in the denominator. The integrand is of order O(g 2 ), in agreement with the diagrammatic intuition, see figure 6. We compute the integral by closing the contours in the upper half-planes and summing up the residues. Integrating first over u 2 , we pick up the residues at u 2 = i/2 + in and u 2 = u 1 + 2i, with n ∈ N, and then at u 1 = i/2 + im, with m ∈ N. Thanks to the zeros in the numerator, only the residues corresponding to the odd powers of e −σ survive, in agreement with the structure of the perturbative answer (3.9). Combining everything together, we obtain W ZZ = −g 2 e −τ × [2 cosh σ log (1 + e −2σ ) − σ e −2σ / sinh σ] .
(3.15) It displays the same spurious poles as the gluon part. They readily cancel out in the sum, as anticipated, W F + W ZZ = −g 2 e −τ [e σ log (1 + e −2σ ) + e −σ log (1 + e 2σ )] + o(g 2 ) . (3.16) This is precisely the field theory result (3.9). Interestingly, although the OPE representation discussed here is more involved than the one in SYM (we have a double integral at leading twist in the ABJM case), the final expression ends up being the same as in the SYM theory at one loop, up to a factor 1/2 to accommodate the difference in the cusp anomalous dimensions in the two theories. We also note that the bulk of the final answer comes from the ZZ pair. Twist 2 There are many more states to consider at the twist-2 level. The complete list includes DF, F F, F ZZ, ΨΨ, Z 2 Ψ̄, Z̄ 2 Ψ, Z 4 , Z̄ 4 , (ZZ) 2 , (3.17) where DF = F 2 is the twist-2 gluon bound state, F F a two-gluon state, etc. However, if our ansätze are correct, assuming also that α = O(g 0 ), then only 4 of the above states contribute at order O(g 2 ), namely, DF, ΨΨ, ΨZ 2 and ΨZ̄ 2 . 6 The gluon contribution is again the easiest one to write. It follows directly from (2.23), W DF = g 2 e −2τ ∫ du/(2π) e 2iuσ π 2 u 2 / [2 sinh 2 (πu)] = g 2 e −2τ [σ(e σ + e −σ ) − (e σ − e −σ )] / (e σ − e −σ ) 3 . (3.18) As before, it has again the undesired singularities at e 2σ = 1. Then comes the ΨΨ contribution W ΨΨ = (1/2) e −2τ ∫ du 1 du 2 /(2π) 2 6µ Ψ (u 1 )µ Ψ (u 2 ) e i(p Ψ (u 1 )+p Ψ (u 2 ))σ / [((u 1 − u 2 ) 2 + 4)((u 1 − u 2 ) 2 + 1) P Ψ|Ψ (u 1 |u 2 ) P Ψ|Ψ (u 2 |u 1 )] , (3.19) with a symmetry factor in front compensating for the two identical fermions. The matrix part is as for the two-scalar contribution to the SYM hexagon [14]. Looking at the weak coupling formulae (2.25) and (2.26) for the fermion pentagon and measure, one would conclude that this integral is ∼ g 8 at weak coupling, that is 8 loops in the ABJM theory. This estimate is not correct however.
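As an aside, the single-gluon integrals (3.11) and (3.18), as well as the pole cancellation in (3.16), are easy to confirm numerically. The sketch below is our own; it assumes the leading-order gluon measure µ F (u) = −g 2 π 2 / cosh 2 (πu), as implied by the quoted result (3.11):

```python
import math

# Composite Simpson rule on [a, b]; the integrands below decay like exp(-2 pi u).
def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def twist1_gluon(sigma):
    # int du/(2pi) pi^2 sech^2(pi u) e^{2 i u sigma}  ->  sigma/sinh(sigma)
    return simpson(lambda u: math.pi * math.cos(2 * u * sigma)
                   / math.cosh(math.pi * u) ** 2, 0.0, 6.0)

def twist2_gluon(sigma):
    # int du/(2pi) e^{2 i u sigma} pi^2 u^2 / (2 sinh^2(pi u)), cf. eq. (3.18)
    def f(u):
        if u == 0.0:
            return 0.5 / math.pi  # smooth u -> 0 limit of the integrand
        return math.pi * u * u * math.cos(2 * u * sigma) / (2 * math.sinh(math.pi * u) ** 2)
    return simpson(f, 0.0, 6.0)

for s in (0.6, 1.0, 1.8):
    assert abs(twist1_gluon(s) - s / math.sinh(s)) < 1e-6
    expect = (s * math.cosh(s) - math.sinh(s)) / (4 * math.sinh(s) ** 3)
    assert abs(twist2_gluon(s) - expect) < 1e-6
    # pole cancellation, eq. (3.16): W_F + W_ZZ is free of 1/sinh(sigma)
    WF = -s / math.sinh(s)
    WZZ = -(2 * math.cosh(s) * math.log(1 + math.exp(-2 * s))
            - s * math.exp(-2 * s) / math.sinh(s))
    total = -(math.exp(s) * math.log(1 + math.exp(-2 * s))
              + math.exp(-s) * math.log(1 + math.exp(2 * s)))
    assert abs(WF + WZZ - total) < 1e-12
```

The last assertion is an exact algebraic identity: using 1 − e −2σ = 2e −σ sinh σ, both sides reduce to −2σe −σ − 2 cosh σ log(1 + e −2σ ).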
It overlooks the fact that the fermions develop quite a strange behavior at small momenta, i.e., p ∼ g 2 , and the aforementioned weak coupling formulae do not properly represent this domain. What we need instead are the weak coupling expressions on the so-called small fermion sheet. They are obtained through an analytic continuation using formulae at finite coupling, as described in Ref. [14]. The small fermion sheet, reached via the above procedure, can be parameterized in terms of a rapidity u, with u = ∞ corresponding to zero momentum. Following [14], we will denote functions evaluated on that sheet, like the momentum, the energy, etc., with a 'check' on the rapidity, e.g., p Ψ (ǔ) = 2g 2 /u + O(g 4 ) , E Ψ (ǔ) = 1 + O(g 6 ) . (3.20) Other quantities like the measure and pentagon transitions also drastically simplify. In particular, one finds, after folding the 4d formulae in appendices of Ref. [19] into 3d ones, 1 P Ψ|Ψ (ǔ 2 |u 1 )P Ψ|Ψ (u 1 |ǔ 2 ) = u 2 2 + O(g 2 ) ,(3.21) together with µ Ψ (ǔ 2 ) = −1 + O(g 2 ) . (3.22) We cannot have more than one small fermion at a time in the case at hand, since to produce a non-vanishing contribution, the small fermions must always bind to something 'big'. Here, one fermion will attach to the other and form a string; of course, it does not matter which one we choose, as long as we add a factor 2 in the end to reflect the doubling. So we can use (3.21), (3.22) in equation (3.19) as well as (2.26) for the measure of the large fermion Ψ(u 1 ). The resulting integrand is then of order O(g 2 ) as desired. We can then integrate the small fermion out by attaching it to the other one. The string is determined by the zeros of the rational factor in (3.19). Here we get two options, u 2 = u 1 − i and u 2 = u 1 − 2i. 7 Picking up the residues, we arrive at W ΨΨ = g 2 e −2τ 2 R+i0 du 1 2π π 2 (u 2 1 + 2) sinh 2 (πu 1 ) e 2iu 1 σ = g 2 e −2τ σ(2 − 5e 2σ + e 4σ ) (1 − e 2σ ) 3 − e 2σ (1 − e 2σ ) 2 . 
(3.23) The i0 prescription is a remnant of the splitting of the kinematics into the small and large domains, see [14], and is needed to avoid the double pole at u 1 = 0. (More precisely, the latter pole is a trace of the small-fermion region that collapses into a point on the large fermion sheet.) We notice here again the presence of unwanted singularities. Finally, we have the integral for ΨZZ, and equivalently ΨZZ, W ΨZZ = 1 2 e −2τ dudv 1 dv 2 (2π) 3 e i(p Ψ (u)+p Z (v 1 )+p Z (v 2 ))σ µ Ψ (u)µ Z (v 1 )µ Z (v 2 )Π ΨZZ P Z|Z (v 1 |v 2 )P Z|Z (v 2 |v 1 ) i=1,2 P Ψ|Z (u|v i )P Z|Ψ (v i |u) , (3.24) with an overall symmetry factor removing overcounting due to the identity of the spinons. The matrix part Π ΨZZ can be obtained from the integral formula of [20] or by contracting the matrix pentagons of [21]. For the singlet channel in 6 ⊗ 4 ⊗ 4, it yields M Ψ(u)Z(v 1 )Z(v 2 ) = 12 (u − v 1 ) 2 + 9 4 (u − v 2 ) 2 + 9 4 ((v 1 − v 2 ) 2 + 1) . (3.25) The spinon part of the integrand reads, according to (2.40) and (2.41), µ Z (v 1 )µ Z (v 2 ) P Z|Z (v 1 |v 2 )P Z|Z (v 1 |v 2 ) = π 2 g 2 (v 1 − v 2 ) sinh 1 2 π(v 1 − v 2 ) α 4 cosh (πv 1 ) cosh (πv 2 ) + o(g 2 ) . (3.26) Hence, after using the fermion data (2.26) and (2.29), the integrand is superficially small, of order O(g 8 /α 4 ). However, here again the dominant contribution does not come from the kinematical domain where the latter formulae apply, but from the small fermion domain. Continuing our expressions to that sheet and taking the weak coupling limit afterwards, one obtains 1 P Ψ|Z (ǔ|v)P Z|Ψ (v|ǔ) = u + O(g 2 ) . The poles in the matrix part (3.25) dictate that the small fermion binds below the spinon's rapidity v i at u = v 1,2 − 3i/2. Picking these residues up and using (3.26) for the rest, it yields W ΨZZ = g 2 e −2τ dv 1 dv 2 (2π) 2 π 2 (9 + 2v 2 1 + 2v 2 2 )(v 1 − v 2 ) sinh 1 2 π(v 1 − v 2 ) e i(v 1 +v 2 )σ α 4 ((v 1 − v 2 ) 2 + 1)((v 1 − v 2 ) 2 + 9) . 
(3.28) 7 The contour of integration in the small fermion domain goes anti-clockwise around all singularities in the lower half plane, see [19] for further detail. We then simply repeat the analysis carried out earlier for the two spinon integral and find W ΨZZ = g 2 e −2τ 2α 4 −1 + 6e 2σ − e 4σ 2(1 − e 2σ ) 2 + σe 2σ (−1 + 9e 2σ − 5e 4σ + e 6σ ) (1 − e 2σ ) 3 + 1 2 (e σ − e −σ ) 2 log (1 + e 2σ ) . (3.29) Adding everything up, one verifies that the bad singularities go away and that the sum matches with (3.9) if α 4 = 1. Four loop leading discontinuity Being convinced that our formulae work correctly at weak coupling, at least at low twists, we can use them to make higher loop predictions for the leading OPE discontinuities (LD) [11,91]. The latter correspond to terms exhibiting maximal powers of the OPE time τ at a given loop order. They follow unambiguously from dressing the flux tube integrands with the leading weak coupling corrections to the energies of the flux-tube excitations. Realizing that these corrections all start at two loops, we obtain W 6 (σ, τ ) LD = ∞ L=2 g 2L τ L−1 (e −τ f (1) L (σ) + e −2τ f (2) L (σ) + . . .) ,(3.30) where f (n) L (σ) is a coupling independent function of σ. We focus here on the LD ∝ g 4 τ . At leading twist, plugging into (3.10) the correction (2.3) to the energy of a gluon, and expanding the exponent at weak coupling, provides the gluon contribution to f (1) 2 , f (1) 2 | F = πdu cosh 2 (πu) e 2iuσ ψ( 3 2 + iu) + ψ( 3 2 − iu) − 2ψ(1) . (3.31) Similarly, one gets with (2.5) the ZZ contribution, f (1) 2 | ZZ = 2 du 1 du 2 cosh 1 2 π(u 1 − u 2 ) ψ( 1 2 + iu 1 ) + ψ( 1 2 − iu 1 ) − 2ψ(1) ((u 1 − u 2 ) 2 + 4) cosh (πu 1 ) cosh (πu 2 ) e i(u 1 +u 2 )σ . (3.32) The integrals can be evaluated by picking up the residues. Then their sum can be expressed in a concise form as f (1) 2 = e σ log (1 + e −2σ )(2 − log (1 + e 2σ )) + e −σ log (1 + e 2σ )(2 − log (1 + e −2σ )) . 
(3.33) Remarkably, it is identical to the LD of the two loop hexagon (at φ = 0) in SYM up to a factor of 1/4. We proceeded similarly at twist 2 and found that here again the result coincides with the SYM expression. Extrapolating to higher twist, one can reasonably conjecture that the LD of the four loop hexagonal WL in the ABJM is 1/4 the corresponding two-loop LD in the SYM theory. (Explicit expression for the LD of the SYM hexagon can be extracted from the formulae given in [91].) It would be interesting to further test this extrapolation and see if one can 'bootstrap' the missing 4 loop information -in the spirit of what was done in [91] for the 2 loop hexagon WL in the SYM theory or at higher loops using the hexagon function bootstrap program [23,25,26,30]. Heptagon at weak coupling We can probe more of the pentagon transitions by considering the heptagon WL, shown in the right panel of figure 5. After modding out by the pentagons and squares in the sequence, the OPE ratio reads W 7 = exp (r 6 (σ 1 , τ 1 , φ 1 ) + r 6 (σ 2 , τ 2 , φ 2 ) + r 7 (σ 1 , σ 2 , τ 1 , τ 2 , φ 1 , φ 2 )) × R 7 , (3.34) with the restriction to the 3d kinematics corresponding to φ 1,2 = 0. According to [85], the remainder function R 7 = 1 + O(g 3 ), and thus W 7 should match with the SYM answer to leading order at weak coupling ∼ g 2 . The r 6 components originate from hexagons embedded inside the heptagon and their OPE analysis reduces to the one carried out earlier. The interesting new ingredient is the 7-point abelian remainder function r 7 that was constructed in [92]. It describes flux-tube excitations traveling all the way from the bottom to the top of the heptagon; X 1 , X 2 = ∅ in figure 5. It is a function of two OPE time τ 1,2 and space σ 1,2 coordinates. We shall only consider it to leading order in the double collinear limit τ 1,2 → ∞ which flattens the heptagon on the middle pentagon in figure 5. 
The relevant expression is W 7 | conn = 1 + g 2 e −τ 1 −τ 2 [ e σ 1 +σ 2 log( (1 + e 2σ 1 )(1 + e 2σ 2 ) / (e 2σ 1 + e 2σ 2 + e 2σ 1 +2σ 2 ) ) + 2e σ 1 −σ 2 log( e 2σ 1 (1 + e 2σ 2 ) / (e 2σ 1 + e 2σ 2 + e 2σ 1 +2σ 2 ) ) + (σ 1 ↔ σ 2 ) ] + . . . , (3.35) where the ellipses stand for higher twist corrections. It is obtained from the expression analyzed in [12,13,92] by setting φ 1 = φ 2 = 0. On the flux tube side, there are 4 distinct processes contributing to (3.35) at leading twist in the bottom and top channels, namely 1) vacuum → F (u) → F (v) → vacuum , 2) vacuum → F (u) → Z(v 1 )Z(v 2 ) → vacuum , 3) vacuum → Z(u 1 )Z(u 2 ) → F (v) → vacuum , 4) vacuum → Z(u 1 )Z(u 2 ) → Z(v 1 )Z(v 2 ) → vacuum . (3.36) Process 1) parallels the one studied in [12,13] for the SYM theory. The integrand is given by W F |F = e −τ 1 −τ 2 ∫ dudv/(2π) 2 µ F (u)µ F (v)P F |F (−u|v) e ip F (u)σ 1 +ip F (v)σ 2 , (3.37) where the contour of integration is taken to be R + i0 in both cases. The prescription is needed to avoid the decoupling pole at u = −v and is dictated by the kinematics of the heptagon WL, see the discussion in [13]. 8 At weak coupling, we replace the momenta by twice their arguments and use the expressions (2.19) and (2.20) for the pentagon and measure, µ F (u)P F |F (−u|v)µ F (v) = −g 2 Γ 2 (3/2 + iu) Γ(−iu − iv) Γ(2 − iu − iv) Γ 2 (3/2 + iv) / [(u 2 + 1/4)(v 2 + 1/4)] . (3.38) 8 The contours are such that the heptagon integral (3.37) reduces to the hexagon one (3.10) when σ 1,2 → −∞ and σ = σ 2 − σ 1 is held fixed. (Figure 7: the two tensor structures π 1 δ^C_A δ^B_D + π 2 δ^B_A δ^C_D allowed for the transition.) The integrand is of order O(g 2 ) as expected. Evaluating the integral by picking up residues in the upper half planes, we obtain W F |F = −2g 2 e −Σ i (τ i +σ i ) [1 + (2 − 3σ 1 )e −2σ 1 + (2 − 3σ 2 )e −2σ 2 + . . .] , (3.39) for the first few terms in the asymptotic limit σ 1,2 → ∞. Processes 2) and 3) are symmetrical and can be obtained from one another by permuting the OPE coordinates σ 1,2 → σ 2,1 .
The integral for 2) is given by W F |ZZ = e −τ 1 −τ 2 ∫ dudv 1 dv 2 /(2π) 3 e ip F (u)σ 1 +iΣ i p Z (v i )σ 2 µ F (u) µ ZZ (v 1 , v 2 ) Π i P F |Z (−u|v i ) , (3.40) with the two-spinon measure (3.14). There is no decoupling pole to handle here and thus the integrals can be taken directly along the real lines. Using (2.29) for the gluon-to-spinon pentagon, one verifies that the integrand is of order O(g 2 ) and one easily obtains W F |ZZ = g 2 e −Σ i (σ i +τ i ) [1 + (3 − 4σ 1 )e −2σ 1 + (3/2)(3 − 4σ 2 )e −2σ 2 + . . .] . (3.41) The final process involves a non-trivial transition between two ZZ pairs at the bottom and top of the pentagon. The integrand is given by µ ZZ (u 1 , u 2 ) µ ZZ (v 1 , v 2 ) × Π i P Z|Z (−u i |v i ) P Z̄|Z̄ (−u i |v̄ i ) × M ({−u}, {v}) , (3.42) where v̄ 1,2 = v 2,1 . It involves a nontrivial matrix part M ({−u}, {v}) which receives contributions from the two tensors allowed for the transition, see figure 7. The two associated polynomials in rapidity differences can be found in [21]. Here we need their sum in the singlet channel, M ({u}, {v}) = (u 1 − v 1 )(u 2 − v 2 + i) − (1/4)(u 1 − u 2 − 2i)(v 1 − v 2 + 2i) . (3.43) Plugging this expression into the integrand and using our guesses for the spinon transitions (2.40), we find that the integrand is meromorphic, of order O(g 2 ), and that it does not depend on α. We get W ZZ|ZZ = −g 2 e −Σ i (τ i +σ i ) [1 + (3 − 4σ 1 )e −2σ 1 + (3 − 4σ 2 )e −2σ 2 + . . .] , (3.44) where integration is performed using +i0 prescriptions for the decoupling poles. One can finally take the sum of all these terms and verify the agreement with the field theory result (3.35). We checked it up to high order in the double expansion at large σ i 's. Strong coupling Complementary tests of our ansätze can be carried out at strong coupling. Wilson loops can then be computed using AdS minimal surfaces, with log W ≃ −A .
(3.45) Integrability greatly helps finding the minimal area A for null polygonal contours and allows one to cast the answer in the form of the free energy of a system of Thermodynamic Bethe Ansatz (TBA) equations [32,33], see also [93,94] for recent studies. One can develop their systematic expansion in the near collinear regime [11], which corresponds to the low temperature expansion of the TBA equations. Here we will only discuss the leading contributions for the hexagonal and heptagonal Wilson loops. They are controlled by the spectrum of AdS excitations, the TBA weights and the TBA kernels. The expressions for AdS 4 can be straightforwardly obtained by folding those of the AdS 5 case. Let us illustrate this for the hexagonal loop. In AdS 5 , the renormalized minimal area receives two types of contributions at strong coupling, see [11,33], A AdS 5 = Γ N =4 cusp (g) × [e iφ A √2 (σ, τ ) + A 2 (σ, τ ) + e −iφ A √2 (σ, τ ) + . . .] , (3.46) from two transverse modes with mass √2 and from one longitudinal mass 2 boson. The dots above stand for contributions of multi-particle states that will not be needed. The reduction to AdS 4 follows simply by setting φ = 0 and adjusting the string tension, A AdS 4 = Γ N =6 cusp (g) × [2A √2 (σ, τ ) + A 2 (σ, τ ) + . . .] . (3.47) Since, at a given g, the cusp anomalous dimension in the ABJM theory is half the one of SYM, Γ N =6 cusp (g) = (1/2) Γ N =4 cusp (g) = g + O(1) , (3.48) we conclude that the contribution per unit of g from a transverse mode is the same in the two theories. It implies on the flux tube side that the gluon measure µ F should be identical to its SYM counterpart at strong coupling, µ F | N =6 = µ F | N =4 + O(1/g) . (3.49) This stringy prediction is easily seen to be obeyed by our formula (2.16) for the gluon measure, after using that P̄ N =4 (u|u) = 1 + O(1/g), see, e.g., [14]. The analysis for the mass 2 boson is more delicate.
Like in the SYM theory [14, 95-97], this boson does not correspond to a fundamental flux tube excitation at finite coupling. It is closer to a virtual bound state that reaches the two-fermion threshold at strong coupling. As such it originates from the two-fermion integral (3.19). In Appendix A we show that the latter integral is half the corresponding one in the SYM theory at strong coupling, in perfect agreement with the minimal surface prediction. Finally, we can verify our pentagon transition for the gluons by considering the heptagonal Wilson loop. The pentagon encodes information about the TBA kernel K connecting neighbouring channels. The map is given by [12]

  P = 1 + \frac{1}{2g} K + \dots\,,   (3.50)

and as written it can be applied to both the SYM and ABJM theory. On the TBA side, because of the folding relation, the kernel connecting transverse bosons is simply obtained by averaging over the two transverse modes, K_{AdS_4} = K_{AdS_5} + \bar{K}_{AdS_5}.

Amplitudes

While the application of the pentagon paradigm to the Wilson loop expectation values, described in the previous section, should not be surprising at all, in this section we will extend it to the ABJM amplitudes. As we already alluded to in the introduction, the four-leg amplitude at lowest orders of the perturbative series is identical to the four-cusp bosonic Wilson loop, hinting at an MHV-like duality previously unveiled in the SYM case. However, for the case at hand, it stops right there and begs for a supersymmetric extension to account for non-MHV amplitudes. Since the \mathcal{N} = 6 supersymmetry is not maximal, the on-shell particle multiplet is not CPT self-conjugate and is packaged in two \mathcal{N} = 3 superfields,

  \Phi = \phi^4 + \theta^a \psi_a + \tfrac{1}{2}\epsilon_{abc}\theta^a\theta^b \phi^c + \tfrac{1}{3!}\epsilon_{abc}\theta^a\theta^b\theta^c \psi^4\,,   (4.1)

  \bar{\Psi} = \bar{\psi}_4 + \theta^a \bar{\phi}_a + \tfrac{1}{2}\epsilon_{abc}\theta^a\theta^b \bar{\psi}^c + \tfrac{1}{3!}\epsilon_{abc}\theta^a\theta^b\theta^c \bar{\phi}_4\,,   (4.2)

given by terminating expansions in the Grassmann variables \theta^a with a = 1, 2, 3, and where \epsilon_{abc} is the associated totally antisymmetric tensor.
In this superspace representation, the SU(4) symmetry of the Lagrangian is broken down: the original R-symmetry index is split up as A = (a, 4) and only the U(3) remains explicit. (Also, since the gauge fields are pure gauges, they do not emerge as asymptotic on-shell states.) Factoring out a super-delta function for super-momentum conservation and a Parke-Taylor-like prefactor [40], the n-leg super-amplitude reads A n (Ψ 1 Φ 2Ψ3 Φ 4 . . .Ψ n−1 Φ n ) = δ 3 (P )δ 6 (Q) − 12 23 . . . n1 × A n (θ) ,(4.3) where A n (θ) is an observable that it similar in spirit to the super-loop in the SYM theory. Several comments are in order with regards to this expression. First, the amplitude can have only an even number of external legs [98] as a consequence of alternating the gauge groups of the elementary fields along the color-ordered trace. Second, the n-point amplitude A n has Grassmann degree 3 2 n and thus the reduced amplitude A n inherits the residual degree 3 2 (n−4)/2 in θ's; it is N 1 2 (n−4) MHV in four-dimensional terminology. Finally, dividing out the bosonic loop W n from the "super-loop" A n should remove the divergences and return a dual-conformal invariant ratio R n = A n /W n . (4.4) However, despite its nice properties, this is not the object that naturally arises in the OPE. Instead, the OPE ratio is canonically defined by dividing by pentagons and multiplying by squares, as illustrated earlier, see equation (3.5). The "super-loop" W n , which we shall be analyzing below, is of this type. It can be built from R n and the bosonic OPE ratio function W n , W n = R n × W n . (4.5) The hexagon and heptagon W were discussed in the previous sections, however, only even n's play a role in the consideration that follows. Notice that to leading order at weak coupling W n = W n = 1 + O(g 2 ) and thus all these super-objects are identical at tree level and one loop, W n = A n = R n when g → 0. 
Contrary to the Wilson loop expectation values, for which the question is not entirely settled, the ABJM amplitudes are known to receive contributions from both odd and even loops, i.e., W n = ∞ =0 g W ( ) n ,(4.6) where both W ( =even) n and W ( =odd) n are non-vanishing. In fact, it was demonstrated by an explicit calculation [55][56][57][58] that all one-loop amplitudes are proportional to the shifted tree amplitudes, W (1) = π 2 W (0) shifted ,(4.7) up to an overall step-function of kinematical variables, with W shifted ∼ Φ 1Ψ2 Φ 3 . . .Ψ n tree . 9 They are thus rational functions. The two-loop amplitudes are functions of transcendentality two [49,50,59]. This is consistent with would-be dual conformal anomaly equations which would predict the presence of the BDS function accompanied by the cusp anomalous dimension in addition to a remainder function of the conformal cross ratios. This was verified by a two-loop calculation of the six-leg amplitude in [59]. Our focus in the subsequent discussion will be on the even part of the six-leg (hexagon) amplitude, leaving the flux-tube interpretation of the odd part to a future investigation. Hexagon data In order to carry out a systematic OPE analysis of the hexagon amplitude, we need to cast it in a right form and express it in terms of momentum twistors Z i and associated Grassmann variables η i , with i = 1, ..., 6 enumerating the legs. At tree level, we can use a Yangian invariant form, that was derived in [45], and latter recast in terms of momentum twistors in [43]. It is given by the sum of two Yangian invariants Y 1,2 , W (0) 6 = J (Y 1 + Y 2 ) ,(4.8) which correspond to the s = ± terms in I 2 , respectively, in Eq. (5.53) of [43]. It is accompanied by a Jacobian J , whose form we will recall shortly. The other linearly-independent combination of Y's determines the one-loop amplitude, which is expressed via the shifted tree amplitude by one site, i.e., W (0) 6,shifted = J (Y 1 − Y 2 ) . 
(4.9)

Both the Y's and J are given in terms of the momentum twistors of the six-leg amplitude. For application to the collinear limit, we shall parameterise them in a conventional way, see Appendix A of Ref. [13], using

  Z_1 = ( e^{\sigma - \frac{i}{2}\phi},\ 0,\ e^{\tau + \frac{i}{2}\phi},\ e^{-\tau + \frac{i}{2}\phi} )
  Z_2 = ( 1,\ 0,\ 0,\ 0 )
  Z_3 = ( -1,\ 0,\ 0,\ 1 )
  Z_4 = ( 0,\ 1,\ -1,\ 1 )
  Z_5 = ( 0,\ 1,\ 0,\ 0 )
  Z_6 = ( 0,\ e^{-\sigma - \frac{i}{2}\phi},\ e^{\tau + \frac{i}{2}\phi},\ 0 )   (4.10)

with \sigma, \tau, \phi being the 4d OPE coordinates introduced earlier. The reduction to 3d is obtained by imposing sp(4) \sim so(2,3) constraints on the twistors [43], namely,

  \langle\langle i, i+1 \rangle\rangle = 0\,, \quad \forall i\,,   (4.11)

where the bracket is a symplectic form,

  \langle\langle i, j \rangle\rangle = \Omega_{AB} Z_i^A Z_j^B\,, \qquad \Omega_{AB} = -\Omega_{BA}\,.   (4.12)

Imposing these constraints on the hexagon twistors (4.10) enforces e^{2i\phi} = 1 and fixes

  \Omega = \begin{pmatrix} 0 & +1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & +1 \\ 0 & 0 & -1 & 0 \end{pmatrix}\,,   (4.13)

up to an overall factor. One can then plug the above twistors, and the double-angle brackets constrained in this manner, into the Grassmannian formulae and expand the amplitudes in the collinear limit \tau \to \infty. (As we stated above, we work with the \phi = 0 parametrization in the following.) In particular, the Jacobian takes a very concise form,

  J = -\frac{\langle\langle 6,2 \rangle\rangle \langle\langle 4,6 \rangle\rangle}{\langle\langle 5,1 \rangle\rangle^2 \langle\langle 2,4 \rangle\rangle} = e^{\frac{1}{2}(\sigma+\tau)}\,.   (4.14)

The fractional twist that this factor implies is essential for the proper flux-tube interpretation of the scattering amplitudes. Besides constraining the kinematics, we also want to fix the R-charge and select 'good' components of the superamplitude from the point of view of the OPE. Since the underlying R-symmetry is SU(4) rather than SU(3) (which is manifest), there are multiple relations among the Grassmann components of the superamplitude (4.3). In fact, there are only two nontrivial amplitudes that we have to extract. We choose them to be the coefficients in front of \eta_1^3 and \eta_4^3,

  W_6 = \eta_1^3\, W_\psi + \eta_4^3\, W_\phi + \dots\,.
(4.15)

The reason is that this choice was natural in the SYM theory, where these amounted to replacing the incoming and outgoing vacua in the pentagon decomposition by charged vacua. We expect something similar here. Plugging the constrained twistors into the formulae of Ref. [43], we get the two amplitudes

  W_\phi = J \times \frac{e^{-\tau}(e^{\sigma} + 2e^{-\tau})}{(1+e^{-2\tau})(1+e^{2\sigma}+2e^{\sigma-\tau}+e^{-2\tau})}\,, \qquad W_\psi = J \times \frac{e^{-\tau}(1 - e^{\sigma-\tau} - e^{-2\tau})}{(1+e^{-2\tau})(1+e^{2\sigma}+2e^{\sigma-\tau}+e^{-2\tau})}\,.   (4.16)

Remarkably, these expressions coincide, up to the Jacobian, with the scalar and fermion components of the six-leg SYM amplitude at \phi = 0,

  W_\phi = J \times W^{(1144)}_{\mathcal{N}=4}\,, \qquad W_\psi = J \times W^{(1444)}_{\mathcal{N}=4}\,,   (4.17)

hence we dressed them with the 'boson' \phi and 'fermion' \psi subscripts, respectively. We now see the importance of the Jacobian: it adjusts the twists of what flows in the OPE channel. In the SYM theory all excitations have integer twists. Thanks to the Jacobian, in the ABJM theory all exchanged excitations have half-integer twists, implying that what flows has the quantum numbers of a spinon.

Tree level OPE

Let us proceed with the large-\tau expansion of the tree amplitudes. The leading-twist contributions at tree level are immediately found to be

  W_\phi = e^{-\tau/2}\, \frac{e^{3\sigma/2}}{1+e^{2\sigma}} + \dots\,, \qquad W_\psi = e^{-\tau/2}\, \frac{e^{\sigma/2}}{1+e^{2\sigma}} + \dots\,.   (4.18)

They exhibit a clear signature of the exchanged particle possessing twist 1/2. It must, therefore, be a single spinon. We can thus propose a flux-tube representation in the form of a single integral over the momentum of the spinon,

  W_{\phi/\psi} = \int \frac{du}{2\pi}\, e^{i p_Z(u)\sigma - E_Z(u)\tau}\, \nu_{\phi/\psi}(u) + \dots\,.   (4.19)

The weights \nu_{\phi/\psi}(u) for production/absorption of the spinon can immediately be read off from the above expressions at tree level by an inverse Fourier transformation, yielding

  \nu_\phi(u) = \nu_\psi(-u) = \tfrac{1}{2}\,\Gamma\big(\tfrac{1}{4} + \tfrac{i}{2}u\big)\,\Gamma\big(\tfrac{3}{4} - \tfrac{i}{2}u\big) = \frac{\pi}{2\cosh\frac{\pi}{2}(u + \frac{i}{2})}\,.   (4.20)

They are different from the measure (2.41) that we had obtained earlier, and this is the case for a good reason.
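As a quick standalone sanity check (ours, not part of the paper), one can verify numerically that the weight (4.20) indeed inverse-Fourier-transforms to the leading-twist coefficient in (4.18), i.e. that \int \frac{du}{2\pi} e^{iu\sigma}\,\nu_\phi(u) = e^{3\sigma/2}/(1+e^{2\sigma}):

```python
import cmath
import math

def nu_phi(u: float) -> complex:
    # Tree-level spinon weight, Eq. (4.20): pi / (2 cosh[(pi/2)(u + i/2)])
    return math.pi / (2 * cmath.cosh(math.pi / 2 * (u + 0.5j)))

def leading_twist_coeff(sigma: float, cutoff: float = 40.0, n: int = 80001) -> complex:
    # Rectangle-rule approximation of  int du/(2 pi) e^{i u sigma} nu_phi(u);
    # the integrand decays like e^{-pi |u|/2}, so a finite cutoff suffices.
    h = 2 * cutoff / (n - 1)
    total = 0j
    for k in range(n):
        u = -cutoff + k * h
        total += cmath.exp(1j * u * sigma) * nu_phi(u)
    return total * h / (2 * math.pi)

sigma = 0.3
numeric = leading_twist_coeff(sigma)
exact = math.exp(1.5 * sigma) / (1 + math.exp(2 * sigma))  # e^{3 sigma/2}/(1+e^{2 sigma})
print(numeric.real, exact)
```

The same quadrature with the coefficient e^{\sigma/2}/(1+e^{2\sigma}) checks the \psi-component, since \nu_\psi(u) = \nu_\phi(-u).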
The latter measure has bad square root singularities and thus cannot be the image of a tree level amplitude. However, we note that we can view these weights as the measure dressed with the smearing factors introduced earlier to describe the insertions of hypermultiplets along the bosonic Wilson loop. Namely, ν φ (u) ∼ N ψ (u)µ Z (u)N * φ (u) . (4.21) It is very suggestive that the spinon that is flowing on the 'loop' W φ is produced as a fermion ψ at the bottom and annihilated as a scalar φ at the top, -and inversely for the W ψ . This hybrid nature is apparently needed to get a proper 'propagator' with the singularity of a tree level amplitude. In comparison, the non-hybrid process N s (u)µ Z (u)N * s (u) ∼ Γ(s + iu 2 )Γ(s − iu 2 ) with s = 1/4 and s = 3/4 for boson and fermion, respectively, has square root singularities in position space, since it is a Fourier transform of a free field propagator ∼ (cosh σ) −2s for a field with the conformal spin s. (This relation is the square limit of equation (2.45) obtained by sending σ 1,2 → −∞ and σ = σ 2 − σ 1 held fixed.) Let us finally note that the smearing factors cancel out in the product ν φ (u)ν ψ (u) = √ 2π 2g µ 2 Z (u) + o(1) ,(4.22) which, therefore, appears closely related to the spinon measure (2.41), and hence to the scalar measure of the SYM theory, µ Φ (u) = πg cosh (πu) + O(g 3 ) . (4.23) Accordingly the effective measures ν φ,ψ can be seen as an alternative way of splitting the SYM scalar measure into two meromorphic factors. Equipped with the weights for the fundamental spinons, we can try to make sense of the higher twist corrections. High-twist means higher particle number and particle production is generically suppressed at weak coupling. The only known exception is when the particles are being produced as small fermions. These are known to be the only extra particles needed for scattering amplitudes in the SYM theory through one loop, see the loop counting rules and discussion in Refs. 
[15,17,19,71,72]. We expect the same to happen in the 3d theory meaning that all the higher twist corrections should arise from multiparticle states involving one spinon and an arbitrarily many small fermions attached to it, that is, states = n=0 ZΨ 2n ⊕ZΨ 2n+1 . (4.24) An estimate of the weights of genuine multiparticle states suggests that (4.24) is valid through two loops. 11 In the following, we demonstrate that the exact kinematical dependence of the tree amplitude can be recovered from the flux tube series (4.24). In the next subsection, we verify that it is still so at two loops. Let us address the first subleading term in order to demonstrate the structure and then generalize to an arbitrary number of small fermions. Take the φ component. The higher twist excitation has twist 3/2 and arises from a single small fermion forming a string with a parent spinonZ. We expect the integrand for the process to be given by iν ψ (u)µ Ψ (v) ((u − v) 2 + 9/4)P Z|Ψ (u|v)P Ψ|Z (v|u) ,(4.25) where we choose the ψ weight for the spinonZ. 12 This choice is natural in light of the 'bosonic' nature of the component. The rational part is the matrix part for the projection 4 ⊗ 6 → 4. The integration over the fermion boils down to picking up the residue v = u − 3i/2. Using the formula (3.27) for the transition between a small fermion and a spinon, we get the integrand for the twist-3/2 descendent of the spinon i(u − 3i/2)ν ψ (u) (4.26) A similar argument would apply to the ψ component, choosing this time −iν φ (u) for the measure of the parent spinon. One verifies that the effective measures so obtained match perfectly with the next-to-leading term in the tree amplitudes, W φ = · · · + e −3τ /2 2e σ/2 (1 + e 2σ ) 2 + . . . , W ψ = · · · − e −3τ /2 e 3σ/2 (3 + e 2σ ) (1 + e 2σ ) 2 + . . . . (4.27) We can generalize this story to strings of an arbitrary length, by carrying out the integral over the phase space of n small fermions coupled to a spinon. 
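As an aside, the weight-product relation (4.22) quoted earlier rests on an elementary identity for the tree-level weights (4.20): \nu_\phi(u)\,\nu_\psi(u) = \pi^2/(2\cosh \pi u), a pure sech of the same functional form as the SYM scalar measure (4.23). A short standalone numeric check (ours, not the paper's):

```python
import cmath
import math

def nu_phi(u: float) -> complex:
    # Tree-level spinon weight, Eq. (4.20)
    return math.pi / (2 * cmath.cosh(math.pi / 2 * (u + 0.5j)))

def nu_psi(u: float) -> complex:
    # Eq. (4.20): nu_psi(u) = nu_phi(-u)
    return nu_phi(-u)

u = 0.7
prod = nu_phi(u) * nu_psi(u)
sech_form = math.pi ** 2 / (2 * math.cosh(math.pi * u))  # pi^2 / (2 cosh(pi u))
print(prod.real, sech_form)
```

For real u one has \nu_\psi(u) = \nu_\phi(u)^*, so the product is simply |\nu_\phi(u)|^2 and is automatically real and positive.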
Focusing on the φ component, the all-twist flux-tube representation that we put to the test is W φ = n 0 du 2π ν φ (u) C d 2n v (2π) 2n µ Ψ (v)Π (2n) (u, v)e iP σ−Eτ P Z|Ψ (u|v)P Ψ|Z (v|u)P = Ψ|Ψ (v|v) + i n 0 du 2π ν ψ (u) C d 2n+1 v (2π) 2n+1 µ Ψ (v)Π (2n+1) (u, v)e iP σ−Eτ P Z|Ψ (u|v)P Ψ|Z (v|u)P = Ψ|Ψ (v|v) , (4.28) 11 The estimate follows from considering the available twist 3/2 states: ZF,ZΨ, Z 2Z ,Z 3 , with Ψ a large fermion. For the corresponding integrands, we expect, schematically, µZF ∼ ν φ µF P Z|F P F |Z ∼ µZ Ψ ∼ ν ψ µΨ PZ |Ψ P Ψ|Z ∼ g 4 , µ Z 2Z ∼ ν 2 φ ν ψ (P Z|Z PZ |Z ) 2 P 2 Z|Z ∼ µZ3 ∼ ν 3 ψ (P Z|Z ) 6 ∼ g 3 . 12 As said earlier we do not have much control on global phase factors. We put an i by hand in (4.25) because it is needed for matching the tree amplitude. This factor might in principle be absorbed in a rescaling of the ZΨ transition. where v denotes the set of fermion rapidities, with 2n and 2n + 1 elements in the first and second line, respectively, with E, P being the total energy and momentum, E = E Z (u) + i E Ψ (v i ) , P = p Z (u) + i p Ψ (v i ) ,(4.29) and where, to save space, we introduced notations for functions of sets, f (w) = i f (w i ) , f = (w, w) = i =j f (w i , w j ) . (4.30) The small fermion contour C goes anti-clockwise around all singularities in the lower half plane and Π (k) (u, v) denotes corresponding matrix parts. The latter are bulky rational functions of rapidity differences, which can be written explicitly using formulae in [21] or implicitly as a matrix-model-like integral over a set of SU(4) auxiliary rapidities [20]. The simplification that comes about here is that the small fermions can be understood as extending the latter matrix-model integral into the one for a system with OSp(4|2) symmetry. 
Namely, using the weak coupling expressions for the small fermion transition and measure yields the integral Π (k) OSp(4|2) (u) = C d k v k!(2π) k h(v) i<j (v i − v j ) 2 Π (k) (u, v) ,(4.31) where h(v) = (−1) k µ Ψ (v) P Z|Ψ (u|v)P Ψ|Z (v|u)P = Ψ|Ψ (v|v) i<j (v i − v j ) 2 = i v i × (1 + O(g 2 )) (4.32) is a symmetric function of the fermion rapidities. Importantly, the self-interaction of the fermions reduces to a Vandermonde determinant, as expected for a fermionic node. Combining this integral with the integral representation for Π (k) (u, v), see rules in [20], one immediately identifies in the pattern of the couplings the Dynkin diagram of OSp(4|2), as pictured in figure 8. One can then integrate out the nested integrals in (4.31) starting with the fermions. The excitation numbers, shown in figure 8, are indicating the number of integration variables per node and are such that a unique pattern of residues is allowed at every step. E.g., at the first step, the k fermions rapidities must bind below the k auxiliary roots w they couple to, C d k v k!(2π) k h(v) × i<j (v i − v j ) 2 i,j ((v i − w j ) 2 + 1/4) = h(w [−1] ) × i =j 1 (w i − w j ) 2 + 1 , (4.33) where w [−a] = {w i − ia/2}. The right-hand-side cancels a similar factor present in the self-interaction of the SU(2) rapidities w which are then left to interact by means of a Vandermonde determinant. Said differently, the rapidities w are fermionised and one can iterate the procedure. The steps are schematised in figure 9 and mimic the dualisation of nested Bethe ansatz equations for super-spin chains. At the end of the process, one is left with an effective integral for an SL(2) system, The punch line is that only 1 string remains given a k, namely, a length k string attached below the spinon at a distance 3i/2. This is quite remarkable given that the fermions here are in the vector representation, which offers a wider patterns of strings a priori. 
E.g., the two fermion integral discussed in Appendix A produces two types of strings that both contribute in the end. A similar pattern of strings, although more complicated, was found in the higher twist analysis of the tree and loop amplitudes in the SYM theory [71,72]. On the field theory side, we can view the length-k string as describing the twist k+1/2 descendent D_{12}^k \phi. The answer is simpler than in the 4d case since we do not need to include powers of D_{22} \sim \partial_\tau in the OPE. Owing to the equation of motion, D_{11}D_{22}\phi \sim D_{12}^2 \phi, and thus in the large spin background the twist-2 derivative D_{22} can be traded for D_{12}^2. (A similar argument works for the fermion.) Hence only one type of string is needed to span all the field excitations,

  \Pi^{(2n)}_{OSp(4|2)}(u) = \oint_C \frac{d^n w}{2^n n! (2\pi)^n}\, h(w^{[-2]} \cup w^{[-4]}) \prod_i \frac{1}{(u-w_i)^2 + 1/4} \prod_{i<j} \frac{(w_i-w_j)^2}{(w_i-w_j)^2+4} = \frac{1}{(2n)!} \prod_{j=1}^{2n} h(u^{[-1-2j]})\,.   (4.34)

In the end, once all strings are formed, we obtain the flux-tube representation of the tree-level amplitudes,

  W_{\phi/\psi} = e^{-\tau/2} \sum_{n=0}^{\infty} (-1)^n e^{-2n\tau} \int \frac{du}{2\pi}\, e^{iu\sigma} \Big[ \frac{(iu+\frac{3}{2})_{2n}}{(2n)!}\, \nu_{\phi/\psi}(u) \pm e^{-\tau}\, \frac{(iu+\frac{3}{2})_{2n+1}}{(2n+1)!}\, \nu_{\psi/\phi}(u) \Big]\,,   (4.35)

where the trace of the small fermions is encoded in the Pochhammer symbol

  (\alpha)_n = \Gamma(\alpha+n)/\Gamma(\alpha)\,.   (4.36)

Figure 9. Integrating out the small fermions yields an effective matrix integral for a new system with the matrix structure depicted here. One can keep going until one is left with a single node coupled to the spinon. The structure of the latter node is akin to the one for an SL(2) spin chain. The manipulations carried out here are reminiscent of the dualization procedure for super spin chains.

One easily verifies that the above series matches with the higher twist terms in (4.16). One can actually do better and resum the OPE, in the spirit of what was done in the SYM theory [71,72]. All one needs to note is the relation

  \sum_{n=0}^{\infty} (-1)^n \frac{(iu+\frac{3}{2})_{2n}}{(2n)!}
e^{-2n\tau} = \tfrac{1}{2}\Big[ (1 + ie^{-\tau})^{-3/2-iu} + (1 - ie^{-\tau})^{-3/2-iu} \Big]\,,   (4.37)

and an analogous one for odd n's, which merely yields a sign change in front of the second term in brackets and an overall factor of i. With their help, we can write the flux-tube series (4.35) as

  W_{\phi/\psi} = \frac{1}{2}\Big[ \frac{\nu_{\phi/\psi}(\sigma_+)}{(1+ie^{-\tau})^{3/2}} + \frac{\nu_{\phi/\psi}(\sigma_-)}{(1-ie^{-\tau})^{3/2}} \Big] \pm \frac{i}{2}\Big[ \frac{\nu_{\psi/\phi}(\sigma_+)}{(1+ie^{-\tau})^{3/2}} - \frac{\nu_{\psi/\phi}(\sigma_-)}{(1-ie^{-\tau})^{3/2}} \Big]\,,   (4.38)

where \nu_\phi(\sigma) and \nu_\psi(\sigma) = \nu_\phi(-\sigma) are the twist-1/2 seeds (4.18) and where

  \sigma_\pm = \sigma - \log(1 \pm ie^{-\tau})\,.   (4.39)

One easily verifies that these expressions agree with the tree amplitudes (4.16) at any \tau.

Loop level OPE

After this initial success, let us move on to the two-loop analysis of W_6. The two-loop ratio function R_6 was computed in [59], under the assumption of cut-constructibility of the amplitude from a set of dual-conformal invariant integrals, and was cast in the form of the tree amplitudes dressed by transcendentality-two functions of the conformal cross ratios u_j,

  R^{(2)}_6 = \frac{1}{2} W^{(0)}_6 \sum_{i=1}^{3} \Big[ -2\pi^2 + \mathrm{Li}_2(1-u_i) + \tfrac{1}{2}\log u_i \log u_{i+1} + (\arccos\sqrt{u_i})^2 \Big] + \frac{1}{2} W^{(0)}_{6,\mathrm{shifted}} \sum_{i=1}^{3} \arccos\sqrt{u_i}\, \log\frac{u_{i+1}}{u_{i+2}}\,,   (4.40)

with implied cyclicity u_{j+3} = u_j. Notice that we eliminated the BDS contribution from the result of [59] according to the definition (4.4). Translated to our language, it means that

  W^{(2)}_6 = R^{(2)}_6 + \tfrac{1}{2} W^{(0)}_6\, r_6\,,   (4.41)

according to (4.5) and (3.8). To evaluate it we need the shifted tree amplitudes. They happen to be directly related to our component amplitudes and read

  W_{\phi/\psi,\mathrm{shifted}} = \eta_{\phi/\psi}\, W_{\psi/\phi}\,,   (4.42)

up to a sign \eta_{\phi/\psi} = -/+\,. We can then expand the two-loop formula at leading twist, using the parameterization (3.4) for the hexagon cross ratios, and obtain the expressions given in Eqs. (4.43) and (4.44), which are such that W_\psi(\sigma) = W_\phi(-\sigma) + O(e^{-3\tau/2}). We immediately observe that these expressions contain \tau-enhanced terms.
At leading twist, the \phi-component reads

  W^{(2)}_\phi = \tau e^{-\tau/2}\, \frac{e^{3\sigma/2}}{1+e^{2\sigma}} \Big[ \log(e^{\sigma}+e^{-\sigma}) - i e^{-\sigma} \log\frac{e^{\sigma}+i}{e^{\sigma}-i} - \frac{\pi}{2} e^{-\sigma} \Big] + e^{-\tau/2}\, \frac{e^{3\sigma/2}}{1+e^{2\sigma}} \Big[ -\frac{7\pi^2}{12} + \frac{1}{4}\log\frac{e^{\sigma}+i}{e^{\sigma}-i}\,\log\frac{e^{-\sigma}+i}{e^{-\sigma}-i} + \dots \Big] + \dots\,.   (4.43)

The \tau-enhanced terms are leading discontinuities, which according to (4.18) should arise from the expansion of the spinon energy to the first order in g^2,

  -\tau e^{-\tau/2} \int \frac{du}{2\pi}\, e^{iu\sigma}\, \nu^{\mathrm{tree}}_{\phi/\psi}(u)\, E^{(2)}_Z(u)\,.   (4.45)

The two cases can be accommodated in the expression

  e^{-\tau/2} \int \frac{du}{2\pi}\, e^{iu\sigma}\, \nu^{\mathrm{tree}}_{\phi/\psi}(u) \Big[ i p^{(2)}_Z(u)\,\sigma + \delta\mu(u) \Big]\,,   (4.47)

with the same \delta\mu for both the \phi- and \psi-components. Furthermore, the most complicated part of the shift in the weights is given by half the shift of the scalar measure in SYM, namely,

  \delta\mu(u) = \tfrac{1}{2}\,\delta\mu_\Phi(u) - \pi^2 \mathrm{sech}^2(\pi u) - 2\zeta_2\,,   (4.48)

with [13]

  \delta\mu_\Phi(u) = 8\zeta_2 - 2\pi^2 \mathrm{sech}^2(\pi u) - 2 H_{iu-\frac{1}{2}}\, H_{-iu-\frac{1}{2}}\,,   (4.49)

where \zeta_2 = \zeta(2) = \pi^2/6 and H_z = H(z) = \psi(1+z) - \psi(1). The loop correction (4.48) might come from the smearing factors N_{\phi,\psi} and/or the measure \mu_Z. In the latter case, it would be the first perturbative evidence that our ansätze for the spinon pentagons must be corrected. E.g., if we assume that formula (4.22) is valid through two loops and discard possible odd-loop effects, then the first correction to f(u) in (2.35) is fixed by (4.48) to be

  f(u) = 1 + 2g^2 \big[ \pi^2 \mathrm{sech}^2(\pi u) + 2\zeta_2 \big] + \dots\,.   (4.50)

It can alternatively be written as a correction to \alpha = 1 + O(g^2) in (2.40). As done at tree level above, the knowledge of the lowest-twist components opens the way to an all-twist resummation of the OPE at two loops, with minor modifications. Let us begin with the terms linear in \tau. Here we simply need to note that the small-fermion energy (3.20) is not corrected at O(g^2). Hence, each term in (4.35) gets dressed with the same spinon energy E^{(2)}_Z, independently of the twist. We can therefore re-sum the OPE by plugging into (4.38) the leading discontinuities at leading twist and verify that they match with the terms \propto \tau \sim -\tfrac{1}{2}\log u_2 in (4.41).
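The resummations above lean on the binomial identity (4.37), which does all the combinatorial work. Since it is easy to get signs wrong in such rearrangements, here is a short standalone numerical confirmation (ours, not the paper's) at a sample point, with \alpha = 3/2 + iu and x = e^{-\tau}:

```python
import cmath

def even_pochhammer_sum(alpha: complex, x: float, nmax: int = 200) -> complex:
    # Left-hand side of (4.37): sum_{n>=0} (-1)^n (alpha)_{2n}/(2n)! x^{2n},
    # built via the term recursion
    # t_{n+1}/t_n = -(alpha+2n)(alpha+2n+1) x^2 / ((2n+1)(2n+2)).
    total = 0j
    term = 1 + 0j
    for n in range(nmax):
        total += term
        term *= -(alpha + 2 * n) * (alpha + 2 * n + 1) * x * x / ((2 * n + 1) * (2 * n + 2))
    return total

u, tau = 0.4, 0.7
alpha = 1.5 + 1j * u        # 3/2 + i u
x = cmath.exp(-tau).real    # e^{-tau} < 1, so the series converges
lhs = even_pochhammer_sum(alpha, x)
rhs = 0.5 * ((1 + 1j * x) ** (-alpha) + (1 - 1j * x) ** (-alpha))
print(abs(lhs - rhs))
```

The odd-n analogue, which enters the second term in brackets in (4.38), follows the same pattern up to a sign flip and an overall factor of i.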
We can also test the all-twist OPE formula (4.28) for the term in τ 0 . All we need to do to accomplish this is to keep the first sub-leading term in the perturbative expansion of the ZΨ and ΨΨ pentagons, µ Ψ (v) P Z|Ψ (u|v)P Ψ|Z (v|u) = − v − πg 2 tanh(πu) + O(g 4 ) , 1 P Ψ|Ψ (ǔ|v)P Ψ|Ψ (v|ǔ) = (u − v) 2 1 + 2g 2 /uv + O(g 4 ) , (4.51) and plug them into (4.31). The first term shifts the weight of each fermion, while the second one slightly corrects the pairwise interaction between fermions. Putting everything together and taking the string pattern into account gives W (2) φ/ψ | τ 0 = e −τ /2 ∞ n=0 e −nτ n! du 2π e iuσ ν (n) φ/ψ (u)(iu + 3 2 ) n (4.52) × δµ(u) + iσp φ/ψ = (−1) n ν φ/ψ and ν (2n+1) φ/ψ = ±(−1) n ν ψ/φ . The second line contains the two loop correction to the spinon measure (4.48) and the loop correction to the total momentum P , which comes from the spinon and the string of small fermions attached to it, see (4.29) and (3.20). Finally, the third line contains the shifts (4.51). To perform the resummation is not more difficult than for trees. In addition to Eq. (4.37), we merely need two more results Having reproduced the two-loop hexagon within the pentagon OPE, let us finish with a few predictions. Since presently we are unable to unambiguously find all-order expressions for all building blocks of the ABJM pentagon program, we will limit ourselves to the four loop leading discontinuities ∝ g 4 τ 2 , W (4) φ/ψ = τ 2 (ν τ 2 φ/ψ (σ)e −τ /2 + O(e −3τ /2 )) + O(τ ) . (4.54) They arise from the insertion of the second power of the spinon energy into the leading order flux-tube integrands. 
We find ν τ 2 φ (σ) = du 2π e iuσ ν tree φ (u)(E(2) Z (u)) 2 = e σ/2 e 2σ + 1 1 2 e σ σ 2 + 3 2 πσ + 1 4 ζ 2 e σ + e σ log 2 e σ + i e σ − i + e σ log 2 (e 2σ + 1) − (2e σ σ + π) log (e 2σ + 1) + i log e σ + i e σ − i (2σ − πe σ − 2 log (e 2σ + 1)) , The formulae can be upgraded to higher twists, such as to produce all terms in brackets in (4.54), by applying the recipe (4.38) to ν τ 2 φ/ψ (σ). Discussion With this work, we initiated a systematic application of the pentagon program to the N = 6 supersymmetric Chern-Simons theory with matter. Presently, we addressed pentagon transitions for all fundamental excitations propagating on the ABJM flux tube. While the twist-one fermions and gluons (as well as all bound states thereof) were fixed uniquely, the spinons could not be constrained in a complete fashion. However, it did not create an obstruction for the applications that we were interested in. Namely, in this paper we made a small step towards implementing the pentagon program for ABJM amplitudes. A success of this bold endeavor was not warranted as, contrary to their SYM counterparts, the dual description in terms of Wilson loops is not known and a naive supersymmetrization of the latter did not provide an adequate dual description for amplitudes with more than four legs. The fact that we could use the pentagon paradigm for their description within the same framework provides some new evidence for the existence of an observable that unifies both the ABJM Wilson loop and amplitudes under the same umbrella. It is unclear at this moment what it is, though. There are a number of avenues open for future considerations which, at the same time, will make our conclusions more precise. The one of paramount importance is dedicated efforts in higher loop calculations of scattering amplitudes. 
In particular, a two-loop eightleg analysis would provide explicit data to constraint the spinon pentagons directly as a function of the two rapidities, rather than just one through the spinon measure, as we performed in this study. This amplitude is within reach within the generalized unitarity framework since contributing graph topologies are the same as for the ABJM hexagon 13 . Another very valuable piece of data would come from three-loop hexagon since it would clarify the pentagon structure for the odd part of the amplitude. Having these at our disposal would put the framework on a firmer foundation, as it would allow one to point the way for proper implementation of the mirror axiom for the spinon excitations. Hopefully future studies along these lines will endow ABJM amplitudes with a dual Wilson-loop-like observable and will, therefore, make the application of pentagons fully justified. where Σ(u 1 , u 2 ) = 1 + O(1/g) at strong coupling in the regime of interest, (2g) 2 < u 2 1,2 . Naively, after rescaling the rapidities u 1,2 → 2gu 1,2 , the above integrand is of order O(g 0 ) and thus should not enter the computation of the minimal area A = O(g). This is overlooking that the integration contours get pinched between the lower and upper half plane poles. Deforming the contours and picking the residues lead to single particle like contributions that are of the right order O(g). In the case at hand we get two strings of fermions corresponding to the poles at u 1 − u 2 = 2i and u 1 − u 2 = i in (A.1). These strings are degenerate at strong coupling and both behave like a mass 2 boson. The sum of their residues yields 1 2 du for the measure of their center of mass. In comparison, the two fermion integral in the SYM has the structure 4du 1 du 2 ((u 1 − u 2 ) 2 + 4) Σ (u 1 , u 2 ) , (A.2) where Σ = 1 + . . . and thus offers a single string at u 1 − u 2 = 2i with unit residue du. 
From there it follows that per unit of g the two-fermion contribution to the minimal area in AdS_4 is half the one for AdS_5, in agreement with the string theory prediction.

Figure 1. Quiver diagram for the ABJM theory.

Figure 2. The flux tube excitations of the N = 4 and N = 6 theory can be aligned along the nodes of two infinite Dynkin diagrams of A and D type, respectively. The coloring goes along with the mass E(p = 0) of the excitation: the heavier, the darker. On the left panel, we have at the center the 6 scalar fields of the SYM theory, surrounded by the 4 + 4 twist-1 gaugino fields (light grey blobs). The darkest grey blobs stand for the gluonic modes: they carry no R-charge but come in two infinite families of bound states (of positive and negative helicities, respectively) with twist a = 1, 2, ... . The right panel shows the corresponding picture for the ABJM theory. There is a single infinite tail of gluons F_{a=1,2,...} in that case. The light grey blob on the fork is for the fermions \Psi_{AB} in the 6 of SU(4). The lightest modes are on the fork's extremities: they are twist-1/2 spinons Z_A and anti-spinons \bar{Z}^A, in the 4 and \bar{4} of SU(4), respectively.

Figure 3. Left panel: cartoon for the pentagon transition P(u|v) between a flux tube excitation smeared with rapidity u at the bottom and one with a rapidity v at the top. Right panel: under the inverse mirror rotation -\gamma : u \to u^{-\gamma}, an excitation is moved anticlockwise to the neighbouring edge. The result is a pentagon transition with bottom and top being exchanged, P(u^{-\gamma}|v) = P(v|u).

Figure 4. Tree level representation of the pentagon transition for matter fields, here taken to be scalars, and its first loop corrections. The two colors (red and blue) refer to the two gauge fields of the ABJM theory. Analyzing the first loop corrections could help understanding how to upgrade pentagon transitions to higher orders.

Figure 5.
Decomposition of hexagon and heptagon Wilson loops into overlapping sequences of pentagons. For the hexagon, we have two pentagons overlapping on one square and correspondingly only one complete sum over intermediate states is needed. The heptagon has an extra hat, one more pentagon and middle square, and two sums are needed.

Another possible choice is \phi = \pi. We shall not consider it here.

Figure 6. The OPE cut of the two-loop contribution to the bosonic Wilson loop reveals a pair of matter fields, coming from the bubble correction to the Chern-Simons propagator. Its twist-1 component can take the form of a gluonic flux tube excitation F or of a spinon-anti-spinon pair Z\bar{Z}. They both start contributing at order O(g^2) at weak coupling, according to our pentagon conjectures.

...(3.22) it takes out six powers of g and returns an integrand of order O(g^2/\alpha^4).

Figure 7. Cartoon of the matrix structure of the Z_A\bar{Z}_B \to Z_C\bar{Z}_D transitions. There are two structures for the two possible ways of contracting the indices.

Taking (3.50) into account, it agrees with the doubling relation (2.12). Clearly, the doubling relation (2.12) is the most natural all-order uplift of this kernel averaging procedure that relates AdS_4 to AdS_5. This observation concludes the tests of our formulae on the Wilson loop side.

Figure 8. Dynkin diagrams encoding the matrix integral for descendents of a spinon Z \sim \phi and anti-spinon \bar{Z} \sim \psi. The top part is the Dynkin diagram for the SU(4) degrees of freedom, with the labels indicating the numbers of corresponding auxiliary roots. They couple to the matter content represented by the bottom part. The small fermions are represented by the crossed nodes and the spinons by the boxes. The excitation numbers are chosen such that the overall charge matches the quantum number of \phi. We can view the small fermions as associated to a fermionic generator, extending the symmetry to OSp(4|2).

...one for the odd cases.
$$\ldots - \sigma \log\frac{e^{\sigma}+i}{e^{\sigma}-i}\,\log\!\left(e^{\sigma}+e^{-\sigma}\right) + \frac{1}{4}\pi e^{-\sigma}\log\!\left(e^{2\sigma}+1\right) + \ldots\,,$$
$$= \tau e^{-\tau/2} e^{\sigma/2}\left[1 + e^{2\sigma}\log\!\left(e^{\sigma}+e^{-\sigma}\right) + i e^{\sigma}\log\frac{e^{\sigma}+i}{e^{\sigma}-i}\,\log\!\left(e^{\sigma}+e^{-\sigma}\right) - \frac{1}{4}\pi e^{\sigma}\log\!\left(e^{2\sigma}+1\right)\right] + \ldots\,,$$

The flux-tube integral is easily verified to reproduce the first lines in (4.43) and (4.44) using the expression (2.5) for the two-loop spinon energy E^{(2)}_Z. The remaining terms in Eqs. (4.43) and (4.44) have a number of origins. Some of them stem from the correction p_Z(u) to the spinon momentum, see (2.5), and some from the correction to the spinon weights, ν_{φ/ψ}(u) = ν^{tree}_{φ/ψ}(u)(1 + O(g^2)). (4.46)

... ones for odd n with corresponding changes. The Fourier transform in rapidity is performed by means of the known leading-twist expressions at two-loop order, (4.43) and (4.44). The resulting two-loop expression coincides with the corresponding components of Eq. (4.41) with (4.40).

Table 1. Flux tube excitations in 4d and 3d and their correspondence.

Footnotes: This procedure can be visualized by folding the Dynkin diagram of SYM (left panel of figure 2) on itself through the middle node. There is no backward FF scattering in this theory, so S_{FF} is just a transmission phase. To be precise, our ansatz is i × (P_{Ψ|Ψ} P_{Ψ|Ψ}), with P_{Ψ|Ψ} and P_{Ψ|Ψ} the SYM pentagons listed in [19]; the rescaling by an i allows us to get a real measure μ_Ψ. We find that FF, FZZ, (ZZ)^2, Z^4 scale as g^8, g^8, g^8/α^8, g^8/α^16, respectively. The step function vanishes for n = 4 and the one-loop four-leg amplitude is identically zero [98]. J differs slightly from the Jacobian J_{234} given in Eq. (5.38) of Ref. [43], since we stripped out the Parke-Taylor prefactor −⟨12⟩⟨23⟩...⟨61⟩ from the amplitude, see Eq. (4.3). We would like to thank Simon Caron-Huot for correspondence on this issue.

Acknowledgments

We thank Simon Caron-Huot, Amit Sever and Pedro Vieira for discussions. The research of A.B. was supported by the U.S. National Science Foundation under the grant PHY-1713125. The research of B.B.
was supported by the French National Agency for Research grant ANR-17-CE31-0001-02.

A Fermions at strong coupling

In this appendix we discuss the two-fermion integral at strong coupling and compare its prediction with the string theory answer for the mass-2 boson. We refer the reader to [14, 96, 97] for detailed analysis in the SYM theory. All we need to know is that the two-fermion integrand (3.19) can be written as

References

[1] L. F. Alday and J. M. Maldacena, Gluon scattering amplitudes at strong coupling, JHEP 06 (2007) 064, [0705.0303].
[2] J. M. Drummond, G. P. Korchemsky and E. Sokatchev, Conformal properties of four-gluon planar amplitudes and Wilson loops, Nucl. Phys. B795 (2008) 385-408, [0707.0243].
[3] A. Brandhuber, P. Heslop and G. Travaglini, MHV amplitudes in N=4 super Yang-Mills and Wilson loops, Nucl. Phys. B794 (2008) 231-243, [0707.1153].
[4] J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, On planar gluon amplitudes/Wilson loops duality, Nucl. Phys. B795 (2008) 52-68, [0709.2368].
[5] S. Caron-Huot, Notes on the scattering amplitude / Wilson loop duality, JHEP 07 (2011) 058, [1010.1167].
[6] J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, Conformal Ward identities for Wilson loops and a test of the duality with gluon amplitudes, Nucl. Phys. B826 (2010) 337-364, [0712.1223].
[7] J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, Dual superconformal symmetry of scattering amplitudes in N=4 super-Yang-Mills theory, Nucl. Phys. B828 (2010) 317-374, [0807.1095].
[8] J. M. Drummond, J. M. Henn and J. Plefka, Yangian symmetry of scattering amplitudes in N=4 super Yang-Mills theory, JHEP 05 (2009) 046, [0902.2987].
[9] J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, Hexagon Wilson loop = six-gluon MHV amplitude, Nucl. Phys. B815 (2009) 142-173, [0803.1466].
[10] Z. Bern, L. J. Dixon and V. A. Smirnov, Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond, Phys. Rev. D72 (2005) 085001, [hep-th/0505205].
[11] L. F. Alday, D. Gaiotto, J. Maldacena, A. Sever and P. Vieira, An Operator Product Expansion for Polygonal null Wilson Loops, JHEP 04 (2011) 088, [1006.2788].
[12] B. Basso, A. Sever and P. Vieira, Spacetime and Flux Tube S-Matrices at Finite Coupling for N=4 Supersymmetric Yang-Mills Theory, Phys. Rev. Lett. 111 (2013) 091602, [1303.1396].
[13] B. Basso, A. Sever and P. Vieira, Space-time S-matrix and Flux tube S-matrix II. Extracting and Matching Data, JHEP 01 (2014) 008, [1306.2058].
[14] B. Basso, A. Sever and P. Vieira, Space-time S-matrix and Flux-tube S-matrix III. The two-particle contributions, JHEP 08 (2014) 085, [1402.3307].
[15] A. V. Belitsky, Nonsinglet pentagons and NMHV amplitudes, Nucl. Phys. B896 (2015) 493-554, [1407.2853].
[16] B. Basso, A. Sever and P. Vieira, Space-time S-matrix and Flux-tube S-matrix IV. Gluons and Fusion, JHEP 09 (2014) 149, [1407.1736].
[17] A. V. Belitsky, Fermionic pentagons and NMHV hexagon, Nucl. Phys. B894 (2015) 108-135, [1410.2534].
[18] B. Basso, J. Caetano, L. Cordova, A. Sever and P. Vieira, OPE for all Helicity Amplitudes, JHEP 08 (2015) 018, [1412.1132].
[19] B. Basso, J. Caetano, L. Cordova, A. Sever and P. Vieira, OPE for all Helicity Amplitudes II. Form Factors and Data Analysis, JHEP 12 (2015) 088, [1508.02987].
[20] B. Basso, A. Sever and P. Vieira, Hexagonal Wilson loops in planar N = 4 SYM theory at finite coupling, J. Phys. A49 (2016) 41LT01, [1508.03045].
[21] A. V. Belitsky, Matrix pentagons, Nucl. Phys. B923 (2017) 588-607, [1607.06555].
[22] A. B. Goncharov, M. Spradlin, C. Vergu and A. Volovich, Classical Polylogarithms for Amplitudes and Wilson Loops, Phys. Rev. Lett. 105 (2010) 151605, [1006.5703].
[23] L. J. Dixon, J. M. Drummond and J. M. Henn, Bootstrapping the three-loop hexagon, JHEP 11 (2011) 023, [1108.4461].
[24] L. J. Dixon, J. M. Drummond and J. M. Henn, Analytic result for the two-loop six-point NMHV amplitude in N=4 super Yang-Mills theory, JHEP 01 (2012) 024, [1111.1704].
[25] L. J. Dixon, J. M. Drummond, M. von Hippel and J. Pennington, Hexagon functions and the three-loop remainder function, JHEP 12 (2013) 049, [1308.2276].
[26] L. J. Dixon, J. M. Drummond, C. Duhr and J. Pennington, The four-loop remainder function and multi-Regge behavior at NNLLA in planar N = 4 super-Yang-Mills theory, JHEP 06 (2014) 116, [1402.3300].
[27] L. J. Dixon and M. von Hippel, Bootstrapping an NMHV amplitude through three loops, JHEP 10 (2014) 065, [1408.1505].
[28] J. M. Drummond, G. Papathanasiou and M. Spradlin, A Symbol of Uniqueness: The Cluster Bootstrap for the 3-Loop MHV Heptagon, JHEP 03 (2015) 072, [1412.3763].
[29] L. J. Dixon, M. von Hippel and A. J. McLeod, The four-loop six-gluon NMHV ratio function, JHEP 01 (2016) 053, [1509.08127].
[30] S. Caron-Huot, L. J. Dixon, A. McLeod and M. von Hippel, Bootstrapping a Five-Loop Amplitude Using Steinmann Relations, Phys. Rev. Lett. 117 (2016) 241601, [1609.00669].
[31] L. J. Dixon, J. Drummond, T. Harrington, A. J. McLeod, G. Papathanasiou and M. Spradlin, Heptagons from the Steinmann Cluster Bootstrap, JHEP 02 (2017) 137, [1612.08976].
[32] L. F. Alday, D. Gaiotto and J. Maldacena, Thermodynamic Bubble Ansatz, JHEP 09 (2011) 032, [0911.4708].
[33] L. F. Alday, J. Maldacena, A. Sever and P. Vieira, Y-system for Scattering Amplitudes, J. Phys. A43 (2010) 485401, [1002.2459].
[34] E. Ó. Colgáin and A. Pittelli, A Requiem for AdS4 x CP3 Fermionic self-T-duality, Phys. Rev. D94 (2016) 106006, [1609.03254].
[35] I. Adam, A. Dekel and Y. Oz, On the fermionic T-duality of the AdS4 x CP3 sigma-model, JHEP 10 (2010) 110, [1008.0649].
[36] I. Bakhmatov, On AdS4 x CP3 T-duality, Nucl. Phys. B847 (2011) 38-53, [1011.0985].
[37] D. Sorokin and L. Wulff, Peculiarities of String Theory on AdS4 x CP3, Fortsch. Phys. 59 (2011) 775-784, [1101.3777].
[38] E. Ó Colgáin, Fermionic T-duality: A snapshot review, Int. J. Mod. Phys. A27 (2012) 1230032, [1210.5588].
[39] T. Bargheer, F. Loebbert and C. Meneghelli, Symmetries of Tree-level Scattering Amplitudes in N=6 Superconformal Chern-Simons Theory, Phys. Rev. D82 (2010) 045016, [1003.6120].
[40] Y.-t. Huang and A. E. Lipstein, Dual Superconformal Symmetry of N=6 Chern-Simons Theory, JHEP 11 (2010) 076, [1008.0041].
[41] S. Lee, Yangian Invariant Scattering Amplitudes in Supersymmetric Chern-Simons Theory, Phys. Rev. Lett. 105 (2010) 151603, [1007.4772].
[42] Y.-T. Huang and C. Wen, ABJM amplitudes and the positive orthogonal grassmannian, JHEP 02 (2014) 104, [1309.3252].
[43] H. Elvang, Y.-t. Huang, C. Keeler, T. Lam, T. M. Olson, S. B. Roland et al., Grassmannians for scattering amplitudes in 4d N = 4 SYM and 3d ABJM, JHEP 12 (2014) 181, [1410.0621].
[44] N. Arkani-Hamed, F. Cachazo, C. Cheung and J. Kaplan, A Duality For The S Matrix, JHEP 03 (2010) 020, [0907.5418].
[45] D. Gang, Y.-t. Huang, E. Koh, S. Lee and A. E. Lipstein, Tree-level Recursion Relation and Dual Superconformal Symmetry of the ABJM Theory, JHEP 03 (2011) 116, [1012.5032].
[46] Z. Bern, L. J. Dixon and D. A. Kosower, Progress in one loop QCD computations, Ann. Rev. Nucl. Part. Sci. 46 (1996) 109-148, [hep-ph/9602280].
[47] J. M. Drummond, J. Henn, V. A. Smirnov and E. Sokatchev, Magic identities for conformal four-point integrals, JHEP 01 (2007) 064, [hep-th/0607160].
[48] Z. Bern, L. J. Dixon, D. A. Kosower, R. Roiban, M. Spradlin, C. Vergu et al., The Two-Loop Six-Gluon MHV Amplitude in Maximally Supersymmetric Yang-Mills Theory, Phys. Rev. D78 (2008) 045007, [0803.1465].
[49] W.-M. Chen and Y.-t. Huang, Dualities for Loop Amplitudes of N=6 Chern-Simons Matter Theory, JHEP 11 (2011) 057, [1107.2710].
[50] M. S. Bianchi, M. Leoni, A. Mauri, S. Penati and A. Santambrogio, Scattering Amplitudes/Wilson Loop Duality In ABJM Theory, JHEP 01 (2012) 056, [1107.3139].
[51] J. M. Henn, J. Plefka and K. Wiegandt, Light-like polygonal Wilson loops in 3d Chern-Simons and ABJM theory, JHEP 08 (2010) 032, [1004.0226].
[52] M. S. Bianchi and M. Leoni, On the ABJM four-point amplitude at three loops and BDS exponentiation, JHEP 11 (2014) 077, [1403.3398].
[53] N. Gromov and P. Vieira, The all loop AdS4/CFT3 Bethe ansatz, JHEP 01 (2009) 016, [0807.0777].
[54] M. Rosso and C. Vergu, Wilson loops in N=6 superspace for ABJM theory, JHEP 06 (2014) 176, [1403.2336].
[55] M. S. Bianchi, M. Leoni, A. Mauri, S. Penati and A. Santambrogio, One Loop Amplitudes In ABJM, JHEP 07 (2012) 029, [1204.4407].
[56] T. Bargheer, N. Beisert, F. Loebbert and T. McLoughlin, Conformal Anomaly for Amplitudes in N = 6 Superconformal Chern-Simons Theory, J. Phys. A45 (2012) 475402, [1204.4406].
[57] A. Brandhuber, G. Travaglini and C. Wen, A note on amplitudes in N=6 superconformal Chern-Simons theory, JHEP 07 (2012) 160, [1205.6705].
[58] A. Brandhuber, G. Travaglini and C. Wen, All one-loop amplitudes in N=6 superconformal Chern-Simons theory, JHEP 10 (2012) 145, [1207.6908].
[59] S. Caron-Huot and Y.-t. Huang, The two-loop six-point amplitude in ABJM theory, JHEP 03 (2013) 075, [1210.4226].
[60] J. A. Minahan, O. Ohlsson Sax and C. Sieg, Magnon dispersion to four loops in the ABJM and ABJ models, J. Phys. A43 (2010) 275402, [0908.2463].
[61] M. Leoni, A. Mauri, J. A. Minahan, O. Ohlsson Sax, A. Santambrogio, C. Sieg et al., Superspace calculation of the four-loop spectrum in N=6 supersymmetric Chern-Simons theories, JHEP 12 (2010) 074, [1010.1756].
[62] N. Gromov and G. Sizov, Exact Slope and Interpolating Functions in N=6 Supersymmetric Chern-Simons Theory, Phys. Rev. Lett. 113 (2014) 121601, [1403.1894].
[63] T. McLoughlin, R. Roiban and A. A. Tseytlin, Quantum spinning strings in AdS(4) x CP**3: Testing the Bethe Ansatz proposal, JHEP 11 (2008) 069, [0809.4038].
[64] L. F. Alday and J. M. Maldacena, Comments on operators with large spin, JHEP 11 (2007) 019, [0708.0672].
[65] A. V. Belitsky, OPE for null Wilson loops and open spin chains, Phys. Lett. B709 (2012) 280-284, [1110.1063].
[66] B. Basso and A. Rej, Bethe ansatze for GKP strings, Nucl. Phys. B879 (2014) 162-215, [1306.1741].
[67] B. Basso, Exciting the GKP string at any coupling, Nucl. Phys. B857 (2012) 254-334, [1010.5237].
[68] S. Frolov and A. A. Tseytlin, Semiclassical quantization of rotating superstring in AdS(5) x S**5, JHEP 06 (2002) 007, [hep-th/0204226].
[69] T. McLoughlin and R. Roiban, Spinning strings at one-loop in AdS(4) x P**3, JHEP 12 (2008) 101, [0807.3965].
[70] L. F. Alday, G. Arutyunov and D. Bykov, Semiclassical Quantization of Spinning Strings in AdS(4) x CP**3, JHEP 11 (2008) 089, [0807.4400].
[71] L. Cordova, Hexagon POPE: effective particles and tree level resummation, JHEP 01 (2017) 051, [1606.00423].
[72] H. T. Lam and M. von Hippel, Resumming the POPE at One Loop, JHEP 12 (2016) 011, [1608.08116].
[73] B. Basso and A. Rej, On the integrability of two-dimensional models with U(1)xSU(N) symmetry, Nucl. Phys. B866 (2013) 337-377, [1207.0413].
[74] D. Bykov, The worldsheet low-energy limit of the AdS4 x CP3 superstring, Nucl. Phys. B838 (2010) 47-74, [1003.2199].
[75] D. Volin, String hypothesis for gl(n|m) spin chains: a particle/hole democracy, Lett. Math. Phys. 102 (2012) 1-29, [1012.3454].
[76] B. Basso and A. V. Belitsky, Luescher formula for GKP string, Nucl. Phys. B860 (2012) 1-86, [1108.0999].
[77] N. Dorey and M. Losi, Spiky Strings and Giant Holes, JHEP 12 (2010) 014, [1008.5096].
[78] N. Dorey and P. Zhao, Scattering of Giant Holes, JHEP 08 (2011) 134, [1105.4596].
[79] D. Fioravanti, S. Piscaglia and M. Rossi, On the scattering over the GKP vacuum, Phys. Lett. B728 (2014) 288-295, [1306.2292].
[80] L. Bianchi and M. S. Bianchi, Worldsheet scattering for the GKP string, JHEP 11 (2015) 178, [1508.07331].
[81] L. Bianchi and M. S. Bianchi, On the scattering of gluons in the GKP string, JHEP 02 (2016) 146, [1511.01091].
[82] A. V. Belitsky, S. E. Derkachov and A. N. Manashov, Quantum mechanics of null polygonal Wilson loops, Nucl. Phys. B882 (2014) 303-351, [1401.7307].
[83] A. V. Belitsky, On factorization of multiparticle pentagons, Nucl. Phys. B897 (2015) 346-373, [1501.06860].
[84] M. S. Bianchi, M. Leoni, A. Mauri, S. Penati, C. Ratti and A. Santambrogio, From Correlators to Wilson Loops in Chern-Simons Matter Theories, JHEP 06 (2011) 118, [1103.3675].
[85] K. Wiegandt, On the amplitude/Wilson loop duality in N=6 Chern-Simons theory, Nucl. Phys. Proc. Suppl. 216 (2011) 273-275.
[86] K. Wiegandt, Equivalence of Wilson Loops in N = 6 super Chern-Simons matter theory and N = 4 SYM Theory, Phys. Rev. D84 (2011) 126015, [1110.1373].
[87] B. Basso, A. Sever and P. Vieira, Collinear Limit of Scattering Amplitudes at Strong Coupling, Phys. Rev. Lett. 113 (2014) 261604, [1405.6350].
[88] D. Gaiotto, J. Maldacena, A. Sever and P. Vieira, Pulling the straps of polygons, JHEP 12 (2011) 011, [1102.0062].
[89] Y. Hatsuda, Wilson loop OPE, analytic continuation and multi-Regge limit, JHEP 10 (2014) 38, [1404.6506].
[90] J. M. Drummond and G. Papathanasiou, Hexagon OPE Resummation and Multi-Regge Kinematics, JHEP 02 (2016) 185, [1507.08982].
[91] D. Gaiotto, J. Maldacena, A. Sever and P. Vieira, Bootstrapping Null Polygon Wilson Loops, JHEP 03 (2011) 092, [1010.5009].
[92] A. Sever and P. Vieira, Multichannel Conformal Blocks for Polygon Wilson Loops, JHEP 01 (2012) 070, [1105.5748].
[93] K. Ito, Y. Satoh and J. Suzuki, MHV amplitudes at strong coupling and linearized TBA equations, JHEP 08 (2018) 002, [1805.07556].
[94] Y. Hatsuda, K. Ito, K. Sakai and Y. Satoh, Thermodynamic Bethe Ansatz Equations for Minimal Surfaces in AdS3, JHEP 04 (2010) 108, [1002.2941].
[95] K. Zarembo and S. Zieme, Fine Structure of String Spectrum in AdS5 x S5, JETP Lett. 95 (2012) 219-223, [1110.6146].
[96] A. Bonini, D. Fioravanti, S. Piscaglia and M. Rossi, Strong Wilson polygons from the lodge of free and bound mesons, JHEP 04 (2016) 029, [1511.05851].
[97] A. Bonini, D. Fioravanti, S. Piscaglia and M. Rossi, Fermions and scalars in N = 4 Wilson loops at strong coupling and beyond, [1807.09743].
[98] A. Agarwal, N. Beisert and T. McLoughlin, Scattering in Mass-Deformed N>=4 Chern-Simons Models, JHEP 06 (2009) 045, [0812.3367].
ON OPTIMAL BLOCK RESAMPLING FOR GAUSSIAN-SUBORDINATED LONG-RANGE DEPENDENT PROCESSES

Qihao Zhang ([email protected]), Department of Statistics, Iowa State University
Soumendra N. Lahiri ([email protected]), Department of Mathematics and Statistics, Washington University in St. Louis
Daniel J. Nordman ([email protected]), Department of Statistics, Iowa State University
Abstract. Block-based resampling estimators have been intensively investigated for weakly dependent time processes, which has helped to inform implementation (e.g., best block sizes). However, little is known about resampling performance and block sizes under strong or long-range dependence. To establish guideposts in block selection, we consider a broad class of strongly dependent time processes, formed by a transformation of a stationary long-memory Gaussian series, and examine block-based resampling estimators for the variance of the prototypical sample mean; extensions to general statistical functionals are also considered. Unlike weak dependence, the properties of resampling estimators under strong dependence are shown to depend intricately on the nature of non-linearity in the time series (beyond Hermite ranks), in addition to the long-memory coefficient and block size. Additionally, the intuition has often been that optimal block sizes should be larger under strong dependence (say, O(n^{1/2}) for a sample size n) than the optimal order O(n^{1/3}) known under weak dependence. This intuition turns out to be largely incorrect, though a block order O(n^{1/2}) may be reasonable (and even optimal) in many cases, owing to non-linearity in a long-memory time series. While optimal block sizes are more complex under long-range dependence than under short-range dependence, we provide a consistent data-driven rule for block selection, and numerical studies illustrate that the guides for block selection perform well in other block-based problems with long-memory time series, such as distribution estimation and strategies for testing Hermite rank.

MSC2020 subject classifications: Primary 62G09; secondary 62G20, 62M10.
doi: 10.1214/22-aos2242
arXiv: 2208.01713 (https://export.arxiv.org/pdf/2208.01713v1.pdf)
ON OPTIMAL BLOCK RESAMPLING FOR GAUSSIAN-SUBORDINATED LONG-RANGE DEPENDENT PROCESSES Qihao Zhang [email protected] Department of Statistics Iowa State University Soumendra N Lahiri [email protected] Department of Mathematics and Statistics Washington University in St. Louis Daniel J Nordman [email protected] Department of Statistics Iowa State University ON OPTIMAL BLOCK RESAMPLING FOR GAUSSIAN-SUBORDINATED LONG-RANGE DEPENDENT PROCESSES Block-based resampling estimators have been intensively investigated for weakly dependent time processes, which has helped to inform implementation (e.g., best block sizes). However, little is known about resampling performance and block sizes under strong or long-range dependence. To establish guideposts in block selection, we consider a broad class of strongly dependent time processes, formed by a transformation of a stationary longmemory Gaussian series, and examine block-based resampling estimators for the variance of the prototypical sample mean; extensions to general statistical functionals are also considered. Unlike weak dependence, the properties of resampling estimators under strong dependence are shown to depend intricately on the nature of non-linearity in the time series (beyond Hermite ranks) in addition the long-memory coefficient and block size. Additionally, the intuition has often been that optimal block sizes should be larger under strong dependence (say O(n 1/2 ) for a sample size n) than the optimal order O(n 1/3 ) known under weak dependence. This intuition turns out to be largely incorrect, though a block order O(n 1/2 ) may be reasonable (and even optimal) in many cases, owing to non-linearity in a long-memory time series. 
1. Introduction. Block-based resampling methods provide useful nonparametric approximations with statistics from dependent data, where data blocks help to capture time dependence (cf. [27]). Considering a stretch from a stationary series X_1, ..., X_n, a prototypical problem involves estimating the standard error of the sample mean X̄_n = Σ_{t=1}^n X_t / n. Subsampling [13,21,40] and the block bootstrap [28,33] use sample averages X̄_{i,ℓ} computed over length-ℓ < n data blocks {(X_i, ..., X_{i+ℓ−1})}_{i=1}^{n−ℓ+1} within the data X_1, ..., X_n; in both resampling approaches, the empirical variance of block averages, say σ̂_ℓ², approximates the block variance σ_ℓ² ≡ Var(X̄_ℓ). If the series {X_t} exhibits short-range dependence (SRD) with quickly decaying covariances r(k) ≡ Cov(X_0, X_k) → 0 as k → ∞ (i.e., Σ_{k=1}^∞ |r(k)| < ∞), then the target variance converges, nσ_n² = nVar(X̄_n) → C > 0 as n → ∞, and the scaled estimator ℓσ̂_ℓ² is consistent for nVar(X̄_n) under mild conditions (ℓ^{−1} + ℓ/n → 0 as n → ∞) [42]. Block-based variance estimators have further history in time series analysis (cf. overview in [38]), including batch means estimation in Markov chain Monte Carlo. Particularly for SRD, much research has focused on explaining properties of block-based estimators σ̂_ℓ (cf. [17,28,30,31,42,46]). In turn, these resampling studies have advanced understanding of best block sizes (e.g., O(n^{1/3})) and implementation under SRD [12,20,32,37,41].
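To make the block-average variance estimator concrete, here is a minimal Python sketch for the SRD case described above; the AR(1) example, sample size, and block-order choice n^{1/3} are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def block_var_estimator(x, l):
    """l * (empirical variance of overlapping length-l block means):
    a basic block estimator of n*Var(mean of X_1..X_n) under SRD."""
    block_means = np.convolve(x, np.ones(l) / l, mode="valid")
    return l * np.mean((block_means - block_means.mean()) ** 2)

# AR(1) with phi = 0.5 and unit innovations: n*Var(X-bar_n) -> 1/(1-phi)^2 = 4
rng = np.random.default_rng(1)
n, phi = 100_000, 0.5
x = np.empty(n)
x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)  # stationary start
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

est = block_var_estimator(x, l=int(n ** (1 / 3)))  # block of order n^{1/3}
```

With these settings the estimate lands near the long-run variance 4, up to the usual O(1/ℓ) bias and sampling noise.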
However, in contrast to SRD, relatively little is known about properties of block-based resampling estimators and block sizes under strong or long-range time dependence (LRD). For example, recent tests of Hermite rank [9] as well as other approximations with block bootstrap and subsampling under LRD rely on data blocks [6,8,10,25], creating a need for guides in block selection. To develop an understanding of data-blocking under LRD, we consider the analog problem from SRD of estimating the variance Var(X̄_n) of a sample mean X̄_n through block resampling (cf. Secs. 2-4); block selections in this context extend to broader statistics (cf. Sec. 5) and provide guidance for distributional approximations with resampling (cf. Sec. 6). Because long-memory or long-range dependent (LRD) time series are characterized by slowly decaying covariances (i.e., Σ_{k=1}^∞ |r(k)| = +∞ diverges), optimal block sizes in this problem have intuitively been conjectured as longer, O(n^{1/2}), than the best block size O(n^{1/3}) associated with weak dependence [10,22]. However, this intuition about block selections is misleading. Under general forms of LRD, the best block selections turn out to depend critically on both dependence strength (i.e., rate of covariance decay) and the nature of non-linearity in a time series. To illustrate, consider a stationary Gaussian LRD time series {Z_t}, which we may associate with common models for long-memory [19,34], and suppose {Z_t} has a covariance decay controlled by a long-memory exponent α ∈ (0, 1) (described more next). Then, the LRD process Z_t for α < 1/2 can have an optimal block length O(n^α), while a cousin LRD process Z_t + 0.5Z_t² has a best block size O(n^{1/2}) regardless of the memory level α ∈ (0, 1/2). That is, classes of LRD processes exist where non-linearity induces a best block order O(n^{1/2}).
Also, as the optimal blocks O(n α ) for Z t (α ∈ (0, 1/2)) illustrate, when covariance decay slows α ↓ 0 here, best block sizes for a resampling variance estimator under LRD do not generally increase with increasing dependence strength. While theory justifies a block length O(n 1/2 ) as optimal in some cases, the forms of theoretically best block sizes can generally be complex under LRD and we also establish a provably consistent databased estimator of this block size. Numerical studies show that the empirical block selection performs well in variance estimation and provides a guide with good performance in other resampling problems under LRD (e.g., distribution estimation for statistical functionals). Section 2 describes the LRD framework and variance estimation problem. We consider stationary LRD processes X t = G(Z t ) defined by a transformation G(·) of a LRD Gaussian process {Z t } with a long-memory exponent α ∈ (0, 1/m) (cf. [47,48]); here integer m ≥ 1 is the so-called Hermite rank of G(·), which has a well-known impact on the distributional limit of the sample meanX n for such LRD processes (e.g., normal if m = 1 [15,49]). Section 3 provides the large-sample bias and variance of block-based resampling estimators in the sample mean case, which are used to determine MSE-optimal block sizes and a consistent approach to block estimation in Section 4. As complications, best block lengths can depend on the memory exponent α and a higher order rank beyond Hermite rank m (e.g., 2nd Hermite rank). Two versions of data blocking are also compared, involving fully overlapping (OL) or non-overlapping (NOL) blocks; while OL blocks are always MSE-better for variance estimation under SRD [30,31], this is not true under LRD. Section 5 extends the block resampling to broader statistical functionals under LRD (beyond sample means) and includes the block jackknife technique for comparison. 
Numerical studies are provided in Section 6 to illustrate block selections and resampling across problems of variance estimation, distributional approximations, and Hermite rank testing under LRD. Section 7 has concluding remarks, and a supplement [54] provides proofs and supporting results. We end here with some related literature. Particularly, for Gaussian series X t = Z t (or G(x) = x with m = 1), the computation of (log) block-based variance estimators over a series of large block sizes can be a graphical device for estimating the long-memory parameter α (using that Var(X ) ≈ C 0 −α for subsample averages, cf. Sec. 2) [35,51]. Relatedly, [18] considered block-average regression-type estimators of α in the Gaussian case. For distribution estimation with LRD linear processes, [36,53] studied subsampling, while [26] examined block bootstrap. As perhaps the most closely related works, [2,18,26] studied optimal blocks/bandwidths for estimating a sample mean's variance with LRD linear processes (under various assumptions) using data-block averages or related Bartlett-kernel heteroskedasticity and autocorrelation consistent (HAC) estimators. Those results share connections to optimal block sizes here for purely Gaussian series X t = Z t (cf. Sec. 3.1), but no empirical estimation rules were considered. As novelty, we account for LRD data X t = G(Z t ) from general transformations G(·) (i.e., the pure Gaussian/linear case is comparatively simpler), establish consistent block estimation, provide results for more general statistical functions, and consider the applications of block selections in wider contexts under LRD. In terms of resampling from LRD transformed Gaussian processes, [29] showed the block bootstrap is valid for approximating the full distribution of the sample meanX n when the Hermite rank is m = 1, while [22] established subsampling as valid for any m ≥ 1. 
(While block bootstrap and subsampling differ in their distributional approximations [42], these induce a common block-based variance estimator for the sample mean, as described in Sec. 2.) Recently, much research interest has also focused on further distributional approximations with subsampling for LRD transformed Gaussian processes; see [6,8,10,25]. Preliminaries: LRD processes and block-based resampling estimators. 2.1. Class of LRD processes. Let {Z t } be a mean zero, unit variance, stationary Gaussian sequence with covariances satisfying (1) γ Z (k) ≡ EZ 0 Z k ∼ C 0 k −α as k → ∞ for some given 0 < α < 1 and constant C 0 > 0. Examples include fractional Gaussian noise with Hurst parameter 1/2 < H < 1 having covariances γ Z (k) = (|k + 1| 2H − 2|k| 2H + |k − 1| 2H )/2 which satisfy (1) with C 0 = H(2H − 1) and α = 2 − 2H ∈ (0, 1), as well as FARIMA processes with difference parameter 0 < d < 1/2 which satisfy (1) with α = 1 − 2d ∈ (0, 1); see [19,34]. Let G : R → R be a real-valued function such that E[G(Z 0 )] 2 < ∞ holds for a standard normal variable Z 0 . In which case, the function G(Z 0 ) may be expanded as (2) G(Z 0 ) = ∞ k=0 J k k! H k (Z 0 ) in terms of Hermite polynomials, H k (z) ≡ (−1) k e z 2 /2 d k dz k e −z 2 /2 , k = 0, 1, 2, . . . , and corresponding coefficients J k ≡ E[G(Z 0 )H k (Z 0 )], k ≥ 0. The first few Hermite polynomials are given by H 0 (z) = 1, H 1 (z) = z, H 2 (z) = z 2 − 1, H 3 (z) = z 3 − z, for example, and EH k (Z 0 ) = 0 holds for k ≥ 1. Let µ ≡ EG(Z 0 ) = J 0 denote the mean of G(Z 0 ) and define the Hermite rank of G(·) (cf. [47]) as m ≡ min{j ≥ 1 : J k = 0}. To avoid degeneracy, we assume Var[G(Z 0 )] > 0 whereby m ∈ [1, ∞) is a finite integer. The target processes of interest are defined as X t ≡ G(Z t ) with respect to a stationary Gaussian series Z t with covariances as in (1) with 0 < α < 1/m. 
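The Hermite coefficients J_k = E[G(Z_0)H_k(Z_0)] and the Hermite rank defined above can be computed numerically for a given transformation G. A small sketch using probabilists' Hermite polynomials and Gauss-Hermite quadrature (the truncation level and tolerance are illustrative choices):

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' convention He_k

def hermite_coeffs(G, kmax=10, npts=60):
    """J_k = E[G(Z) He_k(Z)], Z ~ N(0,1), via Gauss-Hermite quadrature
    with weight exp(-x^2/2); exact for polynomial G of moderate degree."""
    x, w = He.hermegauss(npts)
    w = w / np.sqrt(2 * np.pi)  # normalize weights to the N(0,1) density
    return np.array([np.sum(w * G(x) * He.hermeval(x, np.eye(k + 1)[k]))
                     for k in range(kmax + 1)])

def hermite_rank(J, tol=1e-8):
    """Smallest k >= 1 with J_k != 0 (None if all truncated coeffs vanish)."""
    nz = np.flatnonzero(np.abs(np.asarray(J)[1:]) > tol) + 1
    return int(nz[0]) if nz.size else None
```

For example, G(z) = z + 0.5z² has rank m = 1 (with J_1 = J_2 = 1), while G(z) = z² − 1 = H_2(z) has rank m = 2.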
Such processes X t exhibit strong or long-range dependence (LRD) as seen by partial sums n k=1 |r(k)| of covariances r(k) = Cov(X 0 , X k ) having a slow decay proportional to (3) n k=1 |r(k)| ∝ n 1−αm as n → ∞, where αm ∈ (0, 1), depending on the Hermite rank m of the transformation G(·) and memory exponent α ∈ (0, 1/m) under (1). This represents a common formulation of LRD, with partial covariance sums diverging as n → ∞ [47]; see [43,50] for further characterizations. Suppose X 1 , . . . , X n is an observed time stretch from the transformed Gaussian series X t ≡ G(Z t ), having sample meanX n = n −1 n t=1 X t . Setting v n,αm ≡ n αm Var(X n ), the process structure (1)-(3) entails a so-called long-run variance as [2,47,48]). Under LRD, the variance Var(X n ) of the sample mean decays at a slower rate O(n −αm ) as n → ∞ (i.e., αm ∈ (0, 1)) than the typical O(n −1 ) rate under SRD. The limit distribution of n αm/2 (X n − µ) also depends on the Hermite rank m ≥ 1 [15,48]. The development first considers the variance v n,αm ≡ n αm Var(X n ) of the sample mean (or, equivalently here, its limit (4)) as target of inference under LRD. Resampling results are then extended to broader classes of statistics in Section 5. (4) lim n→∞ v n,αm = v ∞,αm ≡ J 2 m m! 2C m 0 (1 − αm)(2 − αm) > 0 (cf. 2.2. Block-based resampling variance estimators under LRD. A block bootstrap "recreates" the original series X 1 , . . . , X n by independently resampling b ≡ n/ blocks from a collection of length < n data blocks. Resampling from the overlapping (OL) blocks {(X i , . . . , X i+ −1 ) : i = 1, . . . , n − + 1} of length within X 1 . . . , X n yields the moving block bootstrap [28,30,33], while resampling from non-overlapping (NOL) blocks {(X 1+ (i−1) , . . . , X i ) : i = 1, . . . , b ≡ n/ } gives the NOL block bootstrap [13,31]. Resampled blocks are concatentated to produce a bootstrap series, say X * 1 , . . . 
, X * b , and the distribution of a statistic from the bootstrap series (e.g., X * b ≡ ( b) −1 b i=1 X * i ) approximates the sampling distribution of an original-data statistic (e.g.,X n ). Subsampling [40,42] is a different approach to approximation that computes statistics from one resampled data block. Both subsampling and bootstrap, though, estimate a sample mean's variance v n,αm ≡ n αm Var(X n ) with a common block-based estimator; this is the induced variance of an average under resampling (e.g., Var * (X * b )), which has a closed form (cf. [29] under LRD), resembling a batch means estimator [17]. Based on X 1 , . . . , X n , the OL block-based variance estimator of v n,αm ≡ n αm Var(X n ) is given by (5) V ,αm,OL = 1 n − + 1 n− +1 i=1 αm (X i, − µ n,OL ) 2 , µ n,OL = 1 n − + 1 n− +1 i=1X i, , where aboveX i, = i+ −1 j=i X j / is the sample average of the ith data block (X i , . . . , X i+ −1 ), i = 1, . . . , n − + 1. Essentially, block versions { αm/2 (X i, −X n )} n− +1 i=1 of the quantity n αm/2 (X n − µ) give a sample variance V ,αm,OL that estimates v ,αm ≡ αm Var(X ) ≈ v n,αm ≡ n αm Var(X n ) for sufficiently large , n by (4). The NOL block-based variance estimator is defined as V ,αm,NOL = 1 b b i=1 αm (X 1+(i−1) , − µ n,NOL ) 2 , µ n,NOL = b i=1X 1+(i−1) , /b, using NOL averagesX 1+(i−1) , , i = 1, . . . , b ≡ n/ , where µ n,NOL =X n when n = b . Under SRD, variance estimators are standardly defined by letting αm = 1 above (e.g., in V ,αm,OL from (5)). Likewise, under LRD, both the target variance v n,αm ≡ n αm Var(X n ) and block-based estimators V ,αm,OL or V ,αm,NOL are scaled to be comparable, which involves the long-memory exponent αm ∈ (0, 1). In practice, αm ∈ (0, 1) is usually unknown. To develop block-based estimators under LRD, we first consider αm ∈ (0, 1) as given. 
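The OL estimator in (5) and its NOL counterpart reduce to a few lines of code. A minimal sketch, with the long-memory scaling ℓ^{αm} passed as a known argument (setting αm = 1 recovers the usual SRD normalization; the iid check below is an illustrative assumption):

```python
import numpy as np

def V_OL(x, l, am):
    """Overlapping-block estimator (5) of n^{am} Var(sample mean), am = alpha*m."""
    bm = np.convolve(x, np.ones(l) / l, mode="valid")  # all OL block means
    return l**am * np.mean((bm - bm.mean()) ** 2)

def V_NOL(x, l, am):
    """Non-overlapping-block counterpart, using b = floor(n/l) disjoint blocks."""
    b = len(x) // l
    bm = x[: b * l].reshape(b, l).mean(axis=1)
    return l**am * np.mean((bm - bm.mean()) ** 2)
```

Sanity check: for iid N(0, 1) data with am = 1, both estimators should be close to nVar(X̄_n) = 1.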
Ultimately, an estimate αm n of αm can be substituted into V ,αm,OL or V ,αm,NOL which, under mild conditions, does not change conclusions about consistency or best estimation rates (cf. Sec. 4.2). Properties for block-based resampling estimators under LRD. Large-sample results for the block-based variance estimators require some extended notions of the Hermite rank of G(·) in defining X t ≡ G(Z t ) = ∞ k=0 J k /k! · H k (Z t ), for J k = EG(Z 0 )H k (Z 0 ), k ≥ 1. Recalling m ≡ min{k > 0 : J k = 0} as the usual Hermite rank of G(·), define the 2nd Hermite rank of G(·) by the index m 2 ≡ min{k > m : J k = 0} of the next highest non-zero coefficient in the Hermite expansion (2) of G(·). In other words, m 2 is the Hermite rank of X t − µ − J m H m (Z t )/m! upon removing the mean and 1st Hermite rank term from X t = G(Z t ). If the set {k > m : J k = 0} is empty, we define m 2 = ∞. We also define the Hermite pair-rank of a function G(·) by m p ≡ inf{k ≥ m : J k J k+1 = 0}; when the above set is empty, we define m p = ∞. The Hermite pair-rank m p identifies the index of the first consecutive pair of non-zero terms (J k , J k+1 ) in the expansion X t = G(Z t ) = µ + ∞ k=1 J k /k!H k (Z t ). For non-degenerate series X t = G(Z t ), the Hermite rank m is always finite, but the 2nd rank m 2 and pair-rank m p may not be (and m 2 = ∞ implies m p = ∞). For example, both series G(X t ) = H 1 (Z t ) and G(X t ) = H 1 (Z t ) + H 3 (Z t ) have Hermite rank m = 1, pair-rank m p = ∞, and 2nd ranks m 2 = ∞ and 3, respectively; the series G( X t ) = H 1 (Z t ) + H 3 (Z t ) + H 4 (Z t ) and G(X t ) = H 3 (Z t ) + H 4 (Z t ) have pair-rank m p = 3 with Hermite ranks m = 1 and 3, and 2nd ranks m 2 = 3 and 4, respectively. In what follows, due to combined effects of dependence and non-linearity in a LRD time series X t = G(Z t ), the Hermite pair-rank m p ∈ [m, ∞] of G plays a role in the asymptotic variance of resampling estimators (Sec. 
3.2), while the 2nd Hermite rank m_2 ∈ [m + 1, ∞] impacts the bias of resampling estimators (Sec. 3.1).

3.1. Large-sample bias properties. Bias expansions for the block resampling estimators require a more detailed form of the LRD covariances than (1), and we suppose that

(6) γ_Z(k) ≡ Cov(Z_0, Z_k) = C_0 k^{−α} (1 + k^{−τ} L(k)), k > 0,

holds for some α ∈ (0, 1/m) and C_0 > 0 (again γ_Z(0) = 1), with some τ ∈ (1 − αm, ∞) and slowly varying function L : R_+ → R_+ that satisfies lim_{x→∞} L(ax)/L(x) = 1 for any a > 0. For Gaussian FARIMA (i.e., α = 1 − 2d ∈ (0, 1)) and fractional Gaussian noise (i.e., α = 2 − 2H ∈ (0, 1)) processes {Z_t}, one may verify that (6) holds with τ = 1 for any α ∈ (0, 1/m) and m ≥ 1. The statement of bias in Theorem 3.1 additionally requires process constants B_0(m), B_1(m_2) that depend on the 1st (m) and 2nd (m_2) Hermite ranks and covariances in (6). These are given as B_1(m_2) ≡ 2 Σ_{j=m_2}^∞ (J_j²/j!) Σ_{k=1}^∞ [γ_Z(k)]^j when αm_2 > 1; B_1(m_2) ≡ 2C_0^{m_2} J_{m_2}²/m_2! when αm_2 = 1; B_1(m_2) ≡ 2J_{m_2}² C_0^{m_2}/[m_2!(1 − m_2 α)(2 − m_2 α)] = v_{∞,αm_2} from (4) when αm_2 < 1; and

(7) B_0(m) ≡ (2C_0^m J_m²/m!) { I_{αm} + Σ_{k=1}^∞ k^{−αm} Σ_{j=1}^m (m choose j) [L(k)k^{−τ}]^j } + Σ_{k=m}^∞ J_k²/k!,

with Euler's generalized constant I_{αm} ≡ lim_{k→∞} ( Σ_{j=1}^k j^{−αm} − ∫_0^k x^{−αm} dx ) ∈ (−∞, 0).

THEOREM 3.1. Suppose X_t ≡ G(Z_t), where the stationary Gaussian process {Z_t} satisfies (6) with memory exponent α ∈ (0, 1/m) and where G(·) has Hermite rank m and 2nd Hermite rank m_2 (noting m_2 > m and possibly m_2 = ∞). Let V̂_{ℓ,αm} denote either V̂_{ℓ,αm,OL} or V̂_{ℓ,αm,NOL} as block resampling estimators of v_{n,αm} = n^{αm} Var(X̄_n) based on X_1, ..., X_n. If ℓ^{−1} + ℓ/n → 0 as n → ∞, then the bias of V̂_{ℓ,αm} is given by

E[V̂_{ℓ,αm}] − v_{n,αm} = B_0(m) ℓ^{−(1−αm)} (1 + o(1)) − v_{∞,αm} (ℓ/n)^{αm} (1 + o(1)) + I(m_2 < ∞) B_1(m_2) (log ℓ)^{I(αm_2=1)} ℓ^{−(min{1,αm_2}−αm)} (1 + o(1)),

where I(·) denotes the indicator function, the constant v_{∞,αm} ≡ 2J_m² C_0^m/[m!(1 − αm)(2 − αm)] is from (4), and the constants B_0(m), B_1(m_2) are from (7).

REMARK 1: If we switch the target of variance estimation from v_{n,αm} = n^{αm} Var(X̄_n) to the limit variance v_{∞,αm} ≡ lim_{n→∞} v_{n,αm} from (4), this does not change the bias expansion in Theorem 3.1 or the results in Section 4 on best block sizes for minimizing MSE.

To better understand the bias of a block-based estimator under LRD, we may consider the case of a purely Gaussian LRD series X_t = Z_t (i.e., no transformation), corresponding to G(x) = x, m = 1, and m_2 = ∞. The bias then simplifies under Theorem 3.1 as

(8) E[V̂_{ℓ,α}] − v_{n,α} = B_0(1) ℓ^{−(1−α)} (1 + o(1)) − v_{∞,α} (ℓ/n)^{α} (1 + o(1)),

depending only on the memory exponent α of the process Z_t. This bias form can also hold when X_t is LRD and linear [18,26]. However, for a transformed LRD series X_t = G(Z_t), the function G and the underlying exponent α impact the bias of the block-based estimator in intricate ways. The order of a main bias term in Theorem 3.1 is generally summarized as

(9) O( (log ℓ)^{I(αm_2=1)} / ℓ^{min{1,αm_2}−αm} ),

which depends on how the 2nd Hermite rank m_2 > m of the transformed series X_t ≡ G(Z_t), as a type of non-linearity measure, relates to the long-memory exponent α ∈ (0, 1/m). Small values of m_2 satisfying 1/α > m_2 induce the worst bias rates O(ℓ^{−(m_2−m)α}) compared to the best possible bias O(ℓ^{−(1−αm)}) occurring, for example, when m_2 = ∞ (i.e., no terms in the Hermite expansion of G(·) beyond the 1st rank m). In fact, the largest bias rates arise whenever 2nd Hermite rank terms J_{m_2} H_{m_2}(Z_t)/m_2! exist in the expansion of X_t ≡ G(Z_t) (i.e., J_{m_2} ≠ 0) and exhibit long-memory under αm_2 < 1 (cf. Sec. A.1 in [54]). For comparison, block-based estimators in the SRD case [30] exhibit a smaller bias O(1/ℓ) than the best possible bias in (9) under LRD.

3.2.
Large-sample variance properties. To establish the variance of the block resampling estimators under LRD, we require an additional moment condition regarding the transformed series X t = G(Z t ). For second moments, a simple characterization exists that EX 2 t = E[G(Z t )] 2 < ∞ is finite if and only if ∞ k=0 J 2 k /k! < ∞. For higher order moments, however, more elaborate conditions are required to guarantee EX 4 t = E[G(Z t )] 4 < ∞ and perform expansions of EX t1 X t2 X t3 X t4 . We shall use a condition "G ∈ G 4 (1)" from [48]. (More generally, Definition 3.2 of [48] prescribes a condition G ∈ G 4 ( ), with ∈ (0, 1], for moment expansions, which could be applied to derive Theorem 3.2 next. We use = 1 for simplicity, where a sufficient condition for G ∈ G 4 (1) is ∞ k=0 3 k/2 |J k |/ √ k! < ∞, holding for any polynomial G, cf. [48]). See the supplement [54] for more technical details. To state the large-sample variance properties of block-based estimators V ,αm,OL or V ,αm,NOL in Theorem 3.2, we also require some proportionality constants. As a function of the Hermite rank, when m ≥ 2 and αm < 1, define a positive scalar φ α,m ≡ 2 (1 − 2α)(1 − α) 2J 2 m C m 0 (m − 1)! 1 [1 − (m − 1)α][2 − (m − 1)α)] 2 . In the case of a Hermite rank m = 1, define another positive proportionality constant, as a function of α ∈ (0, 1) and the type of resampling blocks (OL/NOL)), as a α ≡ 8J 4 1 C 2 0 (1 − α) 2 (2 − α) 2 ×                      1 + (2−α) 2 (2α 2 +3α−1) 4(1−2α)(3−2α) − Γ 2 (3−α) Γ(4−2α) if 0 < α < 1/2, OL or NOL 9/32 if α = 1/2, OL or NOL ∞ x=−∞ g 2 α (x) if 1/2 < α < 1, NOL ∞ −∞ g 2 α (x)dx if 1/2 < α < 1, OL, where Γ(·) denotes the gamma function and g α ( x) ≡ (|x + 1| 2−α − 2|x| 2−α + |x − 1| 2−α )/2, x ∈ R. In the definition of a α , g 2 α (x) is summable/integrable when α ∈ (1/2, 1) using g α (x) ∼ (2 − α)(1 − α)x −α /2 as x → ∞. 
Finally, as a function of any Hermite pair-rank m p ∈ [1, ∞] and α ∈ (0, 1), we define a constant as λ α,mp ≡ 8C 0 (1 − α)(2 − α) ×      ( 2C mp 0 Jm p Jm p +1 mp! ) 2 [(1 − αm p )(2 − αm p )] −2I(αmp<1) if αm p ≤ 1 ∞ k=mp ∞ j=−∞ [γ Z (j)] k J k J k+1 /k! 2 if αm p > 1, with Gaussian covariances γ Z (·) and C 0 > 0 from (1) and an indicator I(·) function. With constants λ α,mp , φ α,m , a α > 0 as above, we may next state Theorem 3.2. THEOREM 3.2. Suppose X t ≡ G(Z t ) where the stationary Gaussian process {Z t } satisfies (1) with C 0 > 0 and memory exponent α ∈ (0, 1/m) and where G ∈ G 4 (1) has Hermite rank m ≥ 1 and Hermite pair-rank m p (note m p ≥ m and possibly m p = ∞). Let V ,αm denote either V ,αm,OL or V ,αm,NOL as block resampling estimators of v n,αm = n αm Var(X n ) based on X 1 , . . . , X n . If −1 + /n → 0 as n → ∞, then the variance of V ,αm is given by REMARK 2: Above r n,α,m,mp represents a second variance contribution, which depends on the Hermite pair-rank m p and is non-increasing in block length (by 1/α < m ≤ m p ). The value of r n,α,m,mp is zero when m p = ∞ and is largest O(n −α ) when the pair-rank assumes its smallest possible value m p = m. For example, the series X t = H m (Z t ) + H m+1 (Z t ) and X t = H m (Z t ) + H m+2 (Z t ) have pair-ranks m p = 1 and ∞, respectively, inducing different r n,α,m,mp terms. While r n,α,m,mp can dominate the variance expression of Theorem 3.2 for some block sizes, the contribution of r n,α,m,mp emerges as asymptotically negligible at an optimally selected block size opt (cf. Sec. 4.1). Var( V ,αm ) =          φ α,m n 2α 1 + o(1) + r n,α,m,mp if m ≥ 2 a α n By Theorem 3.2, the variance of a resampling estimator V ,αm depends on the block size through a decay rate O(( /n) 2α ) that, surprisingly, does not involve the exact value of the rank Hermite m. The reason is that, when m ≥ 2, fourth order cumulants of the transformed process X t = G(Z t ) determine this variance (cf. [54]). 
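The three rank notions used in the bias and variance results (Hermite rank m, 2nd rank m_2, pair-rank m_p) are purely combinatorial functions of the nonzero Hermite coefficients. A small sketch, taking a truncated coefficient list as given (None stands in for "infinite, or at least beyond the truncation"):

```python
def hermite_ranks(J, tol=1e-12):
    """Given coefficients J_0..J_K of G's Hermite expansion, return
    (m, m2, mp): Hermite rank, 2nd Hermite rank, and pair-rank.
    None means the rank is infinite or not detectable within truncation K."""
    nz = [k for k in range(1, len(J)) if abs(J[k]) > tol]  # nonzero indices k >= 1
    m = nz[0] if nz else None                              # first nonzero coeff
    m2 = nz[1] if len(nz) > 1 else None                    # next nonzero coeff
    mp = next((k for k in nz if k + 1 in nz), None)        # first consecutive pair
    return m, m2, mp
```

This reproduces the examples in the text: H_1 alone gives (1, None, None); H_1 + H_3 gives (1, 3, None); H_1 + H_3 + H_4 gives (1, 3, 3); and H_3 + H_4 gives (3, 4, 3).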
Also, any differences in block type ( V ,αm,OL vs V ,αm,NOL ) only emerge in a proportionality constant a α when m = 1 and α ∈ (1/2, 1); otherwise, a α does not change with block type. Consequently, for processes X t = G(Z t ) with strong LRD (α < 1/2, m ≥ 1), there is no large-sample advantage to OL blocks for variance estimation. In contrast, under SRD, OL blocks reduce the variance of a resampling variance estimator by a multiple of 2/3 compared to NOL blocks [28,30,31], because the non-overlap between two OL blocks (e.g., X 1 , . . . , X and X 1+i , . . . , X +i , i < ) acts roughly uncorrelated. This fails under strong LRD where OL/NOL blocks have the same variance/bias/MSE properties here. Section 6 provides numerical examples. As under SRD, however, OL blocks remain generally preferable (i.e., smaller a α for weak LRD. α > 1/2). Best Block Selections and Empirical Estimation. Optimal Block Size and MSE. Based on the large-sample bias and variance expressions in Section 3, an explicit form for the optimal block size opt ≡ opt,n can be determined for minimizing the asymptotic mean squared error (10) MSE n ( ) ≡ E( V ,αm − v n,αm ) 2 of a block-based resampling estimator V ,αm of v n,αm ≡ n αm Var(X n ) under LRD. COROLLARY 4.1. Under Theorems 3.1-3.2 assumptions, the optimal block size for a resampling estimator V ,αm,OL or V ,αm,NOL is given by (as n → ∞) opt,n = K α,m,m2 ×      n α α(1−m)+min{1,αm 2 } (log n) I(αm2=1) if 0 < α < 0.5, m ≥ 1 ( n log n ) 0.5 (log n) I(αm2=1) if α = 0.5, m = 1 n 1 3−2α if 0.5 < α < 1, m = 1, for a constant K α,m,m2 > 0, changing by block type OL/NOL only when m = 1, α ∈ (1/2, 1). The Appendix provides values for K α,m,m2 > 0. For LRD processes X t = G(Z t ), best block lengths opt depend intricately on the transformation G(·) (through ranks m, m 2 ) and the memory parameter α < 1/m of the Gaussian process Z t . 
Optimal blocks increase in length whenever the strength of long-memory decreases (i.e., α increases); as α moves closer to 1/m, the order of opt moves closer to O(n). This is a counterintuitive aspect of LRD in resampling. With variance estimation under SRD [30,31], best block size has a known order Cn 1/3 where the proportionality constant C > 0 increases with dependence. The 2nd Hermite rank m 2 of G(·) can particularly impact opt . Whenever α < 1/m 2 , the optimal block order opt ∝ n 1/(m2−m+1) does not change. As a consequence in this case, if an immediate second term H m+1 (Z 0 ) appears in the Hermite expansion (2) of X 0 = G(Z 0 ), so that the 2nd rank is m 2 = m + 1, then the optimal block size becomes opt ∝ n 1/2 . This suggests that a guess opt = O(n 1/2 ) often found in the literature for block resampling under LRD can be reasonable, though not by the intuition that slow covariance decay under LRD implies larger blocks compared to those O(n 1/3 ) for SRD. Rather, for transformations G(·) where m 2 = m + 1 may hold naturally, the choice opt ∝ n 1/2 is optimal with sufficiently strong α < 1/(m + 1) dependence, regardless of the exact Hermite rank m. For completeness, we note that MSE n ( ) ≡ E( V ,αm − v n,αm ) 2 has an optimized order as (11) MSE n ( opt,n ) ∝        n −2α(min{1,αm 2 }−αm) α(1−m)+min{1,αm 2 } (log n) 2αI(αm2=1) if 0 < α < 0.5, m ≥ 1 ( n log n ) −0.5 (log n) I(αm2=1) if α = 0.5, m = 1 n − 2(1−α) 3−2α if 0.5 < α < 1, m = 1. at the optimal block opt ≡ opt,n , which also depends on m, m 2 and α under LRD. Section 4.2 next shows that estimation of the long-memory exponent does not affect the large-sample results and block considerations for the resampling variance estimators. Section 4.3 then provides a consistent data-driven method for estimating the block size opt . 4.2. Empirical considerations for long-memory exponent. 
We have assumed the memory exponent αm ∈ (0, 1) of the LRD process X t = G(Z t ) is known in block-based resampling estimators (5). If an appropriate estimator αm n of αm is instead substituted, then the resulting estimators will possess similar consistency rates under mild conditions. We let V generically denote a block-based estimator V ,αm of v n,αm = n αm Var(X n ) (e.g., V ,αm,OL or V ,αm,NOL ) found by replacing αm with an estimator αm n based on X 1 , . . . , X n . COROLLARY 4.2. Suppose Theorem 3.1-3.2 assumptions. As n → ∞, (i) if | αm n − αm| log n p → 0, then V is consistent for v n,αm , i.e., | V − v n,αm | p → 0. (ii) if | αm n − αm| log n = O p (n −1/4 ), then the convergence rate of | V − v n,αm | in proba- bility matches that of | V ,αm − v n,αm | from Theorems 3.1-3.2. Several potential estimators of αm satisfy Corollary 4.2 conditions, such as logperiodogram or local Whittle estimation [43]. These, for example, can exhibit sufficiently fast convergence in probability, e.g. O p (n −2/5 ) (cf. [3,24]). For simplicity, we use local Whittle estimation with bandwidth n 0.7 (cf. [3]) in the following. 4.3. Data-driven block estimation. The block results from Section 4.1 suggest that datadriven choices of block size under LRD have no simple analogs to block-resampling in the SRD case. For variance estimation under SRD, several approaches exist for estimating the best block size opt through plug-in estimation [12,32,41] or empirical MSE-minimization ([20]-method). By exploiting the known block order opt ≈ Cn 1/3 under SRD, these methods target the proportionality constant C > 0. In contrast, optimal blocks under LRD have a form opt ≈ Kn a (log n)Ĩ from Corollary 4.1, where K ≡ K α,m,m2 > 0 and a ≡ a α,m,m2 ∈ (0, 1) are complicated terms based on α, m, m 2 , whileĨ ≡ −0.5I(α = 0.5, m = 1) + I(αm 2 = 1) involves indicator functions. 
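The exponent a in ℓ_opt ≈ Kn^a(log n)^Ĩ follows directly from the case structure of Corollary 4.1. A small helper (log factors and the constant K are ignored; this only evaluates the stated rate cases):

```python
import math

def optimal_block_exponent(alpha, m, m2=math.inf):
    """Exponent a with l_opt ~ K * n^a per Corollary 4.1 (log factors omitted)."""
    if 0 < alpha < 0.5 and m >= 1:
        return alpha / (alpha * (1 - m) + min(1.0, alpha * m2))
    if alpha == 0.5 and m == 1:
        return 0.5
    if 0.5 < alpha < 1 and m == 1:
        return 1.0 / (3 - 2 * alpha)
    raise ValueError("outside the cases covered by Corollary 4.1")
```

This recovers the headline examples: a pure Gaussian series (m = 1, m_2 = ∞) has a = α, while any series with m_2 = m + 1 and α(m + 1) < 1 has a = 1/2, matching the O(n^{1/2}) discussion.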
Because the order n a (log n)Ĩ is unknown in practice, previous strategies to block estimation are not directly applicable in the LRD setting. Plug-in estimation seems particularly intractable under LRD; general plug-in approaches under SRD [32] require known orders for bias/variance in estimation, but these are also unknown under LRD (Theorems 3.1-3.2). Consequently, we consider a modified method for estimating block size opt ≡ opt,n that involves two rounds of empirical MSE-minimization ([20]-method) To adapt the [20]-method for LRD, we take a collection of subsamples (X i , . . . , X i+h−1 ) of length h < n, i = 1, . . . , n − h + 1. Based on X 1 . . . , X n , let αm n denote an estimator of αm (for use in all estimators to follow) and let V˜ denote a resampling variance estimator (replacing αm with αm n ) based on a pilot block size˜ (e.g.,˜ ∝ n 1/2 ). Similarly, let V (i,h) denote a resampling variance estimator computed on the subsample (X i , . . . , X i+h−1 ) using a block length < h, i = 1, . . . , n − h + 1. We then define an initial block-length estimator opt,h as the minimizer of the empirical MSE MSE h,n ( ) ≡ 1 n − h + 1 n−h+1 i=1 V (i,h) − V˜ 2 , 1 ≤ < h. Here MSE h,n ( ) estimates MSE h ( ) in (10), or the MSE of a resampling estimator based on a sample of size h and block size , while opt,h then estimates the minimizer of MSE h ( ) or the optimal block opt,h from Corollary 4.1 using "h" in place of "n" there. Above the pilot estimator V˜ plays the role of a target variance to mimic the MSE formulation (10). Theorem 4.3 establishes important conditions on the subsample size h and pilot block for consistent estimation under LRD. For the transformed series X t = G(Z t ), the result involves a general 8th order moment condition (i.e., G ∈ G 8 (1) under Definition 3.2 of [48]) analogous to the 4th order moment condition described in Section 3.2. THEOREM 4.3. 
Along with Theorem 3.1-3.2 assumptions, suppose | αm n − αm| log n = O p (n −1/4 ) (as in Corollary 4.2) and that G ∈ G 8 (1). Suppose also that h → ∞ and˜ → ∞ with h/˜ +˜ h/n = O(1). Then, as n → ∞, the empirical MSE, MSE h,n ( ), has a sequence of minimizers opt,h such that opt,h opt,h p → 1 and MSE h,n ( opt,h ) MSE h ( opt,h ) p → 1, where opt,h has Corollary 4.1 form with MSE h ( opt,h ) as in (11) (with "h" replacing "n"). Theorem 4.3 does not address estimation of the best block size opt,n for a length n time series, but rather the optimal block opt,h for a smaller length h < n series. Nevertheless, the result establishes a non-trivial first step that, under LRD, some block sizes can be validly estimated through empirical MSE ([20]-method) provided that the subsample size h and pilot block˜ are appropriately chosen. In particular, the condition h/˜ +˜ h/n = O(1) cannot be reduced (related to pilot estimation V˜ ) and entails that the largest subsample length possible is h = O(n 1/2 ) within the empirical MSE approach under LRD. With this in mind and because the order of opt,n is unknown, we use empirical MSE device twice, based on two subsample lengths h ≡ h 1 = C 1 n 1/r and h 2 = C 2 n θ/r . Here r ≥ 2 and 0 < θ < 1 are constants to control the subsample sizes (i.e., h having larger order than h 2 ) with C 1 , C 2 > 0. A common pilot estimate V˜ is used for both MSE h,n ( ) and MSE h2,n ( ). We denote corresponding block estimates as opt,h and opt,h2 , and define an estimator of the target optimal block size opt,n as opt,n ≡ opt,h C an 1 r h an opt,h r−1 c n , c n ≡ r In log h 2 log h (r−1) In(log h)/ log(h/h2) , where a n ≡ log( opt,h / opt,h2 ) log(h/h 2 ) , I n ≡ 1 2 log( opt,h ) − a n log h log log h + log( opt,h2 ) − a n log h 2 log log h 2 estimate the exponent a ≡ a α,m,m2 ∈ (0, 1) and indicator quantityĨ ≡ −0.5I(α = 0.5, m = 1) + I(αm 2 = 1) appearing in the Corollary 4.1 expression for opt,n ≈ Kn a (log n)Ĩ . 
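One round of the empirical-MSE minimization described above can be sketched as follows. This is a loose adaptation, not the paper's exact rule: it redefines the OL estimator locally so the snippet is self-contained, strides over subsamples for speed rather than using all n − h + 1 of them, and all tuning constants (h, pilot block, candidate grid) are illustrative.

```python
import numpy as np

def V_OL(x, l, am):
    """Overlapping-block estimator of n^{am} Var(sample mean)."""
    bm = np.convolve(x, np.ones(l) / l, mode="valid")
    return l**am * np.mean((bm - bm.mean()) ** 2)

def empirical_mse_block(x, h, am, l_pilot, l_grid):
    """Pick l in l_grid minimizing the average squared distance between
    subsample estimates V_(i,h),l and a full-sample pilot estimate V_pilot."""
    n = len(x)
    v_pilot = V_OL(x, l_pilot, am)
    best_l, best_mse = None, np.inf
    for l in l_grid:
        vals = [V_OL(x[i:i + h], l, am)
                for i in range(0, n - h + 1, max(1, h // 4))]  # strided subsamples
        mse = np.mean((np.asarray(vals) - v_pilot) ** 2)
        if mse < best_mse:
            best_l, best_mse = l, mse
    return best_l
```

The paper's full rule applies this device on two subsample lengths and recombines the two selected blocks to extrapolate to sample size n; the snippet above shows only the inner minimization step.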
The estimator ℓ̂_opt,n has three components: ℓ̂_opt,h/C_1^{a_n} estimates Kh^a(log h)^Ĩ, ℓ̂_opt,h/h^{a_n} captures K(log h)^Ĩ up to a constant, and c_n is a scaling adjustment arising from log n ≈ r log h. The data-driven block estimator ℓ̂_opt,n is provably valid over the differing forms for ℓ_opt,n under LRD.

COROLLARY 4.4. Let h ≡ C_1 n^{1/r} and h_2 = C_2 n^{θ/r} (for C_1, C_2 > 0, r ≥ 2, θ ∈ (0, 1)), and suppose the assumptions of Theorem 4.3 hold. Then, as n → ∞, the estimator ℓ̂_opt,n is consistent for ℓ_opt,n in that ℓ̂_opt,n / ℓ_opt,n →_p 1 and, additionally,

a_n →_p a, I_n →_p Ĩ, c_n →_p [rθ^{(r−1)/(1−θ)}]^Ĩ, (ℓ̂_opt,h / h^{a_n}) · [K(log h)^Ĩ]^{−1} →_p θ^{Ĩ/(1−θ)},

regarding the constants Ĩ ≡ −0.5 I(α = 0.5, m = 1) + I(αm_2 = 1), K ≡ K_{α,m,m_2} > 0, and a ≡ a_{α,m,m_2} prescribing ℓ_opt,n ≈ K n^a (log n)^Ĩ under Corollary 4.1.

We suggest a first subsample size h = C_1 n^{1/2} of maximal possible order (r = 2). We then take the pilot block to be ℓ̃ = n^{1/2}, representing a reasonable choice under LRD and also satisfying Theorem 4.3-Corollary 4.4 (i.e., h/ℓ̃ + hℓ̃/n = O(1) then holds). For a general rule in the numerical studies to follow, we chose C_1 = 9, C_2 = 12, θ = 0.95 to keep subsamples adequately long under LRD. We also consider a modified block estimation rule

(12) ℓ̂_n = min{⌊n/20⌋, ℓ̂_opt,n},

to avoid overly large block selections in finite samples. This variation retains consistency, because ℓ_opt,n = o(n), and performs well over a variety of applications in Section 6.

5. Extending the Scope of Statistics. Here we discuss extending block selection and resampling variance estimation to a larger class of statistics defined by functionals of empirical distributions. With a small change of notation for this section, let us denote data from an observed time stretch as Y_1 = G̃(Z_1), ..., Y_n = G̃(Z_n) (rather than X_t = G(Z_t)) and let F_n ≡ n^{−1} Σ_{t=1}^n δ_{Y_t} denote the corresponding empirical distribution, where δ_y denotes a unit point mass at y ∈ R.
Consider a statistic T_n ≡ T(F_n), given by a real-valued functional T(·) of F_n, which estimates a target parameter T(F) defined by the process marginal distribution F. A broad class of statistics and parameters can be expressed through such functionals, with some examples given below.

Example 1: Smooth functions of averages, given by T_n ≡ H(n^{−1} Σ_{t=1}^n φ_1(Y_t), ..., n^{−1} Σ_{t=1}^n φ_l(Y_t)), involving a function H: R^l → R of l ≥ 1 real-valued functions φ_j: R → R, j = 1, ..., l. These statistics include ratios/differences of means as well as sample moments ([31], ch. 5).

Example 2: M-estimators T_n defined as solutions of estimating equations Σ_{t=1}^n ψ(Y_t, T_n) = 0, for an estimating function ψ with mean zero E[ψ(Y_t, T(F))] = 0. This includes several types of location/scale or regression estimators investigated in the LRD literature (cf. [4,5]).

Example 3: L-estimators T_n defined through integrals as T_n = ∫ x J(F_n(x)) dF_n(x), involving a bounded function J: [0, 1] → R. These include trimmed means, with J(x) = I(δ_1 < x < δ_2)/(δ_2 − δ_1) (based on the indicator function and trimming proportions δ_1, δ_2 ∈ (0, 1)), along with Winsorized averages and Gini indices (cf. [44]).

For a fixed integer k ≥ 1, functionals defined by linear combinations or products of components in "k-dimensional" marginal distributions might also be considered (i.e., empirical distributions of (Y_t, Y_{t+1}, ..., Y_{t+k})). For simplicity, we use k = 1. Under regularity conditions [16,44], statistical functionals T_n = T(F_n) as above are approximately linear and admit an expansion

(13) T_n = T(F) + n^{−1} Σ_{t=1}^n IF(Y_t, F) + R_n,

in terms of the influence function IF(y, F), defined as IF(y, F) ≡ lim_{ε↓0} [T((1 − ε)F + εδ_y) − T(F)]/ε, y ∈ R, and an appropriately small remainder R_n; note that E[IF(Y_t, F)] = 0 holds. See [14] and [23] for such expansions with LRD Gaussian subordinated processes. To link to our previous block resampling developments (Sec.
2), a statistic T n as in (13) corresponds approximately to an averageX n = n t=1 X t /n of transformed LRD Gaussian observations X 1 , . . . , X n , where X t ≡ IF (Y t , F ) = IF (G(Z t ), F ) = G(Z t ) has Hermite rank denoted by m with αm < 1 here. That is, under appropriate conditions, the normalized statistic n αm/2 [T n − T (F )] = n αm/2X n + o p (1) has a distributional limit determined byX n (e.g., [47,48]) with a limiting variance lim n→∞ n αm Var(X n ) = v ∞,αm given by (4) as before. Results in [7] also suggest that compositions X t ≡ G(Z t ) = IF (G(Z t ), F ) may tend to produce Hermite ranks of m = 1, in which case n α/2 [T n − T (F )] will be asymptotically normal with asymptotic variance v ∞,αm . To estimate v ∞,αm through block resampling, we would ideally use X 1 , . . . , X n to obtain a variance estimator as in Section 2, which we denote as V ,αm ≡ V ,αm (X). Then, all estimation and block properties from Sections 3-4 would apply. Unfortunately, F is generally unknown in practice so that {X t ≡ IF (Y t , F )} n t=1 are unobservable from the data Y 1 , . . . , Y n . Consequently, V ,αm (X) represents an oracle estimator. In Sections 5.1-5.2, we detail two block-based strategies for estimating v ∞,αm based on either a substitution method or block jackknife. In both cases, these approaches can be as good as the oracle estimator V ,αm (X) under some conditions. These resampling results under LRD have counterparts to the SRD case [28,39], though we non-trivially include L-estimation in addition to M-estimation. 5.1. Substitution method. Classical substitution (i.e., plug-in) estimates F in the influence function IF (y, F ) with its empirical version F n (cf. [39,44]) and develops observations as X 1 , . . . , X n with X t ≡ IF (Y t , F n ). For example, in a smooth function T n = H(Ȳ n ) of the averageȲ n , we have IF (y, F n ) = H (Ȳ n )(y −Ȳ n ), where H denotes the derivative of H. 
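These estimated influences are simple to compute in practice. A minimal sketch (our helper names; the trimmed-mean form is the simplified quantile-trimming version used later in Section 6.2, which assumes a continuous marginal distribution):

```python
import numpy as np

def influence_smooth(y, H_prime):
    """Estimated influences IF(Y_t, F_n) = H'(Ybar) * (Y_t - Ybar)
    for a smooth-function-of-the-mean statistic T_n = H(Ybar)."""
    ybar = y.mean()
    return H_prime(ybar) * (y - ybar)

def influence_trimmed(y, d1=0.2, d2=0.8):
    """Estimated influences for a trimmed-mean L-statistic, in the
    simplified form y * I(q_{d1} < y < q_{d2}) / (d2 - d1)."""
    q1, q2 = np.quantile(y, [d1, d2])
    return y * ((y > q1) & (y < q2)) / (d2 - d1)
```

Either set of values X̂_t can then be fed into a block resampling variance estimator in place of the unobservable true influences IF(Y_t, F).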
We denote a resampling variance estimator computed from such observations as V ,αm ( X). To compare V ,αm ( X) to the oracle estimator V ,αm (X), we require bounds between estimated IF (y, F n ) and true influence functions IF (y, F ). For weakly dependent processes and M-estimators, [28] considered pointwise expansions of IF (y, F ) − IF (y, F n ) as linear combinations of other functions in y. We need to generalize the concept of such expansions to accommodate LRD and more general functionals (e.g., L-estimators) as follows. Condition-I: There exist random variables U 1,n , U 2,n , W n and real constants c, d ∈ R, C > 0 such that, for any generic real values y 1 , . . . , y k and k ≥ 1, it holds that 1 k k j=1 IF (y j , F )− 1 k k j=1 IF (y j , F n )+U 1,n ≤ |W n |+|U 2,n |   d c 1 k k j=1 h λ (y j ) 2 dλ   1/2 , where |W n | = O p (n −αm ); |U 1,n |, |U 2,n | = O p (n −αm/2 ); and, as indexed by λ ∈ [c, d], h λ (·) denotes a real-valued function such that h λ (Y t ) = h λ (G(Z t )) has mean zero, variance E[h λ (Y t )] 2 ≤ C, and Hermite rank of at least m (the rank of G(Z t ) = IF (Y t , F )). For context, if we set αm = 1 above and skip the notion of Hermite rank, then Condition-I would include, as a special case, an assumption used by [28] with weakly dependent processes. However, under LRD, we need to explicitly incorporate Hermite ranks in bounds. If we define m y ≥ 1 as the Hermite rank of an indicator function I(Y t ≤ y) = I(G(Z t ) ≤ y) for y ∈ R, then the smallest rank m * ≡ {m y : y ∈ R} is known to be useful for describing convergence of the empirical distribution [F n (·) − F (·)] (cf. [14]). One general way to ensure any function h λ (·) appearing in Condition-I has Hermite rank of at least m (the rank of IF (Y t , F )) is that m = m * . The reason is that m * sets a lower bound on the Hermite rank of any function of Y t (cf. (2.5) of [14]). 
Such equality m = m * appears implicit in work of [23] on statistical functionals under LRD and holds automatically when m = 1. We show next that the statistics T n in Examples 1-3 can satisfy Condition-I. (ii) Example 2 (M-estimation) where a constant C > 0 and a neighborhood N 0 of T (F ) exist such that |ψ(y, θ)| ≤ C on R × N 0 ;ψ ≡ ∂ψ/∂θ exists and |ψ(y, θ)| ≤ C on R × N 0 ; |ψ(y, θ 1 ) −ψ(y, θ 2 )| ≤ C|θ 1 − θ 2 | for y ∈ R, θ 1 , θ 2 ∈ N 0 ; Eψ(Y t , T (F )) = 0; and either m = m * holds or the Hermite rank of ψ(Y t , θ) remains the same for θ ∈ N 0 . (iii) Example 3 (L-estimation) where J is bounded and Lipschtiz on [0, 1] with J(t) = 0 when t ∈ [0, δ 1 ] ∪ [δ 2 , 1] for some 0 < δ 1 < δ 2 < 1; and either m = m * holds or m ≤ min{m y : y 1 ≤ y ≤ y 2 } for some real y 1 < y 2 with 0 < F (y 1 ) < δ 1 < δ 2 < F (y 2 ) < 1. THEOREM 5.1. For Y t =G(Z t ), suppose X t ≡ G(Z t ) = IF (Y t , Theorem 5.1 assumptions for Examples 1-2, dropping Hermite rank conditions, match those of [28]. Smooth function statistics in Example 1 have influence functions X t = IF (Y t , F ) as a linear combination of the baseline functions φ j (Y t ), 1 ≤ j ≤ l, so that the smallest Hermite rank among these typically gives the Hermite rank m of IF (Y t , F ). In M-estimation, the Hermite rank of X t ≡ G(Z t ) = IF (Y t , F ) matches that of ψ(Y t , T (F )) and it is sufficient that ψ(Y t , θ) maintains the same rank m in a θ-neighborhood of T (F ); the latter condition is mild and implies that the rank ofψ(Y t , T (F )) must be at least m, which is important asψ(·, T (F )) arises in Condition-I under M-estimation. 
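Hermite ranks of the kind arising here can be checked numerically: the rank of g(Z) is the smallest j ≥ 1 with E[g(Z) He_j(Z)] ≠ 0 for probabilists' Hermite polynomials He_j. A Monte-Carlo sketch (our helper; the sample size and tolerance are ad hoc choices, and the Huber constant c = 1.345 is a conventional value):

```python
import numpy as np

def hermite_rank(g, jmax=4, n=200_000, tol=0.05, seed=0):
    """Smallest j >= 1 with E[g(Z) He_j(Z)] != 0 for Z ~ N(0,1),
    estimated by Monte Carlo (probabilists' Hermite polynomials)."""
    z = np.random.default_rng(seed).standard_normal(n)
    He = {1: z, 2: z**2 - 1, 3: z**3 - 3*z, 4: z**4 - 6*z**2 + 3}
    gz = g(z)
    for j in range(1, jmax + 1):
        if abs(np.mean(gz * He[j])) > tol:
            return j
    return None   # rank exceeds jmax (or g is essentially constant)

c = 1.345
huber = lambda z: np.clip(z, -c, c)             # Huber psi(z, 0)
huber_dot = lambda z: (np.abs(z) <= c) * 1.0    # its derivative I(|z| <= c)
```

Here hermite_rank(huber) returns 1 while hermite_rank(huber_dot) returns 2, matching the ranks quoted for the Huber example in the discussion below.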
To illustrate with a standard normal Y_t = Z_t, M-estimation of the process mean uses ψ(Z_t, θ) = Z_t − θ, with a constant Hermite rank of 1 as a function of θ and a derivative ψ̇(Y_t, T(F)) = −1 of infinite rank; similarly, Huber estimation uses ψ(Z_t, θ) = max{−c, min{Z_t − θ, c}} (for some c > 0), which has constant rank 1 for θ in a neighborhood of T(F) = 0 here, while the derivative ψ̇(Z_t, T(F)) = I(|Z_t| ≤ c) has rank 2. For general L-estimation, conditions on the Hermite ranks m_y of the indicator functions I(Y_t ≤ y) (or the empirical distribution F_n(y)) are necessary, particularly when trimming percentages δ_1, δ_2 are involved; in this case, we may use the rank m_y of F_n(y) over a y-region [y_1, y_2] that is not trimmed away.

Theorem 5.2 establishes that the oracle resampling estimator V̂_{ℓ,αm}(X) (true influences) and the plug-in version V̂_{ℓ,αm}(X̂) (estimated influences) are often close, to the extent that the latter is as good as the former. Blocks may be either OL or NOL below.

THEOREM 5.2. For Y_t = G̃(Z_t), suppose X_t ≡ G(Z_t) = IF(Y_t, F) has Hermite rank m ≥ 1 with αm < 1, Condition-I holds, and ℓ^{−1} + ℓ/n → 0 as n → ∞. Then,

V̂_{ℓ,αm}(X̂) = V̂_{ℓ,αm}(X) + O_p((ℓ/n)^{αm/2}) + O_p(n^{−αm/2}).

Theorem 5.2 is the LRD analog of a result by [28] for weakly dependent processes (i.e., setting αm = 1 above). As in the SRD case, the difference between the estimators is often no larger than the estimation error O_p((ℓ/n)^{min{α,1/2}}) from the standard deviation of the oracle V̂_{ℓ,αm}(X) (Theorem 3.2). Consequently, optimal block orders and convergence rates for V̂_{ℓ,αm}(X) (Section 4.1) generally apply to the substitution version V̂_{ℓ,αm}(X̂). The block rule of Section 4.3 can also be applied to X̂_1, ..., X̂_n, which we illustrate in Section 6.

5.2. Block jackknife (BJK) method. For estimating the asymptotic variance v_{∞,αm} of the functional T_n, a block jackknife (BJK) estimator is also possible under LRD.
BJK uses only OL data blocks, as NOL blocks are generally invalid here (Remark 4.1, [28]). For j = 1, ..., N ≡ n − ℓ + 1, we compute the functional T_n^{(j)} after removing the observations in the jth OL block (Y_j, ..., Y_{j+ℓ−1}) from the data (Y_1, ..., Y_n). The BJK estimator of v_{∞,αm} is then

V̂^{BJK}_{ℓ,αm,OL} = [(N − 1)²/ℓ²] (ℓ^{αm}/N) Σ_{j=1}^N (T_n^{(j)} − T̄_n)², T̄_n ≡ N^{−1} Σ_{j=1}^N T_n^{(j)}.

Unlike the plug-in method (Sec. 5.1), BJK does not involve influence functions, but instead uses repeated evaluations of the functional. For the sample mean statistic T_n = Σ_{t=1}^n Y_t/n, the BJK estimator matches the plug-in estimator V̂_{ℓ,αm}(X̂) ≡ V̂_{ℓ,αm,OL}(X̂) with OL blocks (cf. [28]). More generally, these two estimators may differ, though not substantially, as shown in Theorem 5.3. To state the result, for each OL data block j = 1, ..., N, we define a remainder

S_n^{(j)} ≡ (n − ℓ)^{−1} Σ_{1≤t≤n, t∉[j,j+ℓ−1]} X̂_t − n^{−1} Σ_{t=1}^n X̂_t,

which involves an average of the estimated values {X̂_i ≡ IF(Y_i, F_n)}_{i=1}^n after removing the jth block.

THEOREM 5.3. For Y_t = G̃(Z_t), suppose X_t ≡ G(Z_t) = IF(Y_t, F) has Hermite rank m ≥ 1 with αm < 1, and that the OL block plug-in estimator V̂_{ℓ,αm,OL}(X̂) is consistent. Then, V̂^{BJK}_{ℓ,αm} = V̂_{ℓ,αm,OL}(X̂) + O_p(ℓ/n) holds as n → ∞ if ℓ^{αm} Σ_{j=1}^N [S_n^{(j)}]²/N = O_p(ℓ⁴/[n²(N − 1)²]); the latter is true under the Theorem 5.1 assumptions for Examples 1-3.

The above difference O_p(ℓ/n) between the BJK and plug-in estimators holds similarly under weak dependence (akin to setting αm = 1 above), which improves the bound O_p(ℓ^{3/2}/n) originally given by [28] (Theorem 4.2). Theorems 5.2-5.3 show that BJK can also differ no more from the oracle estimator V̂_{ℓ,αm,OL}(X) than the plug-in estimator V̂_{ℓ,αm,OL}(X̂).

6. Numerical Illustrations and Applications. 6.1. Illustration of MSE over block sizes. Here we describe an initial numerical study of the MSE-behavior of resampling variance estimators under LRD.
In particular, results of Section 3 suggest that OL/NOL resampling blocks should induce identical large-sample performances under strong dependence (e.g., α < 1/2) and that optimal blocks should generally decrease in size as the covariance strength increases (cf. Sec 4.1). LRD series were generated as X t = H 2 (Z t ) or X t = H 3 (Z t ), using three values of the memory parameter with α < 1/m for m = 2 or m = 3, based on a standardized Fractional Gaussian process Z t with covariances as in (1) (i.e., H = (2 − α)/2). For each simulated series, OL/NOL block-based estimators V ,αm of the variance v n,αm = n αm Var(X n ) were computed over a sequence of block sizes . Repeating this procedure over 3000 simulation runs and averaging differences ( V ,αm − v n,αm ) 2 /v 2 n,αm produced approximations of standardized MSE-curves E( V ,αm − v n,αm ) 2 /v 2 n,αm , as shown in Figure 1 with sample sizes n = 1000 or 5000. The MSE curves are quite close between OL/NOL blocks, particularly as sample sizes increase to n = 5000, in agreement with theory. Also, as suggested by Section 4.1, MSEs should improve at the best block choice as covariance strength increases under LRD (α ↓), which is visible in Figure 1. Table 1 presents best block lengths from the figure, showing that optimal blocks decrease for these LRD processes with decreasing α. The supplement [54] provides additional simulation studies to further illustrate bias/variance behavior of resampling estimators. 6.2. Resampling variance estimation by empirical block size. We next examine empirical block choices for resampling variance estimation of the sample mean and provide comparison to other approaches under LRD. Application to another functional is then considered. We use the data-based rule (12) of Section 4.3 for choosing a block size. 
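The data generation and estimators in this study can be sketched compactly. The sketch below is a minimal illustration under our own naming: fractional Gaussian noise is drawn by Cholesky factorization of the exact Toeplitz covariance (practical only for short series; a small jitter is added for numerical stability), and the same function computes either the OL or NOL resampling variance estimator.

```python
import numpy as np

def fgn(n, alpha, rng):
    """Standardized fractional Gaussian noise with gamma_Z(k) ~ k^{-alpha}
    (Hurst index H = (2 - alpha)/2), via Cholesky of the Toeplitz covariance."""
    H = (2 - alpha) / 2
    k = np.arange(n, dtype=float)
    g = 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    idx = np.arange(n)
    cov = g[np.abs(idx[:, None] - idx[None, :])]
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))
    return L @ rng.standard_normal(n)

def block_var(x, ell, am, overlap=True):
    """OL or NOL block resampling estimate of ell^{am} * Var(block mean)."""
    n = len(x)
    if overlap:
        c = np.concatenate(([0.0], np.cumsum(x)))
        means = (c[ell:] - c[:-ell]) / ell
    else:
        b = n // ell
        means = x[:b * ell].reshape(b, ell).mean(axis=1)
    return ell**am * np.mean((means - means.mean())**2)

rng = np.random.default_rng(0)
z = fgn(400, 0.3, rng)
x = z**2 - 1                     # X_t = H_2(Z_t): Hermite rank m = 2, am = 0.6
v_ol = block_var(x, 20, 0.6)
v_nol = block_var(x, 20, 0.6, overlap=False)
```

Repeating such computations over many simulated series and block sizes, and averaging standardized squared errors, yields empirical MSE-curves of the kind shown in Figure 1.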
We first compare resampling estimators V of the sample mean's variance v n,αm between block selections = n and = n 1/2 , where the latter represents a reasonable choice under LRD by theory in Section 4.1. OL blocks are used along with local Whittle estimation αm of the memory parameter αm (Sec. 4.2). Similarly to Section 6.1, we simulated samples from LRD processes defined by X t = H m (Z t ) for m = 2 with α = 0.20, 0.45 or m = 3, α = 0.20, 0.30 and approximated the MSE E( V − v n,αm ) 2 /v 2 n,αm using 500 simulations. Table 2 provides these results. Estimated blocks n are generally better than the default n 1/2 , though the latter is also competitive. The default seems preferable with a small sample size and particularly strong dependence (e.g., n = 500, m = 3, α = 0.2), but empirical block selections show improved MSEs with increased sample sizes n = 1000, 2000 under LRD. For comparison against resampling variance estimators, we also consider the Bartlettkernel heteroskedasticity and autocorrelation consistent (HAC) estimator [52] and the memory and autocorrelation consistent (MAC) estimator [45], whose large-sample properties have been studied for the sample mean with linear LRD processes (cf. [2,18]), but not for transformed LRD series X t = G(Z t ). As numerical suggestions from [2], we implemented HAC and MAC estimators of the sample mean's variance using bandwidths n 1/5 , n 4/5 , respectively; the HAC approach further used local Whittle estimation of the memory parameter αm, like the resampling estimator. The MSEs of HAC/MAC estimators are given in Table 3 (approximated from 500 simulation runs) for comparison against the resampling estimators in Table 2 with the same processes. 
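For reference, a minimal sketch of a Bartlett-kernel estimator of v_n = n^{αm} Var(sample mean), as used in this comparison, might look as follows. The lag-window convention and the externally supplied memory estimate am are our assumptions; with am = 1 (the SRD case) this reduces to a standard long-run variance estimate.

```python
import numpy as np

def hac_bartlett(x, b, am):
    """Bartlett-kernel (Newey-West style) estimate of n^{am} * Var(sample mean),
    with bandwidth b and a supplied memory estimate am (am = 1 under SRD)."""
    n = len(x)
    xc = x - x.mean()
    s = np.dot(xc, xc) / n                                  # gamma_hat(0)
    for k in range(1, b + 1):
        s += 2 * (1 - k / (b + 1)) * np.dot(xc[:n - k], xc[k:]) / n
    # Var-hat(mean) = s / n, so n^{am} * Var-hat(mean) = n^{am-1} * s
    return n**(am - 1) * s
```

For i.i.d. data with am = 1 and a small bandwidth, the estimate is close to Var(X_t), which is consistent with the bandwidth n^{1/5} being too small to capture strong dependence in the non-linear LRD settings discussed below.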
For the process X_t = H_2(Z_t) with α = 0.45, the HAC/MAC estimators emerge as slightly better than the resampling approach with estimated block sizes, though the resampling estimator outperforms the HAC/MAC estimators as the dependence increases (smaller α) or as the Hermite rank increases (m = 3). With small sample sizes n and strong dependence, the HAC estimator can exhibit large MSEs, indicating that the bandwidth n^{1/5} is perhaps too small for the non-linear LRD series in these settings. In comparison, the empirical block selections with resampling estimators show consistently reasonable MSE-performance in all cases, which is appealing.

We further consider a different statistical functional with resampling estimators and empirical blocks ℓ̂_n. In the notation of Section 5, we simulated stretches Y_1, ..., Y_n of LRD processes defined by Y_t = H_2(Z_t) or Y_t = sin(Z_t) and considered an L-estimator T_n given by a 40% trimmed mean based on the empirical distribution F_n (i.e., δ_1 = 1 − δ_2 = 0.2 in Example 3, Sec. 5). For either process, the influence function X_t ≡ IF(Y_t, F) = Y_t I(F^{−1}(0.2) < Y_t < F^{−1}(0.8))/0.6 has Hermite rank m = 1, where F and F^{−1} denote the distribution and quantile functions, respectively, of Y_t. To estimate the variance, say v_{n,αm}, of n^{αm/2} T_n, we apply the substitution method (Sec. 5.1). That is, using the estimated influences X̂_t ≡ IF(Y_t, F_n) = Y_t I(F_n^{−1}(0.2) < Y_t < F_n^{−1}(0.8))/0.6, we obtain an estimator αm̂ of the memory parameter by local Whittle estimation and compute a plug-in resampling variance estimator V̂_ℓ(X̂). Table 4 provides MSEs (i.e., E(V̂_ℓ(X̂) − v_{n,αm})²/v²_{n,αm}, approximated from 500 simulation runs) with block choices ℓ = ℓ̂_n or n^{1/2} over sample sizes; the empirically chosen blocks again perform well with the plug-in variance estimator here, though the choice n^{1/2} also appears reasonable.

6.3. Resampling distribution estimation by empirical block size.
Block selection also plays an important role in other resampling inference, such as approximating full sampling distributions with block bootstrap for purposes of tests and confidence intervals. While optimal block sizes for distribution estimation are difficult and unknown under LRD, we may apply blocking notions developed here for guidance. For distributional approximations of sample means and other statistics as in Section 5, the block bootstrap is valid with transformed LRD series when a normal limit exists (e.g., Hermite rank m = 1) [29]. Such normality may occur commonly in practice [7] and can be further assessed as described in Section 6.4. To study empirical blocks for distribution estimation with the bootstrap, we consider two LRD processes as Y t = sin(Z t ) or Y t = Z t + 20 −1 H 2 (Z t ) defined by Gaussian {Z t } as before with memory exponent α. Based on a size n sample, block bootstrap is applied to approximate the distribution of ∆ n ≡ n αm/2 |T n − θ|, where T n ≡ T (F n ) represents either the sample mean or the 40% trimmed mean (Example 3, Sec. 5)), while θ ≡ T (F ) denotes the corresponding process mean or trimmed mean parameter. In sample mean case, we compute αm using local Whittle estimation with data stretch X 1 = Y 1 , . . . , X n = Y n and define a bootstrap averageX * b by resampling b ≡ n/ OL data blocks of length * n ≡ b 1/2 αm/2 |X * b − E * X * b | (cf. [29]), where E * X * b = n− +1 i=1X i, /(n − + 1) is a bootstrap expected average. In the trimmed mean case, the estimator αm and the bootstrap approximation ∆ * n are similarly defined from estimated values X t ≡ IF (Y n , F n ) = Y t I(F −1 n (0.2) < Y t < F −1 n (0.8))/0.6, t = 1, . . . , n. We construct 95% bootstrap confidence intervals for θ by approximating the 95th percentile of ∆ n with the bootstrap counterpart from ∆ * n (based on 200 bootstrap re-creations). 
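The bootstrap construction just described can be sketched as follows for the sample-mean case. This is a minimal illustration with our own naming: the memory estimate am is taken as given, and the interval is formed by inverting the bootstrap percentile of Δ*_n in the natural way.

```python
import numpy as np

def block_boot_ci(x, ell, am, B=200, level=0.95, seed=0):
    """Block-bootstrap interval for the process mean: approximate the
    'level' percentile of Delta_n = n^{am/2} |mean - mu| by resampling
    b = n // ell overlapping blocks of length ell."""
    rng = np.random.default_rng(seed)
    n = len(x)
    b = n // ell
    c = np.concatenate(([0.0], np.cumsum(x)))
    blk_means = (c[ell:] - c[:-ell]) / ell        # all OL block means
    e_star = blk_means.mean()                     # bootstrap expected average
    delta = np.empty(B)
    for k in range(B):
        xbar_star = rng.choice(blk_means, size=b).mean()   # with replacement
        delta[k] = (b * ell)**(am / 2) * abs(xbar_star - e_star)
    half = np.quantile(delta, level) / n**(am / 2)
    return x.mean() - half, x.mean() + half
```

For the trimmed mean, the same construction would be applied to the estimated influence values X̂_t rather than to the raw data.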
Note that, for these processes and statistics, the effective Hermite rank is m = 1 (i.e., the rank of X_t = IF(Y_t, F)), so that the bootstrap should be valid in theory. We used the empirical rule (12) as a guide for selecting a block length ℓ. Table 5 shows the empirical coverages of 95% bootstrap intervals with samples of size n = 1000 or n = 5000 (based on 500 simulation runs). For the strongest LRD case α = 0.02, bootstrap intervals exhibit under-coverage, as perhaps expected, though accuracy improves with increasing sample size n in this case. The bootstrap performs well in the other cases of long memory. The coverage rates of bootstrap intervals are closer to the nominal level with empirically chosen blocks ℓ̂_n than with the standard choice n^{1/2}, for both the sample mean and the trimmed mean. This suggests that the data-driven rule for blocks provides a reasonable guidepost for resampling distribution estimation, as an application beyond variance estimation.

6.4. A test of Hermite rank/normality. In a concluding numerical example, we wish to illustrate that data blocking has impacts for inference under LRD beyond resampling. One basic application of data blocks is testing the null hypothesis that the Hermite rank is m = 1 for a transformed LRD process X_t = G(Z_t) against the alternative m > 1. This type of assessment has practical value in applications. For example, analyses in financial econometrics can involve LRD models with assumptions about m (cf. [11]). More generally, inference from sample averages under LRD may use normal theory only if m = 1 [47,48]. Even considering resampling approximations under LRD, the block bootstrap (i.e., full data re-creation) becomes valid when m = 1 [29], while subsampling (i.e., small-scale re-creation) should be used instead if m > 1 [6,10,20]. Based on data X_1, ..., X_n from a LRD process X_t = G(Z_t), a simple assessment of H_0: m = 1 can be based on data blocks of length ℓ as follows.
The idea is to make averages (say) W i ≡ j=1 X j+ (i−1) / of length blocks, i = 1, . . . , b ≡ n/ , as in Section 2.2, and then Algorithm 1: Hermite rank test for m = 1 (normality) from LRD series Data: Given a LRD sample X 1 , .., Xn. Set initializations: block size ; number M of resamples; and significance level α sig = 0.05. Step 1. Calculate a test statistic T 0 for normality (e.g., Anderson-Darling) from block averages; Step 2. Estimate αm the memory parameter αm or Hurst Index H = 1 − αm/2; Step 3. for k = 1, . . . , M do (i) Simulate a Fractional Brownian motion sample {B * H ( 1 n ), . . . , B * H ( n n )} with Hurst index H; (ii) Obtain a bootstrap sample X * 1 , . . . , X * n as X * 1 = B * H ( 1 n ) and X * j = B * H ( j n ) − B * H ( j−1 n ), j = 2, . . . , n; (iii) Calculate the kth bootstrap test statistic, T * k , for normality from block averages in X * 1 , . . . , X * n ; end Step 4. Compute q as the 1 − α sig sample percentile of {T * 1 , . . . , T * M }; Step 5. Reject if T 0 > q. check their agreement to normality. Letting Φ(·) denote a standard normal cdf, we compare the collection of residuals R i ≡ Φ((W i −W )/S W ) to a uniform(0, 1) distribution, wherē W , S W are the average and standard deviation of {W 1 , . . . , W b }. In a usual fashion, we can assess uniformity by applying a Kolmogorov-Smirnov statistic or an Anderson-Darling [1] statistic (e.g., A ≡ −b − b −1 b i=1 (2i − 1) log{R (i) [1 − R (b+1−i) ]} for ordered R (i) ) . Of course, the distribution of such a test statistic requires calibration under the null H 0 : m = 1. However, a central limit theorem [49] for LRD processes X t = G(Z t ) when m = 1 gives 1 Var(X n ) 1 n nt i=1 (X i − µ) d → cB H (t), 0 ≤ t ≤ 1,(14) as n → ∞, where B H (t) denotes fractional Brownian motion with Hurst index H = 1 − α/2 ∈ (0, 1) and c > 0 is a process constant. Note that (14) no longer holds under LRD when m > 1, which aids in testing. 
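Step 1 of Algorithm 1 — the Anderson-Darling statistic computed from non-overlapping block averages — can be sketched as follows. This is a minimal illustration (our helper name); the fBm-bootstrap calibration of Steps 2-5 is omitted, and a clipping guard is added so the logarithms stay finite.

```python
import numpy as np
from math import erf, sqrt

def ad_block_stat(x, ell):
    """Anderson-Darling statistic A for normality of the b = n // ell
    non-overlapping block averages W_1, ..., W_b (Step 1 of Algorithm 1)."""
    b = len(x) // ell
    W = np.asarray(x[:b * ell], dtype=float).reshape(b, ell).mean(axis=1)
    U = (W - W.mean()) / W.std()
    R = np.sort([0.5 * (1 + erf(u / sqrt(2))) for u in U])   # Phi residuals, ordered
    R = np.clip(R, 1e-12, 1 - 1e-12)                         # guard the logs
    i = np.arange(1, b + 1)
    return -b - np.mean((2 * i - 1) * (np.log(R) + np.log(1 - R[::-1])))
```

Large values of A indicate non-normal block averages; under H_0: m = 1 the null distribution would be calibrated by simulating fractional Brownian motion with the estimated Hurst index, as in Steps 2-5 of Algorithm 1.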
The property (14) suggests a simple bootstrap procedure for recreating the null distribution of residual-based test statistics, given in Algorithm 1, because such statistics are invariant to the location-scale used in a bootstrap sample. The role of data blocking for tests of Hermite rank H 0 : m = 1 with LRD series X t = G(Z t ) may be traced to recent work of [9]. Those authors test for m = 1 (normality) with a cumulant-based two-sample t-test, using two samples generated from data by different OL block resampling approaches. Our block-based test is different and perhaps more basic. To briefly compare these tests, we use data generation settings from [9] where X t = G(Z t ) with G(z) = z + 20 −1 H 2 (z) + (20 √ 3) −1 H 3 (z) (i.e., m = 1) or G(z) = cos(z) (i.e., m = 2), and Z t denotes a standardized FAIRMA(0, (1 − α)/2, 0) Gaussian process for α = 0.2, 0.8. The test in [9] uses block lengths = n 1/4 or = n 1/2 , where some best-case results provided there assume the memory parameter to be known. To facilitate comparison against these, we simply use a similar block = n 1/2 for our test, as a reasonable choice under LRD, and consider both OL/NOL blocks; we also use local Whittle estimation of the memory parameter along with 200 resamples in Algorithm 1. Table 6 lists power (based on 500 simulation runs) of our test using a 5% nominal level compared to test findings of [9] (Table 2); we report an Anderson-Darling statistic in Table 6 though a Kolmogorov-Smirnov statistic produced similar results. Both the proposed test and the [9]-test maintain the nominal size for the LRD processes with m = 1, but our block-based test has much larger power for the LRD process defined by m = 2 and α = 0.2. The process defined by m = 2 and α = 0.8 in Table 6 is actually SRD; as both our test and the [9]-test are block-based assessments of normality, both tests should maintain their sizes in this case and our test performs a bit better. 
This illustrates that data blocking has potential for assessments beyond its usage in resampling.

TABLE 6. Rejection rates of nominal 5% tests of Hermite rank m = 1 from the block-based statistic (Algorithm 1 with OL/NOL blocks ℓ = n^{1/2}) with LRD series X_t ≡ G(Z_t) using G_1(z) = z + 20^{−1}H_2(z) + (20√3)^{−1}H_3(z) or G_2(z) = cos(z). Results from the test of [9] (OL blocks ℓ = n^{1/2} or n^{1/4}) are included. [Table entries, indexed by α ∈ {0.20, 0.80}, testing method, and n ∈ {400, 1000, 10000}, are not reproduced here.]

7. Concluding Remarks. While block-based resampling methods provide useful estimation under data dependence, their performance is intricately linked to a block length parameter, which is important to understand. This problem has been extensively investigated under weak or short-range dependence (SRD) (cf. [31], ch. 3), though relatively little has been known under long-range dependence (LRD), especially outside the pure Gaussian case (cf. [26]). For general LRD processes X_t = G(Z_t), based on transforming an LRD Gaussian series Z_t, our results showed that the properties and best block sizes of resampling variance estimators under LRD can depend intricately on the covariance strength and on the structure of the non-linearity in G(·). The long-memory guess O(n^{1/2}) for block size [22] may have the optimal order at times, owing more to such non-linearity than to intuition about LRD. Additionally, we provided a data-based rule for best block selection, which was shown to be consistent under complex cases for blocks with LRD. While we focused on a variance estimation problem with resampling under LRD, block selection for distribution estimation is also of interest, though it seemingly requires further difficult study of distributional expansions for statistics from LRD series X_t = G(Z_t). However, we showed that the block selections developed here can provide helpful benchmarks for choosing block sizes in resampling and other block-based inference problems under LRD.
The current work may also suggest future possibilities toward estimating the Hermite rank m (or other ranks) under LRD. No estimators of m currently exist; instead, only estimation of the long-memory exponent αm of X_t = G(Z_t) has been possible, which depends on the covariance decay rate α < 1/m of Z_t. Results here established that the variance of a block resampling estimator depends only on α, apart from the Hermite rank m itself. This suggests that some higher-order moment estimation may be investigated for separately estimating the memory coefficient α and the Hermite rank m under LRD.

APPENDIX A: COEFFICIENT OF OPTIMAL BLOCK SIZE

The coefficient K_{α,m,m_2} of Corollary 4.1 is presented by cases, with the notation: A ≡ B_0²(m), B ≡ (2C_0^{m_2} J²_{m_2}/m_2!)², C ≡ (B_0(m) + 2 Σ_{j=m_2}^∞ (J_j²/j!) Σ_{k=1}^∞ [γ_Z(k)]^j)², and D ≡ (2C_0^{m_2}/[(1 − m_2α)(2 − m_2α)] · (J²_{m_2}/m_2!))², related to (7); E ≡ v²_{∞,αm} in (4); and F ≡ a_α from Theorem 3.2.

Case 1: m_2 = ∞. K_{α,m,m_2} equals
[−(1−2α)√(AE) + √((1−2α)² AE + 4α(1−α) A(E+F))]/[2α(E+F)], if 0 < α < 0.5, m = 1;
(A/F)^{0.5}, if α = 0.5, m = 1;
(2A(1−α)/F)^{1/(3−2α)}, if 0.5 < α < 1, m = 1;
(A(1−αm)/(Fα))^{1/(2(1+α−αm))}, if 0 < α < 1/m, m ≥ 2.

Case 2: m_2 < ∞ with αm_2 > 1. K_{α,m,m_2} equals
[−(1−2α)√(CE) + √((1−2α)² CE + 4α(1−α) C(E+F))]/[2α(E+F)], if 1/m_2 < α < 0.5, m = 1;
(C/F)^{0.5}, if α = 0.5, m = 1;
(2C(1−α)/F)^{1/(3−2α)}, if max{1/m_2, 0.5} < α < 1, m = 1;
(C(1−αm)/(Fα))^{1/(2(1+α−αm))}, if 1/m_2 < α < 1/m, m ≥ 2.

Case 3: m_2 < ∞ with αm_2 = 1. K_{α,m,m_2} equals
[−(1−2α)√(BE) + √((1−2α)² BE + 4α(1−α) B(E+F))]/[2α(E+F)], if 0 < α = 1/m_2 < 0.5, m = 1;
(B/F)^{0.5}, if α = 1/m_2 = 0.5, m = 1;
(B(1−αm)/(Fα))^{1/(2(1+α−αm))}, if 0 < α = 1/m_2 < 0.5, m ≥ 2.

Case 4: m_2 < ∞ with αm_2 < 1. K_{α,m,m_2} equals
([(m_2−2) + √((m_2−2)² + (m_2−1)(E+F)D)]/(E+F))^{1/(αm_2)}, if 0 < α < 1/m_2, m = 1;
(D(m_2−m)/F)^{1/(2α(1+m_2−m))}, if 0 < α < 1/m_2, m ≥ 2.

APPENDIX B: PROOF OF THEOREM 3.2

The appendix considers the proof of the large-sample variance expansion (Theorem 3.2) of
resampling estimators. Proofs are other results are shown in the supplement [54]. To derive variance expansions of Theorem 3.2, we first consider the OL block variance estimator V ,αm,OL = N i=1 αm (X i, − µ) 2 /N − αm ( µ n,OL − µ) 2 from (5), expressed in terms of the process mean EX t = µ, the number N = n − + 1 of blocks, the block averages X i, ≡ i+ −1 j=i X j / (integer i), and µ n,OL = N i=1X i, /N . Due to mean centering, we may assume µ = 0 without loss of generality. We then write the variance of V ,αm,OL as Var V ,αm,OL = v 1, +v 2, −2c , c ≡ Cov 1 N N i=1 αmX 2 i, , αm µ 2 n,OL = c a, +c b, , v 1, ≡ Var 1 N N i=1 αmX 2 i, = v 1a, + v 1b, , v 2, ≡ Var αm µ 2 n,OL = v 2a, + v 2b, , where each variance/covariance component v 1, , v 2, and c is decomposed into two further subcomponents v 1a, ≡ 2 2αm N N k=−N 1 − |k| N Cov(X 0, ,X k, ) 2 , v 1b, ≡ 2αm N N k=−N 1 − |k| N cum(X 0, ,X 0, ,X k, ,X k, ),(15) v 2a, ≡ 2 2αm [Var( µ n,OL )] 2 , v 2b, ≡ 2αm cum( µ n,OL , µ n,OL , µ n,OL , µ n,OL ), c a, ≡ 2 2αm N N i=1 Cov(X i, , µ n,OL ) 2 , c b, ≡ 2αm N 3 N i=1 N j=1 N k=1 cum(X i, ,X i, ,X j, ,X k, ), consisting of sums involving 4th order cumulants (v 1b, , v 2b, , c b, ) or sums involving covari- ances (v 1a, , v 2a, , c a, ). The second decomposition step follows from the product theorem for cumulants (e.g., Cov(Y 1 Y 2 , Y 3 Y 4 ) = Cov(Y 1 , Y 3 )Cov(Y 2 , Y 4 ) + Cov(Y 1 , Y 4 )Cov(Y 2 , Y 3 ) + cum(Y 1 , Y 2 , Y 3 , Y 4 ) for arbitrary random variables with EY i = 0 and EY 4 i < ∞). Note that EX 4 t = E[G(Z 0 )] 4 < ∞ and G ∈ G 4 (1) imply these variance components exist finitely for any n, (see (S.9) or Lemma 3 of the supplement [54]). Collecting terms, we have Var V ,αm,OL = Γ ,OL + ∆ ,OL where Γ ,OL ≡ v 1a, + v 2a, − 2c a, and ∆ ,OL ≡ v 1b, + v 2b, − 2c b, denote sums over covariance-terms Γ ,OL or sums of 4th order cumulant terms ∆ ,OL . 
In the NOL estimator case $\widehat V_{\ell,\alpha m,\mathrm{NOL}}$, the variance expansion is similar, $\mathrm{Var}\,\widehat V_{\ell,\alpha m,\mathrm{NOL}} = \Gamma_{\ell,\mathrm{NOL}} + \Delta_{\ell,\mathrm{NOL}}$, with the convention that $\Gamma_{\ell,\mathrm{NOL}},\Delta_{\ell,\mathrm{NOL}}$ are defined by replacing the OL block number $N=n-\ell+1$, averages $\bar X_{i,\ell}$ (or $\bar X_{j,\ell},\bar X_{k,\ell}$) and estimator $\hat\mu_{n,\mathrm{OL}} = \sum_{i=1}^{N}\bar X_{i,\ell}/N$ with the NOL counterparts $b = \lfloor n/\ell\rfloor$, $\bar X_{1+(i-1)\ell,\ell}$ (or $\bar X_{1+(j-1)\ell,\ell},\bar X_{1+(k-1)\ell,\ell}$) and $\hat\mu_{n,\mathrm{NOL}} = \sum_{i=1}^{b}\bar X_{1+(i-1)\ell,\ell}/b$ in $v_{1a,\ell}, v_{2a,\ell}, c_{a,\ell}, v_{1b,\ell}, v_{2b,\ell}, c_{b,\ell}$. Let $\Gamma_\ell$ denote either counterpart $\Gamma_{\ell,\mathrm{OL}}$ or $\Gamma_{\ell,\mathrm{NOL}}$, and $\Delta_\ell$ denote either $\Delta_{\ell,\mathrm{OL}}$ or $\Delta_{\ell,\mathrm{NOL}}$. Theorem 3.2 then follows by establishing that
\[
\Gamma_\ell = \begin{cases}
O\big((\ell/n)^{\min\{1,2\alpha m\}}[\log n]^{I(2\alpha m=1)}\big) = o\big((\ell/n)^{2\alpha}\big) & \text{if } m\ge 2,\\
a_\alpha(\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}(1+o(1)) & \text{if } m=1,
\end{cases}
\]
\[
\Delta_\ell = \begin{cases}
\phi_{\alpha,m}(\ell/n)^{2\alpha}(1+o(1)) + r_{n,\alpha,m,m_p} & \text{if } m\ge 2,\\
o\big((\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}\big) + r_{n,\alpha,m,m_p} & \text{if } m=1,
\end{cases}
\]
where $I(\cdot)$ denotes an indicator function and $r_{n,\alpha,m,m_p}$ is defined in Theorem 3.2.

To establish these expansions of $\Gamma_\ell,\Delta_\ell$, we require a series of technical lemmas (Lemmas 1-4), involving certain graph-theoretic moment expansions. To provide some illustration, Lemma 1 and its proof are outlined in Appendix C; the remaining lemmas are described in the supplement [54]. Define an order constant $\tau_{\ell,m} \equiv (\ell/n)^{2\alpha}$ if $m\ge 2$ and $\tau_{\ell,m} \equiv (\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}$ if $m=1$; we suppress the dependence of $\tau_{\ell,m}$ on $n$ and $\alpha$ for simplicity. Then, the above expansion of $\Gamma_\ell$ follows directly from Lemma 4 (i.e., $\Gamma_\ell = a_\alpha\tau_{\ell,1}(1+o(1))$ if $m=1$ and $\Gamma_\ell = o(\tau_{\ell,m})$ if $m\ge 2$). For handling $\Delta_\ell$, Lemma 1 gives that $v_{1b,\ell} = r_{n,\alpha,m,m_p} + \phi_{\alpha,m}\tau_{\ell,m}(1+o(1))$ when $m\ge 2$ and $v_{1b,\ell} = r_{n,\alpha,m,m_p} + o(\tau_{\ell,m})$ when $m=1$. Combined with this, the expansion of $\Delta_\ell$ then follows from Lemmas 2 and 3, which respectively show that $c_{b,\ell} = o(\tau_{\ell,m})$ and $v_{2b,\ell} = o(\tau_{\ell,m})$ for any $m\ge 1$.

APPENDIX C: LEMMA 1 (DOMINANT 4TH ORDER CUMULANT TERMS)

In the proof of Theorem 3.2 (Appendix B), recall that $v_{1b,\ell}$ from (15) represents a sum of 4th order cumulants of block averages with OL blocks, where the version with NOL blocks is $v_{1b,\ell} \equiv b^{-1}\ell^{2\alpha m}\sum_{k=-b}^{b}\big(1-\frac{|k|}{b}\big)\mathrm{cum}(\bar X_{0,\ell},\bar X_{0,\ell},\bar X_{k\ell,\ell},\bar X_{k\ell,\ell})$. Lemma 1 provides an expansion of $v_{1b,\ell}$ under LRD, which is valid in either OL/NOL block case.
The proof of Lemma 1 rests on a graph-theoretic representation of the joint cumulant of Hermite polynomials $H_{k_1}(Y_1), H_{k_2}(Y_2), H_{k_3}(Y_3), H_{k_4}(Y_4)$ ($k_1,k_2,k_3,k_4\ge 1$) at a generic sequence $(Y_1,Y_2,Y_3,Y_4)$ of marginally standard normal variables, with covariances $\mathrm{E}Y_iY_j = r_{ij} = r_{ji}$ for $1\le i<j\le 4$. Namely, it holds that
\[
\mathrm{cum}[H_{k_1}(Y_1),H_{k_2}(Y_2),H_{k_3}(Y_3),H_{k_4}(Y_4)] = k_1!k_2!k_3!k_4!\sum_{A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)} g(A)\,r(A), \tag{16}
\]
where above $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ denotes the collection of all path-connected multigraphs on a generic set of four points/vertices $p_1,p_2,p_3,p_4$, such that point $p_i$ has degree $k_i$ for $i=1,2,3,4$. Each multigraph $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}) \in \mathcal{A}_c(k_1,k_2,k_3,k_4)$ is defined by distinct counts $v_{ij}=v_{ji}\ge 0$, interpreted as the number of graph lines connecting points $p_i$ and $p_j$, $1\le i<j\le 4$; no other lines are possible in $A$. Then $g(A) \equiv 1/[\prod_{1\le i<j\le 4}(v_{ij}!)]$ represents a so-called multiplicity factor, while $r(A) \equiv \prod_{1\le i<j\le 4} r_{ij}^{v_{ij}}$ represents a weighted product of covariances among variables in $(Y_1,Y_2,Y_3,Y_4)$ (cf. [48]). Membership $A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)$ requires degrees $k_i = \sum_{j:j\ne i} v_{ij}$ for $i=1,2,3,4$ (e.g., $k_2 = v_{12}+v_{23}+v_{24}$) as well as a path-connection in $A$ between any two points $p_i$ and $p_j$; namely, for any given $1\le i<j\le 4$, an index sequence $i \equiv i_0, i_1,\dots,i_m \equiv j \in \{1,2,3,4\}$ (for some $1\le m\le 3$) must exist whereby $v_{i_{u-1}i_u} > 0$ holds for each $u=1,\dots,m$, entailing that consecutive points among $p_i = p_{i_0}, p_{i_1},\dots,p_{i_m} = p_j$ are connected with lines in $A$. In (16), $\mathrm{cum}[H_{k_1}(Y_1),H_{k_2}(Y_2),H_{k_3}(Y_3),H_{k_4}(Y_4)] = 0$ holds whenever $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ is empty; given integers $k_1,k_2,k_3,k_4\ge 1$, $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ will be empty if $k_1+k_2+k_3+k_4 = 2q$ fails to hold for some integer $q\ge 2$ with $\max_{i=1,2,3,4}k_i\le q$. See the supplement [54] for more background and details on this graph-theoretic representation.

Proof of Lemma 1.
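The representation (16) can be checked by brute force for small degrees: enumerate all count vectors $(v_{12},\dots,v_{34})$ meeting the degree constraints, keep the path-connected ones, and sum $g(A)r(A)$. The sketch below (ours, for illustration) does this in pure Python. For example, with $Y_1=Y_2=Y_3=Y_4$ (all $r_{ij}=1$) and $k_i=2$, formula (16) recovers the 4th cumulant of $H_2(Y)=Y^2-1$, i.e. of a centered $\chi^2_1$, which equals 48; meanwhile $\mathcal{A}_c(1,1,1,1)$ is empty (a degree-1 perfect matching cannot be connected), so the joint cumulant of four jointly Gaussian $H_1$ terms is 0.

```python
from itertools import product
from math import factorial

PAIRS = ((0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3))  # pairs (p_i, p_j), i < j

def _connected(v):
    # union-find over the 4 points, using only edges with v_ij > 0
    parent = list(range(4))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for (i, j), vij in zip(PAIRS, v):
        if vij > 0:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(4)}) == 1

def hermite_cum4(k, r):
    """cum[H_{k1}(Y1),...,H_{k4}(Y4)] via the multigraph formula (16).
    k = (k1,k2,k3,k4); r[(i,j)] = E[Y_{i+1} Y_{j+1}] for 0 <= i < j <= 3."""
    coef = 1
    for ki in k:
        coef *= factorial(ki)
    caps = [min(k[i], k[j]) for (i, j) in PAIRS]  # v_ij <= min(k_i, k_j)
    total = 0.0
    for v in product(*(range(c + 1) for c in caps)):
        deg = [0, 0, 0, 0]
        for (i, j), vij in zip(PAIRS, v):
            deg[i] += vij
            deg[j] += vij
        if deg != list(k) or not _connected(v):
            continue
        g, rA = 1.0, 1.0
        for (i, j), vij in zip(PAIRS, v):
            g /= factorial(vij)       # multiplicity factor g(A)
            rA *= r[(i, j)] ** vij    # covariance product r(A)
        total += g * rA
    return coef * total
```

The enumeration grows quickly with the degrees, so this is only practical for small $k_i$, but it makes the role of the connectivity requirement concrete.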
We focus on the OL block version (15) of $v_{1b,\ell}$; the NOL block case follows by the same essential arguments, though the cumulant sums involved with NOL blocks are less involved and simpler to handle. We assume $\mu=0$ for reference; the mean $\mu \equiv \mathrm{E}G(Z_t)$ does not impact the 4th order cumulants here. Define $\tau_{\ell,m} \equiv (\ell/n)^{2\alpha}$ if $m\ge 2$ (i.e., $\alpha<1/2$) and $\tau_{\ell,m} \equiv (\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}$ if $m=1$, to describe the order of interest in Lemma 1 along cases $m\ge 2$ or $m=1$. Using Lemma 3 (i.e., $\max_{0\le k\le 2\ell}|\mathrm{cum}(\bar X_{0,\ell},\bar X_{0,\ell},\bar X_{k,\ell},\bar X_{k,\ell})|$ is $O(\ell^{-2\alpha m})$ for $m\ge 2$ or $O(\ell^{-\alpha-\min\{1,2\alpha\}}[\log\ell]^{I(\alpha=1/2)})$ for $m=1$), we may truncate the sum $v_{1b,\ell}$ in (15) as
\[
v_{1b,\ell} = v^{\mathrm{trun}}_{1b,\ell} + O\Big(\frac{\ell^{1+I(m=1)(\alpha-\min\{1,2\alpha\})}}{n}[\log\ell]^{I(\alpha=1/2)}\Big), \tag{17}
\]
where the order term is $o(\tau_{\ell,m})$. Then, using that $G\in\mathcal{G}_4(1)$ (cf. (S.9) of the supplement [54]) along with the 4th order cumulant form (16) for Hermite polynomials, we may use the multilinearity of cumulants and the Hermite expansion (2) to express $v^{\mathrm{trun}}_{1b,\ell}$ in (17) as
\[
\begin{aligned}
v^{\mathrm{trun}}_{1b,\ell} &\equiv \frac{2\ell^{2\alpha m}}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)\mathrm{cum}(\bar X_{0,\ell},\bar X_{0,\ell},\bar X_{k,\ell},\bar X_{k,\ell})\\
&= \frac{2\ell^{2\alpha m}}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)\sum_{m\le k_1,k_2,k_3,k_4}\Big(\prod_{i=1}^{4}\frac{J_{k_i}}{k_i!}\Big)\frac{1}{\ell^4}\sum_{1\le i_1,i_2,i_3,i_4\le\ell}\mathrm{cum}\big(H_{k_1}(Z_{i_1}),H_{k_2}(Z_{i_2}),H_{k_3}(Z_{i_3+k}),H_{k_4}(Z_{i_4+k})\big)\\
&= 2\sum_{q=2m}^{\infty}\ \sum_{\substack{k_1+k_2+k_3+k_4=2q,\\ m\le k_1,k_2,k_3,k_4\le q}}\ \prod_{i=1}^{4}J_{k_i}\sum_{\substack{A\in\mathcal{A}_c(k_1,k_2,k_3,k_4),\\ \mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset}} g(A)\,\frac{1}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)R_{\ell,k}(A)\\
&\equiv t_{1,\ell}+t_{2,\ell}+t_{3,\ell}\quad\text{(say)},
\end{aligned} \tag{18}
\]
with $g(A) = 1/[\prod_{1\le i<j\le 4}(v_{ij}!)]$, $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}) \in \mathcal{A}_c(k_1,k_2,k_3,k_4)$, and
\[
\begin{aligned}
R_{\ell,k}(A) \equiv \frac{\ell^{2\alpha m}}{\ell^4}\sum_{i_1=1}^{\ell}\sum_{i_2=1}^{\ell}\sum_{i_3=1}^{\ell}\sum_{i_4=1}^{\ell}\ &[\gamma_Z(i_1-i_2)]^{v_{12}}[\gamma_Z(i_1-i_3-k)]^{v_{13}}[\gamma_Z(i_1-i_4-k)]^{v_{14}}\\
&\times[\gamma_Z(i_2-i_3-k)]^{v_{23}}[\gamma_Z(i_2-i_4-k)]^{v_{24}}[\gamma_Z(i_3-i_4)]^{v_{34}}
\end{aligned} \tag{19}
\]
in terms of Gaussian process covariances $\gamma_Z(\cdot)$ from (1).
In (18), we use that the Hermite rank of $G$ is $m$ (so $J_k=0$ for $k<m$); that the 4th order cumulants (16) of Hermite polynomials are zero when a collection $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ is empty; and, relatedly, that for given integers $k_1,k_2,k_3,k_4\ge m$, a non-empty collection $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ requires the number of lines, say $q \equiv (k_1+k_2+k_3+k_4)/2$, of a graph $A$ in $\mathcal{A}_c(k_1,k_2,k_3,k_4)$ to be an integer $q\ge 2m$ with $m\le\max_{i=1,2,3,4}k_i\le q$. The three components $t_{1,\ell}$, $t_{2,\ell}$ and $t_{3,\ell}$ in (18) are defined by splitting the sum into three mutually exclusive cases, depending on the number of lines $q = \sum_{1\le i<j\le 4}v_{ij}$ and the value of $v_{13}+v_{14}+v_{23}+v_{24} = q-v_{12}-v_{34}$ for a multigraph $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}) \in \mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset$; note that the counts $v_{13},v_{14},v_{23},v_{24}$ involve covariances $\gamma_Z(\cdot)$ at large lags in (19) (i.e., larger than $\ell$ by $k\ge 2\ell+1$), which is not true of the counts $v_{12},v_{34}$. The three cases for defining $t_{1,\ell}$, $t_{2,\ell}$ and $t_{3,\ell}$ are given by: (i) the case $q=2m$ with $k_1=k_2=k_3=k_4=m$, which yields
\[
t_{1,\ell} \equiv 2J_m^4\sum_{A\in\mathcal{A}_c(m,m,m,m)\ne\emptyset} g(A)\,\frac{1}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)R_{\ell,k}(A); \tag{20}
\]
(ii) the case that $q\ge 2m+1$ where the sum over $A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)$ is also restricted to multigraphs $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34})$ with $v_{13}+v_{14}+v_{23}+v_{24}=1$, which yields
\[
t_{2,\ell} \equiv 2\sum_{q=2m+1}^{\infty}\ \sum_{\substack{k_1+k_2+k_3+k_4=2q,\\ m\le k_1,k_2,k_3,k_4\le q}}\ \prod_{i=1}^{4}J_{k_i}\sum_{\substack{A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset,\\ v_{13}+v_{14}+v_{23}+v_{24}=1}} g(A)\,\frac{1}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)R_{\ell,k}(A);
\]
and (iii) the final case that $q\ge 2m+1$ with the sum over $A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)$ containing those connected graphs $A$ where $v_{13}+v_{14}+v_{23}+v_{24}\ge 2$, which yields
\[
t_{3,\ell} \equiv 2\sum_{q=2m+1}^{\infty}\ \sum_{\substack{k_1+k_2+k_3+k_4=2q,\\ m\le k_1,k_2,k_3,k_4\le q}}\ \prod_{i=1}^{4}J_{k_i}\sum_{\substack{A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset,\\ v_{13}+v_{14}+v_{23}+v_{24}\ge 2}} g(A)\,\frac{1}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)R_{\ell,k}(A).
\]
Note that, for a connected multigraph $A$ here (i.e., $A\in\mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset$ for some $k_1,k_2,k_3,k_4\ge m$), a case $v_{13}+v_{14}+v_{23}+v_{24}=0$ is not possible (it would entail that points $\{p_1,p_2\}$ are not path-connected in $A$ to points $\{p_3,p_4\}$); consequently, the terms $t_{2,\ell}$ and $t_{3,\ell}$ address all graphs $A$ with $q\ge 2m+1$ lines, while graphs with $q=2m$ lines appear in $t_{1,\ell}$. Lemma 1 will then follow from (17)-(18) by showing
\[
t_{3,\ell} = o(\tau_{\ell,m}), \tag{21}
\]
\[
t_{2,\ell} = r_{n,\alpha,m,m_p}(1+o(1)), \tag{22}
\]
\[
t_{1,\ell} = \phi_{\alpha,m}(\ell/n)^{2\alpha}(1+o(1)) \ \text{ if } m\ge 2 \quad \& \quad t_{1,\ell} = 0 \ \text{ if } m=1, \tag{23}
\]
for $\tau_{\ell,m} \equiv (\ell/n)^{2\alpha}$ if $m\ge 2$ and $\tau_{\ell,m} \equiv (\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}$ if $m=1$, with $r_{n,\alpha,m,m_p}$ as in Theorem 3.2 (where $r_{n,\alpha,m,m_p}\equiv 0$ when $m_p=\infty$).

We first consider showing (22) for $t_{2,\ell}$. Recall that $t_{2,\ell}$ is defined by sums over connected multigraphs $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}) \in \mathcal{A}_c(k_1,k_2,k_3,k_4)\ne\emptyset$ involving $q \equiv (k_1+k_2+k_3+k_4)/2 \ge 2m+1$ lines and $1 = q-v_{12}-v_{34} = v_{13}+v_{14}+v_{23}+v_{24}$ (with $k_1,k_2,k_3,k_4\ge m$). In any such graph $A$, exactly one value among $v_{13},v_{14},v_{23},v_{24}$ equals 1, implying four possible cases for the configuration of degrees, namely $(k_1=v_{12},\,k_4=v_{34},\,k_2=k_1+1,\,k_3=k_4+1)$ or $(k_1=v_{12},\,k_3=v_{34},\,k_2=k_1+1,\,k_4=k_3+1)$ or $(k_2=v_{12},\,k_4=v_{34},\,k_1=k_2+1,\,k_3=k_4+1)$ or $(k_2=v_{12},\,k_3=v_{34},\,k_1=k_2+1,\,k_4=k_3+1)$, with $k_1,k_2,k_3,k_4\ge m$. Consequently, $t_{2,\ell}$ can be re-written using (19) as
\[
t_{2,\ell} = 2\sum_{j_1=m_p}^{\infty}\sum_{j_2=m_p}^{\infty}J_{j_1}J_{j_1+1}J_{j_2}J_{j_2+1}\frac{1}{j_1!j_2!}\,\frac{1}{N}\sum_{k=2\ell+1}^{N}\Big(1-\frac{k}{N}\Big)R^{*}_{\ell,k,j_1,j_2}, \tag{24}
\]
due to the Hermite pair-rank $m_p \equiv \inf\{k\ge 1: J_kJ_{k+1}\ne 0\}\ \ge m$, for a covariance sum
\[
R^{*}_{\ell,k,j_1,j_2} \equiv \frac{\ell^{2\alpha m}}{\ell^4}\sum_{i_1=1}^{\ell}\sum_{i_2=1}^{\ell}\sum_{i_3=1}^{\ell}\sum_{i_4=1}^{\ell}[\gamma_Z(i_1-i_2)]^{j_1}[\gamma_Z(i_3-i_4)]^{j_2}\big(\gamma_Z(i_1-i_3-k)+\gamma_Z(i_1-i_4-k)+\gamma_Z(i_2-i_3-k)+\gamma_Z(i_2-i_4-k)\big),\quad k\ge 2\ell+1.
\]
Note that $t_{2,\ell}=0$ if $m_p=+\infty$ (i.e., if $J_kJ_{k+1}=0$ for all $k\ge 1$). Hence, we may assume $m_p<\infty$ in the following to establish the form of $t_{2,\ell}$ in (22) along the possibilities $\alpha m_p<1$, $\alpha m_p=1$, or $\alpha m_p>1$.
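For a polynomial $G$, the Hermite coefficients $J_k = \mathrm{E}[G(Z)H_k(Z)]$ in the expansion (2) — and hence the rank $m$ and pair-rank $m_p$ — can be computed exactly from the Gaussian moments $\mathrm{E}[Z^{2j}] = (2j-1)!!$. A small pure-Python sketch (ours, for illustration): for $G(z)=z^2$ it returns $J_1=0$ and $J_2=2$, so $G$ has Hermite rank $m=2$.

```python
def hermite_poly(k):
    """Probabilists' Hermite polynomial He_k as a coefficient list
    (index = power), via He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    h0, h1 = [1], [0, 1]
    if k == 0:
        return h0
    for j in range(1, k):
        nxt = [0] + h1                 # x * He_j
        for p, c in enumerate(h0):     # subtract j * He_{j-1}
            nxt[p] -= j * c
        h0, h1 = h1, nxt
    return h1

def gaussian_moment(p):
    """E[Z^p] for Z ~ N(0,1): 0 if p is odd, (p-1)!! if p is even."""
    if p % 2 == 1:
        return 0
    m = 1
    for q in range(1, p, 2):
        m *= q
    return m

def hermite_coef(g, k):
    """J_k = E[G(Z) He_k(Z)] for a polynomial G given as coefficient list g."""
    he = hermite_poly(k)
    return sum(a * b * gaussian_moment(p + q)
               for p, a in enumerate(g) for q, b in enumerate(he))
```

Scanning `hermite_coef(g, k)` over `k = 1, 2, ...` gives the first nonzero coefficient (the rank $m$) and the first index with $J_kJ_{k+1}\ne 0$ (the pair-rank $m_p$).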
Due to the Gaussian covariance decay $\gamma_Z(k)\sim C_0k^{-\alpha}$ as $k\to\infty$ (cf. (1)), note that, for $k\ge 2$, $0\le i\le\ell-1$ and $j_1,j_2\ge m_p$, we can express $R^{*}_{\ell,k\ell+i,j_1,j_2}$ in (24) as
\[
R^{*}_{\ell,k\ell+i,j_1,j_2} = \ell^{2\alpha m-\min\{1,j_1\alpha\}-\min\{1,j_2\alpha\}}L_{\ell,j_1}L_{\ell,j_2}\times 4\ell^{-\alpha}C_0k^{-\alpha} + R^{**}_{\ell,k\ell+i,j_1,j_2}, \tag{25}
\]
through a covariance-type average
\[
L_{\ell,v} \equiv \ell^{\min\{1,v\alpha\}-1}\sum_{j=-\ell}^{\ell}\Big(1-\frac{|j|}{\ell}\Big)[\gamma_Z(j)]^{v},\quad v=0,1,2,\dots, \tag{26}
\]
along with a remainder $R^{**}_{\ell,k\ell+i,j_1,j_2}$ satisfying
\[
|R^{**}_{\ell,k\ell+i,j_1,j_2}| \le C\ell^{2\alpha m}(\log\ell)^{I(j_1=1/\alpha)+I(j_2=1/\alpha)}\ell^{-\min\{1,j_1\alpha\}-\min\{1,j_2\alpha\}}\,\ell^{-\alpha}(k-1)^{-\alpha}\big(4C_0\varepsilon_\ell + 2(k-1)^{-1}\big)
\]
for $\varepsilon_\ell \equiv \sup_{j\ge\ell}|\gamma_Z(j)/[C_0j^{-\alpha}]-1| = o(1)$ from (1) and for constants $C,C_1\ge 1$ not depending on $\ell,k\ge 2$, $0\le i\le\ell-1$ or $j_1,j_2\ge m_p$; here the bound on the remainder follows from
\[
\sup_{v\in\{0,1,2,\dots\}}(\log\ell)^{-I(v=1/\alpha)}\ell^{-1+\min\{1,v\alpha\}}\sum_{t=-\ell}^{\ell}|\gamma_Z(t)|^{v} \le C \tag{27}
\]
for some $C>0$ (i.e., by applying $|\gamma_Z(t)|\le\min\{1,C_1|t|^{-\alpha}\}$ for all $t\ge 1$), along with $|k\ell+i+i_1-i_2|\ge(k-1)\ell$ and $\big|\ell^{\alpha}|k\ell+i+i_1-i_2|^{-\alpha}-k^{-\alpha}\big|\le 2|k-1|^{-1-\alpha}$ (by Taylor expansion) for any $k\ge 2$, $0\le i,i_1,i_2\le\ell$. Hence, if $m_p\alpha<1$, we use (24)-(27), with $\sum_{j=m_p}^{\infty}|J_jJ_{j+1}|/j!<\infty$ by $G\in\mathcal{G}_4(1)$ (cf. (S.9) of [54]), to write
\[
\begin{aligned}
t_{2,\ell} &= 2\Big(\frac{J_{m_p}J_{m_p+1}}{m_p!}\Big)^2 L^2_{\ell,m_p}\,\ell^{2\alpha m-2\alpha m_p}\,\frac{\ell^{1-\alpha}}{N}\sum_{k=2}^{n/\ell}\Big(1-\frac{k\ell}{N}\Big)4C_0k^{-\alpha}\\
&\quad + \ell^{2\alpha m-2\alpha m_p}\,O\Big(\frac{\ell^{1-\alpha}}{N}\Big) + \ell^{2\alpha m}(\log\ell)^2\,O\big(\ell^{-\alpha m_p-\min\{1,\alpha(1+m_p)\}}\big) + \ell^{2\alpha m-2\alpha m_p}\Big[O\Big(\frac{\ell}{n}\Big) + \varepsilon_\ell\,O\Big(\frac{\ell^{1-\alpha}}{N}\sum_{k=2}^{n/\ell}k^{-\alpha}\Big)\Big]\\
&= 8C_0\Big(\frac{J_{m_p}J_{m_p+1}}{m_p!}\Big)^2\Big(\frac{2C_0^{m_p}}{(1-\alpha m_p)(2-\alpha m_p)}\Big)^2\frac{1}{(1-\alpha)(2-\alpha)}\,\frac{\ell^{2\alpha m-2\alpha m_p}}{n^{\alpha}}\,(1+o(1)),
\end{aligned}
\]
upon applying, as $n\to\infty$, that $\varepsilon_\ell\to 0$, $\ell/n\to 0$, $N/n\to 1$, $\ell^{\alpha m_p-\min\{1,\alpha(1+m_p)\}}(\log\ell)^2\to 0$ for $m_p\alpha<1$ ($0<\alpha<1$), and that
\[
L_{\ell,m_p}\to\frac{2C_0^{m_p}}{(1-\alpha m_p)(2-\alpha m_p)},\qquad \frac{\ell^{1-\alpha}}{N^{1-\alpha}}\sum_{k=2}^{n/\ell}\Big(1-\frac{k\ell}{N}\Big)k^{-\alpha}\to\frac{1}{(1-\alpha)(2-\alpha)}.
\]
This shows (22) for $t_{2,\ell}$ when $\alpha m_p<1$. If $\alpha m_p=1$ holds, then the derivation is similar by (24)-(27), now with $L_{\ell,m_p}$ normalized by $\log\ell$; in this case
\[
t_{2,\ell} = 8C_0\Big(\frac{J_{m_p}J_{m_p+1}}{m_p!}\Big)^2\big(2C_0^{m_p}\big)^2\frac{1}{(1-\alpha)(2-\alpha)}\,\frac{\ell^{2\alpha m-2}(\log\ell)^2}{n^{\alpha}}\,(1+o(1)),
\]
using instead that $L_{\ell,m_p}/\log\ell\to 2C_0^{m_p}>0$; this then gives (22) for $t_{2,\ell}$ when $\alpha m_p=1$. In the final case $\alpha m_p>1$, we analogously have
\[
\begin{aligned}
t_{2,\ell} &= 2\Big(\sum_{j=m_p}^{\infty}\frac{J_jJ_{j+1}}{j!}L_{\ell,j}\Big)^2\ell^{2\alpha m-2}\,\frac{\ell^{1-\alpha}}{N}\sum_{k=2}^{n/\ell}\Big(1-\frac{k\ell}{N}\Big)4C_0k^{-\alpha} + \ell^{2\alpha m-2}\Big[O\Big(\frac{\ell^{1-\alpha}}{N}\Big) + O\Big(\frac{\ell}{n}\Big) + \varepsilon_\ell\,O\Big(\frac{\ell^{1-\alpha}}{N}\sum_{k=2}^{n/\ell}k^{-\alpha}\Big)\Big]\\
&= 8C_0\Big(\sum_{j=m_p}^{\infty}\frac{J_jJ_{j+1}}{j!}\sum_{k=-\infty}^{\infty}[\gamma_Z(k)]^{j}\Big)^2\frac{1}{(1-\alpha)(2-\alpha)}\,\frac{\ell^{2\alpha m-2}}{n^{\alpha}}\,(1+o(1)),
\end{aligned}
\]
where $\sum_{j=m_p}^{\infty}\frac{J_jJ_{j+1}}{j!}L_{\ell,j}\to\sum_{j=m_p}^{\infty}\frac{J_jJ_{j+1}}{j!}\sum_{k=-\infty}^{\infty}[\gamma_Z(k)]^{j}\in\mathbb{R}$ follows from the Dominated Convergence Theorem, using that $L_{\ell,v}\to\sum_{k=-\infty}^{\infty}[\gamma_Z(k)]^{v}$ for each $v\ge m_p$; that $\sup_{v\ge m_p}|L_{\ell,v}|\le\sum_{k=-\infty}^{\infty}|\gamma_Z(k)|^{m_p}<\infty$ holds for all $\ell$ (i.e., $\alpha m_p>1$ and $|\gamma_Z(k)|^{v}\le|\gamma_Z(k)|^{m_p}\le C_1^{m_p}k^{-\alpha m_p}$ for all $k\ge 1$, $v\ge m_p$, by $\gamma_Z(0)=1$); and that $\sum_{j=m_p}^{\infty}|J_jJ_{j+1}|/j!<\infty$ (by $G\in\mathcal{G}_4(1)$). This shows (22) for $t_{2,\ell}$ when $\alpha m_p>1$ and concludes the establishment of (22).

We next consider showing (21) for $t_{3,\ell}$. For Gaussian covariances $\gamma_Z(\cdot)$, using again that $|\gamma_Z(k)|\le\min\{1,C_1k^{-\alpha}\}$ for all $k\ge 1$ (for some $C_1\ge 1$) under (1), we may bound $\max_{0\le i,i_1,i_2\le\ell}|\gamma_Z(k\ell+i+i_1-i_2)|\le\min\{1,C_1\ell^{-\alpha}(k-1)^{-\alpha}\}$ for any $k\ge 2$, which follows by $|k\ell+i+i_1-i_2|\ge(k-1)\ell$. Applying this covariance inequality in $R_{\ell,k\ell+i}(A)$ from (19) shows that, for a multigraph $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34})$ with $q = \sum_{1\le i<j\le 4}v_{ij}$ lines, we have
\[
\begin{aligned}
|R_{\ell,k\ell+i}(A)| &\le \frac{\ell^{2\alpha m}}{\ell^2}\sum_{t=-\ell}^{\ell}|\gamma_Z(t)|^{v_{12}}\sum_{t=-\ell}^{\ell}|\gamma_Z(t)|^{v_{34}}\,\big[\min\{1,C_1\ell^{-\alpha}(k-1)^{-\alpha}\}\big]^{q-v_{12}-v_{34}}\\
&\le C\ell^{2\alpha m}(\log\ell)^{I(v_{12}=1/\alpha)+I(v_{34}=1/\alpha)}\ell^{-\min\{1,v_{12}\alpha\}-\min\{1,v_{34}\alpha\}}\big[\min\{1,C_1\ell^{-\alpha}(k-1)^{-\alpha}\}\big]^{q-v_{12}-v_{34}}\\
&\equiv B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}}
\end{aligned} \tag{28}
\]
holds for a generic constant $C>0$, not depending on the integers $k,\ell\ge 2$ or $0\le i\le\ell-1$, or the values of $v_{12}$, $v_{34}$, $q-v_{12}-v_{34} = v_{13}+v_{14}+v_{23}+v_{24}\in\{0,1,2,\dots\}$; above $I(\cdot)$ denotes an indicator function and we used (27) for establishing the bound in (28).
Now, because $G\in\mathcal{G}_4(1)$ (cf. (S.9) of [54]) and because $|R_{\ell,k\ell+i}(A)|\le B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}}$ holds by (28) for a multigraph $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34})$ with $q = \sum_{1\le i<j\le 4}v_{ij}$ lines, $t_{3,\ell} = o(\tau_{\ell,m})$ will hold in (21) by establishing
\[
\sup_{\substack{A\equiv(v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}):\\ q\ge 2m+1;\ q-v_{12}-v_{34}\ge 2;\ k_1,k_2,k_3,k_4\ge m}}\ \frac{\ell}{n}\sum_{k=2}^{n/\ell}B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}} = o(\tau_{\ell,m}), \tag{29}
\]
noting $n/N\to 1$ for $N=n-\ell+1$. First, consider the case that $q-v_{12}-v_{34}\in\{2,\dots,2m+1\}$ with $0\le v_{12},v_{34}<1/\alpha$, so that (28) gives
\[
\frac{\ell}{n}\sum_{k=2}^{n/\ell}B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}} \le C\,C_1^{q}\,\ell^{2\alpha m-\alpha(v_{12}+v_{34})-\alpha(q-v_{12}-v_{34})}\,\frac{\ell}{n}\sum_{k=1}^{n/\ell}k^{-2\alpha} = O(\ell^{2\alpha m-\alpha q})\,O(\tau_{\ell,m}) = O(\ell^{-\alpha})\,O(\tau_{\ell,m}),
\]
using above that $q\ge 2m+1$; that $k^{-\alpha(q-v_{12}-v_{34})}\le k^{-2\alpha}$ for $k\ge 1$; and that $\frac{\ell}{n}\sum_{k=1}^{n/\ell}k^{-2\alpha} = O(\tau_{\ell,m})$ where $\ell^{-1}+\ell/n\to 0$. Note that the bound above is consequently $o(\tau_{\ell,m})$ and does not depend on the exact values of $0\le v_{12},v_{34}<1/\alpha$. In the case that $q-v_{12}-v_{34}\in\{2,\dots,2m+1\}$ with $v_{12},v_{34}\ge\lceil 1/\alpha\rceil$, we similarly obtain from (28) that
\[
\frac{\ell}{n}\sum_{k=2}^{n/\ell}B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}} \le C\ell^{2\alpha m-2}(\log\ell)^2\,\frac{\ell}{n}\sum_{k=1}^{n/\ell}[C_1(\ell k)^{-\alpha}]^{2} = O\big(\ell^{2\alpha m-2-2\alpha}(\log\ell)^2\big)\,O(\tau_{\ell,m}) = o(\tau_{\ell,m}),
\]
using $k^{-\alpha(q-v_{12}-v_{34})}\le k^{-2\alpha}$ for $k\ge 1$ and $\alpha m\le 1$; again the bound above does not depend on the exact values of $v_{12},v_{34}\ge\lceil 1/\alpha\rceil$, and the analog of the above also holds similarly when $v_{34}\ge\lceil 1/\alpha\rceil$ and $v_{12}<1/\alpha$ (using $q-v_{12}-v_{34} = k_1+k_2-2v_{12}\ge 2m-2v_{12}$). Hence, (29) will now follow by treating a final case that $q-v_{12}-v_{34}\ge 2m+2$. When $q-v_{12}-v_{34}\ge 2m+2$, we use the bound in (28) to write (for any $v_{12},v_{34}\ge 0$ and $q-v_{12}-v_{34}\ge 2m+2$) that
\[
\frac{\ell}{n}\sum_{k=2}^{n/\ell}B_{\ell,k,v_{12},v_{34},q-v_{12}-v_{34}} \le C\ell^{2\alpha m}(\log\ell)^2\,\frac{\ell}{n}\sum_{k=1}^{n/\ell}[C_1(\ell k)^{-\alpha}]^{2m+2} = o(\tau_{\ell,m}).
\]
This establishes (29) and consequently $t_{3,\ell} = o(\tau_{\ell,m})$ in (21).

To complete the proof of Lemma 1, we now consider establishing (23) for the term $t_{1,\ell}$ shown in (20). For $m\ge 2$, the dominant part of $t_{1,\ell}$ (cf. the decomposition $t_{1,\ell} = s_{1,\ell}+s_{2,\ell}$ described below) may be written as $2J_m^4[(m-1)!]^{-2}N^{-1}\sum_{k=2\ell+1}^{N}(1-N^{-1}k)R^{\dagger}_{\ell,k}$, for
\[
R^{\dagger}_{\ell,k} \equiv \frac{\ell^{2\alpha m}}{\ell^4}\sum_{i_1=1}^{\ell}\sum_{i_2=1}^{\ell}\sum_{i_3=1}^{\ell}\sum_{i_4=1}^{\ell}[\gamma_Z(i_1-i_2)\gamma_Z(i_3-i_4)]^{m-1}\big[\gamma_Z(i_1-i_3-k)\gamma_Z(i_2-i_4-k)+\gamma_Z(i_1-i_4-k)\gamma_Z(i_2-i_3-k)\big],\quad k\ge 2\ell.
\]
Similarly to the expansion in (25), we may use the Gaussian covariance decay
$\gamma_Z(k)\sim C_0k^{-\alpha}$ as $k\to\infty$ (cf. (1)) and Taylor expansion to re-write $R^{\dagger}_{\ell,k\ell+i}$, for any $k\ge 2$ and $0\le i\le\ell-1$, as
\[
R^{\dagger}_{\ell,k\ell+i} = \ell^{2\alpha m-2\alpha(m-1)}L^2_{\ell,m-1}\times 2\ell^{-2\alpha}C_0^2k^{-2\alpha} + R^{\ddagger}_{\ell,k\ell+i},
\]
with a covariance sum $L_{\ell,m-1}$ as in (26) and a remainder $R^{\ddagger}_{\ell,k\ell+i}$ satisfying
\[
|R^{\ddagger}_{\ell,k\ell+i}| \le C(k-1)^{-2\alpha}\big(\varepsilon_\ell+(k-1)^{-1}\big)
\]
for $\varepsilon_\ell \equiv \sup_{j\ge\ell}|\gamma_Z(j)/[C_0j^{-\alpha}]-1| = o(1)$ and for a constant $C\ge 1$ not depending on $\ell,k\ge 2$, $0\le i\le\ell-1$; the bound on the remainder follows from (27) (i.e., $|L_{\ell,m-1}|\le C$), along with $|k\ell+i+i_1-i_2|\ge(k-1)\ell$ in the covariance bound $|\gamma_Z(k\ell+i+i_1-i_2)|\le C_0(1+\varepsilon_\ell)[(k-1)\ell]^{-\alpha}$, as well as $\big|\ell^{2\alpha}|k\ell+i+i_1-i_2|^{-2\alpha}-k^{-2\alpha}\big|\le 2|k-1|^{-1-2\alpha}$ for any $k\ge 2$, $0\le i,i_1,i_2\le\ell$ (noting $2\alpha\le\alpha m<1$ with $m\ge 2$). Hence, we have $t_{1,\ell} = \phi_{\alpha,m}(\ell/n)^{2\alpha}(1+o(1))$ for $m\ge 2$, which concludes the proof of (23) as well as the proof of Lemma 1.

Acknowledgements. The authors are grateful to two anonymous referees and an Associate Editor for constructive comments that improved the quality of this paper. Research was supported by NSF DMS-1811998 and DMS-2015390.

SUPPLEMENTARY MATERIAL

Proofs and other technical details: A supplement [54] contains proofs and technical details along with further numerical results.

[Figure 1: MSE curves (over block length $\ell$) for resampling estimators with different LRD processes $(m,\alpha)$.]

[Table 4: MSE for plug-in resampling estimators of the variance of the 40% trimmed mean statistic $T_n$, based on blocks $\ell=\hat\ell_n$ or $\ell=n^{1/2}$, with LRD series $Y_t\equiv H_2(Z_t)$ or $Y_t=\sin(Z_t)$.]
[Table 4 caption, continued: $n = 500, 1000, 2000$. The empirical block selections perform better than the default $n^{1/2}$.]

For reference, when the Hermite rank $m\ge 2$, the contribution of $\Delta_\ell \propto (\ell/n)^{2\alpha}$ dominates the variance of $\widehat V_{\ell,\alpha m,\mathrm{OL}}$ or $\widehat V_{\ell,\alpha m,\mathrm{NOL}}$; when $m=1$, the contribution of $\Gamma_\ell \propto (\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}$ instead dominates the variance in Theorem 3.2.

LEMMA 1. Suppose the assumptions of Theorem 3.2 hold ($G$ has Hermite rank $m\ge 1$ and Hermite pair-rank $m\le m_p\le\infty$), with the positive constants $\phi_{\alpha,m}$, $\lambda_{\alpha,m_p}$ there. Then,
\[
v_{1b,\ell} = r_{n,\alpha,m,m_p} + \begin{cases}
\phi_{\alpha,m}(\ell/n)^{2\alpha}(1+o(1)) & \text{if } m\ge 2,\\
o\big((\ell/n)^{\min\{1,2\alpha\}}[\log n]^{I(\alpha=1/2)}\big) & \text{if } m=1,
\end{cases}
\]
where $I(\cdot)$ denotes an indicator function and $r_{n,\alpha,m,m_p}$ is from Theorem 3.2.

REMARK 2: The proof of Lemma 1 involves a standard, but technical, graph-theoretic representation of the 4th order cumulant among Hermite polynomials (Appendix C).

[Figure 2: An example of multigraph $A \equiv (1,0,1,1,0,1) \in \mathcal{A}_c(m,m,m,m)$ for $m=2$.]
When $q-v_{12}-v_{34}\in\{2,\dots,2m+1\}$ with $v_{12}\ge\lceil 1/\alpha\rceil$ and $v_{34}<1/\alpha$, we use $q-v_{12}-v_{34} = v_{13}+v_{14}+v_{23}+v_{24} = k_3+k_4-2v_{34}\ge 2m-2v_{34}$ in (29) (i.e., $k_3,k_4\ge m$) to write from (28) a bound of order $O(\ell^{\alpha v^{*}-1}\log\ell)\,O(\tau_{\ell,m}) = o(\tau_{\ell,m})$, where $v^{*} = \lceil 1/\alpha\rceil-1$ denotes the largest integer less than $1/\alpha$.

Returning to $t_{1,\ell}$ in (20), it involves a sum over multigraphs $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34}) \in \mathcal{A}_c(m,m,m,m)$. For such $A$, the degree requirement $k_1=k_2=k_3=k_4=m$ entails that $v_{12}=v_{34}$, $v_{13}=v_{24}$, $v_{14}=v_{23}$, so that we may prescribe the multigraph $A \equiv (v_{12},v_{13},m-v_{12}-v_{13},m-v_{12}-v_{13},v_{13},v_{12})$ having $q=2m$ lines in terms of two counts $0\le v_{12},v_{13}$ where $v_{12}+v_{13}\le m$. Additionally, for a connected multigraph $A\in\mathcal{A}_c(m,m,m,m)$, it cannot be the case that $v_{12}=m$ holds, which would imply $v_{12}=m$ lines between points $\{p_1,p_2\}$ and $v_{34}=m$ lines between points $\{p_3,p_4\}$, with no lines between these two point groups (i.e., $A$ would not be connected); likewise, it cannot be the case that $v_{13}=m$ or that $v_{12}+v_{13}=0$ (so that $v_{14} = k_1-v_{12}-v_{13} = m$). For this reason, $\mathcal{A}_c(m,m,m,m)$ is empty when $m=1$, so that the sum $t_{1,\ell}=0$ in (23) for $m=1$. When $m\ge 2$, it holds that $\mathcal{A}_c(m,m,m,m)\ne\emptyset$ and that the largest possible value of $v_{12}$ for some $A\in\mathcal{A}_c(m,m,m,m)$ is $m-1$, occurring when $v_{12}=m-1=v_{34}$ with either $v_{13}=1=v_{24}$ or $v_{14}=1=v_{23}$ (while, for reference, the smallest possible value of $v_{12}$ is 0, occurring when $v_{12}=0=v_{34}$ and $1\le v_{13}=v_{24}<m$ with $v_{14}=m-v_{13}=v_{23}$). As a consequence, when $m\ge 2$, we will split the sum $t_{1,\ell} = s_{1,\ell}+s_{2,\ell}$ into two parts, involving either a sum over $A\in\mathcal{A}_c(m,m,m,m)$ where $v_{12}=v_{34}=m-1$ (given by $s_{1,\ell}$) or a sum over $A\in\mathcal{A}_c(m,m,m,m)$ where $0\le v_{12}\le m-2$ (given by $s_{2,\ell}$). For the first sum, we use the form of $g(A) \equiv 1/[\prod_{1\le i<j\le 4}(v_{ij}!)]$ and $R_{\ell,k}(A)$ in $t_{1,\ell}$, together with the expansion of $R^{\dagger}_{\ell,k\ell+i}$ above, applying that $\varepsilon_\ell\to 0$, that $2\alpha\le\alpha m<1$ and $0<(m-1)\alpha\le\alpha m<1$ (by $m\ge 2$), and that
\[
\frac{\ell^{1-2\alpha}}{N^{1-2\alpha}}\sum_{k=2}^{n/\ell}\Big(1-\frac{k\ell}{N}\Big)k^{-2\alpha}\to\frac{1}{(1-2\alpha)(2-2\alpha)}\quad\text{as } n\to\infty \text{ with } N/n\to 1 \text{ and } \ell/n\to 0.
\]
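The Riemann-sum limit used here is easy to verify numerically: for $0<2\alpha<1$, $M^{-(1-2\alpha)}\sum_{k=2}^{M}(1-k/M)k^{-2\alpha}\to\int_0^1(1-x)x^{-2\alpha}\,dx = 1/[(1-2\alpha)(2-2\alpha)]$ as $M\to\infty$, where $M$ plays the role of $n/\ell$. A quick pure-Python check (ours):

```python
def scaled_sum(two_alpha, M):
    """M**-(1 - 2a) * sum_{k=2}^{M} (1 - k/M) * k**(-2a), which approximates
    the integral of (1 - x) * x**(-2a) over (0, 1), i.e. 1/((1-2a)(2-2a))."""
    s = sum((1.0 - k / M) * k ** (-two_alpha) for k in range(2, M + 1))
    return s / M ** (1.0 - two_alpha)
```

For instance, with $2\alpha = 0.4$ the limit is $1/(0.6\times 1.6)\approx 1.0417$, and the scaled sum approaches it as $M$ grows.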
For $m\ge 2$, this now establishes an expansion for the first sum, $s_{1,\ell} = \phi_{\alpha,m}(\ell/n)^{2\alpha}(1+o(1))$, in $t_{1,\ell} = s_{1,\ell}+s_{2,\ell}$. Now (23) will follow for $t_{1,\ell}$ (when $m\ge 2$) by showing $s_{2,\ell} = o((\ell/n)^{2\alpha})$. To this end, for a multigraph $A \equiv (v_{12},v_{13},v_{14},v_{23},v_{24},v_{34})\in\mathcal{A}_c(m,m,m,m)$ (with $v_{12}=v_{34}$, $v_{13}=v_{24}$, $v_{14}=v_{23}$, $v_{12}+v_{13}+v_{14}=m$), we may apply the bound from (28) to find that
\[
|R_{\ell,k\ell+i}(A)| \le C\ell^{2\alpha m-2\alpha v_{12}}\big[C_1\ell^{-\alpha}(k-1)^{-\alpha}\big]^{2m-2v_{12}} \le CC_1^{2m}(k-1)^{-4\alpha} \tag{30}
\]
holds for generic constants $C>0$, $C_1>1$, not depending on the integers $\ell,k\ge 2$, $0\le i\le\ell-1$, and the values of $0\le v_{12}\le m-2$; above we used $2m-2v_{12}\ge 4$. Application of the bound (30), with $\sum_{A\in\mathcal{A}_c(m,m,m,m)}g(A)<\infty$ (e.g., $\mathcal{A}_c(m,m,m,m)$ is finite), then yields $|s_{2,\ell}| = o((\ell/n)^{2\alpha})$, completing the proof.

TABLE 1. Optimal OL block sizes for LRD series with Hermite ranks m = 2 and 3.

               m = 2                  m = 3
alpha      0.400  0.425  0.450    0.300  0.315  0.330
n = 1000     4      6      9        2      3      5
n = 5000    12     21     35        8      9     18

TABLE 2. MSE for resampling estimators $\widehat V$ of the sample mean's variance, based on blocks $\ell=\hat\ell_n$ or $\ell=n^{1/2}$, with LRD series $X_t\equiv G(Z_t)$ for $G=H_2(Z_t)$ ($m=2$) and $G(Z_t)=H_3(Z_t)$ ($m=3$).
                 n = 500           n = 1000          n = 2000
G/m    alpha   l_hat  n^{1/2}    l_hat  n^{1/2}    l_hat  n^{1/2}
m = 2  0.20    0.294  0.316      0.236  0.248      0.180  0.214
       0.45    0.270  0.312      0.269  0.293      0.280  0.292
m = 3  0.20    0.917  0.754      0.805  0.858      0.455  0.495
       0.30    0.270  0.294      0.396  0.413      0.393  0.393

TABLE 3. MSE for HAC and MAC estimators of the sample mean's variance with LRD series.

TABLE 5. Empirical coverage probabilities of 95% block bootstrap confidence intervals for the process mean or 40% trimmed mean, based on block sizes $\ell=\hat\ell_n$ or $\ell=n^{0.5}$, with LRD series $\sin(Z_t)$ or $Z_t+20^{-1}H_2(Z_t)$.

Mean
                              alpha = 0.20      alpha = 0.50      alpha = 0.80
                            n = 1000   5000   n = 1000   5000   n = 1000   5000
sin(Z_t)            l_hat     0.820   0.866     0.932   0.944     0.964   0.958
                    n^{1/2}   0.766   0.814     0.878   0.914     0.958   0.934
Z_t + H_2(Z_t)/20   l_hat     0.848   0.864     0.922   0.930     0.942   0.960
                    n^{1/2}   0.806   0.854     0.914   0.894     0.938   0.952

Trimmed Mean
                              alpha = 0.20      alpha = 0.50      alpha = 0.80
                            n = 1000   5000   n = 1000   5000   n = 1000   5000
sin(Z_t)            l_hat     0.798   0.848     0.934   0.942     0.964   0.950
                    n^{1/2}   0.752   0.792     0.892   0.920     0.962   0.946
Z_t + H_2(Z_t)/20   l_hat     0.864   0.876     0.930   0.916     0.956   0.980
                    n^{1/2}   0.828   0.852     0.918   0.908     0.876   0.754

REFERENCES

[1] Anderson, T. W. & Darling, D. A. (1952). Asymptotic theory of certain "goodness of fit" criteria based on stochastic processes. Ann. Math. Statist., 23, 193-212.
[2] Abadir, K. M., Distaso, W. & Giraitis, L. (2009). Two estimators of the long-run variance: beyond short memory. J. Econometrics, 150, 56-70.
[3] Andrews, D. W. K. & Sun, Y. (2004). Adaptive local polynomial Whittle estimation of long-range dependence. Econometrica, 72, 569-614.
[4] Beran, J. (1991). M estimators of location for Gaussian and related processes with slowly decaying serial correlations. J. Amer. Statist. Assoc., 86, 704-708.
[5] Beran, J. (2010). Long-range dependence. Wiley Ser. Comput. Stat., 2, 26-35.
[6] Bai, S. & Taqqu, M. S. (2017). On the validity of resampling methods under long memory. Ann. Statist., 45, 2365-2399.
[7] Bai, S. & Taqqu, M. S. (2019). Sensitivity of the Hermite rank. Stochastic Process. Appl., 129, 822-840.
[8] Bai, S., Taqqu, M. S. & Zhang, T. (2016). A unified approach to self-normalized block sampling. Stochastic Process. Appl., 126, 2465-2493.
[9] Beran, J., Möhrle, S. & Ghosh, S. (2016). Testing for Hermite rank in Gaussian subordination processes. J. Comput. Graph. Statist., 25, 917-934.
[10] Betken, A. & Wendler, M. (2018). Subsampling for general statistics under long range dependence with application to change point analysis. Statist. Sinica, 28, 1199-1224.
[11] Breidt, J., Crato, N. & de Lima, P. (1998). Detection and estimation of long memory in stochastic volatility. J. Econometrics, 83, 325-348.
[12] Bühlmann, P. & Künsch, H. R. (1999). Block length selection in the bootstrap for time series. Comput. Statist. Data Anal., 31, 295-310.
[13] Carlstein, E. (1986). The use of subseries methods for estimating the variance of a general statistic from a stationary time series. Ann. Statist., 14, 1171-1179.
[14] Dehling, H. & Taqqu, M. S. (1989). The empirical process of some long-range dependent sequences with an application to U-statistics. Ann. Statist., 17, 1767-1783.
[15] Dobrushin, R. L. & Major, P. (1979). Non-central limit theorems for non-linear functionals of Gaussian fields. Z. Wahr. Verw. Gebiete, 50, 27-52.
[16] Fernholz, L. T. (2012). Von Mises Calculus for Statistical Functionals. Springer Science & Business Media.
[17] Flegal, J. M. & Jones, G. L. (2010). Batch means and spectral variance estimators in Markov chain Monte Carlo. Ann. Statist., 38, 1034-1070.
[18] Giraitis, L., Robinson, P. M. & Surgailis, D. (1999). Variance-type estimation of long memory. Stoch. Proc. Appl., 80, 1-24.
[19] Granger, C. W. J. & Joyeux, R. (1980). An introduction to long-memory time series models and fractional differencing. J. Time Ser. Anal., 1, 15-29.
[20] Hall, P., Horowitz, J. L. & Jing, B.-Y. (1995). On blocking rules for the bootstrap with dependent data. Biometrika, 82, 561-574.
[21] Hall, P. & Jing, B. (1996). On sample reuse methods for dependent data. J. R. Stat. Soc. Ser. B, 58, 727-737.
[22] Hall, P., Jing, B.-Y. & Lahiri, S. N. (1998). On the sampling window method for long-range dependent data. Statist. Sinica, 8, 1189-1204.
[23] Hössjer, O. & Mielniczuk, J. (1995). Delta method for long-range dependent observations. J. Nonparametr. Stat., 5, 75-82.
[24] Hurvich, C. M., Deo, R. & Brodsky, J. (1998). The mean square error of Geweke and Porter-Hudak's estimator of the memory parameter of a long memory time series. J. Time Ser. Anal., 19, 19-46.
[25] Jach, A., McElroy, T. & Politis, D. N. (2016). Corrigendum to 'Subsampling inference for the mean of heavy-tailed long-memory time series.' J. Time Ser. Anal., 37, 713-720.
[26] Kim, Y.-M. & Nordman, D. J. (2011). Properties of a block bootstrap under long-range dependence. Sankhyā A, 73, 79-109.
[27] Kreiss, J.-P. & Lahiri, S. N. (2012). Bootstrap methods for time series. In Handbook of Statist., 30, 3-26. Elsevier.
[28] Künsch, H. R. (1989). The jackknife and bootstrap for general stationary observations. Ann. Statist., 17, 1217-1261.
[29] Lahiri, S. N. (1993). On the moving block bootstrap under long range dependence. Statist. Probab. Lett., 11, 405-413.
[30] Lahiri, S. N. (1999). Theoretical comparisons of block bootstrap methods. Ann. Statist., 27, 386-404.
[31] Lahiri, S. N. (2003). Resampling Methods for Dependent Data. Springer, New York.
[32] Lahiri, S. N., Furukawa, K. & Lee, Y.-D. (2007). A nonparametric plug-in rule for selecting optimal block lengths for block bootstrap methods. Stat. Methodol., 3, 292-321.
[33] Liu, R. Y. & Singh, K. (1992). Moving blocks jackknife and bootstrap capture weak dependence. In Exploring the Limits of the Bootstrap (R. LePage & L. Billard, eds.), 225-248. Wiley, New York.
[34] Mandelbrot, B. B. & Van Ness, J. W. (1968). Fractional Brownian motions, fractional noises and applications. SIAM Rev., 10, 422-437.
[35] Montanari, A., Taqqu, M. S. & Teverovsky, V. (1999). Estimating long-range dependence in the presence of periodicity: an empirical study. Math. Comput. Modelling, 29, 217-228.
[36] Nordman, D. J. & Lahiri, S. N. (2005). Validity of the sampling window method for long-range dependent linear processes. Econom. Theory, 21, 1087-1111.
[37] Nordman, D. J. & Lahiri, S. N. (2014). Convergence rates of empirical block length selectors for block bootstrap. Bernoulli, 20, 958-978.
[38] Politis, D. N. (2003). The impact of bootstrap methods on time series analysis. Statist. Sci., 18, 219-230.
[39] Paparoditis, E. & Politis, D. N. (2002). The tapered block bootstrap for general statistics from stationary sequences. Econom. J., 5, 131-148.
[40] Politis, D. N. & Romano, J. P. (1994). Large sample confidence regions based on subsamples under minimal assumptions. Ann. Statist., 22, 2031-2050.
[41] Politis, D. N. & White, H. (2004). Automatic block-length selection for the dependent bootstrap. Econometric Rev., 23, 53-70.
[42] Politis, D. N., Romano, J. P. & Wolf, M. (1999). Subsampling. Springer, New York.
[43] Robinson, P. M. (1995). Log-periodogram regression of time series with long range dependence. Ann. Statist., 23, 1048-1072.
[44] Shao, J. (2003). Mathematical Statistics. Springer Science & Business Media.
Springer Science & Business Media. Robust covariance matrix estimation: HAC estimates with long memory/antipersistence correction. P M Robinson, Econom. Theory. 21Robinson, P. M. (2005). Robust covariance matrix estimation: HAC estimates with long mem- ory/antipersistence correction. Econom. Theory, 21, 171-180. Optimal mean-squared-error batch sizes. W T Song, B W Schmeiser, Manage Sci. 41Song, W. T. & Schmeiser, B. W. (1995). Optimal mean-squared-error batch sizes. Manage Sci., 41, 110-123. Weak convergence to fractional brownian motion and to the Rosenblatt process. M S Taqqu, Z. Wahr. Verw. Gebiete. 31Taqqu, M. S. (1975). Weak convergence to fractional brownian motion and to the Rosenblatt process. Z. Wahr. Verw. Gebiete, 31, 287-302. Law of the iterated logarithm for sums of non-linear functions of Gaussian variables that exhibit a long range dependence. M S Taqqu, Z. Wahr. Verw. Gebiete. 40Taqqu, M. S. (1977), Law of the iterated logarithm for sums of non-linear functions of Gaussian variables that exhibit a long range dependence. Z. Wahr. Verw. Gebiete, 40, 203-238. Convergence of integrated processes of arbitrary Hermite rank. M S Taqqu, Z. Wahr. Verw. Gebiete. 50Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Z. Wahr. Verw. Gebiete, 50, 53-83. Fractional Brownian motion and long-range dependence. M S Taqqu, Theory and applications of long-range dependence. Doukhan, P., Oppenheim, G. & Taqqu, M. S.Springer Science & Business MediaTaqqu, M. S. (2002). Fractional Brownian motion and long-range dependence. In Theory and applications of long-range dependence. Doukhan, P., Oppenheim, G. & Taqqu, M. S. (eds). Springer Science & Business Media. Testing for long-range dependence in the presence of shifting means or a slowly declining trend using a variance type estimator. V Teverovsky, M S Taqqu, J. Time Ser. Anal. 18Teverovsky, V. & Taqqu, M. S. (1997). 
Testing for long-range dependence in the presence of shifting means or a slowly declining trend using a variance type estimator. J. Time Ser. Anal., 18, 279-304. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. H White, Econometrica. 48White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for het- eroskedasticity. Econometrica, 48, 817-838. Block sampling under strong dependence. T Zhang, H.-C Ho, M Wendler, W B Wu, Stochastic Process. Appl. 123Zhang, T., Ho, H.-C., Wendler, M. & Wu, W. B. (2013). Block sampling under strong dependence. Stochastic Process. Appl., 123, 2323-2339. Supplement to "On optimal block resampling for Gaussian-subordinated long-range dependent processes. Q Zhang, S N Lahiri, D J Nordman, Zhang, Q., Lahiri, S. N. & Nordman, D. J. (2022). Supplement to "On optimal block resampling for Gaussian-subordinated long-range dependent processes."
On monophonic position sets in graphs

Elias John Thomas (Department of Mathematics, Mar Ivanios College, University of Kerala, Thiruvananthapuram-695015, Kerala, India)
Ullas Chandran S. V. (Department of Mathematics, Mahatma Gandhi College, University of Kerala, Thiruvananthapuram-695004, Kerala, India)
James Tuite (Department of Mathematics and Statistics, Open University, Walton Hall, Milton Keynes, UK)
Gabriele Di Stefano (Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Italy)

DOI: 10.1016/j.dam.2023.02.021
arXiv: 2012.10330 (https://export.arxiv.org/pdf/2012.10330v4.pdf)

Abstract. The general position problem in graph theory asks for the largest set S of vertices of a graph G such that no shortest path of G contains more than two vertices of S. In this paper we consider a variant of the general position problem called the monophonic position problem, obtained by replacing 'shortest path' by 'induced path'. We prove some basic properties and bounds for the monophonic position number of a graph and determine the monophonic position number of some graph families, including unicyclic graphs, complements of bipartite graphs and split graphs. We show that the monophonic position number of triangle-free graphs is bounded above by the independence number. We present realisation results for the general position number, monophonic position number and monophonic hull number. Finally we discuss the complexity of the monophonic position problem.
14 Dec 2022

Keywords: general position set, general position number, monophonic position set, monophonic position number
2000 MSC: 05C12, 05C69

1. Introduction

In 1900 Dudeney, famous for his mathematical puzzles, posed the following question [15]: what is the largest number of pawns that can be placed on an n × n chessboard such that no three pawns are on a straight line?
This problem was generalised to the setting of graph theory independently at least three times in [6,26,27] as follows: a set of vertices S in a graph G is in general position if no shortest path of G contains more than two vertices of S. The problem then consists of finding the largest set of vertices in general position for a given graph G. This has been shown to be an NP-complete problem [27]. The general position problem has been the subject of intensive research; for some recent developments see [17,25,28,29,30,32]. Some interesting variants of the general position problem have been considered in the literature. In [23] the authors consider the general position problem using the Steiner distance instead of the normal graph distance. In [24] the authors set a limit on the length of the shortest paths considered. For a fixed integer d, they define a set S of vertices of a graph G to be a general d-position set if for any three vertices u, v, w ∈ S that lie on a common geodesic P, the length of P is greater than d; the number of vertices in a largest general d-position set is the general d-position number gp_d(G) of G. The paper [33] discusses the largest general position sets that are also independent sets. A further recent variant of the general position problem can be found in [11], which discusses mutual visibility sets; a set M of vertices of a graph G is mutually visible if for any u, v ∈ M there exists at least one shortest u, v-path in G that intersects M only in the vertices u, v. In this paper, we consider a variation of the general position problem, which we call the monophonic position problem, obtained by replacing 'shortest path' by 'induced path'. We now define the terminology that will be used in this paper. By a graph G = (V, E) we mean a finite, undirected simple graph.

(Email addresses: [email protected] (Elias John Thomas), [email protected] (Ullas Chandran S. V.), [email protected] (James Tuite), [email protected] (Gabriele Di Stefano).)
The set of neighbours of a vertex u will be written N(u). The distance d(u, v) between two vertices u and v in a connected graph G is the length of a shortest u, v-path in G; any such shortest u, v-path is a geodesic. A path P in G is induced or monophonic if G contains no chords of P, that is, no edges between non-consecutive vertices of P. We will denote the subgraph of G induced by a subset S ⊆ V(G) by G[S]. A vertex is simplicial if its neighbourhood induces a clique; in particular every leaf is simplicial. We denote the number of simplicial vertices and leaves of a graph G by s(G) and ℓ(G) respectively. The clique number ω(G) of G is the number of vertices in a maximum clique of G and the independence number α(G) is the number of vertices in a maximum independent set. A subset S ⊆ V(G) is an independent union of cliques of G if G[S] is a disjoint union of cliques; the number of vertices in a maximum independent union of cliques will be written as α_ω(G). A graph G is a block graph if every maximal 2-connected component of G is a clique. The path of order ℓ and length ℓ − 1 will be written as P_ℓ and the cycle of length ℓ as C_ℓ. The join G ∨ H of two graphs is the graph formed from the disjoint union of G and H by joining every vertex of G to every vertex of H by an edge. Let G and H be graphs, where V(G) = {v_1, . . . , v_n}; then the corona product G ⊙ H is obtained from the disjoint union of G and n disjoint copies of H, say H_1, . . . , H_n, by making the vertex v_i adjacent to every vertex in H_i for 1 ≤ i ≤ n. Finally, the Cartesian product G □ H is the graph with vertex set V(G □ H) = V(G) × V(H) such that two vertices (u_1, v_1), (u_2, v_2) are adjacent in G □ H if and only if either u_1 = u_2 and v_1 ∼ v_2 in H, or else v_1 = v_2 and u_1 ∼ u_2 in G.
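The join and corona constructions can be made concrete with a short helper. The snippet below is an illustrative sketch of ours, not part of the paper; the ('g', u)-style vertex labels are an arbitrary convention used only to keep the two operand graphs disjoint. It builds adjacency dictionaries for G ∨ H and G ⊙ H and checks that K_2 ∨ K_2 is the complete graph K_4 and that C_4 ⊙ K_1 has eight vertices and eight edges.

```python
def join(g, h):
    """Join G v H: disjoint union of G and H plus every edge between the two."""
    f = {('g', u): {('g', w) for w in g[u]} | {('h', x) for x in h} for u in g}
    f.update({('h', x): {('h', y) for y in h[x]} | {('g', u) for u in g} for x in h})
    return f

def corona(g, h):
    """Corona G (.) H: one private copy of H per vertex u of G, fully joined to u."""
    f = {('g', u): {('g', w) for w in g[u]} | {(u, x) for x in h} for u in g}
    for u in g:
        for x in h:
            f[(u, x)] = {(u, y) for y in h[x]} | {('g', u)}
    return f

k2 = {0: {1}, 1: {0}}
c4 = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}
k1 = {0: set()}

k4 = join(k2, k2)          # K_2 v K_2 is the complete graph K_4
pend = corona(c4, k1)      # C_4 (.) K_1: one pendant vertex per cycle vertex
print(len(k4), sum(len(n) for n in k4.values()) // 2)      # 4 vertices, 6 edges
print(len(pend), sum(len(n) for n in pend.values()) // 2)  # 8 vertices, 8 edges
```

Representing graphs as vertex-to-neighbour-set dictionaries keeps both operations a few lines long; the same representation is convenient for the brute-force experiments later in the paper's range of small examples.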
The monophonic closure of a set M ⊆ V (G) is K[M] = u,v∈M K[u, v]. If K[M] = M,[M] = K r+1 [M], then K r [M] = [M] m . A set M ⊆ V (G) is a monophonic hull set if [M] m = V (G). The monophonic hull number h m (G) of G is the number of vertices in a smallest monophonic hull set of G. A vertex u in M is said to be an m-interior vertex of M if u ∈ K[M \ {u}] and the set of all m-interior vertices of M is denoted by M 0 . For any other graph-theoretical terminology we refer to [4]. The plan of this paper is as follows. In Section 2 we introduce the monophonic position number of a graph, determine some simple bounds and discuss the behaviour of the monophonic position number under some graph operations. In Section 3 we give a sharp bound for the mp-number of triangle-free graphs and determine the mp-numbers of unicyclic graphs and the join and corona products of graphs. In Section 4 we find the mp-numbers of split graphs and complements of bipartite graphs. Section 5 provides realisation results for the gp-number, mp-number and monophonic hull number. Finally in Section 6 we consider the computational complexity of the monophonic position problem. Monophonic position sets in graphs Recall that a set S of vertices in a graph G is a general position set if no shortest path in G contains more than two vertices of S; by convention a gp-set is a largest general position set of G. The number of vertices in a largest general position set of G is called the general position number of G and is denoted by gp(G). We have the following result for the general position number of graphs with diameter two. Theorem 2.1 ([2] ). If diam(G) = 2, then gp(G) = max{ω(G), η(G)}, where η(G) is the maximum order of an induced complete multipartite subgraph of the complement of G. We now introduce the following variant of the gp-number. Definition 2.2. A set M ⊆ V (G) is a monophonic position set or mp-set of G if no three vertices of M lie on a common monophonic path in G. 
The monophonic position number or mp-number mp(G) of G is the number of vertices in a largest mp-set of G. For an example of these concepts see Figure 1. Observe that every monophonic position set S of a graph G is also in general position; it follows that mp(G) ≤ gp(G). Any pair of vertices is in monophonic position, so for graphs with order n ≥ 2 we have 2 ≤ mp(G) ≤ n. It is easily seen that a connected graph satisfies mp(G) = n if and only if G ≅ K_n, and the only connected graphs G with mp(G) = n − 1 are a) the joins of K_1 with a disjoint union of cliques and b) graphs formed from a clique by deleting between one and n − 2 edges incident to a given vertex. Also the mp-number of the cycle C_n is given by mp(C_n) = 2 for n ≥ 4.

We begin by noting that for a wide class of graphs the monophonic position number coincides with the general position number. A graph G is distance-hereditary if for any connected induced subgraph H of G and vertices u, v ∈ H we have d_H(u, v) = d_G(u, v), where d_G(u, v) is the distance between u and v in G and d_H(u, v) is the distance between u and v in the subgraph H. Distance-hereditary graphs have been characterised by Howorka [19] and many works in the literature are dedicated to them as well as to their generalisations or specialisations (see e.g. [7,9,10,20]). In a distance-hereditary graph a path is a geodesic if and only if it is induced, so the definitions of general position and monophonic position coincide for such graphs; this yields the following observation.

Observation 2.3. If G is a distance-hereditary graph, then mp(G) = gp(G).

Distance-hereditary graphs G are not characterised by the relation mp(G) = gp(G); for example, we have mp(C_5 ⊙ K_1) = gp(C_5 ⊙ K_1) = 5, but this graph is not distance-hereditary. Since the class of distance-hereditary graphs includes other well studied classes of graphs such as cographs, block graphs, trees and Ptolemaic graphs, results provided for these classes can be seen as a consequence of Observation 2.3.
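The definition above can be tested by exhaustive search on small graphs. The following brute-force sketch is our illustration, not part of the paper, and all function names are our own: it enumerates the vertex sets of all induced paths and then looks for a largest set meeting every induced path in at most two vertices, reproducing the values mp(C_5) = 2, mp(K_4) = 4 and mp(C_5 ⊙ K_1) = 5 quoted in this section.

```python
from itertools import combinations

def induced_paths(adj):
    """Vertex sets of all induced (monophonic) paths on at least three vertices."""
    found = set()
    def extend(path, members):
        for w in adj[path[-1]]:
            # w extends the path only if it is new and has no chord back
            # to a path vertex other than the current endpoint
            if w in members or any(w in adj[u] for u in path[:-1]):
                continue
            new_path = path + [w]
            if len(new_path) >= 3:
                found.add(frozenset(new_path))
            extend(new_path, members | {w})
    for v in adj:
        extend([v], {v})
    return found

def mp_number(adj):
    """Size of a largest set meeting every induced path in at most 2 vertices."""
    paths, verts = induced_paths(adj), list(adj)
    for k in range(len(verts), 0, -1):
        for cand in combinations(verts, k):
            s = set(cand)
            if all(len(p & s) <= 2 for p in paths):
                return k
    return 0

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}           # cycle C_5
k4 = {i: set(range(4)) - {i} for i in range(4)}                  # clique K_4
cor = {i: {(i - 1) % 5, (i + 1) % 5, i + 5} for i in range(5)}   # C_5 (.) K_1:
cor.update({i + 5: {i} for i in range(5)})                       # one pendant per cycle vertex
print(mp_number(c5), mp_number(k4), mp_number(cor))              # 2 4 5
```

The search is exponential in the order of the graph, so it is only a sanity check for the small examples discussed here, not a practical algorithm.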
Hence by Theorem 2.1 and the result of [27] we immediately have the following corollaries.

Corollary 2.4. If G is a block graph, then mp(G) = gp(G) = s(G); in particular, for any tree T we have mp(T) = gp(T) = ℓ(T).

Corollary 2.5. For integers r_1 ≥ r_2 ≥ · · · ≥ r_t the mp-number and gp-number of the complete multipartite graph K_{r_1,r_2,...,r_t} are given by gp(K_{r_1,r_2,...,r_t}) = mp(K_{r_1,r_2,...,r_t}) = max{r_1, t}.

We now provide some bounds on the monophonic position number of a graph. It was shown in [6] that the gp-number of a graph with diameter D is bounded above by n − D + 1; there is an analogous upper bound for the mp-number of a graph in terms of the length of its longest induced path.

Proposition 2.6. If G is a graph with order n and the longest induced path in G has length L, then mp(G) ≤ n − L + 1. This bound is sharp.

Proof. Let P be a longest monophonic path in G, with length L, and let M be a maximum mp-set. The path P can contain at most two vertices of M, so that at least L − 1 vertices of G do not lie in M. The bound is sharp for cliques and paths.

In [27] the authors bound the gp-number of a graph G using the isometric-path number, which is the smallest number of geodesics in G such that each vertex of G is contained in exactly one of the geodesics. There is a similar upper bound for the mp-number in terms of the induced path number ρ(G), which is defined to be the smallest number of induced paths in G such that each vertex of G is contained in exactly one of the paths. This parameter was introduced in [8].

Lemma 2.7. The mp-number of a graph G is bounded above by mp(G) ≤ 2ρ(G).

Proof. Consider a partition of V(G) into ρ(G) induced paths and let M be a largest mp-set. M can intersect each path in the partition in at most two vertices, so that |M| ≤ 2ρ(G).

It is shown in [1] that the induced path number of any connected cubic graph G with order n ≥ 7 is at most ρ(G) ≤ (n − 1)/3. Thus Lemma 2.7 has the following interesting corollary.

Corollary 2.8. For any connected cubic graph with order n ≥ 7, the monophonic position number is at most mp(G) ≤ 2(n − 1)/3.
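Corollary 2.5 is easy to confirm computationally for small complete multipartite graphs. The self-contained brute-force sketch below is ours, not from the paper (helper names are hypothetical); it checks mp(K_{2,2,2}) = max{2, 3} = 3 for the octahedron and mp(K_{2,3}) = max{3, 2} = 3.

```python
from itertools import combinations

def induced_paths(adj):
    """Vertex sets of all induced paths on at least three vertices."""
    found = set()
    def extend(path, members):
        for w in adj[path[-1]]:
            if w in members or any(w in adj[u] for u in path[:-1]):
                continue  # w revisits the path or creates a chord
            new_path = path + [w]
            if len(new_path) >= 3:
                found.add(frozenset(new_path))
            extend(new_path, members | {w})
    for v in adj:
        extend([v], {v})
    return found

def mp_number(adj):
    """Largest k admitting a set meeting every induced path in <= 2 vertices."""
    paths, verts = induced_paths(adj), list(adj)
    for k in range(len(verts), 0, -1):
        for cand in combinations(verts, k):
            s = set(cand)
            if all(len(p & s) <= 2 for p in paths):
                return k
    return 0

def multipartite(*parts):
    """Complete multipartite graph: vertices adjacent iff in different parts."""
    label = {v: i for i, part in enumerate(parts) for v in part}
    return {v: {w for w in label if label[w] != label[v]} for v in label}

k222 = multipartite({0, 1}, {2, 3}, {4, 5})   # octahedron K_{2,2,2}
k23 = multipartite({0, 1}, {2, 3, 4})         # complete bipartite K_{2,3}
print(mp_number(k222), mp_number(k23))        # 3 3
```

In K_{2,2,2} a transversal such as {0, 2, 4} induces a triangle, and no induced path can contain three mutually adjacent vertices, which is exactly why the clique term ω = t appears in the formula.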
Corollary 2.8 is tight; for example the cube has order eight and mp-number four. However, the authors conjecture that the coefficient 2 3 is not best possible asymptotically. Conjecture 2.9. The the largest possible monophonic position number of a cubic graph with order n is n 3 + O(n). We can also bound mp(G) above in terms of the number of cut-vertices in G. Proof. Let M be a maximum mp-set of G that contains as few cut-vertices of G as possible. Suppose for a contradiction that M contains a cut-vertex v of G. Let W 1 , W 2 , . . . , W k , k ≥ 2, be the components of G \ {v}. Then M intersects at most one component, say W 1 , or else there will be a monophonic path between vertices of M lying in different components that must pass through v. Let W be the subgraph of G induced by W 2 ∪ {v} and P be a longest path in W with initial vertex v. It is easily seen that the terminal vertex u of P is not a cut-vertex of G. The set M ′ = (M \ {v}) ∪ {u} is also a maximum mp-set of G, but contains fewer cut-vertices of G than M, contradicting the definition of M. Lemma 2.10 is sharp for block graphs. We now give a lower bound for mp(G) in terms of the number of simplicial vertices in G. Lemma 2.11. For any graph G we have mp(G) ≥ s(G). Proof. Let S be the set of simplicial vertices of G. Suppose that u, v, w ∈ S and that P is a monophonic u, v-path containing w (in particular, this requires s(G) ≥ 3). The vertex w has two neighbours w ′ and w ′′ on P ; however, by definition, we must have w ′ ∼ w ′′ , so that P is not induced, a contradiction. Corollary 2.4 shows that the lower bound in Lemma 2.11 is also sharp for block graphs. Proof. Since every mp-set of G is also an mp-set of G ′ , it follows that mp( G) ≤ mp(G ′ ); conversely, if M ′ is a maximum mp-set of G ′ , then M ′ \ {u} is an mp-set of G, so that |M ′ \ {u}| ≤ mp(G) and hence mp(G ′ ) = |M ′ | ≤ mp(G) + 1. Suppose that mp(G ′ ) = mp(G) + 1 with a largest mp-set M ′ ; then mp(G ′ ) ≥ 3 and u ∈ M ′ . 
Since |M′| ≥ 3, it follows that v ∉ M′, for otherwise v would lie on a monophonic path from u to another member of M′ in G′. Let M = (M′ ∪ {v}) \ {u}. M is too large to be an mp-set of G, so there must exist three vertices x, y, z of M that lie on a common monophonic path in G. Since M′ \ {u} = M \ {v} is an mp-set of G, one of these vertices, say x, must be v. If y ∈ K[v, z] in G, then we would have y ∈ K[u, z] in G′, contradicting the fact that M′ is an mp-set of G′. Therefore v ∈ K[y, z], and so v is the only m-interior vertex of M, that is, M^0 = {v}. Thus M \ {v} is the required set. Conversely, assume that there exists a maximum mp-set M of G with v ∉ M and (M ∪ {v})^0 = {v}. We claim that M′ = M ∪ {u} is an mp-set of G′. Suppose that there exist x, y, z ∈ M′ such that x ∈ K[y, z] in G′. As u is a leaf, x ≠ u, so we can set z = u. Then x ∈ K[y, v] and hence x ∈ (M ∪ {v})^0, which is impossible. Thus M′ is an mp-set in G′ with |M′| = mp(G) + 1.

Proposition 2.13. If G′ is a graph obtained from G by adding a pendant vertex u to a simplicial vertex v in G, then mp(G) = mp(G′).

Proof. By Lemma 2.12, mp(G) ≤ mp(G′) ≤ mp(G) + 1, and if mp(G′) = mp(G) + 1, then there exists a maximum mp-set M of G with v ∉ M such that (M ∪ {v})^0 = {v}. However, as a simplicial vertex v cannot be an interior vertex of any monophonic path in G, we have mp(G) = mp(G′).

3. Monophonic position in graph families

In this section we discuss the monophonic position numbers of some common graph families. In Theorem 3.2 we give a sharp bound for the mp-numbers of triangle-free graphs. We then give exact expressions for the mp-numbers of unicyclic graphs and graphs formed as the join or corona product of two graphs.

Lemma 3.1. Let G be a connected graph and M ⊆ V(G) be an mp-set. Then G[M] is a disjoint union of k cliques, G[M] = ⋃_{i=1}^{k} W_i. If k ≥ 2, then for 1 ≤ i ≤ k any two vertices of W_i have a common neighbour in G \ M.

Proof. Let W_1, W_2, . . .
, W k be the components of G[M] . If some W i is not a clique, then it would contain an induced path of length two, which is impossible; hence M is an independent union of cliques. Let k ≥ 2 and suppose for a contradiction that there is a component, say W 1 , with u, v ∈ V (W 1 ) such that u and v have no common neighbour in G \ M. Let w be any vertex in W 2 and let P be a u, w-monophonic path u, u 1 , u 2 , . . . , u ℓ = w in G. Then ℓ ≥ 2. Since M is an mp-set, it follows that P together with the edge uv is not a monophonic path in G. This shows that v must be adjacent to u j for some j with 1 ≤ j ≤ ℓ − 1. It follows from the choice of u and v that j ≥ 2; let j be the largest suffix for which v is adjacent to u j . Then the u j , w-subpath of P together with the 2-path u j , v, u forms a monophonic path containing three points of M; hence u and v must have a common neighbour in G \ M. Theorem 3.2. The mp-number of a connected triangle-free graph G with order n ≥ 3 satisfies mp(G) ≤ α(G). Moreover, if the length of any monophonic path is at most three, then mp(G) = α(G). Proof. Let M be a maximum mp-set in a triangle-free graph G. By Lemma 3.1 M is an independent union of cliques W 1 , W 2 , . . . , W k . If k = 1, then G[M] is a clique and so mp( G) = |M| ≤ ω(G) = 2 ≤ α(G), so assume that k ≥ 2. If any component W i of G[M] contains distinct vertices u, v, then by Lemma 3.1 u and v have a common neighbour in G \ M, so that G would contain a triangle. It follows that each component of G[M] consists of a single vertex, so that M is an independent set of G and mp(G) ≤ α(G). Let T be a maximum independent set of G. If three vertices of T lie on a common monophonic path P , then the length of P must be at least four. Therefore if the longest monophonic path of G has length at most three, then we have equality in the bound and any maximum independent set of G will be a maximum mp-set. 
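Theorem 3.2 can likewise be spot-checked by exhaustive search on small triangle-free graphs. The sketch below is ours, not part of the paper (it repeats the brute-force helpers so that it is self-contained and recomputes the independence number naively); it verifies the strict case mp(C_6) = 2 < 3 = α(C_6) and the equality case mp(K_{3,3}) = α(K_{3,3}) = 3, the latter because every monophonic path of K_{3,3} has length at most two.

```python
from itertools import combinations

def induced_paths(adj):
    """Vertex sets of all induced paths on at least three vertices."""
    found = set()
    def extend(path, members):
        for w in adj[path[-1]]:
            if w in members or any(w in adj[u] for u in path[:-1]):
                continue  # w revisits the path or creates a chord
            new_path = path + [w]
            if len(new_path) >= 3:
                found.add(frozenset(new_path))
            extend(new_path, members | {w})
    for v in adj:
        extend([v], {v})
    return found

def mp_number(adj):
    """Largest k admitting a set meeting every induced path in <= 2 vertices."""
    paths, verts = induced_paths(adj), list(adj)
    for k in range(len(verts), 0, -1):
        for cand in combinations(verts, k):
            s = set(cand)
            if all(len(p & s) <= 2 for p in paths):
                return k
    return 0

def independence_number(adj):
    """Naive computation of alpha(G) by decreasing subset size."""
    verts = list(adj)
    for k in range(len(verts), 0, -1):
        for cand in combinations(verts, k):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return k
    return 0

c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}                    # cycle C_6
k33 = {v: set(range(3, 6)) if v < 3 else set(range(3)) for v in range(6)} # K_{3,3}
print(mp_number(c6), independence_number(c6))    # 2 3
print(mp_number(k33), independence_number(k33))  # 3 3
```

In C_6 an independent set such as {0, 2, 4} lies on the induced path 0, 1, 2, 3, 4, which is why the bound is strict there, while in K_{3,3} no induced path is long enough to spoil a maximum independent set.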
The bound in Theorem 3.2 is sharp; it is met by the caterpillar formed by adjoining one leaf to every internal vertex of a path, by the complete bipartite graphs K_{r,r} and by the corona product C_n ⊙ K_1 for n ≥ 4. Hence this bound can be achieved by graphs with arbitrarily large diameter, minimum degree and girth. Theorem 3.2 can be used to find the monophonic position numbers of some graphs with large girth. This problem is particularly interesting for the cage graphs. The unique cubic cages with girths five, six and seven are the Petersen graph, the Heawood graph and the McGee graph respectively. We omit the lengthy case argument used to prove the following result, which was checked computationally by Erskine [16].

We now determine the mp-numbers of unicyclic graphs; as we have seen, such graphs can meet the upper bound in Theorem 3.2. We will identify the vertex set of the unique cycle C of a unicyclic graph G with Z_s, where s ≥ 3 is the length of the cycle and i ∼ i + 1 (mod s) for 0 ≤ i ≤ s − 1. We denote by R the set of vertices of C that have degree ≥ 3. We will call any vertex in R a root, and if i ∈ R we write T_i for the tree in G \ {i − 1, i + 1} that contains i. The leaf number of the tree T_i is ℓ(T_i), and we write ℓ′(T_i) for the number of leaves of T_i that are also leaves of G. We have already dealt with the case that G is a cycle (mp(K_3) = 3, mp(C_s) = 2 for s ≥ 4), so we assume that |R| > 0.

Theorem 3.5. Let G be a unicyclic graph that is not a cycle. Then

mp(G) =
  ℓ(G) + 2, if |R| = 1;
  ℓ(G) + 1, if |R| = 2, R = {i, j} and either T_i or T_j is a path;
  ℓ(G) + 1, if |R| = 2 and R = {i, i + 1} for some 0 ≤ i ≤ s − 1;
  ℓ(G), otherwise.

Proof. Let M be a maximum mp-set of G; by Lemma 2.10 we can assume that M contains no cut-vertices, so that M ∩ T_i consists of leaves of G for any tree T_i (in particular M ∩ R = ∅). It follows from Corollary 2.4 and the mp-numbers of the cycles that ℓ(G) ≤ mp(G) ≤ ℓ(G) + 2.
If |R| = 1, then the leaves of the unique tree T_i and the vertices {i − 1, i + 1} form an mp-set, so that the upper bound is achieved; hence we can take |R| ≥ 2. Suppose that mp(G) = ℓ(G) + 2. Then M must contain ℓ′(T_i) vertices in each tree T_i for i ∈ R, as well as two vertices of C \ R. As we are assuming that |R| ≥ 2, this is not possible, for either there is a monophonic path from a vertex of M in a tree T_i through a section of C containing two vertices of M, or else a monophonic path from a vertex of M in a tree T_i to a vertex of M in a tree T_j, i ≠ j, through a vertex of M on C. Thus for |R| ≥ 2 we have mp(G) ≤ ℓ(G) + 1. Suppose that mp(G) = ℓ(G) + 1. Then either |M ∩ (C \ R)| = 1 and |M ∩ T_i| = ℓ′(T_i) for each i ∈ R, or else |M ∩ (C \ R)| = 2 and there is one tree T_i with |M ∩ T_i| = ℓ′(T_i) − 1. If |M ∩ (C \ R)| = 2,

In [17] the authors determined the gp-number of the join and corona product of graphs. We now determine the corresponding relations for the monophonic position numbers.

Theorem 3.6. For any graphs G and H, mp(G ⊙ H) = n(G)mp(H).

Proof. Let V(G) = {v_1, . . . , v_n} and let H_1, . . . , H_n be the corresponding copies of H in G ⊙ H. Let M be a maximum mp-set of G ⊙ H. By Lemma 2.10 we can assume that M does not contain any cut-vertices of G ⊙ H. Observe that every vertex of G is a cut-vertex in G ⊙ H, so that we can take M to lie entirely in H_1 ∪ H_2 ∪ · · · ∪ H_n. It is easily seen that each set M ∩ V(H_i) must be in monophonic position; it follows that |M| ≤ n(G)mp(H). Conversely, if S is a maximum mp-set of H and for 1 ≤ i ≤ n the corresponding set in H_i is S_i, then S_1 ∪ S_2 ∪ · · · ∪ S_n is in monophonic position. Therefore mp(G ⊙ H) = n(G)mp(H).

For the join, let M be a maximum mp-set of G ∨ H and write M_1 = M ∩ V(G) and M_2 = M ∩ V(H); suppose first that both parts are non-empty. Suppose that there exist distinct x_1, x_2 ∈ M_1 such that x_1 ≁ x_2; in this case if y ∈ M_2, then x_1, y, x_2 would be a monophonic path in G ∨ H, a contradiction. Therefore M_1 and M_2 must both induce cliques in G and H respectively, so that mp(G ∨ H) ≤ ω(G) + ω(H).

4. Complements of bipartite graphs and split graphs

Cographs are induced P_4-free graphs. As cographs are distance-hereditary, the mp- and gp-numbers of these graphs are equal by Observation 2.3. It is therefore of interest to study the mp-numbers of induced P_5-free graphs; in this class equality between the gp- and mp-numbers does not hold in general, as shown by the cycle C_5, for which mp(C_5) = 2 < 3 = gp(C_5). We therefore consider two well-known classes of induced P_5-free graphs, namely the complements of connected bipartite graphs and split graphs. Since the complement K̄_{m,n} of the complete bipartite graph K_{m,n} is disconnected, clearly mp(K̄_{m,n}) = m + n. Let G be a connected bipartite graph with partite sets A and B; for a set S ⊆ V(G) we write S_A = S ∩ A and S_B = S ∩ B, and we call S uniform if any two vertices of S_A have the same neighbourhood in G, as do any two vertices of S_B. Let ψ(G) denote the number of vertices in a largest uniform set of G.

Theorem. Let G be a connected bipartite graph with partite sets A and B. Then mp(Ḡ) = max{α(G), ψ(G)}.

Proof. Let S be a maximum mp-set in Ḡ. Then both S_A and S_B are cliques in Ḡ. If S_A = ∅ or S_B = ∅, then it is clear that |S| ≤ ω(Ḡ) = α(G), so assume that S_A ≠ ∅ and S_B ≠ ∅. If there is an edge between S_A and S_B in Ḡ, then each vertex of S_A must be adjacent to all vertices of S_B, so that S induces a clique in Ḡ and again |S| ≤ α(G). Hence we can assume that there is no edge between S_A and S_B in Ḡ; thus S induces a complete bipartite subgraph in G. Next we claim that S is uniform. Assume to the contrary that there exist vertices u, v ∈ S_A and w ∈ B such that w is adjacent to v but not to u in G. Then w ∉ S_B. Choose x ∈ S_B arbitrarily. Then the path v, u, w, x is a v, x-monophonic path in Ḡ containing the vertex u, a contradiction to the fact that S is an mp-set of Ḡ. Thus S must be uniform in G and so |S| ≤ ψ(G). Therefore we have mp(Ḡ) ≤ max{α(G), ψ(G)}. On the other hand, it is clear that mp(Ḡ) ≥ ω(Ḡ) = α(G). We now show that every maximum uniform set S ⊆ V(G) gives an mp-set in Ḡ. Both S_A and S_B are cliques in Ḡ. If there is no edge between S_A and S_B in Ḡ, then S induces a clique in Ḡ, so that S is an mp-set and mp(Ḡ) ≥ |S| = ψ(G).
Hence we can assume that there is an edge between S_A and S_B in Ḡ; as S is uniform, Ḡ then contains all edges between S_A and S_B, so that there are no edges between S_A and S_B in G. Also by uniformity, there are sets X ⊆ A \ S_A and Y ⊆ B \ S_B such that N(x) = Y for each x ∈ S_A and N(y) = X for each y ∈ S_B; from this it is simple to see that S is in monophonic position in Ḡ, so that mp(Ḡ) ≥ |S| = ψ(G), concluding the proof.

Let T be a tree with order n ≥ 2 and suppose that S is a uniform set of T with |S| > α(T). Suppose that |S_A| ≥ 2; then each vertex in S_A is a leaf, for if d(u) ≥ 2 for some u ∈ S_A, then for any v ∈ S_A \ {u} the vertices u and v would have at least two common neighbours, which is impossible in a tree. As the same reasoning applies to S_B and α(T) ≥ ℓ(T), we conclude that S_A is a set of leaves and |S_B| = 1, say S_B = {w}. If S_A ∪ S_B = V(T), then T is a star, V(T) is a uniform set and mp(T̄) = n; otherwise, considering the endpoint of a longest path to a vertex of V(T) \ S, we see that T contains a leaf that does not belong to S, so that after all |S| ≤ α(T). This proves the following corollary.

Corollary. Let T be a tree of order n ≥ 2. If T is a star, then mp(T̄) = n; otherwise mp(T̄) = α(T).

A graph is a split graph if the vertex set can be partitioned into a clique C and an independent set I. If there is a vertex v that is adjacent to every vertex of C \ {v} and non-adjacent to every vertex of I \ {v}, we will say that v is divided. Obviously there cannot be divided vertices in both C and I. If there are no divided vertices, then ω(G) = |C| and α(G) = |I|. If G contains a divided vertex v, then by moving v from C into I if necessary, we can assume that any divided vertices lie in I, and ω(G) = |C| + 1 and α(G) = |I|. We define a separated subgraph (C′, I′), with C′ ⊆ C and I′ ⊆ I, of the split graph G = (C, I) to be a subgraph of one of the following two types: either (Type A) there are no edges between C′ and I′, or (Type B) there is a vertex v ∈ I′ that is adjacent to every vertex of C′, there are no edges between C′ and I′ \ {v}, and N(v′) ⊆ N(v) for every v′ ∈ I′ \ {v}. Let φ(G) denote the order of a largest separated subgraph of G. Trivially the clique induced by C is separated, as is the independent set induced by I. If there is a divided vertex v in I, then (C, {v}) is separated.
This shows that we always have φ(G) ≥ max{ω(G), α(G)}.

Theorem 4.6. Let G = (C, I) be a connected split graph. Then mp(G) = φ(G).

Proof. Let (C′, I′) be a separated subgraph of G. Suppose for a contradiction that P is a monophonic path with endpoints w_1, w_3 ∈ C′ ∪ I′ that passes through a vertex w_2 ∈ (C′ ∪ I′) \ {w_1, w_3}. We cannot have w_1, w_3 ∈ C′, as w_1 ∼ w_3. Also w_2 cannot lie in I′, as the neighbours of w_2 on P would both be in C and so would be adjacent. Suppose that w_1, w_3 ∈ I′ and w_2 ∈ C′. If (C′, I′) is a separated subgraph of Type A, then the monophonic path P would have to include at least three vertices from C, which is impossible. Hence (C′, I′) is a separated subgraph of Type B. Now, if w_1 = v, then P cannot be monophonic: since N(w_3) ⊆ N(w_1), the neighbour of w_3 on P would be adjacent to the endpoint w_1, so that we would need P = w_1, w_2, w_3, whereas w_3 ≁ w_2. Hence w_1, w_3 ∈ I′ \ {v}; however, in this case P would contain at least three vertices from C, which is again impossible. Finally suppose that w_1, w_2 ∈ C′ and w_3 ∈ I′. Since P has exactly two vertices from C, it follows that (C′, I′) is Type B and w_3 = v; this is a contradiction, as we would have v ∼ w_1. Hence we conclude that any separated subgraph is in monophonic position.
Finally suppose that there is an edge u ′ ∼ v ′ , where u ′ ∈ C \ C ′ , v ′ ∈ I ′ \ {v}, such that v ∼ u ′ ; in this case v, u, u ′ , v ′ would be an induced path containing three vertices of (C ′ , I ′ ); we conclude that N(v ′ ) ⊂ N(v) for any v ′ ∈ I ′ \ {v}. Hence (C ′ , I ′ ) is a separated subgraph. We noted before that for any split graph G we have mp(G) = φ(G) ≥ max{ω(G), α(G)}. We now characterise the cases in which equality holds. For X ⊆ V (C) write N I (X) = I ∩ ( x∈X N(x)). By a matching between C and I in G, we will mean a matching such that no edge of the matching has both endpoints in C. there exists a matching in G between C and I that saturates either C or I. The same conclusion holds if G has a divided vertex, unless G is formed from a clique A of order r + 2 and an independent set B of order > r by adding a set E ′ of edges between A ′ and I, where A ′ ⊂ A, |A ′ | = r, such that E ′ contains a matching saturating A ′ , in which case mp(G) = α(G) + 1 > ω(G). Proof. First suppose that there is no matching between C and I that saturates C or I; we will show that φ(G) > max{ω(G), α(G)}. By Hall's Theorem [18], as there is no matching between C and I that saturates C, there is a subset K ⊆ C such that |N I (K)| < |K|. Then (K, I \ N I (K)) is a separated subgraph with order > |I| = α(G). Similarly, considering matchings from I, we see that there is subset J ⊆ I such that |N(J)| < J, implying that (C \ N(J), J) is a separated subgraph with order |(C \ N(J), J)| > |C|. If G has no divided vertices, then |C| = ω(G) and we are done, so suppose that v is a divided vertex in I. If v ∈ J, then N(J) = C, so we have |C| = |N(J)| < |J| ≤ |I| ≤ α(G) and hence ω(G) = |C| + 1 ≤ α(G). However, the separated set (K, I \ N I (K)) constructed in the previous discussion has order > α(G) and hence suffices. Otherwise, we can move v from I \ J to J to obtain a separated subset of Type B with order ≥ |C| + 2 > ω(G). 
Now suppose that there is a matching M between C and I that saturates C or I. If C ′ ∪ I ′ induces a clique or an independent set, then the result follows immediately, so we assume that both C ′ and I ′ are non-empty. If (C ′ , I ′ ) is Type A, then either the vertices of I ′ are matched to a subset of C \ C ′ by M, in which case |(C ′ , I ′ )| ≤ |C| ≤ ω(G), or else the vertices of C ′ are matched with a subset of I \ I ′ , in which case |(C ′ , I ′ )| ≤ α(G). Thus assume that (C ′ , I ′ ) is Type B. Firstly consider the case that M saturates I. Then by the previous argument M must contain an edge from v ∈ I ′ to C ′ and the remaining vertices of I \ {v} are matched with a subset of C \ C ′ . If C \ (C ′ ∪ N(I ′ \ {v})) = ∅, then |C ′ ∪ I ′ | ≤ |C| ≤ ω(G), so we can assume that N(I ′ \ {v}) = C \ C ′ . By definition of a separated set, v must then be adjacent to every vertex of C, so that v is a divided vertex and |C ′ ∪ I ′ | ≤ |C| + 1 = ω(G). Now suppose that M saturates C; similarly to the previous case, we can assume that M includes an edge from a vertex u ∈ C ′ to the unique vertex v ∈ I ′ that is adjacent to every vertex of C, and that M contains a perfect matching between C ′ \ {u} and I \ I ′ , from which it follows that mp(G) ≤ α(G) + 1. But now it follows that each vertex of C \ C ′ is matched by M to a vertex of I ′ \ {v}, so that C \ C ′ = N(I ′ \ {v}) and, by definition of separated set, v is adjacent to every vertex of C \ C ′ and v is divided. Hence if there are no divided vertices we have equality in the lower bound. Let H be any bipartite graph with bipartition (A, B) such that a) |B| ≥ |A| + 1 and b) there is a matching in H that saturates A. Form a split graph H ′ = (C, I) as follows: add two new vertices x, y and add every edge between vertices in {x, y}∪A, then set C = A∪{x}, I = B ∪ {y}. The original matching from A to B in H plus the edge x ∼ y gives the required matching. 
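Whether a saturating matching of the kind used in this characterisation exists can be tested mechanically with a standard augmenting-path (Kuhn) bipartite matching. The sketch below is illustrative only: the tiny split graph and its adjacency-list encoding are made up for the example, not taken from the text.

```python
def max_matching(adj_C, n_I):
    """Maximum matching between C and I, where adj_C[c] lists the
    I-vertices adjacent to C-vertex c (Kuhn's augmenting-path algorithm)."""
    match_I = [-1] * n_I  # match_I[i] = C-vertex matched to I-vertex i, or -1

    def try_augment(c, seen):
        for i in adj_C[c]:
            if i not in seen:
                seen.add(i)
                if match_I[i] == -1 or try_augment(match_I[i], seen):
                    match_I[i] = c
                    return True
        return False

    # each successful augmentation grows the matching by one edge
    return sum(try_augment(c, set()) for c in range(len(adj_C)))

# toy split graph: clique C = {0, 1, 2}, independent set I = {0, 1},
# with cross edges C0-I0, C1-I0 and C2-I1
adj_C = [[0], [0], [1]]
saturates_C = max_matching(adj_C, 2) == len(adj_C)  # False: C cannot be saturated
saturates_I = max_matching(adj_C, 2) == 2           # True: I can be saturated
```

Here saturates_I comes out True, so for this toy graph the matching condition is met from the I side even though no matching saturates C.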
Also the set B ∪ {x, y} is a largest mp-set, so that mp(H ′ ) = α(H ′ ) + 1 > ω(H ′ ); in fact the previous discussion shows that a split graph G has mp(G) = α(G) + 1 > ω(G) if and only if it belongs to this family.

5. Realisation results

In this paper we have presented some properties of the monophonic position number of graphs. In some respects this graph parameter behaves like the more widely studied general position number. It is therefore of interest to ask whether there is a relation between the two numbers other than the trivial inequality mp(G) ≤ gp(G). We will show that these two parameters are independent by proving the following realisation result: for any pair a, b ∈ N such that 2 ≤ a ≤ b there exists a graph G with mp(G) = a and gp(G) = b. Firstly observe that if a = b, then trivially the complete graph K a has the required properties, so in the following we will assume that a < b.

Proof. We begin with the case a = 2. We have mp(C 5 ) = 2, gp(C 5 ) = 3, so we can assume that b ≥ 4. We define the half-wheel graph H r for r ≥ 2 as follows. Take a cycle C 2r of length 2r and label the vertices by the elements of Z 2r in the natural manner. Add an extra vertex x and join x to all vertices in C 2r with even labels. An example is shown in Figure 2. We will show that for r ≥ 4 the half-wheel has mp(H r ) = 2 and gp(H r ) = r. First let M be an mp-set in H r . M can contain at most two vertices of the induced cycle C 2r , so mp(H r ) ≤ 3 and equality holds only if M contains x and two vertices i, j ∈ V (C 2r ). If i and j are both even, then i, x, j is a monophonic path passing through three vertices of M, so we can take j to be odd. Suppose that i is even and j odd; if i ∼ j, then j, i, x is a monophonic path containing three vertices of M, whereas if i ≁ j, then i, x, j − 1, j is the required path. Finally if i and j are both odd, then i, i − 1, x, j + 1, j is the required induced path, where we assume that j = i + 2 if d(i, j) = 2. Thus mp(H r ) = 2. The set of all even integers on C 2r is a general position set, so gp(H r ) ≥ r. Suppose that there is a general position set K in H r with r + 1 elements. Suppose that x ∉ K.
Then K contains two vertices that are neighbours on C 2r , without loss of generality i, i+1 ∈ K, where i is even. Then the distance from i + 1 to any even vertex j of C 2r apart from i and i + 2 is three and i + 1, i, x, j is a geodesic. Also i + 2 ∉ K, as i, i + 1, i + 2 is a geodesic. Therefore K must consist of the set of odd vertices of C 2r together with {i}. However i − 1, i, i + 1 would then be a geodesic containing three vertices of K. Hence we can assume that x ∈ K. Suppose that K contains an even vertex i of C 2r . Between any two even vertices of C 2r there is a geodesic passing through x, so K can contain no other even vertex of C 2r . Also K cannot contain i + 1 or i − 1, as i is contained in (i − 1), x- and (i + 1), x-geodesics. Therefore K would contain at most 2 + (r − 2) = r vertices, a contradiction. Hence K consists of x and the odd vertices of C 2r ; however, 1, 2, x, 4, 5 is a geodesic containing three vertices of K. It follows that for r ≥ 4 we have gp(H r ) = r.

We now turn to the case of larger a; let 3 ≤ a < b. The independent position number of a graph G, which we will denote by igp(G), is the number of vertices in a largest set S ⊆ V (G) that is both independent and in general position. This parameter was studied in [33]. If we define the independent monophonic position number imp(G) to be the number of vertices in a maximum subset S ⊆ V (G) such that S is independent and in monophonic position, then we have both imp(G) ≤ mp(G) ≤ gp(G) and imp(G) ≤ igp(G) ≤ gp(G). This raises the question of whether there is a relationship between mp(G) and igp(G). We can easily answer this in the negative.

Proof. The only graph with mp-number one is K 1 , which implies the necessity of the conditions. For n ≥ 1 we have igp(K n ) = 1 and mp(K n ) = n, so we can assume that 2 ≤ a, b.
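Small values such as mp(K_n) = n and mp(C_5) = 2, which these arguments rely on, can be confirmed by exhaustive search on tiny graphs, checking the definition directly: a set is in monophonic position if no induced (monophonic) path contains three of its vertices. The adjacency-list encoding below is ad hoc for this sketch and not from the text.

```python
from itertools import combinations

def induced_paths(adj):
    """Vertex sets of all induced (chordless) paths on >= 3 vertices."""
    found = set()

    def extend(path):
        if len(path) >= 3:
            found.add(frozenset(path))
        for v in adj[path[-1]]:
            # v must be new and adjacent only to the last vertex of the path
            if v not in path and not any(v in adj[u] for u in path[:-1]):
                extend(path + [v])

    for s in range(len(adj)):
        extend([s])
    return found

def mp(adj):
    """Brute-force monophonic position number of a tiny graph."""
    n = len(adj)
    paths = induced_paths(adj)
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            # S is in monophonic position iff no induced path meets it thrice
            if all(len(set(S) & P) <= 2 for P in paths):
                return k
    return 0

complete = lambda n: [set(range(n)) - {i} for i in range(n)]
cycle = lambda n: [{(i - 1) % n, (i + 1) % n} for i in range(n)]
```

For instance, mp(complete(4)) returns 4 and mp(cycle(5)) returns 2, matching the values used above; the search is exponential, so this is only usable on very small graphs.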
For the case a ≤ b, for r ≥ 0 and s ≥ 1, define R(r, s) to be the graph formed by attaching r leaves to one vertex of a copy of K s+1 ; then we have mp(R(r, s)) = r + s and igp(R(r, s)) = r + 1, so that the graph R(a − 1, b − a + 1) has the required properties. Consider now the case a > b. Let the graph K⁻ s,s be the complete bipartite graph K s,s with a perfect matching deleted. Form the graph P (r, s) by adding an extra vertex x to K⁻ s,s and joining x by an edge to each vertex of one of the partite sets of K⁻ s,s , then adding r leaves to x. This graph satisfies mp(P (r, s)) = r + 2 and igp(P (r, s)) = r + s. Hence the graph P (b − 2, a − b + 2) has the necessary parameters.

The graph R(r, s) from the proof of Theorem 5.3 can also be used to answer Question 2 from [24]. The dissociation number diss(G) of a graph G is the number of vertices in a largest subset S ⊆ V (G) such that G[S] has maximum degree at most one. Recall also that the 2-position number gp 2 (G) is the number of vertices in a largest set K ⊂ V (G) such that no shortest path in G of length two contains three vertices of K [24]. It is easily seen that any dissociation set is in 2-general position, so that diss(G) ≤ gp 2 (G). Furthermore for 2 ≤ a ≤ b we have diss(R(a − 2, b − a + 2)) = a and gp 2 (R(a − 2, b − a + 2)) = b if a ≥ 3, whilst if a = 2 the clique K b has the required parameters.

The hull number has been widely studied, for example in [3,5,13]. In [6] it was shown that for any a, b ∈ N such that 2 ≤ a ≤ b there exists a graph G with h(G) = a and gp(G) = b. The corresponding parameter for monophonic paths, the monophonic hull number h m (G) (discussed in Section 1), has been studied in [12,14,21,31]. We now prove a realisation theorem for the monophonic hull number and the monophonic position number. We will use the following two results in our analysis.

Lemma 5.4 ([31]). Every monophonic hull set of a graph G contains all of its simplicial vertices.

Observation 5.5.
Let M be a minimum monophonic hull set of a connected graph G and let a, b ∈ M. If a vertex c ≠ a, b lies on an a, b-monophonic path, then c ∉ M.

It follows from Observation 5.5 that every minimum monophonic hull set of a connected graph G is an mp-set in G. Consequently, for any graph with order n ≥ 2 we have 2 ≤ h m (G) ≤ mp(G) ≤ n. In view of this inequality, we have the following realisation result.

Proof. The only graph with h m (G) = 1 or mp(G) = 1 is K 1 , so assume that a ≥ 2. If b = n, then G must be a clique, so that a = b = n. It remains only to prove the existence of a graph G with order n, h m (G) = a and mp(G) = b for all 2 ≤ a ≤ b ≤ n − 1. Let W be a clique of order b and P ℓ be a path of order ℓ ≥ 1 with vertices x 1 , . . . , x ℓ such that x i ∼ x i+1 for 1 ≤ i ≤ ℓ − 1. For 2 ≤ a ≤ b let G(a, b, ℓ) be the graph formed from W and P ℓ by joining x 1 to b − a + 1 vertices of W by edges. Write X = V (W ) \ N(x 1 ) and Y = V (W ) ∩ N(x 1 ), so that |X| = a − 1 and |Y | = b − a + 1. The order of G(a, b, ℓ) is b + ℓ. By Lemma 5.4, any monophonic hull set of G(a, b, ℓ) must contain X ∪ {x ℓ }; conversely, this subset is a monophonic hull set, so h m (G(a, b, ℓ)) = a. As the clique number of G(a, b, ℓ) is b, we have mp(G(a, b, ℓ)) ≥ b. For the converse, let M be a maximum mp-set of G(a, b, ℓ). M contains at most two vertices of P ℓ and if |M ∩ V (P ℓ )| = 2, then M ∩ W = ∅ and |M| ≤ b. If |M ∩ V (P ℓ )| = 1, then M cannot contain vertices of both X and Y , so that |M| ≤ max{a, b − a + 2} ≤ b. Thus |M| = b and mp(G(a, b, ℓ)) = b. Therefore the graph G(a, b, n − b) has the required properties.

6. Computational complexity

In this section, we study the computational complexity of the problem of finding the monophonic position number of a general graph. To this end, we formally define the decision version of the problem:

Definition 6.1 (Monophonic position set).
Instance: A graph G, a positive integer k ≤ |V (G)|.
Question: Is there a monophonic position set S for G such that |S| ≥ k?

The problem is hard to solve as shown by the next theorem.

Theorem 6.2. The Monophonic position set problem is NP-hard.

Proof. We prove that the Clique problem polynomially reduces to Monophonic position set. An instance of Clique is given by a graph G and a positive integer k ≤ |V |. The Clique problem asks whether G contains a clique K ⊆ V (G) of order k or more. The NP-completeness of Clique is well known, as it is one of the original list of 21 NP-complete problems presented in [22]. We polynomially transform an instance (G, k) of Clique to an instance (G ′ , k ′ ) of Monophonic position set so that G has a clique of order k or more if and only if G ′ has a monophonic position set of order k ′ or more. Given an instance (G, k), the graph G ′ is built as follows:

V (G ′ ) = {v ′ , v ′′ | v ∈ V (G)},
E(G ′ ) = {u ′ v ′ | uv ∈ E(G)} ∪ {u ′ v ′′ | u, v ∈ V (G)} ∪ {u ′′ v ′′ | u, v ∈ V (G)}.

In words, G ′ contains a subgraph H isomorphic to G and a clique graph H ′ of order n = |V (G)| such that G ′ = H ∨ H ′ . As for k ′ , we set k ′ = n + k. As a preliminary result, note that ω(G ′ ) = ω(G) + n. Indeed, if S is any clique in G, then S ′ = {s ′ | s ∈ S} is a clique in H, and S ′ ∪ V (H ′ ) is a clique in G ′ , since any vertex in H ′ is a universal vertex of G ′ . Then ω(G ′ ) ≥ ω(G) + n. Conversely, if S is any clique of G ′ , then |S ∩ V (H)| ≤ ω(G) and |S ∩ V (H ′ )| ≤ n. Hence ω(G ′ ) ≤ ω(G) + n. By Proposition 3.8, mp(G ′ ) = mp(H ∨ H ′ ) = max{ω(H) + ω(H ′ ), mp(H), mp(H ′ )}. We have that ω(H ′ ) = mp(H ′ ) = n, since H ′ is a clique, while mp(H) = mp(G) and ω(H) = ω(G) ≤ n, since H is isomorphic to G. Then mp(G ′ ) = max{ω(H) + n, mp(H), n} = ω(H) + n, since mp(H) ≤ n.

Now assume that an instance (G, k) of Clique has a positive answer; then ω(G) = ω(H) ≥ k. Then mp(G ′ ) = ω(H) + n ≥ k + n = k ′ , and hence Monophonic position set has a positive answer. On the other hand, if the instance (G, k) of Clique has a negative answer then ω(G) = ω(H) < k, which in turn implies that Monophonic position set has a negative answer, since mp(G ′ ) = ω(H) + n < k + n.

By the proof of Theorem 6.2, it is not clear if Monophonic position set is NP-complete.
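The reduction and the preliminary identity ω(G′) = ω(G) + n can be sanity-checked by brute force on a toy instance. The construction below mirrors the proof (G′ = H ∨ H′, with H isomorphic to G and H′ a clique on n vertices); the encoding and the example graph are made up for this sketch.

```python
from itertools import combinations

def clique_number(vertices, edges):
    """Brute-force ω(G) for tiny graphs; edges is a set of 2-element frozensets."""
    for k in range(len(vertices), 1, -1):
        for S in combinations(vertices, k):
            if all(frozenset(p) in edges for p in combinations(S, 2)):
                return k
    return 1 if vertices else 0

def clique_reduction(vertices, edges):
    """Build G' = H join H' from the proof: a copy H of G, a clique H' of
    order n = |V(G)|, and all join edges between the two parts."""
    H = [("p", v) for v in vertices]    # vertices v'
    Hp = [("pp", v) for v in vertices]  # vertices v''
    copy_edges = {frozenset({("p", u), ("p", v)}) for u, v in map(tuple, edges)}
    join_edges = {frozenset({a, b}) for a in H for b in Hp}
    clique_edges = {frozenset(p) for p in combinations(Hp, 2)}
    return H + Hp, copy_edges | join_edges | clique_edges

# toy instance: G is the path 0-1-2, so ω(G) = 2 and n = 3
V = [0, 1, 2]
E = {frozenset({0, 1}), frozenset({1, 2})}
Vp, Ep = clique_reduction(V, E)
```

On this instance clique_number(Vp, Ep) gives 5 = ω(G) + n, agreeing with the identity used in the proof.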
However if we restrict the problem to instances (G, k) with k > |V (G)|/2 and G = H ∨ K, where H is a generic graph and K is a clique graph having the same order as H, the problem is NP-complete. Indeed, given a solution, it can be tested in polynomial time if it is a clique and if its order is larger than k.

Figure 1: The Petersen graph with a maximum gp-set (left) and a maximum mp-set (right)

Observation 2.3. For any distance-hereditary graph G we have mp(G) = gp(G).

Corollary 2.4. For any block graph G with s(G) simplicial vertices, mp(G) = s(G). In particular, for any tree T with ℓ(T ) leaves, we have mp(T ) = ℓ(T ).

Lemma 2.10. Let G be any connected graph of order n and c(G) cut-vertices. Then G has a maximum monophonic position set that does not contain any cut-vertices. Thus mp(G) ≤ n − c(G).

Finally we present two results that show the effect on monophonic position sets of adding a leaf to a graph. Recall that for any set M ⊂ V (G) a vertex u of G is in the interior M 0 of M if and only if there are vertices x, y ∈ M \ {u} such that u lies on a monophonic x, y-path.

Lemma 2.12. Let G ′ be a graph obtained from a graph G by adding a pendant edge uv at a vertex v of G. Then mp(G) ≤ mp(G ′ ) ≤ mp(G) + 1. Moreover, mp(G ′ ) = mp(G) + 1 if and only if there exists a maximum mp-set M of G with v ∉ M such that (M ∪ {v}) 0 = {v}.

Theorem 3.3. The monophonic position numbers of the Petersen graph, the Heawood graph and the McGee graph are three, three and two respectively. This motivates the following conjecture.

Conjecture 3.4. For sufficiently large g, the monophonic position number of a (d, g)-cage G satisfies mp(G) < d.

then there is a unique tree T i that contains a vertex of M and the vertices of M ∩ (C \ R) are {i − 1, i + 1}, so that, since no other tree contains a vertex of M, we must have |R| = 2 and the other tree T j is a path, in which case equality holds in mp(G) = ℓ(G) + 1.
Hence assume that every tree T i has |M ∩ T i | = ℓ ′ (T i ) and |M ∩ (C \ R)| = 1. If there are two trees T i , T j such that i ∼ j on C, then there would be a monophonic path from M ∩ T i to M ∩ T j through the vertex of M on C \ R, so either |R| = 2 or s = |R| = 3; in the latter case trivially mp(G) = ℓ(G). If R = {i, i + 1}, then the leaves of T i and T i+1 together with the vertex i − 1 of C constitute an mp-set of G, so that we have mp(G) = ℓ(G) + 1 in this case; otherwise mp(G) = ℓ(G).

Theorem 3.6 ([17]). For graphs G and H the general position number of the join G ∨ H and the corona product G ⊙ H are given by gp(G ∨ H) = max{ω(G) + ω(H), α ω (G), α ω (H)} and, if G is connected and has order n(G), then gp(G ⊙ H) = n(G)α ω (H).

Proposition 3.7. Let G be a connected graph with order n(G) and let H be any graph. Then mp(G ⊙ H) = n(G)mp(H).

Proposition 3.8. The monophonic position number of the join G ∨ H of graphs G and H is related to the monophonic position numbers of G and H by mp(G ∨ H) = max{ω(G) + ω(H), mp(G), mp(H)}.

Proof. It is evident that an mp-set in G is an mp-set in G ∨ H, so that mp(G ∨ H) ≥ mp(G) and likewise mp(G ∨ H) ≥ mp(H). Also the union of a clique in G and a clique in H is a clique in G ∨ H and is hence an mp-set in G ∨ H, so it follows that mp(G ∨ H) ≥ max{ω(G) + ω(H), mp(G), mp(H)}. For the opposite direction, let M be a maximum mp-set in G ∨ H and set M 1 = M ∩ V (G) and M 2 = M ∩ V (H). If M 1 = ∅ or M 2 = ∅, then trivially mp(G ∨ H) ≤ max{mp(G), mp(H)}, so suppose that M 1 and M 2 are both non-empty.

Let G be a connected bipartite graph with bipartition (A, B) and let S ⊆ V (G). Fix S A = S ∩ A and S B = S ∩ B. A set X ⊆ A (or B) is uniform in G if N(u) = N(v) for all u, v ∈ X; we call S ⊆ V (G) a uniform set if both S A and S B are uniform in G. Let ψ(G) denote the number of vertices in a largest uniform set in G.

Theorem 4.1. If G is a connected bipartite graph with bipartition (A, B), then mp(G) = max{α(G), ψ(G)}.

Corollary 4.2. If T is a tree with order n ≥ 2, then mp(T ) = n, if T is a star, and mp(T ) = α(T ), otherwise.

A similar argument yields the following results.

Corollary 4.3. If n, m ≥ 2, then mp(P n □ P m ) = 4, if n = m = 2, and mp(P n □ P m ) = α(P n □ P m ) = ⌈n/2⌉⌈m/2⌉ + ⌊n/2⌋⌊m/2⌋, otherwise.

Corollary 4.4. If k ≥ 3, then mp(Q k ) = 2 k−1 , where Q k is the hypercube with order 2 k .

Definition 4.5. Let C ′ ⊆ C and I ′ ⊆ I. Then G ′ = (C ′ , I ′ ) is a separated subgraph of the split graph G = (C, I) if and only if either
• (Type A) there is no edge from C ′ to I ′ in G, or
• (Type B) there is a vertex v ∈ I ′ that is adjacent to every vertex of C ′ , there are no edges from I ′ \ {v} to C ′ and for any w ∈ I ′ \ {v} we have N(w) ⊂ N(v).

Theorem 5.1. For any 2 ≤ a ≤ b there exists a graph G with mp(G) = a and gp(G) = b.

Figure 2: H 4 with a maximum gp-set

Consider the graph H b,a formed by attaching a − 2 leaves to the central vertex of the half-wheel H b−a+2 . The half-wheel is an isometric subgraph of H b,a , so it follows from the previous discussion that if b − a + 2 ≥ 4 (i.e. if b ≥ a + 2) any largest monophonic and general position sets of H b,a can contain at most two or b − a + 2 vertices of H b−a+2 respectively, so that mp(H b,a ) ≤ (a − 2) + 2 = a and gp(H b,a ) ≤ (b − a + 2) + (a − 2) = b. If L is the set of a − 2 leaves of H b,a , then the sets {2, 4} ∪ L and {2, 4, 6, . . . , 2(b − a + 2)} ∪ L are in monophonic and general position respectively, implying that mp(H b,a ) = a and gp(H b,a ) = b. This leaves only the case b = a + 1 to examine.
It follows as above that for b = a + 1 we have mp(H b−a+2 ) = mp(H 3 ) = 2, so that mp(H a+1,a ) = a. It is easily seen that gp(H 3 ) = 4, but the only gp-sets of H 3 contain the central vertex x. By Lemma 2.10 we can assume that a gp-set of H a+1,a does not contain x, so that gp(H a+1,a ) ≤ (a − 2) + 3 = a + 1 and the set {2, 4, 6} ∪ L shows that we have equality.

Theorem 5.1 suggests the following problem for further research.

Problem 5.2. For given 2 ≤ a ≤ b, for which values of n does there exist a graph G with order n, mp(G) = a and gp(G) = b?

Theorem 5.3. There exists a graph G with igp(G) = a and mp(G) = b if and only if 1 = a ≤ b or 2 ≤ a, b.

For two vertices u, v of a graph G the set I[u, v] consists of all vertices that lie on a u, v-geodesic; for a subset S ⊆ V (G) we have I[S] = ∪ u,v∈S I[u, v]. If I[S] = S, then S is convex. A smallest convex set containing a given subset S is the convex hull I h [S] of S; if I h [S] = V (G), then S is a hull-set and the number of vertices in a smallest hull set is the hull number h(G) of G.

Theorem 5.6. For all n, a, b ≥ 1 there exists a connected graph G with h m (G) = a, mp(G) = b and order n if and only if a = b = n or 2 ≤ a ≤ b ≤ n − 1.

then M is monophonically convex or m-convex. A smallest m-convex set containing M is an m-convex hull of M and is denoted by [M] m . It is possible to construct the monophonic convex hull [M] m from the sequence {K k [M]}, k ≥ 0, where K 0 [M] = M, K 1 [M] = K[M] and K k [M] = K[K k−1 [M]] for k ≥ 2. From some term onwards, the sequence must be constant; if r is the smallest number such that K r [M] = K r+1 [M], then [M] m = K r [M].

Acknowledgements

The first author thanks the University of Kerala for providing a JRF.
The third author gratefully acknowledges funding support from EPSRC grant EP/W522338/1 and London Mathematical Society grant ECF-2021-27 and thanks the Open University for an extension of funding during lockdown. The authors thank Dr. Erskine for help with computation of the mp-numbers of cubic cages and the anonymous reviewers for their useful feedback.

References

[1] Akbari, S., Horsley, D. & Wanless, I.M., Induced path factors of regular graphs. J. Graph Theory 97 (2) (2021), 260-280.
[2] Anand, B.S., Chandran S.V., U., Changat, M., Klavžar, S. & Thomas, E.J., Characterization of general position sets and its applications to cographs and bipartite graphs. Appl. Math. Comput. 359 (2019), 84-89.
[3] Araujo, J., Campos, V., Giroire, F., Nisse, N., Sampaio, L. & Soares, R., On the hull number of some graph classes. Theor. Comput. Sci. 475 (2013), 1-12.
[4] Bondy, J.A. & Murty, U.S.R., Graph Theory with Applications. London: Macmillan, Vol. 290 (1976).
[5] Cagaanan, G.B. & Canoy Jr., S.R., On the hull sets and hull number of the Cartesian product of graphs. Discrete Math. 287 (1-3) (2004), 141-144.
[6] Chandran S.V., U. & Parthasarathy, G.J., The geodesic irredundant sets in graphs. Int. J. Math. Comb. 4 (2016), 135-143.
[7] Cicerone, S. & Di Stefano, G., Networks with small stretch number. J. Discrete Algorithms 2 (4) (2004), 383-405.
[8] Chartrand, G., McCanna, J., Sherwani, N., Hossain, M. & Hashmi, J., The induced path number of bipartite graphs. Ars Comb. 37 (1994), 191-208.
[9] Cornelsen, S. & Di Stefano, G., Treelike comparability graphs. Discrete Appl. Math. 157 (8) (2009), 1711-1722.
[10] Di Stefano, G., Distance-hereditary comparability graphs. Discrete Appl. Math. 160 (18) (2012), 2669-2680.
[11] Di Stefano, G., Mutual visibility in graphs. Appl. Math. Comput. 419 (2022), 126850.
[12] Dourado, M.C., Protti, F. & Szwarcfiter, J.L., Algorithmic aspects of monophonic convexity. Electron. Notes Discrete Math. 30 (2008), 177-182.
[13] Dourado, M.C., Gimbel, J.G., Kratochvíl, J., Protti, F. & Szwarcfiter, J.L., On the computation of the hull number of a graph. Discrete Math. 309 (18) (2009), 5668-5674.
[14] Dourado, M.C., Protti, F. & Szwarcfiter, J.L., Complexity results related to monophonic convexity. Discrete Appl. Math. 158 (12) (2010), 1268-1274.
[15] Dudeney, H.E., Amusements in mathematics. Nelson, Edinburgh (1917).
[16] Erskine, G., personal communication (2020).
[17] Ghorbani, M., Maimani, H.R., Momeni, M., Mahid, F.R., Klavžar, S. & Rus, G., The general position problem on Kneser graphs and on some graph operations. Discuss. Math. Graph Theory 41 (4) (2019), 1199-1213.
[18] Hall, P., On representatives of subsets. J. London Math. Soc. 10 (1) (1935), 26-30.
[19] Howorka, E., A characterization of distance-hereditary graphs. Q. J. Math. 28 (4) (1977), 417-420.
[20] Howorka, E., A characterization of Ptolemaic graphs. J. Graph Theory 5 (3) (1981), 323-331.
[21] Hernando, C., Mora, M., Pelayo, I. & Seara, C., On monophonic sets in graphs. Discrete Math., submitted (2003).
[22] Karp, R.M., Reducibility among combinatorial problems. In Complexity of Computer Computations (1972), 85-103.
[23] Klavžar, S., Kuziak, D., Peterin, I. & Yero, I.G., A Steiner general position problem in graph theory. Comput. Appl. Math. 40 (6) (2021), 1-15.
[24] Klavžar, S., Rall, D.F. & Yero, I.G., General d-position sets. Ars Math. Contemp. 21 (1) (2021), 1-03.
[25] Klavžar, S. & Yero, I.G., The general position problem and strong resolving graphs. Open Math. 17 (1) (2019), 1126-1135.
[26] Körner, J., On the extremal combinatorics of the Hamming space. J. Comb. Theory Ser. A 71 (1) (1995), 112-126.
[27] Manuel, P. & Klavžar, S., A general position problem in graph theory. Bull. Aust. Math. Soc. 98 (2) (2018), 177-187.
[28] Manuel, P. & Klavžar, S., The graph theory general position problem on some interconnection networks. Fundam. Inform. 163 (4) (2018), 339-350.
[29] Neethu, P.K., Chandran S.V., U., Changat, M. & Klavžar, S., On the general position number of complementary prisms. Fundam. Inform. 178 (3) (2021), 267-281.
[30] Patkós, B., On the general position problem on Kneser graphs. Ars Math. Contemp. 18 (2) (2020), 273-280.
[31] Pelayo, I.M., Geodesic convexity in graphs. New York: Springer (2013).
[32] Thomas, E.J. & Chandran S.V., U., Characterization of classes of graphs with large general position number. AKCE Int. J. Graphs Comb. (2020), 1-5.
[33] Thomas, E.J. & Chandran S.V., U., On independent position sets in graphs. Proyecciones (Antofagasta) 40 (2) (2021), 385-398.
Published as a conference paper at ICLR 2021

ON STATISTICAL BIAS IN ACTIVE LEARNING: HOW AND WHEN TO FIX IT

Sebastian Farquhar (OATML, Department of Computer Science), Yarin Gal (OATML, Department of Computer Science), Tom Rainforth (Department of Statistics); University of Oxford

ABSTRACT

Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution. We formalize this bias and investigate the situations in which it can be harmful and sometimes even helpful. We further introduce novel corrective weights to remove bias when doing so is beneficial. Through this, our work not only provides a useful mechanism that can improve the active learning approach, but also an explanation of the empirical successes of various existing approaches which ignore this bias. In particular, we show that this bias can be actively helpful when training overparameterized models, like neural networks, with relatively little data.

* Equal contribution. Corresponding author [email protected].

arXiv:2101.11665 (https://arxiv.org/pdf/2101.11665v2.pdf)
INTRODUCTION

In modern machine learning, unlabelled data can be plentiful while labelling requires scarce resources and expert attention, for example in medical imaging or scientific experimentation. A promising solution to this is active learning: picking the most informative datapoints to label that will hopefully let the model be trained in the most sample-efficient way possible (Atlas et al., 1990; Settles, 2010). However, active learning has a complication. By picking the most informative labels, the acquired dataset is not drawn from the population distribution. This sampling bias, noted by e.g., MacKay (1992); Dasgupta & Hsu (2008), is worrying: key results in machine learning depend on the training data being identically and independently distributed (i.i.d.) samples from the population distribution.
For example, we train neural networks by minimizing a Monte Carlo estimator of the population risk. If training data are actively sampled, that estimator is biased and we optimize the wrong objective. The possibility of bias in active learning has been considered by e.g., Beygelzimer et al. (2009); Chu et al. (2011); Ganti & Gray (2012), but the full problem is not well understood. In particular, methods that remove active learning bias have been restricted to special cases, so it has been impossible to even establish whether removing active learning bias is helpful or harmful in typical situations.

To this end, we show how to remove the bias introduced by active learning with minimal changes to existing active learning methods. As a stepping stone, we build a Plain Unbiased Risk Estimator, R̂_PURE, which applies a corrective weighting to actively sampled datapoints in pool-based active learning. Our Levelled Unbiased Risk Estimator, R̂_LURE, builds on this and has lower variance and additional desirable finite-sample properties. We prove that both estimators are unbiased and consistent for arbitrary functions, and characterize their variance. Interestingly, we find, both theoretically and empirically, that our bias corrections can simultaneously also reduce the variance of the estimator, with these gains becoming larger for more effective acquisition strategies. We show that, in turn, these combined benefits can sometimes lead to significant improvements for both model evaluation and training. The benefits are most pronounced in underparameterized models where each datapoint affects the learned function globally. For example, in linear regression adopting our weighting allows better estimates of the parameters with less data.

On the other hand, in cases where the model is overparameterized and datapoints mostly affect the learned function locally, like deep neural networks, we find that correcting active learning bias can be ineffective or even harmful during model training. Namely, even though our corrections typically produce strictly superior statistical estimators, we find that the bias from standard active learning can actually be helpful by providing a regularising effect that aids generalization. Through this, our work explains the known empirical successes of existing active learning approaches for training deep models (Gal et al., 2017b; Shen et al., 2018), despite these ignoring the bias this induces.

1. We offer a formalization of the problem of statistical bias in active learning.
2. We introduce active learning risk estimators, R̂_PURE and R̂_LURE, and prove both are unbiased, consistent, and with variance that can be less than the naive (biased) estimator.
3. Using these, we show that active learning bias can hurt in underparameterized cases like linear regression but help in overparameterized cases like neural networks and explain why.

BIAS IN ACTIVE LEARNING

We begin by characterizing the bias introduced by active learning. In supervised learning, generally, we aim to find a decision rule f_θ corresponding to inputs, x, and outputs, y, drawn from a population data distribution p_data(x, y) which, given a loss function L(y, f_θ(x)), minimizes the population risk:

r = E_{x,y∼p_data}[L(y, f_θ(x))].

The population risk cannot be found exactly, so instead we consider the empirical distribution for some dataset of N points drawn from the population. This gives the empirical risk, an unbiased and consistent estimator of r when the data are drawn i.i.d. from p_data and are independent of θ:

R̂ = (1/N) ∑_{n=1}^{N} L(y_n, f_θ(x_n)).
On the other hand, in cases where the model is overparameterized and datapoints mostly affect the learned function locally, like deep neural networks, we find that correcting active learning bias can be ineffective or even harmful during model training. Namely, even though our corrections typically produce strictly superior statistical estimators, we find that the bias from standard active learning can actually be helpful by providing a regularising effect that aids generalization. Through this, our work explains the known empirical successes of existing active learning approaches for training deep models (Gal et al., 2017b; Shen et al., 2018), despite these ignoring the bias this induces. Our contributions are as follows:

1. We offer a formalization of the problem of statistical bias in active learning.
2. We introduce active learning risk estimators, R̂_PURE and R̂_LURE, and prove both are unbiased, consistent, and with variance that can be less than that of the naive (biased) estimator.
3. Using these, we show that active learning bias can hurt in underparameterized cases like linear regression but help in overparameterized cases like neural networks, and explain why.

2 BIAS IN ACTIVE LEARNING

We begin by characterizing the bias introduced by active learning. In supervised learning, generally, we aim to find a decision rule f_θ corresponding to inputs, x, and outputs, y, drawn from a population data distribution p_data(x, y) which, given a loss function L(y, f_θ(x)), minimizes the population risk:

r = E_{x,y ∼ p_data}[L(y, f_θ(x))].

The population risk cannot be found exactly, so instead we consider the empirical distribution for some dataset of N points drawn from the population. This gives the empirical risk, an unbiased and consistent estimator of r when the data are drawn i.i.d. from p_data and are independent of θ:

R̂ = (1/N) Σ_{n=1}^{N} L(y_n, f_θ(x_n)).
In pool-based active learning (Lewis & Gale, 1994; Settles, 2010), we begin with a large unlabelled dataset, known as the pool dataset D_pool ≡ {x_n | 1 ≤ n ≤ N}, and sequentially pick the most useful points for which to acquire labels. The lack of most labels means we cannot evaluate R̂ directly, so we use the sub-sample empirical risk evaluated using the M actively sampled labelled points:

R̃ = (1/M) Σ_{m=1}^{M} L(y_m, f_θ(x_m)).   (1)

Though almost all active learning research uses this estimator (see Appendix D), it is not an unbiased estimator of either R̂ or r when the M points are actively sampled. Under active, i.e. non-uniform, sampling the M datapoints are not drawn from the population distribution, resulting in a bias which we formally characterize in §4. See Appendix A for a more general overview of active learning.

Note an important distinction between what we will call "statistical bias" and "overfitting bias." The bias from active learning above is a statistical bias in the sense that using R̃ biases our estimation of r, regardless of θ. As such, optimizing θ with respect to R̃ induces bias into our optimization of θ. In turn, this breaks any consistency guarantees for our learning process: if we keep M/N fixed, take M → ∞, and optimize for θ, we no longer get the optimal θ that minimizes r. Almost all work on active learning for neural networks currently ignores the issue of statistical bias. However, even without this statistical bias, indeed even if we used R̂ directly, the training process itself also creates an overfitting bias: evaluating the risk using training data induces a dependency between the data and θ. This is why we usually evaluate the risk on held-out test data when doing model selection. Dealing with overfitting bias is beyond the scope of our work, as this would equate to solving the problem of generalization.
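The statistical bias of the naive estimator in eq. (1) is easy to see in a small simulation. The sketch below is our own illustrative code (synthetic losses stand in for L(y, f_θ(x)); it is not the paper's experimental setup): a fixed pool of per-point losses is sub-sampled without replacement, preferentially at high-loss points, and the plain mean of the acquired losses systematically overestimates the pool risk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 1000, 50, 2000

# Stand-in pool losses L_n = L(y_n, f_theta(x_n)) for a fixed model f_theta.
losses = rng.exponential(size=N)
pool_risk = losses.mean()

naive_estimates = []
for _ in range(trials):
    # "Active" sampling without replacement, favouring high-loss points.
    p = losses / losses.sum()
    idx = rng.choice(N, size=M, replace=False, p=p)
    # The naive sub-sample estimator of eq. (1): an unweighted mean.
    naive_estimates.append(losses[idx].mean())

# Positive: informative (high-loss) points are over-represented.
bias = np.mean(naive_estimates) - pool_risk
```

Averaged over many acquisition trajectories, the naive estimate sits well above the pool risk, which is exactly the bias the corrective weights of §3 are designed to remove.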
The small amount of prior work which does consider statistical bias in active learning entirely ignores this overfitting bias without commenting on it. In §3-6, we focus on statistical bias in active learning, so that we can produce estimators that are valid and consistent, and let us optimize the intended objective, not so they can miraculously close the train-test gap. From a more formal perspective, our results all assume that θ is chosen independently of the training data; an assumption that is almost always (implicitly) made in the literature. This ensures our estimators form valid objectives, but also has important implications that are typically overlooked. We return to this in §7, examining the interaction between statistical and overfitting bias.

3 UNBIASED ACTIVE LEARNING: R̂_PURE AND R̂_LURE

We now show how to unbiasedly estimate the risk in the form of a weighted expectation over actively sampled data points. We denote the set of actively sampled points D_train ≡ {(x_m, y_m) | 1 ≤ m ≤ M}, where ∀m: x_m ∈ D_pool. We begin by building a "plain" unbiased risk estimator, R̂_PURE, as a stepping stone; its construction is quite natural in that each term is individually an unbiased estimator of the risk. We then use it to construct a "levelled" unbiased risk estimator, R̂_LURE, which is an unbiased and consistent estimator of the population risk just like R̂_PURE, but which reweights individual terms to produce lower variance and resolve some pathologies of the first approach. Both estimators are easy to implement and have trivial compute/memory requirements.

3.1 R̂_PURE: PLAIN UNBIASED RISK ESTIMATOR

For our estimators, we introduce an active sampling proposal distribution over indices rather than the more typical distribution over datapoints. This simplifies our proofs, but the two are algorithmically equivalent for pool-based active learning because of the one-to-one relationship between datapoints and indices.
We define the probability mass for each index being the next to be sampled, once D_train contains m − 1 points, as q(i_m; i_{1:m−1}, D_pool). Because we are learning actively, the proposal distribution depends on the indices sampled so far (i_{1:m−1}) and the available data (D_pool; note though that it does not depend on the labels of unsampled points). The only requirement on this proposal distribution for our theoretical results is that it must place non-zero probability on all of the training data: anything else necessarily introduces bias. Considerations for the acquisition proposal distribution are discussed further in §3.3. We first present the estimator before proving the main results:

R̂_PURE ≡ (1/M) Σ_{m=1}^{M} a_m,  where  a_m ≡ w_m L_{i_m} + (1/N) Σ_{t=1}^{m−1} L_{i_t},   (2)

where the loss at a point is L_{i_m} ≡ L(y_{i_m}, f_θ(x_{i_m})), the weights are w_m ≡ 1/(N q(i_m; i_{1:m−1}, D_pool)), and i_m ∼ q(i_m; i_{1:m−1}, D_pool). For practical implementation, R̂_PURE can further be written in the following more computationally friendly form that avoids a double summation:

R̂_PURE = (1/M) Σ_{m=1}^{M} [1/(N q(i_m; i_{1:m−1}, D_pool)) + (M − m)/N] L_{i_m}.   (3)

However, we focus on the first form for our analysis because a_m in (2) has some beneficial properties not shared by the weighting factors in (3). In particular, in Appendix B.1 we prove that:

Lemma 1. The individual terms a_m of R̂_PURE are unbiased estimators of the risk: E[a_m] = r.

The motivation for the construction of a_m directly originates from constructing an estimator where Lemma 1 holds while only making use of the observed losses L_{i_1}, ..., L_{i_m}, taking care with the fact that each new proposal distribution does not have support over points that have already been acquired. Except for trivial problems, a_m is essentially unique in this regard; naive importance sampling (i.e. (1/M) Σ_{m=1}^{M} w_m L_{i_m}) does not lead to an unbiased, or even consistent, estimator.
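The single-sum form of eq. (3) is a few lines of code. The sketch below is our own illustrative implementation (synthetic losses stand in for the per-point L(y, f_θ(x)), and `proposal_fn` is a caller-supplied acquisition distribution); a Monte Carlo average confirms the estimator stays centred on the pool risk, as Theorem 1 asserts.

```python
import numpy as np

def r_pure(losses, proposal_fn, M, rng):
    """R_PURE via the single-sum form of eq. (3).

    losses: array of pool losses L_n (all known here purely for illustration).
    proposal_fn: maps (remaining indices, losses) -> probabilities over them,
    i.e. q(i_m; i_{1:m-1}, D_pool), renormalized over unacquired points.
    """
    N = len(losses)
    remaining = list(range(N))
    total = 0.0
    for m in range(1, M + 1):
        q = proposal_fn(np.array(remaining), losses)
        pick = rng.choice(len(remaining), p=q)
        i_m = remaining.pop(pick)          # acquire without replacement
        w_m = 1.0 / (N * q[pick])          # importance weight w_m
        total += (w_m + (M - m) / N) * losses[i_m]
    return total / M

rng = np.random.default_rng(1)
L = rng.exponential(size=200)
uniform = lambda rem, losses: np.full(len(rem), 1.0 / len(rem))
estimates = [r_pure(L, uniform, M=20, rng=rng) for _ in range(2000)]
```

With a uniform proposal the per-point weights are not 1 (c.f. §3.2), yet the Monte Carlo mean of the estimates matches the pool risk; the same holds for any proposal with full support over the unacquired points.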
However, the overall estimator R̂_PURE is not the only unbiased estimator of the risk, as we discuss in §3.2. We can now characterize the behaviour of R̂_PURE as follows (see Appendix B.2 for proof).

Theorem 1. R̂_PURE as defined above has the properties:

E[R̂_PURE] = r,
Var[R̂_PURE] = Var[L(y, f_θ(x))]/N + (1/M²) Σ_{m=1}^{M} E_{D_pool, i_{1:m−1}}[Var[w_m L_{i_m} | i_{1:m−1}, D_pool]].   (4)

Remark 1. The first term of (4) is the variance of the loss on the whole pool, while the second term accounts for the variance originating from the active sampling itself given the pool. This second term is O(N/M) times larger and so will generally dominate in practice, as typically M ≪ N.

Armed with Theorem 1, we can prove the consistency of R̂_PURE under standard assumptions: R̂_PURE converges in expectation (i.e. its mean squared error tends to zero) as M → ∞ under the assumptions that N > M, L(y, f_θ(x)) is integrable, and q(i_m; i_{1:m−1}, D_pool) is a valid proposal in the sense that it puts non-zero mass on each unlabelled datapoint. Formally, as proved in Appendix B.3:

Theorem 2. Let α = N/M and assume that α > 1. If E[L(y, f_θ(x))²] < ∞ and ∃β > 0: min_{n∈{1:N}\i_{1:m−1}} q(i_m = n; i_{1:m−1}, D_pool) ≥ β/N for all N ∈ Z⁺, m ≤ N, then R̂_PURE converges in its L² norm to r as M → ∞, i.e., lim_{M→∞} E[(R̂_PURE − r)²] = 0.

3.2 R̂_LURE: LEVELLED UNBIASED RISK ESTIMATOR

R̂_PURE is natural in that each term is an unbiased estimator of r. However, this creates surprising behaviour given the sequential structure of active learning. For example, with a uniform proposal distribution, equivalent to not actively learning, points sampled earlier are more highly weighted than later ones and R̂_PURE ≠ R̂. Specifically, a uniform proposal, q(i_m; i_{1:m−1}, D_pool) = 1/(N − m + 1), gives a weight on each sampled point of 1 + (M − 2m + 1)/N ≠ 1.
Similarly, as M → N (such that we use the full pool) the weights also fail to become uniform: setting M = N gives a weight for each point of 1 + (M − 2m + 1)/N. R̂_LURE resolves these pathologies by directly reweighting the individual loss terms,

R̂_LURE ≡ (1/M) Σ_{m=1}^{M} v_m L_{i_m},  where  v_m ≡ 1 + ((N − M)/(N − m)) (1/((N − m + 1) q(i_m; i_{1:m−1}, D_pool)) − 1),   (5)

with the weights v_m derived in Appendix B.4 so that E[v_m] = 1 for every m; its unbiasedness and variance are established as Theorem 3 in Appendix B.5. Both R̂_PURE and R̂_LURE are unbiased and consistent as proven above. This is in contrast to the naive risk estimator R̃, for which the choice of proposal distribution affects the bias of the estimator. It is easy to satisfy the requirement for non-zero mass everywhere. Even prior work which selects points deterministically (e.g., Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) or a geometric heuristic like coreset construction (Sener & Savarese, 2018)) can be easily adapted. Any scheme, like BALD, that selects points with argmax can use softmax to return a distribution instead. Alternatively, a distribution can be constructed analogous to epsilon-greedy exploration: with probability ε we pick uniformly, otherwise we pick the point returned by an arbitrary acquisition strategy. This adapts any deterministic active learning scheme to allow unbiased risk estimation. It is also possible to use R̂_LURE and R̂_PURE with data collected using a proposal distribution that does not fully support the training data, though they will not fully correct the bias in this case. Namely, if we have a set of points, I, that are ignored by the proposal (i.e. that are assigned zero mass), we can still use R̂_LURE and R̂_PURE in the same way, but they both introduce the same following bias:

E[R̂^I_LURE] = E[R̂^I_PURE] = r − E[(1/N) Σ_{n∈I} L_n].

Sometimes this bias will be small and may be acceptable if it enables a desired acquisition scheme, but in general one of the stochastic adaptations described above is likely to be preferable. One can naturally also extend this result to cases where I varies at each iteration of the active learning (including deterministic acquisition strategies), for which we again have a non-zero bias.
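The R̂_LURE weighting is one line of arithmetic once the acquisition probabilities are recorded. This is our own minimal sketch of the closed-form v_m derived in Appendix B.4 (function names are ours); the check at the end confirms the levelling property discussed above: under a uniform proposal every v_m collapses to exactly 1, so R̂_LURE reduces to the plain sample mean.

```python
import numpy as np

def lure_weight(N, M, m, q_im):
    # v_m = 1 + (N-M)/(N-m) * (1/((N-m+1) q_im) - 1), where q_im is the
    # proposal mass that was assigned to the point acquired at step m.
    return 1.0 + (N - M) / (N - m) * (1.0 / ((N - m + 1) * q_im) - 1.0)

def r_lure(acquired_losses, acquired_probs, N):
    # R_LURE = (1/M) sum_m v_m L_{i_m}.
    M = len(acquired_losses)
    return float(np.mean([
        lure_weight(N, M, m, q) * loss
        for m, (loss, q) in enumerate(zip(acquired_losses, acquired_probs), start=1)
    ]))

# Uniform proposal q = 1/(N - m + 1): all weights are exactly 1,
# so R_LURE coincides with the unweighted mean of the acquired losses.
N, M = 100, 10
qs = [1.0 / (N - m + 1) for m in range(1, M + 1)]
L = np.arange(1.0, M + 1)
```

Note the contrast with R̂_PURE, whose per-point weights under uniform sampling are 1 + (M − 2m + 1)/N and so depend on the acquisition step m.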
Though the choices of acquisition proposal and risk estimator are algorithmically detached, choosing a good proposal will still be critical to performance in practice. In the next section, we will discuss how the proposal distribution can affect the variance of the estimators, and we will see that our approaches also offer the potential to reduce the variance of the naive biased estimator. Later, in §7, we will turn to a third element of active learning schemes, generalization, and consider the fact that optimization introduces a bias separately from the choice of risk estimator and proposal distribution.

4 UNDERSTANDING THE EFFECT OF R̂_LURE AND R̂_PURE ON VARIANCE

In order to show that the variance of our unbiased estimators can be lower than that of the biased R̃, with a well-chosen acquisition function, we first introduce an analogous result to Theorems 1 and 3 for R̃, the proof for which is given in Appendix B.8.

Theorem 6. Let µ_m := E[L_{i_m}] and µ_{m|i,D} := E[L_{i_m} | i_{1:m−1}, D_pool]. For R̃ (defined in (1)):

E[R̃] = (1/M) Σ_{m=1}^{M} µ_m  (≠ r in general),
Var[R̃] = Var_{D_pool}[E[R̃ | D_pool]]   (term 1)
  + (1/M²) Σ_{m=1}^{M} E_{D_pool, i_{1:m−1}}[Var[L_{i_m} | i_{1:m−1}, D_pool]]   (term 2)
  + (1/M²) Σ_{m=1}^{M} E_{D_pool}[Var[µ_{m|i,D} | D_pool]]   (term 3)
  + (2/M²) Σ_{m=1}^{M} E_{D_pool}[Cov[L_{i_m}, Σ_{k<m} L_{i_k} | D_pool]]   (term 4).   (7)

Examining this expression suggests that the variances of R̂_PURE and, in particular, R̂_LURE will often be lower than that of R̃, given a suitable proposal. Consider the terms of (7). Term 1 is analogous to the shared first term of (4) and (6), Var[L(y, f_θ(x))]/N. If R̃ were an unbiased estimator of R̂ then these would be exactly equal, but the conditional bias introduced by R̃ also varies between pool datasets. In general, term 1 will typically be larger than, or similar to, its unbiased counterparts. In any case, recall that the first terms of (4) and (6) tend to be small contributors to the overall variance anyway; thus term 1 provides negligible scope for R̃ to provide notable benefits over our estimators.
We can also relate term 2 to terms in (4) and (6): it corresponds to the second half of (4), but where we replace the expected conditional variances of the weighted losses w_m L_{i_m} with those of the unweighted losses L_{i_m}. For effective proposals, w_m and L_{i_m} should be anticorrelated: high-loss points should have higher density and thus lower weight. This means the expected conditional variance of w_m L_{i_m} should be less than that of L_{i_m} for a well-designed acquisition strategy. Variation in the expected value of the weights with m can complicate this slightly for R̂_PURE, but the correction factors applied for R̂_LURE avoid this issue and ensure that the second half of (6) will be reliably smaller than term 2. We have thus shown that the variance of R̂_LURE is typically smaller than the sum of terms 1 and 2 under sensible proposals. Expression (7) has additional terms: term 3 is trivially always positive and so contributes to higher variance for R̃ (it comes from variation in the bias in index sampling given D_pool); term 4 reflects correlations between the losses at different iterations, which have been eliminated by our estimators. This term is harder to quantify and can be positive or negative depending on the problem. For example, sampling points without replacement can cause negative correlation, while the proposal adaptation itself can cause positive correlations (finding one high-loss point can help find others). The former effect diminishes as N grows, for fixed M, hinting that term 4 may tend to be positive for N ≫ M. Regardless, if term 4 is big enough to change which estimator has higher variance, then correlation between losses at different acquired points would lead to high bias in R̃. In contrast, we prove in Appendix B.9 that under an optimal proposal distribution both R̂_PURE and R̂_LURE become exact estimators of the empirical risk for any number of samples M, such that they will inevitably have lower variance than R̃ in this case.
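This exactness under the optimal proposal (proved in Appendix B.9 and stated as Theorem 7 below) can be checked numerically. In the sketch below (our own code, with synthetic non-negative losses), sampling each index with probability proportional to its loss among the unacquired points makes every term a_m of eq. (2), and hence R̂_PURE itself, equal the pool risk exactly, for every intermediate M.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 20
L = rng.exponential(size=N)  # non-negative stand-in pool losses

remaining = list(range(N))
acquired, a_terms = [], []
for m in range(1, M + 1):
    rem = np.array(remaining)
    q = L[rem] / L[rem].sum()            # optimal proposal: q* proportional to remaining losses
    pick = rng.choice(len(rem), p=q)
    i_m = remaining.pop(pick)
    w_m = 1.0 / (N * q[pick])            # importance weight w_m
    # Term of eq. (2): w_m L_{i_m} + (1/N) sum of previously acquired losses.
    # Under q*, w_m L_{i_m} is the total remaining loss over N, so a_m is the
    # full pool risk regardless of which index was drawn.
    a_m = w_m * L[i_m] + sum(L[j] for j in acquired) / N
    acquired.append(i_m)
    a_terms.append(a_m)

r_pure_est = float(np.mean(a_terms))     # equals the pool risk exactly
```

Of course this proposal needs all the losses up front, so it is unusable in practice; the point, as the text notes, is that a proposal that concentrates on high-loss points drives the estimator's variance toward zero.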
A similar result holds when we are estimating gradients of the loss, though note that the optimal proposal is different in the two cases.

Theorem 7. Given a non-negative loss, the optimal proposal distribution q*(i_m; i_{1:m−1}, D_pool) = L_{i_m} / Σ_{n∉i_{1:m−1}} L_n yields estimators exactly equal to the pool risk, that is, R̂_PURE = R̂_LURE = R̂ almost surely for all M.

In practice, it is impossible to sample using the optimal proposal distribution. However, we make this point in order to prove that adopting our unbiased estimators is certainly capable of reducing variance relative to standard practice if appropriate acquisition strategies are used. It also provides interesting insights into what makes a good acquisition strategy from the perspective of the risk estimation itself.

5 RELATED WORK

Pool-based active learning (Lewis & Gale, 1994) is useful in cases where input data are prevalent but labeling them is expensive (Atlas et al., 1990; Settles, 2010). The bias from selective sampling was noted by MacKay (1992), but dismissed from a Bayesian perspective based on the likelihood principle. Others have noted that the likelihood principle remains controversial (Rainforth, 2017), and in this case it would assume a well-specified model. Moreover, from a discriminative learning perspective this bias is uncontentiously problematic. Lowell et al. (2019) observe that active learning algorithms and datasets become coupled by active sampling and that datasets often outlive algorithms. Despite the potential pitfalls, in deep learning this bias is generally ignored. As an informal survey, we examined the 15 most-cited peer-reviewed papers citing Gal et al. (2017b), which applied active learning to image data using neural networks. Of these, only two mentioned estimator bias, but did not address it, while the rest either ignored or were unaware of this problem (see Appendix D).
There have been some attempts to address active learning bias, but these have generally required fundamental changes to the active learning approach and only apply to particular setups. Some apply importance-sampling estimators (Sugiyama, 2006; Bach, 2006) to online active learning. Unlike pool-based active learning, this involves deciding whether or not to sample a new point as it arrives from an infinite distribution. This makes importance-sampling estimators much easier to develop, but as Settles (2010) notes, "the pool-based scenario appears to be much more common among application papers." Ganti & Gray (2012) address unbiased active learning in a pool-based setting by sampling from the pool with replacement. This effectively converts pool-based learning into a stationary online learning setting, although it overweights data that happens to be sampled early. Sampling with replacement is unwanted in active learning because it requires retraining the model on duplicate data, which is either impossible or wasteful depending on details of the setting. Moreover, they only prove the consistency of their estimator under very strong assumptions (well-specified linear models with noiseless labels and a mean-squared-error loss). Imberg et al. (2020) consider optimal proposal distributions in an importance-sampling setting. Outside the context of active learning, Byrd & Lipton (2019) question the value of importance-weighting for deep learning, which aligns with our findings below.

Figure 1: Illustrative linear regression. Active learning deliberately over-samples unusual points (red x's) which no longer match the population (black dots). Common practice uses the biased unweighted estimator R̃, which puts too much emphasis on unusual points. Our unbiased estimators R̂_PURE and R̂_LURE fix this, learning a function using only D_train nearly equal to the ideal one you would get if you had labels for the whole of D_pool, despite only using a few points.

6 APPLYING R̂_LURE AND R̂_PURE
We first verify that R̂_LURE and R̂_PURE remove the bias introduced by active learning and examine the variance of the estimators. We do this by taking a fixed function whose parameters are independent of D_train and estimating the risk using actively sampled points. We note that this is equivalent to the problem of estimating the risk of an already trained model in a sample-efficient way given unlabelled test data. We consider two settings: an inflexible model (linear regression) on toy but non-linear data, and an overparameterized model (a convolutional Bayesian neural network) on a modified version of MNIST with unbalanced classes and noisy labels.

Linear regression. For linear functions, removing active learning bias (ALB), i.e., the statistical bias introduced by active learning, is critical. We illustrate this in Figure 1. Actively sampled points overrepresent unusual parts of the distribution, so a model learned using the unweighted D_train differs from the ideal function fit to the whole of D_pool. Using our corrective weights more closely approximates the ideal line. The full details of the population distribution and geometric acquisition proposal distribution are in Appendix C.1, where we also show results using an alternative epsilon-greedy proposal. We inspect the ALB in Figure 2a by comparing the estimated risk (with squared error loss and a fixed function) to the corresponding true population risk. While M < N, the unweighted R̃ is biased (in practice we never have M = N, as then active learning is unnecessary). R̂_PURE and R̂_LURE are unbiased throughout. However, they have high variance because the proposal is rather poor. Shading represents the standard deviation of the bias over 1000 different acquisition trajectories.

Bayesian neural network. We actively classify MNIST and FashionMNIST images using a convolutional Bayesian neural network (BNN) with roughly 80,000 parameters. In Figures 2b and 2c we show that R̂_PURE and R̂_LURE remove the ALB.
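The linear-regression illustration can be reproduced in spirit with a few lines. The sketch below is our own self-contained toy (synthetic cubic data and a made-up "geometric" proposal that oversamples the tails; it is not the exact setup of Appendix C.1): the naive fit to the actively acquired points is systematically tilted, while weighting the squared losses with the v_m of R̂_LURE (closed form from Appendix B.4) recovers the pool fit on average.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 600, 40, 300

def lure_weight(N, M, m, q_im):
    # R_LURE weight v_m; q_im is the proposal mass on the acquired point at step m.
    return 1.0 + (N - M) / (N - m) * (1.0 / ((N - m + 1) * q_im) - 1.0)

naive_errs, lure_errs = [], []
for _ in range(trials):
    # Toy pool: cubic data that a straight line must approximate.
    x = rng.uniform(-1.5, 1.5, size=N)
    y = x ** 3 + 0.1 * rng.normal(size=N)
    ideal = np.polyfit(x, y, 1)[0]          # slope of the fit to the whole pool

    # Made-up geometric proposal: oversample points far from the origin.
    remaining = list(range(N))
    idx, v = [], []
    for m in range(1, M + 1):
        rem = np.array(remaining)
        score = np.abs(x[rem]) + 0.1
        q = score / score.sum()
        pick = rng.choice(len(rem), p=q)
        idx.append(remaining.pop(pick))
        v.append(lure_weight(N, M, m, q[pick]))

    idx = np.array(idx)
    naive = np.polyfit(x[idx], y[idx], 1)[0]
    # polyfit's w multiplies residuals, so pass sqrt of the loss weights v_m.
    lure = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(v))[0]
    naive_errs.append(naive - ideal)
    lure_errs.append(lure - ideal)
```

Averaged over trajectories, the naive slope is biased toward the steep tails, while the weighted slope tracks the pool fit, mirroring the Figure 1 and Figure 2a story.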
Here the variance of R̂_PURE and R̂_LURE is lower than or similar to that of the biased estimator. This is because the acquisition proposal distribution, a stochastic relaxation of the Bayesian Active Learning by Disagreement (BALD) objective (Houlsby et al., 2011), is effective (c.f. §4). A full description of the dataset and procedure is provided in Appendix C.2. Our modified MNIST dataset is unbalanced and has noisy labels, which makes the bias more distorting. Overall, Figure 2 shows that our estimators remove the bias introduced by active learning, as expected, and can do so with reduced variance given an acquisition proposal distribution that puts a high probability mass on more informative/surprising high-expected-loss points.

Figure 2 (legend fragment): bias r − E[R̃], r − E[R̂_PURE], r − E[R̂_LURE]; panel (a): linear regression.

Next, we examine the overall effect of using the unbiased estimators to learn a model on downstream performance. Intuitively, removing bias in training while also reducing the variance ought to improve the downstream task objective: test loss and accuracy. To investigate this, we train models using R̃, R̂_LURE, and R̂_PURE with actively sampled data and measure the population risk of each model. For linear regression (Figure 3a), the new estimators improve the test loss: even with small numbers of acquired points we have nearly optimal test loss (estimated with many samples). However, for the BNN, there is a small but significant negative impact on the full test dataset loss of training with R̂_LURE or R̂_PURE (Figure 3b) and a slightly larger negative impact on test accuracy (Figure 3c). That is, we get a better model by training using a biased estimator with higher variance! To validate this further, we consider models trained instead on FashionMNIST (Fig. 3d), on MNIST but with Monte Carlo dropout (MCDO) (Gal & Ghahramani, 2015) (Fig. 3e), and on a balanced version of the MNIST data (Fig. 3f).
In all cases we find similar patterns, suggesting the effects are not overly sensitive to the setting. Further experiments and ablations can be found in Appendix C.2.

7 ACTIVE LEARNING BIAS IN THE CONTEXT OF OVERALL BIAS

In order to explain the finding that R̂_LURE hurts training for the BNN, we return to the bias introduced by overfitting, allowing us to examine the effect of removing statistical bias in the context of overall bias. Namely, we need to consider the fact that training would induce an overfitting bias (OFB) even if we had not used active learning. If we optimize parameters θ according to R̃, then E[R̃(θ*)] ≠ r, because the optimized parameters θ* tend to explain training data better than unseen data. Using R̂_LURE, which removes statistical bias, we can isolate OFB in an active learning setting. More formally, supposing we are optimizing any of the discussed risk estimators (which we will write using R̂_(·) as a placeholder to stand for any of them), we define the OFB as:

B_OFB(R̂_(·)) = r − R̂_LURE(θ*),  where  θ* = argmin_θ R̂_(·)(θ).

B_OFB(R̂_(·)) depends on the details of the optimization algorithm and the dataset. Understanding it fully means understanding generalization in machine learning and is outside our scope. We can still gain insight into the interaction of active learning bias (ALB) and OFB.

Figure 4 (caption fragment): ... (c.f. Figure 2b). (c) BNN on FashionMNIST; OFB is somewhat larger than with MNIST, particularly for R̃ (i.e. our approaches reduce overfitting), and dominates active learning bias (c.f. Figure 2c). Shading ±1 standard error; 150 trajectories.

Consider the possible relationships between the magnitudes of ALB and OFB: the two biases tend to cancel each other out. Indeed, they usually have opposite signs. B_OFB is usually positive: θ* fits the training data better than unseen data. ALB is generally negative: we actively choose unusual/surprising/informative points which are harder to fit than typical points.
Therefore, when significant overfitting is possible, unless ALB is also large, removing ALB will have little effect and can even be harmful. This hypothesis would explain the observations in §6 if we were to show that B_OFB was small for linear regression but had a similar magnitude and opposite sign to ALB for the BNN. This is exactly what we show in Figure 4. Specifically, we see that for linear regression, the B_OFB for models trained with R̃, R̂_PURE, and R̂_LURE are all small (Figure 4a) when contrasted to the ALB shown in Figure 2a. Here ALB ≫ OFB; removing ALB matters. For BNNs we instead see that the OFB has opposite sign to the ALB but is either similar in scale for MNIST (Figures 2b and 4b), or the OFB is much larger than ALB for FashionMNIST (Figures 4c and 2c). The two sources of bias thus (partially) cancel out. Essentially, using active learning can be treated (quite instrumentally) as an ad hoc form of regularization. This explains why removing ALB can hurt active learning with neural networks.

8 CONCLUSIONS

Active learning is a powerful tool but raises potential problems with statistical bias. We offer a corrective weighting which removes that bias with negligible compute/memory costs and few requirements: it suits standard pool-based active learning without replacement. It requires a non-zero proposal distribution over all unlabelled points, but existing acquisition functions can be easily transformed into sampling distributions. Indeed, estimates of scores like mutual information are so noisy that many applications already have an implicit proposal distribution. We show that removing active learning bias (ALB) can be helpful in some settings, like linear regression, where the model is not sufficiently complex to perfectly match the data, such that the exact loss function and input data distribution are essential in discriminating between different possible (imperfect) model fits.
We also find that removing ALB can be counter-productive for overparameterized models like neural networks, even if its removal also reduces the variance of the estimators, because here the ALB can help cancel out the bias originating from overfitting. This leads to the interesting conclusion that active learning can be helpful not only as a mechanism to reduce variance, as it was originally designed, but also because it introduces a bias that can be actively helpful by regularizing the model. This helps explain why active learning with neural networks has shown success despite using a biased risk estimator. We propose the following rules of thumb for deciding when to embrace or correct the bias, noting that we should always prefer R̂_LURE to R̂_PURE. First, the more closely the acquisition proposal distribution approaches the optimal distribution (as per Theorem 7), the better R̂_LURE will perform relative to R̃. Second, the less overfitting we expect, the more likely it is that R̂_LURE will be useful, as this reduces the chance that the ALB will actually help. Third, R̂_LURE will tend to have more of an effect for highly imbalanced datasets, as the biased estimator will over-represent actively selected but unlikely datapoints. Fourth, if the training data does not accurately represent the test data, using R̂_LURE will likely be less important, as the ALB will tend to be dwarfed by bias from the distribution shift. Fifth, at test time, where optimization and overfitting bias are no longer an issue, there is little cost to using R̂_LURE to evaluate a model, and it will usually be beneficial. This final application, of active learning for model evaluation, is an interesting new research direction that is opened up by our estimators.

A OVERVIEW OF ACTIVE LEARNING

Active learning selectively picks datapoints for which to acquire labels with the aim of more sample-efficient learning.
For an excellent overview of the general active learning problem setting we refer the reader to Settles (2010). Since that review was written, a number of significant advances have further developed active learning. Houlsby et al. (2011) develop an efficient way to estimate the mutual information between model parameters and the output distribution, which can be used for the Bayesian Active Learning by Disagreement (BALD) score. Active learning has been applied to deep learning, especially for vision (Gal et al., 2017b; Wang et al., 2017). In neural networks specifically, empirical work has suggested that simple geometric core-set style approaches can outperform uncertainty-based acquisition functions (Sener & Savarese, 2018). A lot of recent work in active learning has focused on speeding up acquisition from a computational perspective (Coleman et al., 2020) and allowing batch acquisition in order to parallelize labelling (Kirsch et al., 2019; Ash et al., 2020). Some work has also focused on applying active learning to specific settings with particular constraints.

B.1 PROOF OF LEMMA 1

E[a_m] = E_{D_pool, i_{1:m−1}}[E_{i_m}[w_m L_{i_m} | i_{1:m−1}, D_pool] + (1/N) Σ_{t=1}^{m−1} L_{i_t}] = E_{D_pool, i_{1:m−1}}[(1/N) Σ_{n=1}^{N} L_n].

But L_n is now independent of the indices which have been sampled:

= E_{D_pool}[(1/N) Σ_{n=1}^{N} L_n] = E_{D_pool}[R̂] = r.

B.2 PROOF OF THE UNBIASEDNESS AND VARIANCE FOR R̂_PURE: THEOREM 1

Theorem 1. R̂_PURE as defined above has the properties:

E[R̂_PURE] = r,
Var[R̂_PURE] = Var[L(y, f_θ(x))]/N + (1/M²) Σ_{m=1}^{M} E_{D_pool, i_{1:m−1}}[Var[w_m L_{i_m} | i_{1:m−1}, D_pool]].   (4)

Proof. By the law of total variance,

Var[R̂_PURE] = E_{D_pool}[Var[(1/M) Σ_{m=1}^{M} a_m | D_pool]] + Var[L(y, f_θ(x))]/N,   (8)

where x, y ∼ p_data. Now considering the first term we have

Var[(1/M) Σ_{m=1}^{M} a_m | D_pool] = E[((1/M) Σ_{m=1}^{M} a_m − R̂)² | D_pool] = (1/M²) Σ_{m=1}^{M} Σ_{k=1}^{M} E[(a_m − R̂)(a_k − R̂) | D_pool].   (9)

We attack this term by first considering the terms for which m ≠ k and showing that these yield E[(a_m − R̂)(a_k − R̂) | D_pool] = 0, returning to the m = k terms later.
We will assume, without loss of generality, that k < m, noting that by symmetry the same set of arguments can be similarly applied when m < k. Substituting in the definition of a_m from equation (2):

E[(a_m − R̂)(a_k − R̂) | D_pool] = E[(w_m L_{i_m} + (1/N) Σ_{t=1}^{m−1} L_{i_t} − R̂)(w_k L_{i_k} + (1/N) Σ_{s=1}^{k−1} L_{i_s} − R̂) | D_pool].

We introduce the notation R̂_rem^m ≡ R̂ − (1/N) Σ_{t=1}^{m−1} L_{i_t} to describe the remainder of the empirical risk beyond the first m − 1 acquired points, so that a_m − R̂ = w_m L_{i_m} − R̂_rem^m. Then, by multiplying out the terms, we have:

E[(a_m − R̂)(a_k − R̂) | D_pool] = E[w_m L_{i_m} w_k L_{i_k} | D_pool]  (a)
 − E[w_m L_{i_m} R̂_rem^k | D_pool]  (b)
 − E[w_k L_{i_k} R̂_rem^m | D_pool]  (c)
 + E[R̂_rem^m R̂_rem^k | D_pool]  (d).

Using the tower property, and noting that because k < m, w_k L_{i_k} is deterministic given D_pool and i_{1:m−1}:

(a) = E[w_k L_{i_k} E[w_m L_{i_m} | D_pool, i_{1:m−1}] | D_pool] = E[w_k L_{i_k} R̂_rem^m | D_pool],

which thus cancels with (c). The terms (b) and (d) cancel similarly:

(b) = E[E[w_m L_{i_m} R̂_rem^k | D_pool, i_{1:m−1}] | D_pool] = E[R̂_rem^k E[w_m L_{i_m} | D_pool, i_{1:m−1}] | D_pool] = E[R̂_rem^k R̂_rem^m | D_pool] = (d).

Putting this together, we have that E[(a_m − R̂)(a_k − R̂) | D_pool] = 0 for all k ≠ m. Considering now the m = k terms, we have

E[(a_m − R̂)² | D_pool] = E_{i_{1:m−1}}[E_{i_m}[(a_m − R̂)² | i_{1:m−1}, D_pool] | D_pool] = E_{i_{1:m−1}}[Var[a_m | i_{1:m−1}, D_pool] | D_pool] = E_{i_{1:m−1}}[Var[w_m L_{i_m} | i_{1:m−1}, D_pool] | D_pool].

Finally, substituting everything back into (9) and then (8), and applying the tower property, gives

Var[R̂_PURE] = Var[L(y, f_θ(x))]/N + (1/M²) Σ_{m=1}^{M} E_{D_pool, i_{1:m−1}}[Var[w_m L_{i_m} | i_{1:m−1}, D_pool]],

and we are done.

B.3 PROOF OF THE CONSISTENCY OF R̂_PURE: THEOREM 2

Theorem 2. Let α = N/M and assume that α > 1. If E[L(y, f_θ(x))²] < ∞ and ∃β > 0: min_{n∈{1:N}\i_{1:m−1}} q(i_m = n; i_{1:m−1}, D_pool) ≥ β/N for all N ∈ Z⁺, m ≤ N, then R̂_PURE converges in its L² norm to r as M → ∞, i.e., lim_{M→∞} E[(R̂_PURE − r)²] = 0.

Proof. Theorem 1 showed that R̂_PURE is an unbiased estimator, and so we first note that its MSE is simply its variance, which we found in (4).
Substituting $N = \alpha M$:
\[
\mathbb{E}\big[(\hat{R}_{\text{PURE}} - r)^2\big] = \operatorname{Var}[\hat{R}_{\text{PURE}}]
= \frac{\operatorname{Var}[L(y, f_\theta(x))]}{\alpha M} + \frac{1}{M^2}\sum_{m=1}^{M} \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\big[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]\big].
\]
The first term tends to zero as $M \to \infty$, as our standard assumptions guarantee that $\frac{1}{\alpha}\operatorname{Var}[L(y, f_\theta(x))] < \infty$. For the second term, we note that our assumptions about $q$ guarantee that $w_m \leq 1/\beta$ and thus:
\[
\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]
= \mathbb{E}\big[w_m^2 L_{i_m}^2 \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}\big] - \left(\hat{R} - \frac{1}{N}\sum_{t=1}^{m-1} L_{i_t}\right)^{\!2}
\leq \frac{1}{\beta^2}\,\mathbb{E}\big[L_{i_m}^2 \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}\big] - \left(\hat{R} - \frac{1}{N}\sum_{t=1}^{m-1} L_{i_t}\right)^{\!2} < \infty
\]
for all $i_{1:m-1}, \mathcal{D}_{\text{pool}}$, as our assumptions guarantee that $1/\beta^2 < \infty$ and $\mathbb{E}_{i_m}[L_{i_m}^2] < \infty$, and so the empirical risk and losses are finite. Given that each $\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]$ is finite, it follows that:
\[
s^2 := \frac{1}{M}\sum_{m=1}^{M} \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\big[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]\big] < \infty,
\]
and we thus have
\[
\lim_{M \to \infty} \mathbb{E}\big[(\hat{R}_{\text{PURE}} - r)^2\big] = \lim_{M \to \infty} \left(\frac{\operatorname{Var}[L(y, f_\theta(x))]}{\alpha M} + \frac{s^2}{M}\right) = 0
\]
as desired.

B.4 DERIVATION OF THE CONSTANTS OF $\hat{R}_{\text{LURE}}$

We note from before that, because of the unbiasedness of $a_m$,
\[
\mathbb{E}\left[C \sum_{m=1}^{M} c_m a_m\right] = r \quad \text{where} \quad C = \frac{1}{\sum_{m=1}^{M} c_m}.
\]
To construct our improved estimator $\hat{R}_{\text{LURE}}$, we now need to find the right constants $c_m$ that in turn lead to overall weights $v_m$ (as per (5)) such that $\mathbb{E}[v_m] = 1$. We start by substituting in the definition of $a_m$, and then redistributing the $L_{i_t}$ from later terms where they match $m$:
\[
v_m = c_m w_m + \frac{1}{N}\sum_{t=m+1}^{M} c_t.
\]
Note that, though $\hat{R}_{\text{LURE}}$ remains an unbiased estimator of the risk, each individual term $v_m L_{i_m}$ is not. Now we require $\mathbb{E}[v_m] = 1$ for all $m \in \{1, \dots, M\}$. Remembering that $w_m = 1/(N q(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}}))$:
\[
\mathbb{E}[v_m]
= \frac{c_m}{N}\,\mathbb{E}\left[\frac{1}{q(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}})}\right] + \frac{1}{N}\sum_{t=m+1}^{M} c_t
= \frac{c_m}{N}\sum_{n \notin i_{1:m-1}} \frac{q(i_m = n; i_{1:m-1}, \mathcal{D}_{\text{pool}})}{q(i_m = n; i_{1:m-1}, \mathcal{D}_{\text{pool}})} + \frac{1}{N}\sum_{t=m+1}^{M} c_t
= \frac{c_m (N - m + 1)}{N} + \frac{1}{N}\sum_{t=m+1}^{M} c_t.
\]
Imposing that each $\mathbb{E}[v_m] = 1$, we now have $M$ equations for the $M$ unknowns $c_1, \dots, c_M$, such that we can find the required values of $c_m$ by solving the set of simultaneous equations:
\[
\frac{(N - m + 1)\, c_m}{N} + \frac{1}{N}\sum_{t=m+1}^{M} c_t = 1 \quad \forall m \in \{1, \dots, M\}. \tag{10}
\]
We do this by induction.
First, consider $\mathbb{E}[v_m] - \mathbb{E}[v_{m+1}] = 0$, which can be rewritten as
\[
\frac{(N - m + 1)\, c_m}{N} - \frac{(N - m)\, c_{m+1}}{N} + \frac{c_{m+1}}{N} = 0,
\quad \text{and thus} \quad
c_m = \frac{N - m - 1}{N - m + 1}\, c_{m+1}. \tag{11}
\]
By further noting that the solution for $m = M$ is trivial,
\[
c_M = \frac{N}{N - M + 1},
\]
we have by induction
\[
c_m = \frac{N}{N - M + 1} \prod_{t=m}^{M-1} \frac{N - t - 1}{N - t + 1}
= \frac{N}{N - M + 1} \exp\left(\sum_{t=m}^{M-1} \big[\log(N - t - 1) - \log(N - t + 1)\big]\right).
\]
Now we can exploit the fact that most of the terms in this sum cancel. The exceptions are $-\log(N - m + 1)$, $-\log(N - m)$, $\log(N - M + 1)$, and $\log(N - M)$. We thus have:
\[
c_m = \frac{N}{N - M + 1} \exp\big(\log(N - M) + \log(N - M + 1) - \log(N - m) - \log(N - m + 1)\big)
= \frac{N (N - M)}{(N - m)(N - m + 1)},
\]
which is our final simple form for $c_m$. We can now check that this satisfies the required recursive relationship (noting it trivially gives the correct value for $c_M$) as per (11):
\[
c_m = \frac{N - m - 1}{N - m + 1}\, c_{m+1}
= \frac{N - m - 1}{N - m + 1} \cdot \frac{N (N - M)}{(N - m - 1)(N - m)}
= \frac{N (N - M)}{(N - m)(N - m + 1)}
\]
as required. Similarly, we can straightforwardly show that this form of $c_m$ satisfies (10) by substitution.

We then find the form of $v_m$ given this expression for $c_m$. Remember that
\[
v_m = c_m w_m + \frac{1}{N}\sum_{t=m+1}^{M} c_t.
\]
We can rearrange (10) to
\[
\frac{1}{N}\sum_{t=m+1}^{M} c_t = 1 - \frac{N - m + 1}{N}\, c_m,
\]
from which it follows that
\[
v_m = 1 + c_m \left(w_m - \frac{N - m + 1}{N}\right).
\]
Substituting in our expressions for $c_m$ and $w_m$, we thus have
\[
v_m = 1 + \frac{N (N - M)}{(N - m)(N - m + 1)} \left(\frac{1}{N q(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}})} - \frac{N - m + 1}{N}\right)
= 1 + \frac{N - M}{N - m} \left(\frac{1}{(N - m + 1)\, q(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}})} - 1\right),
\]
which is the form given in the original expression. To finish our definition, we simply need to derive $C$:
\[
C = \left(\sum_{m=1}^{M} c_m\right)^{\!-1}
= \left(N (N - M) \sum_{m=1}^{M} \frac{1}{(N - m)(N - m + 1)}\right)^{\!-1}
= \left(N (N - M) \sum_{m=1}^{M} \left[\frac{1}{N - m} - \frac{1}{N - m + 1}\right]\right)^{\!-1},
\]
where we now have a telescopic sum, so
\[
C = \left(N (N - M) \left[\frac{1}{N - M} - \frac{1}{N}\right]\right)^{\!-1} = \frac{1}{M}.
\]
We thus see that our $v_m$ always sum to $M$ in expectation, giving the quoted form for $\hat{R}_{\text{LURE}}$ in the main paper.

B.5 PROOF OF UNBIASEDNESS AND VARIANCE FOR $\hat{R}_{\text{LURE}}$: THEOREM 3

Theorem 3. $\hat{R}_{\text{LURE}}$ as defined above has the following properties:
\[
\mathbb{E}[\hat{R}_{\text{LURE}}] = r, \qquad
\operatorname{Var}[\hat{R}_{\text{LURE}}] = \frac{\operatorname{Var}[L(y, f_\theta(x))]}{N} + \frac{1}{M^2}\sum_{m=1}^{M} c_m^2\, \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\big[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]\big]. \tag{6}
\]
Proof. $\hat{R}_{\text{LURE}}$ is, by construction, a linear combination of the terms $a_m$. By Lemma 1, each $a_m$ is an unbiased estimator of $r$, so by the linearity of expectation $\mathbb{E}[\hat{R}_{\text{LURE}}] = r$. As in Theorem 1, the variance requires a degree of care because the $a_m$ are not independent. Noting that the expectation does not change through the weighting, we analogously have
\[
\operatorname{Var}[\hat{R}_{\text{LURE}}] = \mathbb{E}\left[\operatorname{Var}\left[\frac{1}{M}\sum_{m=1}^{M} c_m a_m \,\middle|\, \mathcal{D}_{\text{pool}}\right]\right] + \frac{\operatorname{Var}[L(y, f_\theta(x))]}{N}.
\]
Similarly, we also have
\[
\operatorname{Var}\left[\frac{1}{M}\sum_{m=1}^{M} c_m a_m \,\middle|\, \mathcal{D}_{\text{pool}}\right]
= \mathbb{E}\left[\left(\frac{1}{M}\sum_{m=1}^{M} c_m a_m\right)^{\!2} \,\middle|\, \mathcal{D}_{\text{pool}}\right] - \hat{R}^2
= \frac{1}{M^2}\sum_{m=1}^{M}\sum_{k=1}^{M} c_m c_k\, \mathbb{E}\big[(a_m - \hat{R})(a_k - \hat{R}) \mid \mathcal{D}_{\text{pool}}\big].
\]
Using the result from the proof of Theorem 1 that
\[
\mathbb{E}\big[(a_m - \hat{R})(a_k - \hat{R}) \mid \mathcal{D}_{\text{pool}}\big] =
\begin{cases}
\mathbb{E}_{i_{1:m-1}}\big[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}] \mid \mathcal{D}_{\text{pool}}\big] & \text{if } m = k, \\
0 & \text{otherwise},
\end{cases}
\]
straightforward substitution now gives the desired result.

B.6 PROOF THAT $\hat{R}_{\text{LURE}}$ HAS LOWER VARIANCE THAN $\hat{R}_{\text{PURE}}$ UNDER REASONABLE ASSUMPTIONS: THEOREM 4

Recall from Theorems 1 and 3 that the variances of the $\hat{R}_{\text{PURE}}$ and $\hat{R}_{\text{LURE}}$ estimators are
\[
\operatorname{Var}[\hat{R}_{\text{PURE}}] = \frac{\operatorname{Var}[L(y, f_\theta(x))]}{N} + \frac{1}{M^2}\sum_{m=1}^{M} E_m, \tag{12}
\]
\[
\operatorname{Var}[\hat{R}_{\text{LURE}}] = \frac{\operatorname{Var}[L(y, f_\theta(x))]}{N} + \frac{1}{M^2}\sum_{m=1}^{M} c_m^2 E_m, \tag{13}
\]
where we have used the shorthand $E_m = \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]]$. Recall also that
\[
c_m^2 = \frac{N^2 (N - M)^2}{(N - m)^2 (N - m + 1)^2}.
\]
Though the potential for one to use pathologically bad proposals means that it is not possible to show that $\operatorname{Var}[\hat{R}_{\text{LURE}}] \leq \operatorname{Var}[\hat{R}_{\text{PURE}}]$ universally holds, we can show this result under a relatively weak assumption that ensures our proposal is "sufficiently good." To formalize this assumption, we first define
\[
F_m := \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\left[\operatorname{Var}\left[\frac{w_m}{\mathbb{E}[w_m \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]}\, L_{i_m} \,\middle|\, i_{1:m-1}, \mathcal{D}_{\text{pool}}\right]\right] = \left(\frac{N - m + 1}{N}\right)^{\!-2} E_m
\]
as the weight-normalized expected variance, where the second form comes from the fact that $\mathbb{E}[w_m \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}] = (N - m + 1)/N$. Our assumption is now that
\[
F_m \geq F_{M - m + 1} \quad \forall m : 1 \leq m \leq M/2. \tag{14}
\]
Note that a sufficient, but not necessary, condition for this to hold is that the $F_m$ do not increase with $m$, i.e. $F_m \geq F_j$ for all $(m, j)$ with $1 \leq m \leq j \leq M$. Intuitively, this is equivalent to saying that the conditional variances of our normalized weighted losses should not increase as we acquire more points.
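The unbiasedness results and the variance expressions (12) and (13) can be sanity-checked with a small Monte Carlo simulation. The sketch below is illustrative only: the pool of losses, the loss-dependent proposal, and all sizes are made-up assumptions. It checks that both estimators average to the pool risk (their conditional expectation given the pool) and compares their empirical variances:

```python
import random

def draw_estimates(losses, M, rng):
    """One active-sampling run without replacement from a loss-dependent
    proposal q; returns (R_PURE, R_LURE) for this single draw."""
    N = len(losses)
    remaining = list(range(N))
    acquired_sum = 0.0          # sum of losses acquired so far
    pure_total = 0.0
    lure_total = 0.0
    for m in range(1, M + 1):
        # Illustrative proposal: mildly favour high-loss points.
        probs = [losses[i] + 0.1 for i in remaining]
        z = sum(probs)
        probs = [p / z for p in probs]
        idx = rng.choices(range(len(remaining)), weights=probs)[0]
        q_im = probs[idx]
        i_m = remaining.pop(idx)
        w_m = 1.0 / (N * q_im)                                  # importance weight
        pure_total += w_m * losses[i_m] + acquired_sum / N      # the a_m term
        v_m = 1.0 + (N - M) / (N - m) * (1.0 / ((N - m + 1) * q_im) - 1.0)
        lure_total += v_m * losses[i_m]
        acquired_sum += losses[i_m]
    return pure_total / M, lure_total / M

rng = random.Random(0)
losses = [rng.random() for _ in range(20)]
pool_risk = sum(losses) / len(losses)
draws = [draw_estimates(losses, 5, rng) for _ in range(20000)]
mean_pure = sum(p for p, _ in draws) / len(draws)
mean_lure = sum(l for _, l in draws) / len(draws)
var_pure = sum((p - mean_pure) ** 2 for p, _ in draws) / len(draws)
var_lure = sum((l - mean_lure) ** 2 for _, l in draws) / len(draws)
```

Both means should agree with the pool risk to within Monte Carlo error; for well-behaved proposals satisfying assumption (14), `var_lure` should not exceed `var_pure`.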
It is, for example, satisfied by a uniform sampling acquisition strategy (for which all $F_m$ are equal). More generally, it should hold in practice for sensible acquisition strategies because a) our proposal should improve on average as we acquire more labels, leading to lower average variance; and b) higher-loss points will generally be acquired earlier, so the scaling will typically decrease with $m$. In particular, note that $\mathbb{E}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}] < r$ and is monotonically decreasing with $m$ because it omits the already-sampled losses (which is why these are added back in when calculating $a_m$). This assumption is actually stronger than necessary: in practice the result will hold even if $F_m$ increases with $m$, provided the rate of increase is sufficiently small. However, the assumption as stated already holds for a broad range of sensible proposals, and fully encapsulating the minimum requirements on $F_m$ is beyond the scope of this paper.

We are now ready to formally state and prove our result. For this, however, it is convenient to first prove the following lemma, which we will invoke multiple times in the main proof.

Lemma 2. If $a, b, M, N \in \mathbb{N}^+$ are positive integers such that $M < N$ and $a + b \leq M$, then
\[
\frac{(N - a)^2}{N^2} \geq \frac{(N - M)^2}{(N - b)^2}.
\]
Proof.
\[
\frac{(N - a)^2}{N^2} - \frac{(N - M)^2}{(N - b)^2}
= \frac{(N - a)^2 (N - b)^2 - N^2 (N - M)^2}{N^2 (N - b)^2}
= \frac{2N^3 (M - a - b) + N^2 (a^2 + b^2 + 4ab - M^2) - 2abN(a + b) + a^2 b^2}{N^2 (N - b)^2}
\]
\[
= \frac{1}{N^2 (N - b)^2} \big[N(2N - M - a - b) + ab\big]\big[N(M - a - b) + ab\big] \geq 0,
\]
as $2N \geq M + a + b$ and $M \geq a + b$, so all bracketed terms are positive.

Theorem 4. If Equation (14) holds, then $\operatorname{Var}[\hat{R}_{\text{LURE}}] \leq \operatorname{Var}[\hat{R}_{\text{PURE}}]$. If $M > 1$ and $\mathbb{E}_{\mathcal{D}_{\text{pool}}}[\operatorname{Var}[w_1 L_{i_1} \mid \mathcal{D}_{\text{pool}}]] > 0$ also hold, then the inequality is strict: $\operatorname{Var}[\hat{R}_{\text{LURE}}] < \operatorname{Var}[\hat{R}_{\text{PURE}}]$.

Proof. We start by subtracting equation (13) from (12), yielding
\[
\operatorname{Var}[\hat{R}_{\text{PURE}}] - \operatorname{Var}[\hat{R}_{\text{LURE}}]
= \frac{1}{M^2}\sum_{m=1}^{M} (1 - c_m^2)\, E_m
= \frac{1}{M^2}\sum_{m=1}^{M} (1 - c_m^2) \left(\frac{N - m + 1}{N}\right)^{\!2} F_m.
\]
Assuming, for now, that $M$ is even and $M < N$, we can now group terms into pairs by counting from each end of the sequence (i.e.
pairing the $m$-th and $(M - m + 1)$-th terms) to yield
\[
\operatorname{Var}[\hat{R}_{\text{PURE}}] - \operatorname{Var}[\hat{R}_{\text{LURE}}] = \frac{1}{M^2}\sum_{m=1}^{M/2} S_m,
\]
where
\[
S_m := (1 - c_m^2)\,\frac{(N - m + 1)^2}{N^2}\, F_m + (1 - c_{M-m+1}^2)\,\frac{(N - M + m)^2}{N^2}\, F_{M-m+1}
\]
\[
= \left[\frac{(N - m + 1)^2}{N^2} - \frac{(N - M)^2}{(N - m)^2}\right] F_m + \left[\frac{(N - M + m)^2}{N^2} - \frac{(N - M)^2}{(N - M + m - 1)^2}\right] F_{M-m+1}.
\]
We will now show that $S_m \geq 0$ for all $1 \leq m \leq M/2$, from which we can directly conclude that $\operatorname{Var}[\hat{R}_{\text{PURE}}] \geq \operatorname{Var}[\hat{R}_{\text{LURE}}]$. For this, note that $F_m$ and $F_{M-m+1}$ are themselves non-negative by construction.

Consider first the case where $(N - M + m)^2 / N^2 \geq (N - M)^2 / (N - M + m - 1)^2$. Here the second term in $S_m$ is non-negative. Furthermore, invoking Lemma 2 with $a = m - 1$ and $b = m$ (noting this satisfies $a + b \leq M$ for all $1 \leq m \leq M/2$ as required) shows that
\[
\frac{(N - m + 1)^2}{N^2} - \frac{(N - M)^2}{(N - m)^2} \geq 0,
\]
and so the first term is also non-negative. It thus immediately follows that $S_m \geq 0$ in this scenario.

When this does not hold, $(N - M + m)^2 / N^2 < (N - M)^2 / (N - M + m - 1)^2$, and so the second term in $S_m$ is negative. We now instead invoke our assumption that $F_m \geq F_{M-m+1}$ to yield
\[
S_m \geq F_m \left[\frac{(N - m + 1)^2}{N^2} - \frac{(N - M)^2}{(N - m)^2} + \frac{(N - M + m)^2}{N^2} - \frac{(N - M)^2}{(N - M + m - 1)^2}\right]. \tag{15}
\]
We can now invoke Lemma 2 with $a = m - 1$ and $b = M - m + 1$ to show that
\[
\frac{(N - m + 1)^2}{N^2} - \frac{(N - M)^2}{(N - M + m - 1)^2} \geq 0,
\]
and again with $a = M - m$ and $b = m$ to show that
\[
\frac{(N - M + m)^2}{N^2} - \frac{(N - M)^2}{(N - m)^2} \geq 0.
\]
Substituting these back into (15) thus again yields the desired result that $S_m \geq 0$, as required.

To cover the case where $M$ is odd, we simply need to note that this adds an additional (middle) term, corresponding to $m = (M + 1)/2$:
\[
\operatorname{Var}[\hat{R}_{\text{PURE}}] - \operatorname{Var}[\hat{R}_{\text{LURE}}]
= \frac{1}{M^2}\left(\left[\frac{(N - M/2 + 1/2)^2}{N^2} - \frac{(N - M)^2}{(N - M/2 - 1/2)^2}\right] F_{(M+1)/2} + \sum_{m=1}^{(M-1)/2} S_m\right),
\]
and we can again invoke Lemma 2, with $a = M/2 - 1/2$ and $b = M/2 + 1/2$, to show that this additional term is non-negative. To cover the case where $M = N$, we simply note that here $c_m = 0$, and so
\[
S_m = \frac{(N - m + 1)^2}{N^2}\, F_m + \frac{m^2}{N^2}\, F_{M-m+1},
\]
where both terms are clearly non-negative.
We have now shown that $S_m \geq 0$ in all possible scenarios given our assumption on $F_m$, and so we can conclude that $\operatorname{Var}[\hat{R}_{\text{PURE}}] \geq \operatorname{Var}[\hat{R}_{\text{LURE}}]$. Finally, we need to show that the inequality is strict if $E_1 > 0$ and $M > 1$. For this, we first note that $E_1 > 0$ ensures $F_1 > 0$, and then consider $S_1$ as follows:
\[
S_1 = \left[1 - \frac{(N - M)^2}{(N - 1)^2}\right] F_1 + \left[\frac{(N - M + 1)^2}{N^2} - 1\right] F_M.
\]
As the second term is clearly non-positive and $F_1 \geq F_M$,
\[
S_1 \geq \left[\frac{(N - M + 1)^2}{N^2} - \frac{(N - M)^2}{(N - 1)^2}\right] F_1
= \frac{(M - 1)(2N^2 - 2MN + M - 1)}{N^2 (N - 1)^2}\, F_1 > 0,
\]
as $M > 1$ and $N \geq M$ ensure that each bracketed term is strictly positive. Now, as $S_1 > 0$ and $S_m \geq 0$ for all other $m$, we can conclude that the sum of the $S_m$ is strictly positive, and thus that the inequality is strict.

B.7 PROOF OF THE CONSISTENCY OF $\hat{R}_{\text{LURE}}$: THEOREM 5

Theorem 5. Under the same assumptions as Theorem 2:
\[
\lim_{M \to \infty} \mathbb{E}\big[(\hat{R}_{\text{LURE}} - r)^2\big] = 0.
\]
Proof. As before, since $\hat{R}_{\text{LURE}}$ is unbiased, the MSE is simply the variance, so:
\[
\mathbb{E}\big[(\hat{R}_{\text{LURE}} - r)^2\big] = \operatorname{Var}[\hat{R}_{\text{LURE}}]
= \frac{\operatorname{Var}[L(y, f_\theta(x))]}{N} + \frac{1}{M^2}\sum_{m=1}^{M} c_m^2\, \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\big[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]\big].
\]
Substituting $N = \alpha M$, the first term tends to zero as $M \to \infty$ as in the proof of Theorem 2, and each $\mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}[\operatorname{Var}[w_m L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]]$ was there shown to be finite, so there must be some finite constant $d$ bounding them. The second term is therefore at most
\[
\frac{d}{M^2}\sum_{m=1}^{M} \left[\frac{\alpha M (\alpha M - M)}{(\alpha M - m)(\alpha M - m + 1)}\right]^2
< \frac{d}{M^2}\sum_{m=1}^{M} \left[\frac{\alpha M}{\alpha M - M + 1}\right]^2
= \frac{d\, \alpha^2 M}{((\alpha - 1)M + 1)^2},
\]
which clearly tends to zero as $M \to \infty$ (remembering that $\alpha > 1$), and we are done.

B.8 PROOF OF BIAS AND VARIANCE OF THE STANDARD ACTIVE LEARNING ESTIMATOR: THEOREM 6

Theorem 6. Let $\mu_m := \mathbb{E}[L_{i_m}]$ and $\mu_{m|i,D} := \mathbb{E}[L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]$. For $\tilde{R}$ (defined in (1)):
\[
\mathbb{E}[\tilde{R}] = \frac{1}{M}\sum_{m=1}^{M} \mu_m \quad (\neq r \text{ in general}),
\]
\[
\operatorname{Var}[\tilde{R}] =
\underbrace{\operatorname{Var}_{\mathcal{D}_{\text{pool}}}\big[\mathbb{E}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]\big]}_{①}
+ \underbrace{\frac{1}{M^2}\sum_{m=1}^{M} \mathbb{E}_{\mathcal{D}_{\text{pool}}, i_{1:m-1}}\big[\operatorname{Var}[L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]\big]}_{②}
+ \underbrace{\frac{1}{M^2}\sum_{m=1}^{M} \mathbb{E}_{\mathcal{D}_{\text{pool}}}\big[\operatorname{Var}[\mu_{m|i,D} \mid \mathcal{D}_{\text{pool}}]\big]}_{③}
+ \underbrace{\frac{2}{M^2}\sum_{m=1}^{M} \mathbb{E}_{\mathcal{D}_{\text{pool}}}\Big[\operatorname{Cov}\Big[L_{i_m}, \textstyle\sum_{k<m} L_{i_k} \,\Big|\, \mathcal{D}_{\text{pool}}\Big]\Big]}_{④}. \tag{7}
\]
Proof. The result for the bias follows immediately from the definition of $\mu_m$ and the linearity of expectations. For the variance, we have
\[
\operatorname{Var}[\tilde{R}] = \mathbb{E}[\tilde{R}^2] - \mathbb{E}[\tilde{R}]^2
= \mathbb{E}_{\mathcal{D}_{\text{pool}}}\big[\operatorname{Var}[\tilde{R} \mid \mathcal{D}_{\text{pool}}] + \mathbb{E}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]^2\big] - \mathbb{E}[\tilde{R}]^2
= \operatorname{Var}_{\mathcal{D}_{\text{pool}}}\big[\mathbb{E}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]\big] + \mathbb{E}_{\mathcal{D}_{\text{pool}}}\big[\operatorname{Var}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]\big], \tag{16}
\]
where the first term is ① from the result.
For the second term, introducing the notations $\mu_{|D} = \mathbb{E}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]$ and $\mu_{m|D} = \mathbb{E}[L_{i_m} \mid \mathcal{D}_{\text{pool}}]$, we have
\[
\operatorname{Var}[\tilde{R} \mid \mathcal{D}_{\text{pool}}]
= \mathbb{E}\left[\left(\frac{1}{M}\sum_{m=1}^{M} L_{i_m} - \mu_{|D}\right)^{\!2} \,\middle|\, \mathcal{D}_{\text{pool}}\right]
= \frac{1}{M^2}\sum_{m=1}^{M}\sum_{k=1}^{M} \mathbb{E}\big[(L_{i_m} - \mu_{m|D} + \mu_{m|D} - \mu_{|D})(L_{i_k} - \mu_{k|D} + \mu_{k|D} - \mu_{|D}) \mid \mathcal{D}_{\text{pool}}\big].
\]
Multiplying out the terms and using the symmetry of $m$ and $k$,
\[
= \frac{1}{M^2}\sum_{m=1}^{M}\sum_{k=1}^{M} \mathbb{E}\big[(L_{i_m} - \mu_{m|D})(L_{i_k} - \mu_{k|D}) \mid \mathcal{D}_{\text{pool}}\big]
+ \frac{2}{M}\sum_{m=1}^{M} \mathbb{E}\left[(L_{i_m} - \mu_{m|D})\left(\frac{1}{M}\sum_{k=1}^{M}\mu_{k|D} - \mu_{|D}\right) \,\middle|\, \mathcal{D}_{\text{pool}}\right]
+ \frac{1}{M}\sum_{m=1}^{M} (\mu_{m|D} - \mu_{|D})\left(\frac{1}{M}\sum_{k=1}^{M}\mu_{k|D} - \mu_{|D}\right),
\]
where we have exploited symmetries in the indices. Now, as $\frac{1}{M}\sum_{k=1}^{M}\mu_{k|D} = \mu_{|D}$, the second and third terms are simply zero. Separating out the $m = k$ and $m < k$ terms, with symmetry, we have
\[
= \frac{1}{M^2}\sum_{m=1}^{M}\left(\operatorname{Var}[L_{i_m} \mid \mathcal{D}_{\text{pool}}] + 2\,\mathbb{E}\Big[(L_{i_m} - \mu_{m|D})\textstyle\sum_{k<m}(L_{i_k} - \mu_{k|D}) \,\Big|\, \mathcal{D}_{\text{pool}}\Big]\right)
= \frac{1}{M^2}\sum_{m=1}^{M}\left(\operatorname{Var}[L_{i_m} \mid \mathcal{D}_{\text{pool}}] + 2\operatorname{Cov}\Big[L_{i_m}, \textstyle\sum_{k<m} L_{i_k} \,\Big|\, \mathcal{D}_{\text{pool}}\Big]\right). \tag{17}
\]
Here the second term will yield ④ in the result when substituted back into (16).

Theorem 7. Given a non-negative loss, the optimal proposal distribution $q^*(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}}) = L_{i_m} / \sum_{n \notin i_{1:m-1}} L_n$ yields estimators exactly equal to the pool risk, that is $\hat{R}_{\text{PURE}} = \hat{R}_{\text{LURE}} = \hat{R}$ almost surely for all $M$.

Proof. We start by proving the result for the simpler case of $\hat{R}_{\text{PURE}}$ before considering $\hat{R}_{\text{LURE}}$. To make the notation simpler, we will introduce hypothetical indices $i_t$ for $t > M$, noting that their exact values will not change the proof provided that they are all distinct from each other and from the real indices (i.e. that they are a possible realization of the active sampling process in the setting $M = N$).
For $\hat{R}_{\text{PURE}}$, the proof follows straightforwardly from substituting the definition of the optimal proposal into the $a_m$ form of the estimator:
\[
\hat{R}_{\text{PURE}} = \frac{1}{M}\sum_{m=1}^{M} a_m
= \frac{1}{M}\sum_{m=1}^{M}\left(w_m L_{i_m} + \frac{1}{N}\sum_{t=1}^{m-1} L_{i_t}\right)
= \frac{1}{M}\sum_{m=1}^{M}\left(\frac{1}{N}\sum_{t=m}^{N} L_{i_t} + \frac{1}{N}\sum_{t=1}^{m-1} L_{i_t}\right)
= \frac{1}{N}\sum_{t=1}^{N} L_{i_t},
\]
and, because all possible indices are uniquely visited,
\[
= \frac{1}{N}\sum_{n=1}^{N} L_n = \hat{R}.
\]
The proof proceeds identically for the case of $\nabla L_{i_m}$ because the gradient passes through the summations. For $\hat{R}_{\text{LURE}}$, we similarly substitute the optimal proposal into the definition of the estimator:
\[
\hat{R}_{\text{LURE}} = \frac{1}{M}\sum_{m=1}^{M} v_m L_{i_m}
= \frac{1}{M}\sum_{m=1}^{M}\left(L_{i_m} + \frac{N - M}{N - m}\left[\frac{L_{i_m}}{(N - m + 1)\, q^*(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}})} - L_{i_m}\right]\right)
= \frac{1}{M}\sum_{m=1}^{M}\left(L_{i_m} + \frac{N - M}{N - m}\left[\frac{\sum_{t=m}^{N} L_{i_t}}{N - m + 1} - L_{i_m}\right]\right).
\]
Pulling out the losses and regrouping,
\[
= \frac{1}{M}\sum_{m=1}^{M} L_{i_m}\Bigg(1 - \frac{N - M}{N - m} + (N - M)\underbrace{\sum_{k=1}^{m} \frac{1}{(N - k)(N - k + 1)}}_{=\, m / (N(N - m))}\Bigg)
+ \frac{1}{M}\sum_{t=M+1}^{N} L_{i_t}\, (N - M)\underbrace{\sum_{k=1}^{M} \frac{1}{(N - k)(N - k + 1)}}_{=\, M / (N(N - M))}.
\]
Simplifying and rearranging,
\[
= \frac{1}{M}\sum_{m=1}^{M} L_{i_m}\left(1 - \frac{N - M}{N - m} + \frac{N - M}{N - m}\cdot\frac{m}{N}\right) + \frac{1}{N}\sum_{t=M+1}^{N} L_{i_t}
= \frac{1}{M}\sum_{m=1}^{M} L_{i_m}\left(1 - \frac{N - M}{N - m}\cdot\frac{N - m}{N}\right) + \frac{1}{N}\sum_{t=M+1}^{N} L_{i_t}
\]
\[
= \frac{1}{N}\sum_{m=1}^{M} L_{i_m} + \frac{1}{N}\sum_{t=M+1}^{N} L_{i_t} = \frac{1}{N}\sum_{n=1}^{N} L_n = \hat{R}
\]
as required.

Remark 2. The optimal proposal for estimating the gradient of the pool risk, $\nabla_\phi \hat{R}$, with respect to some scalar $\phi$ is instead
\[
q^{**}(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}}) = |\nabla_\phi L_{i_m}| \Big/ \textstyle\sum_{n \notin i_{1:m-1}} |\nabla_\phi L_n|.
\]
Note that when taking gradients with respect to multiple variables, the optimal proposal will be different for each.

C.1 LINEAR REGRESSION

Our training dataset contains a small cluster of points near $x = -1$ and two larger clusters at $0 \leq x \leq 0.5$ and $1 \leq x \leq 1.5$, sampled proportionately to the 'true' data distribution. The data distribution from which we select data in a Rao-Blackwellised manner has a probability density function over $x$ equal to:
\[
p(x) =
\begin{cases}
0.12 & -1.2 \leq x \leq -0.8, \\
0.95 & 0.0 \leq x \leq 0.5, \\
0.95 & 1.0 \leq x \leq 1.5,
\end{cases}
\]
while the distribution over $y$ is then induced by:
\[
y = \max(0, x) \cdot |x|^{3/2} + \frac{\sin(20x)}{4}.
\]
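A minimal sketch of this data-generating process is given below. The segment-mass sampling scheme and the helper names are our own assumptions, the stated densities are treated as unnormalised weights, and the response formula follows our reading of the equation above:

```python
import math
import random

# Piecewise-uniform segments of the x-density: (lower, upper, density).
SEGMENTS = [(-1.2, -0.8, 0.12), (0.0, 0.5, 0.95), (1.0, 1.5, 0.95)]

def sample_xy(rng):
    """Draw (x, y): pick a segment in proportion to its mass (width * density),
    then x uniformly within it, and y from the stated deterministic response."""
    masses = [(hi - lo) * d for lo, hi, d in SEGMENTS]
    lo, hi, _ = rng.choices(SEGMENTS, weights=masses)[0]
    x = rng.uniform(lo, hi)
    y = max(0.0, x) * abs(x) ** 1.5 + math.sin(20 * x) / 4
    return x, y

rng = random.Random(0)
data = [sample_xy(rng) for _ in range(2000)]
# Fraction of draws landing in the small negative cluster (~0.048 of the mass).
small_cluster = sum(1 for x, _ in data if x < 0) / len(data)
```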
We set $N = 101$, with 5 points in the small cluster and 96 points across the other two clusters, and consider $10 \leq M \leq 100$. We actively sample points without replacement using a geometric heuristic that scores the quadratic distance to previously sampled points and then selects points based on a Boltzmann distribution over the normalized scores with $\beta = 1$. In Figure 6 we also show results collected using an epsilon-greedy acquisition proposal; these are aligned with those from the other acquisition distribution we consider in the main body of the paper. This proposal selects the point that has the highest total distance to all previously selected points with probability $1 - \epsilon = 0.9$, and a point uniformly at random with probability $\epsilon = 0.1$. That is, the acquisition proposal is given by:
\[
P(i_m = j; i_{1:m-1}, \mathcal{D}_{\text{pool}}) =
\begin{cases}
1 - \epsilon + \dfrac{\epsilon}{|\mathcal{D}_{\text{pool}}|} & \text{if } j = \arg\max_{j' \notin \mathcal{D}_{\text{train}}} \sum_{k \in \mathcal{D}_{\text{train}}} |x_k - x_{j'}|, \\[6pt]
\dfrac{\epsilon}{|\mathcal{D}_{\text{pool}}|} & \text{otherwise},
\end{cases}
\]
where of course $\mathcal{D}_{\text{train}}$ consists of the $i_{1:m-1}$ elements of $\mathcal{D}_{\text{pool}}$. For all graphs we use 1000 trajectories with different random seeds to calculate error bars. Although each regression and scoring is deterministic, the acquisition distribution is stochastic. Although the variance of the estimators can be inferred from Figure 2a, we also provide Figure 5a, which displays the variance of the estimators directly.

C.2 BAYESIAN NEURAL NETWORK

We train a Bayesian neural network using variational inference (Jordan et al., 1999). In particular, we use the radial Bayesian neural network approximating distribution (Farquhar et al., 2020). The details of the hyperparameters used for training are provided in Table 1.

Figure 7: We contrast the effect of using $\hat{R}_{\text{LURE}}$ throughout the entire acquisition procedure and training (rather than using the same acquisition procedure based on $\tilde{R}$ for all estimators).
The purple and orange test-performance curves are nearly identical, suggesting the result is not sensitive to this choice.

The unbalanced dataset is constructed by first noising 10% of the training labels, which are assigned random labels, and then selecting a subset of the training dataset such that the numbers of examples of each class are proportional to the ratios (1.0, 0.5, 0.5, 0.2, 0.2, 0.2, 0.1, 0.1, 0.01, 0.01); that is, there are 100 times as many zeros as nines in the unbalanced dataset. (Figure 3f shows a version of this experiment which uses a balanced dataset instead, in order to make sure that any effects are not entirely caused by this design choice.) In fact, we took only a quarter of this dataset in order to speed up acquisition (since each model must be evaluated many times on each of the candidate datapoints to estimate the mutual information). 1000 validation points were then removed from this pool to allow early stopping. The remaining points were placed in $\mathcal{D}_{\text{pool}}$. We then uniformly selected 10 points from $\mathcal{D}_{\text{pool}}$ to place in $\mathcal{D}_{\text{train}}$.

Adding noise to the labels and using an unbalanced dataset is designed to mimic the difficult situations that active learning systems are deployed in in practice, despite the relatively simple dataset. However, we used a simple dataset for a number of reasons. Active learning is very costly because it requires constant retraining, and accurately measuring the properties of estimators generally requires taking large numbers of samples; the combination makes using more complicated datasets expensive. In addition, because our work establishes a lower bound on architecture complexity for which correcting the active learning bias is no longer valuable, establishing that lower bound with MNIST is in fact a stronger result than showing a similar result with a more complex model.

The active learning loop then proceeds by: 1. training the neural network on $\mathcal{D}_{\text{train}}$ using $\tilde{R}$; 2. scoring $\mathcal{D}_{\text{pool}}$; 3.
sampling a point to be added to $\mathcal{D}_{\text{train}}$; 4. every 3 points, separately training models on $\mathcal{D}_{\text{train}}$ using $\tilde{R}$, $\hat{R}_{\text{PURE}}$, and $\hat{R}_{\text{LURE}}$ and evaluating them. This ensures that all of the estimators are applied to data collected under the same sampling distribution, for fair comparison. As a sense-check, in Figures 7a and 7b we show an alternate version in which the first step trains with $\hat{R}_{\text{LURE}}$ instead of $\tilde{R}$, and find that this does not have a significant effect on the results.

When we compute the bias of a fixed neural network in Figure 2b, we train a single neural network on 1000 points. We then sample evaluation points from the test dataset using the acquisition proposal distribution and evaluate the bias on those points.

In Figures 8a and 8b we review the graphs shown in Figures 3b and 3c, this time showing standard errors in order to make clear that the biased $\tilde{R}$ estimator has better performance, while the earlier figures show that the performance is quite variable.

We considered a range of alternative proposal distributions. In addition to the Boltzmann distribution which we used, we considered a temperature range between 1,000 and 20,000, finding it had relatively little effect. Higher temperatures correspond to more certainly picking the highest-mutual-information point, which approaches a deterministic proposal. We found that because the mutual information had to be estimated, and was itself a random variable, different trajectories still picked very different sets of points. However, for very high temperatures the estimators became higher variance, and for lower temperatures the acquisition distribution became nearly uniform.

Figure 9: Higher temperatures approach a deterministic acquisition function. These also tend to increase the variance of the risk estimator, because the weight associated with an unlikely point increases when it happens to be selected. The overall pattern seems fairly consistent, however.
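The Boltzmann proposal and the temperature effect described above can be sketched as follows (the scores and sizes are illustrative assumptions; note the document's convention $q(i) \propto e^{T s_i}$, under which a larger $T$ sharpens the distribution):

```python
import math

def boltzmann_proposal(scores, T):
    """q(i) proportional to exp(T * s_i); subtract the max for numerical stability."""
    top = max(scores)
    w = [math.exp(T * (s - top)) for s in scores]
    z = sum(w)
    return [x / z for x in w]

# Mutual-information-like scores on an arbitrary small scale (made up).
scores = [0.0010, 0.0012, 0.0011, 0.0005]
q_hot = boltzmann_proposal(scores, T=10_000)   # near-argmax behaviour
q_cool = boltzmann_proposal(scores, T=100)     # near-uniform behaviour
```

With the high temperature most of the mass concentrates on the top-scoring point; with the low temperature the proposal is close to uniform, mirroring the trade-off described in the text.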
In Figure 9 we show the results of networks trained with a variety of temperatures other than the 10,000 ultimately used. We also considered a proposal which was simply proportional to the scores, but found this was also too close to sampling uniformly for any of the costs or benefits of active learning to be visible.

We considered Monte Carlo dropout as an alternative approximating distribution (Gal & Ghahramani, 2015) (see Figures 3e and 10b). We found that the mutual information estimates were compressed into a fairly narrow range, consistent with the observation by Osband et al. (2018) that Monte Carlo dropout uncertainties do not necessarily converge unless the dropout probabilities are also optimized (Gal et al., 2017a). While this might be good enough when only the relative score is needed in order to calculate the argmax, for our proposal distribution we would ideally prefer to have good absolute scores as well. For this reason, we chose the richer approximate posterior distribution instead.

Last, we considered a different architecture, using a fully-connected neural network with a single hidden layer of 50 units, also trained as a Radial BNN. This showed higher variance in downstream performance, but was broadly similar to the convolutional architecture (see Figures 10d and 10e).

Beygelzimer et al. (2009), Chu et al. (2011), and Cortes et al. (2019) apply importance-sampling corrections.

Figure 4: Overfitting bias $B_{\text{OFB}}$ for models trained using the three objectives. (a) Linear regression: $B_{\text{OFB}}$ is small compared to ALB (c.f. Figure 2a). Shading IQR. 1000 trajectories. (b) BNN: $B_{\text{OFB}}$ is of similar scale and opposite sign to ALB (c.f. Figure 2b).

Lemma 1. The individual terms $a_m$ of $\hat{R}_{\text{PURE}}$ are unbiased estimators of the risk: $\mathbb{E}[a_m] = r$.

Proof. We begin by applying the tower property of expectations:
\[
\mathbb{E}[a_m] = \mathbb{E}\left[\mathbb{E}\left[w_m L_{i_m} + \frac{1}{N}\sum_{t=1}^{m-1} L_{i_t} \,\middle|\, \mathcal{D}_{\text{pool}}, i_{1:m-1}\right]\right].
\]
By further noting that $\mathbb{E}_{i_m}[w_m L_{i_m} \mid \mathcal{D}_{\text{pool}}, i_{1:m-1}]$ can be written out analytically as a sum over all the possible values of $i_m$, and that it is therefore deterministic given $\mathcal{D}_{\text{pool}}$ and $i_{1:m-1}$, we have:
\[
\mathbb{E}_{i_m}[w_m L_{i_m} \mid \mathcal{D}_{\text{pool}}, i_{1:m-1}]
= \sum_{n \notin i_{1:m-1}} q(i_m = n; i_{1:m-1}, \mathcal{D}_{\text{pool}}) \cdot \frac{L_n}{N q(i_m = n; i_{1:m-1}, \mathcal{D}_{\text{pool}})}
= \frac{1}{N} \sum_{n \notin i_{1:m-1}} L_n.
\]

For the first term of (17), we have by analogous arguments to those used at the start of the proof for $\operatorname{Var}[\tilde{R}]$:
\[
\operatorname{Var}[L_{i_m} \mid \mathcal{D}_{\text{pool}}]
= \mathbb{E}_{i_{1:m-1}}\big[\operatorname{Var}[L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}] \mid \mathcal{D}_{\text{pool}}\big]
+ \operatorname{Var}\big[\mu_{m|i,D} \mid \mathcal{D}_{\text{pool}}\big], \tag{18}
\]
where $\mu_{m|i,D} := \mathbb{E}[L_{i_m} \mid i_{1:m-1}, \mathcal{D}_{\text{pool}}]$ as per the definition in the theorem itself.
Substituting this back into (17) and then (16) in turn now yields the desired result through the tower property of expectations, with the first term in (18) producing ② and the second term producing ③.

B.9 PROOF OF OPTIMAL PROPOSAL DISTRIBUTION: THEOREM 7

Figure 5: For linear regression (a), the biased estimator has the lowest variance, and $\hat{R}_{\text{LURE}}$ improves on $\hat{R}_{\text{PURE}}$. (b) But for the BNN the variances are more comparable, with $\hat{R}_{\text{LURE}}$ the lowest.

Figure 6: Adopting an alternative proposal distribution (here an epsilon-greedy adaptation of a distance-based measure) does not change the overall picture for linear regression.

Figure 10: Further downstream performance experiments. (a)-(c) are partners to Figures 3d, 3e, and 3f. (d) and (e) show similar results for a smaller multi-layer perceptron (with one hidden layer of 50 units). In all cases the results broadly mirror the results in the main paper.

Figure 2: $\hat{R}_{\text{PURE}}$ and $\hat{R}_{\text{LURE}}$ remove the bias introduced by active learning, while the unweighted $\tilde{R}$, which most active learning work uses, is biased. Note the sign: $\tilde{R}$ overestimates risk because active learning samples the hardest points. Variance for $\hat{R}_{\text{PURE}}$ and $\hat{R}_{\text{LURE}}$ depends on the acquisition distribution placing high weight on high-expected-loss points. In (b), the BALD-style distribution means that the variance of the unbiased estimators is smaller. For FashionMNIST, (c), active learning bias is small and high variance in all cases. Shading is ±1 standard deviation.

Figure 3: For linear regression, the models trained with $\hat{R}_{\text{PURE}}$ or $\hat{R}_{\text{LURE}}$ have lower 'population' risk. In contrast, BNNs trained with $\hat{R}_{\text{LURE}}$ or $\hat{R}_{\text{PURE}}$ perform either similarly (e) or slightly worse (b, c, d, f), even though they remove bias and have lower variance. Shading is one standard deviation. For (a), 1000 samples and $r$ estimated on 10,100 points from the distribution. For (b)/(c), 45 samples and $r$ estimated on the test dataset.
(d)-(f) are alternative settings to validate the consistency of the result. (Figure 3 panel titles: (a) Linear regression: Test MSE; (b) MNIST: Test NLL; (c) MNIST: Test Acc.; (d) FashionMNIST: Test NLL; (e) MNIST (MCDO): Test NLL; (f) MNIST (Balanced): Test NLL.)

Comparing the two biases, ALB and OFB: [ALB >> OFB] Removing ALB reduces overall bias; this regime is most likely to occur when $f_\theta$ is not very expressive, such that there is little chance of overfitting. [ALB << OFB] Removing ALB is irrelevant, as the model has massively overfit regardless. [ALB ≈ OFB] Here the sign is critical. If ALB and OFB have opposite signs and similar scale, they will partially cancel, so removing the active learning bias alone can increase the overall bias.
Table 1: Experimental Setting - Active MNIST.

Architecture: Convolutional Neural Network
Conv 1: 1-16 channels, 5x5 kernel, 2x2 max pool
Conv 2: 16-32 channels, 5x5 kernel, 2x2 max pool
Fully connected 1: 128 hidden units
Fully connected 2: 10 hidden units
Loss function: Negative log-likelihood
Activation: ReLU
Approximate inference algorithm: Radial BNN variational inference (Farquhar et al., 2020)
Optimization algorithm: Amsgrad (Reddi et al., 2018)
Learning rate: 5 · 10^-4
Batch size: 64
Variational training samples: 8
Variational test samples: 8
Variational acquisition samples: 100
Epochs per acquisition: up to 100 (early stopping patience = 20), with 1000 copies of the data
Starting points: 10
Points per acquisition: 1
Acquisition proposal distribution: $q(i_m; i_{1:m-1}, \mathcal{D}_{\text{pool}}) = e^{T s_i} / \sum_j e^{T s_j}$
Temperature T: 10,000
Scoring scheme s: BALD (mutual information between θ and the output distribution)
Variational posterior initial mean: He et al. (2016)
Variational posterior initial standard deviation: log[1 + e^-4]
Prior: N(0, 0.25²)
Dataset: Unbalanced MNIST
Preprocessing: Normalized mean and std of inputs
Validation split: 1000 train points for validation
Runtime per result: 2-4h
Computing infrastructure: Nvidia RTX 2080 Ti

Figure 8: Versions of Figures 3b and 3c shown with standard errors (45 points) instead of standard deviations. This makes it clearer that the biased $\tilde{R}$ has better performance, even if only marginally so.
Figure 9 panels: (a) T=5000 NLL; (b) T=15000 NLL; (c) T=20000 NLL; (d) T=5000 Acc.; (e) T=15000 Acc.; (f) T=20000 Acc. Figure 10 panels: (a) FashionMNIST: Accuracy; (b) MNIST (MCDO): Acc.; (c) MNIST (Balanced): Acc.; (d) MNIST (MLP): NLL; (e) MNIST (MLP): Accuracy.

Footnote: One can, in principle, actually construct an exact estimator in this scenario as well with the TABI approach of Rainforth et al. (2020), by employing two separate proposals that target $\max(\nabla_\theta \hat{R}, 0)$ and $-\min(\nabla_\theta \hat{R}, 0)$ respectively, then taking the difference between the two resultant estimators.
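As an aside to the proofs in Appendix B, Theorem 7's exactness claim is easy to verify numerically: under the loss-proportional proposal $q^*$, every single draw of $\hat{R}_{\text{PURE}}$ equals the pool risk, not just on average. A minimal sketch with made-up losses (the loop mirrors the $a_m$ telescoping in the proof):

```python
import random

def pure_with_optimal_proposal(losses, M, rng):
    """R_PURE when q*(i_m) = L_{i_m} / (sum of the remaining losses)."""
    N = len(losses)
    remaining = list(range(N))
    acquired_sum = 0.0
    total = 0.0
    for _ in range(M):
        z = sum(losses[i] for i in remaining)
        i_m = rng.choices(remaining,
                          weights=[losses[i] / z for i in remaining])[0]
        q_im = losses[i_m] / z
        # a_m = w_m L_{i_m} + prefix/N collapses to (z + prefix)/N = R_hat.
        total += losses[i_m] / (N * q_im) + acquired_sum / N
        acquired_sum += losses[i_m]
        remaining.remove(i_m)
    return total / M

rng = random.Random(1)
losses = [rng.uniform(0.1, 1.0) for _ in range(15)]
pool_risk = sum(losses) / len(losses)
draws = [pure_with_optimal_proposal(losses, 4, rng) for _ in range(50)]
```

Every entry of `draws` should equal `pool_risk` up to floating-point error, regardless of which indices were sampled.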
ACKNOWLEDGEMENTS

The authors would especially like to thank Lewis Smith for helpful conversations and specifically for his assistance with the proof of Theorem 4. In addition, we would like to thank Joost van Amersfoort and Andreas Kirsch for their conversations and advice. The authors are grateful to the Engineering and Physical Sciences Research Council for their support of the Centre for Doctoral Training in Cyber Security, University of Oxford, as well as to the Alan Turing Institute.

D DEEP ACTIVE LEARNING IN PRACTICE

In Table 2, we show an informal survey of highly cited papers citing Gal et al. (2017b), which introduced active learning to computer vision using deep convolutional neural networks. Across a range of papers, including theory papers as well as applications ranging from agriculture to molecular science, only two papers acknowledged the bias introduced by actively sampling, and none of the papers took steps to address it. It is worth noting, though, that at least two papers motivated their use of active learning by observing that they expected their training data to already be unrepresentative of the population data, and saw active learning as a way to address that bias. This does not quite work, unless one explicitly assumes that the actively chosen distribution is more like the population distribution, but it is an interesting phenomenon to observe in practical applications of active learning.

REFERENCES

Yarin Gal, Riashat Islam, and Zoubin Ghahramani.
Ravi Ganti and Alexander Gray. UPAL: Unbiased pool based active learning. Artificial Intelligence and Statistics, 15, 2012.

Sambuddha Ghosal, Bangyou Zheng, Scott C. Chapman, Andries B. Potgieter, David R. Jordan, Xuemin Wang, Asheesh K. Singh, Arti Singh, Masayuki Hirafuji, Seishi Ninomiya, Baskar Ganapathysubramanian, Soumik Sarkar, and Wei Guo. A Weakly Supervised Deep Learning Framework for Sorghum Head Detection and Counting. Science Partner Journal, 2019.

Juan Mario Haut, Mercedes E. Paoletti, Javier Plaza, Jun Li, and Antonio Plaza. Active Learning With Convolutional Neural Networks for Hyperspectral Image Classification Using a New Bayesian Approach. IEEE Transactions on Geoscience and Remote Sensing, 56(11):6440-6461, November 2018. doi: 10.1109/TGRS.2018.2838665.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. CVPR, 2016.

Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian Active Learning for Classification and Preference Learning. arXiv, 2011.

Peiyun Hu, Zachary C. Lipton, Anima Anandkumar, and Deva Ramanan. Active Learning with Partial Feedback. In International Conference on Learning Representations, 2019.

Sheng-Jun Huang, Jia-Wei Zhao, and Zhao-Yang Liu. Cost-Effective Training of Deep CNNs with Active Model Adaptation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pp. 1580-1588, July 2018.

Henrik Imberg, Johan Jonasson, and Marina Axelson-Fisk. Optimal sampling in unbiased active learning. Artificial Intelligence and Statistics, 23, 2020.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. Introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.

Benjamin Kellenberger, Diego Marcos, Sylvain Lobry, and Devis Tuia. Half a Percent of Labels is Enough: Efficient Animal Detection in UAV Imagery Using Deep CNNs and Active Learning. IEEE Transactions on Geoscience and Remote Sensing, 57(12):9524-9533, December 2019.

Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. arXiv:1906.08158, 2019.

Akshay Krishnamurthy, Alekh Agarwal, Tzu-Kuo Huang, Hal Daumé III, and John Langford. Active learning for cost-sensitive classification. In Proceedings of Machine Learning Research, volume 70, pp. 1915-1924, 2017.

David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '94, pp. 3-12, August 1994.

David Lowell, Zachary C. Lipton, and Byron C. Wallace. Practical Obstacles to Deploying Active Learning. Empirical Methods in Natural Language Processing, November 2019.

David J. C. MacKay. Information-Based Objective Functions for Active Data Selection. Neural Computation, 4(4):590-604, 1992.

Ian Osband, John Aslanides, and Albin Cassirer. Randomized Prior Functions for Deep Reinforcement Learning. Neural Information Processing Systems, 2018.

Tom Rainforth. Automating Inference, Learning, and Design using Probabilistic Programming. PhD thesis, University of Oxford, 2017.

Tom Rainforth, A. Goliński, Frank Wood, and Sheheryar Zaidi. Target-aware Bayesian inference: how to beat optimal conventional estimators. Journal of Machine Learning Research, 21(88), 2020.

Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the Convergence of Adam and Beyond. International Conference on Learning Representations, February 2018.

Ozan Sener and Silvio Savarese. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In International Conference on Learning Representations, February 2018.

Burr Settles. Active Learning Literature Survey. Machine Learning, 15(2):201-221, 2010.

Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. Deep Active Learning for Named Entity Recognition. In International Conference on Learning Representations, 2018.

Weishi Shi and Qi Yu. Fast direct search in an optimally compressed continuous target space for efficient multi-label active learning. In Proceedings of Machine Learning Research, volume 97, pp. 5769-5778, June 2019.

Aditya Siddhant and Zachary C. Lipton. Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2904-2909, Brussels, Belgium, 2018.

Samrath Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational Adversarial Active Learning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5971-5980, October 2019. doi: 10.1109/ICCV.2019.00607.

Masashi Sugiyama. Active learning for misspecified models. Neural Information Processing Systems, 18:1305-1312, 2006.

Iiris Sundin, Peter Schulam, Eero Siivola, Aki Vehtari, Suchi Saria, and Samuel Kaski. Active learning for decision-making from imbalanced observational data. In Proceedings of Machine Learning Research, volume 97, pp. 6046-6055, June 2019.

Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-Effective Active Learning for Deep Image Classification. IEEE Transactions on Circuits and Systems for Video Technology, January 2017. arXiv:1701.03551.

Si Wen, Tahsin M. Kurc, Le Hou, Joel H. Saltz, Rajarsi R. Gupta, Rebecca Batiste, Tianhao Zhao, Vu Nguyen, Dimitris Samaras, and Wei Zhu. Comparison of Different Classifiers with Active Learning to Support Quality Control in Nucleus Segmentation in Pathology Images. AMIA Joint Summits on Translational Science Proceedings, 2017:227-236, 2018.

Songbai Yan, Kamalika Chaudhuri, and Tara Javidi. Active Learning with Logged Data. In International Conference on Machine Learning, pp. 5521-5530. PMLR, July 2018.

Jie Yang, Thomas Drake, Andreas Damianou, and Yoelle Maarek. Leveraging Crowdsourcing Data for Deep Active Learning. An Application: Learning Intents in Alexa. In Proceedings of the 2018 World Wide Web Conference, WWW '18, pp. 23-32, April 2018.

Donggeun Yoo and In So Kweon. Learning Loss for Active Learning. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 93-102, June 2019.

Yao Zhang and Alpha A. Lee. Bayesian semi-supervised learning for uncertainty-calibrated prediction of molecular properties and active learning. Chemical Science, 10(35):8154-8163, September 2019. doi: 10.1039/C9SC00616H.
Title: The effect of active fluctuations on the dynamics of particles, motors and hairpins

Authors: Hans Vandebroek (Faculty of Sciences, Hasselt University, 3590 Diepenbeek, Belgium) and Carlo Vanderzande (Faculty of Sciences, Hasselt University, 3590 Diepenbeek, Belgium; Instituut Theoretische Fysica, Katholieke Universiteit Leuven, 3001 Heverlee, Belgium)

Abstract: Inspired by recent experiments on the dynamics of particles and polymers in artificial cytoskeletons and in cells, we introduce a modified Langevin equation for a particle in an environment that is a viscoelastic medium and that is brought out of equilibrium by the action of active fluctuations caused by molecular motors. We show that within such a model, the motion of a free particle crosses over from superdiffusive to subdiffusive as observed for tracer particles in an in vitro cytoskeleton or in a cell. We investigate the dynamics of a particle confined by a harmonic potential as a simple model for the motion of the tethered head of kinesin-1. We find that the probability that the head is close to its binding site on the microtubule can be enhanced by a factor of two due to active forces. Finally, we study the dynamics of a particle in a double well potential as a model for the dynamics of DNA-hairpins. We show that the active forces effectively lower the potential barrier between the two minima and study the impact of this phenomenon on the zipping/unzipping rate.

DOI: 10.1039/c6sm02568d
arXiv: 1611.02123
I. INTRODUCTION

At the nanoscale all physical, chemical and biological processes are influenced by the ever-present thermal noise. This is especially true for molecular and cellular biophysical processes. In theoretical models of these phenomena it is therefore common to use the theory of stochastic processes [1].
The motion of a particle in a cell, the dynamics of a molecular motor, the translocation of a biopolymer through a membrane and the folding of a protein are but a few of the various processes that are described using Langevin, Fokker-Planck or discrete-state master equations [2]. In most of these studies one assumes that the dynamics under study takes place in a viscous solvent that is in equilibrium. The latter property is ensured by assuming a fluctuation-dissipation relation between the strengths of the viscous and random forces appearing in the Langevin equations. Such an approach, however, neglects two important properties of the cytosol: it is a dense environment that certainly is not just an ordinary viscous solvent, and it is out of equilibrium [3,4]. Neglecting these aspects can be justified as a first approximation, or as relevant for experiments which are often performed in simple in vitro solvents where measurements can be made in a more precise and controlled way than in the complex environment of a cell. Yet, if one wants to understand how cells function, it is necessary to perform experiments in vivo and to interpret them using theories that take into account the real physical properties of the cytosol. In the present paper we introduce an extended Langevin model that takes into account the viscoelasticity and the out-of-equilibrium nature of the cytosol, and we examine how it affects simple processes like the motion of a particle, the stepping of a molecular motor or the zipping of a DNA hairpin.

Let us briefly discuss two important properties in which the cytosol differs from an ordinary solvent. Living cells are crowded. Extensive studies have tried to characterise the physical properties of this environment [5] through, for example, passive rheological measurements in which one follows the motion of tracer particles.
The mean squared displacement (MSD), σ^2(t), of these particles is found to follow a power law σ^2(t) ∼ t^α, where the exponent α is almost always different from one. Several models [5] have been proposed to explain this behaviour, but there is now a growing consensus that it is due to viscoelasticity [6]. Viscoelastic effects appear as soon as the size of the tracer particle becomes of the order of the mesh size of the dense polymer networks that are present in the cytosol. At the same time, the cell is a system out of equilibrium where various active processes (like the action of motor molecules (myosin) on the actin filaments of the cytoskeleton) lead to fluctuations which can modify the passive transport of internalised or endogenous particles and which can also modify the motion of chromosomal loci or other dynamical processes in biopolymers. In an early study [7] it was for example found that the mean squared displacement of a microsphere inside a living cell exhibits superdiffusive motion for times up to 10 seconds, after which the MSD changed to subdiffusive motion. The superdiffusive motion has to be attributed to nonequilibrium, active processes. Similar behaviour has by now been found in other experiments in cellular environments [8-10] and for particles immersed in an in vitro model cytoskeleton that consists of actin filaments, crosslinkers and myosin motors [11-14]. Besides particles, the dynamics of biopolymers like chromosomes and microtubules in a cellular environment has also been found to differ from that in an ordinary solvent. For example, shape fluctuations of microtubules in in vitro cytoskeletons were found to deviate from those expected for a semiflexible polymer in equilibrium [15,16]. The motion of chromosomal loci in simple organisms like bacteria and yeast was recently shown to depend on active forces [17,18]. Weber et al.
found that after the addition of chemicals that inhibit ATP synthesis, the diffusion constant of chromosomal loci decreased by 49%. In another study, superdiffusive motion of bacterial chromosomal loci has been observed [19]. Moreover, measurements of chromatin in eukaryotes show evidence for an important role played by ATP-dependent processes [20,21]. Recently the present authors [22] studied the well known Rouse model of polymer dynamics [23,24] in a viscoelastic and active environment and found similarities between the motion of monomers in that model and the observed behaviour of chromosomal loci. Less is known about the possible influence of active forces on other dynamical processes such as the stepping of molecular motors or the zipping of a DNA (or RNA) hairpin.

Continuing the research presented in [22], we here aim to further understand the influence of viscoelasticity and activity on some cellular processes through the study of the dynamics in a simplified model. For this purpose, we study the motion of "particles" in various potentials and in an active viscoelastic environment. The "particle" can correspond to a real physical particle like a protein or a microsphere internalised in a cell, or could represent a slow variable or (reaction) coordinate used to describe, for example, the stepping of a motor domain of kinesin-1, or the zipping of a hairpin molecule.

This paper is organised as follows. In section 2 we present and motivate our model. In section 3 we give an exact solution for a free particle and show that the behaviour of the MSD within our model is very similar to that observed in various experiments on the motion of internalised or endogenous particles in a cell or in an in vitro cytoskeleton. In section 4 we give an exact solution for a particle in a harmonic potential.
Here we determine the transient and steady-state behaviour of the position of the particle and use it to study the diffusion of the tethered motor domain of kinesin-1. In section 5 we study the motion in a double well potential, where the "particle" has to be interpreted as a reaction coordinate. This case has to be studied numerically and the details of our algorithm are given in the supplementary information. We determine both the probability distribution of the position of the particle and the transition rate between the minima of the potential. This is, as far as we know, the first study of the extension of Kramers' rate theory [1] to an active and viscoelastic environment. We apply our results to the zipping of hairpin molecules. Finally, in section 6 we present our concluding remarks.

II. THE MODEL AND ITS BIOPHYSICAL MOTIVATION

As a starting point, we take the well known Langevin equation for the velocity v(t) of a mesoscopic particle of mass m in a normal viscous fluid [1,25]

m dv/dt = −γ v(t) − dV/dx + ξ_T(t)    (1)

Eq. (1) is just Newton's equation of motion for a particle experiencing friction (with a friction coefficient γ) and subject both to a deterministic force from a potential V(x) and a random thermal force ξ_T(t). Most often ξ_T(t) is taken to be a centered Gaussian random variable with an autocorrelation given by the fluctuation-dissipation theorem (FDT), ⟨ξ_T(t) ξ_T(t′)⟩ = 2γ k_B T δ(t − t′), where δ(t) is the Dirac delta function.

In a viscoelastic environment [26], friction is history dependent and is described in terms of a kernel K(t). For the simplest viscoelastic element, the Maxwell element [26], which consists of a spring and a dashpot, K(t) is exponential. The Langevin equation (1) is accordingly modified and becomes

m dv/dt = −γ ∫_0^t K(t − t′) v(t′) dt′ − dV/dx + ξ_T(t)    (2)

The stochastic properties of the random force ξ_T(t) are still determined by the FDT, which in this case becomes ⟨ξ_T(t) ξ_T(t′)⟩ = γ k_B T K(t − t′).
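As a purely illustrative aside (not part of the paper's numerics), the overdamped viscous limit of Eq. (1), γ dx/dt = −dV/dx + ξ_T(t), can be integrated with a simple Euler-Maruyama scheme; for V(x) = kx^2/2 the equipartition result ⟨x^2⟩ = k_B T/k then provides a check that the FDT noise amplitude is implemented correctly. All parameter values below are arbitrary choices for the sketch:

```python
import math
import random

def simulate_overdamped(k=1.0, gamma=1.0, kBT=1.0, dt=0.01,
                        steps=200_000, seed=0):
    """Euler-Maruyama integration of the overdamped limit of Eq. (1):
    gamma dx/dt = -k x + xi_T(t), with V(x) = k x^2 / 2.
    By the FDT, each step receives a Gaussian kick of
    variance 2 kB T dt / gamma."""
    rng = random.Random(seed)
    kick = math.sqrt(2.0 * kBT * dt / gamma)
    x, samples = 0.0, []
    for n in range(steps):
        x += -(k / gamma) * x * dt + kick * rng.gauss(0.0, 1.0)
        if n >= steps // 10:       # discard the initial transient
            samples.append(x)
    return samples

samples = simulate_overdamped()
# In equilibrium, equipartition gives <x^2> = kB T / k (= 1 here),
# up to O(dt) discretisation bias and sampling noise.
var = sum(x * x for x in samples) / len(samples)
```

The sampled variance approaching k_B T/k is exactly the equilibrium property that the active forces introduced later will break.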
The precise form of K(t) depends on the rheological properties of the fluid. In the case that the viscoelasticity is due to polymers, which have a broad spectrum of relaxation times, K(t) can very well be approximated by a power law on time scales smaller than the longest relaxation time. We will therefore take

K(t) = (2 − α)(1 − α) t^{−α}    (3)

with 0 ≤ α ≤ 1. With this choice of prefactor, the noise ξ_T(t) becomes so-called fractional Gaussian noise (fGn) [27]. In the limit α → 1 the viscous case (1) is recovered. In this paper we will concentrate on the case α > 1/2, which leads to the most interesting behaviour and which can also be simulated most efficiently with our numerical algorithm. Notice that (2) still describes a system that evolves towards thermal equilibrium, as guaranteed by the FDT; only the transient behaviour of (1) and (2) differs. In this paper we will work in the overdamped limit (m/γ → 0), as appropriate for the low Reynolds numbers relevant at the scales encountered in cellular biophysics.

Finally we need to model the active forces that put the system out of equilibrium. Here we use a model for active gels first introduced by Levine and MacKintosh [28,29]. It describes in vitro cytoskeletons or the cytoplasm by a two-fluid model of a network of semiflexible polymers driven by molecular motors. It predicts an exponential correlation of active fluctuations or, equivalently, a power spectrum that is constant at low frequencies ω and that decays as ω^{−2} at high frequencies. Such a form is indeed consistent with recent experiments on the fluctuations of carbon nanotubes inside cells [30]. More generally, one can expect that there is a typical time scale τ_A on which active processes are persistent; for example, it could correspond to the typical time that a myosin motor is attached to the actin filaments.
For these reasons we add to the Langevin equation (2) an extra active noise term ξ_A(t) that is a centered random variable with exponential correlation

⟨ξ_A(t) ξ_A(t′)⟩ = C exp(−|t − t′|/τ_A)    (4)

Here C characterises the strength of the active processes. Since there is no friction term associated with the active forces, and no related FDT, adding an active noise to (2) makes the system a non-equilibrium one. Experiments [11-13] have shown that the displacement of a particle immersed in an artificial actomyosin network has a Gaussian distribution with exponential tails superimposed. However, in very recent work [14] it has been shown that at low myosin concentrations the distribution is purely Gaussian. In this paper we will assume that ξ_A(t) is a Gaussian random variable. Though we acknowledge that this is an approximation, it has the advantage that it leads to exactly solvable models which can then be used as benchmarks to investigate deviations from Gaussianity. The effect of non-Gaussian active forces will be investigated in future work. In this respect, we also mention recent work where the motion of a particle subject to thermal and non-Gaussian active noise was investigated [31,32]. That work was limited to the viscous situation and also considered only the motion in a harmonic potential.

Putting everything together, the equation that we will study for various potentials V(x) is the (generalized) Langevin equation

γ ∫_0^t K(t − t′) v(t′) dt′ = −dV/dx + ξ_T(t) + ξ_A(t) Θ(t)    (5)

where Θ(t) is the Heaviside function. Thus we assume that the system is in thermal equilibrium at t = 0, at which time the active forces start to act. The resulting motion x(t) (t > 0) of the particle can then be interpreted as a response to the active forces.

III. FREE PARTICLE

The simplest case to study is that of a constant potential V(x), i.e. the motion of a free particle in an active and viscoelastic environment.
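Because ξ_A(t) is Gaussian with the exponential correlation (4), it can be sampled exactly on a discrete time grid using the standard AR(1) update for an Ornstein-Uhlenbeck-type process. The sketch below is our own illustration (parameter values arbitrary), not the paper's simulation algorithm:

```python
import math
import random

def active_noise(C=1.0, tau_A=1.0, dt=0.05, steps=100_000, seed=1):
    """Exact sampling of a stationary Gaussian process with
    <xi_A(t) xi_A(t')> = C exp(-|t - t'| / tau_A), cf. Eq. (4).
    The update xi -> rho*xi + sqrt(C*(1 - rho^2))*g, with
    rho = exp(-dt/tau_A), reproduces the exponential correlation
    exactly at the grid points."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau_A)
    amp = math.sqrt(C * (1.0 - rho * rho))
    xi = rng.gauss(0.0, math.sqrt(C))   # draw from the stationary state
    out = []
    for _ in range(steps):
        xi = rho * xi + amp * rng.gauss(0.0, 1.0)
        out.append(xi)
    return out
```

The sample variance estimates C, and the autocorrelation at a lag of τ_A/dt grid points should be down by a factor e, which is a convenient check on the correlation time.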
As a possible application we mention the movement of a particle inside a cell. In several experiments it is observed that the motion of such a particle is superdiffusive at early times, but switches to being subdiffusive after a time of the order of seconds. In [7] it was for example found that the MSD of a microsphere inside a living cell shows superdiffusive motion ∼ t^{3/2} for times up to 10 seconds. On larger time scales, the particle showed subdiffusive motion ∼ t^{1/2}. In a study of breast-cancer cells [8,9] the role of molecular motors and filaments of the cytoskeleton in active transport was investigated. These authors tracked the motion of polystyrene particles in cells with different metastatic potential after chemical treatments that affect different cell components like myosin, actin and microtubules. While the precise value of the exponent depended on the treatment applied, in all cases it was found that the exponent α decreased from a value in the range 1.2-1.4 to a significantly lower value of 0.8-1.0 after approximately 3 seconds. So also here there is a crossover between superdiffusive and subdiffusive motion. Similar behaviour was also observed outside living matter for particles moving in actomyosin networks [11,12].

For a constant potential, the equation of motion (5) becomes

γ ∫_0^t K(t − t′) ẋ(t′) dt′ = ξ_T(t) + ξ_A(t) Θ(t)    (6)

This linear equation can easily be solved by Laplace transformation techniques. The details will be given in the next section, where we solve the more general problem of motion in a harmonic potential V(x) = kx^2/2. Taking the limit k → 0 we find that for a free particle the MSD σ^2(t) ≡ ⟨(x(t) − x(0))^2⟩ is given by:

σ^2(t) = 2 D_α t^α / Γ(α + 1) + [2C τ_A^{2α} / (Γ^2(α) η_α^2)] ∫_0^{t/τ_A} dy e^y y^{α−1} Γ(α; y, t/τ_A)    (7)

Here D_α = k_B T/η_α, η_α = γ Γ(3 − α) and

Γ(α; x_1, x_2) = ∫_{x_1}^{x_2} dx e^{−x} x^{α−1}    (8)

is a difference of two incomplete gamma functions.
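Equation (7) is straightforward to evaluate numerically. The sketch below is our own quadrature, not the paper's method; the substitution u = x^α, which removes the integrable x^{α−1}-type singularities, is an implementation choice, and all parameter values are illustrative:

```python
import math

def gamma_diff(a, x1, x2, n=400):
    """Gamma(a; x1, x2) = int_{x1}^{x2} e^{-x} x^{a-1} dx, cf. Eq. (8).
    With u = x^a the integrand becomes exp(-u^{1/a}) / a, which is
    smooth, so a plain trapezoid rule suffices."""
    u1, u2 = x1 ** a, x2 ** a
    h = (u2 - u1) / n
    total = 0.0
    for i in range(n + 1):
        u = u1 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-(u ** (1.0 / a)))
    return h * total / a

def msd(t, alpha=0.75, C=1.0, tau_A=1.0, gam=1.0, kBT=1.0, n=400):
    """Free-particle MSD of Eq. (7): thermal term plus active term."""
    eta = gam * math.gamma(3.0 - alpha)
    thermal = 2.0 * (kBT / eta) * t ** alpha / math.gamma(alpha + 1.0)
    T = t / tau_A
    h = T ** alpha / n                # outer integral with u = y^alpha
    total = 0.0
    for i in range(n + 1):
        y = (i * h) ** (1.0 / alpha)
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(y) * gamma_diff(alpha, y, T)
    active = (2.0 * C * tau_A ** (2 * alpha)
              / (math.gamma(alpha) ** 2 * eta ** 2)) * h * total / alpha
    return thermal + active
```

As a consistency check, for t much smaller than τ_A the active contribution msd(t, C=1) − msd(t, C=0) scales as t^{2α} (a local slope of 1.5 for α = 3/4), in line with the crossover behaviour of the second term of (7).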
The same MSD holds for the center of mass of a Rouse chain in a viscoelastic and active environment [22]. Equation (7) shows that in the absence of active forces (C = 0) the particle performs subdiffusion with exponent α. To determine the behaviour in the presence of active forces, we investigate the behaviour of the second term on the rhs of (7). After an initial regime (t ≪ τ_A) in which this term can be neglected, it becomes proportional to t^{2α} (t < τ_A), while for t > τ_A it evolves as t^{2α−1} (see the supplementary material of [22] for a detailed derivation of these results). The resulting behaviour of the MSD depends on the relative importance of the two terms in (7) (see Fig. 1). When C ≫ k_B T the second term dominates and the MSD changes from superdiffusive (with exponent 2α) to subdiffusive (exponent 2α − 1). This scenario is consistent with the experimental results of [7], where a drop in the exponent from 3/2 to 1/2 is observed. For that experiment we conclude that α = 3/4 and that the active forces are strong in comparison with the thermal ones. Moreover, from the experimentally determined time at which the exponent changes, one can conclude that τ_A ∼ 10 seconds. If, however, we decrease C, the two terms in (7) can become comparable, and hence one will observe an effective exponent that is between 2α and α for t < τ_A and between α and 2α − 1 for t > τ_A. This is shown in Fig. 1 for α = 0.75 and for various values of C. In the insets of Fig. 1 we show the dependence of the effective exponent on C for α = 3/4 and k_B T = 1. In these cases the drop in the effective exponent is less than 1. For example, for C = 128 the respective exponents are 1.39 and 0.53. We see that in these situations the observed exponents cannot be simply related to the viscoelastic properties of the medium. This could be the scenario behind the experiments on breast-cancer cells discussed above.
We can thus conclude that in the nonequilibrium situation there is no simple relation between the measured exponent of the MSD and the rheological property α. In the relation between the two, an important role is played by the strength of the active fluctuations in comparison with the thermal ones.

IV. PARTICLE IN A HARMONIC POTENTIAL

We now turn to the case where the particle moves in a harmonic potential V(x) = kx²/2. Before giving the solution of (5) for this case, we discuss a possible application to molecular motors like kinesin-1. Kinesin-1 is a two-headed molecular motor whose two heads are connected by a linker that in turn is connected to a cargo-binding tail. Detailed mechanochemical models have been introduced that successfully describe the velocity of the motor as a function of applied force, ATP concentration, etc. It is less clear how to explain the large processivity of kinesin-1 and that of other motors in the kinesin family [33]. A crucial role in these models is played by the mechanical properties of the neck linker. It is thought that after the binding of ATP to the front head, the free head has to perform a diffusive motion to the next site on the microtubule, where it can then bind. This diffusive motion depends on the elastic properties of the neck linker, which in the simplest approximation is often modelled as a Hookean spring. A better approximation is to describe the linker as a wormlike chain (WLC) and use the latter's well-known force-extension relation to determine the potential in which the motor head moves [34]. Important properties are then the time it takes for the head to diffuse in this potential over a distance of 4.1 nm, i.e. the distance to the next binding site, and the probability p that the head is within a small distance of that binding site. For example, in a model based on a Hookean spring, this probability was found to be rather small (p = 0.058) [34].
Below we will show that in active media, both the time to reach the binding site and p can be modified significantly.

In a harmonic potential, (5) becomes

γ ∫_0^t K(t − t′) v(t′) dt′ = −kx + ξ_T(t) + ξ_A(t) Θ(t)    (9)

Since this is a linear equation, it can be solved using Laplace transform techniques. We denote the Laplace transform of a function g(t) by g̃(s) = ∫_0^∞ dt e^{−st} g(t). In this way, (9) becomes

η_α s^{α−1} (s x̃(s) − x(0)) = −k x̃(s) + ξ̃_T(s) + ξ̃_A(s)    (10)

which gives

x̃(s) = x(0) s^{−1} (1 + k s^{−α}/η_α)^{−1} + (ξ̃_T(s) + ξ̃_A(s)) (s^{−α}/η_α) (1 + k s^{−α}/η_α)^{−1}    (11)

The inverse Laplace transform can be done in terms of the Mittag-Leffler function E_{α,β}(z), an extension of the exponential function that appears often in studies of motion in a viscoelastic medium with a power-law kernel. The Mittag-Leffler function [35] is defined through its series expansion

E_{α,β}(z) = Σ_{i=0}^∞ z^i / Γ(αi + β)    (12)

For α = β = 1 we recover the Taylor series of the exponential. From (12), it can be shown that

∫_0^∞ dt e^{−st} t^{β−1} E_{α,β}(a t^α) = s^{−β} (1 − a s^{−α})^{−1}    (13)

Using (13), one can invert (11) and find the position of the particle

x(t) = x(0) E_{α,1}(−(t/τ)^α) + (1/η_α) ∫_0^∞ dt′ (ξ_T(t − t′) + ξ_A(t − t′)) t′^{α−1} E_{α,α}(−(t′/τ)^α)    (14)

where we have introduced the characteristic time

τ = (η_α/k)^{1/α}    (15)

Because (9) is linear and the noises ξ are Gaussian, x(t) is also a Gaussian random variable. Hence it is sufficient to calculate its first two cumulants. It is immediately clear from (14) that, since we start from thermal equilibrium at t = 0 (hence ⟨x(0)⟩ = 0) and since the noises are centered, ⟨x(t)⟩ = 0 for all t. The calculation of the variance of x(t) is more involved.
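The series (12) converges for all z, so for moderate arguments E_{α,β} can be evaluated by direct summation. The sketch below does exactly that, truncating the sum before the gamma function overflows; it is a naive illustration, not an optimized algorithm — for large |z| one would turn to the representations discussed in [35]. Useful sanity checks: E_{1,1}(z) = e^z and E_{2,1}(−x²) = cos x.

```python
import math

def mittag_leffler(z, alpha, beta, terms=200):
    # E_{α,β}(z) = Σ_{i≥0} z^i / Γ(α i + β), eq. (12), truncated series
    nmax = min(terms, int((170.0 - beta) / alpha))  # keep Γ(αi + β) below overflow
    return sum(z ** i / math.gamma(alpha * i + beta) for i in range(nmax + 1))
```

For the relaxation functions appearing below, only negative arguments of moderate size are needed, for which this truncated series is accurate.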
Using the autocorrelations of the noises ξ_T and ξ_A, and the fact that at t = 0 the equipartition theorem ⟨x²(0)⟩ = k_B T/k holds (with k_B Boltzmann's constant), one gets

⟨x²(t)⟩ = (k_B T/k) E²_{α,1}(−(t/τ)^α) + (1/η_α²) ∫_0^t dt′ ∫_0^t dt″ [γ k_B T (2−α)(1−α) |t′ − t″|^{−α} + C e^{−|t′−t″|/τ_A}] t′^{α−1} t″^{α−1} E_{α,α}(−(t′/τ)^α) E_{α,α}(−(t″/τ)^α)    (16)

This expression can be simplified greatly when we use a property of the Mittag-Leffler functions which is derived in the supplemental material. In this way, we obtain after some further manipulations

⟨x²(t)⟩ = k_B T/k + (C/k²) A_α(t, τ, τ_A)    (17)

Here we introduced the function A_α(t, τ, τ_A), which is given by

A_α(t, τ, τ_A) = ∫_0^{t/τ} dx ∫_0^{t/τ} dy e^{−|x−y|τ/τ_A} x^{α−1} y^{α−1} E_{α,α}(−x^α) E_{α,α}(−y^α)    (18)

The first (second) term of (17) is the contribution to the variance coming from thermal (active) fluctuations. The probability density P(x,t) that gives the chance to find the particle at time t in the small interval between x and x + dx is then given by the Gaussian

P(x,t) = (2π⟨x²(t)⟩)^{−1/2} exp(−x² / 2⟨x²(t)⟩)    (19)

Notice that, independently of α, A_α(t, τ, τ_A) is always strictly positive, and therefore the density P(x,t) always broadens after the active forces have been turned on. This broadening as a function of time is illustrated in Fig. 2 for C = 10³ and τ_A = 1 in a medium with α = 3/4. For large times, the system will evolve to a new nonequilibrium steady state in which the variance reaches the value ⟨x²⟩_ss = lim_{t→∞} ⟨x²(t)⟩ = k_B T/k + C A_α(∞, τ, τ_A)/k². From (18) it can be seen that for t → ∞, A_α(t, τ, τ_A) only depends on the ratio τ/τ_A. Hence we find that in the stationary state the variance is of the form

⟨x²⟩_ss = k_B T/k + (C/k²) f_α(τ/τ_A)    (20)

In the limit τ/τ_A → 0, the exponential function disappears from (18) and, since ∫_0^∞ dx x^{α−1} E_{α,α}(−x^α) = 1, it follows that f_α(τ/τ_A) → 1.
In the reverse limit, τ /τ A 1, the exponential term in (18) can be written in terms of a Dirac-delta function, from which it follows that f α (τ /τ A ) goes as In Fig.3 we plot f α (τ /τ A ) for three values of α. The important conclusion here is that the nonequilibrium steady state depends on α, i.e. on the nature of the viscoelastic environment. This is in contrast to thermal equilibrium where there is no such dependence. We now apply these results to the diffusive motion of the free head of a kinesin-1 motor, in which as explained above, the head linker is modelled as a Hookean spring. f α (τ /τ A ) → 2τ A τ ∞ 0 dx x 2α−2 E 2 α,α (−x α ) (21) Here we follow [34] and choose k = 1 pN/nm. From (15) it then follows that τ ≈ 50µs, which is much shorter than the persistence time of the active forces, which is of the order of seconds (here as a rough estimate we took α = 1 and γ = 6πηa where for the cytosol we assumed η to be thousand times that of water and have taken a = 3 nm). Hence we are in the regime where τ /τ A 1, so that x 2 ss = k B T /k + C/k 2 . In the model of [34] it is assumed that the head can attach to the microtubule when it is within one nm of the binding site at 4.1 nm. For the probability density (19) in the stationary state, the probability p that the head is in this range is given by p = 1 2 erf 5.1 2 x 2 ss − erf 3.1 2 x 2 ss(22) where erf(x) is the error function and all distances are expressed in nm. In Fig. 4 we have plotted this probability as a function of the strength √ C of the active forces (in pN). We see that upon increasing C, p increases from its value of 0.058 in absence of active forces, reaches a maximum of p = .118 around √ C ≈ 3.5 pN and then decreases again. The value where p reaches its maximum is indeed of the expected order for active forces in a cell. Hence, we conclude that active forces can double the probability that the free head is near its binding site. 
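Equations (20) (with f_α → 1) and (22) combine into a two-line estimate of the binding probability. The sketch below reproduces the numbers quoted in the text: p ≈ 0.058 for C = 0 and p ≈ 0.118 at √C = 3.5 pN. The default values k = 1 pN/nm and k_B T = 4.14 pN nm are those used in this section; all distances are in nm.

```python
import math

def binding_probability(C, k=1.0, kBT=4.14):
    # eq. (20) in the regime τ/τ_A << 1 (f_α → 1): <x²>_ss = k_B T / k + C / k²
    var = kBT / k + C / k ** 2                       # [nm²]
    s = math.sqrt(2.0 * var)
    # eq. (22): probability of lying within 1 nm of the binding site at 4.1 nm
    return 0.5 * (math.erf(5.1 / s) - math.erf(3.1 / s))
```

At still larger C the stationary distribution becomes so broad that weight leaks past the binding region, which is why p eventually decreases again.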
Next we look at the time dependence of the MSD of the particle in the harmonic well. We have the relation

σ²(t) = ⟨(x(t) − x(0))²⟩ = ⟨x²(t)⟩ + ⟨x²(0)⟩ − 2⟨x(t)x(0)⟩    (23)

The only term that remains to be calculated is the third one, which can easily be found from (14), the equipartition theorem at t = 0 and the independence of the initial position and the random forces. Using also (17) we find

σ²(t) = (2k_B T/k) [1 − E_{α,1}(−(t/τ)^α)] + (C/k²) A_α(t, τ, τ_A)    (24)

We observe that τ is the timescale on which the particle starts to experience the effects of the potential. Hence we expect that for t ≪ τ the particle moves as a free particle [36], whereas for t ≫ τ it reaches its stationary-state MSD. Therefore we predict that when τ_A ≪ τ the MSD will show the two regimes (σ²(t) ∼ t^{2α} followed by σ²(t) ∼ t^{2α−1}) of the free particle before the MSD starts to saturate. This is indeed the behaviour found if we plot (24) for the case τ_A/τ = 10^{−5} (see Fig. 5). In the reverse case, which is the biophysically relevant one, τ ≪ τ_A, there will be only the superdiffusive regime, after which the MSD reaches its value at stationarity (see again Fig. 5).

We have investigated the possible implication of the dynamics of the MSD for the motion of kinesin-1. For this we show in Fig. 6 the MSD for a particle in a viscous environment (α = 1) without active forces and in a viscoelastic environment (α = 3/4) in the presence of active forces. We notice that in the latter situation the MSD is always somewhat larger. Given the uncertainty in the various parameters, we can conclude that the time to reach the binding site is of the same order in both situations. This suggests that active forces can be helpful in overcoming the expected slowing down of a molecular motor in a viscoelastic environment, and thus may provide the answer why a motor in a cell moves with almost the same velocity as in an in vitro assay [37, 38].
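The thermal part of (24) relaxes to its plateau 2k_B T/k through a Mittag-Leffler function; for α = 1 it reduces to the familiar 2(k_B T/k)(1 − e^{−t/τ}). A minimal sketch of this term follows (re-implementing the truncated series (12) so the snippet is self-contained; the active term C A_α/k² would require a double quadrature over (18) and is omitted here):

```python
import math

def ml(z, alpha, beta=1.0, terms=200):
    # truncated series (12) for E_{α,β}(z); adequate for moderate |z|
    nmax = min(terms, int((170.0 - beta) / alpha))
    return sum(z ** i / math.gamma(alpha * i + beta) for i in range(nmax + 1))

def msd_harmonic_thermal(t, alpha, k, kBT, eta_alpha):
    # thermal part of eq. (24): (2 k_B T / k) [1 - E_{α,1}(-(t/τ)^α)], τ = (η_α/k)^{1/α}
    tau = (eta_alpha / k) ** (1.0 / alpha)
    return 2.0 * kBT / k * (1.0 - ml(-((t / tau) ** alpha), alpha))
```

For α = 1 this reproduces exponential relaxation toward the equilibrium plateau; for α < 1 the approach to the plateau is much slower.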
It requires further research to investigate whether the effects found here persist for more realistic models of the head linker, and to quantify their impact on physical properties of the motor such as its velocity and processivity.

V. PARTICLE IN A DOUBLE WELL

Understanding how macromolecules fold into three-dimensional structures is one of the big open problems of biological physics. Folding reactions are often described in terms of a reaction coordinate x which moves stochastically in a free energy landscape V(x), i.e. whose time evolution is given by the Langevin equation (1) [39]. The reaction coordinate is a slow variable, like the end-to-end distance of the polymer, while V(x) is obtained by averaging out the fast variables, like the positions of individual monomers. Since this averaging cannot easily be performed exactly, it is often assumed that V(x) has the shape of a double well whose minima correspond to the unfolded and folded molecule. One of the simplest and best understood folding processes is the zipping of a DNA or RNA hairpin. The folding of nucleic acids is simpler than that of proteins because the four nucleotides are chemically more similar than the 20 amino acids. Of all the possible secondary structures of nucleic acids, the hairpin is the simplest. In recent years, much progress has been made in measuring various properties of the zipping/unzipping of simple DNA and RNA structures, also at the single-molecule level [40, 41]. In these latter experiments, the hairpins are held under tension by chemically connecting them to two beads that are in an optical trap. If the appropriate forces are applied, the free energies of the zipped and unzipped states are almost equal and the molecule continuously flips between the two states. From the measured dynamics, the free energy landscape for hairpins can then be determined. The folding or unfolding rate corresponds to the Kramers' rate for the particle to diffuse from one minimum to the other.
Kramers' theory starts from the Langevin dynamics (1) of a particle in a double-well potential, or its equivalent description using a Fokker-Planck equation [1]. Measured folding times are in the range from milliseconds to minutes, i.e. they can be either slow or fast in comparison with the active forces. As a simple model to investigate how crowding and nonequilibrium affect the zipped/unzipped transition for a hairpin under tension, we use again (5) with the double-well potential

V(x) = ∆U [((x/b − 1)² − 1)]²    (25)

This potential has minima at x = 0 and x = 2b and a maximum of height ∆U at x = b. It gives a reasonable description of the measured free energy landscapes of the hairpins 20TS06/T4 and 20TS10/T4 [42]. As in the case of the harmonic oscillator, we can associate a typical time τ_w with the motion of a particle in the double-well potential. τ_w is still given by (15), where now k is determined by making a harmonic approximation to (25) around a minimum of the potential (i.e. k = 8∆U/b²). Thus we define

τ_w = (η_α b² / 8∆U)^{1/α}    (26)

We also introduce the time τ_e = 2^{1/α} τ_w associated with the top of the potential barrier. Using the results of [42], we estimate that for the DNA hairpins mentioned above, ∆U ≈ 20 kJ/mol (= 33 pN nm) and b ≈ 7.5 nm, from which it follows that τ_w ≈ 8.8 µs. So also in this case the biophysically interesting situation is the one where τ_w ≪ τ_A. Since the Langevin equation (5) cannot be solved exactly for the potential (25), we have developed an approach to numerically integrate Langevin equations with correlated noises and friction with memory. It combines a number of existing algorithms, such as the Hosking algorithm to generate fractional Gaussian noise. A full description of our method is given in the supplemental material. There we also compare the results of a simulation of a particle in a harmonic potential with the exact results of the preceding section.
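The geometry of the double well (25) and the associated timescale (26) are easy to check numerically. The sketch below verifies the minima at x = 0 and 2b, the barrier height ∆U at x = b, and the harmonic stiffness k = 8∆U/b² at a well bottom (by finite differences); the parameter values are illustrative, not the hairpin values.

```python
def V(x, dU, b):
    # double-well potential (25): V(x) = ΔU [((x/b) - 1)² - 1]²
    return dU * (((x / b) - 1.0) ** 2 - 1.0) ** 2

def tau_w(eta_alpha, dU, b, alpha):
    # eq. (26): τ_w = (η_α b² / (8 ΔU))^{1/α}
    return (eta_alpha * b ** 2 / (8.0 * dU)) ** (1.0 / alpha)

dU, b, h = 3.0, 1.0, 1e-4
# finite-difference curvature at the left minimum, should approach k = 8 ΔU / b²
k_bottom = (V(h, dU, b) + V(-h, dU, b) - 2.0 * V(0.0, dU, b)) / h ** 2
```

The same curvature, evaluated at the barrier top x = b, is what enters Kramers' formula below (there |V″(b)| = 4∆U/b², half the well-bottom value, which is the origin of the factor 2^{1/α} in τ_e).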
The fact that both approaches give the same results gives good confidence that our numerical technique works well. Using this numerical method, we have determined the probability distribution P(x,t) that the particle in the double well is at position x at time t. A typical result is shown in Fig. 7a. In this simulation the particles were all started at x = b. This corresponds to a nonequilibrium situation, which allows us to clearly see the time evolution of P(x,t) also in cases where the stationary distribution reached after a long time is the equilibrium one. We expect that for times t ≫ τ_w, P(x,t) reaches a stationary distribution P_ss(x), as can indeed be seen in Fig. 7a. In the absence of active forces that stationary state should, independently of α, correspond to the equilibrium one, given by the Boltzmann distribution

P_eq(x) = (1/Z) exp(−V(x)/k_B T)    (27)

where Z is the partition function. Figure 7b shows that this equilibrium distribution is indeed reached for three different values of α. This is another proof that our numerical approach is able to handle the dynamics in a viscoelastic environment correctly. More interesting, of course, is the steady state that is reached in the presence of active forces. Some results are given in Figs. 7c and 7d. They give P_ss(x) for three different values of α, with τ_A ≪ τ_w and τ_A ≫ τ_w respectively. We see that, as was the case for the harmonic oscillator, the steady-state distribution depends on α if τ_A ≪ τ_w (Fig. 7c). However, in the limit τ_A ≫ τ_w (Fig. 7d) this distribution again becomes independent of α. This strongly suggests that in that limit the distribution can be described in terms of an equilibrium-like distribution, albeit with a different potential V_eff(x). This effective potential can, up to an additive constant, be obtained from P_ss(x) as V_eff(x) = −k_B T ln P_ss(x). The result of applying this transformation to the results of Fig. 7d is given in Fig. 8.
Surprisingly, for the parameter values used (C = 10³, τ_A/τ_e = 10³), the effective potential can again be well fitted by the quartic potential (25), where however the distance between the two minima, b′, has increased and the height of the potential barrier, ∆U′, has decreased. We estimate b′/b = 1.27 and ∆U′/∆U = 0.72. It would be interesting to develop an analytical approach that allows one to determine V_eff(x) from V(x), as was recently done for other problems in the field of active matter [43].

We have also calculated the survival probability Q(t) for a particle in one of the potential minima. We start with a particle in the left well (its position is drawn from the equilibrium distribution in a harmonic approximation of the left well) and determine the time it needs to reach the bottom of the right well, which acts as an absorbing boundary. From a large number of such simulations we can determine the survival probability Q(t), i.e. the probability that the particle is still in the left well at time t. In the ordinary Kramers' problem, Q(t) decays exponentially for large t. The decay rate λ_d is then given by Kramers' famous formula [1]

λ_d = [V″(0) |V″(b)|]^{1/2} / (2πγ) · e^{−∆U/k_B T}    (28)

In Fig. 9 we show some results of simulations of the survival probability in the regime τ_A ≫ τ_e. One might be tempted to use Kramers' formula (28) with the parameters of the effective potential, but this turns out not to work. As the inset of Fig. 9 shows, the survival probability depends on α even though the stationary-state distribution does not. The main part of the graph shows Q(t) at fixed α = 0.8 for different values of the barrier height ∆U. We observe a strange crossover phenomenon in which initially the particles escape more rapidly for small barriers (as would be the case in equilibrium), but for larger times this trend is reversed. For these parameter values, the decay of Q(t) is exponential for large enough times, so that a unique transition rate can be defined.
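For the quartic well (25), the curvatures entering Kramers' formula (28) are V″(0) = 8∆U/b² at the well bottom and |V″(b)| = 4∆U/b² at the barrier top, so λ_d has a closed form. The sketch below (with illustrative parameters) simply encodes (28); as the text notes, plugging in the effective-potential parameters does not reproduce the simulated rates in the active case.

```python
import math

def kramers_rate(dU, b, gamma_fric, kBT):
    # eq. (28) with the quartic-well curvatures V''(0) = 8ΔU/b², |V''(b)| = 4ΔU/b²
    k0 = 8.0 * dU / b ** 2
    kb = 4.0 * dU / b ** 2
    return math.sqrt(k0 * kb) / (2.0 * math.pi * gamma_fric) * math.exp(-dU / kBT)
```

The dominant dependence is the Arrhenius factor e^{−∆U/k_B T}: halving the barrier height (as the active forces effectively do below) raises the rate by orders of magnitude.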
We have also performed similar simulations for the parameter values of the DNA hairpins mentioned above. Furthermore, we take α = 0.75 (since we are in the regime τ_A ≫ τ_w, we only considered one value of α), C = 10 pN² and τ_A = 3 s. As can be seen in Fig. 10, the effective potential still has two minima. For values of C that are not too big, the effective potential can again be fitted by (25). We notice that the active fluctuations have greatly lowered the effective barrier between the two minima (∆U′/∆U = 0.23) and at the same time have slightly increased the distance between them (b′/b = 1.03). We can therefore expect that the active forces will considerably lower the zipping/unzipping time of a DNA hairpin, since the escape time from one of the minima of the effective potential will be considerably lower. In order to quantify this result, we have numerically determined the survival probability in the potential well (25), both for the viscous passive case and for the viscoelastic active one. Results are shown in Fig. 11. We see that within the simulated time window, a particle in a viscous and passive environment (upper curve) is still almost surely in the left well, whereas a particle in a viscoelastic and active environment has left that well with probability ≈ 0.42. These results indicate that the zipping/unzipping time can be largely reduced in an active environment. Moreover, within the times simulated, the survival probability does not decay exponentially, a phenomenon that was also found in other studies of escape times in viscoelastic, but non-active, environments [44, 45].

VI. DISCUSSION

In this paper we have introduced a (generalized) Langevin equation for a particle in a viscoelastic and active environment. The latter gives rise to random forces on the particle. These forces are assumed to be centered Gaussian random variables with exponentially decaying autocorrelation. Their effect is to put the particle out of equilibrium.
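The active noise just summarized — centered Gaussian with ⟨ξ_A(t)ξ_A(t′)⟩ = C e^{−|t−t′|/τ_A} — is exactly an Ornstein-Uhlenbeck process, and can be sampled exactly on a time grid with the standard update below. This is a generic sketch of that one ingredient; it is not the Hosking-based scheme of the supplemental material, which additionally handles the fractional thermal noise and the memory friction.

```python
import math
import random

def ou_noise(n, dt, C, tauA, seed=1):
    # exact OU update: ξ_{i+1} = ρ ξ_i + sqrt(C (1 - ρ²)) N(0,1), with ρ = e^{-dt/τ_A}
    rng = random.Random(seed)
    rho = math.exp(-dt / tauA)
    xi = [rng.gauss(0.0, math.sqrt(C))]            # start in the stationary state
    for _ in range(n - 1):
        xi.append(rho * xi[-1] + math.sqrt(C * (1.0 - rho ** 2)) * rng.gauss(0.0, 1.0))
    return xi
```

A long trajectory generated this way has stationary variance close to C and lag-one correlation close to e^{−dt/τ_A}, which is a convenient self-test for any integrator built on top of it.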
We have argued that our model can be used as a first step towards understanding various processes occurring in artificial cytoskeletons and in real cells. It is interesting to remark here that noise with the correlation (4) has recently also been introduced in the study of active particles [43, 46-48]. These types of active particles have been named Ornstein-Uhlenbeck active particles (OUAP). Within this interpretation, Eq. (5) can also be used to study the motion of an OUAP in an external potential and in a viscoelastic environment. For the motion of free particles, we have shown that the mean squared displacement shows a regime of superdiffusive motion (provided α > 1/2) followed by a subdiffusive one, as observed in several recent experiments. We have also shown that the measured effective exponents depend not only on the rheological properties of the environment but also on the characteristics of the active forces. On the one hand, this is a warning that care has to be taken in determining the "bare" value of α from experiments. On the other hand, it shows that both α and the properties of the active forces (their strength C and persistence time τ_A) can be determined from measured data. For a particle in an harmonic potential, we have calculated the time evolution of both the mean squared displacement and the variance of its position. We have shown that these quantities evolve to nonequilibrium steady-state values which depend on α. However, in the biophysically relevant regime where τ_A is large in comparison with the typical relaxation time in the harmonic well, this dependence disappears. We have applied our results to the motion of the tethered head of kinesin-1. Here we have found that active forces enlarge the probability that the head is near its binding site. Moreover, the active forces help in preventing the expected slowdown of the motor in a crowded and viscoelastic environment.
The latter conclusions need to be verified in more realistic models of the neck-linker potential, for example by replacing the harmonic force on the particle by that of a wormlike chain. This can easily be done using our numerical approach to solving (5). Of course, the stepping of the tethered head is only one step in the sequence of kinetic steps that describe the motion of kinesin [49], and it therefore remains to be investigated how the influence of active forces on that step can modify the speed or processivity of kinesin. Finally, we have investigated the motion of a particle in a double-well potential. Using a numerical approach, we determined the probability distribution P(x,t) of the position of the particle and the survival probability Q(t) of a particle starting in the left well. We have shown that, as was the case for the harmonic oscillator, P(x,t) evolves towards a nonequilibrium steady state that depends on α, but in the limit of large persistence times of the active forces that dependence disappears. Nevertheless, the survival probability still depends on this parameter. For different α we thus have the same probability that the particle is in one of the wells, while the fluxes between the wells are different and increase with decreasing values of α. We have applied this model to the zipping/unzipping of a DNA hairpin held under tension in an optical trap. We have found that the active forces effectively lower the potential barrier between the two states, which leads to an escape rate that is higher than that in equilibrium.
Clearly, the behaviour of a particle in a double well and in an active viscoelastic environment is very rich and deserves further investigation. Also, our observation that zipping/unzipping times can be decreased considerably needs further research. The literature on Kramers' theory is vast [50], and generalisations of Kramers' theory to viscoelastic environments have been discussed by various authors, without reaching a clear consensus [45]. We leave it to further work to develop a more complete theory of escape rates in environments that are both viscoelastic and out of equilibrium. Also, simulations of zipping under tension of simple polymer models would be welcome, in order to see whether the results found here in a description based on a reaction coordinate remain valid when all degrees of freedom of the polymer are taken into account.

FIG. 1. Mean square displacement σ²(t) of a particle in an active viscoelastic environment with α = 3/4. The dotted line presents the behaviour in the absence of active forces, while the various full lines correspond to C = 2^i with i = 2, 4, 6, 8, 10, 12, 14 (bottom to top). Time is measured in units of the persistence time τ_A. The insets show the effective exponents for t < τ_A (upper left) and t > τ_A (lower right) as a function of the strength of the active forces C.

FIG. 2. Probability distribution P(x,t) for a particle in a harmonic potential (k = 3) after active forces with C = 10³ and τ_A = 1 are turned on at t = 0. At t = 0 the distribution is that of thermal equilibrium with k_B T = 1. The figure shows the broadening of the distribution as a function of time (τ = 10^{−5}) in a medium with α = 3/4.

FIG. 3. The function f_α(τ/τ_A) (see text) for three values of α.

FIG. 4. Probability p that the motor head is in a region of 1 nm around the binding site at 4.1 nm, as a function of the strength √C of the active forces. This figure is for the regime τ/τ_A ≪ 1, where p depends only on C and not on τ_A or α.
For intermediate values of τ_A/τ the crossover to the stationary value occurs either in the superdiffusive or the subdiffusive regime.

FIG. 5. Mean square distance travelled by a particle in a harmonic potential (k = 0.1, k_B T = 1) in a viscoelastic (α = 3/4) and active environment (C = 10^5) for three different values of τ_A/τ.

FIG. 6. Mean square distance travelled by a particle in a harmonic potential (k = 1 pN/nm, k_B T = 4.14 pN nm) in a viscous and non-active environment, and in a viscoelastic active environment (α = 3/4, √C = 3.5 pN, τ_A = 3 s and τ = 50 µs).

FIG. 7. In all panels, the black line represents the Boltzmann distribution (27) and the simulated data are presented with coloured dots. Panel (a) shows the probability distribution P(x,t) for a particle in the double well surrounded by a viscoelastic bath at various times: t/τ_e = 10^{−2} (orange), 10^{−1} (blue), 1 (pink), 10 (purple) and 10² (green). The initial condition is P(x,0) = δ(x − b), while α = 0.8, b = 1, ∆U = 3 and k_B T = γ = 1. Panel (b) gives the equilibrium distribution P_eq(x) for different values of α at t/τ_e = 10², with α = 0.4 (red), 0.6 (blue) and 0.8 (green). The other parameters are the same as in panel (a). Panels (c) and (d) show the stationary distribution P_ss(x) at t/τ_e = 10² for a particle in a double well (∆U = 8, b = 1) surrounded by a viscoelastic bath (α = 0.4 (red), 0.6 (blue), 0.8 (green), k_B T = γ = 1) in the presence of active forces with C = 10³, with short persistence time (τ_A/τ_e = 10^{−3}) in panel (c) and large persistence time (τ_A/τ_e = 10³) in panel (d).

FIG. 8. The effective potential V_eff(x) (see text) for the same data as shown in Fig. 7d. The dashed line shows the original potential V(x) with b = 1 and ∆U = 8. The full line is a fit through the simulation data with the potential (25) but with modified parameters b′ = 1.27 and ∆U′ = 5.74.

FIG. 9. Survival probability Q(t) in the double-well potential (25) for b = 1 and various values of ∆U.
The particle moves in a viscoelastic medium with α = 0.8 and is subject to active forces with C = 10³, τ_A/τ_e = 10³. The inset shows Q(t) for ∆U = 8 in media with α = 0.8, 0.6 and 0.4 (top to bottom). The parameters of the active forces are the same as in the main figure.

FIG. 10. Probability distribution P_ss(x) (main figure) and effective potential V_eff(x) (inset) for the experimentally determined potential of DNA hairpins [42]. The full line in the main figure shows the equilibrium distribution P_eq(x) associated with this potential (with b = 7.5 nm and ∆U = 33 pN nm; see also the dashed line in the inset). The full line in the inset is a fit with the quartic potential (25) (with b′ = 7.72 nm and ∆U′ = 7.6 pN nm) through the simulation data.

FIG. 11. Survival probability in the double-well potential (25) for the experimentally determined potential of DNA hairpins [42], in a viscous medium in the absence of active forces (blue curve) and in a viscoelastic environment (α = 3/4) in the presence of active forces with C = 10 pN², τ_A = 3 s (orange curve). The black line through the upper curve is an exponential function decaying with the Kramers' rate (28).

Acknowledgement

The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government - department EWI.

[1] N. Van Kampen, Stochastic Processes in Physics and Chemistry, Elsevier, Amsterdam, 2007.
[2] R. Phillips, J. Kondev, J. Theriot and H. G. Garcia, Physical Biology of the Cell, Garland Science, London, 2013.
[3] K. Luby-Phelps, Int. Rev. Cytol., 2000, 192, 189.
[4] K. Luby-Phelps, Mol. Biol. Cell, 2013, 24, 2593.
[5] F. Hofling and T. Franosch, Rep. Prog. Phys., 2013, 76, 046602.
[6] M. Weiss, Phys. Rev. E, 2013, 88, 010101(R).
[7] A. Caspi, R. Granek and M. Elbaum, Phys. Rev. Lett., 2000, 85, 5655.
[8] N. Gal and D. Weihs, Cell Biochem. Biophys., 2012, 63, 199.
[9] D. Goldstein, T. Elhanan, M. Aronovitch and D. Weihs, Soft Matter, 2013, 9, 7167.
[10] J. F. Reverey, J.-H. Jeon, H. Bao, M. Leippe, R. Metzler and C. Selhuber-Unkel, Scientific Reports, 2015, 5, 11690.
[11] M. Soares e Silva, B. Stuhrmann, T. Betz and G. H. Koenderink, New J. Phys., 2014, 16, 075010.
[12] B. Stuhrmann, M. Soares e Silva, M. Depken, F. C. MacKintosh and G. H. Koenderink, Phys. Rev. E, 2012, 86, 020901(R).
[13] T. Toyota, D. A. Head, C. F. Schmidt and D. Mizuno, Soft Matter, 2011, 7, 3243.
[14] A. Sonn-Segev, A. Bernheim-Grosswasser and Y. Roichman, arXiv:1609.04163.
[15] C. F. Brangwynne, G. H. Koenderink, F. C. MacKintosh and D. A. Weitz, Phys. Rev. Lett., 2008, 100, 118104.
[16] C. P. Broedersz and F. C. MacKintosh, Rev. Mod. Phys., 2014, 86, 995.
[17] S. C. Weber, A. J. Spakowitz and J. A. Theriot, Proc. Natl. Acad. Sci. U. S. A., 2012, 109, 7338.
[18] A. Javer, Z. Long, E. Nugent, M. Grisi, K. Siriwatwetchakul, K. D. Dorfman, P. Cicuta and M. Cosentino Lagomarsino, Nat. Comm., 2013, 4, 3003.
[19] A. Javer, N. J. Kuwada, Z. Long, V. G. Benza, K. D. Dorfman, P. A. Wiggins, P. Cicuta and M. Cosentino Lagomarsino, Nat. Comm., 2014, 5, 3854.
[20] A. Zidovska, D. A. Weitz and T. J. Mitchinson, Proc. Natl. Acad. Sci. U. S. A., 2013, 110, 15555.
[21] I. Bronshtein et al., Nat. Comm., 2015, 6, 8044.
[22] H. Vandebroek and C. Vanderzande, Phys. Rev. E, 2015, 92, 060601(R).
[23] P. E. Rouse, J. Chem. Phys., 1953, 21, 1272.
[24] M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Oxford University Press, Oxford, 1986.
[25] R. Zwanzig, Nonequilibrium Statistical Mechanics, Oxford University Press, Oxford, 2001.
[26] P. Oswald, Rheophysics, Cambridge University Press, Cambridge, 2014.
[27] B. B. Mandelbrot and J. W. van Ness, SIAM Rev., 1968, 10, 422.
[28] F. C. MacKintosh, Phys. Rev. Lett., 2008, 100, 018104.
[29] A. J. Levine, J. Phys. Chem. B, 2009, 113, 3820.
[30] N. Fakhri, A. D. Wessel, C. Willms, M. Pasquali, D. R. Klopfenstein, F. C. MacKintosh and C. F. Schmidt, Science, 2014, 344, 1031.
[31] E. Ben-Isaac, É. Fodor, P. Visco, F. van Wijland and N. S. Gov, Phys. Rev. E, 2015, 92, 012716.
[32] É. Fodor, M. Guo, N. S. Gov, P. Visco, D. A. Weitz and F. van Wijland, Europhys. Lett., 2015, 110, 48005.
[33] W. O. Hancock, Biophys. J., 2016, 110, 1216.
[34] M. L. Kutys, PLoS Comput. Biol., 2010, 6, 1000980.
[35] H. J. Haubold, A. M. Mathai and R. K. Saxena, J. Appl. Math., 2011, 2011, 298628.
[36] The MSD of the free particle can be found by taking the limit k → 0, i.e. τ → ∞, in (24). Using the series expansion of the Mittag-Leffler function (12) and performing one of the integrals appearing in the definition of A_α(t, τ, τ_A) then leads to (7).
[37] D. Cai, K. J. Verhey and E. Meyhöfer, Biophys. J., 2007, 92, 4137.
[38] D. B. Hill, M. J. Plaza, K. Bonin and G. Holzwarth, Eur. Biophys. J., 2004, 33, 623.
[39] M. Oliveberg and P. G. Wolynes, Q. Rev. Biophys., 2005, 38, 245.
[40] A. Borgia, P. M. Williams and J. Clarke, Annu. Rev. Biochem., 2008, 77, 101.
[41] M. T. Woodside, C. Garcia-Garcia and S. M. Block, Curr. Opin. Chem. Biol., 2008, 12, 640.
[42] K. Neupane, D. B. Ritchie, H. Yu, D. A. N. Foster, F. Wang and M. T. Woodside, Phys. Rev. Lett., 2012, 109, 068102.
[43] E. Fodor, C. Nardini, M. E. Cates, J. Tailleur, P. Visco and F. van Wijland, Phys. Rev. Lett., 2016, 117, 038103.
[44] I. Goychuk and P. Hänggi, Phys. Rev. Lett., 2007, 99, 200601.
[45] I. Goychuk, Phys. Rev. E, 2009, 80, 046125.
[46] T. F. F. Farage, P. Krinninger and J. M. Brader, Phys. Rev. E, 42310.
F. Farage, P. Krinninger and J. M. Brader, Phys. Rev. E, 2015, 91, 042310. . N Koumakis, C Maggi, R Di Leonardo, Soft Matter. 105695N. Koumakis, C. Maggi and R. Di Leonardo, Soft Matter, 2014, 10, 5695. . G Szamel, E Flenner, L Berthier, Phys. Rev. E. 62304G. Szamel, E. Flenner and L. Berthier, Phys. Rev. E, 2015, 91, 062304. . B E Clancy, W M Behnke-Parks, J O L Andreasson, S S Rosenfeld, S M Block, Nat. Struct. Mol. Biol. B.E. Clancy, W.M. Behnke-Parks, J.O.L. Andreasson, S.S. Rosenfeld and S.M. Block, Nat. Struct. Mol. Biol., 2011, 18, 1020. . P Hänggi, P Talkner, M Borkovec, Rev. Mod. Phys. 251P. Hänggi, P. Talkner and M. Borkovec, Rev. Mod. Phys., 1990, 62, 251.
[]
[ "Goldstone bosons and the Englert-Brout-Higgs mechanism in non-Hermitian theories", "Goldstone bosons and the Englert-Brout-Higgs mechanism in non-Hermitian theories" ]
[ "Philip D Mannheim [email protected] \nDepartment of Physics\nUniversity of Connecticut\n06269StorrsCTUSA\n" ]
[ "Department of Physics\nUniversity of Connecticut\n06269StorrsCTUSA" ]
[]
In recent work Alexandre, Ellis, Millington and Seynaeve have extended the Goldstone theorem to non-Hermitian Hamiltonians that possess a discrete antilinear symmetry such as PT and possess a continuous global symmetry. They restricted their discussion to those realizations of antilinear symmetry in which all the energy eigenvalues of the Hamiltonian are real. Here we extend the discussion to the two other realizations possible with antilinear symmetry, namely energies in complex conjugate pairs or Jordan-block Hamiltonians that are not diagonalizable at all. In particular, we show that under certain circumstances it is possible for the Goldstone boson mode itself to be one of the zero-norm states that are characteristic of Jordan-block Hamiltonians. While we discuss the same model as Alexandre, Ellis, Millington and Seynaeve our treatment is quite different, though their main conclusion that one can have Goldstone bosons in the non-Hermitian case remains intact. We extend our analysis to a continuous local symmetry and find that the gauge boson acquires a non-zero mass by the Englert-Brout-Higgs mechanism in all realizations of the antilinear symmetry, except the one where the Goldstone boson itself has zero norm, in which case, and despite the fact that the continuous local symmetry has been spontaneously broken, the gauge boson remains massless.
10.1103/physrevd.99.045006
null
119,268,016
1808.00437
ee81ec11e95dd784a50aa84deafbf6fec60a5e4e
Goldstone bosons and the Englert-Brout-Higgs mechanism in non-Hermitian theories

Philip D. Mannheim ([email protected])
Department of Physics, University of Connecticut, Storrs, CT 06269, USA
arXiv:1808.00437v2 [hep-th], 10 Apr 2019 (Dated: January 17, 2019)

I. INTRODUCTION

Following work by Bender and collaborators [1][2][3][4][5] it has become apparent that quantum mechanics is much richer than conventional Hermitian quantum mechanics. However, if one wishes to maintain probability conservation, one needs to be able to define an inner product that is time independent. The reason that one has any freedom at all in doing this is that the Schrödinger equation i∂_t|ψ⟩ = H|ψ⟩ only involves the ket state and leaves the bra state unspecified. While the appropriate bra state is the Hermitian conjugate of the ket when the Hamiltonian is Hermitian, for non-Hermitian Hamiltonians a more general bra state is needed. However, one cannot define a time-independent inner product for an arbitrary non-Hermitian Hamiltonian. Rather, it has been found ([6, 7] and references therein) that the most general Hamiltonian for which one can construct a time-independent inner product is one that has an antilinear symmetry, and in such a case the required bra state is the conjugate of the ket state with respect to that particular antilinear symmetry. When a Hamiltonian has an antilinear symmetry its energy eigenspectrum can be realized in three possible ways: all eigenvalues real and eigenspectrum complete; some or all of the eigenvalues in complex conjugate pairs with the eigenspectrum still being complete; or eigenspectrum incomplete and Hamiltonian of non-diagonalizable, and thus necessarily of non-Hermitian, Jordan-block form. Of these three possible realizations only the first can also be achieved with a Hermitian Hamiltonian, and while Hermiticity implies the reality of energy eigenvalues, there is no theorem that would require a non-Hermitian Hamiltonian to have complex eigenvalues, with Hermiticity only being sufficient for the reality of eigenvalues but not necessary.¹ The necessary condition for the reality of energy eigenvalues is that the Hamiltonian have an antilinear symmetry [7][8][9][10], while the necessary and sufficient condition is that in addition all energy eigenstates are eigenstates of the antilinear operator [3].

Interest in non-Hermitian Hamiltonians with an antilinear symmetry was first triggered by the work of Bender and collaborators [1, 2], who found that the eigenvalues of the Hamiltonian H = p² + ix³ are all real. This surprising reality was traced to the fact that the Hamiltonian possesses an antilinear PT symmetry (P is parity and T is time reversal), under which PpP = −p, PxP = −x, TpT = −p, TxT = x, TiT = −i. In general, for any Hamiltonian H with an antilinear symmetry A (i.e. with AH = HA), when acting on H|ψ⟩ = E|ψ⟩ one has AH|ψ⟩ = AHA⁻¹A|ψ⟩ = HA|ψ⟩ = E*A|ψ⟩. Thus for every eigenstate |ψ⟩ of H with energy E there is another eigenstate A|ψ⟩ of H with energy E*. Thus, as originally noted by Wigner in his study of time reversal invariance, energies can be real or appear in complex conjugate pairs with complex conjugate eigenfunctions. It is often the case that one can move between these two realizations by a change in the parameters in H. There will thus be a transition point (known as an exceptional point) at which the switch over occurs. However, at this transition point the two complex conjugate wave functions (|ψ⟩ and A|ψ⟩) have to collapse into a single common wave function, as there are no complex conjugate pairs on the real energy side. Since this collapse to a single common wave function reduces the number of energy eigenfunctions, at the transition point the eigenspectrum of the Hamiltonian becomes incomplete, with the Hamiltonian then being of non-diagonalizable Jordan-block form, the third possible realization of antilinear symmetry.

While the above analysis would in principle apply to any antilinear symmetry, because of its H = p² + ix³ progenitor the antilinear symmetry program is conventionally referred to as the PT-symmetry program. However, PT symmetry can actually be selected out for a different reason, namely it has a connection to spacetime. Specifically, it was noted in [11] and emphasized in [3] that for the spacetime coordinates the linear part of a PT transformation is the same as a particular complex Lorentz transformation, while in [7, 12] it was noted that for spinors the linear part of a CPT transformation is the same as that very same particular complex Lorentz transformation, where C denotes charge conjugation.² Then in [7, 12] it was shown that if one imposes only two requirements, namely the time independence of inner products and invariance under the complex Lorentz group, it follows that the Hamiltonian must be CPT invariant, with CPT symmetry itself being antilinear. Since this analysis involves no Hermiticity requirement, the CPT theorem is thus extended to the non-Hermitian case (and thus, through the complex energy realization of antilinear symmetry, to decay processes that are forbidden by Hermiticity). Since charge conjugation plays no role in non-relativistic physics, where one is below the threshold for particle production, CPT then defaults to PT, to thus put the PT-symmetry program on a quite secure theoretical foundation.

As with the CPT theorem, one can ask what happens to other familiar results of quantum field theory when one relaxes the Hermiticity requirement. This then was the brief of Alexandre, Ellis, Millington and Seynaeve [13], who found that the Goldstone theorem can also be decoupled from Hermiticity, and can hold in the non-Hermitian but antilinearly symmetric case.³ Alexandre, Ellis, Millington and Seynaeve restricted their discussion to those realizations of antilinear symmetry in which all the energy eigenvalues of the Hamiltonian are real. Here we extend the discussion to the two other possible PT-symmetry program realizations, namely energies in complex conjugate pairs or Jordan-block Hamiltonians that are not diagonalizable at all. In particular, we show that it is possible for the Goldstone boson mode itself to be one of the zero-norm states that are characteristic of Jordan-block Hamiltonians. While we discuss the same model as Alexandre, Ellis, Millington and Seynaeve, our treatment is quite different, though their main conclusion that one can have Goldstone bosons in the non-Hermitian case remains intact. In particular, in their paper Alexandre, Ellis, Millington and Seynaeve presented a variational procedure for the action in which the surface term played an explicit role, to thus suggest that one has to use such a procedure in order to establish the Goldstone theorem in the non-Hermitian case. However, we show that one does not need to do this, as we are able to obtain a Goldstone boson using a completely standard variational procedure. Moreover, since we do use a standard variational procedure, we can readily extend our analysis to a continuous local symmetry by introducing a gauge boson. We show that the gauge boson acquires a non-zero mass by the Englert-Brout-Higgs mechanism in all realizations of the antilinear symmetry, except the one where the Goldstone boson itself has zero norm, in which case, and despite the spontaneous breakdown of the continuous local symmetry, the gauge boson remains massless.

The present paper is organized as follows. In Sec. II we present the complex scalar field model discussed in [13], and using a standard variational procedure for the action find its spontaneously broken tree approximation minimum and determine the eigenvalues of the associated mass matrix. In Sec. III we determine the associated left- and right-eigenvectors and construct the left-right V operator norm that plays a central role in antilinear theories. In Sec. IV we compare our treatment with that of the authors of [13], who used a non-standard variational procedure. This leads us to a Hamiltonian that looks Hermitian but is not, and in Sec. V we discuss how this is possible. In this section we also discuss the connection between antilinear symmetry and Hermiticity within the context of the CPT theorem as developed in [7]. In Sec. VI we extend the discussion to the Englert-Brout-Higgs mechanism, and in Sec. VII we provide a summary of our results. Finally, in an appendix we construct the left-right quantum theory matrix elements that would produce the c-number tree approximation classical field and the effective potential that is minimized in Sec. II, and discuss how Ward identities are realized in the non-Hermitian case.

² The complex Lorentz transformation Λ⁰₃(iπ)Λ⁰₂(iπ)Λ⁰₁(iπ) implements x_µ → −x_µ on coordinates and ψ₁(x) → γ₅ψ₁(−x) on a Majorana spinor, just as the linear part of a CPT transformation does.
³ Since historically the CPT theorem was found during the effort to establish the spin and statistics theorem, it would be of interest to see how the spin and statistics theorem itself might fare in the non-Hermitian but CPT symmetric case.

II. SPONTANEOUSLY BROKEN NON-HERMITIAN THEORY WITH A CONTINUOUS GLOBAL SYMMETRY

The model introduced in [13] consists of two complex (i.e. charged) scalar fields φ₁(x) and φ₂(x) with action

I(φ₁, φ₂, φ₁*, φ₂*) = ∫ d⁴x [ ∂_µφ₁* ∂^µφ₁ + ∂_µφ₂* ∂^µφ₂ + m₁²φ₁*φ₁ − m₂²φ₂*φ₂ − µ²(φ₁*φ₂ − φ₂*φ₁) − (g/4)(φ₁*φ₁)² ],   (1)

where the star symbol denotes complex conjugation, and thus Hermitian conjugation since neither of the two scalar fields possesses any internal symmetry index. Since the action is not invariant under complex conjugation, it is not Hermitian.
It is however invariant under the following CPT transformation

φ₁(x^µ) → φ₁*(−x^µ),   φ₂(x^µ) → −φ₂*(−x^µ),   φ₁*(x^µ) → φ₁(−x^µ),   φ₂*(x^µ) → −φ₂(−x^µ),   (2)

and thus has an antilinear symmetry.⁴ Since one can construct the energy-momentum tensor T^µν by the variation T^µν = 2(−g)^{−1/2} δI(φ₁, φ₂, φ₁*, φ₂*)/δg_µν with respect to the metric g_µν of the covariantized form of the action (momentarily replace ordinary derivatives by covariant ones and replace the measure by d⁴x(−g)^{1/2}), it follows from general coordinate invariance that a so-constructed energy-momentum tensor is automatically covariantly conserved in solutions to the equations of motion that follow from stationarity of the same action. Then, since one can set H = ∫ d³x T⁰⁰, it follows that the associated Hamiltonian is time independent. Moreover, since the metric is CPT even, and since the action is CPT invariant, it follows that the Hamiltonian is CPT invariant too. The Hamiltonian associated with (1) thus has an antilinear CPT symmetry.⁵ In regard to (2), we note here that for φ₂ the transformation is not the conventional CPT transformation of scalar fields that is used in quantum field theory (one in which all scalar field CPT phases are positive [14]) but a similarity transformation of it. We will need to return to this point below, but for the moment we just use (2) as is.

As written, the action given in (1) is invariant under the electric charge transformation

φ₁ → e^{iα}φ₁,   φ₁* → e^{−iα}φ₁*,   φ₂ → e^{iα}φ₂,   φ₂* → e^{−iα}φ₂*,   (3)

to thus possess a standard Noether current

j^µ = i(φ₁*∂^µφ₁ − φ₁∂^µφ₁*) + i(φ₂*∂^µφ₂ − φ₂∂^µφ₂*)   (4)

that is conserved in solutions to the equations of motion (36) and (37) associated with (1).
We note here that the authors of [13] used a non-standard Euler-Lagrange variational procedure (one which involves a non-trivial surface term) to obtain a non-standard set of equations of motion and a non-standard current (one that is not the Noether current of an invariance of the action), a current that is nonetheless conserved in solutions to this non-standard set of equations of motion; we discuss this issue in Sec. IV. However, we shall use a standard variational procedure and a standard Noether current approach. With the potential of the field φ₁ being of the form of a double-well potential, in its non-trivial minimum the scalar field φ₁ would acquire a non-trivial vacuum expectation value. This would then break the electric charge symmetry spontaneously, and one would thus wonder whether there might still be a massless Goldstone boson despite the lack of Hermiticity. As shown by the authors of [13] for the current they use, and by us here for the above j^µ, in both cases a Goldstone boson is indeed present.

To study the dynamics associated with the action given in (1) we have found it convenient to work in the component basis

φ₁ = (1/√2)(χ₁ + iχ₂),   φ₁* = (1/√2)(χ₁ − iχ₂),   φ₂ = (1/√2)(ψ₁ + iψ₂),   φ₂* = (1/√2)(ψ₁ − iψ₂).   (5)

⁴ The study of [7, 12] shows that for relativistic actions such as that given in (1) CPT must be an invariance, a point we elaborate on further below. In their paper the authors of [13] took T to conjugate fields. While T does conjugate wave functions in quantum mechanics, conventionally in quantum field theory T does not conjugate q-number fields (it only conjugates c-numbers). Rather, it is charge conjugation C that conjugates fields. Thus what the authors of [13] refer to as PT is actually CPT, just as required by the analysis of [7, 12]. However, none of the conclusions of [13] are affected by this.
⁵ Just as is familiar from Hermitian quantum field theory, one can also construct the same metric-derived energy-momentum tensor from the translation invariance of the action and the equations of motion of the fields, since nothing in that construction actually requires Hermiticity. The advantage of using the metric approach, which also is not sensitive to Hermiticity, is that it ensures that the Hamiltonian that is obtained has the same transformation properties under CPT symmetry as the starting action.

Choosing the minimum in which (g/4)χ̄₁² = m₁² − µ⁴/m₂², ψ̄₂ = −iµ²χ̄₁/m₂², χ̄₂ = 0, ψ̄₁ = 0, and then expanding around this minimum according to χ₁ = χ̄₁ + χ₁, χ₂ = χ₂, ψ₁ = ψ₁, ψ₂ = ψ̄₂ + ψ₂, yields a first-order term in the equations of motion of the form

\begin{pmatrix} -\Box\chi_1 \\ -\Box\psi_2 \\ -\Box\chi_2 \\ -\Box\psi_1 \end{pmatrix} = \begin{pmatrix} 2m_1^2 - 3\mu^4/m_2^2 & i\mu^2 & 0 & 0 \\ i\mu^2 & m_2^2 & 0 & 0 \\ 0 & 0 & -\mu^4/m_2^2 & -i\mu^2 \\ 0 & 0 & -i\mu^2 & m_2^2 \end{pmatrix} \begin{pmatrix} \chi_1 \\ \psi_2 \\ \chi_2 \\ \psi_1 \end{pmatrix} = M \begin{pmatrix} \chi_1 \\ \psi_2 \\ \chi_2 \\ \psi_1 \end{pmatrix}.   (9)

As we see, with our choice of basis we have already block-diagonalized the mass matrix M. We can readily determine the mass eigenvalues, obtaining

|M − λI| = λ(λ + µ⁴/m₂² − m₂²)[λ² − λ(2m₁² + m₂² − 3µ⁴/m₂²) + 2m₁²m₂² − 2µ⁴].   (10)

The mass eigenvalue solutions to |M − λI| = 0 are thus

λ₀ = 0,   λ₁ = (m₂⁴ − µ⁴)/m₂²,
λ± = (2m₁²m₂² + m₂⁴ − 3µ⁴)/(2m₂²) ± (1/(2m₂²))[(2m₁²m₂² + m₂⁴ − 3µ⁴)² + 8µ⁴m₂⁴ − 8m₁²m₂⁶]^{1/2}
   = (2m₁²m₂² + m₂⁴ − 3µ⁴)/(2m₂²) ± (1/(2m₂²))[(2m₁²m₂² − m₂⁴ − 3µ⁴)² − 4µ⁴m₂⁴]^{1/2}.   (11)

⁶ As is standard, under time reversal χ₁ has even T parity while χ₂ has odd T parity, so that under T χ₁ + iχ₂ has even parity. Under charge conjugation, χ₁ has even C parity while χ₂ has odd C parity. Thus under CPT the P even χ₁ + iχ₂ transforms into χ₁ − iχ₂. Because of the transformations in the φ₂ sector that are given in (2), ψ₁ has to have odd T parity while ψ₂ has to have even T parity. (However, their C parities are standard, with ψ₁ having even C parity while ψ₂ has odd C parity.) We discuss this pattern of T parity assignments further below, where we will make a commutation relation preserving similarity transformation that will effect ψ₁ → −iψ₁, ψ₂ → −iψ₂, to thus change the signs of their T and CPT parities.
⁷ The PT-symmetric p² + ix³ theory is actually CPT symmetric, since p and x are C even and charge conjugation plays no role in non-relativistic systems.

Given a mode with λ₀ = 0 (the determinant in the (χ₂, ψ₁) sector of M being zero), then just as noted in [13], the presence of a massless Goldstone boson is apparent, and the Goldstone theorem is thus seen to hold when a non-Hermitian Hamiltonian has an antilinear symmetry.⁸ If we restrict the sign of the factor in the square root in λ± to be positive (the case considered in [13]), then all mass eigenvalues are real. However, we note that we obtain a mode with λ₀ = 0 regardless of the magnitude of this factor, and thus even obtain a Goldstone boson when the factor in the square root term is negative and mass eigenvalues appear in complex conjugate pairs. Moreover, as we show in Sec. III below, when the factor in the square root term is zero, in the (χ₁, ψ₂) sector the matrix M becomes Jordan block. The Goldstone boson mode is thus present in all three of the eigenvalue realizations that are allowed by antilinearity (viz. antilinear symmetry). Moreover, technically we do not even need to ascertain what the antilinear symmetry might be, since as shown in [7, 10], once we obtain an eigenvalue spectrum of the form that we have obtained in the (χ₁, ψ₂) sector, the mass matrix must admit of an antilinear symmetry. Thus antilinearity implies this particular form for the mass spectrum, and this particular form for the mass spectrum implies antilinearity.
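The spectrum in (11) can be checked numerically. The following sketch builds the mass matrix of (9) for illustrative parameter values of our own choosing (m₁² = 2, m₂² = 1, µ² = 1/2, which place the (χ₁, ψ₂) sector in the real-eigenvalue regime; they are not taken from the paper) and confirms that the eigenvalues are λ₀ = 0, λ₁ = (m₂⁴ − µ⁴)/m₂² and the λ± of (11):

```python
import numpy as np

# Illustrative parameter values (our own choice, not taken from the paper)
m1sq, m2sq, musq = 2.0, 1.0, 0.5   # m1^2, m2^2, mu^2

# Mass matrix M of Eq. (9) in the (chi1, psi2, chi2, psi1) basis
M = np.array([
    [2*m1sq - 3*musq**2/m2sq, 1j*musq, 0,             0       ],
    [1j*musq,                 m2sq,    0,             0       ],
    [0,                       0,       -musq**2/m2sq, -1j*musq],
    [0,                       0,       -1j*musq,      m2sq    ],
])

eigs = np.sort_complex(np.linalg.eigvals(M))

# Closed-form eigenvalues of Eq. (11)
lam0 = 0.0
lam1 = (m2sq**2 - musq**2) / m2sq
a = (2*m1sq*m2sq + m2sq**2 - 3*musq**2) / (2*m2sq)
b = np.sqrt((2*m1sq*m2sq - m2sq**2 - 3*musq**2)**2 - 4*musq**2*m2sq**2) / (2*m2sq)
expected = np.sort_complex(np.array([lam0, lam1, a - b, a + b], dtype=complex))

assert np.allclose(eigs, expected)   # numerical spectrum matches Eq. (11)
assert min(abs(eigs)) < 1e-10        # the massless Goldstone mode
```

With these values the square-root factor in (11) is positive; raising µ² until (2m₁²m₂² − m₂⁴ − 3µ⁴)² < 4µ⁴m₂⁴ pushes λ± into a complex conjugate pair while the λ₀ = 0 mode survives, in line with the discussion above.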
Finally, we note that if in the (χ₂, ψ₁) sector we set µ⁴ = m₂⁴, then not only does λ₁ become zero just like λ₀, but as we show in Sec. III the entire sector becomes Jordan block, with the Goldstone boson eigenfunction itself then having the zero norm that is characteristic of Jordan-block systems.

III. EIGENVECTORS OF THE MASS MATRIX

To discuss the eigenvector spectrum of the mass matrix M, it is convenient to introduce the PT theory V operator. Specifically, it was noted in [7][8][9] that if a time-independent Hamiltonian has an antilinear symmetry there will always exist a time-independent operator V that obeys the so-called pseudo-Hermiticity condition V H = H†V. If V is invertible (this automatically being the case for any finite-dimensional matrix such as the mass matrix M of interest to us here), then H and H† are isospectrally related according to H† = V HV⁻¹, to thus have the same set of eigenvalues. Since such an isospectral relation requires that the eigenvalues of H be real or in complex pairs, pseudo-Hermiticity is equivalent to antilinearity. If H is not Hermitian, one has to introduce separate right- and left-Schrödinger equations in which H acts to the right or to the left. Then from the relation i∂_t|n⟩ = H|n⟩ obeyed by solutions to the right-Schrödinger equation we obtain −i∂_t⟨n| = ⟨n|H†, with ⟨n| then not being a solution to the left-Schrödinger equation since it does not obey −i∂_t⟨n| = ⟨n|H. Consequently, in the non-Hermitian case the standard Dirac norm ⟨n(t)|n(t)⟩ = ⟨n(0)|e^{iH†t}e^{−iHt}|n(0)⟩ is not time independent (i.e. not equal to ⟨n(0)|n(0)⟩), and one cannot use it as an inner product. However, the V norm constructed from V is time independent, since i∂_t⟨n(t)|V|n(t)⟩ = ⟨n(t)|(V H − H†V)|n(t)⟩ = 0. Since we can set

−i∂_t⟨n| = ⟨n|H† = ⟨n|V HV⁻¹,   −i∂_t⟨n|V = ⟨n|V H,   (13)

we see that it is the state ⟨n|V that is a solution to the left-Schrödinger equation and not the bra ⟨n| itself.
Moreover, from (13) we obtain

⟨n(t)|V|n(t)⟩ = ⟨n(0)|V e^{iHt} e^{−iHt}|n(0)⟩ = ⟨n(0)|V|n(0)⟩,   (14)

to thus confirm the time independence of the V norm. Through the V operator then we see that time independence of inner products and antilinear symmetry are equivalent. Given that ⟨L_n| = ⟨n|V is a solution to the left-Schrödinger equation, in the event that it is also a left-eigenvector of H and |R_n⟩ is a right-eigenvector of H, in the antilinear case the completeness relation is given not by Σ_n |n⟩⟨n| = I but by

Σ_n |n⟩⟨n|V = Σ_n |R_n⟩⟨L_n| = I   (15)

instead. As shown in [15], when charge conjugation is separately conserved, the left-right ⟨R_n|V|R_m⟩ V-norm is the same as the overlap of the right-eigenstate |R_n⟩ with its PT conjugate (like PT conjugation, Hermitian conjugation is also antilinear). And more generally, the V-norm is the same as the overlap of a state with its CPT conjugate [7]. In the special case where all the eigenvalues of a Hamiltonian are real and the eigenspectrum is complete, the Hamiltonian must either already obey H = H† or be transformable by a (non-unitary) similarity transformation S into one that does, according to SHS⁻¹ = H′ = H′†. For the primed system one has right-eigenvectors that obey

i∂_t|R′_n⟩ = H′|R′_n⟩,   −i∂_t⟨R′_n| = ⟨R′_n|H′,   (16)

with the eigenstates of H and H′ being related by

|R′_n⟩ = S|R_n⟩,   ⟨R′_n| = ⟨R_n|S†.   (17)

On normalizing the eigenstates of the Hermitian H′ to unity, we obtain

⟨R′_n|R′_m⟩ = ⟨R_n|S†S|R_m⟩ = δ_{m,n}.   (18)
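The contrast between the Dirac norm and the V norm of (14) can be made concrete with a small numerical sketch; the 2×2 non-Hermitian H, its V operator and the initial state below are illustrative choices of our own (H is of the PT-symmetric block form discussed below, in its real-eigenvalue regime):

```python
import numpy as np

# Illustrative 2x2 non-Hermitian H obeying V H = H† V (our own choice of numbers)
H = np.array([[3.5, 1j], [1j, -0.5]])                  # A=2, B=1, C=1.5
V = np.array([[2.0, 1j], [-1j, 2.0]]) / np.sqrt(3.0)
assert np.allclose(V @ H, H.conj().T @ V)              # pseudo-Hermiticity condition

def expm(A):
    """Matrix exponential via eigendecomposition (valid when A is diagonalizable)."""
    w, P = np.linalg.eig(A)
    return P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

psi0 = np.array([1.0, 0.5 + 0.2j])
dirac, vnorm = [], []
for t in (0.0, 1.0, 2.0):
    psi = expm(-1j * t * H) @ psi0
    dirac.append((psi.conj() @ psi).real)              # Dirac norm <n(t)|n(t)>
    vnorm.append(psi.conj() @ V @ psi)                 # V norm <n(t)|V|n(t)>

assert max(dirac) - min(dirac) > 1e-3   # Dirac norm drifts in time
assert np.allclose(vnorm, vnorm[0])     # V norm is time independent
```

The drift of the Dirac norm traces to the non-orthogonality of the eigenvectors of the non-normal H; the V norm is conserved exactly, as (14) requires.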
9 The interpretation of the V norms as probabilities is then secured, with their time independence ensuring that probability is preserved in time. Having now presented the general non-Hermitian formalism, a formalism that holds in both wave mechanics and matrix mechanics [3], and holds in quantum field theory [7], we can apply it to the mass matrix M given in (9). And while this matrix does arise in a quantum field theory, all that matters in the following is that it has a non-Hermitian matrix structure. The matrix M breaks up into two distinct two-dimensional blocks, and we can describe each of them by the generic N = C + A iB iB C − A ,(20) where A, B and C are all real. The matrix N is not Hermitian but does have a P T symmetry if we set P = σ 3 and T = K where K effects complex conjugation. The eigenvalues of N are given by Λ ± = C ± (A 2 − B 2 ) 1/2 ,(21) and they are real if A 2 > B 2 and in a complex conjugate pair if A 2 < B 2 , just as required of a non-Hermitian but P T -symmetric matrix. Additionally, the relevant S and V operators are given by S = 1 2(A 2 − B 2 ) 1/4 (A + B) 1/2 + (A − B) 1/2 i[(A + B) 1/2 − (A − B) 1/2 ] −i[(A + B) 1/2 − (A − B) 1/2 ] (A + B) 1/2 + (A − B) 1/2 , S −1 = 1 2(A 2 − B 2 ) 1/4 (A + B) 1/2 + (A − B) 1/2 −i[(A + B) 1/2 − (A − B) 1/2 ] i[(A + B) 1/2 − (A − B) 1/2 ] (A + B) 1/2 + (A − B) 1/2 , V = 1 (A 2 − B 2 ) 1/2 A iB −iB A , V −1 = 1 (A 2 − B 2 ) 1/2 A −iB iB A ,(22) and they effect SN S −1 = N ′ = C + (A 2 − B 2 ) 1/2 0 0 C − (A 2 − B 2 ) 1/2 , V N V −1 = C + A −iB −iB C − A = N †(23) regardless of whether A 2 − B 2 is positive or negative (if A 2 is less than B 2 , then while not Hermitian SN S −1 is still diagonal). However, as we elaborate on below, we note that if A 2 − B 2 is zero then S and V become undefined. 
Other than at A 2 − B 2 = 0 the matrix N ′ = SN S −1 is diagonal, and with N being given by N = S −1 N ′ S, the right-eigenvectors of N that obey N R ± = Λ ± R ± are given by the columns of S −1 , and the left-eigenvectors of N that obey L ± N = Λ ± L ± are given by the rows of S. Given the right-eigenvectors one can also construct the left-eigenvectors by using the V operator. When A 2 > B 2 the left eigenvectors can be constructed as L ± | = R ± |V , and we obtain R + = 1 2(A 2 − B 2 ) 1/4 (A + B) 1/2 + (A − B) 1/2 i[(A + B) 1/2 − (A − B) 1/2 ] R − = 1 2(A 2 − B 2 ) 1/4 −i[(A + B) 1/2 − (A − B) 1/2 ] (A + B) 1/2 + (A − B) 1/2 L + = 1 2(A 2 − B 2 ) 1/4 ( (A + B) 1/2 + (A − B) 1/2 , i[(A + B) 1/2 − (A − B) 1/2 ] ) L − = 1 2(A 2 − B 2 ) 1/4 ( −i[(A + B) 1/2 − (A − B) 1/2 ], (A + B) 1/2 + (A − B) 1/2 ) ,(24) and these eigenvectors are normalized according to the positive definite L n |R m = R n |V |R m = δ m,n , i.e. according to L ± R ± = 1, L ∓ R ± = 0. In addition N and the identity I can be reconstructed as N = |R + Λ + L + | + |R − Λ − L − |, I = |R + L + | + |R − L − |,(25) to thus be diagonalized in the left-right basis. When A 2 − B 2 is negative, the quantity (A − B) 1/2 is pure imaginary, and since R| is the Hermitian conjugate of |R , in the A 2 < B 2 sector up to a phase we have L ∓ | = ± R ± |V . If we set A 2 − B 2 = −D 2 where D is real, the eigenvalues are Λ ± = C ± iD. In a quantum theory with the mass matrix serving as the Hamiltonian, |R ± would evolve as e −i(C±iD)t = e −iCt±Dt , while L ± | would evolve as R ∓ |V , i.e. as e iCt∓Dt . As had been noted in general in [7] and as found here, the only overlaps that would be non-zero would be ∓ L ± |R ± = R ∓ |V |R ± = ±i, and they would be time independent. Since L ± | = R ± |V , these matrix elements would be transition matrix elements between growing and decaying states. Such transition matrix elements are not required to be positive or to even be real.
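These relations can be checked numerically. The sketch below (illustrative values with A² > B²) builds S and S⁻¹ as given in (22), reads off the left- and right-eigenvectors from the rows of S and the columns of S⁻¹, and confirms the left-right normalization and the spectral reconstruction of N:

```python
# Sketch (illustrative values, A^2 > B^2): columns of S^{-1} are the
# right-eigenvectors and rows of S the left-eigenvectors of N, normalized
# to L_n R_m = delta_nm, with N = |R+> Lam+ <L+| + |R-> Lam- <L-|.
import numpy as np

C, A, B = 2.0, 1.5, 0.5
N = np.array([[C + A, 1j * B], [1j * B, C - A]])
ap, am = np.sqrt(A + B), np.sqrt(A - B)
k = 1.0 / (2.0 * (A**2 - B**2) ** 0.25)
S    = k * np.array([[ap + am,  1j * (ap - am)], [-1j * (ap - am), ap + am]])
Sinv = k * np.array([[ap + am, -1j * (ap - am)], [ 1j * (ap - am), ap + am]])
assert np.allclose(S @ Sinv, np.eye(2))

lam_p = C + np.sqrt(A**2 - B**2)
lam_m = C - np.sqrt(A**2 - B**2)
assert np.allclose(S @ N @ Sinv, np.diag([lam_p, lam_m]))  # N' = S N S^{-1}

Rp, Rm = Sinv[:, 0], Sinv[:, 1]   # right-eigenvectors
Lp, Lm = S[0, :], S[1, :]         # left-eigenvectors
assert np.allclose(N @ Rp, lam_p * Rp) and np.allclose(N @ Rm, lam_m * Rm)
assert np.isclose(Lp @ Rp, 1.0) and np.isclose(Lm @ Rm, 1.0)
assert np.isclose(Lp @ Rm, 0.0) and np.isclose(Lm @ Rp, 0.0)
# left-right spectral reconstruction of N, eq. (25)
assert np.allclose(lam_p * np.outer(Rp, Lp) + lam_m * np.outer(Rm, Lm), N)
```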
While all of these eigenstates and the S and V operators are well-defined as long as A 2 is not equal to B 2 , at A 2 = B 2 they all become singular. Moreover at A 2 = B 2 the vectors R + and R − become identical to each other (i.e. equal up to an irrelevant overall phase), and equally L + and L − become identical too. The matrix N thus loses both a left-eigenvector and a right-eigenvector at A 2 = B 2 to then only have one left-eigenvector and only one right-eigenvector. At A 2 = B 2 the two eigenvalues become equal (Λ + = Λ − = C) and have to share the same left- and right-eigenvectors. The fact that S becomes singular at A 2 = B 2 means that N cannot be diagonalized, with its eigenspectrum being incomplete. N thus becomes a Jordan-block matrix that cannot be diagonalized. 10 Even though all of L ± , R ± become singular at A 2 = B 2 , N still has left- and right-eigenvectors L and R that are given up to an arbitrary normalization by L = ( 1 i ) , R = 1 i , LN = CL, N R = CR,(26) and no matter what that normalization might be, they obey the zero norm condition characteristic of Jordan-block matrices: LR = ( 1 i ) 1 i = 0.(27) Even though the eigenspectrum of N is incomplete, the vector space on which it acts is still complete. One can take the extra states to be L ′ = ( 1 −i ) , R ′ = 1 −i ,(28) with L ′ R ′ = 0, so that R and R ′ span the space on which N acts to the right, while L and L ′ span the space on which N acts to the left. Comparing now with (9), we see that for the (χ 1 ,ψ 2 ) sector we have C = 2m 2 1 m 2 2 + m 4 2 − 3µ 4 2m 2 2 , A = 2m 2 1 m 2 2 − 3µ 4 − m 4 2 2m 2 2 , B = µ 2 ,(29) while for the (χ 2 ,ψ 1 ) sector we have C = m 4 2 − µ 4 2m 2 2 , A = − (µ 4 + m 4 2 ) 2m 2 2 , B = −µ 2 .(30) From (29) and (30) the eigenvalues given in (11) follow.
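The Jordan-block limit can also be checked directly. In this sketch (illustrative values C = 2, A = B = 1) N has the single degenerate eigenvalue C, only one left- and one right-eigenvector, and that eigenvector has zero norm:

```python
# Sketch of the exceptional point A = B: N = [[C+A, iA], [iA, C-A]] has a
# doubly degenerate eigenvalue C, rank(N - C I) = 1 (one eigenvector only),
# and the surviving L = (1, i), R = (1, i)^T obey the zero-norm LR = 0.
import numpy as np

C, A = 2.0, 1.0
N = np.array([[C + A, 1j * A], [1j * A, C - A]])

assert np.allclose(np.linalg.eigvals(N), [C, C])          # degenerate pair
assert np.linalg.matrix_rank(N - C * np.eye(2)) == 1      # not diagonalizable

L = np.array([1.0, 1j])
R = np.array([1.0, 1j])
assert np.allclose(N @ R, C * R) and np.allclose(L @ N, C * L)
assert np.isclose(L @ R, 0.0)                             # zero Jordan norm
```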
For the (χ 1 ,ψ 2 ) sector we thus have two eigenvectors with real eigenvalues if (2m 2 1 m 2 2 − m 4 2 − 3µ 4 ) 2 > 4µ 4 m 4 2 , two eigenvectors with complex conjugate eigenvalues if (2m 2 1 m 2 2 − m 4 2 − 3µ 4 ) 2 < 4µ 4 m 4 2 , and lose an eigenvector if (2m 2 1 m 2 2 − m 4 2 − 3µ 4 ) 2 = 4µ 4 m 4 2 . Since none of this affects the (χ 2 ,ψ 1 ) sector, for all three of the possible classes of eigenspectra associated with a non-Hermitian Hamiltonian with an antilinear symmetry we obtain a massless Goldstone boson. For the (χ 2 ,ψ 1 ) sector the eigenvalues are λ 0 = 0 and λ 1 = m 2 2 − µ 4 /m 2 2 . Both of them are real, and we shall take m 4 2 to not be less than µ 4 so that λ 1 is not negative. Additionally, the left- and right-eigenvectors are given by L 0 = 1 (m 4 2 − µ 4 ) 1/2 ( m 2 2 , iµ 2 ) , R 0 = 1 (m 4 2 − µ 4 ) 1/2 m 2 2 iµ 2 , L 1 = 1 (m 4 2 − µ 4 ) 1/2 ( iµ 2 , −m 2 2 ) , R 1 = 1 (m 4 2 − µ 4 ) 1/2 iµ 2 −m 2 2 ,(31) as normalized to L 0 R 0 = 1, L 1 R 1 = 1, L 0 R 1 = 0, L 1 R 0 = 0.(32) The Goldstone boson is thus properly normalized if one uses the left-right norm, with the two states in the (χ 2 ,ψ 1 ) sector forming a left-right orthonormal basis. Thus in the non-Hermitian case the standard Goldstone theorem associated with the spontaneous breakdown of a continuous symmetry continues to hold but the norm of the Goldstone boson has to be the positive left-right norm (or equivalently the P T theory norm [13]) rather than the standard positive Hermitian theory Dirac norm for which the theorem was first proved [16][17][18][19]. However, something unusual occurs if we set µ 2 = m 2 2 . Specifically, the eigenvalue λ 1 becomes zero, to thus now be degenerate with λ 0 . The eigenvectors R 0 and R 1 collapse onto a common single R and L 0 and L 1 collapse onto a common single L, and the normalization coefficients given in (31) diverge. The (χ 2 ,ψ 1 ) sector thus becomes of non-diagonalizable Jordan-block form.
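For the (χ 2 ,ψ 1 ) sector these statements can be verified numerically. The sketch below uses illustrative values m 2 = 1.5, µ = 1 (so that m 2 4 > µ 4 ) and the sector mass matrix that follows from (30), namely ((−µ 4 /m 2 2 , −iµ 2 ), (−iµ 2 , m 2 2 )):

```python
# Sketch of the (chi_2, psi_1) sector for illustrative m2, mu with
# m2^4 > mu^4: eigenvalues lambda_0 = 0 (the Goldstone boson) and
# lambda_1 = m2^2 - mu^4/m2^2, with the left-right orthonormal basis (31)-(32).
import numpy as np

m2, mu = 1.5, 1.0
M = np.array([[-mu**4 / m2**2, -1j * mu**2], [-1j * mu**2, m2**2]])

lam1 = m2**2 - mu**4 / m2**2
assert np.allclose(sorted(np.linalg.eigvals(M).real), sorted([0.0, lam1]))

n = np.sqrt(m2**4 - mu**4)   # diverges as mu -> m2: the Jordan-block limit
R0 = np.array([m2**2, 1j * mu**2]) / n
L0 = np.array([m2**2, 1j * mu**2]) / n
R1 = np.array([1j * mu**2, -m2**2]) / n
L1 = np.array([1j * mu**2, -m2**2]) / n

assert np.allclose(M @ R0, 0.0)                 # massless Goldstone mode
assert np.allclose(M @ R1, lam1 * R1)
assert np.isclose(L0 @ R0, 1.0) and np.isclose(L1 @ R1, 1.0)
assert np.isclose(L0 @ R1, 0.0) and np.isclose(L1 @ R0, 0.0)
```

As µ → m 2 the normalization factor n goes to zero, reproducing the collapse onto a single zero-norm eigenvector described in the text.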
In this limit one can take the left- and right-eigenvectors to be L = ( 1 i ) , R = 1 i ,(33) and they obey the zero norm condition LR = 0.(34) As such this represents a new extension of the Goldstone theorem, and even though the standard Goldstone theorem associated with the spontaneous breakdown of a continuous symmetry continues to hold, the norm of the Goldstone boson is now zero. Since a zero norm state can leave no imprint in a detector, we are essentially able to evade the existence of a massless Goldstone boson, in the sense that while it would still exist it would not be observable. IV. COMPARISON WITH THE WORK OF ALEXANDRE, ELLIS, MILLINGTON AND SEYNAEVE If we do a functional variation of the action given in (1) we obtain δI(φ 1 , φ 2 , φ * 1 , φ * 2 ) = d 4 x [− φ 1 + m 2 1 φ 1 − µ 2 φ 2 − g 2 φ 2 1 φ * 1 ]δφ * 1 + [− φ * 1 + m 2 1 φ * 1 + µ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 ]δφ 1 + [− φ 2 − m 2 2 φ 2 + µ 2 φ 1 ]δφ * 2 + [− φ * 2 − m 2 2 φ * 2 − µ 2 φ * 1 ]δφ 2 + ∂ µ [δφ * 1 ∂ µ φ 1 + δφ 1 ∂ µ φ * 1 + δφ * 2 ∂ µ φ 2 + δφ 2 ∂ µ φ * 2 ] .(35) With all variations held fixed at the surface, stationarity leads to − φ 1 + m 2 1 φ 1 − µ 2 φ 2 − g 2 φ 2 1 φ * 1 = 0, − φ 2 − m 2 2 φ 2 + µ 2 φ 1 = 0, (36) − φ * 1 + m 2 1 φ * 1 + µ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 − µ 2 φ * 1 = 0,(37) with these equations of motion being completely equivalent to (7). With these equations of motion one readily checks that the electric current j µ = i(φ * 1 ∂ µ φ 1 − φ 1 ∂ µ φ * 1 ) + i(φ * 2 ∂ µ φ 2 − φ 2 ∂ µ φ * 2 ) given in (4) is conserved, just as it should be. There is however an immediate problem with these equations of motion, namely if we complex conjugate (36) we obtain not (37) but − φ * 1 + m 2 1 φ * 1 − µ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 + µ 2 φ * 1 = 0 (38) instead.
The reason why this problem occurs is because while (37) is associated with ∂I/∂φ 1 and ∂I/∂φ 2 , (38) is associated with (∂I/∂φ * 1 ) * = ∂I * /∂φ 1 and (∂I/∂φ * 2 ) * = ∂I * /∂φ 2 and I is not equal to I * if I is not Hermitian. A similar concern holds for (7) as not one of its four separate equations is left invariant under complex conjugation. To get round this the authors of [13] propose that (37) not be valid, but rather that one should use (36) and (38) instead. To achieve this they propose that one add an additional surface term to (35) so that one no longer imposes stationarity with respect to δφ 1 and δφ 2 , but only stationarity with respect to δφ * 1 and δφ * 2 alone. 11 If one does use (36) and (38), the electric current j µ is no longer conserved (i.e. the surface term that is to be introduced must carry off some electric charge), but instead it is the current j ′ µ = i(φ * 1 ∂ µ φ 1 − φ 1 ∂ µ φ * 1 ) − i(φ * 2 ∂ µ φ 2 − φ 2 ∂ µ φ * 2 )(39) that is conserved in solutions to the equations of motion. As such, this j ′ µ current is a non-Noether current that is not associated with a symmetry of the action I (unless the inclusion of the surface term then leads to one), and thus its spontaneous breakdown is somewhat different from the standard one envisaged in [16][17][18][19]. Nonetheless, as noted in [13], when the scalar fields acquire vacuum expectation values, the mass matrix associated with (36) and (38) still has a zero eigenvalue. With the authors of [13] showing that it is associated with the Ward identity for j ′ µ , it can still be identified as a Goldstone boson. The work of [13] thus breaks the standard connection between Goldstone bosons and symmetries of the action. As such, the result of the authors of [13] is quite interesting as it provides possible new insight into the Goldstone theorem.
However, the analysis somewhat obscures the issue as it suggests that the generation of Goldstone bosons in non-Hermitian theories is quite different from the generation of Goldstone bosons in Hermitian theories. It is thus of interest to ask whether one could show that one could obtain Goldstone bosons in a procedure that is common to both Hermitian and non-Hermitian theories. To this end we need to find a way to exclude (38) and validate (37), as it is (36) and (37) that we used in our paper in an approach that is completely conventional, one in which the surface term in (35) vanishes in the standard variational procedure way. To reconcile (36) and (37) or to reconcile the equations of motion in (7) with complex conjugation it is instructive to make a particular similarity transformation on the fields, even though doing so initially appears to lead to another puzzle, the Hermiticity puzzle, which we discuss and resolve below. It is more convenient to seek a reconciliation for (7) first, so from I(χ 1 , χ 2 , ψ 1 , ψ 2 ) we identify canonical conjugates for ψ 1 and ψ 2 of the form Π 1 = ∂ t ψ 1 , Π 2 = ∂ t ψ 2 . With these conjugates we introduce [7] S(ψ 1 ) = exp π 2 d 3 xΠ 1 (x, t)ψ 1 (x, t) , S(ψ 2 ) = exp π 2 d 3 xΠ 2 (x, t)ψ 2 (x, t) ,(40) and obtain S(ψ 1 )ψ 1 S −1 (ψ 1 ) = −iψ 1 , S(ψ 1 )Π 1 S −1 (ψ 1 ) = iΠ 1 , S(ψ 2 )ψ 2 S −1 (ψ 2 ) = −iψ 2 , S(ψ 2 )Π 2 S −1 (ψ 2 ) = iΠ 2 . (41) Since these transformations preserve the equal-time commutation relations [ψ 1 (x, t), Π 1 (y, t)] = iδ 3 (x − y), [ψ 2 (x, t), Π 2 (y, t)] = iδ 3 (x − y) , they are fully permissible transformations that do not modify the content of the field theory.
Applying (41) to I(χ 1 , χ 2 , ψ 1 , ψ 2 ) we obtain S(ψ 1 )S(ψ 2 )I(χ 1 , χ 2 , ψ 1 , ψ 2 )S −1 (ψ 2 )S −1 (ψ 1 ) = I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 )(42) where I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 ) = d 4 x 1 2 ∂ µ χ 1 ∂ µ χ 1 + 1 2 ∂ µ χ 2 ∂ µ χ 2 − 1 2 ∂ µ ψ 1 ∂ µ ψ 1 − 1 2 ∂ µ ψ 2 ∂ µ ψ 2 + 1 2 m 2 1 (χ 2 1 + χ 2 2 ) + 1 2 m 2 2 (ψ 2 1 + ψ 2 2 ) − µ 2 (χ 1 ψ 2 − χ 2 ψ 1 ) − g 16 (χ 2 1 + χ 2 2 ) 2 .(43) Stationary variation with respect to χ 1 , χ 2 , ψ 1 , and ψ 2 replaces (7) by − χ 1 = −m 2 1 χ 1 + µ 2 ψ 2 + g 4 (χ 3 1 + χ 1 χ 2 2 ), − χ 2 = −m 2 1 χ 2 − µ 2 ψ 1 + g 4 (χ 3 2 + χ 2 χ 2 1 ), − ψ 1 = m 2 2 ψ 1 + µ 2 χ 2 , − ψ 2 = m 2 2 ψ 2 − µ 2 χ 1 ,(44) and now each one of the equations of motion is separately invariant under complex conjugation. Returning now to the original φ 2 , φ * 2 fields we obtain S(ψ 1 )S(ψ 2 )φ 2 S −1 (ψ 2 )S −1 (ψ 1 ) = −iφ 2 , S(ψ 1 )S(ψ 2 )φ * 2 S −1 (ψ 2 )S −1 (ψ 1 ) = −iφ * 2 ,(45)so that I(φ 1 , φ 2 , φ * 1 , φ * 2 ) transforms into I ′ (φ 1 , φ 2 , φ * 1 , φ * 2 ) = d 4 x ∂ µ φ * 1 ∂ µ φ 1 − ∂ µ φ * 2 ∂ µ φ 2 + m 2 1 φ * 1 φ 1 + m 2 2 φ * 2 φ 2 + iµ 2 (φ * 1 φ 2 − φ * 2 φ 1 ) − g 4 (φ * 1 φ 1 ) 2 ,(46) while the equations of motion become − φ 1 + m 2 1 φ 1 + iµ 2 φ 2 − g 2 φ 2 1 φ * 1 = 0, − φ 2 − m 2 2 φ 2 + iµ 2 φ 1 = 0,(47)− φ * 1 + m 2 1 φ * 1 − iµ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 − iµ 2 φ * 1 = 0,(48) and now there is no complex conjugation problem, with (48) being the complex conjugate of (47). 12 In addition we note under the transformations given in (45) the equations given in (38) transform into − φ * 1 + m 2 1 φ * 1 + iµ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 + iµ 2 φ * 1 = 0.(49) If we now switch the sign of φ * 2 , (47) is unaffected, while (49) becomes − φ * 1 + m 2 1 φ * 1 − iµ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 − iµ 2 φ * 1 = 0.(50) We recognize (50) as being (48). 
With (47) being unaffected by the switch in sign of φ * 2 , the mass matrix based on (47) and (48) is the same as the mass matrix based on (47) and (50). However, since all we have done in going from (36), (37) and (38) is make similarity transformations that leave determinants invariant, the eigenvalues associated with (36) and (37) (i.e. with (9)) on the one hand and the eigenvalues associated with (36) and (38) on the other hand must be the same. And indeed this is exactly found to be the case, with all four of the eigenvalues given in [13] being precisely the ones given in our (11). One can thus obtain the same mass spectrum as that obtained in [13] using a completely conventional variational procedure. In addition, we note that with (47) and (50) the current j ′ µ given in (39) that is used in [13] is now conserved. In fact, under the transformations given in (45) the j µ current given in (4) transforms into j ′ µ . Thus all that is needed to bring the study of [13] into the conventional Goldstone framework (standard variation procedure, standard spontaneous breakdown of a symmetry of the action) is to first make a similarity transformation. Now the reader will immediately object to what we have done since now the µ 2 (χ 1 ψ 2 − χ 2 ψ 1 ) term in (43) and the iµ 2 (φ * 1 φ 2 − φ * 2 φ 1 ) term in (46) are both invariant under complex conjugation. With the actions in (43) and (46) then seemingly being Hermitian, we are seemingly back to the standard Hermitian situation where the Goldstone theorem readily holds, and we have seemingly gained nothing new. However, it cannot actually be the case that the action in (43) could be Hermitian, since similarity transformations cannot change the eigenvalues of the mass matrix M given in (9), and as we have seen for certain values of parameters the eigenvalues can be complex or M could even be Jordan block. We thus need to explain how, despite its appearance, a seemingly Hermitian action might not actually be Hermitian.
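The invariance argument used here, that similarity transformations cannot change the eigenvalues, is generic and easy to confirm numerically (the random matrices below are purely illustrative stand-ins for the mass matrices):

```python
# Generic sketch: H' = S H S^{-1} has the same characteristic polynomial,
# and hence the same eigenvalues, as H for any invertible S, even a
# non-unitary one. Random matrices stand in for the mass matrices here.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # generically invertible
Hp = S @ H @ np.linalg.inv(S)

assert np.allclose(np.poly(H), np.poly(Hp))  # identical spectra
```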
The answer to this puzzle has been provided in [7], and we describe it below. However, before doing so we note that there are two other approaches that could also achieve a reconciliation. The first alternative involves starting with the fields χ 1 , χ 2 , ψ 1 , ψ 2 as the fields that define the theory, and I(χ 1 , χ 2 , ψ 1 , ψ 2 ) as the input action. In this case one immediately obtains the equations of motion given in (7). As they stand these equations are inconsistent if all the four fields are Hermitian. If we take χ 1 and χ 2 to be Hermitian, then these equations force ψ 1 and ψ 2 to be anti-Hermitian. And if ψ 1 and ψ 2 are taken to be anti-Hermitian, both the equations of motion given in (7) and the action are then invariant under a complex conjugation (i.e. Hermitian conjugation) in which ψ 1 and ψ 2 transform into −ψ 1 and −ψ 2 . Moreover, in such a case the −iψ 1 and −iψ 2 fields that are generated through the similarity transformations given in (41) would then be Hermitian. Of course then the interaction term given in (6) would be Hermitian as well, and we again have a seemingly Hermitian theory. Now suppose we do take ψ 1 and ψ 2 to be anti-Hermitian. Then if we start with I(χ 1 , χ 2 , ψ 1 , ψ 2 ) we cannot get back to I(φ 1 , φ 2 , φ * 1 , φ * 2 ) given in (1), since in the correspondence given in (5) φ * 2 was recognized as the conjugate of a φ 2 = ψ 1 + iψ 2 field that was expanded in terms of Hermitian ψ 1 and ψ 2 . If we now take φ 2 to still be defined as φ 2 = ψ 1 + iψ 2 , the associated φ * 2 would now be given by −(ψ 1 − iψ 2 ), and thus equal to minus the previous ψ 1 − iψ 2 used in (5).
With this definition a rewriting of I(χ 1 , χ 2 , ψ 1 , ψ 2 ) in the (φ 1 , φ 2 , φ * 1 , φ * 2 ) basis would yield I(φ 1 , φ 2 , φ * 1 , −φ * 2 ) = d 4 x ∂ µ φ * 1 ∂ µ φ 1 − ∂ µ φ * 2 ∂ µ φ 2 + m 2 1 φ * 1 φ 1 + m 2 2 φ * 2 φ 2 − µ 2 (φ * 1 φ 2 + φ * 2 φ 1 ) − g 4 (φ * 1 φ 1 ) 2 ,(51) and equations of motion − φ 1 + m 2 1 φ 1 − µ 2 φ 2 − g 2 φ 2 1 φ * 1 = 0, − φ 2 − m 2 2 φ 2 + µ 2 φ 1 = 0,(52)− φ * 1 + m 2 1 φ * 1 − µ 2 φ * 2 − g 2 (φ * 1 ) 2 φ 1 = 0, − φ * 2 − m 2 2 φ * 2 + µ 2 φ * 1 = 0.(53) Now complex conjugation can be consistently applied, with (53) being derivable from (50) by complex conjugation. And again it is j ′ µ that is conserved. A second alternative approach is to reinterpret the meaning of the star operator used in φ * 1 and φ * 2 . Instead of taking it to denote Hermitian conjugation, we could instead take it denote CP T conjugation, i.e. φ * 1 = CP T φ 1 T P C, φ * 2 = CP T φ 2 T P C. Now we had noted in (2) that in order to enforce CP T symmetry on I(φ 1 , φ 2 , φ * 1 , φ * 2 ) we took φ 1 to be even and φ 2 to be odd under CP T , and we had noted that in general a scalar field should be CP T even (i.e. the same CP T parity as the CP T even fermionicψψ [7]). However, if we apply the similarity transformation given in (41) to φ 2 = ψ 1 + iψ 2 to get −iφ 2 , that would change the CP T parity. Thus while φ 2 has negative CP T parity it is similarity equivalent to a field that has the conventional positive CP T parity, with the transformed I ′ (φ 1 , φ 2 , φ * 1 , φ * 2 ) and the resulting equations of motion now being CP T symmetric if φ 2 is taken to have positive CP T parity, viz. CP T φ 2 T P C = φ * 2 , CP T φ * 2 T P C = φ 2 . (We leave φ 1 as given in (2), viz. CP T φ 1 T P C = φ * 1 , CP T φ * 1 T P C = φ 1 .) 
The difficulty identified by the authors of [13] can thus be resolved by a judicious choice of which fields are Hermitian and which are anti-Hermitian, by a judicious choice of which fields are CP T even and which are CP T odd, or by similarity transformations that generate complex phases that affect both Hermiticity and CP T parity. However in all of these such resolutions we are led to theories that now appear to be Hermitian and yet for certain values of parameters could not be, and so we need to address this issue. V. RESOLUTION OF THE HERMITICITY PUZZLE In [7] the issues of the generality of CP T symmetry and the nature of Hermiticity were addressed. In regard to Hermiticity it was shown that Hamiltonians that appear to be Hermitian need not be, since Hermiticity or selfadjointness is determined not by superficial inspection of the appearance of the Hamiltonian but by construction of asymptotic boundary conditions, as they determine whether or not one could drop surface terms in an integration by parts. And even if one could drop surface terms we still may not get Hermiticity because of the presence of factors of i in H that could affect complex conjugation. In regard to CP T it was shown that if one imposes only two requirements, namely the time independence of inner products and invariance under the complex Lorentz group, it follows that the Hamiltonian must have an antilinear CP T symmetry. Since this analysis involves no Hermiticity requirement, the CP T theorem is thus extended to the non-Hermitian case. As noted above, the time independence of inner products is achieved if the theory has any antilinear symmetry with the left-right V norm being the inner product one has to use. Complex Lorentz invariance then forces the antilinear symmetry to be CP T . In field theories one ordinarily constructs actions so that they are invariant under the real Lorentz group. 
However, the same analysis that shows that actions with spin zero Lagrangians are invariant under the real Lorentz group (the restricted Lorentz group) also shows that they are invariant under the complex one (the proper Lorentz group that includes P T transformations for coordinates and CP T transformations for spinors). Specifically, under an infinitesimal Lorentz transformation with antisymmetric parameter w µν the change in the action I = d 4 xL(x) is given by δI = 2w µν d 4 x x µ ∂ ν L(x) = 2w µν d 4 x[∂ ν [x µ L(x)] − η µν L(x)], and since the metric η µν is symmetric and w µν is antisymmetric, δI is thus given by δI = 2w µν d 4 x∂ ν [x µ L(x)]. Since the change in the action is a total divergence, the familiar invariance of the action under real Lorentz transformations is secured. However, we note that nothing in this argument depended on w µν being real, with the change in the action still being a total divergence even if w µν is complex. The action I = d 4 xL(x) is thus actually invariant under complex Lorentz transformations as well and not just under real ones, with complex Lorentz invariance thus being just as natural to physics as real Lorentz invariance. For our purposes here we note that the Lorentz invariant scalar field action I(φ 1 , φ 2 , φ * 1 , φ * 2 ) given in (1) is thus invariant not just under real Lorentz transformations but under complex ones as well. Since in the above we constructed a time-independent inner product for this theory, the I(φ 1 , φ 2 , φ * 1 , φ * 2 ) action thus must have CP T symmetry. And indeed we explicitly showed in (2) that this was in fact the case. Since theories can thus be CP T symmetric without needing to be Hermitian, it initially looks as though the two concepts are distinct.
However, the issue of Hermiticity was addressed in [7], and the unexpected outcome of that study was that the only allowed Hamiltonians that one could construct that were CP T invariant would have exactly the same structure as (or be similarity equivalent to) the ones one constructs in Hermitian theories, namely presumed Hermitian combinations of fields and all coefficients real. 13 These are precisely the theories that one ordinarily refers to as Hermitian. However, this turns out to not necessarily be the case since theories can appear to be Hermitian but not actually be so. To illustrate the above remarks it is instructive to consider some explicit examples, one involving behavior in time and the other involving behavior in space. For behavior in time consider the neutral scalar field with action I S = d 4 x[∂ µ φ∂ µ φ − m 2 φ 2 ]/2 and Hamiltonian H = d 3 x[(∂ t φ) 2 + ∇φ · ∇φ + m 2 φ 2 ]/2. Solutions to the wave equation −∂ 2 t φ + ∇ 2 φ − m 2 φ = 0 obey ω 2 (k) = k 2 + m 2 . Thus the poles in the scalar field propagator are at ω(k) = ±[k 2 + m 2 ] 1/2 . For the Pais-Uhlenbeck two-oscillator Hamiltonian H PU (ω 1 , ω 2 ) one cannot determine whether the Hamiltonian is Hermitian just by superficial inspection. Rather, one has to construct its eigenstates first and look at their asymptotic behavior. In order to obtain eigenvectors for H PU (ω 1 , ω 2 ) that are normalizable the authors of [21] made the similarity transformation y = e πp z z/2 z e −πp z z/2 = −iz, q = e πp z z/2 p z e −πp z z/2 = ip z , on the operators of the theory so that [y, q] = i. Under this same transformation H PU (ω 1 , ω 2 ) transforms into e πp z z/2 H PU (ω 1 , ω 2 )e −πp z z/2 = H̃ PU (ω 1 , ω 2 ) = p 2 /2γ − iqx + (γ/2)(ω 2 1 + ω 2 2 )x 2 + (γ/2)ω 2 1 ω 2 2 y 2 ,(59) where for notational simplicity we have replaced p x by p, so that [x, p] = i. With the eigenvalue z of the operator z being replaced in ψ 0 (z, x) by −iz (i.e. continued into the complex z plane), the eigenfunctions are now normalizable. 15 When acting on the eigenfunctions of H̃ PU (ω 1 , ω 2 ) the y and q = −i∂ y operators are Hermitian (as are x and p = −i∂ x ).
However, as the presence of the factor i in the −iqx term indicates, H̃ PU (ω 1 , ω 2 ) is not Hermitian. Since in general to establish Hermiticity one has to integrate by parts, drop surface terms and complex conjugate, we see that while we now can drop surface terms for H̃ PU (ω 1 , ω 2 ) we do not recover the generic H ij = H * ji when we complex conjugate, even as we can now drop surface terms for the momentum operators when they act on the eigenstates of H̃ PU (ω 1 , ω 2 ) and achieve Hermiticity for them. 16 When ω 1 and ω 2 are real and unequal, the eigenvalues of the Hamiltonian H̃ PU (ω 1 , ω 2 ) are all real and the eigenspectrum (two sets of harmonic oscillators) is complete. In that case H̃ PU (ω 1 , ω 2 ) can actually be brought to a form in which it is Hermitian by a similarity transformation. Specifically, one introduces an operator Q given by Q = αpq + βxy, α = (1/γω 1 ω 2 ) log[(ω 1 + ω 2 )/(ω 1 − ω 2 )], β = αγ 2 ω 2 1 ω 2 2 ,(60) and obtains H̃ ′ PU (ω 1 , ω 2 ) = e −Q/2 H̃ PU (ω 1 , ω 2 )e Q/2 = p 2 /2γ + q 2 /2γω 2 1 + (γ/2)ω 2 1 x 2 + (γ/2)ω 2 1 ω 2 2 y 2 .(61) With the Q similarity transformation not affecting the asymptotic behavior of the eigenstates of H̃ PU (ω 1 , ω 2 ), and with y, q, x, and p thus all being Hermitian when acting on the eigenstates of H̃ ′ PU (ω 1 , ω 2 ), the Hermiticity of H̃ ′ PU (ω 1 , ω 2 ) in the conventional Dirac sense is established. We can thus regard H̃ PU (ω 1 , ω 2 ) with real and unequal ω 1 and ω 2 as being Hermitian in disguise. Moreover, in addition we note that since Q becomes singular at ω 1 = ω 2 , at ω 1 = ω 2 H̃ PU (ω 1 , ω 2 ) cannot be diagonalized, to thus confirm that H PU (ω) is Jordan block. 17 In general then we see that a Hamiltonian may not be Hermitian even though it may appear to be so, and may be (similarity equivalent to) Hermitian even when it does not appear to be so. And moreover, one cannot tell beforehand, as one needs to first solve the theory and see what its solutions look like.
Other than possibly needing to continue into the complex plane in order to get convergence, when a Hamiltonian has all eigenvalues real and eigenspectrum complete it is always possible to similarity transform it into a form in which it is Hermitian in the standard Dirac sense. If a Hamiltonian obeys H = H † , then under a similarity transform that effects H ′ = SHS −1 , we note that H ′ † = S −1 † H † S † = S −1 † HS † = S −1 † S −1 H ′ SS † = [SS † ] −1 H ′ SS † . Thus unless S is unitary H ′ † is not equal to H ′ , with the H ij = H * ji Hermiticity condition being a condition that is not preserved under a general similarity transformation. Thus if one starts with some general H ′ that does not obey H ′ = H ′ † , it might be similarity equivalent to a Hermitian H but one does not know a priori. It only will be similarity equivalent to a Hermitian H if the eigenvalues of H ′ are all real and the eigenspectrum is complete. And the necessary condition for that to be the case is that H ′ possess an antilinear symmetry. However, unlike a Hermiticity condition a commutation relation is preserved under a similarity transformation (even a commutation relation that involves an antilinear operator [7]), with antilinear operators being more versatile than Hermitian operators. So much so in fact that in [7] it was argued that one should use CP T symmetry as the guiding principle for constructing quantum theories rather than Hermiticity. 18 When we characterize an operator such as z, p z , x, or p x as being Hermitian we are only referring to representations of the [z, p z ] = i and [x, p x ] = i commutation relations, without any reference to a Hamiltonian that might contain these operators. A Hamiltonian can thus be built out of Hermitian operators and can have all real coefficients, and yet not be Hermitian itself. The equal-frequency and complex-frequency Pais-Uhlenbeck models are particularly instructive in this regard. In the equal-frequency case none of the z, p z , x, or p x operators themselves are Jordan block, only H P U (ω) is. The spectrum of eigenstates of the position and momentum operators is complete, and all are contained in the space on which H PU (ω) acts. However, not all of these states are eigenstates of the Hamiltonian [22], with the one-particle sector of H P U (ω) behaving just like the example given in (26) and (28). Moreover, in the complex H PU (α, β) case all the eigenvalues of the position and momentum operators are real even though those of the Hamiltonian that is built out of them are not. As the equal-frequency and complex-frequency Pais-Uhlenbeck models show, one cannot tell whether a Hamiltonian might be Hermitian just by superficial inspection.

15 As noted in [6,7], the analog statement for the Pais-Uhlenbeck two-oscillator theory path integral is that the path integral measure has to be continued into the complex plane in order to get the path integration to converge. A similar situation pertains to the path integral associated with the relativistic neutral scalar field theory with action I S = (1/2) d 4 x[∂µ∂ν φ∂ µ ∂ ν φ − (M 2 1 + M 2 2 )∂µφ∂ µ φ + M 2 1 M 2 2 φ 2 ], a theory whose non-relativistic limit is the Pais-Uhlenbeck theory.

16 The use of the similarity transformations given in (58) parallels the use of (40) in Sec. IV. However, while using the similarity transformation of (40) was mainly a convenience, for H P U (ω 1 , ω 2 ) the similarity transformation of (58) is a necessity because of the need to construct normalizable wave functions. The presence of the factor i in (59) is thus related to the intrinsic structure of the Pais-Uhlenbeck theory.

17 The transformation with Q is the analog of the transformation of the spontaneously broken scalar field theory mass matrix given in (22), and the singularity in Q at ω 1 = ω 2 is the analog of that in (22) when A = B.
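A finite-dimensional version of the "Hermitian in disguise" statement discussed above is easy to exhibit; the random matrices in this sketch are purely illustrative:

```python
# Sketch: starting from a Hermitian H, a non-unitary similarity
# transformation H' = S H S^{-1} produces a matrix that is no longer
# Hermitian (H'_ij != H'*_ji) yet keeps an all-real spectrum and obeys
# H'^dagger = [S S^dagger]^{-1} H' [S S^dagger], i.e. Hermitian in disguise.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = X + X.conj().T                                   # Hermitian
S = np.eye(3) + 0.3 * rng.normal(size=(3, 3))        # real, non-unitary
Hp = S @ H @ np.linalg.inv(S)

assert not np.allclose(Hp, Hp.conj().T)                          # no longer Hermitian
assert np.allclose(np.linalg.eigvals(Hp).imag, 0.0, atol=1e-8)   # spectrum still real
W = S @ S.conj().T
assert np.allclose(Hp.conj().T, np.linalg.inv(W) @ Hp @ W)       # pseudo-Hermiticity
```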
One needs to solve the theory first and see what the eigenspectrum looks like. Thus one can have Hamiltonians that do not look Hermitian but are similarity equivalent to ones that are Hermitian, and one can have Hamiltonians that do look Hermitian but are not at all. As we see from these examples, whether or not an action is CP T symmetric is an intrinsic property of the unconstrained action itself prior to any stationary variation, but whether or not a Hamiltonian is Hermitian is a property of the stationary solution alone. 19 Hermiticity of a Hamiltonian cannot be assigned a priori, and can only be determined after the theory has been solved. However, the CP T properties of actions or fields can be assigned a priori (i.e. prior to a functional variation of the action, and thus a property of every variational path and not just the stationary one), and thus that is how Hamiltonians and fields should be characterized. One cannot write down any CP T invariant theory that up to similarity transformations does not have the same form as a Hermitian theory, though whether any such CP T invariant Hamiltonian actually is similarity equivalent to a Hermitian one is only establishable by constructing the solutions to the theory and cannot be determined ahead of time. Turning now to the study of [13], we note that it displays all of the features that we have just described. The interest of the authors of [13] was in exploring the status of the Goldstone theorem in non-Hermitian but P T -symmetric theories, and so they took as an example a relativistic field theory whose action was not Hermitian, i.e. not Hermitian by superficial inspection. However, by a similarity transformation it could be brought to a form given in (46) in which the action is Hermitian by superficial inspection (i.e. no factors of i and operators that are presumed to be Hermitian). 
However, while it now appears to be Hermitian it could not be, since in the tree approximation that they studied the ensuing mass matrix was not Hermitian either. With the mass matrix having the three possible P T symmetry realizations (real and unequal eigenvalues, real and equal eigenvalues, eigenvalues in complex conjugate pairs) for various values of its parameters, the tree approximation to the model of [13] completely parallels the discussion of the three realizations of the Pais-Uhlenbeck two-oscillator model given in (54), (55) and (56), where the Hamiltonian looks to be Hermitian but is not. It is of interest to note that to establish that the Pais-Uhlenbeck two-oscillator model theory is not Hermitian we had to construct wave functions and examine their asymptotic behavior, while for the tree approximation to the model of [13] we only need to look at a finite-dimensional matrix. Thus we can start with a fully-fledged field theory such as that based on the action given in (1), (46) or (51) without needing to identify the region in the complex plane where the functional path integral might exist, or to descend to the quantum mechanical limit and look at the asymptotic behavior of wave functions, in order to determine whether or not the theory is Hermitian. 20 In the broken symmetry case we only need look at the finite-dimensional mass matrix that we get in tree approximation. For parameters in the model of [13] that obey (2m 2 1 m 2 2 − m 4 2 − 3µ 4 ) 2 − 4µ 4 m 4 2 > 0, the mass matrix can be brought to a Hermitian form by the similarity transformation presented in (22). Thus in this case the mass matrix is Hermitian in disguise. For this particular example the Goldstone theorem is the standard one, since if one can derive the Goldstone theorem in a Hermitian theory, it continues to hold if one makes a similarity transformation on it.
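The three realizations can be made concrete with the standard 2x2 PT-symmetric matrix of the quantum-mechanical literature (an illustrative stand-in of ours, not the mass matrix of [13]): N = ((r e iθ , s), (s, r e −iθ )) with r, θ, s real has eigenvalues r cos θ ± (s 2 − r 2 sin 2 θ) 1/2 , and sweeping s through r sin θ passes through all three cases:

```python
import numpy as np

def pt_matrix(r, theta, s):
    """Textbook 2x2 PT-symmetric, non-Hermitian matrix; its eigenvalues
    are r*cos(theta) +/- sqrt(s**2 - r**2*sin(theta)**2)."""
    return np.array([[r*np.exp(1j*theta), s],
                     [s, r*np.exp(-1j*theta)]])

P = np.array([[0.0, 1.0], [1.0, 0.0]])
M = pt_matrix(1.0, np.pi/6, 1.0)
assert np.allclose(P @ M.conj() @ P, M)        # PT symmetry: P N* P = N

# (i) s^2 > r^2 sin^2(theta): two real, unequal eigenvalues
lam = np.linalg.eigvals(pt_matrix(1.0, np.pi/6, 1.0))
assert np.allclose(lam.imag, 0.0) and not np.isclose(lam[0], lam[1])

# (ii) s^2 < r^2 sin^2(theta): complex-conjugate eigenvalue pair
lam = np.linalg.eigvals(pt_matrix(1.0, np.pi/6, 0.25))
assert np.isclose(lam[0], np.conj(lam[1]))

# (iii) s^2 = r^2 sin^2(theta): equal eigenvalues, Jordan-block form
lam = np.linalg.eigvals(pt_matrix(1.0, np.pi/6, 0.5))
assert np.allclose(lam, lam[0])
```

At case (iii) the matrix is non-diagonalizable, which is the finite-dimensional counterpart of the equal-frequency Pais-Uhlenbeck Hamiltonian discussed in the text.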
21 Whether or not the mass matrix given in (11) actually can be transformed to a Hermitian matrix depends on the values of the parameters in the action. However, as we have seen, no matter what the values of these parameters, and no matter whether the CP T -invariant mass matrix is realized by real eigenvalues, complex pairs of eigenvalues, or is of Jordan-block form, for any choice of the parameters one is able to obtain a Goldstone theorem. One can thus anticipate an Englert-Brout-Higgs mechanism for a local extension of the continuous symmetry that we have broken spontaneously, and we turn now to this issue. VI. SPONTANEOUSLY BROKEN NON-HERMITIAN THEORY WITH A CONTINUOUS LOCAL SYMMETRY Now that we have seen that we can consistently implement the Goldstone mechanism in a CP T -symmetric, non-Hermitian theory, it is natural to ask whether we can also implement the familiar Englert-Brout-Higgs mechanism developed in [24][25][26][27]. To this end we introduce a local gauge invariance and a gauge field A µ , and with F µν = ∂ µ A ν − ∂ ν A µ replace (1) and (3) by I(φ 1 , φ 2 , φ * 1 , φ * 2 , A µ ) = d 4 x (−i∂ µ + eA µ )φ * 1 (i∂ µ + eA µ )φ 1 + (−i∂ µ + eA µ )φ * 2 (i∂ µ + eA µ )φ 2 + m 2 1 φ * 1 φ 1 − m 2 2 φ * 2 φ 2 − µ 2 (φ * 1 φ 2 − φ * 2 φ 1 ) − g 4 (φ * 1 φ 1 ) 2 − 1 4 F µν F µν ,(62) and φ 1 → e iα(x) φ 1 , φ * 1 → e −iα(x) φ * 1 , φ 2 → e iα(x) φ 2 , φ * 2 → e −iα(x) φ * 2 , eA µ → eA µ + ∂ µ α(x).(63) With (2), the I(φ 1 , φ 2 , φ * 1 , φ * 2 , A µ ) action is CP T invariant since both i and A µ are CP T odd (spin one fields have odd CP T [14]).
We make the same decomposition of φ 1 and φ 2 fields as in (5), and replace (6) by I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ) = d 4 x 1 2 ∂ µ χ 1 ∂ µ χ 1 + 1 2 ∂ µ χ 2 ∂ µ χ 2 + 1 2 ∂ µ ψ 1 ∂ µ ψ 1 + 1 2 ∂ µ ψ 2 ∂ µ ψ 2 + 1 2 m 2 1 (χ 2 1 + χ 2 2 ) − 1 2 m 2 2 (ψ 2 1 + ψ 2 2 ) − iµ 2 (χ 1 ψ 2 − χ 2 ψ 1 ) − g 16 (χ 2 1 + χ 2 2 ) 2 − eA µ (χ 1 ∂ µ χ 2 − χ 2 ∂ µ χ 1 + ψ 1 ∂ µ ψ 2 − ψ 2 ∂ µ ψ 1 ) + e 2 2 A µ A µ χ 2 1 + χ 2 2 + ψ 2 1 + ψ 2 2 − 1 4 F µν F µν ,(64) In the tree approximation minimum used above in which (g/4)χ 2 1 = m 2 1 − µ 4 /m 2 2 ,ψ 2 = −iµ 2χ 1 /m 2 2 ,χ 2 = 0,ψ 1 = 0, we induce a mass term for A µ of the form m 2 (A µ ) = e 2 χ 2 1 +χ 2 2 +ψ 2 1 +ψ 2 2 = e 2χ2 1 1 − µ 4 m 4 2 = 4e 2 g (m 2 1 m 2 2 − µ 4 )(m 4 2 − µ 4 ) m 6 2 .(65) However, before assessing the implications of (65) we recall that in Sec. IV we had to reconcile I(χ 1 , χ 2 , ψ 1 , ψ 2 ) with the Hermiticity concern raised in [13]. The same is now true of I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ). In Sec. IV we had identified three solutions for I(χ 1 , χ 2 , ψ 1 , ψ 2 ), and all can be implemented for I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ). Thus we can consider a judicious choice of which fields are Hermitian and which are anti-Hermitian, a judicious choice of which fields are CP T even and which are CP T odd, or can apply similarity transformations that generate complex phases that affect both Hermiticity and CP T parity. In regard to Hermiticity, if we take A µ to be Hermitian (i.e. complex conjugate even), and as before take ψ 1 and ψ 2 to be anti-Hermitian (complex conjugate odd), then I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ) will be invariant under complex conjugation, as will then be the equations of motion and tree approximation minimum that follow from it, and (65) will hold. Also, as we had noted in Sec. V, even though I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ) might now be invariant under complex conjugation it does not follow that the scalar field mass matrix M given in (9) has to be Hermitian, and indeed it is not. 
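The algebra behind (65) can be checked mechanically: with χ 2 1 = (4/g)(m 2 1 − µ 4 /m 2 2 ), the combination e 2 χ 2 1 (1 − µ 4 /m 4 2 ) must reduce to (4e 2 /g)(m 2 1 m 2 2 − µ 4 )(m 4 2 − µ 4 )/m 6 2 . A quick numerical verification of this identity (our own sanity check, nothing more):

```python
import numpy as np

# Numerical sanity check of the tree-level gauge boson mass formula (65):
# with chi1bar^2 = (4/g)(m1^2 - mu^4/m2^2), the combination
# e^2 * chi1bar^2 * (1 - mu^4/m2^4) reduces to
# (4 e^2/g)(m1^2 m2^2 - mu^4)(m2^4 - mu^4)/m2^6.
rng = np.random.default_rng(0)
for _ in range(1000):
    e, g, m1, m2, mu = rng.uniform(0.1, 3.0, size=5)
    chi1_sq = (4.0/g) * (m1**2 - mu**4/m2**2)
    lhs = e**2 * chi1_sq * (1.0 - mu**4/m2**4)
    rhs = (4.0*e**2/g) * (m1**2*m2**2 - mu**4) * (m2**4 - mu**4) / m2**6
    assert np.isclose(lhs, rhs)
```

The factored right-hand side also makes manifest the two zeros of the gauge boson mass noted below, at m 2 1 m 2 2 = µ 4 and at m 4 2 = µ 4 .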
in fact they did not. It is however the case for the study that we have presented here. If we transform ψ 1 and ψ 2 as in (41) but make no transformation on A µ , we obtain S(ψ 1 )S(ψ 2 )I(χ 1 , χ 2 , ψ 1 , ψ 2 , A µ )S −1 (ψ 2 )S −1 (ψ 1 ) = I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 )(66) where I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ) = d 4 x 1 2 ∂ µ χ 1 ∂ µ χ 1 + 1 2 ∂ µ χ 2 ∂ µ χ 2 − 1 2 ∂ µ ψ 1 ∂ µ ψ 1 − 1 2 ∂ µ ψ 2 ∂ µ ψ 2 + 1 2 m 2 1 (χ 2 1 + χ 2 2 ) + 1 2 m 2 2 (ψ 2 1 + ψ 2 2 ) − µ 2 (χ 1 ψ 2 − χ 2 ψ 1 ) − g 16 (χ 2 1 + χ 2 2 ) 2 − eA µ (χ 1 ∂ µ χ 2 − χ 2 ∂ µ χ 1 − ψ 1 ∂ µ ψ 2 + ψ 2 ∂ µ ψ 1 ) + e 2 2 A µ A µ χ 2 1 + χ 2 2 − ψ 2 1 − ψ 2 2 − 1 4 F µν F µν ,(67) As constructed, I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 , A µ ) is invariant under complex conjugation if ψ 1 and ψ 2 are even under complex conjugation. Now the tree approximation minimum is given by (g/4)χ 2 1 = m 2 1 − µ 4 /m 2 2 ,ψ 2 = µ 2χ 1 /m 2 2 ,χ 2 = 0, ψ 1 = 0, A µ = 0, and we induce a mass term for A µ of the form m 2 (A µ ) = e 2 χ 2 1 +χ 2 2 −ψ 2 1 −ψ 2 2 = e 2χ2 1 1 − µ 4 m 4 2 = 4e 2 g (m 2 1 m 2 2 − µ 4 )(m 4 2 − µ 4 ) m 6 2 . (68) The mass of the gauge boson is thus again given by (65). Finally, in regard to interpreting the star symbol as the CP T conjugate, since A µ is real there is no change in the discussion presented in Sec. IV, and (65) continues to hold. As we can see from (65) and (68), the gauge boson does indeed acquire a non-zero mass unless m 2 1 m 2 2 = µ 4 or m 4 2 = µ 4 . The first of these conditions is not of significance since if m 2 1 m 2 2 = µ 4 it follows thatχ 1 (and thusψ 2 ) is zero and there is no symmetry breaking, and the gauge boson stays massless. 22 However, the condition m 4 2 = µ 4 is related to the symmetry breaking since it does not obligeχ 1 to vanish. 
Moreover, since the m 4 2 = µ 4 condition does not constrain the (χ 1 ,ψ 2 ) sector λ ± eigenvalues given in (11) in any particular way (m 2 1 not being constrained by the m 4 2 = µ 4 condition), we see that regardless of whether or not m 4 2 and µ 4 are in fact equal to each other, we obtain the Englert-Brout-Higgs mechanism in the (χ 2 ,ψ 1 ) sector no matter how the antilinear symmetry is realized in the (χ 1 ,ψ 2 ) sector, be it all eigenvalues real, eigenvalues in a complex pair, or mass matrix being of non-diagonalizable Jordan-block form. In the (χ 2 ,ψ 1 ) sector both λ 0 and λ 1 as given in (11) are real, and not degenerate with each other as long as m 4 2 = µ 4 . However, something very interesting occurs if m 4 2 = µ 4 . Then the (χ 2 ,ψ 1 ) sector becomes Jordan block and the Goldstone boson acquires zero-norm. Since the Goldstone boson can no longer be considered to be a normal positive norm particle, it cannot combine with the gauge boson to give the gauge boson a longitudinal component and make it massive. And as we see, and just as required by consistency, in that case the gauge boson stays massless. Thus in a non-Hermitian but CP T -symmetric theory it is possible to spontaneously break a continuous local symmetry and yet not obtain a massive gauge boson. VII. SUMMARY In the non-relativistic antilinear symmetry program one replaces Hermiticity of a Hamiltonian by antilinearity as the guiding principle for quantum mechanics for both infinite-dimensional wave mechanics and either finite-or infinite-dimensional matrix mechanics. For infinite-dimensional relativistic quantum field theories whose actions are invariant under the complex Lorentz group the antilinear symmetry is uniquely prescribed to be CP T . Hamiltonians that have an antilinear symmetry can of course be Hermitian as well, with all energy eigenvalues then being real and all energy eigenvectors being complete. 
However, in general, antilinear symmetry permits two additional options for Hamiltonians that cannot be realized in Hermitian theories, namely energy eigenvalues could be real while energy eigenvectors could be incomplete (Jordan block), or energy eigenvectors could still be complete but energy eigenvalues could come in complex conjugate pairs. In the first case all Hilbert space inner products are positive definite, in the second (Jordan-block) case norms are zero, and in the third case the only norms are transition matrix elements and their values are not constrained to be positive or to even be real. Moreover, in the antilinear symmetry program Hamiltonians that look to be Hermitian by superficial inspection do not have to be, while Hamiltonians that do not look to be Hermitian by superficial inspection can actually be similarity equivalent to Hamiltonians that are Hermitian (viz. Hermitian in disguise). In applications of antilinear symmetry to relativistic systems it is of interest to see how many standard results that are obtained in Hermitian theories might still apply in the non-Hermitian case, and whether one could obtain new features that could not be realized in the Hermitian case. To address these issues Alexandre, Ellis, Millington and Seynaeve studied how spontaneously broken symmetry ideas and results translate to the non-Hermitian environment.

22 If m 2 1 m 2 2 − µ 4 = 0, the unbroken (χ 1 , ψ 2 ) sector mass matrix given in (64) is of the form (1/2m 2 2 )(µ 2 χ 1 − im 2 2 ψ 2 ) 2 . It has two eigenvalues, λ a = µ 4 /m 2 2 − m 2 2 and λ b = 0. In the (χ 1 , ψ 2 ) basis the right-eigenvector for λ a is (µ 2 , −im 2 2 ), while the right-eigenvector for λ b is (m 2 2 , −iµ 2 ). The fact that λ b is zero is not of significance since it occurs in the absence of spontaneous symmetry breaking, and would thus not be maintained under radiative corrections.
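Both non-Hermitian options are visible already in 2x2 toy examples of ours (generic illustrations, not the matrices of the text): a Jordan block has equal eigenvalues but only one independent eigenvector, and at such a coalescence point the surviving eigenvector has vanishing norm in one common PT-norm convention:

```python
import numpy as np

# Jordan block: equal eigenvalues, incomplete (rank-deficient) eigenbasis.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
w, v = np.linalg.eig(J)
assert np.allclose(w, 2.0)
assert np.linalg.matrix_rank(v, tol=1e-8) == 1   # eigenvectors coalesce

# At the coalescence point of the PT matrix N = [[r e^{it}, s], [s, r e^{-it}]]
# with s = r sin t, the surviving eigenvector is u = (1, -i), and with
# P = sigma_1 its norm u^dag P u vanishes.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
u = np.array([1.0, -1.0j])
assert np.isclose(u.conj() @ P @ u, 0.0)
```

This zero norm of the coalesced eigenvector is the finite-dimensional counterpart of the zero-norm Jordan-block Goldstone boson encountered above.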
With broken symmetry and the possible existence of massless Goldstone bosons being intrinsically relativistic concepts, they explored a CP T symmetric but non-Hermitian two complex scalar field relativistic quantum field theory with a continuous global symmetry. Their actual treatment of the problem was somewhat unusual in that they allowed for non-vanishing surface terms to contribute in the functional variation of the action, with this leading to a specific nonstandard set of equations of motion of the fields. Their reason for doing this was that the equations of motion obtained by the standard variational procedure with endpoints held fixed were not invariant under complex conjugation. However, they still found a broken symmetry solution with an explicit massless Goldstone boson. In the treatment of the same model that we provide here we use a conventional variational calculation in which fields are held fixed at the endpoints. However, to get round the complex conjugation difficulty we make a similarity transformation on the fields which then allows us to be able to maintain invariance of the equations of motion under complex conjugation (the similarity transformation itself being complex). However, on doing this we obtain an action that appears to be Hermitian, and if it indeed were to be Hermitian there would be nothing new to say about broken symmetry that had not already been said for Hermitian theories. However, while appearing to be Hermitian the theory in fact is not, and thus it does fall into the non-Hermitian but antilinearly symmetric category. In their analysis Alexandre, Ellis, Millington and Seynaeve studied the tree approximation to the equations of motion of the theory and found broken symmetry solutions. 
What is particularly noteworthy of their analysis is that even though they were dealing with a fully-fledged infinite-dimensional quantum field theory, in the tree approximation the mass matrix that was needed to determine whether there might be any massless Goldstone boson was only four dimensional (viz. the same number as the number of independent fields in the two complex scalar field model that they studied). As such, the mass matrix that they obtained is not Hermitian, and given the underlying antilinear CP T symmetry of the model and thus of the mass matrix, the mass matrix is immediately amenable to the full apparatus of the antilinear symmetry program as that apparatus holds equally for fields and matrices. Alexandre, Ellis, Millington and Seynaeve studied just one realization of the antilinear symmetry program, namely the one where all eigenvalues of the mass matrix are real and the set of all of its eigenvectors is complete. In our analysis we obtain the same mass matrix (which we must of course since all we have done is make a similarity transformation on their model), and show that in this particular realization the mass matrix can be brought to a Hermitian form by a similarity transformation, to thus be Hermitian in disguise. However, this same mass matrix admits of the two other realizations of antilinear symmetry as well, namely the nondiagonalizable Jordan-block case and the complex conjugate eigenvalue pair case. And in all of these cases we show that there is a massless Goldstone boson. In this regard the Jordan-block case is very interesting because it permits the Goldstone boson itself to be one of the zero norm states that are characteristic of Jordan-block matrices. That these cases can occur at all is because while the similarity transformed action that we use appears to be Hermitian it actually is not, something however that one cannot ascertain without first solving the theory. 
Finally, we extend the model to a local continuous symmetry by introducing a massless gauge boson, and find that the massless Goldstone boson can be incorporated into the massless gauge boson and make it massive by the Englert-Brout-Higgs mechanism in all realizations of the antilinear symmetry except one, namely the Jordan-block Goldstone mode case. In that case we find that since the Goldstone boson then has zero norm, it does not get incorporated into the gauge boson, with the gauge boson staying massless. In this case we have a spontaneously broken local gauge symmetry and yet do not get a massive gauge boson. This option cannot be obtained in the standard Hermitian case where all states have positive norm, to thus show how rich the non-Hermitian antilinear symmetry program can be. In general the fields used in the tree approximation to a quantum field theory are c-number matrix elements of q-number quantum fields. Given the left- and right-eigenstates introduced in Sec. III we can identify which particular states are involved in (8). Specifically, we can identify the c-number tree approximation fields as the matrix elements Ω L |φ|Ω R = Ω R |V φ|Ω R , i.e. c-number matrix elements of the q-number fields between the left and right vacua. In quantum field theory one introduces a generating functional via the Gell-Mann-Low adiabatic switching method. The discussion in the Hermitian case is standard and of ancient vintage, and following the convenient discussion in [28], here we adapt it to the non-Hermitian case. In the adiabatic switching procedure one introduces a quantum-mechanical Lagrangian density L 0 of interest, switches on a real local c-number source J(x) for some quantum field φ(x) at time t = −∞, and switches J(x) off at t = +∞. While the source is active the Lagrangian density of the theory is given by L J = L 0 + J(x)φ(x).
Before the source is switched on the system is in the ground state of the Hamiltonian H 0 associated with L 0 with right-eigenvector |Ω − R and left-eigenvector Ω − L | = Ω − R |V . (Here V implements V H 0 V −1 = H † 0 and is independent of J.) And after the source is switched off the system is in the state with right-eigenvector |Ω + R and left-eigenvector Ω + L | = Ω + R |V . (V again implements V H 0 V −1 = H † 0 .) While |Ω − R and |Ω + R are both eigenstates of H 0 , they differ by a phase, a phase that is fixed by J(x) according to Ω + L |Ω − R | J = Ω + R |V |Ω − R | J = Ω J L |T exp i d 4 x(L 0 + J(x)φ(x)) |Ω J R = e iW (J) ,(A1) as written in terms of the vacua when J is active. This expression serves to define the functional W (J), with W (J) serving as the generator of the connected Green's functions of the J = 0 theory, and with the Γ n 0 (x 1 , ..., x n ) being the one-particle-irreducible, φ C = 0, Green's functions of the quantum field φ(x). Functional variation of Γ(φ C ) then yields δΓ(φ C ) δφ C = δW δJ δJ δφ C − J − δJ δφ C φ C = −J,(A6) to relate δΓ(φ C )/δφ C back to the source J. On expanding in momentum space around the point where all external momenta vanish, we can write Γ(φ C ) as Γ(φ C ) = d 4 x −V (φ C ) + 1 2 Z(φ C )∂ µ φ C ∂ µ φ C + .... .(A7) The quantity V (φ C ) = n 1 n! Γ n 0 (q i = 0)φ n C (A8) is known as the effective potential as introduced in [19,29] (a potential that is spacetime independent if φ C is), while the Z(φ C ) term serves as the kinetic energy of φ C . The significance of V (φ C ) is that when J is zero and φ C is spacetime independent, we can write V (φ C ) as V (φ C ) = 1 V ( S L |H 0 |S R − N L |H 0 |N R )(A9) in a volume V , where |S R and |N R are spontaneously broken and normal vacua in which S L |φ|S R is nonzero and N L |φ|N R is zero. The search for non-trivial tree approximation minima is then a search for states |S R in which V (φ C ) would be negative.
In the non-Hermitian case then the V (φ C ) associated with left and right vacua is the needed effective potential. 23 In reference to the Goldstone theorem, we note that in writing down Ward identities one begins with operator relations for time-ordered products of general fields and current operators of the generic form ∂ µ [T (j µ (x)A(0))] = δ(x 0 )[j 0 (x), A(0)] + T (∂ µ j µ A(0)),(A10) where A(0) is a product of fields at the origin of coordinates. We restrict to the case where ∂ µ j µ (x) = 0, and take matrix elements in the vacuum (normal or spontaneously broken), only unlike in the Hermitian case in the non-Hermitian case we take matrix elements in the left- and right-vacua. Since there is only one four-momentum p µ in the problem in Fourier space we can set Ω L |T (j µ (x)B(0))|Ω R = 1 (2π) 4 d 4 pe ip·x p µ F (p 2 ),(A11) where F (p 2 ) is a scalar function. On introducing Q(t) = d 3 xj 0 (x), we integrate both sides of (A11) with d 4 x. Should the right-hand side of (A12) not vanish (i.e. Q(t = 0)|Ω R ≠ 0 or Ω L |Q(t = 0) ≠ 0), there would then have to be a massless pole at p 2 = 0 on the left-hand side. This then is the Goldstone theorem, as adapted to the non-Hermitian case. As we see, by formulating non-Hermitian theories in terms of left- and right-eigenvectors, the extension of the discussion of spontaneously broken symmetry to the non-Hermitian case is straightforward. The specific structure of Ward identities such as that given in (A12) only depends on the symmetry behavior associated with the currents of interest. Since relations such as (A10) are operator identities they hold independent of the states in which one calculates matrix elements of them.
In the non-Hermitian but CP T -symmetric situation, in order to look for a spontaneous breaking of the continuous global symmetry associated with the currents of interest one takes matrix elements of the relevant Ward identity in the S L | and |S R states, and as discussed in [13], one looks to see if the consistency of the Ward identity matrix elements in those states requires the existence of massless Goldstone bosons. In regard to the spontaneous breakdown of a continuous local symmetry in the non-Hermitian but CP T -symmetric case, the authors of [13] had left open the question of whether one could achieve the Englert-Brout-Higgs mechanism if one uses their non-standard variational procedure. 24 Since we use a standard variational procedure and standard Noether theorem approach and continue to use the same S L | and |S R states, we can readily extend our approach to the local symmetry case. And we find that in all realizations of the antilinear symmetry we can achieve the Englert-Brout-Higgs mechanism just as in the standard Hermitian case, save only for the particular Jordan-block situation in which the Goldstone boson itself has zero norm, a case in which, despite the spontaneous symmetry breaking, the gauge boson stays massless.

d 4 xL(x) with spin zero L(x) is left invariant under real Lorentz transformations of the form exp(iw µν M µν ), where the six antisymmetric w µν = −w νµ are real parameters and the six M µν = −M νµ are the generators of the Lorentz group. To see this we note that with M µν acting on the Lorentz spin zero L(x) as x µ p ν − x ν p µ , under an infinitesimal Lorentz transformation the change in the action is given by δI

Appendix A: Meaning of the Tree Approximation in the non-Hermitian Case

G n 0 (x 1 , ..., x n ) = Ω L |T [φ(x 1 )...φ(x n )]|Ω R , W (J) = n 1 n! d 4 x 1 ...d 4 x n G n 0 (x 1 , ..., x n )J(x 1 )...J(x n ).
(A3) Given W (J), via functional variation we can construct the so-called classical (c-number) field φ C (x) = δW (J)/δJ(x), in terms of which Γ(φ C ) = W (J) − d 4 xJ(x)φ C (x) = n 1 n! d 4 x 1 ...d 4 x n Γ n 0 (x 1 , ..., x n )φ C (x 1 )...φ C (x n ). Integrating both sides of (A11) with d 4 x then gives d 4 pδ 4 (p)p 2 F (p 2 ) = Ω L |[Q(t = 0), B(0)]|Ω R .(A12)

For any non-diagonalizable two-dimensional Jordan-block and thus necessarily non-Hermitian Hamiltonian for instance, since the eigenspectrum is incomplete the Hamiltonian has just one eigenvector even though there are two eigenvalue solutions to |H − λI| = 0. These two eigenvalue solutions must then be equal to each other since they have to share just the one eigenvector. If in addition the Hamiltonian has an antilinear symmetry, by being equal to each other the two eigenvalues could then not be in a complex conjugate pair. In consequence, the two eigenvalue solutions to |H − λI| = 0 must be real, to thus show directly that one can have real eigenvalues if a Hamiltonian is not Hermitian.

While the (χ 2 ,ψ 1 ) sector of the mass matrix is not Hermitian, its antilinear symmetry cannot be realized in the complex conjugate pair realization because by being zero the Goldstone boson eigenvalue λ 0 is real. Consequently, λ 1 must be real too.

As shown in [15], to identify the V norm with the P T norm one has to choose the phase of the P T conjugate of a state to be the same as the P T eigenvalue of the state that is being conjugated. This prescription obviates any need to use the P T theory C operator norm that is described in [3], with the P T norm then having the same positivity as the V norm. Moreover, it was shown that not every P T symmetric theory will possess a P T theory C operator, but all P T theories will possess V and P T norms.

Even though one loses diagonalizability when A 2 = B 2 , the matrix N remains P T symmetric at A 2 = B 2 , as it is invariant under the P T = σ 3 K transformation for all values of its parameters as long as they are real.
The additional surface term is akin to the Hawking-Gibbons surface term used in general relativity. Specifically, in general relativity the variation of the Einstein-Hilbert action leads to variations of both gµν and its first derivative at the surface. Variations with respect to the derivatives are then cancelled by the Hawking-Gibbons term. The appearance of a negative kinetic energy term for φ 2 in (46) is only an artifact of the similarity transformation, since there are no such negative kinetic energy terms in our starting I(χ 1 , χ 2 , ψ 1 , ψ 2 ) and one cannot change the signature of a Hilbert space by a similarity transformation. While for instance I(χ 1 , χ 2 , ψ 1 , ψ 2 ) of (6) contains factors of i, its similarity transformed I ′ (χ 1 , χ 2 , ψ 1 , ψ 2 ) given in (43) does not. Moreover this is even true of the H = p 2 + ix 3 paradigm for P T symmetry. With S(θ) = exp(−θpx) effecting the [x, p] = i preserving S(θ)pS(−θ) = exp(−iθ)p, S(θ)xS(−θ) = exp(iθ)x, transforming with S(π/2) effects S(π/2)(p 2 +ix 3 )S(−π/2) = −p 2 +x 3 , and in passing we note that S(π) effects S(π)(p 2 + ix 3 )S(−π) = p 2 − ix 3 = (p 2 + ix 3 ) † . In fact in[7] it was shown in general that CP T invariance of a relativistic theory entails that one can always find an appropriate similarity transformation that would bring the Hamiltonian to a form in which all coefficients are real. Since the action is CP T symmetric, if there are to be any complex frequencies they must appear in complex conjugate pairs. Thus rather than being optional, according to[7] one has to interpret the star symbol in (1) as a CP T transform.19 While one can construct the Hamiltonian from the energy-momentum tensor, the energy-momentum tensor is only conserved in solutions to the equations of motion. 
Hermiticity is thus tied to the solutions to the theory in a way that CP T is not.

20 These issues would only start to come up in fluctuations around the tree approximation minimum, with a one loop calculation having been provided in [13].

21 Technically, that would have automatically been the case if the authors of [13] had used a conventional variational procedure, though

the field can be expanded as φ(x, t) = [a(k) exp(−iω(k)t + ik· x) + a † (k) exp(+iω(k)t − ik· x)], and the Hamiltonian is given by H = [k 2 + m 2 ] 1/2 [a † (k)a(k) + a(k)a † (k)]/2. For either sign of m 2 the I S action is CPT symmetric, and for both signs I S appears to be Hermitian. For m 2 > 0, H and φ(x, t) are indeed Hermitian and all frequencies are real. However, for m 2 < 0, frequencies become complex when k 2 < −m 2 . The poles in the propagator move into the complex plane, the field φ(x, t) then contains modes that grow or decay exponentially in time, while H contains energies that are complex. Thus neither H nor φ is Hermitian even though I S appears to be so. For behavior in space consider the Pais-Uhlenbeck two-oscillator theory [20] as studied in [21, 22]. In the theory there are two sets of oscillator operators, which obey [z, p z ] = i, [x, p x ] = i, and the Hamiltonian is given by (54). As noted in [21] this theory is P T symmetric, and as noted in [22] it in addition is the non-relativistic limit of a relativistic fourth-order neutral scalar field theory, one whose CP T symmetry reduces to P T symmetry in the nonrelativistic limit. Initially the ω 1 and ω 2 frequencies are taken to be real and positive, and the energy eigenvalues are the real and positive E(n 1 , n 2 ) = (n 1 + 1/2)ω 1 + (n 2 + 1/2)ω 2 . However, if we now take the two frequencies to be equal to ω, the Hamiltonian takes the form given in (55), and while H PU (ω) looks to be just as Hermitian as before, the Hamiltonian turns out to be Jordan block [22, 23], to thus necessarily not be Hermitian at all.
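The m 2 < 0 statement is a one-line numerical check: with ω(k) = (k 2 + m 2 ) 1/2 , modes with k 2 < −m 2 acquire imaginary frequencies while those with k 2 > −m 2 stay real (a direct transcription of the dispersion relation, not code from the paper):

```python
import numpy as np

# For I_S with m^2 < 0 the mode frequencies omega(k) = sqrt(k^2 + m^2)
# turn imaginary for k^2 < -m^2, so the "Hermitian-looking" H acquires
# complex energies (exponentially growing/decaying modes).
m_sq = -1.0
k = np.linspace(0.0, 2.0, 9)
omega = np.sqrt(k**2 + m_sq + 0j)

assert np.any(omega.imag != 0)                 # unstable modes below k^2 = -m^2
assert np.all(omega[k**2 > -m_sq].imag == 0)   # stable modes above
```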
Since the CP T invariance of H PU (ω 1 , ω 2 ) is not affected by setting ω 1 = ω 2 , H PU (ω) is CP T invariant as well. Moreover, if we take the two frequencies to be in a complex pair ω 1 = α + iβ, ω 2 = α − iβ with α > 0, β > 0, the Hamiltonian takes the form given in (56) [7]. The H PU (α, β) Hamiltonian still looks to be Hermitian but its energy eigenvalues are now in complex conjugate pairs. With all the coefficients in H PU (α, β) being real, H PU (α, β) is CP T symmetric. Thus all three of H PU (ω 1 , ω 2 ), H PU (ω) and H PU (α, β) are CP T invariant and for all three of them all coefficients are real, just as required by [7]. However, despite their appearance, H PU (ω) and H PU (α, β) are necessarily non-Hermitian. As written in (54), H PU (ω 1 , ω 2 ) is actually not Hermitian (or self-adjoint) either [21], since the real issue is not the appearance of the Hamiltonian but whether in an integration by parts one can drop spatially asymptotic surface terms. To see this we make a standard representation of the momentum operators of the form p z = −i∂ z , p x = −i∂ x , and find that for the Schrödinger problem associated with H PU (ω 1 , ω 2 ) the ground state wave function ψ 0 (z, x) with energy E(0, 0) = (ω 1 + ω 2 )/2 can be constructed in closed form. Since this wave function is divergent at large z it is not normalizable (though it is convergent at large x). Consequently, one cannot throw surface terms away in an integration by parts, and despite its appearance H PU (ω 1 , ω 2 ) is not self-adjoint. In the three realizations described above (ω 1 and ω 2 real and unequal, real and equal, in a complex conjugate pair) we find that ω 1 + ω 2 and ω 1 ω 2 are all real and positive. Thus in all three realizations the wave functions diverge at large z, and in all three cases the Hamiltonian is not self-adjoint when acting on ψ 0 (z, x). By the same token one cannot throw surface terms away for p z and p x when they act on the eigenstates of H PU (ω 1 , ω 2 ).
Thus even though p z and p x are Hermitian when acting on their own eigenstates they are not Hermitian when acting on the eigenstates of H PU (ω 1 , ω 2 ). Thus building a Hamiltonian out of Hermitian operators (i.e. ones that are Hermitian when acting on their own eigenstates) does not necessarily produce a Hamiltonian that is Hermitian when the Hamiltonian acts on its own eigenstates. In fact, until one has constructed the eigenstates of a Hamiltonian one cannot even tell whether or not a Hamiltonian is Hermitian at all. One thus cannot declare a Hamiltonian to be Hermitian ahead of time.

23 In addition we note that once we have identified Ω L |T [φ(x 1 )...φ(x n )]|Ω R as the connected Green's functions that are relevant in the non-Hermitian case, we can write them as path integrals, and we refer the reader to [6, 7, 13] for further details.

24 After we had finished this paper Alexandre, Ellis, Millington and Seynaeve released a follow-up paper, "Gauge invariance and the Englert-Brout-Higgs mechanism in non-Hermitian field theories", arXiv:1808.00944 [hep-th] [30]. In it the authors extend the analysis of [13] to the Englert-Brout-Higgs mechanism. Their paper can be regarded as complementary to our own.

References

C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80, 5243 (1998).
C. M. Bender, S. Boettcher and P. N. Meisinger, J. Math. Phys. 40, 2201 (1999).
C. M. Bender, Rep. Prog. Phys. 70, 947 (2007).
C. M. Bender, M. DeKieviet and S. P. Klevansky (Guest Editors), theme issue on PT quantum mechanics, Phil. Trans. R. Soc. A 371 (2013).
P. D. Mannheim, Phil. Trans. R. Soc. A 371, 20120060 (2013).
P. D. Mannheim, J. Phys. A: Math. Theor. 51, 315302 (2018).
A. Mostafazadeh, J. Math. Phys. 43, 205 (2002); J. Math. Phys. 43, 2814 (2002); J. Math. Phys. 43, 3944 (2002).
L. Solombrino, J. Math. Phys. 43, 5439 (2002).
C. M. Bender and P. D. Mannheim, Phys. Lett. A 374, 1616 (2010).
R. F. Streater and A. S. Wightman, PCT, Spin and Statistics, and All That, W. A. Benjamin, New York (1964).
P. D. Mannheim, Phys. Lett. B 753, 288 (2016).
J. Alexandre, J. Ellis, P. Millington and D. Seynaeve, Phys. Rev. D 98, 045001 (2018).
S. Weinberg, The Quantum Theory of Fields: Volume I, Cambridge University Press, Cambridge, U.K. (1995).
P. D. Mannheim, Phys. Rev. D 97, 045001 (2018).
Y. Nambu, Phys. Rev. Lett. 4, 380 (1960).
J. Goldstone, Nuovo Cim. 19, 154 (1961).
Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122, 345 (1961).
J. Goldstone, A. Salam and S. Weinberg, Phys. Rev. 127, 965 (1962).
A. Pais and G. E. Uhlenbeck, Phys. Rev. 79, 145 (1950).
C. M. Bender and P. D. Mannheim, Phys. Rev. Lett. 100, 110402 (2008).
C. M. Bender and P. D. Mannheim, Phys. Rev. D 78, 025022 (2008).
P. D. Mannheim and A. Davidson, arXiv:hep-th/0001115; Phys. Rev. A 71, 042110 (2005).
F. Englert and R. Brout, Phys. Rev. Lett. 13, 321 (1964).
P. W. Higgs, Phys. Lett. 12, 132 (1964).
P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964).
G. S. Guralnik, C. R. Hagen and T. W. B. Kibble, Phys. Rev. Lett. 13, 585 (1964).
P. D. Mannheim, Prog. Part. Nucl. Phys. 94, 125 (2017).
G. Jona-Lasinio, Nuovo Cim. 34, 1790 (1964).
J. Alexandre, J. Ellis, P. Millington and D. Seynaeve, "Gauge invariance and the Englert-Brout-Higgs mechanism in non-Hermitian field theories", arXiv:1808.00944 [hep-th].
Title: Successes and critical failures of neural networks in capturing human-like speech recognition

Authors:
- Federico Adolfi (Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; School of Psychological Science, University of Bristol, Bristol, United Kingdom)
- Jeffrey S Bowers (School of Psychological Science, University of Bristol, Bristol, United Kingdom)
- David Poeppel (Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany; Max Planck NYU Center for Language, Music, and Emotion, New York University, New York, NY, United States)
Abstract: Natural and artificial audition can in principle acquire different solutions to a given problem. The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a closer mutual examination would potentially enrich artificial hearing systems and process models of the mind and brain. Speech recognition - an area ripe for such exploration - is inherently robust in humans to a number of transformations at various spectrotemporal granularities. To what extent are these robustness profiles accounted for by high-performing neural network systems? We bring together experiments in speech recognition under a single synthesis framework to evaluate state-of-the-art neural networks as stimulus-computable, optimized observers. In a series of experiments, we (1) clarify how influential speech manipulations in the literature relate to each other and to natural speech, (2) show the granularities at which machines exhibit out-of-distribution robustness, reproducing classical perceptual phenomena in humans, (3) identify the specific conditions where model predictions of human performance differ, and (4) demonstrate a crucial failure of all artificial systems to perceptually recover where humans do, suggesting alternative directions for theory and model building. These findings encourage a tighter synergy between the cognitive science and engineering of audition.
DOI: 10.1016/j.neunet.2023.02.032
PDF: https://export.arxiv.org/pdf/2204.03740v4.pdf
Corpus ID: 248069362
arXiv: 2204.03740
PDF SHA-1: b0dae536823c6e19a41e9baf9174b88b8b4fbd33
19 Apr 2023
Correspondence should be addressed to Federico Adolfi ([email protected])
Keywords: audition, speech, neural networks, robustness, human-like AI

Introduction

Audition systems - artificial and biological - can in principle acquire qualitatively different solutions to the same ecological problem. For instance, redundancy at the input or lack thereof, relative to the structure and complexity of the problem, can encourage systems towards divergent or convergent evolution. Whether performance-optimized engineering solutions and biological perception converge for a particular problem determines, in part, the extent to which artificial auditory systems can play a role as process models of the mind and brain (Ma & Peters, 2020).
Although neural networks for audio have achieved remarkable performance in tasks such as speech recognition, most of the links to computational cognitive science have come from vision, with audition being comparatively neglected (Cichy & Kaiser, 2019). Audition as a field has its own set of unique challenges: explaining and building systems that must integrate sound information at various spectrotemporal scales to accomplish even the most basic recognition task (Poeppel, Idsardi, & van Wassenhove, 2008; Poeppel & Assaneo, 2020). Nevertheless, research into audition can avoid pitfalls in model evaluation by looking at emerging critiques of neural networks for vision and adopting a more qualitative and diverse approach (Navarro, 2019). We therefore set out to characterize the solutions acquired by machine hearing systems as compared to humans, drawing bridges across influential research lines in auditory cognitive science and engineering.

[Figure 1 (caption fragment): ... the signal is optionally converted to a spectrogram-like representation in the time-frequency domain (C). It is subsequently segmented in parallel at various spectrotemporal scales (D). The resulting slices become the input to a transformation (E), which may involve shuffling, reversing, masking, silencing, chimerizing, mosaicizing, time warping, or repackaging. Finally, the outputs are sequenced and the resulting time-domain signals are presented to both humans and optimized observer models (F).]

An area of audition where the two disciplines once worked in close allegiance is speech recognition. The engineering of machine hearing has produced a zoo of task-optimized architectures - convolutional (Veysov, 2020), recurrent (Amodei et al., 2015; Hannun et al., 2014), and more recently, transformer-based (Baevski, Zhou, Mohamed, & Auli, 2020; Schneider, Baevski, Collobert, & Auli, 2019) - achieving performance levels impressive enough (on benchmark tasks) to afford numerous real-world applications.
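The segment-transform-sequence pipeline of Figure 1 can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the helper `perturb` and its parameter names are ours, and a real implementation would also handle window overlap and boundary artifacts.

```python
import numpy as np

def perturb(signal, sr, window_ms, transform):
    """Slice a waveform into fixed-length windows, apply a transform to
    each window, and concatenate the results (the segment-transform-
    sequence pipeline of Fig. 1, panels D-F)."""
    win = max(1, int(sr * window_ms / 1000))  # samples per window
    n = len(signal) // win * win              # drop the ragged tail
    windows = signal[:n].reshape(-1, win)
    return np.concatenate([transform(w) for w in windows])

# Two of the local transforms studied in the paper:
def reverse(w):        # local time reversal within a window
    return w[::-1]

rng = np.random.default_rng(0)

def shuffle(w):        # local shuffling of time-domain samples
    return rng.permutation(w)

sr = 16_000
speech = rng.standard_normal(sr)  # stand-in for a 1 s utterance
locally_reversed = perturb(speech, sr, 50, reverse)  # 50 ms windows
locally_shuffled = perturb(speech, sr, 2, shuffle)   # 2 ms windows
```

Because each window is transformed in place, the multiset of samples is preserved; only their local ordering (and hence the local spectrum, for shuffling) changes.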
The cognitive science of audition provides a complementary perspective from biological hearing. A research program based on multi-scale perturbations to natural signals - going back to the 1950s (Miller & Licklider, 1950), active through the decades (Saberi & Perrott, 1999; Smith, Delgutte, & Oxenham, 2002; Shannon, Zeng, Kamath, Wygonski, & Ekelid, 1995), and still thriving (Gotoh, Tohyama, & Houtgast, 2017; Ueda, Nakajima, Ellermeier, & Kattner, 2017) - has provided detailed descriptions of performance patterns in humans. The question is whether these engineering and scientific insights converge, and to what extent they can more explicitly inform each other.

Speech recognition in humans is inherently resistant to a number of perturbations at various granularities, exhibiting a form of out-of-distribution robustness analogous to how biological (but typically not artificial) vision generalizes to contour images and other transformations (Evans, Malhotra, & Bowers, 2021). This has been uncovered by a large set of experiments which process natural speech in a selective manner at multiple spectrotemporal scales (e.g., Saberi & Perrott, 1999; Smith et al., 2002; Shannon et al., 1995). The results are suggestive of the properties of mid-level stages of audition that drive any downstream task such as prediction and categorization. Are these robustness profiles accounted for by modern neural network systems?

We make explicit the synthesis space implied by these experiments, bringing them together under a single framework (Fig. 1) that allows us to simulate behavior exhaustively in search for human-machine alignment. By this we mean that each classical experiment implicitly defines a space of possible simulations given by the experimental parameters (e.g., the temporal scale at which perturbations are performed). We combine and vary these in order to cover more ground than what was the case in the original experiments.
In this way we can give the qualitatively human-like performance patterns a chance to emerge in the results without limiting their manifestation to the narrow parameter range of past studies. The broader rationale is that insights about a perceptual system and its input signal (in this case, speech) can be gleaned by observing the transformations and spectrotemporal granularities for which systems show perturbation-robust behavior. Systems will show performance curves reflecting whether (a) they rely on perturbation-invariant transformations at various granularities, and (b) information evolving at these scales is present and relevant for the downstream task. These in turn depend on the relevant signal cues being unique such that all solutions, artificial or biological, tend towards exploiting them. With this framework in place, we perform multi-scale audio analysis and synthesis, evaluate state-of-the-art neural networks as stimulus-computable optimized observers, and compare the simulated predictions to human performance.

This paper is organized as follows. First, we clarify how the different audio manipulations in the literature relate to each other by describing their effects in a common space: the sparsity statistics of the input. This allows us to link the distribution of experimental stimuli in human cognitive science to that of training and testing examples for artificial systems. Synthetic and natural speech fill this space and show regions where human and machine performance is robust outside the training distribution. Second, in a series of experiments we find that, while several classical perceptual phenomena are well-predicted by high-performing, speech-to-text neural network architectures, more destructive perturbations reveal differences amongst models and between these and humans.
Finally, we demonstrate a systematic failure of all artificial systems to perceptually recover where humans do, which is suggestive of alternative directions for theorizing, computational cognitive modeling, and, more speculatively, improvement of engineering solutions.

Results

We characterize the input space and report performance on speech recognition, measured by the word error rate (WER), for multiple experiments with trained neural networks including convolutional, recurrent and transformer models (see Methods for details). Our experimental framework (Fig. 1) systematizes and integrates classical speech perturbations. These re-synthesis procedures split the signal into segments and apply a transformation within each segment, such as shuffling, reversing, masking, silencing, chimerizing, mosaicizing, time warping, or repackaging (see Fig. 2 for example spectrograms of natural and perturbed signals, and the Methods section for details). Then the segments are concatenated together and the resulting perturbed speech is presented to machines. The performance of the models under different perturbations is therefore evaluated and plotted separately at various scales and perturbation parameter values.

[Figure 2: Spectrogram and waveform representations of natural and resynthesized speech for all perturbations of a single 3-second utterance: "computational complexity theory". To illustrate the effect of various perturbations on the signal, we show moderate perturbation magnitudes: shuffling is done at a 2 ms timescale; reversing at 150 ms; masking and silencing are done at 300 ms; chimerizing is done with 30 bands and targeting the envelopes for reversal at 100 ms; mosaicizing is done with 60 bands and a frequency window of 6 bands and time window of 100 ms; time warping is applied with a warp factor of 0.5 (stretching); repackaging is done with a warp factor of 3.0 (compressing), a time window of 250 ms and an insertion of silence of 167 ms. Refer to the main text and Methods section for details on the audio perturbations and resynthesis procedures.]

The rationale for choosing these perturbations, which are not variants of natural speech, is that (i) they represent a cohesive family of manipulations to the speech signal with well-known human performance profiles; (ii) they represent a unique opportunity to test for out-of-distribution robustness/generalization, as humans are robust to these perturbations at specific timescales without having been trained explicitly; and (iii) they each allow informative interpretations of the results in terms of (a) the specific invariances learned by neural networks and (b) the timescales at which these invariances operate. For instance, if a trained model's performance is unaffected by a specific perturbation at time scale X (e.g., 250 ms) which destroys the structure of feature Y (e.g., phase spectrum) but preserves that of feature Z (e.g., magnitude spectrum), then we can infer that the transformation learned by the model is likely invariant in this particular sense.

To avoid pervasive problems (Dujmović, Bowers, Adolfi, & Malhotra, 2022; Guest & Martin, 2023) with monolithic, quantitative assessments of predictive accuracy (e.g., a single brain activity prediction score), in this work we focus instead on the qualitative fit (Navarro, 2019; Bowers et al., 2022) between machines and humans.
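Recognition performance throughout the Results is scored by word error rate. For reference, WER is the word-level Levenshtein distance (substitutions + insertions + deletions) normalized by the reference length; a minimal self-contained implementation (ours, not the paper's evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = ref[i - 1] != hyp[j - 1]
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Note that WER can exceed 1.0 when the hypothesis contains many spurious insertions, which is why perturbation curves are sometimes clipped at 100% error.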
That is, we first identify the canonical performance curve exhibited by humans in response to parametric speech perturbations, and then we search for this pattern by systematic visual inspection in the performance profile of neural networks across many combinations of experimental parameters, including the original one used in human studies. For instance, if humans exhibit a U-shaped performance curve as a perturbation parameter value is increased, we search for such a curve in the performance profile of neural network models. In all cases, we plot the results on axes chosen to match the classical experiments we build on, to facilitate comparisons. The main results summarizing the findings of our more comprehensive evaluation are presented here succinctly and later discussed more comprehensively.

Input statistics: sparsity and out-of-distribution robustness

Since it is natural to think of the family of experiments conducted here as affecting the distribution of signal energy (in time and frequency) in proportion to the magnitude of the synthesis parameters (see below and Fig. 1), we accordingly use sparsity as a summary statistic. We do this with descriptive aims, as it allows us to (a) visualize an interpretable, low-dimensional representation of the input, (b) unify synthesis procedures traditionally considered separate, and (c) reason about out-of-distribution robustness. To examine how the different speech synthesis techniques relate to each other, we quantify and summarize their effect on the distribution over the input space: we compute the sparsity (Hurley & Rickard, 2009) of natural and experimental signals in the time and frequency domains. A high-sparsity representation of a signal contains a small number of its coefficients (under some encoding) accounting for most of the signal's energy.
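One concrete sparsity measure from the Hurley & Rickard (2009) survey is the Gini index, which is 0 when energy is spread uniformly across coefficients and approaches 1 when it concentrates in a single coefficient. A sketch with our own function names (the text does not specify which of the surveyed measures the authors used):

```python
import numpy as np

def gini_sparsity(coeffs):
    """Gini index of a coefficient vector: 0 for uniform energy,
    approaching 1 when a single coefficient dominates (one of the
    sparsity measures surveyed by Hurley & Rickard, 2009)."""
    c = np.sort(np.abs(np.asarray(coeffs, dtype=complex)).astype(float))
    n = c.size
    if c.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

def time_freq_sparsity(signal):
    """Sparsity of a waveform in the time and frequency domains,
    the two axes of the input space described in the text."""
    return gini_sparsity(signal), gini_sparsity(np.fft.rfft(signal))
```

For example, an isolated click is maximally sparse in time but spread out in frequency, while a pure tone behaves the opposite way, matching the canonical signals annotated at the extremes of the input space.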
We observe that this measure is reliably modulated by our synthesis procedures and experimental parameters, which makes it a useful summary representation of the resulting input statistics. We visualize the joint distributions of both synthetic and natural speech samples and find that the family of manipulations approximately fills the space (see Fig. 3 for a schematic summary). Natural speech sits roughly at the center, with the extremes in this space representing regions of canonical non-speech signals like noise, clicks, simple tones, and beeps. The magnitude of the experimental manipulations relates to how much synthetic samples are pushed away from natural speech in various directions. In the next sections we will present similar graphs alongside the main results, for each experiment separately, to aid in the description of the data.

As we detail in the experiments below, we observe that top performance (80-100%) on perturbed synthetic speech includes limited regions outside the natural distribution where both humans and machines exhibit inherent robustness (see Fig. 4-7 right-hand panels for individual experiment distributions). In sum, the family of perturbations considered here are naturally described as spanning the space of sparsity, and can parametrically drive speech stimuli outside the training distribution, where machines and humans exhibit some generalization.

Convergent robustness: artificial systems exhibit humanlike multi-scale invariances to classical perturbations

We find that machines display qualitatively similar performance patterns to humans in classical experiments where the temporal and spectral granularities, and the perturbations themselves, are manipulated. We summarize the findings next.

Shuffling destroys information to a greater extent than, for instance, reversal of the time-domain samples, as it affects the local spectrum.
The manipulation pushes speech towards a region of reduced spectral, and, eventually, temporal sparsity (Fig. 4B). Consequently, humans show a more dramatic decline with increasing temporal extent (Gotoh et al., 2017). We observe the same effect in machines (Fig. 4A). Performance declines steadily with increasing window size until speech is rendered unrecognizable at around the 2-ms timescale. All models show this basic pattern and cutoff, although with varying rates of decline.

Reversal, which affects the temporal order but preserves the local magnitude spectrum - leaving the sparsity statistics largely untouched (Fig. 4D) - produces a complicated performance contour in humans (Gotoh et al., 2017). Perfect performance for window sizes between 5 and 50 ms, and even partial intelligibility for those exceeding 100 ms, is readily achieved by humans even though speech sounds carry defining features evolving rapidly within the reversal window. We find that this timescale-specific resistance to reversal (Saberi & Perrott, 1999) is closely traced - with increasing precision as more accurate estimates are obtained (Ueda et al., 2017) - by automatic speech recognition systems (Fig. 4C).

[Figure 3: Schematic of how natural and experimental distributions fill the input space defined by sparsity in time and frequency. The natural speech distribution is shown in grayscale hexagons located at the center. A subset of the processed audio samples are shown in color according to 4 example experiments (color code on the top right). Each dot represents a speech utterance that has been perturbed according to an example resynthesis procedure (here shuffling [orange], masking [purple], silencing [violet] and mosaicizing [red]; see main text and Methods for details). The perturbed signal is run through the sparsity analysis, obtaining one value for time sparsity and another for frequency sparsity. Hue and size indicate the magnitude of the perturbation according to its respective parameter set (e.g., window length). Marginal distributions are 'filled' such that the proportion of samples for different experiments is reflected at each point. It can be seen that audio transformations systematically push samples away from the training set. Canonical signals (noise, tone, beep, click) are annotated at the extremes for reference. The sparsity plots for each perturbation are reported later individually.]

Time warping alters the duration of the signal without affecting the spectral content. Similar to size in vision, a system confronted with a time-warped sound needs to handle an 'object' that has been rescaled. Humans can cope with stretched or compressed speech with decreasing performance up to a factor of 3-4, with a faster decline for compression (Fu et al., 2001). Stretching and compression manifest in the input space as translation in the time-domain sparsity axis in opposite directions (Fig. 4F). We find that neural network performance follows the U-shaped curve found in humans and exhibits the characteristic asymmetry as well (Fig. 4E). Performance is worst when the warp factor is either 4.0 (compression) or 0.25 (stretching) and it shows a steeper ascent when decreasing compression than when decreasing stretching. The best performance is achieved, as expected, when the warp factor is 1.0 (no compression or stretching, i.e., the natural signal).

Mosaic sounds are analogous to pixelated images and therefore better suited than reversed speech to probe the resolution the system needs for downstream tasks (Nakajima et al., 2018). Values within bins in the time-frequency canvas are pooled such that the representation is 'pixelated'. The size of the bins is manipulated to affect the available resolution. This corresponds to a decrease in sparsity that scales with the spectrotemporal bin size (Fig. 5B).
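The pooling step behind mosaic speech can be illustrated directly on a magnitude spectrogram. This sketch (our naming) shows only the 'pixelation'; a full mosaic-speech synthesis would additionally resynthesize a waveform, e.g. noise shaped by the pooled spectrogram:

```python
import numpy as np

def mosaicize(spec, t_bin, f_bin):
    """'Pixelate' a magnitude spectrogram by averaging energy within
    f_bin x t_bin time-frequency cells (cf. Nakajima et al., 2018).
    spec has shape (n_freq, n_time)."""
    n_f, n_t = spec.shape
    out = np.empty_like(spec, dtype=float)
    for f0 in range(0, n_f, f_bin):
        for t0 in range(0, n_t, t_bin):
            cell = spec[f0:f0 + f_bin, t0:t0 + t_bin]
            out[f0:f0 + f_bin, t0:t0 + t_bin] = cell.mean()
    return out
```

Because each cell is replaced by its mean, total energy statistics are preserved while within-cell structure (the fine spectrotemporal detail) is discarded, which is exactly the resolution manipulation described above.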
When the temporal resolution of the auditory system is probed in this way at multiple scales, we find that, as seen in humans, a systematic advantage emerges over the locally reversed counterpart in ANNs (Fig. 5A).

Chimaeric sounds factor the signal into the product of sub-band envelopes and temporal fine structure to combine one and the other component extracted from different sounds. Although the importance of the envelopes has been emphasized (Shannon et al., 1995), recent experiments suggest that this may only be part of the mechanism, with the fine structure having a unique contribution to speech intelligibility. Speech-noise chimeras can be constructed such that task-related information is present in the envelopes or the fine structure only (Smith et al., 2002). We observe that fine-structure speech shows up as less sparse in the time domain due to the removal of envelope information, and its frequency-domain sparsity is modulated by the number of bands (Fig. 5E). Both humans and machines show a characteristic sensitivity to the number of bands used for synthesis: performance over the entire range is boosted or attenuated depending on whether information is present in the envelopes or the fine structure (Fig. 5C).

An additional effect concerns the perceptual advantage of locally reversed speech at the level of sub-band envelopes over both the time-domain waveform reversal and the speech-noise chimera with reversed envelopes. We find that models, too, exhibit this uniform advantage (Fig. 5D). Performance is best when the reversal timescale is roughly less than 50 ms and then rapidly declines and plateaus after the 100-ms timescale where speech is unrecognizable. Following this general trend, the speech-noise chimeras produce the least resilient performance.
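A bare-bones speech-noise chimera, in the spirit of Smith et al. (2002), can be built with a bandpass filterbank and the Hilbert transform. This is an illustrative sketch, not the original synthesis code: band edges are linearly spaced here for simplicity, whereas the original work uses cochlear (logarithmic) spacing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def chimerize(sound_a, sound_b, sr, n_bands):
    """In each frequency band, combine the envelope of sound_a with
    the temporal fine structure of sound_b, then sum across bands."""
    edges = np.linspace(80, sr / 2 * 0.9, n_bands + 1)
    out = np.zeros(len(sound_a))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        a = sosfiltfilt(sos, sound_a)
        b = sosfiltfilt(sos, sound_b)
        env_a = np.abs(hilbert(a))            # envelope of A
        tfs_b = np.cos(np.angle(hilbert(b)))  # fine structure of B
        out += env_a * tfs_b
    return out

sr = 16_000
rng = np.random.default_rng(1)
speech_like = rng.standard_normal(sr)  # stand-in for a speech waveform
noise = rng.standard_normal(sr)
chimera = chimerize(speech_like, noise, sr, n_bands=8)
```

Swapping the roles of the two inputs yields the complementary chimera, i.e. one carrying the speech information in the fine structure rather than the envelopes.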
Divergent robustness: multi-scale interruptions reveal differential humanlikeness among machines

Speech interruptions perturb a fraction of the windows at a given timescale with either silence or a noise mask. With this manipulation, the system is presented with 'glimpses' of the input signal. The redundancies in the speech signal are such that at various interruption frequencies, for example between 10 and 100 ms window size, humans show good performance even though a substantial fraction of the input has been perturbed or eliminated (Miller & Licklider, 1950). Mask interruptions corrupt a fraction of the signal by adding noise. This shifts the speech samples mainly towards regions of decreasing spectral sparsity (Fig. 6B). Interruptions of silence, on the other hand, zero out a fraction of the signal, effectively removing all the information in it. As a consequence, speech samples become increasingly temporally sparse (Fig. 6D).

We find that models exhibit idiosyncratic performance patterns across timescales such that they pairwise agree to different extents depending on the perturbation window size. Humans, as well as some of the models we tested, exhibit a perceptual profile where obstructions (mask or silence) at large timescales produce moderately bad performance, which recovers almost completely at intermediate timescales, reaches its worst at moderately short timescales, and finally improves slightly at the smallest timescales. As the masking window size decreases from 1000 ms to 100 ms, some models' performance declines to its worst and then quickly recovers, such that they achieve their best at around 50 ms and shorter timescales. On the other hand, a recent transformer architecture with a waveform front end, pretrained using self-supervision, shows an overall better qualitative match to human performance, although quantitative differences are still apparent in all cases (Fig. 6A,C).
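The interruption manipulation is straightforward to sketch: pick a window size, then replace a random fraction of windows with silence or with a white-noise mask. Function and parameter names below are ours, not the paper's:

```python
import numpy as np

def interrupt(signal, sr, window_ms, fraction, mode="silence", seed=0):
    """Obstruct a random fraction of fixed-length windows with silence
    or a white-noise mask (the 'glimpsing' manipulation of Miller &
    Licklider, 1950)."""
    rng = np.random.default_rng(seed)
    win = max(1, int(sr * window_ms / 1000))
    out = np.asarray(signal, dtype=float).copy()
    scale = out.std()  # noise level for the mask mode
    for start in range(0, len(out), win):
        if rng.random() < fraction:
            seg = out[start:start + win]  # view into out: edits in place
            if mode == "silence":
                seg[:] = 0.0
            else:  # mode == "mask"
                seg[:] = rng.standard_normal(len(seg)) * scale
    return out

sr = 16_000
rng = np.random.default_rng(2)
speech = rng.standard_normal(sr)  # stand-in for a 1 s utterance
silenced = interrupt(speech, sr, window_ms=50, fraction=0.5)
```

Varying `window_ms` while holding `fraction` fixed reproduces the timescale sweep of the experiments: the same proportion of the signal is obstructed, but at different interruption frequencies.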
Nonrobustness: machines fail to exhibit humanlike performance profiles in response to repackaging

Repackaging combines different aspects of the previous audio manipulations (time warping, multiple-timescale windowing, and insertions of silence) to reallocate, as opposed to remove or corrupt, speech information in time. Repackaged speech can therefore be made more temporally sparse (Fig. 7B) without losing information. As we have shown above, the performance of both humans and machines degrades with increasing temporal compression. Here we focus further on a key finding: when perceiving compressed speech, humans benefit from repackaging (Ghitza & Greenberg, 2009). Insertions of silence, roughly up to the amount necessary to compensate for the compression, recover performance dramatically; this effect has been replicated and further characterized numerous times (Ghitza & Greenberg, 2009; Ghitza, 2012, 2014; Bosker & Ghitza, 2018; Penn, Ayasse, Wingfield, & Ghitza, 2018; Ramus, Carrion-Castillo, & Lachat, 2021). We find that, across the entire space of experimental parameters, machines fail to show any such recovery (Fig. 7A). The canonical performance profile in humans is worst when the signal is compressed by a factor approaching 3 and no silence is inserted. As the amount of silence inserted compensates for the duration lost to temporal compression, performance improves; after that, it declines again, producing a characteristic U shape. The systems tested here, on the other hand, show poor performance with heavily compressed speech that simply worsens with increasing insertions of silence, with no inflection when the insertion length precisely compensates for the temporal compression.

Discussion

In this paper we considered the possibility that engineering solutions to artificial audition might qualitatively converge, in more ways than merely performance level, with those implemented in human brains.
If the task is too simple, then it is conceivable that many qualitatively different solutions are possible in principle. In that case, convergence of performance between humans and engineered systems would not be surprising, but convergence of algorithmic solutions would be. If the constraints on the task become more nuanced, however, then any system learning to solve the task is forced into a narrower space of possible algorithmic solutions. In this latter case, similar performance levels might be suggestive of similar algorithmic solutions. Here we set out to investigate whether this might be the current scenario regarding human speech perception and neural networks for automatic speech recognition. In a set of studies we had high-performing speech-to-text neural networks perform theoretically driven tasks while sweeping through the parameter space of foundational speech experiments. We additionally explored how the audio perturbations in each experiment relate to their unperturbed counterparts and to canonical audio signals. We found that a subset of multi-scale perturbations including reversal, shuffling, time warping, chimerizing, and mosaicizing yield performance curves and effects that are similar amongst models and to some extent aligned with those of humans. More destructive perturbations such as masking and silencing reveal performance patterns where models differ from each other and from humans. The most informative outcome we observed comes from the repackaging experiment, in which all models resemble each other closely while systematically failing to capture human performance. This finding highlights a set of possible endogenous mechanisms currently absent from state-of-the-art neural network models. We focus here on the broad qualitative trends that are informative for theory and model development as we discuss the implications for the reverse-engineering and (more speculatively) the forward-engineering of hearing.
We found that several classical phenomena in speech perception are well predicted by high-performing models. These comprise the performance curves across scales in response to reversed (Saberi & Perrott, 1999), shuffled (Gotoh et al., 2017), time-warped (Fu et al., 2001), mosaicized (Nakajima et al., 2018), and chimerized (Smith et al., 2002) speech. Humans and machines perform well in these non-ecological conditions at qualitatively similar scales, and this emerges simply as a result of training on the downstream task of recognition. This need not have been the case if, for example, different solutions to the problem were possible (e.g., equally predictive cues) and systems had various inductive biases pushing towards them differently. Overall, these similarities could be interpreted as a form of shared out-of-distribution robustness: neither humans nor machines need any specific training to achieve it. These effects do not correspond to the perturbations having no measurable effect whatsoever, as they are known to lie well above the detection threshold; stimuli appear unnatural even to untrained listeners, and the results generally agree with foundational studies (e.g., Shannon et al., 1995). Broad agreement between different architectures for multiple-spectrotemporal-scale manipulations, as we have found, would tentatively suggest that the problem of speech recognition offers enough constraints that humans and artificial systems naturally converge in high-performance regimes. This is the prevailing view behind studies predicting brain activity using these kinds of models (e.g., Millet & King, 2021): that high-performing networks settle on hyperparameter regions which, although chosen for engineering reasons, turn out to be human-like in some relevant way (e.g., similar receptive field sizes). However, we observe some marked differences emerging among artificial systems, and between these and humans, when the signal is perturbed more aggressively.
These comprise the masking and silencing manipulations (Miller & Licklider, 1950), where the performance profiles vary more widely. The perturbations we deploy are not natural, but they have been designed to probe attributes of perception that the developing auditory system must acquire as it is confronted with natural signals, such as resilience to temporal distortions due to reverberation and various forms of masking. The possible reasons for differences between the models themselves are of secondary importance here, as we are specifically concerned with their ability to capture qualitative human behavioral patterns. Although there might be a way to reconcile these diverse performance patterns by altering minor parameters in the architectures, our work, together with a parallel effort using different methods (Weerts, Rosen, Clopath, & Goodman, 2021), highlights a more fundamental difficulty of these architectures in performing well in the presence of noise. Weerts et al. (2021) compared 3 artificial systems with human performance using a test battery of psychometric experiments (e.g., spectral invariance, peak and center clipping, spectral and temporal modulations, target periodicity, competing talker backgrounds, and masker modulations and periodicity) to measure the importance of various auditory cues in sentence- or word-based speech recognition. They find that systems display similarities and differences in terms of what features they are tuned to (e.g., spectral vs. temporal modulations, and the use of temporal fine structure). As in our work, the self-supervised CNN-Transformer model exhibited a relatively greater similarity to humans, which follows a recent trend in vision (Tuli, Dasgupta, Grant, & Griffiths, 2021). Both these similarities and differences have alternative interpretations.
With regard to the dissimilarities, it could be argued that the performance patterns point to differently tuned parameters of similar mechanisms (e.g., different effective receptive field sizes of architecturally similar systems), or alternatively to more important mechanistic differences. With regard to the similarities, the results could be a consequence of how information is distributed in the input signal (i.e., where in the signal and envelope spectra information is carried), and as such not provide compelling evidence that these models process signals in a human-like way. By visual analogy, if image content were consistently concentrated in certain locations on the canvas, perturbations applied systematically and selectively across the canvas would similarly affect any systems that make use of such information (i.e., produce similarly complicated performance curves). This certainly tells us about the way task-related information is distributed in the signal, and that high-performing problem solutions are constrained to the set that exploits such information, but it does not otherwise provide much mechanistic insight. On the other hand, these similarities may reflect important aspects of convergence between human and machine solutions. Therefore, although this class of findings is informative in many ways, the outcomes do not point unambiguously to mechanistic differences and similarities. The repackaging experiments, however, yield consistent and unambiguous failures that allow stronger conclusions to be drawn. The perception of temporally repackaged speech (Ghitza & Greenberg, 2009; Ghitza, 2012) is a scenario where the mutual similarity of neural network models, and their substantial deviation from human performance, are remarkably consistent.
Our repackaging experiments demonstrate a systematic failure of all models to recover perceptual performance in the specific conditions where humans naturally do: when the windowed compression of speech is compensated by insertions of silence. This consistent pattern emerges across diverse models, demonstrating its robustness against substantial architectural variation. Our simulations cover the whole set of experimental parameter combinations, so we can rule out the presence of the effect even in cases where it would show up in a parameter region away from where experiments in humans have been specifically conducted (e.g., for different compression ratios or window sizes). The human behavioral profile in response to repackaged speech (Ramus et al., 2021; Ghitza & Greenberg, 2009) can be interpreted in landmark-based (e.g., 'acoustic edges') and oscillation-based (e.g., theta rhythms) frameworks. On the former view (e.g., Oganian & Chang, 2019; Hamilton, Oganian, Hall, & Chang, 2021), acoustic cues in the signal envelope increasingly resemble the original as compression is compensated by insertions of silence. On the latter view (Ghitza & Greenberg, 2009; Ghitza, 2012, 2014), which has been the subject of further developments regarding neural implementation (Poeppel & Assaneo, 2020; Giraud & Poeppel, 2012; Teng, Tian, Doelling, & Poeppel, 2017), insertions of silence enable an alignment with endogenous time constraints embodied by neural oscillations at specific timescales.
A related conceptual framework, compatible with both the acoustic-landmark and oscillation-based accounts, explains the effect in terms of concurrent multiple-timescale processing (Poeppel et al., 2008; Poeppel, 2003; Teng, Tian, Rowland, & Poeppel, 2017; Teng, Tian, & Poeppel, 2016): the auditory system would elaborate the input signal simultaneously at two timescales (roughly, 25-50 and 150-250 ms), and therefore an inherent compensatory strategy when the local information is distorted (e.g., compressed) is to perform the task based on the global information that remains available (e.g., due to insertions of silence). The important point for present purposes is that all these accounts of repackaged speech involve endogenous mechanisms (e.g., neural excitability cycles) currently absent from state-of-the-art neural network models, with each theoretical proposal attributing model failures to these architectural shortcomings. These might be crucial for a better account of human audition, and could provide inductive biases that enable machine robustness in various real-world auditory environments. A promising direction, therefore, is to incorporate oscillations into the main mechanisms of computational models (e.g., Effenberger, Carvalho, Dubinin, & Singer, 2022; Kaushik & Martin, 2022; ten Oever & Martin, 2021), or otherwise introduce commitments to dynamic temporal structure (e.g., using spiking neural networks; Stimberg, Brette, & Goodman, 2019) beyond excitability cycles. A further line of reasoning about the dissimilarities observed in repackaging experiments has to do with the computational complexity of the processes involved (van Rooij & Wareham, 2007).
Repackaging manipulations have been interpreted as tapping into segmentation (Ghitza & Greenberg, 2009; Ghitza, 2014), a subcomputation that has been widely assumed to be computationally hard in a fundamental way (surveyed briefly in Adolfi, Wareham, & van Rooij, 2022a; e.g., Friston et al., 2021; Poeppel, 2003; Cutler, 1994). On this view, any system faced with a problem that involves segmentation as a sub-problem (e.g., speech recognition) would be forced to acquire the (possibly unique) solution that, by exploiting ecological constraints, renders the problem efficiently computable in the restricted case. However, contrary to common intuitions, it is possible that segmentation is efficiently computable even in the absence of such constraints (Adolfi, Wareham, & van Rooij, 2022b). Since it is conceivable that segmentation is not a computational bottleneck in this sense, its intrinsic complexity might not be a driving force pushing different artificial or biological systems to acquire similar solutions to problems involving this subcomputation. This constitutes, from a theoretical and formal standpoint, a complementary explanation for the qualitative divergence between humans and machines observed in our results.

Closing remarks

Our work and recent independent efforts (Weerts et al., 2021) suggest that, despite some predictive accuracy in neuroimaging studies (Millet & King, 2021; Kell, Yamins, Shook, Norman-Haignere, & McDermott, 2018; Tuckute, Feather, Boebinger, & McDermott, 2022; but see Thompson, Bengio, & Schoenwiesner, 2019), automatic speech recognition systems and humans diverge substantially in various perceptual domains.
Our results further suggest that, far from being simply quantitative (e.g., receptive field sizes), these shortcomings are likely qualitative (e.g., lack of flexibility in task performance through exploiting alternative spectrotemporal scales) and would not be solved by such strategies as introducing different training regimens or increasing the models' capacity. They would require possibly substantial architectural modifications for meaningful effects such as repackaging to emerge. The qualitative differences we identify point to possible architectural constraints and improvements, and suggest which regions of experimental space (i.e., which effects) are useful for further model development and comparison. Since repackaging is where all models systematically resemble each other and clearly fail in capturing human behavior, this effect offers alternative directions for theorizing, computational cognitive modeling, and, more speculatively, potential improvement of engineering solutions. To develop a deeper understanding of how the models themselves can be made independently more robust, one could implement data augmentation schemes with the perturbations we deployed here. It is conceivable that these could act as proxies for the natural distortions humans encounter in the wild, and therefore help close performance gaps where it is desired for engineering purposes. A related line of research could pursue the comparison of frontend-backbone combinations to evaluate whether particular pairings are effective in some systematic manner in combination with such data augmentation schemes. More generally, our approach and results showcase how a more active synergy between the cognitive science and engineering of hearing could be mutually beneficial. Historically, there was a close relationship between work in the cognitive sciences, broadly construed, and engineering. 
Researchers were mindful of both behavioral and neural data in the context of building models (e.g., Ghitza, 1986; Bell Laboratories). Perhaps as a consequence of exclusively quantitative, benchmark-driven development and the recent disproportionate focus on prediction at the expense of explanation (see , for a review of how this played out in vision), this productive alliance has somewhat diminished in depth and scope, but the potential gains from a reconnection and realignment between the disciplines are considerable.

Methods

Framework

Generalizing from the particular studies examining audition at multiple scales, we build a unified evaluation environment centered around selective transformations at different spectrotemporal granularities and their influence on perceptual performance (Fig. 1). We deliberately shift the focus away from quantitative measures of fit and towards a qualitative assessment (see Navarro, 2019, for details on the rationale). In contrast to problematic practices centered on predictive accuracy that have led to misleading conclusions (see Bowers et al., 2022, for a thorough review), we focus on assessing whether artificial systems (treated here as stimulus-computable, optimized observer models) qualitatively capture a whole family of idiosyncratic performance patterns in human speech perception that point to the kinds of solutions systems have acquired. Our framework situates existing experiments in humans as a subset of the possible simulations, allowing us to search exhaustively for qualitative signatures of human-like performance even when these show up away from the precise original location in experimental space (we show a representative summary of our results throughout).

Audio synthesis

Multiscale windowing. Common to all conditions is the windowing of the signal separately at multiple spectral and/or temporal scales.
We used a rectangular window function to cut the original speech into frames and faded the concatenated frames to avoid discontinuities (although these turn out not to affect the results). Transformations with known properties (see below) are applied to each window (i.e., chunk of the signal), either directly in the time domain or in the time-frequency domain. The window size is a general parameter that determines the scale selectivity of the manipulations described below. The timescales depend on the experiment and range from a few milliseconds to over one second. The performance of the models under different perturbations is then evaluated separately at various scales.

Reversal. The signal is locally reversed in time (Saberi & Perrott, 1999; Gotoh et al., 2017), resulting in frame-wise time-reversed speech. This affects the order of the samples but preserves the local average magnitude spectrum. The performance curve is estimated at 58 timescales on a logarithmic scale ranging from 0.125 to 1200 ms.

Shuffling. Audio samples are locally shuffled such that the temporal order within a given window is lost, consequently destroying temporal order at the corresponding scale. This random permutation is more aggressive than reversal in the sense that it does affect the local magnitude spectrum (Gotoh et al., 2017). The performance curve is estimated at 58 timescales on a logarithmic scale ranging from 0.125 to 1200 ms.

Time warping. Signals are temporally compressed or stretched in the time-frequency domain, effectively making speech faster or slower without affecting the pitch (Park et al., 2019). The modified short-time Fourier transform is then inverted to obtain the final time-domain, time-warped signal (Perraudin, Balazs, & Søndergaard, 2013). The average magnitude spectrum is approximately invariant, whereas the local spectrum is equivariant when compared between equivalent timescales (Ghitza & Greenberg, 2009; Fu et al., 2001).
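The frame-wise reversal and shuffling amount to within-window permutations of samples. A minimal numpy sketch (function names are mine; the actual pipeline additionally fades frame boundaries):

```python
import numpy as np

def frames(x, win):
    """Split into consecutive fixed-length windows (last may be shorter)."""
    return [x[i:i + win] for i in range(0, len(x), win)]

def local_reverse(x, win):
    """Reverse samples within each window; global window order is preserved."""
    return np.concatenate([f[::-1] for f in frames(x, win)])

def local_shuffle(x, win, seed=0):
    """Randomly permute samples within each window (more aggressive:
    unlike reversal, it alters the local magnitude spectrum)."""
    rng = np.random.default_rng(seed)
    return np.concatenate([rng.permutation(f) for f in frames(x, win)])
```

Note that `local_reverse` is an involution: applying it twice at the same window size recovers the original signal.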
The performance curve is estimated at 40 parameter values on a logarithmic scale ranging from compression by a factor of 4 to stretching by a factor of 4.

Chimerism. Signals are factored into their envelope and fine-structure parts, allowing the resynthesis of chimeras that combine the slow amplitude modulations of one sound with the rapid carriers of another (Smith et al., 2002). Here we combine these features from speech and Gaussian noise. To extract the two components, signals are passed through a bank of band-pass filters modeled after the human cochlea, yielding a spectrogram-like representation in the time-frequency domain called a cochleagram. A cochleagram is thus a time-frequency decomposition of a sound that shares features of human cochlear processing (Glasberg & Moore, 1990). Using the filter outputs, the analytic signal is computed via the Hilbert transform. Its magnitude is the envelope part, and dividing it out of the analytic signal leaves only the fine structure. The spectral acuity of the synthesis procedure can be varied with the number of bands used to cover the bandwidth of the signal. Multiple-timescale manipulations, such as reversal, are directed to either the envelope or the fine structure prior to assembling the sound chimera. The performance curve is estimated at 32 timescales on a logarithmic scale ranging from 10 to 1200 ms.

Mosaicism. Speech signals can be mosaicized in the time and frequency coordinates by manipulating the coarse-graining of the time-frequency bins. Similar to a pixelated image, a mosaicized sound conveys a signal whose spectrotemporal resolution has been altered systematically. The procedure is done on the envelope/fine-structure representation before inverting back to the waveform. Two parameters affect the spectral and temporal granularity of the manipulation: the window length in time and in frequency. This yields a grid in the time-frequency domain.
The envelope in each cell is averaged, and the 'pixelated' cochlear envelopes are used to modulate the fine structure of Gaussian noise. Finally, the signal is resynthesized by adding together the modulated sub-bands (Nakajima et al., 2018). The performance curve is estimated at 32 timescales on a logarithmic scale ranging from 10 to 1200 ms.

Interruptions. A parametrically varied fraction of the within-window signal is corrupted either with Gaussian noise at different signal-to-noise ratios or by setting the samples to zero (Miller & Licklider, 1950). Sparsity is increased or decreased while preserving the original configuration of the unmasked fraction of the signal. The performance curve is estimated at 30 timescales on a logarithmic scale ranging from 2 to 2000 ms.

Repackaging. A repackaged signal locally redistributes the original chunks of samples in time (Ghitza & Greenberg, 2009; Ramus et al., 2021). Within each window, the signal is temporally compressed without affecting its pitch (see above) and a period of silence is concatenated. The time-compressed signal can alternatively be thought of as a baseline before adding the insertions of silence. Two parameters control the sparsity of the resulting signal: the amount of compression and the length of the inserted silence. Other parameters that mitigate discontinuities, such as additive noise and amplitude ramps, do not affect the results. For a signal that has been compressed by a factor of 2, inserting silence of length equal to 1/2 of the window size will locally redistribute the original signal in time while keeping the overall duration intact. The performance curve (explored at multiple compression ratios and window sizes, shown here at parameters matching the human experiments) is estimated at 10 audio-to-silence duration ratios ranging from 0.5 to 2.0 on a logarithmic scale.
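The repackaging scheme can be sketched as below. For brevity, plain decimation stands in for the pitch-preserving time compression used in the actual pipeline (decimation alters pitch; a phase-vocoder warp would be used in practice), and the parameter names are mine:

```python
import numpy as np

def repackage(x, win, compress=2, audio_silence_ratio=1.0):
    """Window-wise: compress the audio, then append silence to each chunk."""
    out = []
    for i in range(0, len(x), win):
        chunk = x[i:i + win][::compress]                 # compression stand-in
        n_silence = int(round(len(chunk) / audio_silence_ratio))
        out.append(chunk)
        out.append(np.zeros(n_silence))                  # inserted silence
    return np.concatenate(out)
```

With `compress=2` and an audio:silence ratio of 1.0, each window contributes equal parts compressed audio and silence, so the overall duration of the original signal is preserved, which is the condition where human performance recovers.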
Neural network models

We evaluate a set of state-of-the-art speech recognition systems with diverse architectures and input types (Table 1; available through the references cited below). These include fully trained convolutional, recurrent, and transformer-based systems, with front ends that interface with either waveform or spectrogram inputs. Their accuracy under natural (unperturbed) conditions is high (~80% correct, in terms of word error rate) and comparable (see, e.g., Fig. 4E when the warp factor equals 1, i.e., no perturbation). Deepspeech is based on a recurrent neural network architecture that works on the MFCC features of a normalized spectrogram representation (Hannun et al., 2014). This type of long short-term memory (LSTM) architecture emerged as a solution to the problem of modeling large-timescale temporal dependencies. The input is transformed by convolutional, recurrent, and finally linear layers projecting onto classes representing a vocabulary of English characters. It was trained on the Librispeech corpus (Panayotov, Chen, Povey, & Khudanpur, 2015) using a CTC loss (Graves, Fernández, Gomez, & Schmidhuber, 2006). Silero works on a short-time Fourier transform of the waveform, obtaining a tailored spectrogram-like representation that is further transformed using a cascade of separable convolutions (Veysov, 2020). It was trained using a CTC loss (Graves et al., 2006) on the Librispeech corpus, with alphabet letters as modeling units. The Fairseq-S2T model is transformer-based, and its front end interfaces with log-mel filterbank features (Wang et al., 2020). It is an encoder-decoder model with 2 convolutional layers followed by a transformer architecture of 12 multi-level encoder blocks. The input is a log-mel spectrogram of 80 mel-spaced frequency bins, normalized by de-meaning and division by the standard deviation. This architecture is trained on the Librispeech corpus using a cross-entropy loss and a unigram vocabulary.
Wav2vec2 is a convolutional- and transformer-based architecture (Baevski et al., 2020; Schneider et al., 2019). In contrast to the previous architectures, it works directly on the waveform representation of the signal, and it was pretrained using self-supervision. In this case, the relevant features are extracted by the convolutional backbone, which performs convolution over the time dimension. The temporal relationships are subsequently modeled using the transformer's attention mechanism. The input to the model is a sound waveform of unit variance and zero mean. It is trained via a contrastive loss in which the input is masked in latent space and the model must distinguish it from distractors. To encourage the model to use samples equally often, a diversity loss is used in addition. The fine-tuning for speech recognition is done by minimizing a CTC loss (Graves et al., 2006) with a vocabulary of 32 classes of English characters. The model was trained on the Librispeech corpus.

Evaluation

We measure the number of substitutions S, deletions D, insertions I, and correct words C, and use them to compute the word error rate (WER) reflecting the overall performance of models on speech recognition, as follows:

WER = (S + D + I) / (S + D + C)    (1)

A lower score indicates fewer errors overall and therefore better performance. Since N_ref = S + D + C is the number of words in the ground-truth labels and appears in the denominator, the WER can reach values greater than 1.0. We evaluate all models on the Librispeech test set (Panayotov et al., 2015), which none of the models have seen during training, manipulated and resynthesized for each experiment according to our synthesis procedures. In all cases we plot average performance scores across this large set of utterances; variability is negligible. We use English-language speech, as the effects we focus on in humans appear to be independent of language (e.g., Gotoh et al., 2017).

Input statistics

Sparsity.
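Equation (1) equals the word-level edit distance between hypothesis and reference divided by the reference length, since S + D + C = N_ref and S + D + I is the minimum number of edits. A minimal sketch (an illustration, not the paper's implementation):

```python
import numpy as np

def wer(ref, hyp):
    """Word error rate: (S + D + I) / N_ref via word-level edit distance."""
    r, h = ref.split(), hyp.split()
    # d[i, j]: minimum edits turning the first i reference words
    # into the first j hypothesis words
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)   # deletions only
    d[0, :] = np.arange(len(h) + 1)   # insertions only
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[len(r), len(h)] / max(len(r), 1)
```

Because insertions appear in the numerator but not in N_ref, a hypothesis much longer than the reference yields a WER above 1.0, as noted above.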
The input samples in the natural and synthetic versions of the evaluation set are characterized by their sparsity in time and frequency. We compute the Gini coefficient G of a signal representation x of length n (in the time or frequency domain), which exhibits a number of desirable properties as a measure of signal sparsity (Hurley & Rickard, 2009):

G = (Σ_i Σ_j |x_i − x_j|) / (2 n² x̄)    (2)

We characterize the joint distributions of sparsity in the time and frequency domains from the point of view of audition systems, which process sounds sequentially over restricted timescales. Specifically, we compute a time-windowed Gini, G_w, at various window lengths w, resulting in a multiple-timescale dynamic sparsity measure. We focus on the 220 ms timescale, which roughly aligns with both human cognitive science results (Poeppel, 2003) and the receptive field sizes of the neural network models. The result for a given timescale is summarized by statistics on the Gini coefficients across n signal slices of length w:

Ĝ_w = f({G(x_i)}_{i=1}^{n})    (3)

where f(·) may be the mean (our case), the standard deviation, etc. We obtain in this way both a time-sparsity and a frequency-sparsity measure for each speech utterance, for all natural and perturbed test signals. Since G is sensitive to the experimental manipulations, this allows us to summarize and visualize a low-dimensional, interpretable description of the distributions at the input of the systems (e.g., Evans et al., 2021).

Figure 1: Human speech (A) is recorded and represented as a 1-dimensional signal in the time domain (B)

Figure 5: Mosaicized and chimerized speech reveal relatively similar reliance on sub-band envelopes and fine structure across timescales in humans and machines.
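Equations (2) and (3) can be sketched directly. A minimal numpy version (the O(n²) pairwise form is used for clarity, and magnitudes are taken since sparsity measures are defined on nonnegative values):

```python
import numpy as np

def gini(x):
    """Gini coefficient of |x|, Eq. (2): a scale-invariant sparsity measure."""
    x = np.abs(np.asarray(x, dtype=float))
    n = len(x)
    mean = x.mean()
    if mean == 0:
        return 0.0
    return np.abs(x[:, None] - x[None, :]).sum() / (2 * n ** 2 * mean)

def windowed_gini(x, win, f=np.mean):
    """Eq. (3): statistic f of Gini over consecutive slices of length win."""
    slices = [x[i:i + win] for i in range(0, len(x) - win + 1, win)]
    return float(f([gini(s) for s in slices]))
```

A constant signal has G = 0 (maximally dense), while a vector with a single nonzero sample approaches G = 1 (maximally sparse) as n grows, consistent with the properties surveyed by Hurley & Rickard (2009).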
We plot performance (WER; left-hand panels) as a function of either window length or number of bands (shading indicates the 95% CI summarizing similar performance across all models; all insets show human performance with x/y-axis ranges comparable to the corresponding main graphs). Speech mosaics (A) with different temporal bin widths (increasing spectral widths shown in lighter shades of red) elicit a uniform advantage relative to multiple-timescale reversal (inset adapted from Nakajima et al., 2018, shows performance for mosaic speech in squares and locally time-reversed speech in triangles, the latter exhibiting a steeper decline). Speech-noise chimeras (C) reveal human-like performance modulations as a function of the number of bands used for synthesis and of whether speech information is present in the envelopes or the fine structure (inset adapted from Smith et al., 2002, shows increasing performance for envelope in circles and decreasing performance for fine structure in triangles; solid lines represent the relevant speech-noise chimeras). A further time-reversal manipulation selectively on the sub-band envelopes (D; shades of red represent number of bands), preserving speech fine structure, shows a systematic relation to time-domain reversal, as seen in humans (inset shows that performance on time-domain reversal [dotted line] declines earlier than for envelope reversal [solid line]). The effect of the manipulations on the input distributions is visualized with hues and sizes representing synthesis parameters (B: window length; E: number of bands; dashed contour shows region of 85-100% model performance).

Figure 6: Multiple-timescale masking and silencing reveal heterogeneity in model predictions (all insets show human performance with x/y-axis ranges comparable to the corresponding main graphs). We plot performance (WER; left-hand panels) as a function of perturbation timescale (window length in ms).
(A) Masking experiment (inset adapted from Miller & Licklider, 1950, shows human performance for various signal-to-noise ratios). Individual model performance is shown for a fraction of 0.5 and an SNR of -9 dB. (C) Silencing experiment (inset adapted from Miller & Licklider, 1950, shows human performance contours for various silence fractions). Individual model performance (color coding as in panel A) is shown for a fraction of 0.5 for succinctness. In both experiments, the transformer architecture with waveform input qualitatively shows the most human-like perceptual behavior. The effect of the manipulations on the input distributions (right-hand panels) is visualized with hues and sizes representing synthesis parameters (B: window length, mask fraction; D: window length, silence fraction; dashed contour shows region of 85-100% model performance).

Figure 7: None of the architectures predict the recovery of performance with repackaging seen in humans (A; inset adapted from Ghitza & Greenberg, 2009, shows the canonical U-shaped human performance pattern, with x/y-axis ranges comparable to the main graph; solid lines with circle markers represent the relevant manipulation with insertions of silence). We plot performance (WER; left-hand panel) on compressed speech (by a factor of 2) as a function of the audio:silence ratio parameterizing the insertion of silence. Note that the y-axis is reversed here, with lower error towards the origin. Although there is some robustness to compression outside the natural distribution, performance worsens steadily as the insertion length increases. The effect of compressed audio with insertions of silence on sparsity is visualized with hues and sizes representing synthesis parameters (B; dashed contour shows region of 85-100% model performance).
Figure 4: Machines and humans are resistant to (A) shuffling, (C) reversal, and (E) time warping, at comparable granularities, exhibiting qualitatively similar patterns of out-of-distribution robustness (insets adapted fromGotoh et al., 2017;Fu et al., 2001 have x/y-axis ranges comparable to the corresponding main graphs; inset on panel E shows performance for normal hearing listeners [filled shapes] and cochlear implant users [blank shapes]). Color coding of models is indicated in panel A. We plot performance (WER) as a function of perturbation timescale (window length in ms) for shuffling and reversal, and as a function of warp factor for time warping (left-hand panels). The effect of the manipulations on the input distributions (right-hand panels) is visualized with hues and sizes representing synthesis parameters (B: window length, D: window length, F: warp factor, respectively; dashed contour shows region of 85-100% model performance).A 0.1 0.7 2 7 26 90 304 1030 Window length [ms] 0.0 0.5 1.0 1.5 Word error rate [WER] model average humans (inset) 2DConv-TF 1DConv-TF Sep-2DConv LSTM B 0.1 0.4 0.7 Time-domain sparsity 0.1 0.4 0.7 Frequency-domain sparsity natural shuffled C D 0.2 0.4 0.6 Time-domain sparsity 0.4 0.6 0.8 Frequency-domain sparsity natural reversed E F 0.2 0.4 0.6 Time-domain sparsity 0.4 0.6 0.8 Frequency-domain sparsity natural warped Table 1 : 1Algorithmic models.Model Architecture Input deepspeech LSTM Spect. wav2vec 2.0 1DConv-TF. Wave fairseq-s2t 2DConv-TF. Spect. silero Sep-2DConv. Spect. Code implementing the analyses and resynthesis methods described here is available at https://tinyurl.com/2e5echc8 58 timescales on a logarithmic scale ranging from 0.125 to 1200 ms. AcknowledgmentsWe thank Oded Ghitza for clarifications on the original repackaging experiments and Franck Ramus for providing information on various replications and extensions. 
On SL(2, R)-cocycles over irrational rotations with secondary collisions

Alexey V. Ivanov

Abstract. We consider a skew product F_A = (σ_ω, A) over an irrational rotation σ_ω(x) = x + ω of the circle T^1. It is supposed that the transformation A : T^1 → SL(2, R), being a C^1-map, has the form A(x) = R(ϕ(x))Z(λ(x)), where R(ϕ) is the rotation in R^2 by the angle ϕ and Z(λ) = diag{λ, λ^{−1}} is a diagonal matrix. Assuming that λ(x) ≥ λ_0 > 1 with a sufficiently large constant λ_0 and that the function ϕ is such that cos ϕ(x) possesses only simple zeroes, we study the hyperbolic properties of the cocycle generated by F_A. We apply the critical set method to show that, under some additional requirements on the derivative of the function ϕ, the secondary collisions compensate the weakening of hyperbolicity due to primary collisions, and the cocycle generated by F_A becomes hyperbolic, in contrast to the case when secondary collisions can be partially eliminated.

DOI: 10.1134/S1560354723020053
arXiv: 2204.05402
Keywords: linear cocycle, hyperbolicity, Lyapunov exponent, critical set

MSC 2010: 37C55, 37D25, 37B55, 37C60

1 Introduction

One of the fundamental problems in the theory of smooth dynamics is to establish which hyperbolic properties a given dynamical system possesses. In particular, it is important to determine whether the system is uniformly hyperbolic or not and, in the latter case, to prove (or disprove) the positivity of its Lyapunov exponents. Often, instead of one particular system, a family of dynamical systems is considered, so it naturally becomes necessary to answer the questions above in terms of the parameter values of the family. There is a wide literature on this subject (see e.g. [6], [7] and references therein). One may note that the difficulty of the problem grows together with the dimension of the systems under consideration. Due to this fact, the case of one-dimensional discrete systems is much better explored than even the two-dimensional case.
An intermediate position is occupied by skew products over one-dimensional cascades: they retain many features of multidimensional systems, while the driving dynamics is much simpler. They have been studied for several decades and from different points of view; we refer the reader to the following (of course, far from complete) list of papers: [1], [5], [10], [12], [13], [17], [24]. However, it has to be noted that there is a lack of direct constructive methods which allow one to solve the problem mentioned above for a particular dynamical system or a family of systems (see e.g. [2], [11], [13], [25]).

In this paper we continue the study of skew products over an irrational rotation started in [15]. Let

F_A : T^1 × R^2 → T^1 × R^2    (1.1)

be a skew-product map defined by

(x, v) → (σ_ω(x), A(x)v),    (x, v) ∈ T^1 × R^2,

where σ_ω(x) = x + ω is a rotation of the circle T^1 = R/Z with an irrational rotation number ω and A : T^1 → SL(2, R) is a C^1-function. Interest in such skew products is motivated not only by the fact that they can be considered as a bridge between one- and two-dimensional cascades, but also by their direct applications in physics. One may associate to (1.1) the following difference equation:

ψ(y + ω) = A(y)ψ(y),    y ∈ R.    (1.2)

Here, with a slight abuse of notation, A = A ∘ π_st denotes the 1-periodic lift of the matrix-valued function A, where π_st : R → T^1 = R/Z is the quotient map, and ψ = (ψ_1, ψ_2)^tr is an unknown vector-function. Eliminating the second component ψ_2 leads to a second-order difference equation for ψ_1:

m(y)ψ_1(y + ω) + n(y)ψ_1(y) + p(y)ψ_1(y − ω) = 0.    (1.3)

Here m, n, p are known 1-periodic real-valued functions expressed in terms of the entries of A. Such difference equations have many applications. In particular, they appear in the spectral theory of the Schrödinger operator on l^2(Z) and in the stability problem for Hill's equation with quasiperiodic potentials [1], [4], [14]. Another application comes from electromagnetic-wave diffraction in a wedge-shaped region.
Indeed, the Sommerfeld-Malyuzhinets representation for the electric field leads to a system of linear difference equations for two coupled spectral functions (see e.g. [20]). Eliminating one of them gives a second-order difference equation of type (1.3) for the remaining spectral function. It is a remarkable fact that the property of equation (1.3) to possess a solution in one or another functional space correlates with the dynamical properties of the corresponding skew product [21], [23]. For example, exponential dichotomy for (1.2) is equivalent to uniform hyperbolicity of the skew product [3].

In the present paper we consider a skew product (1.1) satisfying the following assumptions. Namely, we suppose that the transformation A can be represented as

A(x) = R(ϕ(x)) · Z(λ(x)),    (1.4)

where

R(ϕ) = [[cos ϕ, sin ϕ], [−sin ϕ, cos ϕ]],    Z(λ) = diag{λ, λ^{−1}},

with some C^1-functions ϕ : T^1 → 2πT^1, λ : T^1 → R such that

(H1) {x ∈ T^1 : cos(ϕ(x)) = 0} = ∪_{j=0}^{N} {c_j};

(H2) for all j = 0, …, N: C_1^{(j)} ε^{−1} ≤ |ϕ′(x)| ≤ C_2^{(j)} ε^{−1} for all x ∈ U_ε(c_j), ε ≪ 1, and Var(ϕ, U_ε(c_j)) ∼ O(1);

(H3) |cos(ϕ(x))| ≥ C_3 for all x ∈ T^1 \ ∪_{j=0}^{N} U_ε(c_j);

(H4) ind(ϕ) = 0;

(H5) λ(x) ≥ λ_0 > 1 for all x ∈ T^1.

Here and in what follows C_k denotes a positive constant, U_ε(x) is the ε-neighbourhood of a point x ∈ T^1, Var(ϕ, U_ε(c_j)) is the variation of the function ϕ in the neighbourhood U_ε(c_j), and ind(ϕ) stands for the index of the closed curve ϕ(T^1). Additionally, we assume that the functions ϕ and λ depend smoothly on a real parameter t ∈ [a, b] ⊂ R such that

(H6) |d/dt ρ(c_j(t), c_k(t))| > C_4 > 0 for all t ∈ [a, b], j ≠ k,

where ρ denotes the standard distance on T^1.

To the skew product (1.1) one may assign a cocycle M(x, n) defined as

M(x, n) = A(σ_ω^{n−1}(x)) ⋯ A(x),    n > 0;
M(x, n) = [A(σ_ω^{n}(x)) ⋯ A(σ_ω^{−1}(x))]^{−1},    n < 0;
M(x, 0) = I.

In this paper we study how the property of hyperbolicity is related to the cocycle parameters.
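As a concrete illustration of these definitions, the following sketch builds the fiber map A(x) = R(ϕ(x))Z(λ(x)) and the cocycle M(x, n) for a hypothetical parameter choice (constant λ(x) = λ_0, ϕ(x) = 2πx, golden-mean rotation number); the steepness hypothesis (H2) is not imposed here, and the values serve only to make the formulas executable.

```python
import math

# Illustrative (hypothetical) cocycle data: constant lambda(x) = LAM0 and
# phi(x) = 2*pi*x, so that cos(phi(x)) has simple zeroes at x = 1/4, 3/4.
# A modest LAM0 is used only to keep floating-point round-off negligible;
# the regime studied in the paper corresponds to large LAM0.
LAM0 = 2.0
OMEGA = (math.sqrt(5.0) - 1.0) / 2.0  # irrational rotation number

def mat_mul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A(x):
    # A(x) = R(phi(x)) Z(lambda(x)) as in (1.4)
    phi = 2.0 * math.pi * x
    R = [[math.cos(phi), math.sin(phi)], [-math.sin(phi), math.cos(phi)]]
    Z = [[LAM0, 0.0], [0.0, 1.0 / LAM0]]
    return mat_mul(R, Z)

def cocycle(x, n):
    # M(x, n) = A(sigma^{n-1}(x)) ... A(x) for n >= 0
    M = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(n):
        M = mat_mul(A((x + k * OMEGA) % 1.0), M)
    return M
```

With these definitions the cocycle identity M(x, n + m) = M(σ_ω^n(x), m)M(x, n) and the relation det M(x, n) = 1 can be checked directly.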
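The elimination of ψ_2 leading to (1.3) is not spelled out in the text. Assuming det A = 1 and a_12 ≠ 0 at the points involved, one admissible choice of the coefficients (they are defined only up to a common factor, and this particular normalization is our assumption, not the paper's) is m(y) = 1/a_12(y), n(y) = −a_11(y)/a_12(y) − a_22(y − ω)/a_12(y − ω), p(y) = 1/a_12(y − ω). The sketch below, with an illustrative ϕ and constant λ, verifies this choice along an orbit of (1.2).

```python
import math

LAM = 2.0                              # illustrative constant lambda(y)
OMEGA = (math.sqrt(5.0) - 1.0) / 2.0   # irrational rotation number

def A_entries(y):
    # Entries of A(y) = R(phi(y)) Z(LAM) for the illustrative choice
    # phi(y) = 2*pi*y + 0.3 (the shift keeps a12 = sin(phi)/LAM away
    # from zero at the sample points used below).
    c = math.cos(2.0 * math.pi * y + 0.3)
    s = math.sin(2.0 * math.pi * y + 0.3)
    return LAM * c, s / LAM, -LAM * s, c / LAM  # a11, a12, a21, a22

def step(y, psi):
    # one step of (1.2): psi(y + omega) = A(y) psi(y)
    a11, a12, a21, a22 = A_entries(y)
    return a11 * psi[0] + a12 * psi[1], a21 * psi[0] + a22 * psi[1]

def mnp(y):
    # One admissible choice of the coefficients of (1.3); p(y) simplifies
    # to 1/a12(y - omega) because det A = 1.
    a11, a12, _, _ = A_entries(y)
    _, b12, _, b22 = A_entries(y - OMEGA)
    return 1.0 / a12, -a11 / a12 - b22 / b12, 1.0 / b12
```

Substituting ψ_1 at three consecutive orbit points into (1.3) then gives a residual at the level of machine precision.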
Definition 1. We say that a cocycle M is uniformly hyperbolic (UH) if there exist continuous maps E^{u,s} : T^1 → Gr(2, 1) and positive constants C, Λ such that the subspaces E^{u,s}(x) are invariant with respect to the map (1.1) (i.e. E^{u,s}(σ_ω(x)) = A(x)E^{u,s}(x)) and for all x ∈ T^1, n ≥ 0,

||M(x, −n)|_{E^u(x)}|| ≤ C e^{−Λn},    ||M(x, n)|_{E^s(x)}|| ≤ C e^{−Λn}.

Here Gr(2, 1) stands for the set of one-dimensional subspaces of R^2. We note that the Oseledets theorem guarantees the existence of such invariant subspaces for a.e. x ∈ T^1, but, in general, the maps E^{u,s} are only measurable. For our purpose it is more convenient to use an alternative version of this definition (see e.g. [3]). Namely, the cocycle M is said to be UH if there exist positive constants C, Λ_0 such that for all x ∈ T^1 and n ≥ 0

||M(x, n)|| ≥ C e^{Λ_0 n}.    (1.5)

It has to be noted that, due to Kingman's subadditive ergodic theorem, for a.e. x ∈ T^1 there exists the Lyapunov exponent

Λ(x) = lim_{n→∞} (1/n) log ||M(x, n)||.

Moreover, since ω is assumed to be irrational, the rotation σ_ω is ergodic and Λ(x) = Λ̄_0 a.e., where Λ̄_0 is the integrated Lyapunov exponent

Λ̄_0 = ∫ Λ(x) dx = lim_{n→∞} (1/n) ∫ log ||M(x, n)|| dx.

Hence UH implies positivity of Λ̄_0, but the converse is not true (see e.g. [13]). One calls a cocycle M non-uniformly hyperbolic (NUH) if Λ̄_0 > 0 but M is not UH.

Uniformly hyperbolic cocycles constitute an open subset of the set of all cocycles. On the other hand, if a cocycle is NUH, its Lyapunov exponent can be made equal to zero by an arbitrarily small C^0-perturbation. Thus, even the problem of finding an example of a NUH cocycle with positive Lyapunov exponent is not trivial. In [13] M. Herman provided the first example of such a cocycle. The constructed cocycle corresponds to a skew product of type (1.4) such that the function λ is constant, λ(x) = λ_0 > 1, and ind(ϕ) = 1. The latter condition provides a topological obstruction to uniform hyperbolicity.

Remark 1. One may remark (see e.g.
[8], [25]) that the problem admits a projectivization in the following sense. Consider the standard covering (p, T^1, RP^1) of the real projective line, where the projection p identifies diametrically opposite points of the circle. Any continuous function ϕ̃ : T^1 → T^1 generates a continuous function ϕ_p = p ∘ ϕ̃ : T^1 → RP^1. On the other hand, for a given continuous function ϕ_p : T^1 → RP^1 a lift ϕ̃ is not necessarily continuous; moreover, there exist exactly two continuous lifts. Let ϕ̃_k, k = 1, 2, be two arbitrary (not necessarily continuous) lifts of ϕ_p. It follows from Definition 1 that the cocycles defined by (1.1) and corresponding to these lifts are either both UH or both not UH. In particular, if they are both UH, the stable (unstable) subspaces coincide: E^{s,(u)}_1(x) = E^{s,(u)}_2(x). This remark enables us to consider the function ϕ from (1.4) as a lift of some continuously differentiable function ϕ_p : T^1 → RP^1.

In the present work we use an approach developed in [16], [8], [25], [19]. Though devoted to different objects, these papers share a common framework. The idea, suggested initially in [16], can be described as follows. Consider a family of skew-product maps F_{A,t} = (f, A_t), dependent on a parameter t and defined (similarly to (1.1)) on a vector bundle V over a base B. Properties of the fiber transformation A_t may vary with respect to a point of the base. Selecting those points of the base which correspond to violation of some specific property (e.g. hyperbolicity) of A_t, one constructs the so-called critical set C_0. Taking a small neighbourhood of C_0, one then studies the dynamics of this set under the map f. However, due to the dependence on the parameter t, interactions between different parts of C_0 may have degeneracies. Detuning the parameter t, we exclude such degeneracies and put the interactions in general position. Finally, using properties of the base map f (e.g.
ergodicity) and non-degeneracy of interactions of the critical set, one may extract additional information on the whole system.

The paper is organized as follows. In Section 2 we define the critical set and introduce a notion of dominance for primary and secondary collisions. We also formulate conditions which guarantee the dominance of primary collisions and, as a consequence, non-uniform hyperbolicity of the cocycle. In Section 3 the case which corresponds to dominance of secondary collisions for the simplest critical set, consisting of two points, is considered. We perform an asymptotic analysis as $\lambda_0\to+\infty$, $\varepsilon\to 0$ to describe the effect of the interaction between small neighbourhoods of the two critical points on the cocycle. Resonance conditions on the rotation number and the parameter $t$ are presented. Under these conditions we prove uniform hyperbolicity for the cocycle (1.4).

2 Dynamics of the critical set

Note that, by definition, the cocycle $M$ corresponding to (1.1) is a product of matrices $A_k$ such that $\|A_k\|\ge\lambda_0$ are sufficiently large. However, the product may not admit estimate (1.5). The obstacle to this is the presence of the critical set, which can be defined for the skew product (1.4) as
$$C_0=\bigcup_{j=0}^{N}\{c_j\}.\qquad(2.6)$$
To understand the role of the critical set, we formulate the following technical lemma (see e.g. [15]).

Lemma 1. For any fixed $\varphi\in[0,\pi)$ and $\lambda_1,\lambda_2$ such that $\lambda_k>1$, $k=1,2$, the following representation holds true:
$$Z(\lambda_2)R(\varphi)Z(\lambda_1)=R(\psi-\chi)Z(\mu)R(\chi),$$
where $\psi\in[0,\pi)$, $\chi\in[-\pi/4,\pi/4]$, $\mu\ge 1$,
$$\mu=\frac{a}{2}\left(1+c+\sqrt{(1+c)^2-4a^{-2}}\right),\qquad a=\frac{\lambda_1}{\lambda_2}\cdot\frac{\lambda_2^2\cos^2\varphi+\beta\sin^2\varphi}{\left(\cos^2\varphi+\beta^2\sin^2\varphi\right)^{1/2}},$$
$$b=\left(\frac{\lambda_2}{\lambda_1}\right)^2\frac{1-\lambda_2^{-4}}{1+\lambda_1^{-2}\lambda_2^{-2}}\cdot\frac{\sin\varphi\cos\varphi}{\lambda_2^2\cos^2\varphi+\beta\sin^2\varphi},\qquad(2.7)$$
$$c=b^2+a^{-2},\qquad \beta=\frac{\lambda_1^{-2}+\lambda_2^{-2}}{1+\lambda_1^{-2}\lambda_2^{-2}},\qquad \tan\psi=\beta\tan\varphi,$$
$$\tan\chi=-\frac{\sqrt{2}\,b(1-c)^{-1}}{\left[1+2b^2(1-c)^{-2}+\left(1+4b^2(1-c)^{-2}\right)^{1/2}\right]^{1/2}}.$$
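The factorization in Lemma 1 is a rotation-diagonal-rotation (singular value) decomposition, so it can be checked numerically. The sketch below assumes the standard conventions $Z(\lambda)=\mathrm{diag}(\lambda,\lambda^{-1})$ and $R(\varphi)$ the rotation by angle $\varphi$; the paper's exact conventions in (1.4) are not restated here, so these are working assumptions.

```python
import numpy as np

def Z(lam):
    # diagonal part: diag(lambda, 1/lambda), an SL(2, R) matrix
    return np.array([[lam, 0.0], [0.0, 1.0 / lam]])

def R(phi):
    # rotation by the angle phi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def rzr_decomposition(P):
    """Factor a 2x2 matrix with det P = 1 as R(alpha) Z(mu) R(gamma) via the SVD."""
    U, S, Vt = np.linalg.svd(P)
    # force the orthogonal factors to be rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U[:, 1] *= -1.0
        Vt[1, :] *= -1.0
    alpha = np.arctan2(U[1, 0], U[0, 0])
    gamma = np.arctan2(Vt[1, 0], Vt[0, 0])
    mu = S[0]  # det P = 1 forces the singular values to be (mu, 1/mu)
    return alpha, mu, gamma

lam1, lam2, phi = 10.0, 7.0, 0.9
P = Z(lam2) @ R(phi) @ Z(lam1)
alpha, mu, gamma = rzr_decomposition(P)
P_back = R(alpha) @ Z(mu) @ R(gamma)
print(mu, np.linalg.norm(P, 2))
```

With $\lambda_1=10$, $\lambda_2=7$ (the values used for Fig. 2), $\mu$ coincides with the spectral norm of the product and the reconstruction $R(\alpha)Z(\mu)R(\gamma)$ reproduces it to machine precision.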
One may note that the norm of the product $P_1=Z(\lambda_2)R(\varphi)Z(\lambda_1)$ equals $\mu$ and, by Lemma 1, $\mu=1\Leftrightarrow a=1,\ b=0$. Besides, if $\lambda_1,\lambda_2$ are sufficiently large, we have the following implications:

1. $|\cos\varphi|$ bounded away from $0\ \Rightarrow\ \mu\sim a\sim\lambda_1\lambda_2|\cos\varphi|$;
2. $\cos\varphi\sim 0\ \Rightarrow\ \mu\sim a\sim\lambda_1/\lambda_2$;
3. $\varphi=\frac\pi2\ (\mathrm{mod}\ \pi)\ \Rightarrow\ \psi=\frac\pi2\ (\mathrm{mod}\ \pi),\ \chi=0$.

It means that, if $|\cos\varphi|$ is bounded away from zero, the norm of $P_1$ increases with respect to $\lambda_1$, since it is multiplied by a large quantity proportional to $\lambda_2\cos\varphi$. On the other hand, if $|\cos\varphi|$ is sufficiently small, the norm of $P_1$ becomes smaller than $\lambda_1$, as it is divided by a quantity proportional to $\lambda_2$. Moreover, in the latter case the angle $\psi$ satisfies $\cos\psi\sim 0$.

Applying this lemma to the cocycle $M$, we conclude that each time the trajectory of a point $x\in\mathbb{T}^1$ under the rotation $\sigma_\omega$ falls into a small neighbourhood of the set $C_0$, the growth of the cocycle norm changes its behaviour from increasing to decreasing or vice versa. This behaviour persists until the next visit of the trajectory to a small neighbourhood of $C_0$. Since $\sigma_\omega$ is ergodic, every point $x\in\mathbb{T}^1$ reaches a small neighbourhood of $C_0$ after some iterations. Thus the hyperbolic properties of $M(x,n)$ strongly depend on the dynamics of the critical set itself.

To describe the dynamics of the critical set, we introduce, for a fixed sufficiently small $\delta>0$, a $\delta$-neighbourhood of $C_0$ denoted by $U_\delta=\bigcup_{j=0}^{N}I_j(\delta)$, where $I_j(\delta)=\{x\in\mathbb{T}^1:\rho(x,c_j)<\delta\}$. Then, due to ergodicity of $\sigma_\omega$, the trajectory of any point $c_j$ enters each interval $I_k(\delta)$ infinitely many times. This enables us to introduce the notions of a collision and of the time of collision.

Definition 2. Let $\tau_{j,j'}$ be the minimal integer $k>0$ such that $\sigma_\omega^k(c_j)\cap I_{j'}\ne\emptyset$. We say that the points $c_j$ and $c_{j'}$ collide with accuracy $\delta$ at the time $\tau_{j,j'}$; we call such an event a collision and $\tau_{j,j'}$ the time of collision. A collision is called primary if $j=j'$ and secondary if $j\ne j'$.
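Collision times in the sense of Definition 2 can be computed by direct iteration of the rotation. A small illustrative sketch follows; the critical points $0$ and $0.37$ and the golden-mean rotation number are arbitrary choices for the illustration, not values taken from the paper.

```python
import numpy as np

def circle_dist(x, y):
    """Distance on T^1 = R/Z."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def collision_time(c_from, c_to, omega, delta, max_iter=10**6):
    """Smallest k > 0 with dist(c_from + k*omega, c_to) < delta, or None."""
    x = c_from
    for k in range(1, max_iter + 1):
        x = (x + omega) % 1.0
        if circle_dist(x, c_to) < delta:
            return k
    return None

omega = (np.sqrt(5.0) - 1.0) / 2.0       # golden-mean rotation number
tau0 = collision_time(0.0, 0.0, omega, delta=1e-3)    # a primary collision time
tau01 = collision_time(0.0, 0.37, omega, delta=1e-3)  # a secondary collision time
print(tau0, tau01)
```

For the golden mean and $\delta=10^{-3}$ the primary collision time is the Fibonacci denominator $q_n=610$, illustrating that $\tau_0$ is determined by $\omega$ and $\delta$ only, while secondary times depend on the relative position of the critical points.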
There is an essential difference in the behaviour of primary and secondary collisions with respect to the parameter $t$. First, we note that the times of primary collisions $\tau_{j,j}$ do not depend on $j$, and we may denote them by $\tau_0$; this is a characteristic of the rotation number $\omega$ and the parameter $\delta$ only. On the other hand, assumption $(H_6)$ implies that the relative positions of the points $c_j$ vary with respect to $t$ and, hence, the times of secondary collisions depend on the parameter $t$ in a non-trivial way. Following [15] we give a definition.

Definition 3. For a fixed $\delta>0$ we say that primary collisions dominate if for all pairs $(j,j')$ with $j\ne j'$ one has $\tau_{j,j'}(\delta)>\tau_0(\delta)$. Otherwise, we say that secondary collisions dominate.

In [15] we investigated the problem of elimination of the secondary collisions by detuning the parameters of the cocycle. Due to ergodicity of $\sigma_\omega$, one cannot eliminate these collisions completely. However, if the rotation number $\omega$ satisfies two number-theoretical conditions, it is possible to achieve domination of primary collisions for some decreasing sequence $\{\delta_k\}$. To formulate these conditions, denote by $p_n/q_n$ the best rational approximation of order $n$ to $\omega$. We assume that $\omega$ satisfies the Brjuno condition with a constant $C_B$ (see [9], [22]), i.e.
$$\sum_{n=1}^{\infty}\frac{\log(2q_{n+1})}{q_n}=C_B<\infty.\qquad(2.8)$$
The second condition can be formulated as follows. Introduce a set of functions
$$H=\left\{h:\mathbb{R}_+\to\mathbb{R}_+:\ h(x)>h(y)\ \forall\,x>y;\ \lim_{x\to\infty}h(x)/x=0\right\},$$
where $\mathbb{R}_+=(0,+\infty)$. Then we say that $\omega$ satisfies condition (A) with a function $h\in H$ (see [15]) if there exist a subsequence $\{q_{n_j}\}_{j=1}^\infty$ and positive constants $C_t$, $C_\delta$ such that $C_\delta<1$,
$$q_{n_j+1}>q_{n_j}h(q_{n_j}),\qquad\forall\,j\in\mathbb{N},\qquad(2.9)$$
and for all $k\in\mathbb{N}$ there exists an index $J_k$ such that
$$\frac{1}{q_{n_{J_k}}}\left[\log\left(q_{n_{J_k}}h(q_{n_{J_k}})\right)+\log C_\delta^{-1}\right]<\frac{\log\left(q_{n_{J_{k+1}}}h(q_{n_{J_{k+1}}})\right)}{q_{n_{J_k}}}<C_t.\qquad(2.10)$$
It has to be noted that the set of those $\omega\in(0,1)$ which satisfy the Brjuno condition is of full measure [9].
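The quantities entering (2.8) and (2.9) are computable from the continued fraction of $\omega$: the denominators of the best rational approximations obey the recurrence $q_{n+1}=a_{n+1}q_n+q_{n-1}$. An illustrative sketch for the golden-mean rotation number, whose partial quotients are all equal to 1, so that the $q_n$ are Fibonacci numbers and the Brjuno sum converges:

```python
import math

def continued_fraction(omega, n_terms):
    """Partial quotients a_1, a_2, ... of omega in (0, 1)."""
    a, x = [], omega
    for _ in range(n_terms):
        x = 1.0 / x
        k = int(x)
        a.append(k)
        x -= k
    return a

def convergent_denominators(a):
    """Denominators q_n of the best rational approximations p_n/q_n."""
    q_prev, q = 1, a[0]
    qs = [q]
    for ak in a[1:]:
        q_prev, q = q, ak * q + q_prev
        qs.append(q)
    return qs

omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean: all partial quotients equal 1
a = continued_fraction(omega, 20)
qs = convergent_denominators(a)
# partial Brjuno sum, cf. (2.8)
brjuno_partial = sum(math.log(2 * q_next) / q for q, q_next in zip(qs, qs[1:]))
print(qs[:8], brjuno_partial)
```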
The situation with condition (A) depends on the function $h\in H$. It is known (see e.g. [18]) that, on the one hand, for all $\omega\in(0,1)$ the denominators $q_n$ satisfy $q_n>2^{(n-1)/2}$ for all $n\in\mathbb{N}$, whereas, on the other hand, for almost all $\omega\in(0,1)$ there exists a constant $C_L$ such that $q_n<e^{C_L n}$ for all $n\in\mathbb{N}$. If the series $\sum_n 1/h\big(2^{(n-1)/2}\big)$ converges, the set of those $\omega\in(0,1)$ which satisfy condition (A) has measure zero. However, for any $h\in H$ this set is dense in $(0,1)$ [15].

Under these two assumptions on the rotation number the following theorem can be proved (see the statement of Theorem 1 below); its final conclusion is:

3. for any $t\in E_h$ the cocycle $M$ is NUH.

Remark 2. The proof of this theorem is similar to the proof of Theorem 4 in [15]. Indeed, the Brjuno condition provides a lower bound for the Lyapunov exponent on a set of positive Lebesgue measure and, hence, the positivity of $\bar\Lambda_0$. On the other hand, condition (A) implies the dominance of the primary collisions for a sequence $\{\delta_j\}_{j=0}^\infty$ with $\delta_j=\frac{1}{q_{n_j}h(q_{n_j})}$. This leads to the existence of a limit critical set such that in its neighbourhood there exist points with an arbitrarily small Lyapunov exponent. This effect can be illustrated in the following way. Consider the trajectory of a point $x$ from a small neighbourhood $U_\delta(c_0)$. After $\tau_0(\delta)$ iterations it enters this neighbourhood again, and the norm of the corresponding cocycle changes its growth. If one performs the next $\tau_0(\delta)$ iterations, the trajectory hits $U_\delta(c_0)$ once more. However, since the products of matrices corresponding to these two passages from $U_\delta(c_0)$ to itself are very close to each other, the norm of their superposition becomes of order $O(1)$. Finally, we remark that, to make the number of such repetitions larger, one needs to take the size $\delta$ of the neighbourhoods smaller; that is why condition (A) was imposed. It is to be noted that, due to the specific parameter dependence of the cocycle in [15], the number of points constituting the critical set grew with decreasing $\varepsilon$.
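The integrated Lyapunov exponent $\bar\Lambda_0$ discussed above can be estimated numerically by iterating the cocycle with renormalization. The sketch below uses a Herman-type example, $\lambda(x)=\lambda_0$ constant and $\varphi(x)=2\pi x$ of index 1, with the convention $A(x)=Z(\lambda_0)R(\varphi(x))$, $Z(\lambda)=\mathrm{diag}(\lambda,\lambda^{-1})$; these choices are assumptions made for illustration, not the paper's exact model.

```python
import numpy as np

def Z(lam):
    return np.array([[lam, 0.0], [0.0, 1.0 / lam]])

def R(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def lyapunov_estimate(lam0, omega, x0, n):
    """Estimate (1/n) log ||M(x0, n)|| for A(x) = Z(lam0) R(2*pi*x)."""
    x = x0
    M = np.eye(2)
    log_norm = 0.0
    for _ in range(n):
        M = Z(lam0) @ R(2.0 * np.pi * x) @ M
        # renormalize to avoid overflow, accumulating the log of the norm
        nrm = np.linalg.norm(M, 2)
        M /= nrm
        log_norm += np.log(nrm)
        x = (x + omega) % 1.0
    return log_norm / n

omega = (np.sqrt(5.0) - 1.0) / 2.0   # golden-mean rotation number
est = lyapunov_estimate(lam0=10.0, omega=omega, x0=0.1, n=20000)
print(est, np.log(10.0))
```

The estimate is strictly positive and bounded above by $\log\lambda_0$, consistent with $\|A(x)\|=\lambda_0$.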
To overcome this difficulty (the growth of the critical set as $\varepsilon$ decreases), Theorem 4 in [15] was proved under the assumption that the rotation number satisfies condition (A) with the function $h(x)=x^\gamma$, $0<\gamma<1$. As mentioned above, the set of such $\omega\in(0,1)$ is dense, but has measure zero. Hypotheses $(H_1)-(H_4)$ enable us to avoid this restriction and state the result in Theorem 1 of the present paper for any function $h\in H$.

3 Secondary collisions

In this section we study the case corresponding to dominance of the secondary collisions. As mentioned above, the secondary collisions depend on the parameter $t$ non-trivially. It means that, on the one hand, one may detune the parameter to avoid some specific secondary collisions and thus consider only generic ones; but, on the other hand, if such a generic collision occurs, it persists for some interval of $t$. To simplify the exposition, we consider the simplest skew product of type (1.4) whose critical set consists of two points, $C_0=\{c_0,c_1\}$. We fix a positive $\delta\ll 1$ and assume that for some $n\in\mathbb{N}$
$$\tau_{0,1}(\delta)=n,\qquad \tau_0(\delta)>n.\qquad(3.11)$$
We formulate two statements which are direct consequences of Lemma 1.

Lemma 2. Let $x\in\mathbb{T}^1$ be such that $\left|\cos\varphi\big(\sigma_\omega^k(x)\big)\right|\ge C_3$, $k=0,\dots,n-1$. Assume also that $\lambda_0\gg 1$. Then there exists a positive constant $C_M<C_3$ such that $\|M(x,n)\|\ge(C_M\lambda_0)^n$.

Thus, if the finite trajectory of a point $x\in\mathbb{T}^1$ does not enter a small neighbourhood of the critical set, the cocycle $M$ satisfies condition (1.5). The second statement is

Lemma 3. Assume $\lambda_1,\lambda_2\gg 1$. Then there exists a positive constant $C$ such that for any $\varphi_1$ the norm $\mu$ of the product $P_1=Z(\lambda_2)R(\varphi_1)Z(\lambda_1)$ admits the estimates:

1. $\lambda_1\gg\lambda_2\ \Rightarrow\ \mu\ge C\,\frac{\lambda_1}{\lambda_2}\left(1+O\!\left(\frac{\lambda_2^2}{\lambda_1^2}\right)\right)$, with $C<\frac{3}{2\sqrt2}$;
2. $\lambda_2\gg\lambda_1\ \Rightarrow\ \mu\ge C\,\frac{\lambda_2}{\lambda_1}\left(1+O\!\left(\frac{\lambda_1^2}{\lambda_2^2}\right)\right)$, with $C<1$.

PROOF: First, we introduce the parameter $\xi=\cos 2\varphi_1+1$ and the function $F=a(1+c)$, where $a$ and $c$ are from Lemma 1. Represent $P_1=Z(\lambda_2)R(\varphi_1)Z(\lambda_1)$ as $P_1=R(\Phi_1)Z(\mu_1)R(\chi_1)$.
Then, by Lemma 1, the norm, µ 1 , of the product P 1 is of the form µ = 1 2 F + F 2 − 4 1/2 (3.12) and can be considered as a function of ξ. Moreover, since F = a + 1 a + b 2 a ≥ 2, the functions µ 1 and F (as functions of ξ) achieve their minima simultaneousely. Define the following polynomials P (ξ) = 2β + (λ 2 2 − β)ξ, S(ξ) = 2β 2 + (1 − β 2 )ξ, Q(ξ) = 4 λ 2 λ 1 2 β 2 + 2 λ 2 λ 1 4 γ 2 + λ 2 λ 1 2 (1 − β 2 ) ξ − λ 2 λ 1 4 γ 2 , R(ξ) = P 2 (ξ) + Q(ξ), γ = 1 − λ −4 2 1 + λ −2 1 λ −2 2 . Then one may represent F = λ 1 √ 2λ 2 R(ξ) P (ξ)S 1/2 (ξ) , F = λ 1 √ 2λ 2 P (ξ)S(ξ)R (ξ) − R(ξ)S(ξ)P (ξ) − 1 2 P (ξ)R(ξ)S (ξ) P 2 (ξ)S 3/2 (ξ) , where the prime stands for the derivative with respect to ξ. Note that P SR − RSP − 1 2 P RS is a polynomial of degree 3. We denote coefficients of this polynomial by A k , k = 0, . . . , 3 and represent P SR − RSP − 1 2 P RS = A 0 + A 1 ξ + A 2 ξ 2 + A 3 ξ 3 . Taking into account definition of P, Q, R, S and assumption λ 1,2 1, one obtains A 0 = 4λ 2 2 λ −2 1 + λ −2 2 4 + O λ −12 1 + λ −12 2 , A 1 = −2λ 4 2 2λ −6 1 + 5λ −4 1 λ −2 2 + 4λ −2 1 λ −4 2 + λ −6 2 + O λ −10 1 + λ −10 2 , A 2 = −λ 6 2 λ −4 1 + 3λ −2 1 λ −2 2 + 2λ −4 2 + O λ −8 1 + λ −8 2 , A 3 = λ 6 2 λ −4 1 + 3λ −2 1 λ −2 2 + 2λ −4 2 + O λ −8 1 + λ −8 2 . Hence, the function F achieves its minimum at ξ 0 ≈ −A 0 /A 1 . Precisely, in the case λ 2 λ 1 we have ξ 0 = λ −2 1 λ −2 2 1 + O λ 1 λ 2 2 , F (ξ 0 ) = λ 2 λ 1 1 + O λ 1 λ 2 2 . In the case λ 1 λ 2 one has ξ 0 = 2λ −4 2 1 + O λ 2 λ 1 2 , F (ξ 0 ) = 3 2 √ 2 λ 1 λ 2 1 + O λ 2 λ 1 2 . Thus, in both cases F 1. Taking this into account together with formula (3.12), we conclude that µ 1 = F + O(F −1 ), what finishes the proof. Consider a product P 2 = R(φ 2 )Z(λ 2 )R(φ 1 )Z(λ 1 ). Then, by Lemma 1, it can be represented in a form (3.13) where Φ 2 = φ 2 + Φ 1 , Φ 1 = ψ 1 − χ 1 , µ 2 = µ 1 , χ 2 = χ 1 and angles ψ 1 , χ 1 together with µ 1 are described by (2.7). 
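The lower bound of Lemma 3 can be probed by a brute-force scan over the angle. With the assumed convention $Z(\lambda)=\mathrm{diag}(\lambda,\lambda^{-1})$ (an assumption, since the paper's (1.4) is not restated here), the norm of $P_1$ varies between roughly $\lambda_1/\lambda_2$ (near $\varphi_1=\pi/2$) and $\lambda_1\lambda_2$ (near $\varphi_1=0$):

```python
import numpy as np

def Z(lam):
    return np.array([[lam, 0.0], [0.0, 1.0 / lam]])

def R(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

lam1, lam2 = 200.0, 10.0              # lambda_1 >> lambda_2 >> 1
phis = np.linspace(0.0, np.pi, 2001)  # grid containing 0 and pi/2 exactly
norms = [np.linalg.norm(Z(lam2) @ R(p) @ Z(lam1), 2) for p in phis]
mu_min, mu_max = min(norms), max(norms)
print(mu_min, lam1 / lam2, mu_max, lam1 * lam2)
```

The scanned minimum is consistent with the bound $\mu\ge C\,\lambda_1/\lambda_2$ for any $C<1$, and the maximum saturates the trivial bound $\|Z(\lambda_2)\|\,\|Z(\lambda_1)\|=\lambda_1\lambda_2$.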
First, we remark that, due to Lemma 3, a difference in magnitude of $\lambda_1$ and $\lambda_2$ (under the assumption $\lambda_{1,2}\gg 1$) implies that the norm of $P_2=R(\Phi_2)Z(\mu_2)R(\chi_2)$ becomes large for any $\varphi_{1,2}$. On the other hand, if both angles $\varphi_{1,2}$ are close to $\pi/2\ (\mathrm{mod}\ \pi)$, the sum $\varphi_2+\Phi_1$ is close to $0\ (\mathrm{mod}\ \pi)$.

Our goal now is to investigate how the interaction between two small $\delta$-neighbourhoods of the critical points $c_0,c_1$ influences the parameters of the cocycle $M$. To model this interaction we note that, taking $\varepsilon,\delta$ sufficiently small, the function $\varphi$ can be approximated in the $\delta$-neighbourhood of $c_j$ by
$$\hat\varphi(y;L_-,L_+,k)=\pm\frac{\pi}{2}+\vartheta(y;L_-,L_+,k),$$
$$\vartheta(y;L_-,L_+,k)=ky\,\Theta(ky-L_-)\Theta(L_+-ky)+L_-\Theta(L_--ky)+L_+\Theta(ky-L_+),\qquad(3.14)$$
where $y=x-c_j$, $\Theta$ is the Heaviside function and $k=\varphi'(c_j)$. Note that hypotheses $(H_2)$, $(H_4)$ imply $L_-\cdot L_+<0$, $k\sim\varepsilon^{-1}$. A graph of such a function $\hat\varphi$ is shown in Fig. 1. More precisely, hypotheses $(H_2)$, $(H_3)$ guarantee the existence of a positive $\delta>C_3\varepsilon$ such that
$$\hat\varphi\big(x-c_j;L^{(j)}_{-,\min},L^{(j)}_{+,\min},k^{(j)}\big)\le|\varphi(x)|\le\hat\varphi\big(x-c_j;L^{(j)}_{-,\max},L^{(j)}_{+,\max},k^{(j)}\big),\quad\forall\,x\in U_\delta(c_j),\qquad(3.15)$$
with some parameters $L^{(j)}_{\pm,\min}$, $L^{(j)}_{\pm,\max}$, $k^{(j)}$. Taking this into account, we set the angles $\varphi_j$ in the product $P_2$ to be
$$\varphi_1(x)=\hat\varphi\big(x-\Delta;L^{(1)}_-,L^{(1)}_+,k^{(1)}\big),\qquad \varphi_2(x)=\hat\varphi\big(x;L^{(2)}_-,L^{(2)}_+,k^{(2)}\big)\qquad(3.16)$$
with fixed parameters $L^{(j)}_\pm$, $k^{(j)}=\varepsilon^{-1}r^{(j)}$, $j=1,2$, and a detuning parameter $\Delta$, which will be specified later. Due to assumption $(H_4)$, one has $\varphi(c_0)=\varphi(c_1)\ (\mathrm{mod}\ 2\pi)$. Hence the signs before $\pi/2$ in the definition of $\varphi_{1,2}$ coincide. We consider the case of sign '+' (the case of sign '-' can be studied similarly).

Lemma 4. Assume $\lambda_1,\lambda_2\gg 1$ and $\lambda_1\gg\lambda_2$.
If φ 1 , φ 2 are described by (3.16) with ∆ satisfying |∆| < ε √ β r (1) r (2) 1/2 , then product P 2 has the following characteristics: µ 2 ≥ λ 1 λ 2 1 + O λ 2 2 λ 2 1 , |χ 2 | max = γ 2 λ 2 λ 1 2 1 + O 1 λ 4 2 + λ 2 2 λ 2 1 , |Φ 2 | max = π 2 − β r (2) r (1) 1/2 1 + O 1 λ 4 2 + λ 2 2 λ 2 1 , where |χ 2 | max , |Φ 2 | max stand for the maximum values of χ 2 and Φ 2 , respectively. PROOF: Substituting (3.16) into (2.7) one obtains expressions for ψ 1 , χ 1 as functions of variable x. It is to be noted that | tan χ| is an increasing, odd function of a variable η = √ 2b(1 − c) −1 . On the other hand, using notations from Lemma 3, we have |η| = κγ ξ(2 − ξ)P (ξ) P 2 (ξ) − Q(ξ) , κ = λ 2 λ 1 2 . (3.17) Differentiating (3.17) with respect to ξ, one concludes that η achieves maximum at ξ * , which solves the following equation ξ(2 − ξ) P (ξ)Q (ξ) − P 2 (ξ) + Q (ξ) P (ξ) + (1 − ξ)P (ξ) P 2 (ξ) − Q(ξ) = 0. (3.18) We note that the left hand side of (3.18) is a polynomial of degree 3 and can be represented as ξ(2 − ξ) P Q − P 2 + Q P + (1 − ξ)P P 2 − Q = B 0 + B 1 ξ + B 2 ξ 2 + B 3 ξ 3 , where B k , k = 0, . . . , 3 are constants. Under assumptions λ 1,2 1, κ 1 the coefficients B j admit the following representation B 0 = 8β 3 1 + O βλ −2 2 + κ , B 1 = 4β 2 λ 2 2 1 + O βλ −2 2 + κ , B 2 = −2βλ 4 2 1 + O βλ −2 2 + κ , B 3 = −λ 6 2 1 + O βλ −2 2 + κ . The only positive solution of (3.18) satisfies ξ * = 2βλ −2 2 1 + O βλ −2 2 + κ . and the maximum value of η equals The graph of tan χ as a function of φ is presented on Fig. 2. We substitute (3.16) into (2.7) and conclude that under assumptions λ 1,2 1, κ 1, ε 1 η max = η(ξ * ) = κγ 2λ 2 √ β 1 + O βλ −2 2 + κ .tan Φ 1 (x) = α 1 (x) tan φ 1 (x), α 1 (x) = β + κγ cos 2 φ1(x) λ 2 2 cos 2 φ1(x)+β sin 2 φ1(x) (1 + O (κ)) 1 − βκγ sin 2 φ1(x) λ 2 2 cos 2 φ1(x)+β sin 2 φ1(x) (1 + O (κ)) , (3.21) where the factor α 1 satisfies β < α 1 (x) < β + κγλ −2 2 1 − κγ = β (1 + O (κ)) . 
(3.22) Hence, the graph of Φ 1 (x) is very similar to those one of φ 1 (x) exept the slope of the graph at x = ∆ is much higher for Φ 1 (x) than for φ 1 (x), since Φ 1 (0) = ε −1 β −1 r (1) (1 + O (κ)). Moreover, new levelsL ± = arctan (α 1 (±δ) tan (L ± )) become close to 0 mod π as α 1. Particularly, one haŝ L ± = O(β). (3.23) We represent Φ 1 (x) as Φ 1 (x) = π 2 + arctan α −1 1 (x) tan ϑ(x − ∆, L (1) − , L(1)+ , k (1) ) and consider the sum Φ 2 (x) = φ 2 (x) + Φ 1 (x) Φ 2 (x) = ϑ 2 (x) +θ 1 (x − ∆) mod π, ϑ 2 (x) = ϑ(x, L(2) − , L + , k (2) ), (3.24) ϑ 1 (x) = arctan α −1 1 (x) tan ϑ(x, L (1) − , L(2)+ , k (1) ) .(1) Note that graphs of both summands in the right hand side of (3.24) are of the same shape. However, as we study the secondary collisions, assumption (H 4 ) implies ϕ (c 0 ) · ϕ (c 1 ) < 0 or equivalently k (1) · k (2) < 0. Fig. 3 illustrates relative position of these two graphs. If ∆ = 0 both ϑ 2 (x) andθ 1 (x) vanish at x = 0, but are of different signs as x = 0. Hence, differentiating (3.24) one obtains that |Φ 2 | attains maximum at x * , which satisfies − = −2/3, L(1)+ = 3/4, k (1) = 1, L(1)− = 2/3, L(2)α −1 (x)|k (1) | − |k (2) | = α −2 (x) − 1 |k (2) | sin 2 k (1) x . (3.25) Assumption (H 2 ) together with (3.22) yields x * = 1 k (1) k (1) k (2) 1/2 (1 + O(β)) and |Φ 2 | max = |Φ 2 (x * )| = π 2 − 2 β k (2) k (1) 1/2 (1 + O(β)) . If ∆ = 0 then |Φ 2 | attains maximum at x * x * = ∆ − sign(∆) 1 k (1) k (1) k (2) 1/2 (1 + O(β)) and |Φ 2 | max = |Φ 2 (x * )| = π 2 − 2 β k (2) k (1) 1/2 (1 + O(β)) + ∆ k (2) . We assume that |∆| < ε √ β r (1) r (2) 1/2 . (3.26) In this case the maximum value |Φ 2 | max is bounded by |Φ 2 | max < π 2 − β k (2) k (1) 1/2 (1 + O(β)) . This finishes the proof. 
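The cut-off $\vartheta$ in (3.14) is simply the linear ramp $ky$ saturated at the levels $L_\pm$: the Heaviside factors select the three regimes $ky<L_-$, $L_-\le ky\le L_+$ and $ky>L_+$. A minimal sketch, using the parameter values of Fig. 1:

```python
import numpy as np

def theta_ramp(y, L_minus, L_plus, k):
    """vartheta(y; L-, L+, k): the ramp k*y clipped to the levels [L-, L+]."""
    return np.clip(k * y, L_minus, L_plus)

def phi_hat(y, L_minus, L_plus, k, sign=+1.0):
    """hat-phi(y) = +/- pi/2 + vartheta(y), the local model of phi near a critical point."""
    return sign * np.pi / 2.0 + theta_ramp(y, L_minus, L_plus, k)

y = np.linspace(-2.0, 2.0, 9)
print(theta_ramp(y, -2.0 / 3.0, 0.75, 1.0))   # L- = -2/3, L+ = 3/4, k = 1 as in Fig. 1
```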
Remark 3 One may note that, if functions φ 1,2 in the product P 2 satisfy φ(x − ∆; L (1) −,min , L(1)+,min , k(1)min ) ≤ |φ 1 (x)| ≤ φ(x − ∆; L (1) −,max , L(1)+,max , k (1) max ) , φ(x; L(2) −,min , L +,min , k(2)min ) ≤ |φ 2 (x)| ≤ φ(x; L(2) −,max , L +,max , k (2) max ) (3.27)(2) and indices j 1 , j 2 ∈ {1, 2} are such that j 1 = j 2 , k (j1) max ≥ k (j2) max , then |φ 1 (x) − φ 2 (x)| ≤ ϑ(x − ∆; L (1) −,max , L (1) +,max , k (1) max ) − ϑ(x; L(2) −,min , L +,min , k min ) , j 1 = 1; |φ 1 (x) − φ 2 (x)| ≤ ϑ(x − ∆; L (1) −,min , L(1)+,min , k (1) min ) − ϑ(x; L (2) −,max , L(2)+,max , k (2) max ) , j 1 = 2. Taking this into account we arrive at the following . Finally, consider a product P 3 = Z(λ 3 )P 2 = Z(λ 3 )R(φ 2 )Z(λ 2 )R(φ 1 )Z(λ 1 ). By Lemma 1 we may represent it in a form . P 3 = R(Φ 3 )Z(µ 3 )R(χ 3 As a result we obtain a factorized word w f (x) of length j consisting of three letters B, G, H. Note that j → ∞ as l → ∞. Moreover, letter B may appear in the word w f no more than two times. If letter B occurs exactly two times, we define k 1 , k 2 to be the indices such that w f ki = B, i = 1, 2 and k 1 < k 2 . When B appears less than two times, we consider three cases. If w f k = B, k = 1, . . . , [j/2], we set k 1 = 0 and k 2 such that w f k2 = B. If w f k = B, k = [j/2] + 1, . . . , j, we set k 2 = j + 1 and k 1 such that w f k1 = B. Finally, if letter B does not appear in the word w f (x), we define k 1 = 0, k 2 = j + 1. Clearly, we have k 1 < τ 0 (δ), j − k 2 < τ 0 (δ). (3.42) Then one may define a truncated wordŵ by the rulê w i = w f k1+i , i = 1, . . . , k 2 − k 1 − 1. The truncated word corresponds to a product of matrices satisfying the conditions of Lemma 2. On the other hand, due to (3.42) k 2 − k 1 − 1 j → 1 as j → ∞. Hence, for sufficiently large j we may apply Lemma 3 to the cocycle M (x, l). This yields the following Theorem 2 Let hypotheses (H 1 ) -(H 6 ) be satisfied, λ 0 1 and the critical set C 0 consists of two points. 
Assume $t_{res}\in[a,b]$ is resonant of order $n$ and the time of primary collisions satisfies
$$\tau_0>\left(1+\frac{2\log\lambda_{\max}}{\log(C_M\lambda_{\min})}\right)n.$$
Then there exists $\varepsilon_0>0$ such that for any $\varepsilon\in(0,\varepsilon_0)$ there exists a positive $h$ such that $h=O\!\left(\varepsilon^2\lambda_0^{-2n}\right)$ and the cocycle $M(x,n)$ is uniformly hyperbolic for all $t\in(t_{res}-h,t_{res}+h)$.

Remark 5. It has to be noted that in Theorem 2 we do not impose any conditions on the rotation number, in contrast to Theorem 1. The reason is that the parameter values which correspond to uniform hyperbolicity of a cocycle constitute an open set.

On the other hand, if the series $\sum_n 1/h\big(e^{C_L n}\big)$ diverges, the set of those $\omega\in(0,1)$ which satisfy condition (A) has full measure [15]. In particular, $h(x)=\log(1+x)$ provides an example of such a function.

Theorem 1. Assume hypotheses $(H_1)-(H_6)$ hold true. If $\omega$ satisfies the Brjuno condition and condition (A) with $C_t=\log\lambda_0-C_B$, then there exist a sufficiently small $\varepsilon_0>0$, positive constants $C_\Lambda<1$, $C_0$ and a subset $E_h\subset[a,b]$ such that
1. the Lebesgue measure $\mathrm{leb}\big([a,b]\setminus E_h\big)=O\big(e^{-C_0/\varepsilon_0}\big)$;
2. for any $t\in E_h$ the integrated Lyapunov exponent $\bar\Lambda_0>C_\Lambda\log\lambda_0$;
3. for any $t\in E_h$ the cocycle $M$ is NUH.

Figure 1: Graph of the function $\hat\varphi$ with $L_-=-2/3$, $L_+=3/4$ and $k=1$.

More precisely, hypotheses $(H_2)$, $(H_3)$ guarantee the existence of a positive $\delta>C_3\varepsilon$ such that the two-sided bound (3.15) holds for all $x\in U_\delta(c_j)$.
graph of tan Ψ 2 , corresponding to different values of ∆ (∆ = 0 -solid line, ∆ = 0.1 -dashed line, ∆ = −0.1 -dotted line; values of other parameters are the same as atFig.3a) Corollary 1 Assume λ 1 , λ 2 1, λ 1 λ 2 and φ 1 , φ 2 satisfy (3.27) with ∆ such thatThen product P 2 has the following characteristics:|∆| < ε √ β r (j1) max r (j2) min 1/2 . µ 1 ≥ λ 1 λ 2 1 + O λ 2 2 λ 2 1 , |χ 1 | max ≤ γ 2 λ 2 λ 1 2 1 + O 1 λ 4 2 + λ 2 2 λ 2 1 , |Φ 2 | max ≤ π 2 − β r (j2) min r (j1) max 1/2 1 + O 1 λ 4 2 + λ 2 2 λ 2 1 Lemma 5 Assume conditions of Lemma 4 are satisfied. If, additionally, λ 1 λ , then product P 3 has the following characteristics:). (3.28) Then we obtain 3/2 2 and λ 3 > λ 1/2 2 |tan Φ 3 | < r (1) r (2) 1/2 1 + O 1 λ 4 2 + λ 2 2 λ 2 1 , µ 3 ≥ 1 2 1 + r (2) r (1) λ 1 λ 3/2 2 1 + O 1 λ 4 2 + λ 3 2 λ 2 1 AcknowledgementsThe research was supported by RFBR grant (project No. 20-01-00451/22).Proof: Using representation (3.13), we apply Lemmas 1 and 4 to the product Z(λ 3 )R(Φ 2 )Z(µ 2 ) and obtain that Z(λ 3 )R(Φ 2 )Z(µ 2 ) = R(Φ 3 )Z(µ 3 )R(χ 3 − χ 2 ) with |tan Φ 3 | < β 2 β r(1)Then, taking into account that β = λ −2 2 (1 + O(κ)) as κ 1 and assuming λ 3 > λ 1/2 2 , one obtains |tan Φ 3 | < r(1)Besides, Lemmas 1 and 4 imply that parameter µ 3 (see(2.7)for definition), corresponding to product P 3 , admits an estimate(3.29)An expression in the right hand side of (3.29) attains its minimum with respect to λ 3 at λ 3, * , which is a solution ofLemma 4 and the condition λ 1 λMoreover, we obtainThis finishes the proof. We apply Lemmas 1 -5 to the cocycle M . Assuming that for a positive δ 1 the secondary collision occures, i.e. τ 0,1 (δ) = n, τ 0 (δ) > n, n ∈ N, one may note that finite trajectory σ k ω (x), k = 1, . . . , n − 1 of any point x ∈ U δ (c 0 ) does not fall into U δ (C 0 ). 
Then, applying consecutively Lemma 1, we represent the product Z(λ(σ n ω (x)))M (σ ω (x), n−1) for x ∈ U δ (c 0 ) in the following wayTaking into account conditions (3.11) and (H 5 ), one obtains that the angles Φ n (x), χ n (x) are small, particularly,Moreover, µ n (x) is large and satisfieswhere λ min , λ max stand for the minimum and maximum values of function λ over x ∈ T 1 and C M is a constant from Lemma 2. Besides, we suppose that there exist integers n − , n + such thatWe note that Φ n,± , χ n,± satisfy (3.32) andThe central part of this product, namely,, n − + n + n + R −χ n,− (x) has the structure of product P 3 from Lemma 5 with parameters(3.37)We note that provided δ to be sufficiently small,(3.38)These solutions admit estimates39)We emphasize here that assumption (H 6 ) guarantees non-trivial dependence of the distance between points c 0 , c 1 on the parameter t. Without loss of generality we may assume that position of c 0 is fixed and c 1 varies with respect to t. Then solution x 2 can be considered as a function of t, whereas x 1 is constant.Definition 4We say that a value t res is resonant of order n if it solvesIt means that for resonant t res the n-th iteration σ n ω (c 0 ) falls not exactly on c 1 , but close to it, i.e.where ∆ res n (t) is a small correction such that ∆ res n (t) = ε r −1 0 (χ n (c 0 ) + Φ n,− (c 0 )) − r −1 1 χ n,+ (σ −n ω (c 1 (t))) + Φ n (σ −n ω (c 1 (t))) (1 + O(δ)) .As parameters Φ n , Φ n,± , χ n , χ n,± are of the order of λ −2 0 , we obtain ∆ res n = O(ελ −2 0 ).Remark 4In the simplest case n = 1 one has ∆ res n ≡ 0.We note that if there exists t 0 ∈ [a, b] such that σ −n ω (c 1 (t 0 )) = c 0 one may apply the implicit function theorem to conclude the existence of t res , which satisfiesLet parameter t be close to a resonant one, i.e. we suppose that τ 0,1 (δ) = n and dist (σ n ω (c 0 ), c 1 ) = ∆ n 1.Consider cocycle M (σ −n− ω (x), n − + n + n + ) for x ∈ I 0 (ε) and its singular decomposition (3.36). 
Then we arrive at the following

Lemma 6. Let $n\in\mathbb{N}$ be fixed. Assume that $\lambda_0\gg 1$ and there exist integers $n_-,n_+$ such that (3.34) holds. If, additionally, there exists $t_0\in[a,b]$ such that $\sigma_\omega^{-n}(c_1(t_0))=c_0$, then there exists $\varepsilon_0$ such that for any $\varepsilon\in(0,\varepsilon_0)$ the following holds: there exists a unique resonant value $t_{res}$ in a small neighbourhood of $t_0$, satisfying estimate (3.41). Moreover, for any $x\in U_\delta(c_0)$ and any $t\in(t_{res}-h,t_{res}+h)$ the cocycle $M(\sigma_\omega^{-n_-}(x),\,n_-+n+n_+)$ is uniformly hyperbolic.

PROOF: The proof follows from the direct application of Lemmas 1-6 to the presentation (3.36) of the cocycle $M(\sigma_\omega^{-n_-}(x),\,n_-+n+n_+)$. We only mention that, due to the smooth dependence of all objects on the parameter $t$ and Lemma 4, the size $h$ of the neighbourhood can be bounded in terms of $\mu_n(x)$, defined by (3.31). Then estimate (3.41) is a consequence of (3.33).

Lemma 6 is a key tool for establishing the hyperbolicity of the cocycle $M$. First, we introduce the following notation. We consider the matrix $A(x)$ defined by (1.4) and denote it by the letter $B$ if $x\in U_\delta(C_0)$. In the case $x\notin U_\delta(C_0)$, we denote $A(x)$ by the letter $G$. Then for any $x\in\mathbb{T}^1$ one may assign to the cocycle $M(x,l)$ a word $w(x)=[w_1,w_2,\dots,w_l]$ of length $l$, consisting of the letters $B$ and $G$, such that $w_i=B$ if $\sigma_\omega^{i-1}(x)\in U_\delta(C_0)$ and $w_i=G$ otherwise. Finally, for any $x\in U_\delta(c_0)$ the product
$$M\big(\sigma_\omega^{-n_-}(x),\,n_-+n+n_+\big)=\prod_{k=-n_-}^{n+n_+}A\big(\sigma_\omega^k(x)\big)$$
will be denoted by the letter $H$. Note that, under the resonance conditions, Lemma 6 guarantees hyperbolicity of this product. One may factorize the word $w(x)$ by means of the latter notation: we replace any subword $[w_s,\dots,w_{s+n+n_-+n_+}]$ of length $n+n_-+n_+$ by one letter $H$ if $\sigma_\omega^{s+n_-}(x)\in U_\delta(c_0)$.

References

[1] A. Avila, Almost reducibility and absolute continuity I, arXiv:1006.0704 (2010).
[2] A. Avila, J. Bochi, A formula with some applications to the theory of Lyapunov exponents, Israel Journal of Mathematics 131 (2002), pp. 125-137.
[3] A. Avila, J. Bochi, A uniform dichotomy for generic SL(2,R)-cocycles over a minimal base, Bull. Soc. Math. France 135 (2007), pp. 407-417.
[4] A. Avila, J. Bochi, D. Damanik, Opening gaps in the spectrum of strictly ergodic Schrödinger operators, J. Eur. Math. Soc. 14 (2012), pp. 61-106.
[5] A. Avila, R. Krikorian, Reducibility or non-uniform hyperbolicity for quasiperiodic Schrödinger cocycles, Annals of Mathematics 164 (2006), pp. 911-940.
[6] L. Barreira, Y. Pesin, Nonuniform hyperbolicity: dynamics of systems with nonzero Lyapunov exponents, Cambridge (2007).
[7] C. Bonatti, L. Diaz, M. Viana, Dynamics beyond uniform hyperbolicity, Springer (2005).
[8] M. Benedicks, L. Carleson, The dynamics of the Hénon map, Ann. Math. 133 (1991), pp. 73-169.
[9] A. D. Brjuno, Convergence of transformations of differential equations to normal forms, Dokl. Akad. Nauk USSR 165 (1965), pp. 987-989.
[10] J. Bourgain, S. Jitomirskaya, Continuity of the Lyapunov exponent for quasiperiodic operators with analytic potential, J. Statist. Phys. 108(5-6) (2002), pp. 1203-1218.
[11] V. S. Buslaev, A. A. Fedotov, Monodromization and Harper equation, Séminaires sur les Équations aux Dérivées Partielles, 1993-1994, Exp. no. XXI, 23 pp., École Polytech., Palaiseau (1994).
[12] L. H. Eliasson, Floquet solutions for the 1-dimensional quasi-periodic Schrödinger equation, Comm. Math. Phys. 146(3) (1992), pp. 447-482.
[13] M. Herman, Une méthode pour minorer les exposants de Lyapounov et quelques exemples montrant le caractère local d'un théorème d'Arnold et de Moser sur le tore de dimension 2, Comment. Math. Helv. 58 (1983), pp. 453-502.
[14] A. V. Ivanov, Connecting orbits near the adiabatic limit of Lagrangian systems with turning points, Reg. & Chaotic Dyn. 22(5) (2017), pp. 479-501.
[15] A. V. Ivanov, On singularly perturbed linear cocycles over irrational rotations, Reg. & Chaotic Dyn. 26(3) (2021), pp. 205-221.
[16] M. Jakobson, Absolutely continuous invariant measures for one-parameter families of one-dimensional maps, Comm. Math. Phys. 81 (1981), pp. 39-88.
[17] R. Johnson, J. Moser, The rotation number for almost periodic potentials, Comm. Math. Phys. 84 (1982), pp. 403-438.
[18] A. Ya. Khinchin, Continued fractions, The University of Chicago Press, Chicago, Ill.-London (1964).
[19] V. F. Lazutkin, Making fractals fat, Reg. & Chaotic Dyn. 4(1) (1999), pp. 51-69.
[20] M. A. Lyalinov, N. Y. Zhu, A solution procedure for second-order difference equations and its application to electromagnetic-wave diffraction in a wedge-shaped region, Proc. R. Soc. Lond. A 459(2040) (2003), pp. 3159-3180.
[21] J. Puig, Reducibility of quasi-periodic skew-products and the spectrum of Schrödinger operators, Thesis (2004).
[22] H. Rüssman, On the one-dimensional Schrödinger equation with a quasiperiodic potential, Nonlinear dynamics, Ann. New York Acad. Sci. 357 (1980), pp. 90-107.
[23] R. Sacker, Linear skew-product dynamical systems, Proc. 3rd Mexico-U.S.A. Symposium, in "Ecuaciones Diferenciales" by Carlos Imaz (1976).
[24] E. Sorets, T. Spencer, Positive Lyapunov exponents for Schrödinger operators with quasi-periodic potentials, Comm. Math. Phys. 142(3) (1991), pp. 543-566.
[25] L.-S. Young, Lyapunov exponents for some quasi-periodic cocycles, Ergod. Th. & Dynam. Sys. 17 (1997), pp. 483-504.
[]
[ "Higgs mechanism and cosmological constant in N = 1 supergravity with inflaton in a vector multiplet" ]
[ "Yermek Aldabergenov [email protected] \nDepartment of Physics\nTokyo Metropolitan University\nMinami-ohsawa 1-1, Hachioji-shi192-0397TokyoJapan\n", "Sergei V Ketov [email protected] \nDepartment of Physics\nTokyo Metropolitan University\nMinami-ohsawa 1-1, Hachioji-shi192-0397TokyoJapan\n\nKavli Institute for the Physics and Mathematics of the Universe (IPMU)\nThe University of Tokyo\n277-8568ChibaJapan\n\nInstitute of Physics and Technology\nTomsk Polytechnic University\n30 Lenin Ave634050TomskRussian Federation\n" ]
[ "Department of Physics\nTokyo Metropolitan University\nMinami-ohsawa 1-1, Hachioji-shi192-0397TokyoJapan", "Department of Physics\nTokyo Metropolitan University\nMinami-ohsawa 1-1, Hachioji-shi192-0397TokyoJapan", "Kavli Institute for the Physics and Mathematics of the Universe (IPMU)\nThe University of Tokyo\n277-8568ChibaJapan", "Institute of Physics and Technology\nTomsk Polytechnic University\n30 Lenin Ave634050TomskRussian Federation" ]
[]
The N = 1 supergravity models of cosmological inflation with inflaton belonging to a massive vector multiplet and spontaneous SUSY breaking after inflation are reformulated as the supersymmetric U(1) gauge theories of a massless vector superfield interacting with the Higgs and Polonyi chiral superfields, all coupled to supergravity. The U(1) gauge sector is identified with the U(1) gauge fields of the super-GUT coupled to supergravity, whose gauge group has a U(1) factor. A positive cosmological constant (dark energy) is included. The scalar potential is calculated, and its de Sitter vacuum solution is found to be stable. PLANCK observations [1, 2, 3] of the Cosmic Microwave Background (CMB) radiation favour chaotic slow-roll inflation in its single-field realization, i.e. the large-field inflation driven by a single scalar called inflaton with an approximately flat scalar potential.Embedding inflationary models into N = 1 four-dimensional supergravity is needed to connect them to particle physics theory beyond the Standard Model of elementary particles and to quantum gravity. Most of the literature about inflation in supergravity is based on an assumption that the inflaton belongs to a chiral (scalar) multiplet -see e.g., the reviews[4,5]. However, the inflaton can also be assigned to a massive N = 1 vector multiplet. It has some theoretical advantages because there is only one real scalar in an N = 1 massive vector multiplet. The η-problem does not arise because the scalar potential of a vector multiplet in supergravity is of the D-type instead of the F -type. The minimal inflationary models with the inflaton belonging to a massive vector multiplet were constructed in Ref.[6] by exploiting the non-minimal self-coupling of a vector multiplet to supergravity[7]. The supergravity inflationary models [6] have the single-field scalar potential given by an arbitrary real function squared. 
Those scalar potentials are always bounded from below and allow any desired values of the CMB observables n s and r. However, the minima of the scalar potentials of [6] have a vanishing cosmological constant and the vanishing Vacuum Expectation Value (VEV) of the auxiliary field D, so that they allow only Minkowski vacua where supersymmetry is restored after inflation. A simple extension of the inflationary models [6] was proposed in Ref. [8] by adding a Polonyi (chiral) multiplet [9] with a linear superpotential. The inflationary models [8] also accommodate arbitrary values of n s and r, and have a Minkowski vacuum after inflation, but with spontaneously broken supersymmetry (SUSY). In this paper we further extend the models of Ref. [8] by allowing them to have a positive cosmological constant, i.e. a de Sitter vacuum after inflation. Yet another motivation comes from an exposition of the super-Higgs effect in supergravity by presenting the new U(1) gauge-invariant form of the class of inflationary models under investigation. This paves the way towards embedding our models into the supersymmetric Grand Unification Theories (sGUT) coupled to supergravity, when they have a spontaneously broken U(1) factor in the sGUT gauge group. The physical scale of cosmological inflation can be identified with the Hubble (curvature) scale H ≈ 10^14 GeV or the inflaton mass m_inf ≈ 10^13 GeV. The inflationary scale is thus less (though, not much less!) than the sGUT scale of 10^16 GeV. The simple sGUT groups SU(5), SO(10) and E 6 are well motivated in the Calabi-Yau compactified heterotic strings, however, they usually come with at least one extra "undesired" U(1) factor in the gauge group. The well known examples include the gauge symmetry breaking E 6 → SO(10) × U(1), SO(10) → SU(5) × U(1), and the "flipped" SU(5) × U X (1) sGUT originating from heterotic strings.
Exploiting the Higgs mechanism in supergravity allows us to propose an identification of the U(1) gauge vector multiplet of those sGUT models with the inflaton vector multiplet we consider, thus unifying inflation with those sGUT in supergravity. Besides the sGUT gauge unification, related proton decay and baryon number violation, having the U(1) factor in the sGUT gauge group allows one to get rid of monopoles, because the gauge group is not semi-simple [10]. And having a positive cosmological constant takes into account dark energy too. Our paper is organized as follows. In Sec. 2 we briefly review the supergravity models [8]. In Sec. 3 we present their U(1) gauge-invariant formulation and the Higgs mechanism. A positive cosmological constant is added in Sec. 4. The scalar potential and its stability are studied in Sec. 5. Our conclusion is given by Sec. 6.
10.1140/epjc/s10052-017-4807-8
[ "https://arxiv.org/pdf/1701.08240v2.pdf" ]
119,243,753
1701.08240
4418a79d5523cd30a14b51c36afa778883124212
Higgs mechanism and cosmological constant in N = 1 supergravity with inflaton in a vector multiplet

27 Mar 2017 (revised version)

Yermek Aldabergenov ([email protected]), Department of Physics, Tokyo Metropolitan University, Minami-ohsawa 1-1, Hachioji-shi, 192-0397 Tokyo, Japan

Sergei V. Ketov ([email protected]), Department of Physics, Tokyo Metropolitan University, Minami-ohsawa 1-1, Hachioji-shi, 192-0397 Tokyo, Japan; Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, 277-8568 Chiba, Japan; Institute of Physics and Technology, Tomsk Polytechnic University, 30 Lenin Ave., 634050 Tomsk, Russian Federation

Abstract. The N = 1 supergravity models of cosmological inflation with inflaton belonging to a massive vector multiplet and spontaneous SUSY breaking after inflation are reformulated as the supersymmetric U(1) gauge theories of a massless vector superfield interacting with the Higgs and Polonyi chiral superfields, all coupled to supergravity. The U(1) gauge sector is identified with the U(1) gauge fields of the super-GUT coupled to supergravity, whose gauge group has a U(1) factor. A positive cosmological constant (dark energy) is included. The scalar potential is calculated, and its de Sitter vacuum solution is found to be stable.

PLANCK observations [1, 2, 3] of the Cosmic Microwave Background (CMB) radiation favour chaotic slow-roll inflation in its single-field realization, i.e. the large-field inflation driven by a single scalar called inflaton with an approximately flat scalar potential. Embedding inflationary models into N = 1 four-dimensional supergravity is needed to connect them to particle physics theory beyond the Standard Model of elementary particles and to quantum gravity.
Most of the literature about inflation in supergravity is based on an assumption that the inflaton belongs to a chiral (scalar) multiplet -see e.g., the reviews[4,5]. However, the inflaton can also be assigned to a massive N = 1 vector multiplet. It has some theoretical advantages because there is only one real scalar in an N = 1 massive vector multiplet. The η-problem does not arise because the scalar potential of a vector multiplet in supergravity is of the D-type instead of the F -type. The minimal inflationary models with the inflaton belonging to a massive vector multiplet were constructed in Ref.[6] by exploiting the non-minimal self-coupling of a vector multiplet to supergravity[7]. The supergravity inflationary models [6] have the single-field scalar potential given by an arbitrary real function squared. Those scalar potentials are always bounded from below and allow any desired values of the CMB observables n s and r. However, the minima of the scalar potentials of [6] have a vanishing cosmological constant and the vanishing Vacuum Expectation Value (VEV) of the auxiliary field D, so that they allow only Minkowski vacua where supersymmetry is restored after inflation.A simple extension of the inflationary models [6] was proposed in Ref.[8] by adding a Polonyi (chiral) multiplet [9] with a linear superpotential. The inflationary models [8] also accommodate arbitrary values of n s and r, and have a Minkowski vacuum after inflation, but with spontaneously broken supersymmetry (SUSY). In this paper we further extend the models of Ref.[8] by allowing them to have a positive cosmological constant, i.e. a de-Sitter vacuum after inflation.Yet another motivation comes from an exposition of the super-Higgs effect in supergravity by presenting the new U(1) gauge-invariant form of the class of inflationary models under investigation. 
This paves the way towards embedding our models into the supersymmetric Grand Unification Theories (sGUT) coupled to supergravity, when they have a spontaneously broken U(1) factor in the sGUT gauge group. The physical scale of cosmological inflation can be identified with the Hubble (curvature) scale H ≈ 10^14 GeV or the inflaton mass m_inf ≈ 10^13 GeV. The inflationary scale is thus less (though, not much less!) than the sGUT scale of 10^16 GeV. The simple sGUT groups SU(5), SO(10) and E 6 are well motivated in the Calabi-Yau compactified heterotic strings, however, they usually come with at least one extra "undesired" U(1) factor in the gauge group. The well known examples include the gauge symmetry breaking E 6 → SO(10) × U(1), SO(10) → SU(5) × U(1), and the "flipped" SU(5) × U X (1) sGUT originating from heterotic strings. Exploiting the Higgs mechanism in supergravity allows us to propose an identification of the U(1) gauge vector multiplet of those sGUT models with the inflaton vector multiplet we consider, thus unifying inflation with those sGUT in supergravity. Besides the sGUT gauge unification, related proton decay and baryon number violation, having the U(1) factor in the sGUT gauge group allows one to get rid of monopoles, because the gauge group is not semi-simple [10]. And having a positive cosmological constant takes into account dark energy too. Our paper is organized as follows. In Sec. 2 we briefly review the supergravity models [8]. In Sec. 3 we present their U(1) gauge-invariant formulation and the Higgs mechanism. A positive cosmological constant is added in Sec. 4. The scalar potential and its stability are studied in Sec. 5. Our conclusion is given by Sec. 6.

¹ Our notation and conventions coincide with the standard ones in Ref. [11], including the spacetime signature (−, +, +, +).
¹ (continued) The N = 1 superconformal calculus [6, 7] after the superconformal gauge fixing is equivalent to the curved superspace description of N = 1 Poincaré supergravity.
² Our J-function differs by the sign from that in Ref. [6, 7].

2 Scalar potential and SUSY breaking with a massive vector multiplet in the absence of a cosmological constant

The inflationary models of Ref. [8] are defined in curved superspace of N = 1 supergravity [11] by the Lagrangian ($M_{\rm Pl} = 1$)¹

$$\mathcal{L} = \int d^2\theta\, 2\mathcal{E}\left[\frac{3}{8}\left(\bar{\mathcal{D}}\bar{\mathcal{D}} - 8\mathcal{R}\right)e^{-\frac{1}{3}(K+2J)} + \frac{1}{4}W^\alpha W_\alpha + \mathcal{W}\right] + {\rm h.c.}\,, \qquad (1)$$

in terms of chiral superfields $\Phi_i$, representing ordinary (other than inflaton) matter, with a Kähler potential $K = K(\Phi_i, \bar\Phi_i)$ and a chiral superpotential $\mathcal{W} = \mathcal{W}(\Phi_i)$, and interacting with the vector (inflaton) superfield $V$ described by a real function $J = J(V)$ and having the superfield strength $W_\alpha \equiv -\frac{1}{4}(\bar{\mathcal{D}}\bar{\mathcal{D}} - 8\mathcal{R})\mathcal{D}_\alpha V$. We have also introduced the chiral density superfield $2\mathcal{E}$ and the chiral scalar curvature superfield $\mathcal{R}$ [11]. After eliminating the auxiliary fields and changing the initial (Jordan) frame to Einstein frame, the bosonic part of the Lagrangian (1) reads [8]

$$e^{-1}\mathcal{L} = -\frac{1}{2}R - K_{ij^*}\partial_m A^i\partial^m\bar A^j - \frac{1}{4}F_{mn}F^{mn} - \frac{1}{2}J''\partial_m C\,\partial^m C - \frac{1}{2}J''B_m B^m - V\,, \qquad (2)$$

and has the scalar potential

$$V = \frac{1}{2}J'^2 + e^{K+2J}\left[K^{-1}_{ij^*}\left(\mathcal{W}_i + K_i\mathcal{W}\right)\left(\bar{\mathcal{W}}_j + K_{j^*}\bar{\mathcal{W}}\right) - \left(3 - \frac{2J'^2}{J''}\right)\mathcal{W}\bar{\mathcal{W}}\right]\,, \qquad (3)$$

where we have introduced the vierbein determinant $e \equiv \det e_m^a$, the spacetime scalar curvature $R$, the complex scalars $A^i$ as physical components of $\Phi_i$; the real scalar $C$ and the real vector $B_m$, with the corresponding field strength $F_{mn} = D_m B_n - D_n B_m$, as physical components of $V$. The functions $K$, $J$ and $\mathcal{W}$ now represent the lowest components ($A^i$ and $C$) of the corresponding superfields. As regards their derivatives, we use the notation

$$K_i \equiv \frac{\partial K}{\partial A^i}\,,\quad K_{i^*} \equiv \frac{\partial K}{\partial\bar A^i}\,,\quad K_{ij^*} \equiv \frac{\partial^2 K}{\partial A^i\partial\bar A^j}\,,\quad J' \equiv \frac{\partial J}{\partial C}\,,\quad \mathcal{W}_i \equiv \frac{\partial\mathcal{W}}{\partial A^i}\,,\quad \bar{\mathcal{W}}_i \equiv \frac{\partial\bar{\mathcal{W}}}{\partial\bar A^i}\,.$$

As is clear from Eq. (2), the absence of ghosts requires $J''(C) > 0$, where the primes denote differentiations with respect to the given argument.²

For our purposes here, we restrict ourselves to a single chiral superfield $\Phi$ whose Kähler potential and superpotential are those of the Polonyi model [9]:

$$K = \bar\Phi\Phi\,,\qquad \mathcal{W} = \mu(\Phi + \beta)\,, \qquad (4)$$

with the parameters $\mu$ and $\beta$. The choice (4) is quite natural (and unique) for a nilpotent (Volkov-Akulov) chiral superfield $\Phi$ obeying the constraint $\Phi^2 = 0$, though we do not employ the nilpotency condition here, in order to avoid its possible clash with unitarity at high energies. A substitution of Eq. (4) into the Lagrangian (2) yields

$$V = \frac{1}{2}J'^2 + \mu^2 e^{A\bar A + 2J}\left[\left|1 + \bar A\beta + A\bar A\right|^2 - \left(3 - \frac{2J'^2}{J''}\right)|A + \beta|^2\right]\,, \qquad (5)$$

where the complex scalar $A$ is the lowest component of the Polonyi chiral superfield $\Phi$. The Minkowski vacuum conditions

$$V = \frac{1}{2}J'^2 + \mu^2 e^{A\bar A + 2J}\left[\left|1 + \bar A\beta + A\bar A\right|^2 - \left(3 - \frac{2J'^2}{J''}\right)|A + \beta|^2\right] = 0\,, \qquad (6)$$

$$\partial_{\bar A}V = \mu^2 e^{A\bar A + 2J}\left[A\left(1 + \bar A\beta + A\bar A\right) + (A + \beta)\left(1 + A\beta + A\bar A\right) - \left(3 - \frac{2J'^2}{J''}\right)(A + \beta) + A\left|1 + \bar A\beta + A\bar A\right|^2 - \left(3 - \frac{2J'^2}{J''}\right)A|A + \beta|^2\right] = 0\,, \qquad (7)$$

$$\partial_C V = J'\left\{J'' + 2\mu^2 e^{A\bar A + 2J}\left[\left|1 + \bar A\beta + A\bar A\right|^2 - \left(1 - \frac{2J'^2}{J''} + \frac{J'J'''}{J''^2}\right)|A + \beta|^2\right]\right\} = 0\,, \qquad (8)$$

can be satisfied when $J' = 0$, which separates the Polonyi multiplet from the vector multiplet. The Polonyi field VEV is then given by $\langle A\rangle = \sqrt{3} - 1$ and $\beta = 2 - \sqrt{3}$ [9]. This solution describes a stable Minkowski vacuum with spontaneous SUSY breaking at an arbitrary scale $F = \mu$. The related gravitino mass (at the minimum having $J' = 0$) is given by $m_{3/2} = \mu e^{2-\sqrt{3}}$. There is also a massive scalar of mass $2m_{3/2}$ and a massless fermion in the physical spectrum. As a result, the Polonyi field does not affect the inflation driven by the inflaton scalar $C$ belonging to the massive vector multiplet and having the D-type scalar potential $V(C) = \frac{1}{2}J'^2$ with a real $J$-function. Of course, the true inflaton field should be canonically normalized via the proper field redefinition of $C$.
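As a quick numerical sanity check (ours, not part of the original paper), one can evaluate the potential (5) at $J = J' = 0$ with $\langle A\rangle = \sqrt{3}-1$ and $\beta = 2-\sqrt{3}$, and confirm that the vacuum energy vanishes, that the point is stationary, and that the potential stays non-negative on a grid in the complex $A$-plane (cf. the stability discussion of Sec. 5). A minimal sketch, with $\mu$ set to 1 for illustration:

```python
import math

SQ3 = math.sqrt(3.0)

def V(A, beta, mu=1.0):
    """Scalar potential (5) at J = J' = 0 (the vacuum separates the Polonyi
    and vector multiplets):  V = mu^2 e^{A Abar} [ |1 + Abar*beta + A Abar|^2 - 3 |A + beta|^2 ]."""
    x = 1.0 + A.conjugate() * beta + A * A.conjugate()
    return mu**2 * math.exp(abs(A) ** 2) * (abs(x) ** 2 - 3.0 * abs(A + beta) ** 2)

a, b = SQ3 - 1.0, 2.0 - SQ3        # Polonyi VEV <A> and parameter beta

# V vanishes at the vacuum, since (1 + a*b + a^2)^2 = 3 and 3(a + b)^2 = 3
assert abs(V(complex(a, 0.0), b)) < 1e-12

# the vacuum is a stationary point along the real A-axis (numerical derivative)
eps = 1e-6
dV = (V(complex(a + eps, 0.0), b) - V(complex(a - eps, 0.0), b)) / (2 * eps)
assert abs(dV) < 1e-5

# V is non-negative on a grid around the vacuum, in line with Figs. 1 and 2
vals = [V(complex(0.05 * i, 0.05 * j), b)
        for i in range(-40, 41) for j in range(-40, 41)]
assert min(vals) > -1e-12
print("Minkowski vacuum checks passed")
```

The exact cancellation at the vacuum follows from $1 + ab + a^2 = \sqrt{3}$ and $a + b = 1$ for $a = \sqrt{3}-1$, $b = 2-\sqrt{3}$.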
3 Massless vector multiplet and super-Higgs mechanism

The matter-coupled supergravity model (1) can also be considered as a supersymmetric (Abelian, non-minimal) gauge theory (coupled to supergravity and a Higgs superfield) in the (supersymmetric) gauge where the Higgs superfield is gauged away (say, equal to 1). When the gauge U(1) symmetry is restored by introducing back the Higgs (chiral) superfield, the vector superfield $V$ becomes the gauge superfield of a spontaneously broken U(1) gauge group. In this Section we restore the gauge symmetry in the way consistent with local supersymmetry, and then compare our results with those of the previous Section.

We start with a Lagrangian having the same form as (1),

$$\mathcal{L} = \int d^2\theta\, 2\mathcal{E}\left[\frac{3}{8}\left(\bar{\mathcal{D}}\bar{\mathcal{D}} - 8\mathcal{R}\right)e^{-\frac{1}{3}(K+2J)} + \frac{1}{4}W^\alpha W_\alpha + \mathcal{W}(\Phi_i)\right] + {\rm h.c.}\,, \qquad (9)$$

where $K = K(\Phi_i, \bar\Phi_j)$ and the indices $i, j, k$ refer to the chiral (matter) superfields, excluding the Higgs chiral superfield that we denote as $H$, $\bar H$. Now, in contrast to the previous Section, the real function $J$ also depends on the Higgs superfield as $J = J(He^{2V}\bar H)$, while the vector superfield $V$ is massless. The Lagrangian (9) is invariant under the supersymmetric U(1) gauge transformations

$$H \to H' = e^{-iZ}H\,,\qquad \bar H \to \bar H' = e^{i\bar Z}\bar H\,, \qquad (10)$$

$$V \to V' = V + \frac{i}{2}(Z - \bar Z)\,, \qquad (11)$$

whose gauge parameter $Z$ itself is a chiral superfield. The Lagrangian (1) of Sec. 2 is recovered from Eq. (9) in the gauge $H = 1$, after the redefinition $J_{\rm new}(e^{2V}) = J_{\rm old}(V)$.

The U(1) gauge symmetry of the Lagrangian (9) allows us to choose a different (Wess-Zumino) supersymmetric gauge by "gauging away" the chiral and anti-chiral parts of the general superfield $V$ via the appropriate choice of the superfield parameters $Z$ and $\bar Z$, so that

$$V| = \mathcal{D}_\alpha\mathcal{D}_\beta V| = \bar{\mathcal{D}}_{\dot\alpha}\bar{\mathcal{D}}_{\dot\beta}V| = 0\,,\qquad \bar{\mathcal{D}}_{\dot\alpha}\mathcal{D}_\alpha V| = \sigma^m_{\alpha\dot\alpha}B_m\,,$$

$$\mathcal{D}_\alpha W_\beta| = \frac{1}{4}\sigma^m_{\alpha\dot\alpha}\bar\sigma^{\dot\alpha n}{}_\beta\,(2iF_{mn}) + \delta_{\alpha\beta}D\,,\qquad \bar{\mathcal{D}}\bar{\mathcal{D}}\mathcal{D}\mathcal{D}V| = \frac{16}{3}b^m B_m + 8D\,,$$

where the vertical bars denote the leading field components of the superfields.
It is straightforward (but tedious) to calculate the bosonic part of the Lagrangian in terms of the superfield components in Einstein frame, after elimination of the auxiliary fields and Weyl rescaling. We find

$$e^{-1}\mathcal{L} = -\frac{1}{2}R - K_{ij^*}\partial_m A^i\partial^m\bar A^j - \frac{1}{4}F_{mn}F^{mn} - 2J_{h\bar h}\,\partial_m h\,\partial^m\bar h - \frac{1}{2}J_{V^2}B_m B^m + iB^m\left(J_{Vh}\,\partial_m h - J_{V\bar h}\,\partial_m\bar h\right) - V\,, \qquad (12)$$

where $h$, $\bar h$ are the Higgs field and its conjugate. We use the notation $J_{h\bar h} \equiv \frac{\partial^2 J}{\partial h\,\partial\bar h}\big|$, $J_{Vh} \equiv \frac{\partial^2 J}{\partial h\,\partial V}\big|$ and $J_{V^2} \equiv \frac{\partial^2 J}{\partial V^2}\big|$. As regards the scalar potential, we get

$$V = \frac{1}{2}J_V^2 + e^{K+2J}\left[(K+2J)^{-1}_{IJ^*}\left(W_I + (K+2J)_I W\right)\left(\bar W_{J^*} + (K+2J)_{J^*}\bar W\right) - 3W\bar W\right]\,, \qquad (13)$$

where the capital Latin indices $I$, $J$ collectively denote all chiral superfields (as well as their lowest field components) including the Higgs superfield.

The standard U(1) Higgs mechanism setting appears after employing the canonical function $J = \frac{1}{2}he^{2V}\bar h$. As regards the Higgs sector, it leads to

$$e^{-1}\mathcal{L}_{\rm Higgs} = -\partial_m h\,\partial^m\bar h + iB^m\left(\bar h\,\partial_m h - h\,\partial_m\bar h\right) - h\bar h\,B_m B^m - V\,. \qquad (14)$$

When parameterizing $h$ and $\bar h$ as

$$h = \frac{1}{\sqrt{2}}(\rho + \nu)e^{i\zeta}\,,\qquad \bar h = \frac{1}{\sqrt{2}}(\rho + \nu)e^{-i\zeta}\,, \qquad (15)$$

where $\rho$ is the (real) Higgs boson, $\nu \equiv \langle h\rangle = \langle\bar h\rangle$ is the Higgs VEV, and $\zeta$ is the Goldstone boson, in the unitary gauge of $h \to h' = e^{-i\zeta}h$ and $B_m \to B'_m = B_m + \partial_m\zeta$, we reproduce the standard result [12]

$$e^{-1}\mathcal{L}_{\rm Higgs} = -\frac{1}{2}\partial_m\rho\,\partial^m\rho - \frac{1}{2}(\rho + \nu)^2 B_m B^m - V\,. \qquad (16)$$

The same result is also achieved by considering the super-Higgs mechanism where, in order to get rid of the Goldstone mode, we employ the super-gauge transformations (10) and (11), and define the relevant field components of $Z$ and $i(Z - \bar Z)$ as

$$Z| = \zeta + i\xi\,,\qquad \frac{i}{2}\bar{\mathcal{D}}_{\dot\alpha}\mathcal{D}_\alpha(Z - \bar Z)| = \sigma^m_{\alpha\dot\alpha}\partial_m\zeta\,. \qquad (17)$$

Examining the lowest components of the transformation (10), we find that the real part of $Z|$ and $\bar Z|$ cancels the Goldstone mode of (15).
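The step from Eq. (14) to Eq. (16) can also be verified numerically (our illustration, not part of the paper): substituting the parameterization (15) into the bosonic Higgs terms and treating $\partial_m$ as a single formal component, the Goldstone $\zeta$ only enters through the combination $B_m + \partial_m\zeta$, which the unitary gauge removes. A sketch in plain Python:

```python
import cmath
import math
import random

random.seed(0)

def higgs_lagrangian(rho, drho, zeta, dzeta, nu, B):
    """Bosonic Higgs terms of Eq. (14) for one formal component:
    -|dh|^2 + i B (hbar*dh - h*dhbar) - |h|^2 B^2, with h from Eq. (15)."""
    h = (rho + nu) * cmath.exp(1j * zeta) / math.sqrt(2)
    dh = (drho + 1j * (rho + nu) * dzeta) * cmath.exp(1j * zeta) / math.sqrt(2)
    mixed = 1j * B * (h.conjugate() * dh - h * dh.conjugate())  # real by construction
    return -abs(dh) ** 2 + mixed.real - abs(h) ** 2 * B ** 2

for _ in range(100):
    rho, drho, zeta, dzeta, nu, B = (random.uniform(-2, 2) for _ in range(6))
    lhs = higgs_lagrangian(rho, drho, zeta, dzeta, nu, B)
    # Eq. (16): the Goldstone is eaten, B only appears in the shift B + dzeta
    rhs = -0.5 * drho ** 2 - 0.5 * (rho + nu) ** 2 * (B + dzeta) ** 2
    assert abs(lhs - rhs) < 1e-9
print("super-Higgs component check passed")
```

The identity holds as pure algebra, so the random sampling here is just a convenient way to confirm the rewriting of (14) into (16).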
Similarly, applying the derivatives $\bar{\mathcal{D}}_{\dot\alpha}$ and $\mathcal{D}_\alpha$ to (11) and taking their lowest components (recalling that $\bar{\mathcal{D}}_{\dot\alpha}\mathcal{D}_\alpha V| = \sigma^m_{\alpha\dot\alpha}B_m$), we conclude that the vector field "eats up" the Goldstone mode indeed, as

$$B'_m = B_m + \partial_m\zeta\,. \qquad (18)$$

4 Adding a cosmological constant

A cosmological constant (or dark energy) can be introduced into our framework without breaking any symmetries, via a simple modification of the Polonyi sector and its parameters $\alpha$ and $\beta$ introduced in Sec. 2.³ Just adding a (very) small positive constant $\delta$ and assuming that $J' = 0$ at the minimum of the potential modify the (Minkowski) vacuum condition $V = 0$ of Sec. 2 to

$$V = \mu^2 e^{\alpha^2}\delta = m_{3/2}^2\,\delta\,. \qquad (19)$$

By comparing the condition (19) to Eq. (6) we find a relation

$$(1 + \alpha\beta + \alpha^2)^2 - 3(\alpha + \beta)^2 = \delta\,. \qquad (20)$$

A solution to Eqs. (20) and (7) with $V = m_{3/2}^2\delta$ is the true minimum, and it reads

$$\alpha = (\sqrt{3} - 1) + \frac{3 - 2\sqrt{3}}{3(\sqrt{3} - 1)}\,\delta + \mathcal{O}(\delta^2)\,,\qquad \beta = (2 - \sqrt{3}) + \frac{\sqrt{3} - 3}{6(\sqrt{3} - 1)}\,\delta + \mathcal{O}(\delta^2)\,. \qquad (21)$$

This yields a de Sitter vacuum with the spontaneously broken SUSY after inflation. Inserting the solution into the superpotential and ignoring the $\mathcal{O}(\delta^2)$-terms, we find

$$\mathcal{W} = \mu(\alpha + \beta) = \mu\left(a + b - \tfrac{1}{2}\delta\right)\,, \qquad (22)$$

where $a \equiv \sqrt{3} - 1$ and $b \equiv 2 - \sqrt{3}$ are the SUSY breaking vacuum solutions for the Polonyi parameters in the absence of a cosmological constant (Sec. 2).

5 Scalar potential and vacuum stability

For completeness, stability of our vacuum solutions should also be examined. On the one hand, in our model the vacuum stability is almost guaranteed because both functions $J'^2$ and $J''$ enter the scalar potential

$$V = \frac{1}{2}J'^2 + \mu^2 e^{A\bar A + 2J}\left[\left|1 + \bar A\beta + A\bar A\right|^2 - \left(3 - \frac{2J'^2}{J''}\right)|A + \beta|^2\right] \qquad (23)$$

with the positive sign, while the function $J''$ is required to be positive for the ghost-freedom. On the other hand, the only term with the negative sign in the scalar potential (23) is $-3|A + \beta|^2$, but it grows slower than the positive quartic term $\left|1 + \bar A\beta + A\bar A\right|^2$.
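As an aside (our numerical illustration, not part of the paper), the de Sitter solution (21) can be checked against the condition (20) to first order in $\delta$, and the linear shift of $\alpha + \beta$ in Eq. (22) is exact:

```python
import math

SQ3 = math.sqrt(3.0)
a, b = SQ3 - 1.0, 2.0 - SQ3   # Minkowski-vacuum values of the Polonyi parameters

def f(alpha, beta):
    # left-hand side of Eq. (20)
    return (1.0 + alpha * beta + alpha ** 2) ** 2 - 3.0 * (alpha + beta) ** 2

# delta = 0 reproduces the Minkowski vacuum condition
assert abs(f(a, b)) < 1e-12

for delta in (1e-3, 1e-4, 1e-5):
    alpha = a + (3.0 - 2.0 * SQ3) / (3.0 * (SQ3 - 1.0)) * delta
    beta = b + (SQ3 - 3.0) / (6.0 * (SQ3 - 1.0)) * delta
    # Eq. (20) holds up to O(delta^2) corrections
    assert abs(f(alpha, beta) - delta) < 10.0 * delta ** 2
    # Eq. (22): alpha + beta = a + b - delta/2, since the two O(delta)
    # coefficients in (21) sum exactly to -1/2
    assert abs((alpha + beta) - (a + b - 0.5 * delta)) < 10.0 * delta ** 2
print("de Sitter vacuum checks passed")
```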
The non-negativity of the scalar potential (23) for $|A| < 1$ is not as apparent as that for $|A| \geq 1$. That is why we supply Figs. 1 and 2, where the non-negativity becomes apparent too. In accordance with the previous Sec. 4, we can also add a positive cosmological constant that shifts the minimum to $V = m_{3/2}^2\delta$, describing a de Sitter vacuum.

6 Conclusion

Our new results are given in Secs. 3, 4 and 5. The new gauge-invariant formulation of our models can be used for unification of inflation with super-GUT in the context of supergravity, and has a single inflaton scalar field having a positive definite scalar potential, a spontaneous SUSY breaking and a de Sitter vacuum after inflation. Our approach does not preserve the R-symmetry. Our upgrade of the earlier results in Ref. [8] is not limited to the generalized matter couplings in supergravity, given by Eqs. (12) and (13).

The standard approach to inflation in supergravity is based on the assumption that inflaton belongs to a chiral (scalar) multiplet. It leads to the well known problems such as the so-called η-problem, stabilization of other scalars, getting SUSY breaking and a dS vacuum after inflation, etc. Though some solutions to these problems exist in the literature, they are rather complicated and include additional "hand-made" input such as extra (stabilizing) matter superfields, extra (shift) symmetries or extra (nilpotency) conditions. We advocate another approach, where inflaton is assumed to belong to a massive vector multiplet, while SUSY breaking and a dS vacuum are achieved with the help of a Polonyi superfield. It is much simpler and more flexible than the standard approach.

Physical applications of our approach to super-GUT and reheating are crucially dependent upon how the fields present in our models interact with the super-GUT fields. Consistency of sGUT with inflation may lead to some new constraints on both.
For instance, inflaton couplings to other matter have to be smaller than $10^{-3}$, in order to preserve flatness of the inflaton scalar potential and match the observed spectrum of CMB density perturbations. In particular, Yukawa couplings of inflaton to right-handed (sterile) neutrino are crucial to address the leptogenesis via inflaton decay and the subsequent reheating via decays of the right-handed neutrino into visible particles of the Standard Model. Unfortunately, all this appears to be highly model-dependent at present. A derivation of our supergravity models from superstrings, if any, is desirable because it would simultaneously fix those (unknown) interactions and thus provide specific tools for a computation of reheating temperature, matter abundance, etc. after inflation, together with the low-energy predictions via gravity- or gauge-mediated SUSY breaking to the electro-weak scale; see e.g. Ref. [14] for the previous studies along these lines. Our models can be further extended in the gauge sector to the Born-Infeld-type gauge theory coupled to supergravity and other matter, along the lines of Refs. [15, 16], thus providing further support towards their possible origin in superstring (flux-)compactification.

Figure 1: The scalar potential $\tilde V = \mu^{-2}e^{-A\bar A - 2J}V$ as a function of ${\rm Re}(A)$ and ${\rm Im}(A)$ at $J' = 0$.
Figure 2: The real slice at ${\rm Im}(A) = 0$ of Fig. 1 around the minimum of $\tilde V$.

³ A similar idea was used in Ref. [13], though in a different context, where the Polonyi potential was needed to prevent the real part of the stabilizer field from vanishing at the minimum by imposing the condition $m_{\rm gravitino} \ll m_{\rm inflaton}$. In our approach, there is no stabilizer field, while the inflation comes from the D-type potential.

References

[1] Planck Collaboration, P. A. R. Ade et al., "Planck 2015 results. XIII. Cosmological parameters," arXiv:1502.01589 [astro-ph.CO].
[2] Planck Collaboration, P. A. R. Ade et al., "Planck 2015 results. XX. Constraints on inflation," arXiv:1502.02114 [astro-ph.CO].
[3] BICEP2, Keck Array Collaboration, P. A. R. Ade et al., "Improved Constraints on Cosmology and Foregrounds from BICEP2 and Keck Array Cosmic Microwave Background Data with Inclusion of 95 GHz Band," Phys. Rev. Lett. 116 (2016) 031302, arXiv:1510.09217 [astro-ph.CO].
[4] M. Yamaguchi, "Supergravity based inflation models: a review," Class. Quant. Grav. 28 (2011) 103001, arXiv:1101.2488 [astro-ph.CO].
[5] S. V. Ketov, "Supergravity and Early Universe: the Meeting Point of Cosmology and High-Energy Physics," Int. J. Mod. Phys. A28 (2013) 1330021, arXiv:1201.2239 [hep-th].
[6] S. Ferrara, R. Kallosh, A. Linde, and M. Porrati, "Minimal Supergravity Models of Inflation," Phys. Rev. D88 no. 8 (2013) 085038, arXiv:1307.7696 [hep-th].
[7] A. Van Proeyen, "Massive Vector Multiplets in Supergravity," Nucl. Phys. B162 (1980) 376.
[8] Y. Aldabergenov and S. V. Ketov, "SUSY breaking after inflation in supergravity with inflaton in a massive vector supermultiplet," Phys. Lett. B 761 (2016) 115, arXiv:1607.05366 [hep-th].
[9] J. Polonyi, "Generalization of the Massive Scalar Multiplet Coupling to the Supergravity," Hungary Central Inst. Res. preprint KFKI-77-93 (1977), unpublished.
[10] S. V. Ketov, "Solitons, monopoles and duality: from Sine-Gordon to Seiberg-Witten," Fortsch. Phys. 45 (1997) 237, arXiv:hep-th/9611209.
[11] J. Wess and J. Bagger, "Supersymmetry and Supergravity," Princeton University Press, Princeton, NJ, 1992.
[12] S. Weinberg, "General Theory of Broken Local Symmetries," Phys. Rev. D 7 (1973) 1068.
[13] A. Linde, "On inflation, cosmological constant, and SUSY breaking," JCAP 1611 (2016) 002, arXiv:1608.00119 [hep-th].
[14] J. Ellis, H.-J. He and Z.-Z. Xianyu, "Higgs inflation, reheating and gravitino production in no-scale supersymmetric GUTs," JCAP 1808 (2016) 068, arXiv:1606.02202 [hep-th].
[15] H. Abe, Y. Sakamura, and Y. Yamada, "Massive vector multiplet inflation with Dirac-Born-Infeld type action," Phys. Rev. D91 (2015) 125042, arXiv:1505.02235 [hep-th].
[16] S. Aoki and Y. Yamada, "More on DBI action in 4D N = 1 supergravity," arXiv:1611.08426 [hep-th].
[]
[ "Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes" ]
[ "Tomasz Limisiewicz [email protected] \nInstitute of Formal and Applied Linguistics\nFaculty of Mathematics and Physics\nCharles University\nPragueCzech Republic\n", "David Mareček [email protected] \nInstitute of Formal and Applied Linguistics\nFaculty of Mathematics and Physics\nCharles University\nPragueCzech Republic\n" ]
[ "Institute of Formal and Applied Linguistics\nFaculty of Mathematics and Physics\nCharles University\nPragueCzech Republic", "Institute of Formal and Applied Linguistics\nFaculty of Mathematics and Physics\nCharles University\nPragueCzech Republic" ]
[ "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing" ]
State-of-the-art contextual embeddings are obtained from large language models available only for a few languages. For others, we need to learn representations using a multilingual model. There is an ongoing debate on whether multilingual embeddings can be aligned in a space shared across many languages. The novel Orthogonal Structural Probe (Limisiewicz and Mareček, 2021) allows us to answer this question for specific linguistic features and learn a projection based only on monolingual annotated datasets. We evaluate syntactic (UD) and lexical (WordNet) structural information encoded in MBERT's contextual representations for nine diverse languages.¹ We observe that for languages closely related to English, no transformation is needed. The evaluated information is encoded in a shared cross-lingual embedding space. For other languages, it is beneficial to apply orthogonal transformation learned separately for each language. We successfully apply our findings to zero-shot and few-shot cross-lingual parsing.
10.18653/v1/2021.emnlp-main.376
[ "https://www.aclanthology.org/2021.emnlp-main.376.pdf" ]
237,485,576
2109.04921
73a6eb7b5b9c9a4564f6a809bfa25978e23108d9
Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes

Tomasz Limisiewicz ([email protected]) and David Mareček ([email protected])
Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, November 7-11, 2021, p. 4589.

Abstract: State-of-the-art contextual embeddings are obtained from large language models available only for a few languages. For others, we need to learn representations using a multilingual model. There is an ongoing debate on whether multilingual embeddings can be aligned in a space shared across many languages. The novel Orthogonal Structural Probe (Limisiewicz and Mareček, 2021) allows us to answer this question for specific linguistic features and learn a projection based only on monolingual annotated datasets. We evaluate syntactic (UD) and lexical (WordNet) structural information encoded in MBERT's contextual representations for nine diverse languages.¹ We observe that for languages closely related to English, no transformation is needed. The evaluated information is encoded in a shared cross-lingual embedding space. For other languages, it is beneficial to apply orthogonal transformation learned separately for each language. We successfully apply our findings to zero-shot and few-shot cross-lingual parsing.

1 Introduction

The representation learned by language models has been successfully applied in various NLP tasks.
Multilingual pre-training allows utilizing the representation for various languages, including low-resource ones. There is an open discussion about the extent to which contextual embeddings are similar across languages (Søgaard et al., 2018; Hartmann et al., 2019; Vulić et al., 2020). The motivation for our work is to answer: Q1 Is linguistic information uniformly encoded in the representations of various languages? And if this assumption does not hold: Q2 Is it possible to learn an orthogonal transformation to align the embeddings? We probe for the syntactic and lexical structures encoded in multilingual embeddings with the new Orthogonal Structural Probes (Limisiewicz and Mareček, 2021). Previously, Chi et al. (2020) employed structural probing (Hewitt and Manning, 2019) to evaluate cross-lingual syntactic information in MBERT and visualize how it is distributed across languages. Our approach's advantage is learning an orthogonal transformation that maps the embeddings across languages based on monolingual linguistic information: dependency syntax and lexical hypernymy. This new capability allows us to test different probing scenarios. We measure how adding assumptions of isomorphism and uniformity of the representations across languages affects probing results to answer our research questions.

Related Work

Probing: It is a method of evaluating linguistic information encoded in pre-trained NLP models. Usually, a simple classifier for the probing task is trained on the frozen model's representation (Linzen et al., 2016; Belinkov et al., 2017; Blevins et al., 2018). The work of Hewitt and Manning (2019) introduced structural probes that linearly transform contextual embeddings to approximate the topology of dependency trees. Limisiewicz and Mareček (2021) proposed new structural tasks and introduced an orthogonal constraint that allows decomposing projected embeddings into parts correlated with specific linguistic features. Kulmizev et al.
(2020) probed different languages to examine what type of syntactic dependency annotation is captured in an LM. Hall Maudslay et al. (2020) modify the loss function, improving syntactic probes' ability to parse.

Cross-lingual embeddings: There is an essential branch of research studying relationships of embeddings across languages. Mikolov et al. (2013) showed that distributions of the word vectors in different languages could be aligned in a shared space. Following research analyzed various methods of aligning cross-lingual static embeddings (Faruqui and Dyer, 2014; Artetxe et al., 2016; Smith et al., 2017) and gradually dropped the requirement of parallel data for alignment (Artetxe et al., 2018; Zhang et al., 2017; Lample et al., 2018). Significant attention was also devoted to the analysis of multilingual and contextual embeddings of MBERT (Pires et al., 2019; Libovický et al., 2020). There is also no conclusive answer to whether the alignment of such representations is beneficial to cross-lingual transfer. Wang et al. (2019) show that the alignment facilitates zero-shot parsing, while the results of Wu and Dredze (2020) for multiple tasks put the benefits of the alignment in doubt.

Method

The Structural Probe (Hewitt and Manning, 2019) is a gradient-optimized linear projection of the contextual word representations produced by a pre-trained neural model (e.g. BERT, Devlin et al. (2019); ELMo, Peters et al. (2018)). In a Distance Probe, the Euclidean distance between projected word vectors approximates the distance between words in a dependency tree:

d_B(h_i, h_j)^2 = (B(h_i − h_j))^T (B(h_i − h_j)),    (1)

where B is the Linear Transformation matrix and h_i, h_j are the vector representations of words at positions i and j.
Another type of probe is a Depth Probe, where the token's depth in a dependency tree is approximated by the Euclidean norm of a projected word vector:

||h_i||_B^2 = (B h_i)^T (B h_i).    (2)

Orthogonal Structural Probes: Limisiewicz and Mareček (2021) proposed decomposing the matrix B and then gradient-optimizing a vector and an orthogonal matrix. The new formulation of an Orthogonal Distance Probe is:

d_{d̄ V^T}(h_i, h_j)^2 = (d̄ V^T (h_i − h_j))^T (d̄ V^T (h_i − h_j)),    (3)

where V is an orthogonal matrix (Orthogonal Transformation) and d̄ is a Scaling Vector, which can be changed during optimization for each task to allow multi-task joint probing. This procedure allows optimizing a separate Scaling Vector d̄ for a specific objective, allowing probing for multiple linguistic tasks simultaneously. In this work, an individual Orthogonal Transformation V is trained for each language, facilitating multi-language probing. This approach assumes that the representations are isomorphic across languages; we examine this claim in our experiments. Our implementation is available on GitHub: https://github.com/Tom556/OrthogonalTransformerProbing

Experiments

We examine vector representations obtained from multilingual cased BERT (Devlin et al., 2019).

Data and Probing Objectives: We probe for syntactic structure annotated in Universal Dependencies treebanks (Nivre et al., 2020) and for lexical hypernymy trees from WordNet (Miller, 1995). We optimize depth and dependency probes in both types of structures jointly. For both dependency and lexical probes, we use sentences from UD treebanks in nine languages. For each treebank, we sampled 4000 sentences to diminish the effect of varying size datasets in probe optimization. Lexical depths and distances for each sentence are obtained from hypernymy trees that are available for each language in Open Multilingual Wordnet (Bond and Foster, 2013).
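As a concrete illustration of Equations (1)-(3), the probe distances can be sketched in a few lines of pure Python. This is not the authors' code; the 2x2 matrices and toy vectors are invented for the example, and d_bar plays the role of the Scaling Vector d̄ applied elementwise after the rotation V^T.

```python
# Sketch of the structural-probe quantities in Eqs. (1)-(3); toy data.

def matvec(M, x):
    # multiply matrix M (list of rows) by vector x
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def sq_norm(x):
    return sum(xi * xi for xi in x)

def distance_probe(B, h_i, h_j):
    # Eq. (1): squared distance of projected vectors approximates tree distance
    diff = [a - b for a, b in zip(h_i, h_j)]
    return sq_norm(matvec(B, diff))

def depth_probe(B, h_i):
    # Eq. (2): squared norm of the projected vector approximates tree depth
    return sq_norm(matvec(B, h_i))

def orthogonal_distance_probe(d_bar, V, h_i, h_j):
    # Eq. (3): B is decomposed into a scaling vector d_bar and an
    # orthogonal matrix V; only d_bar changes between probing tasks.
    Vt = [list(col) for col in zip(*V)]  # transpose of V
    diff = [a - b for a, b in zip(h_i, h_j)]
    scaled = [d * y for d, y in zip(d_bar, matvec(Vt, diff))]
    return sq_norm(scaled)

# With V = identity and d_bar = ones, Eq. (3) reduces to Eq. (1) with B = I.
I = [[1.0, 0.0], [0.0, 1.0]]
h1, h2 = [1.0, 2.0], [3.0, 1.0]
assert orthogonal_distance_probe([1.0, 1.0], I, h1, h2) == distance_probe(I, h1, h2)
```

The decomposition is what makes joint multi-task probing cheap: the expensive orthogonal part is shared, and each task only learns its own scaling vector.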
Choice of Layers: We probe the representations of the 7th layer for dependency information and the representations of the 5th layer for lexical information. These layers achieve the highest performance for the respective features.

Multilingual Evaluation: We utilize the new joint optimization capability of Orthogonal Structural Probes to analyze how the encoding of linguistic phenomena is expressed across different languages in MBERT representations. To answer our research questions, we evaluate three settings of multilingual Orthogonal Structural Probe training. The approaches are sorted by expressiveness; the most expressive one makes the weakest assumption about the likeness of representations across languages:

IN-LANG (no assumption): We train a separate instance of the Orthogonal Structural Probe for each language. Neither the Scaling Vector nor the Orthogonal Transformation is shared between languages.

MAPPEDLANGS (isomorphism assumption): We train a shared Scaling Vector for each probing task and a separate Orthogonal Transformation per language. If the embedding subspaces are orthogonal across languages, the orthogonal mapping will be learned during probe training, and the setting will achieve similar results as the previous one.

ALLLANGS (uniformity assumption): Both the Scaling Vector and the Orthogonal Transformation are shared across languages. If the same embedding subspace encodes the probed information across languages, the results of this setting will be on par with the first approach.

The first and the last approaches were proposed and analyzed for Structural Probes by Chi et al. (2020). The MAPPEDLANGS setting is possible thanks to the new probing formulation of Limisiewicz and Mareček (2021). For evaluation, we compute Spearman's correlations between predicted and gold depths and distances. In this evaluation, we use supervision for a target language.
Furthermore, we analyze the impact of two language-specific features on the results: a) the size of the MBERT training corpus in a given language; b) typological similarity to English. The former is expressed as the number of tokens in Wikipedia. The latter is a Hamming similarity between features in WALS (Dryer and Haspelmath, 2013).

Zero- and Few-shot Parsing

We extract directed trees from the predictions of dependency probes. For that purpose, we employ the Maximum Spanning Tree algorithm on the predicted distances and the algorithm's extension of Kulmizev et al. (2020) to extract directed trees based on predicted depths. We examine cross-lingual transfer for parsing sentences in Chinese, Basque, Slovene, Finnish, and Arabic. For each of them, we train the probe on the remaining eight languages. In a few-shot setting, we also optimize on 10 to 1000 examples from the target language.

Results

Spearman's correlation: Using IN-LANG probes for each language gives high Spearman's correlations across the languages. The MAPPEDLANGS approach brings only a slight difference for most of the configurations, while imposing the uniformity constraint (ALLLANGS) deteriorates the results for some of the languages, as shown in Table 1. The drop in correlation is especially high for Non-Indo-European languages (except for lexical distance, where the difference between the Indo-European and Non-Indo-European groups is small). In Fig. 1, we present the Pearson's correlations between results from Table 1 and two language-specific features. The key observation is that typological similarity to English is strongly correlated with ∆ALLLANGS. Hence, a shared probe achieves relatively good results for English, Spanish, and French. It shows that lexical and dependency information is uniformly distributed in the embedding space for those languages.
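The tree-extraction step used for parsing can be sketched with a greedy spanning-tree construction (Prim's algorithm) over the probe's predicted pairwise distances. The distance matrix below is an invented toy example; here we grow the tree along the smallest predicted distances, on the assumption that adjacent words in the dependency tree get the smallest predicted distances, and we omit the depth-based edge-directing step of Kulmizev et al. (2020).

```python
# Extract an undirected tree from predicted word-pair distances
# with Prim's algorithm (toy sketch, not the paper's code).

def spanning_tree(dist):
    """Greedily grow a spanning tree preferring the smallest predicted
    distances; returns a set of undirected edges (i, j) with i < j."""
    n = len(dist)
    in_tree = {0}
    edges = set()
    while len(in_tree) < n:
        # pick the cheapest edge leaving the current tree
        i, j = min(
            ((u, v) for u in in_tree for v in range(n) if v not in in_tree),
            key=lambda e: dist[e[0]][e[1]],
        )
        edges.add((min(i, j), max(i, j)))
        in_tree.add(j)
    return edges

# Toy predicted distances for a 4-word sentence (symmetric, invented).
INF = float("inf")
dist = [
    [INF, 1.0, 2.5, 3.0],
    [1.0, INF, 1.2, 2.8],
    [2.5, 1.2, INF, 0.9],
    [3.0, 2.8, 0.9, INF],
]
print(sorted(spanning_tree(dist)))  # → [(0, 1), (1, 2), (2, 3)]
```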
We bear in mind that the European languages are over-represented in MBERT's pre-training corpus. However, the size of the pre-training corpora is correlated to a lesser extent with ∆ALLLANGS than WALS similarity, suggesting that the latter has a more prominent role than the former. There is no significant correlation between ∆MAPPEDLANGS and typological similarity; the embeddings of diverse languages can be similarly well mapped into a shared space. Notably, we observe that some languages with lower performance of IN-LANG probes can benefit from mapping (e.g., Slovene, Finnish, and Basque in lexical depth). We view it as a benefit of cross-lingual transfer from more resourceful languages.

Zero-shot Parsing: For all languages except Finnish in the zero-shot configuration, our ALLLANGS approach is better than other works that utilize a biaffine parser (Dozat and Manning, 2017) on top of MBERT representations, as shown in Table 2. Without any supervision, our MAPPEDLANGS approach performs poorly because the mapping cannot be learned effectively. When some annotated data is added to the training, the difference between ALLLANGS and MAPPEDLANGS decreases. We observe that between 100 and 1000 training samples are needed to learn the Orthogonal Transformation effectively. Also, with higher supervision, we observe that the results reported by Lauscher et al. (2020) notably outperform our approach. This outcome was anticipated because they fine-tune MBERT and use a biaffine parser with a larger capacity than a probe. For their approach, the introduction of even small supervision is more advantageous than for probing.

Conclusions

We propose an effective way to multilingually probe for syntactic dependency (UD) and lexical hypernymy (WordNet). Our algorithm learns probes for multiple tasks and multiple languages jointly. The formulation of the Orthogonal Structural Probe allows learning a cross-lingual transformation based on monolingual supervision.
Our comparative evaluation indicates that the evaluated information is similarly distributed in MBERT's representations for languages typologically similar to English: Spanish, French, and Finnish. We show that aligning the embeddings with the Orthogonal Transformation improves the results for the other examined languages, suggesting that the representations are isomorphic. We show that the probe can be utilized in zero- and few-shot parsing. The method achieves better UAS results for Chinese, Slovene, Basque, and Arabic in a zero-shot setting than previous approaches, which use a more complex biaffine parser.

Limitations

In our choice of languages, we wanted to ensure diversity. Nevertheless, four of the analyzed languages belong to the Indo-European family, which could facilitate finding a shared encoding subspace for those languages.

Acknowledgments

We thank anonymous EMNLP reviewers for their valuable comments and suggestions for improvement. This work has been supported by grant 338521 of the Charles University Grant Agency and by the Progress Q48 grant of Charles University. We have been using language resources and tools developed, stored, and distributed by the LINDAT/CLARIAH-CZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101).

In Fig. 2, we present typological similarities between languages. Based on Fig. 3, we observe that typological similarity to languages related to English (Spanish, Finnish, French) is correlated with ∆ALLLANGS. Moreover, the correlation between similarity to these languages and the number of tokens in Wikipedia is smaller than for English. It supports our claim that typological similarity is more important for the uniformity assumption than the size of the pre-training corpus.

B Pre-training corpus size

Sizes of Wikipedia in the eight analyzed languages are presented in Table 3.

C Datasets

In Table 4 we aggregate all the datasets used in our experiments.
D Information separation

In line with the findings of Limisiewicz and Mareček (2021), we have observed that in the multilingual setting Orthogonal Structural Probes disentangle the subspaces responsible for encoding lexical and dependency structures.

E.1 Number of Parameters

A Scaling Vector for each of the 4 objectives has size 768 × 1, and an Orthogonal Transformation for each language is a matrix of size 768 × 768. In MAPPEDLANGS, our largest memory-wise setting, we train 8 Orthogonal Transformations. In this configuration, our probe has 4,721,664 parameters.

E.2 Computation Time

We optimized probes on a GPU core GeForce GTX 1080 Ti. Training a probe in the MAPPEDLANGS configuration takes about 3 hours.

F Supplementary Results

F.1 UUAS results

Table 6 contains the results for undirected dependency trees. We use the same probing setting as in Section 3.2 without assigning directions to the edges. Similarly to Chi et al. (2020), we exclude punctuation from the evaluation.

F.2 Validation Results

In Table 7, we present the validation results corresponding to the test results in Table 1 of the main paper.

Figure 2: Typological (WALS) similarities between languages. Dependency similarities in the upper-right triangle and lexical similarities in the lower-left triangle.

Figure 3: Pearson's correlation between WALS similarity to a specific language and ∆ALLLANGS, and the number of tokens in Wikipedia. "IE avg." stands for average similarity to the analyzed Indo-European languages.

[Fragment of the Table 4 dataset listing: ...et al. (2011); Basque BDT, M. et al. (2015); Multilingual Central Repository, Pociello et al. (2011); Slovene SSJ, Dobrovoljc et al. (2017); sloWNet, Fišer et al. (2012)]

Table 1: Spearman's correlation between gold and predicted depths and distances. ∆ denotes the differences from IN-LANG results. Each of our results is an average of 6 randomly initialized probing experiments. Statistically significant differences are circled. The three last columns present averages for Indo-European, Non-Indo-European, and all languages.
The evaluation is not zero-shot; we use data in a target language. Correlations for dependency distance are compared with Standard Structural Probes reported by Chi et al. (2020).

Approach     EN    ES    SL    ID    ZH    FI    AR    FR    EU  | I-E  N-I-E   All

Dependency Distance (Spearman's correlation)
IN-LANG     .812  .858  .857  .841  .830  .788  .838  .856  .769 | .846  .813  .828
Chi et al.  .817  .859   -    .807  .777  .812  .822  .864   -   | .847  .805  .823
∆ MAPPEDL   .000 -.001  .001 -.003  .000  .001 -.001 -.002  .001 |-.001  .000  .000
∆ ALLL      .000 -.007 -.006 -.013 -.039  .000 -.027 -.006 -.032 |-.005 -.022 -.015
Chi et al. -.011 -.011   -   -.018 -.060 -.010 -.037 -.011   -   |-.011 -.031 -.023

Dependency Depth (Spearman's correlation)
IN-LANG     .843  .868  .867  .855  .844  .822  .865  .877  .797 | .864  .837  .849
∆ MAPPEDL  -.004 -.003 -.002 -.002  .000 -.002  .001 -.002 -.001 |-.002 -.001 -.002
∆ ALLL     -.006 -.007 -.008 -.011 -.035 -.005 -.031 -.010 -.031 |-.008 -.023 -.016

Lexical Distance (Spearman's correlation)
IN-LANG     .756  .841  .639  .719  .800  .657  .733  .794  .679 | .757  .717  .735
∆ MAPPEDL  -.003  .005 -.011 -.001  .010  .001  .042  .001 -.008 |-.002  .009  .004
∆ ALLL     -.038 -.025 -.042 -.051 -.014 -.043  .025 -.013 -.063 |-.030 -.029 -.030

Lexical Depth (Spearman's correlation)
IN-LANG     .853  .881  .779  .852  .875  .784  .906  .844  .842 | .839  .850  .845
∆ MAPPEDL   .004 -.005  .013 -.011  .006  .023 -.024  .007  .021 | .004  .005  .005
∆ ALLL     -.027 -.048 -.040 -.124 -.068 -.006 -.305 -.032 -.020 |-.037 -.103 -.079

Figure 1: Pearson's correlation between results from Table 1 for each language and two language-specific features: typological similarity to English and number of tokens in Wikipedia. Correlations for dependency probes are in the upper-right triangle and for lexical probes in the lower-left triangle. [Numeric cell values of the figure omitted here.]

Table 2: UAS of extracted dependency trees. Our two approaches are compared to the previous works that use a biaffine parser (Lauscher et al., 2020; Wang et al., 2019).
We probed the representations of the 7th layer. *): fine-tuning of MBERT is used. **): the multilingual dictionary is used to align the embeddings. (Table 5.)

Footnote: English is especially over-represented in the pre-trained corpus.

Table 3: The number of articles and tokens in Wikipedia for the analyzed languages. The data come from https://github.com/mayhewsw/multilingual-data-stats/tree/main/wiki

E Probing setup

We use the same setup for training the Orthogonal Structural Probe as Limisiewicz and Mareček (2021), i.e. the Adam optimizer (Kingma and Ba, 2014), initial learning rate 0.02, and learning rate decay. We use Double Soft Orthogonality Regularization to coerce orthogonality of the matrix V.

[Data underlying Figure 3, Pearson's correlations per language:]

(a) Dependency
          EN    ES    SL    ID    ZH    FI    AR    FR    EU  IE avg.
∆ AllL   0.65  0.53  0.20 -0.04 -0.38  0.66  0.27  0.69 -0.24  0.73
TOKENS   0.60  0.19  0.22 -0.15 -0.20  0.12  0.12  0.40 -0.21  0.49

(b) Lexical
          EN    ES    SL    ID    ZH    FI    AR    FR    EU  IE avg.
∆ AllL   0.59  0.51  0.66  0.20  0.09  0.47 -0.22  0.47  0.37  0.68
TOKENS   0.69  0.27  0.15 -0.11 -0.13  0.01 -0.04  0.37 -0.20  0.45

Table 4: The datasets used for training dependency and lexical probes.

             DEP Depth  DEP Dist.  LEX Depth  LEX Dist.
DEP Depth        98        65          1          0
DEP Dist.                 142          0          0
LEX Depth                              22         13
LEX Dist.                                         58

Table 5: The number of shared dimensions selected by the Scaling Vector after the joint training of the probe in the MAPPEDLANGS setting on top of the 7th layer.

   N   Approach    ZH     EU     SL     FI     AR
   0   Chi et al. 51.30    -      -    70.70  70.40
   0   MAPPEDL    39.99  46.96  41.58  43.91  40.95
   0   ALLL       57.82  64.59  75.06  68.70  68.70
  10   MAPPEDL    42.37  47.06  41.07  46.38  36.81
  10   ALLL       58.06  64.65  75.30  69.06  68.59
  50   MAPPEDL    51.64  56.67  59.34  53.53  57.77
  50   ALLL       58.73  65.18  74.99  69.08  68.81
 100   MAPPEDL    62.36  62.44  64.51  57.95  62.36
 100   ALLL       68.71  66.00  75.16  68.97  68.71
1000   MAPPEDL    66.43  70.50  76.10  67.08  68.85
1000   ALLL       62.36  68.60  76.79  69.73  69.57

Table 6: UUAS of extracted dependency trees in the zero- and few-shot setting. The result of the Structural Probe reported by Chi et al. (2020) is given for reference.
Footnotes:
- English, Spanish, Slovene, Indonesian, Chinese, Finnish, Arabic, French, and Basque.
- Reformulation of an Orthogonal Depth Probe is analogical.
- A list of all the datasets used in this work can be found in the Appendix.
- In this work, we consider all the features in the areas Nominal Categories, Verb Categories, and Lexicon for computing a lexical typological similarity, and features in the areas Nominal Syntax, Word Order, Simple Clauses, and Complex Sentences as a syntactic typological similarity. Each area contains multiple typological features.

Table 7: Validation Spearman's correlation between gold and predicted depths and distances. We probe the representations of the 7th layer for dependency information and representations of the 5th layer for lexical information.

References

George A. Miller. 1995. WordNet: A lexical database for English. Commun. ACM, 38(11):39-41.

Nurril Hirfana Mohamed Noor, Suerya Sapuan, and Francis Bond. 2011. Creating the open Wordnet Bahasa. In Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation (PACLIC 25), pages 258-267, Singapore.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana. Association for Computational Linguistics.

Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.

Elisabete Pociello, Eneko Agirre, and Izaskun Aldezabal. 2011. Methodology and construction of the Basque wordnet. Language Resources and Evaluation, 45(2):121-142.

Benoît Sagot and Darja Fišer. 2008. Building a free French wordnet from multilingual resources. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), Marrakech, Morocco.

Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).

Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Otakar Smrž, Viktor Bielický, Iveta Kouřilová, and Jakub Kráčmar Zemánek. 2008. Dependency treebank: A word on the million words.

Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788, Melbourne, Australia. Association for Computational Linguistics.

Mariona Taulé, Maria Antònia Martí, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May-1 June 2008, Marrakech, Morocco. European Language Resources Association.

Ivan Vulić, Sebastian Ruder, and Anders Søgaard. 2020. Are all good word vector spaces isomorphic? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3178-3192, Online. Association for Computational Linguistics.

Shan Wang and Francis Bond. 2013. Building the Chinese open wordnet (COW): Starting from core synsets. In Sixth International Joint Conference on Natural Language Processing, pages 10-18.

Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual BERT transformation for zero-shot dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5721-5727, Hong Kong, China. Association for Computational Linguistics.

Shijie Wu and Mark Dredze. 2020. Do explicit alignments robustly improve multilingual encoders? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4471-4482, Online. Association for Computational Linguistics.

Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934-1945, Copenhagen, Denmark. Association for Computational Linguistics.
[ "https://github.com/Tom556/", "https://github.com/mayhewsw/" ]
[ "On Balanced Games with Infinitely Many Players: Revisiting Schmeidler's Result", "On Balanced Games with Infinitely Many Players: Revisiting Schmeidler's Result" ]
[ "David Bartl [email protected] \nDepartment of Informatics and Mathematics\nSchool of Business Administration in Karviná\nCorvinus Center for Operational Research\nInstitute of Advanced Studies\nUniversity in Opava\nCorvinus University of Budapest\n\n", "Miklós Pintér \nDepartment of Informatics and Mathematics\nSchool of Business Administration in Karviná\nCorvinus Center for Operational Research\nInstitute of Advanced Studies\nUniversity in Opava\nCorvinus University of Budapest\n\n" ]
[ "Department of Informatics and Mathematics\nSchool of Business Administration in Karviná\nCorvinus Center for Operational Research\nInstitute of Advanced Studies\nUniversity in Opava\nCorvinus University of Budapest\n", "Department of Informatics and Mathematics\nSchool of Business Administration in Karviná\nCorvinus Center for Operational Research\nInstitute of Advanced Studies\nUniversity in Opava\nCorvinus University of Budapest\n" ]
[]
We consider transferable utility cooperative games with infinitely many players and the core understood in the space of bounded additive set functions. We show that, if a game is bounded below, then its core is non-empty if and only if the game is balanced. This finding is a generalization of Schmeidler's (1967) original result "On Balanced Games with Infinitely Many Players", where the game is assumed to be non-negative. We furthermore demonstrate that, if a game is not bounded below, then its core might be empty even though the game is balanced; that is, our result is tight. We also generalize Schmeidler's (1967) result to the case of restricted cooperation.
10.1016/j.orl.2023.01.011
[ "https://export.arxiv.org/pdf/2207.14672v1.pdf" ]
251,196,996
2207.14672
1d4ca12d1f91badfb6264d2f9c780647b7138cfb
On Balanced Games with Infinitely Many Players: Revisiting Schmeidler's Result

Jul 2022

David Bartl ([email protected]) and Miklós Pintér: Department of Informatics and Mathematics, School of Business Administration in Karviná, University in Opava; Corvinus Center for Operational Research, Institute of Advanced Studies, Corvinus University of Budapest

Keywords: TU games with infinitely many players; core; balancedness; TU games with restricted cooperation; signed TU games; bounded additive set functions
2020 Mathematics Subject Classification: 91A12, 91A07
JEL Classification: C71

We consider transferable utility cooperative games with infinitely many players and the core understood in the space of bounded additive set functions. We show that, if a game is bounded below, then its core is non-empty if and only if the game is balanced. This finding is a generalization of Schmeidler's (1967) original result "On Balanced Games with Infinitely Many Players", where the game is assumed to be non-negative. We furthermore demonstrate that, if a game is not bounded below, then its core might be empty even though the game is balanced; that is, our result is tight. We also generalize Schmeidler's (1967) result to the case of restricted cooperation.

Introduction

The core (Shapley, 1955; Gillies, 1959) is one of the most important solution concepts of cooperative game theory. It is important not only from the theory viewpoint; for its simple and easy-to-understand nature, it also helps to solve various problems arising in practice.
In the transferable utility setting (henceforth TU games), the Bondareva-Shapley Theorem (Bondareva, 1963; Shapley, 1967; Faigle, 1989) provides a necessary and sufficient condition for the non-emptiness of the core of a finite TU game: the core of a finite TU game, with or without restricted cooperation, is non-empty if and only if the TU game is balanced. The textbook proof of the Bondareva-Shapley Theorem goes by the strong duality theorem of linear programming, see e.g. Peleg and Sudhölter (2007). Schmeidler (1967), Kannai (1969, 1992), and Pintér (2011), among others, considered TU games with infinitely many players. All these papers studied the case where the core consists of bounded additive set functions. Schmeidler (1967) and Kannai (1969) showed, respectively, that the core of a non-negative TU game with infinitely and countably infinitely many players is non-empty if and only if the TU game is balanced. In this paper we consider infinite signed TU games (sign-unrestricted TU games with infinitely many players) with and without restricted cooperation. Particularly, we follow Schmeidler (1967) and assume that the allocations are bounded additive set functions. Applications of infinite signed TU games go back in time at least as early as Shapley and Shubik (1969b) (economic systems with externalities), which generalize market games (Shapley and Shubik, 1969a). Further applications are (semi-)infinite transportation games (Sanchez-Soriano et al, 2002; Timmer and Llorca, 2002), infinite sequencing games (Fragnelli et al, 2010), and, somewhat less directly, the line of literature represented by e.g. Montrucchio and Semeraro (2008), among others. While we can analyze the non-emptiness of the core in the finite setting by using the aforementioned Bondareva-Shapley Theorem (Bondareva, 1963; Shapley, 1967; Faigle, 1989), we have been missing an appropriate tool for such TU games with infinitely many players.
Our contribution is an extension of Schmeidler's (1967) result, stating that a non-negative infinite TU game without restricted cooperation has a non-empty core if and only if it is balanced, to the general case: a bounded below infinite TU game with or without restricted cooperation has a non-empty core if and only if it is balanced (Theorems 4 and 8). It is worth mentioning that neither Schmeidler's (1967) nor Kannai's (1969, 1992) approach (proof) can be applied to achieve our generalization (Theorems 4 and 8); our approach is different from the previous ones. The set-up of this paper is as follows. In Sections 2 and 3, we introduce basic notions of TU games with infinitely many players, including the core and balancedness, and we present our main result (Theorem 4). In Sections 4 and 5, we recall some useful concepts pertaining to function spaces, topology and compactness, and we prove our main result. We additionally give examples to show the tightness of our main result, and we also mention an interesting "limiting" property of the core. Finally, in Section 6, we discuss the case of restricted cooperation and give our second main result (Theorem 8).

Preliminaries of infinite TU games

We consider transferable utility cooperative games with a finite or infinite set $N$ of players. A coalition is a subset $S \subseteq N$, so the power set $\mathcal{P}(N) = \{\, S : S \subseteq N \,\}$ is the collection of all coalitions that can be considered. Let $\mathcal{A} \subseteq \mathcal{P}(N)$ be the collection of all feasible coalitions, which are those that can potentially emerge. In the case of no restricted cooperation, we assume that $\mathcal{A}$ is a field of sets over $N$; that is, the collection $\mathcal{A}$ is such that $\emptyset \in \mathcal{A}$ and, if $S, T \in \mathcal{A}$, then $N \setminus S \in \mathcal{A}$ and also $S \cup T \in \mathcal{A}$. In the case of restricted cooperation, we assume only that $\emptyset, N \in \mathcal{A}$. A transferable utility cooperative game (henceforth game, for short) is then represented by its coalition function, which is a mapping $v : \mathcal{A} \to \mathbb{R}$ such that $v(\emptyset) = 0$.
For any coalition $S \in \mathcal{A}$, the value $v(S)$ is understood as the payoff that the coalition $S$ receives if it is formed. Assume that the players form the grand coalition $N \in \mathcal{A}$. Then $v(N) \in \mathbb{R}$ is the value of the grand coalition $N$, and the issue is to allocate this value among the players. Following Schmeidler (1967), we define the allocations as bounded additive set functions $\mu : \mathcal{A} \to \mathbb{R}$; that is, functions such that $|\mu(S)| \le C$ for all $S \in \mathcal{A}$ for some constant $C \in \mathbb{R}$, and $\mu(S \cup T) = \mu(S) + \mu(T)$ for any disjoint $S, T \in \mathcal{A}$. Let
$$ba(\mathcal{A}) = \{\, \mu : \mathcal{A} \to \mathbb{R} \;:\; \mu \text{ is a bounded additive set function} \,\}$$
denote the space of all bounded additive set functions on $\mathcal{A}$. Then the core of the game $v$ is the set
$$\text{ba-core}(v) = \{\, \mu \in ba(\mathcal{A}) \;:\; \mu(N) = v(N),\ \mu(S) \ge v(S) \text{ for all } S \in \mathcal{A} \setminus \{N\} \,\}.$$
In words, the core consists of all the allocations of the value $v(N)$ among the players (efficiency) such that any coalition $S \in \mathcal{A} \setminus \{N\}$ that could potentially emerge receives at least as much as its value $v(S)$ under the proposed allocation (coalitional rationality); see Shapley (1955), Gillies (1959), Kannai (1992), and Zhao (2018). It is worth noticing that calling a game whose class of feasible coalitions is a field, even if the field is not the whole power set of the player set, a game without restricted cooperation is not misleading, because any additive set function defined on a subfield can be extended to the power set. Therefore, an allocation from the core of a game without restricted cooperation gives rise (typically in a non-unique way) to an allocation defined on the power set of the player set. The case when the class of feasible coalitions is not a field, however, leads to the very same features of the core as restricted cooperation does in the finite setting (see Faigle (1989)), which explains why we call this case restricted cooperation. The key question is whether the core is non-empty. An answer is provided by the Bondareva-Shapley Theorem.
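For a finite player set, the efficiency and coalitional-rationality conditions defining the core can be verified directly by enumeration. The following sketch is our illustration only (the function names and the numeric game are ours, not the paper's); it checks whether a candidate payoff vector lies in the core of a finite TU game given by its coalition function:

```python
from itertools import combinations

def coalitions(n):
    """All non-empty subsets of {0, ..., n-1} as frozensets."""
    players = range(n)
    for r in range(1, n + 1):
        for c in combinations(players, r):
            yield frozenset(c)

def in_core(v, a, tol=1e-9):
    """Check mu(N) = v(N) (efficiency) and mu(S) >= v(S) for every proper
    coalition S (coalitional rationality), where mu(S) = sum of a[i], i in S."""
    n = len(a)
    grand = frozenset(range(n))
    if abs(sum(a) - v[grand]) > tol:          # efficiency
        return False
    return all(sum(a[i] for i in S) >= v[S] - tol
               for S in coalitions(n) if S != grand)

# A 3-player example: v of any singleton = 0, of any pair = 0.6, v(N) = 1.
v = {S: 0.0 for S in coalitions(3)}
for pair in combinations(range(3), 2):
    v[frozenset(pair)] = 0.6
v[frozenset({0, 1, 2})] = 1.0

print(in_core(v, [1/3, 1/3, 1/3]))   # every pair receives 2/3 >= 0.6
print(in_core(v, [0.8, 0.1, 0.1]))   # the pair {1, 2} receives only 0.2 < 0.6
```

The equal split satisfies every coalition, while the lopsided allocation violates coalitional rationality for one pair, so only the first vector is a core element.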
The Bondareva-Shapley Theorem

Consider a game having finitely many players and no restricted cooperation. In this case, we have $N = \{1, 2, \ldots, n\}$ for some natural number $n$ and $\mathcal{A} = \mathcal{P}(N)$. Moreover, in this setting, the allocations of the value $v(N)$ among the players are given by payoff vectors; each of them is an $n$-tuple $a = (a_i)_{i=1}^{n} \in \mathbb{R}^N$ of real numbers, where $a_i$ is the payoff allocated to player $i$ for $i = 1, 2, \ldots, n$. The core of such a game is defined to be the set
$$\text{core}(v) = \Big\{\, a \in \mathbb{R}^N : \sum_{i \in N} a_i = v(N),\ \sum_{i \in S} a_i \ge v(S) \text{ for all } S \in \mathcal{P}(N) \setminus \{N\} \,\Big\}.$$
The intuitive meaning of the core$(v)$ is the same as that of the ba-core$(v)$ above. Clearly, given a payoff vector $a \in \mathbb{R}^N$, we can define the corresponding additive set function $\mu : \mathcal{P}(N) \to \mathbb{R}$ by $\mu(S) = \sum_{i \in S} a_i$ for any $S \in \mathcal{P}(N)$. Conversely, given an additive set function $\mu : \mathcal{P}(N) \to \mathbb{R}$, we can define the corresponding payoff vector $a \in \mathbb{R}^N$ by $a_i = \mu(\{i\})$ for $i = 1, 2, \ldots, n$. Here any additive set function $\mu : \mathcal{P}(N) \to \mathbb{R}$ is bounded, as the number of players is finite. We thus have a one-to-one correspondence between the ba-core$(v)$ and the core$(v)$. Hence the notion of a bounded additive set function $\mu \in ba(\mathcal{A})$ naturally extends the concept of the payoff vector $a \in \mathbb{R}^N$ when the set $N$ of players is infinite. Regarding the question whether the core$(v)$ is non-empty, for any coalition $S \subseteq N$, define its characteristic vector to be the row vector $\chi_S = \big( \chi_S(1)\ \chi_S(2)\ \cdots\ \chi_S(n) \big)$ with $\chi_S(i) = 1$ if $i \in S$, and $\chi_S(i) = 0$ if $i \notin S$, for $i = 1, 2, \ldots, n$. We say that a collection $\mathcal{S} = \{S_1, S_2, \ldots, S_r\} \subseteq \mathcal{P}(N)$ of coalitions is balanced if there exist non-negative real numbers $\lambda_1, \lambda_2, \ldots, \lambda_r$, called balancing weights, such that
$$\sum_{p=1}^{r} \lambda_p \chi_{S_p} = \chi_N. \tag{1}$$
Moreover, we say that a game $v$ is balanced if
$$\sum_{p=1}^{r} \lambda_p v(S_p) \le v(N) \tag{2}$$
for every balanced collection $\{S_1, S_2, \ldots, S_r\} \subseteq \mathcal{P}(N)$ of coalitions.
The following result, due to Bondareva (1963) and Shapley (1967) and later extended by Faigle (1989) to the restricted cooperation case, has become classical:

Theorem 1 (Bondareva-Shapley Theorem). Consider a game with finitely many players, with or without restricted cooperation, represented by a coalition function $v : \mathcal{P}(N) \to \mathbb{R}$. Then the core$(v)$ is non-empty if and only if the game is balanced.

Consider now a general game without restricted cooperation; that is, the set $N$ of players can be finite or infinite and the class of feasible coalitions $\mathcal{A} \subseteq \mathcal{P}(N)$ is a field of sets over $N$. Concerning the question whether the ba-core$(v)$ is non-empty, we follow Schmeidler (1967), who proceeds analogously to the classical case. For any subset $S \subseteq N$, define its characteristic function $\chi_S : N \to \{0, 1\}$ by letting $\chi_S(i) = 1$ if $i \in S$, and $\chi_S(i) = 0$ if $i \notin S$, for every $i \in N$. We say that a collection $\mathcal{S} = \{S_1, S_2, \ldots, S_r\} \subseteq \mathcal{A}$ of coalitions is balanced if there exist non-negative real numbers $\lambda_1, \lambda_2, \ldots, \lambda_r$, called balancing weights, such that
$$\sum_{p=1}^{r} \lambda_p \chi_{S_p} = \chi_N. \tag{3}$$
Furthermore, we say that a game $v$ is balanced if
$$\sum_{p=1}^{r} \lambda_p v(S_p) \le v(N) \tag{4}$$
for every balanced collection $\{S_1, S_2, \ldots, S_r\} \subseteq \mathcal{A}$ of coalitions.

Remark 2. Schmeidler (1967) actually defines balancedness in a slightly different way: "A game is balanced if $\sup \sum_i a_i v(A_i) \le v(S)$ when the sup is taken over all finite sequences of $a_i$ and $A_i$, where the $a_i$ are non-negative numbers, the $A_i$ are in $\Sigma$, and $\sum_i a_i \chi_{A_i} \le \chi_S$." Considering non-negative games, Schmeidler explains that his definition differs from the "definition with equality" only in its form: "It is easy to verify that this sup does not change even if it is constrained by $\sum_i a_i \chi_{A_i} = \chi_S$ (instead of the inequality); also, for balanced games, the sup equals $v(S)$." See Schmeidler (1967, p. 1).
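Theorem 1 can be illustrated computationally. For three players, the minimal balanced collections are known explicitly (the singleton partition, the three pair-plus-singleton partitions with weights 1, and the three pairs with weights 1/2 each), so condition (2) reduces to finitely many inequalities. The sketch below is our illustration, not the paper's; the helper names are ours:

```python
def balanced_3(v):
    """Condition (2) for a 3-player game v (frozensets over {1, 2, 3} to
    reals): it suffices to test the minimal balanced collections."""
    s = lambda *xs: frozenset(xs)
    vN = v[s(1, 2, 3)]
    return all([
        v[s(1)] + v[s(2)] + v[s(3)] <= vN,          # singleton partition
        v[s(1, 2)] + v[s(3)] <= vN,                 # pair + singleton
        v[s(1, 3)] + v[s(2)] <= vN,
        v[s(2, 3)] + v[s(1)] <= vN,
        (v[s(1, 2)] + v[s(1, 3)] + v[s(2, 3)]) / 2 <= vN,  # weights 1/2
    ])

def make_game(pair_value):
    """Symmetric game: singletons worth 0, pairs worth pair_value, v(N) = 1."""
    s = lambda *xs: frozenset(xs)
    return {s(1): 0.0, s(2): 0.0, s(3): 0.0,
            s(1, 2): pair_value, s(1, 3): pair_value, s(2, 3): pair_value,
            s(1, 2, 3): 1.0}

print(balanced_3(make_game(0.6)))   # 3 * 0.6 / 2 = 0.9 <= 1
print(balanced_3(make_game(0.8)))   # 3 * 0.8 / 2 = 1.2 >  1
```

With pair value 0.6 the game is balanced, and indeed $(1/3, 1/3, 1/3)$ lies in its core; with pair value 0.8 the three-pair collection violates (2), so by Theorem 1 the core is empty.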
In the case of non-negative games, Schmeidler's definition of balancedness is equivalent to the "definition with equality"; in the general, signed case, however, the two are different. Schmeidler (1967) then proves the following result; see Kannai (1969) for another proof:

Theorem 3 (Bondareva-Shapley Theorem, Schmeidler (1967)). Given a finite or infinite set $N$ of players and a field of sets $\mathcal{A} \subseteq \mathcal{P}(N)$ over $N$, consider a game represented by a coalition function $v : \mathcal{A} \to \mathbb{R}$. If the game is non-negative, that is,
$$\forall S \in \mathcal{A} : v(S) \ge 0,$$
then the ba-core$(v)$ is non-empty if and only if the game is balanced.

It is easy to see that Theorem 3 is a generalization of Theorem 1 if the game is non-negative. Our goal, nonetheless, is to establish the following result:

Theorem 4 (Bondareva-Shapley Theorem, a generalization). Given a finite or infinite set $N$ of players and a field of sets $\mathcal{A} \subseteq \mathcal{P}(N)$ over $N$, consider a game represented by a coalition function $v : \mathcal{A} \to \mathbb{R}$. If the game is bounded below, that is,
$$\exists L \in \mathbb{R}\ \forall S \in \mathcal{A} : v(S) \ge L,$$
then the ba-core$(v)$ is non-empty if and only if the game is balanced.

Notice that Theorem 4 directly generalizes both Theorems 1 and 3, because a game with finitely many players is always bounded below. Before we present our proof of Theorem 4, we find it appropriate to introduce and recall several notions and concepts.

Several notions and concepts

Let $N$ be a set and let $\mathcal{A} \subseteq \mathcal{P}(N)$ be a field of sets over $N$. Then the pair $(N, \mathcal{A})$ is called a chargeable space. Recall that, for any $S \subseteq N$, the symbol $\chi_S$ denotes the characteristic function $\chi_S : N \to \{0, 1\}$ of the set $S$. Given a function $f : N \to \mathbb{R}$, we say it is a simple function if $f = \lambda_1 \chi_{S_1} + \lambda_2 \chi_{S_2} + \cdots + \lambda_r \chi_{S_r}$ for some natural number $r$, some real numbers $\lambda_1, \lambda_2, \ldots, \lambda_r$, and some sets $S_1, S_2, \ldots, S_r \in \mathcal{A}$. Let $\Lambda(\mathcal{A}) = \{\, f : N \to \mathbb{R} : f \text{ is a simple function} \,\}$ denote the vector (i.e. linear) space of all simple functions defined over $(N, \mathcal{A})$, where the sum of two functions and the multiplication of a function by a constant are both defined in the usual way, i.e. pointwise. For a simple function $f \in \Lambda(\mathcal{A})$, define its norm to be $\|f\| = \sup_{i \in N} |f(i)|$, so $\Lambda(\mathcal{A})$ is a normed linear space. Likewise, notice that the space $ba(\mathcal{A})$ of all bounded additive set functions on $\mathcal{A}$ is also a vector space; for a $\mu \in ba(\mathcal{A})$, define its norm to be
$$\|\mu\| = \sup_{\substack{r \in \mathbb{N};\ S_1, S_2, \ldots, S_r \in \mathcal{A} \\ S_1 \cup S_2 \cup \cdots \cup S_r = N \\ S_i \cap S_j = \emptyset,\ i \ne j}} \big( |\mu(S_1)| + |\mu(S_2)| + \cdots + |\mu(S_r)| \big). \tag{5}$$
It is well known that the topological dual $(\Lambda(\mathcal{A}))^*$ of the vector space $\Lambda(\mathcal{A})$, that is, the space of all continuous linear functionals on $\Lambda(\mathcal{A})$, is isometrically isomorphic to the space $ba(\mathcal{A})$ (see, e.g., Dunford and Schwartz (1958), Theorem IV.5.1, p. 258). Indeed, a continuous linear functional $\mu' \in (\Lambda(\mathcal{A}))^*$ induces a bounded additive set function $\mu \in ba(\mathcal{A})$ by letting $\mu(S) = \mu'(\chi_S)$ for $S \in \mathcal{A}$, and, conversely, a bounded additive set function $\mu \in ba(\mathcal{A})$ induces a continuous linear functional $\mu' \in (\Lambda(\mathcal{A}))^*$ by letting
$$\mu'(f) = \lambda_1 \mu(S_1) + \lambda_2 \mu(S_2) + \cdots + \lambda_r \mu(S_r) \tag{6}$$
for any simple function $f = \lambda_1 \chi_{S_1} + \lambda_2 \chi_{S_2} + \cdots + \lambda_r \chi_{S_r} \in \Lambda(\mathcal{A})$. This is the reason why, for simplicity, we shall identify the space $(\Lambda(\mathcal{A}))^*$ with $ba(\mathcal{A})$.

Consider now a game represented by a coalition function $v : \mathcal{A} \to \mathbb{R}$, and let the game be bounded below; that is, there exists a constant $L \in \mathbb{R}$ such that $v(S) \ge L$ for all $S \in \mathcal{A}$. Assume that $\mu \in \text{ba-core}(v)$. Let $S_1, S_2, \ldots, S_r \in \mathcal{A}$ be pairwise disjoint and such that $N = S_1 \cup S_2 \cup \cdots \cup S_r$. Writing $Q = \bigcup_{p :\, \mu(S_p) < 0} S_p$, so that $N \setminus Q = \bigcup_{p :\, \mu(S_p) \ge 0} S_p$, we have
$$\sum_{p=1}^{r} |\mu(S_p)| = \sum_{p :\, \mu(S_p) \ge 0} \mu(S_p) - \sum_{p :\, \mu(S_p) < 0} \mu(S_p) = \mu(N \setminus Q) - \mu(Q) = \mu(N) - 2\mu(Q) \le \mu(N) - 2v(Q) \le \mu(N) - 2L = v(N) - 2L.$$
By taking the definition (5) of the norm into account, it follows that the ba-core$(v)$ is contained in the closed ball $B_R = \{\, \mu \in ba(\mathcal{A}) : \|\mu\| \le v(N) - 2L \,\}$ of radius $R = v(N) - 2L$. (Notice that $v(N) - 2L \ge 0$, for if we had $v(N) < 2L$, then the ba-core$(v)$ would obviously be empty, contradicting the assumption that $\mu \in \text{ba-core}(v)$.) We endow the space $ba(\mathcal{A})$ with the weak* topology with respect to $\Lambda(\mathcal{A})$.
The topology is introduced by describing all the neighborhoods of a point. A set $U \subseteq ba(\mathcal{A})$ is a weak* neighborhood of a $\mu_0 \in ba(\mathcal{A})$ if there exist a natural number $r$ and functions $f_1, f_2, \ldots, f_r \in \Lambda(\mathcal{A})$ such that
$$\bigcap_{p=1}^{r} \{\, \mu \in ba(\mathcal{A}) : |\mu'(f_p) - \mu_0'(f_p)| < 1 \,\} \subseteq U,$$
where $\mu'$ and $\mu_0'$ are the continuous linear functionals induced by $\mu$ and $\mu_0$, respectively; see (6). By Alaoglu's Theorem (see, e.g., Aliprantis and Border (2006), Theorem 6.21, p. 235), the closed ball $B_R$ is compact in the weak* topology. That is, if $G_i \subseteq ba(\mathcal{A})$ are weakly* open sets for $i \in I$, where $I$ is an index set, such that $\bigcup_{i \in I} G_i \supseteq B_R$, then $\bigcup_{j=1}^{n} G_{i_j} \supseteq B_R$ for some natural number $n$ and some $i_1, i_2, \ldots, i_n \in I$. Let $F_i \subseteq B_R$ be weakly* closed sets for $i \in I$, where $I$ is an index set. We say the collection $\{F_i\}_{i \in I}$ is a centered system of sets if $\bigcap_{j=1}^{n} F_{i_j} \ne \emptyset$ for any natural number $n$ and any $i_1, i_2, \ldots, i_n \in I$. By considering the complements ($G_i = ba(\mathcal{A}) \setminus F_i$) and using the weak* compactness of $B_R$, it follows that $\bigcap_{i \in I} F_i \ne \emptyset$ whenever $\{F_i\}_{i \in I}$ is a centered system. In our proof of Theorem 4 we consider the weakly* closed sets
$$F_S = \{\, \mu \in ba(\mathcal{A}) : \mu(N) = v(N) \text{ and } \mu(S) \ge v(S) \text{ and } \|\mu\| \le R \,\}$$
for $S \in \mathcal{A}$. The main idea is to show that, if the game $v$ is balanced, then the system $\{F_S\}_{S \in \mathcal{A}}$ is centered. Noticing that $\text{ba-core}(v) = \bigcap_{S \in \mathcal{A}} F_S \ne \emptyset$, the proof will be done. We are now ready to present our proof of Theorem 4.

Proof of Theorem 4

Below we give our proof of Theorem 4. The aforegiven notions and concepts are utilized in the proof, and it will be seen that its main ingredient is the use of compactness.

Proof of Theorem 4. Assume that the given coalition function $v : \mathcal{A} \to \mathbb{R}$ is bounded below by $L$. We are to show that $\text{ba-core}(v) \ne \emptyset$ if and only if the given game is balanced. The "only if" part is obvious: assume that $\mu \in \text{ba-core}(v)$ and let $\mathcal{S} = \{S_1, S_2, \ldots, S_r\} \subseteq \mathcal{A}$ be a balanced collection of coalitions, so that (3) holds for some non-negative balancing weights $\lambda_1, \lambda_2, \ldots, \lambda_r$.
Then $\sum_{p=1}^{r} \lambda_p v(S_p) \le \sum_{p=1}^{r} \lambda_p \mu(S_p) = \mu(N) = v(N)$, so (4) is satisfied, and the game is balanced. It remains to prove the "if" part. Pick any sets $S_0, S_1, \ldots, S_n \in \mathcal{A}$. Our purpose is to show that $\bigcap_{j=0}^{n} F_{S_j} \ne \emptyset$. We can assume w.l.o.g. that the sets $S_0, \ldots, S_n$ are distinct with $S_0 = \emptyset$ and $S_n = N$, and that the collection $\{S_0, \ldots, S_n\} \subseteq \mathcal{A}$ is a field of sets. (Roughly speaking, the more sets we pick, the smaller the intersection $\bigcap_{j=0}^{n} F_{S_j}$ is. Having to show the intersection is non-empty anyway, we can include the empty and the grand coalition among the sets. Moreover, we can add further sets from $\mathcal{A}$ so that the collection $\{S_0, \ldots, S_n\}$ becomes a finite field of sets.) We can also assume w.l.o.g. that $S_1, \ldots, S_{n'}$ are all the atoms of the field; that is, they are all the minimal elements in the collection $\{S_1, \ldots, S_n\}$. Obviously, the atoms $S_1, \ldots, S_{n'}$ are pairwise disjoint, and it holds that $n = 2^{n'} - 1$. Now, the sets $\emptyset = S_0, S_1, \ldots, S_n$ being fixed, we apply balancedness to the sets $S_1, \ldots, S_n$:
$$\forall \lambda_1, \ldots, \lambda_n \ge 0 : \quad \lambda_1 \chi_{S_1} + \cdots + \lambda_n \chi_{S_n} = \chi_N \implies \lambda_1 v(S_1) + \cdots + \lambda_n v(S_n) \le v(N). \tag{7}$$
It shall follow hence that the system of relations
$$\mu(N) = v(N), \quad \mu(S_1) \ge v(S_1), \quad \ldots, \quad \mu(S_n) \ge v(S_n) \tag{8}$$
has a solution $\mu \in ba(\mathcal{A})$ such that $\|\mu\| \le R$. To see that, we apply the Bondareva-Shapley Theorem for finite games (Theorem 1). Consider a new finite game $v' : \mathcal{P}(N') \to \mathbb{R}$ with the set of players $N' = \{1, \ldots, n'\}$, defined as follows. Recall first that the collection $\{S_0, \ldots, S_n\}$ is a field of sets and that $S_1, \ldots, S_{n'}$ are all its atoms, which are pairwise disjoint. Now, for an $S' \subseteq N'$, let $S = \bigcup_{i' \in S'} S_{i'}$, notice that $S \in \{S_0, \ldots, S_n\}$, and put $v'(S') = v(S)$. The new finite game $v'$ is defined thus. Now, condition (7) equivalently says that the new game $v'$ is balanced.
By the Bondareva-Shapley Theorem (Theorem 1), its core is non-empty: there exist $a_1, \ldots, a_{n'} \in \mathbb{R}$ such that $\sum_{i'=1}^{n'} a_{i'} = v'(N')$ and $\sum_{i' \in S'} a_{i'} \ge v'(S')$ for any $S' \subseteq N'$. The atoms $S_1, \ldots, S_{n'}$ being non-empty sets, there exist elements $x_{i'} \in S_{i'}$ for $i' = 1, \ldots, n'$. Consider the measure $\mu = a_1 \delta_{x_1} + \cdots + a_{n'} \delta_{x_{n'}}$, where $\delta_{x_{i'}}$ is the Dirac measure concentrated at $x_{i'}$. We have $\mu(N) = \mu(S_1 \cup \cdots \cup S_{n'}) = a_1 + \cdots + a_{n'} = v(N)$. For any $j = 1, \ldots, n$, let $S_j' = \{\, i' \in N' : S_{i'} \subseteq S_j \,\}$. Then $S_j = \bigcup_{i' \in S_j'} S_{i'}$, and $\mu(S_j) = \sum_{i' \in S_j'} a_{i'} \ge v'(S_j') = v(S_j)$. We have shown thus that $\mu$ is a solution to the system of inequalities (8). Finally, let us calculate the norm $\|\mu\|$ of the solution, see (5). For a $T \in \mathcal{A}$, we observe that $\mu(T) = \sum_{i' :\, x_{i'} \in T} a_{i'}$. Given pairwise disjoint sets $T_1, \ldots, T_s \in \mathcal{A}$ such that $N = T_1 \cup \cdots \cup T_s$, and recalling $\sum_{i'=1}^{n'} a_{i'} = v(N)$, we have
$$\sum_{q=1}^{s} |\mu(T_q)| = \sum_{q=1}^{s} \Big| \sum_{i' :\, x_{i'} \in T_q} a_{i'} \Big| \le \sum_{q=1}^{s} \sum_{i' :\, x_{i'} \in T_q} |a_{i'}| = \sum_{i'=1}^{n'} |a_{i'}| = \sum_{i' :\, a_{i'} \ge 0} a_{i'} - \sum_{i' :\, a_{i'} < 0} a_{i'} = v(N) - 2 \sum_{i' :\, a_{i'} < 0} a_{i'} \le v(N) - 2\, v\Big( \bigcup_{i' :\, a_{i'} < 0} S_{i'} \Big) \le v(N) - 2L = R.$$
It follows that $\|\mu\| \le R$. To conclude, we have a $\mu \in ba(\mathcal{A})$ such that it is a solution to (8) and $\|\mu\| \le R$, which means $\mu \in \bigcap_{j=1}^{n} F_{S_j}$. Since $F_{S_j} \subseteq F_\emptyset$ for $j = 1, \ldots, n$, it holds that $\mu \in \bigcap_{j=0}^{n} F_{S_j}$. We have shown thus that the system $\{F_S\}_{S \in \mathcal{A}}$ is centered. As the closed $R$-ball $B_R$ is weakly* compact, we have $\text{ba-core}(v) = \bigcap_{S \in \mathcal{A}} F_S \ne \emptyset$.

The following example demonstrates that Theorem 4 cannot be generalized further. It presents a game that is not bounded below and is balanced, but its core is empty.

Example 5. Let the player set be $N = \mathbb{N}$, and let $\mathcal{A} = \{\, S \subseteq N : S \text{ is finite or } N \setminus S \text{ is finite} \,\}$.
Consider the game represented by the coalition function $v : \mathcal{A} \to \mathbb{R}$ defined as follows: for any $S \in \mathcal{A}$, let
$$v(S) = \begin{cases} 1 & \text{if } S = \{1\}, \\ 1 + \frac{1}{n} & \text{if } S = \{1, n\} \text{ for } n = 2, 3, \ldots, \\ -\sum_{n \in T} \frac{1}{n} & \text{if } S = N \setminus T \text{ for a finite } T \in \mathcal{A}, \\ 0 & \text{otherwise.} \end{cases}$$
It is easy to see that this game is balanced. Assume that $\mu \in \text{ba-core}(v)$. Then $\mu(\{1\}) \ge v(\{1\}) = 1$ and $\mu(N \setminus \{1\}) \ge v(N \setminus \{1\}) = -1$. Since $\mu(\{1\}) + \mu(N \setminus \{1\}) = \mu(N) = v(N) = 0$, we have $\mu(\{1\}) = 1$. As $\mu(\{1, n\}) \ge v(\{1, n\}) = 1 + 1/n$, it follows that $\mu(\{n\}) \ge 1/n$ for all $n = 2, 3, \ldots$ Summing up, we have $\mu(\{1, \ldots, n\}) \ge 1 + \sum_{k=2}^{n} 1/k \ge \ln(n + 1)$, so $\mu \notin ba(\mathcal{A})$ because $\mu$ is not bounded. It follows that $\text{ba-core}(v) = \emptyset$.

The following "limiting" property of the ba-core is interesting. It is obtained as a corollary of Theorem 4 by considering the balancedness conditions (3) and (4).

Corollary 6. Given a finite or infinite set $N$ of players and a field of sets $\mathcal{A} \subseteq \mathcal{P}(N)$ over $N$, let the game represented by a coalition function $v : \mathcal{A} \to \mathbb{R}$ be bounded below. For any $\varepsilon > 0$, define the coalition function $v_\varepsilon : \mathcal{A} \to \mathbb{R}$ as follows: let $v_\varepsilon(N) = v(N) + \varepsilon$ and $v_\varepsilon(S) = v(S)$ for all $S \in \mathcal{A} \setminus \{N\}$. Then: if $\text{ba-core}(v_\varepsilon) \ne \emptyset$ for all $\varepsilon > 0$, then $\text{ba-core}(v) \ne \emptyset$.

Under the assumptions of Corollary 6, the converse statement is clear: if $\text{ba-core}(v) \ne \emptyset$, then $\text{ba-core}(v_\varepsilon) \ne \emptyset$ for all $\varepsilon > 0$. We thus conclude that a game represented by the coalition function $v$ is balanced if and only if $\text{ba-core}(v_\varepsilon) \ne \emptyset$ for all $\varepsilon > 0$.

Games with restricted cooperation

We now pay attention to games with restricted cooperation. In general, the cooperation is restricted whenever the collection $\mathcal{A} \subseteq \mathcal{P}(N)$ of coalitions that can potentially emerge is a proper subset of $\mathcal{P}(N)$. In this sense, Theorem 4 covers the case of restricted cooperation too, under the additional assumption that $\mathcal{A} \subseteq \mathcal{P}(N)$ is a field of sets over $N$. Now, let $\mathcal{A}' \subseteq \mathcal{P}(N)$ be the collection of all coalitions that can potentially emerge; the collection $\mathcal{A}'$ need not be a field of sets now. We assume only that $\emptyset, N \in \mathcal{A}'$.
Then any coalition function $v' : \mathcal{A}' \to \mathbb{R}$ such that $v'(\emptyset) = 0$ represents a game with restricted cooperation. To introduce the concept of the core of this game with restricted cooperation, let $\mathcal{A} = \text{field}(\mathcal{A}')$ be the field hull of $\mathcal{A}'$; that is, the minimal collection $\mathcal{A} \supseteq \mathcal{A}'$ that is a field of sets over $N$. Then the core of a game $v'$ with restricted cooperation is the set
$$\text{ba-core}(v') = \{\, \mu \in ba(\text{field}(\mathcal{A}')) : \mu(N) = v'(N),\ \mu(S) \ge v'(S) \text{ for all } S \in \mathcal{A}' \setminus \{N\} \,\}.$$
We again ask whether $\text{ba-core}(v') \ne \emptyset$. The following example presents a non-negative game with restricted cooperation which is balanced as by (3) and (4), where only the feasible coalitions are considered, but whose core is empty. Notice that this game is analogous to the one presented in Example 5.

Example 7. Let the player set be $N = \mathbb{N}$ and let $\mathcal{A}' = \{\emptyset\} \cup \{\, \{1, i\} : i = 1, 2, 3, \ldots \,\} \cup \{N \setminus \{1\}\} \cup \{N\}$. Consider the game represented by the coalition function $v' : \mathcal{A}' \to \mathbb{R}$ defined as follows: for any $S \in \mathcal{A}'$,
$$v'(S) = \begin{cases} 1 & \text{if } S = \{1\}, \\ 1 + \frac{1}{n} & \text{if } S = \{1, n\} \text{ for } n = 2, 3, \ldots, \\ 0 & \text{if } S = N \setminus \{1\} \text{ or } S = \emptyset, \\ 1 & \text{if } S = N. \end{cases}$$
The field hull of $\mathcal{A}'$ is $\mathcal{A} = \{\, S \subseteq N : S \text{ is finite or } N \setminus S \text{ is finite} \,\}$. The fact that this game is balanced as by (3) and (4), with $\mathcal{A}$ and $v$ replaced by $\mathcal{A}'$ and $v'$, respectively, is clear. To show that $\text{ba-core}(v') = \emptyset$, it is enough to follow the arguments presented in Example 5.

Due to Example 7, we have to introduce a new notion of balancedness in the case of restricted cooperation. A game represented by a coalition function $v'$ defined on the class of feasible coalitions $\mathcal{A}'$ is bounded-balanced if there exists a bounded below balanced game $v$ defined on $\mathcal{A}$, where $\mathcal{A}$ is the field hull of $\mathcal{A}'$, such that $v(S) = v'(S)$ for every $S \in \mathcal{A}'$. It is clear that if $\mathcal{A}'$ is a field and the game is bounded below (as in Theorem 4), then we get back the notion of balancedness applied in Theorem 4. Moreover, notice that for finite games bounded-balancedness and balancedness by Faigle (1989) are equivalent. The game in Example 7 above has an empty core because, even though it is non-negative, none of its bounded below "extensions" onto $\mathcal{A}$ is balanced and none of its balanced "extensions" onto $\mathcal{A}$ is bounded below.

The following theorem extends Theorem 4 to the class of games with restricted cooperation; hence it extends Faigle (1989).

Theorem 8. Consider a coalition function $v' : \mathcal{A}' \to \mathbb{R}$, where $\emptyset, N \in \mathcal{A}' \subseteq \mathcal{P}(N)$ and $N$ is a finite or infinite set of players. If $v'$ is bounded below, then $\text{ba-core}(v') \ne \emptyset$ if and only if $v'$ is bounded-balanced.

Proof. If $\mathcal{A}'$ is a field, then we are back at Theorem 4, hence there is nothing to prove. Suppose that $\mathcal{A}'$ is not a field. The game $v'$ is bounded below; take any bounded below game $v$ on the field hull $\mathcal{A}$ that extends $v'$. By Theorem 4, $\text{ba-core}(v) \ne \emptyset$ if and only if $v$ is balanced, and if $v$ is balanced, then $v'$ is bounded-balanced by $v$. We thus get that, $v'$ being bounded below, $\text{ba-core}(v') \ne \emptyset$ if and only if $v'$ is bounded-balanced.

Notice that the game $v'$ of Example 7 is not bounded-balanced, hence $\text{ba-core}(v') = \emptyset$.

Acknowledgements

David Bartl acknowledges the support of the Czech Science Foundation under grant number GAČR 21-03085S. A part of this research was done while he was visiting the Corvinus Institute for Advanced Studies; the support of the CIAS during the stay is gratefully acknowledged. Miklós Pintér acknowledges the support by the Hungarian Scientific Research Fund under projects K 133882 and K 119930.

References

Aliprantis CD, Border KC (2006) Infinite Dimensional Analysis, 3rd edn. Springer-Verlag
Bondareva ON (1963) Some applications of linear programming methods to the theory of cooperative games (in Russian). Problemy Kybernetiki 10:119-139
Dunford N, Schwartz JT (1958) Linear Operators, Part I: General Theory. Wiley-Interscience
Faigle U (1989) Cores of games with restricted cooperation. Zeitschrift für Operations Research 33(6):405-422
Fragnelli V, Llorca N, Sanchez-Soriano J, Tijs SH, Branzei R (2010) Convex games with an infinite number of players and sequencing situations. Journal of Mathematical Analysis and Applications 362(1)
Gillies DB (1959) Solutions to general non-zero-sum games. In: Contributions to the Theory of Games, vol IV. Princeton University Press
Kannai Y (1969) Countably additive measures in cores of games. Journal of Mathematical Analysis and its Applications 27:227-240
Kannai Y (1992) The core and balancedness. In: Handbook of Game Theory with Economic Applications, vol 1. North-Holland
Montrucchio L, Semeraro P (2008) Refinement derivatives and values of games. Mathematics of Operations Research 33(1):97-118
Shapley LS, Shubik M (1969b) On the core of an economic system with externalities. The American Economic Review 59(4):678-684
Timmer J, Llorca N (2002) Linear (semi-)infinite programs and cooperative games. In: Chapters in Game Theory in Honor of Stef Tijs. Theory and Decision Library C, Kluwer Academic Publishers
Zhao J (2018) Three little-known and yet still significant contributions of Lloyd Shapley. Games and Economic Behavior 108:592-599
[ "CONSTRAINTS ON THE TOPOLOGY OF THE UNIVERSE FROM THE 2-YEAR COBE DATA" ]
[ "Angélica De Oliveira-Costa [email protected] \nSpace Sciences Laboratory and Center for Particle Astrophysics\nLawrence Berkeley Laboratory\nUniversity of California\nBuilding 50-20594720BerkeleyCA\n\nInstituto Nacional de Pesquisas Espaciais (INPE), Astrophysics Division\nSão Paulo 12227-010São José dos CamposBrazil\n", "George F Smoot \nSpace Sciences Laboratory and Center for Particle Astrophysics\nLawrence Berkeley Laboratory\nUniversity of California\nBuilding 50-20594720BerkeleyCA\n" ]
[ "Space Sciences Laboratory and Center for Particle Astrophysics\nLawrence Berkeley Laboratory\nUniversity of California\nBuilding 50-20594720BerkeleyCA", "Instituto Nacional de Pesquisas Espaciais (INPE), Astrophysics Division\nSão Paulo 12227-010São José dos CamposBrazil", "Space Sciences Laboratory and Center for Particle Astrophysics\nLawrence Berkeley Laboratory\nUniversity of California\nBuilding 50-20594720BerkeleyCA" ]
The cosmic microwave background (CMB) is a unique probe of cosmological parameters and conditions. There is a connection between anisotropy in the CMB and the topology of the Universe. Adopting a universe with the topology of a 3-Torus, or a universe where only harmonics of the fundamental mode are allowed, and using 2 years of COBE/DMR data, we obtain constraints on the topology of the Universe. Previous work constrained the topology using the slope information and the correlation function of the CMB. We obtain more accurate results by using all multipole moments, avoiding approximations by computing their full covariance matrix. We obtain the best fit for a cubic toroidal universe of scale $7200\,h^{-1}$ Mpc for $n = 1$. The data set a lower limit on the cell size of $4320\,h^{-1}$ Mpc at 95% confidence and $5880\,h^{-1}$ Mpc at 68% confidence. These results show that the most probable cell size would be around 1.2 times larger than the horizon scale, implying that the 3-Torus topology is no longer an interesting cosmological model.

Subject headings: cosmic microwave background, large-scale structure of universe.
10.1086/175977
[ "https://arxiv.org/pdf/astro-ph/9412003v2.pdf" ]
444,355
astro-ph/9412003
f38cd902bb97044e3f024fa32c820baa0635e0e6
CONSTRAINTS ON THE TOPOLOGY OF THE UNIVERSE FROM THE 2-YEAR COBE DATA

May 1997

Angélica De Oliveira-Costa ([email protected])
Space Sciences Laboratory and Center for Particle Astrophysics, Lawrence Berkeley Laboratory, University of California, Building 50-205, Berkeley, CA 94720
Instituto Nacional de Pesquisas Espaciais (INPE), Astrophysics Division, São José dos Campos, São Paulo 12227-010, Brazil

George F. Smoot
Space Sciences Laboratory and Center for Particle Astrophysics, Lawrence Berkeley Laboratory, University of California, Building 50-205, Berkeley, CA 94720

Published in ApJ, 448:477 (1995).
INTRODUCTION

One of the basic assumptions in modern cosmology, the Cosmological Principle, is that on large-scale average our Universe is spatially homogeneous and isotropic. The apparent isotropy on large scales is normally explained as a consequence of spatial homogeneity, in turn understood as a natural result of an "inflationary" period of the early universe (see e.g. Kolb and Turner, 1990). An alternative approach to explaining the apparent homogeneity is to assume an expanding universe with small and finite space sections with a non-trivial topology (Ellis and Schreiber, 1986), the "small universe" model. The "small universe", as its name suggests, should be small enough that we have had time to see the universe around us many times since the decoupling time. The topology of the spatial sections can be quite complicated (Ellis, 1971); however, it is possible to obtain small universe models that reproduce a Friedmann-Lemaître model by choosing certain simple geometries. For example, choosing a rectangular basic cell with sides L_x, L_y and L_z and with opposite faces topologically connected, we obtain a toroidal topology for the small universe known as T^3. The never-ending repetition of this T^3 basic cell should reproduce, at least locally, the Friedmann-Lemaître universe model with zero curvature. The small universe model has received considerable attention in the past few years, since the topology of the Universe is becoming an important problem for cosmologists. From the theoretical point of view, it is possible to have quantum creation of the Universe with a non-trivial topology, i.e., a multiply-connected topology (Zel'dovich and Starobinsky, 1984). From the observational side, this model has been used to explain "observed" periodicity in the distributions of quasars (Fang and Sato, 1985) and galaxies (Broadhurst et al., 1990). There are four known approaches for placing lower limits on the cell size of the T^3 model.
The first two methods constrain the parameter R, an average length scale of the small universe, defined as R = (L_x L_y L_z)^{1/3}. The third and fourth methods constrain the parameter L/y, the ratio between the cell size L, here defined as L = L_x = L_y = L_z, and the radius of the decoupling sphere y, where y = 2cH_o^{-1}. The first method constrains R assuming that it is larger than any distinguishable structure. Using this method, Fairall (1985) suggests that R > 500 Mpc. The second method constrains R based on "observed" periodicity in quasar redshifts. Attempting to identify opposite pairs of quasars, Fang and Liu (1988) suggested that R > 400 h^{-1} Mpc and, using quasar redshift periodicity, Fang and Sato (1985) suggested R > 600 h^{-1} Mpc. The third and fourth methods constrain L/y using the CMB. With the third method, Stevens et al. (1993) obtain the constraint L/y = 0.8 using the slope information from the 1st year of COBE/DMR data (Smoot et al., 1992) while, with the fourth method, Jing and Fang (1994) obtain a best fit L/y ≈ 1.2 using the correlation function from the 2-year COBE/DMR data. As pointed out by Zel'dovich (1973), the power spectrum of density perturbations is continuous (i.e., all wave numbers are possible) if the Universe has a Euclidean topology, and discrete (i.e., only some wave numbers are possible) if the topology has finite space sections. Many years later these ideas were related to the expected CMB power spectrum (Fang and Mo, 1987; Sokolov, 1993; Starobinsky, 1993), mainly after the quadrupole component had been detected by COBE/DMR. Our goal is to place new and accurate limits on the cell size of a small universe using the harmonic decomposition technique to obtain the data power spectrum (Górski, 1994) and a likelihood technique (Bunn and Sugiyama, 1994) to constrain L/y. The method that we use to constrain the parameter L/y is quite different from previous work. The method adopted by Stevens et al.
(1993) constrains the cell size based on the power spectrum of the CMB; they graphically compare the power spectrum of the standard model with the power spectrum expected for the small universe, normalizing to the l = 20 component. Jing and Fang (1994) adopt a different approach: they constrain the cell size using the correlation function of the CMB and making the approximation that bins of the correlation function are uncorrelated. Our analysis, however, is exact. We compute the full covariance matrix for all multipole components and use this covariance matrix to make a χ² fit of the power spectrum extracted from the 2 years of COBE/DMR data to the power spectrum expected for a small universe with different cell sizes L. For simplicity, we limit our calculation to the case of a T^3 cubic universe. We present, in the next sections, a description of the power spectrum expected in a T^3 cubic universe model and the likelihood technique used to constrain L/y.

POWER SPECTRUM OF THE T^3 UNIVERSE MODEL

If the density fluctuations are adiabatic and the Universe is spatially flat, the Sachs-Wolfe fluctuations in the CMB are given by (Peebles, 1982)

    δT/T(θ, φ) = -(1/2)(H_o²/c²) Σ_k (δ_k / k²) e^{i k·x},    (1)

where x is a vector with length y ≡ 2cH_o^{-1} that is pointed in the direction of observation (θ, φ), H_o is the Hubble constant (written here as 100h km/s/Mpc) and δ_k is the density fluctuation in Fourier space, with the sum taken over all wave numbers k. It is customary to expand the CMB anisotropy in spherical harmonics

    δT/T(θ, φ) = Σ_{l=0}^{∞} Σ_{m=-l}^{l} a_lm Y_lm(x̂),    (2)

where a_lm are the spherical harmonic coefficients and x̂ is the unit vector in direction x. The coefficients a_lm are given by

    a_lm = -2π i^l (H_o²/c²) Σ_k (δ_k / k²) j_l(ky) Y*_lm(k̂),    (3)

where j_l are spherical Bessel functions of order l.
If we assume that the CMB anisotropy is a Gaussian random field, the coefficients a_lm are independent Gaussian random variables with zero mean and variance (Fang and Mo, 1987; Stevens et al., 1993)

    ⟨|a_lm|²⟩ = 16π Σ_k ⟨|δ_k|²⟩ (ky)^{-4} j_l²(ky).    (4)

Assuming a power-law power spectrum with shape P(k) = ⟨|δ_k|²⟩ = A k^n, where A is the amplitude of scalar perturbations and n the spectral index, it is possible to perform the sum in (4), replacing it by an integral, and to obtain

    ⟨|a_lm|²⟩ = C_2 [Γ((9-n)/2) / Γ((3+n)/2)] [Γ(l + (n-1)/2) / Γ(l + (5-n)/2)]    (5)

(see e.g. Bond and Efstathiou, 1987). In the literature, the average over the canonical ensemble of universes ⟨|a_lm|²⟩ is usually denoted by

    C_l ≡ ⟨|a_lm|²⟩,    (6)

where the power spectrum C_l is related to the rms temperature fluctuation by ⟨|δT/T|²⟩_rms ≡ Σ_l (2l+1) C_l / 4π. Note that in a Euclidean topology the Sachs-Wolfe spectrum C_l is an integral over the power spectrum; however, in the T^3 universe this is not the case. In this model, only wave numbers that are harmonics of the cell size are allowed. We have a discrete k spectrum (Sokolov, 1993),

    k² = Σ_{i=1}^{3} (2π/L_i)² p_i²,    (7)

where L_1, L_2 and L_3 are the dimensions of the cell and the p_i are integers. For simplicity, assuming L = L_x = L_y = L_z and the same power-law power spectrum cited before, eq. (4) can be written as

    ⟨|a_lm|²⟩ = (16πA / y^n) Σ_{p_x, p_y, p_z} [L / (2πyp)]^{4-n} j_l²(2πyp/L),    (8)

where p² = p_x² + p_y² + p_z². According to (8), the l-th multipole of the CMB temperature is a function of the ratio L/y. This shows that the more multipole components we use in our fit, the stronger our constraints on the cell size will be. However, we cannot use an infinite number of multipole components. The maximum number of multipole components, l_max, will be limited by two things: the limit where the map is noise dominated and the limit where we can truncate the Fourier series without compromising the harmonic decomposition technique (see Górski, 1994). Using eq.
(8), we calculated the expected power spectrum for a T^3 universe with different cell sizes L/y from 0.1 to 3.0, n = 1 and l_max = 30, where l_max = 30 is the limit at which we truncate our data power spectrum. In Figure 1, we plot l(l+1)C_l versus l and normalize all values to the last multipole component l = 30. Note that for very small cells (L ≪ y), the low order multipoles are suppressed. The power spectrum for small cells (such as L/y = 0.1, 0.5 or 1.0) shows the presence of "bumps" that disappear as the cell size increases (L/y ≳ 1.5). The power spectrum finally becomes flat for large cell sizes (L/y ≳ 3.0). These "bumps" can be explained if we remember that only the harmonics of the cell size are allowed to be part of the sum in (8). When the cell size is small there are fewer modes of resonance, and no modes larger than the cell size appear in the sum in (8). As the cell size increases, the sum approaches an integral and the T^3 power spectrum becomes flat. We restrict our analysis to n = 1. This assumption, however, does not weaken our results, since the T^3 model with other n-values tends to fit the data as poorly as with n = 1. For instance, we obtain the maximum likelihood at the same ratio L/y for n = 1 and n = 1.5. This happens because the "bumps", and not the overall slope, are responsible for the disagreement between the model and the data.

DATA ANALYSIS

Each DMR sky map is composed of 6144 pixels and each pixel i contains a measurement of the sky temperature at position x_i. Considering that the temperatures are smoothed by the DMR beam and contaminated with noise, the sky temperatures are described by

    (δT/T)_i = Σ_{lm} a_lm B_l Y_lm(x̂_i) + n_i,    (9)

where B_l is the DMR beam pattern and n_i is the noise in pixel i. We use the values of B_l given by Wright et al. (1994a), which describe the actual beam pattern of the DMR horns, an imperfect Gaussian beam.
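As a concrete illustration, the discrete mode sum of eq. (8) can be evaluated numerically. The sketch below is our own (not the authors' code); it works in units where y = 1 and truncates the sum at a hypothetical cutoff |p_i| ≤ p_max, which is enough to reproduce the suppression of low multipoles for small cells seen in Figure 1:

```python
import numpy as np
from scipy.special import spherical_jn

def t3_alm_sq(L_over_y, l, n=1.0, A=1.0, p_max=12):
    """<|a_lm|^2> for a cubic T^3 universe from the discrete sum of eq. (8),
    truncated at |p_i| <= p_max (hypothetical cutoff), in units where y = 1."""
    r = np.arange(-p_max, p_max + 1)
    px, py, pz = np.meshgrid(r, r, r, indexing="ij")
    p = np.sqrt(px**2 + py**2 + pz**2).ravel()
    p = p[p > 0]                                   # the p = 0 mode carries no anisotropy
    x = 2.0 * np.pi * p / L_over_y                 # Bessel argument 2*pi*y*p/L
    weight = (L_over_y / (2.0 * np.pi * p)) ** (4.0 - n)
    return 16.0 * np.pi * A * np.sum(weight * spherical_jn(l, x) ** 2)
```

For a small cell (e.g. L/y = 0.5) the quadrupole computed this way is strongly suppressed relative to a horizon-sized cell (L/y = 3.0), matching the qualitative behaviour described above.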
We model the quantities n_i in (9) as Gaussian random variables with mean ⟨n_i⟩ = 0 and variance ⟨n_i n_j⟩ = σ_i² δ_ij, assuming uncorrelated pixel noise (Lineweaver et al., 1994). When we have all-sky coverage, the a_lm coefficients are given by

    a_lm = ∫_{4π} (δT/T) Y*_lm(x̂) dΩ.    (10)

In the real sky maps, we do not have all-sky coverage. Because of the uncertainty in Galaxy emission, we are forced to remove all pixels between 20° below and above the Galaxy plane. This cut represents a loss of almost 34% of all sky pixels and destroys the orthogonality of the spherical harmonics. Replacing the integral in (10) by a sum over the number of pixels that remain in the sky map after the Galaxy cut, N_pix, we define a new set of coefficients by

    b_lm ≡ w Σ_{i=1}^{N_pix} (δT/T)_i Y*_lm(x̂_i),    (11)

where the normalization is chosen to be w ≡ 4π/N_pix. Substituting (9) into (11), we obtain

    b_lm = Σ_{l₁m₁} a_{l₁m₁} B_{l₁} W^{ll₁}_{mm₁} + w Σ_{i=1}^{N_pix} n_i Y*_lm(x̂_i),    (12)

with covariance

    ⟨b_lm b*_{l′m′}⟩ = Σ_{l₁m₁} W^{ll₁}_{mm₁} W^{l′l₁}_{m′m₁} C_{l₁} B²_{l₁} + w² Σ_{i=1}^{N_pix} σ_i² Y*_lm(x̂_i) Y_{l′m′}(x̂_i),    (13)

where

    W^{ll₁}_{mm₁} ≡ w Σ_{i=1}^{N_pix} Y*_lm(x̂_i) Y_{l₁m₁}(x̂_i).    (14)

Defining our multipole estimates as

    C^DMR_l ≡ (1/(2l+1)) Σ_m b_lm b*_lm,    (15)

their expectation values are simply

    ⟨C^DMR_l⟩ = (1/(2l+1)) Σ_m ⟨b_lm b*_lm⟩    (16)

and their covariance matrix M is given by

    M_{ll′} ≡ [2 / ((2l+1)(2l′+1))] Σ_{mm′} ⟨b_lm b*_{l′m′}⟩².    (17)

The C^DMR_l coefficients are not good estimates of the true multipole moments C_l. However, they are useful for constraining our cosmological parameters. The likelihood and the χ² are, respectively, defined by

    L ∝ e^{-χ²/2},    χ² = ∆C^T M^{-1} ∆C,

where ∆C^T and ∆C are l_max-dimensional row and column vectors with entries ∆C_l = C^DMR_l − ⟨C^DMR_l⟩ and M is the covariance matrix as described in (17) with dimensions l_max × l_max. Here C^DMR_l denotes the C^DMR_l coefficients actually extracted from the data.
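The cut-sky estimator of eqs. (11) and (15) can be sketched as follows. This is our own minimal illustration, not the DMR pipeline: the pixel positions (theta, phi) are whatever survives the Galaxy cut, and beam smoothing and noise are ignored; the helper `ylm` is ours:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def ylm(l, m, theta, phi):
    """Spherical harmonic Y_lm (theta = polar angle, phi = azimuth)."""
    am = abs(m)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - am) / factorial(l + am))
    y = norm * lpmv(am, l, np.cos(theta)) * np.exp(1j * am * phi)
    return (-1) ** am * np.conj(y) if m < 0 else y

def pseudo_cl(temps, theta, phi, l_max):
    """Cut-sky estimator: b_lm = w * sum_i (dT/T)_i * conj(Y_lm(x_i)) with
    w = 4*pi/N_pix (eq. 11), then C_l^DMR = sum_m |b_lm|^2 / (2l + 1) (eq. 15)."""
    w = 4.0 * np.pi / len(temps)
    cl = np.zeros(l_max + 1)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            blm = w * np.sum(temps * np.conj(ylm(l, m, theta, phi)))
            cl[l] += np.abs(blm) ** 2
        cl[l] /= 2 * l + 1
    return cl
```

On a uniform pixelization, a constant map yields C_0 = 4π and near-zero higher multipoles; with a Galaxy cut the mode coupling of eq. (12) makes the estimates biased, which is exactly why the full covariance matrix of eq. (17) is needed.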
Because the perturbations depend on an unknown constant A, the power spectrum normalization, we have to constrain two parameters at once. In practice, this calculation is done by fixing the ratio L/y and changing the normalization by a small factor. We multiply the first term on the right side of (13) by this factor and calculate a new covariance matrix. Repeating this procedure for each cell size, we finally get a likelihood grid that constrains the ratio L/y and the normalization parameter.

RESULTS

In Figure 2, we show the angular power spectrum C^DMR_l extracted from the data. We use a 2-year combined 53 plus 90 GHz map, with a Galaxy cut of 20°, monopole and dipole removed. We plot l(l+1)⟨C^DMR_l⟩ versus l from l = 2 to l = 30, with bias (⟨C^DMR_l⟩ − C_l) removed and error bars given by the diagonal terms of the covariance matrix M. In computing the bias and error bars, we assume eq. (5) with n = 1. The shape of this power spectrum and its multipole values are consistent with values reported by Wright et al. (1994b), and for l > 15 the power spectrum is basically dominated by noise. We computed the likelihood function L(L/y, σ_7°), using it to constrain the ratio L/y and the normalization σ_7°, where σ_7° is the rms variance at 7°. For the data set described above, we found the maximum likelihood at (L/y, σ_7°) = (1.2, 37.4 µK). In Figure 3, we plot the likelihood function L(L/y, σ_7°). Notice that the likelihoods cannot be normalized because they do not converge to zero for very large cell sizes, i.e., the volume under the likelihood function is infinite. Since the likelihoods are not zero for very large cell sizes, we could naively consider that the probability of the universe being small is essentially zero. However, this conclusion is clearly exaggerated and based on the fact that we multiplied our likelihoods by a uniform prior, and there is nothing special about adopting a uniform prior.
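The grid evaluation described above can be sketched as follows. This is our own simplification (all inputs hypothetical): we rescale the whole covariance matrix with the normalization factor, whereas the paper rescales only the signal term of eq. (13):

```python
import numpy as np

def chi2_grid(cl_data, model_means, model_covs, norms):
    """chi^2(L/y, norm) = dC^T M^{-1} dC with dC_l = C^DMR_l - <C^DMR_l>.
    model_means[i], model_covs[i]: predicted spectrum and covariance for the
    i-th trial cell size; norms: trial normalization factors."""
    chi2 = np.empty((len(model_means), len(norms)))
    for i, (mean, cov) in enumerate(zip(model_means, model_covs)):
        for j, a in enumerate(norms):
            d = cl_data - a * np.asarray(mean)
            chi2[i, j] = d @ np.linalg.solve(a**2 * np.asarray(cov), d)
    return chi2
```

The best fit is then simply the grid cell with the minimum χ², with confidence contours drawn from the χ² surface as in Figure 4.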
In order to obtain rigorous confidence limits for our analysis, we replace the maximum likelihood fit by a minimum χ² fit. We compute the chi-squared function χ²(L/y, σ_7°) and use it to constrain the ratio L/y and the normalization σ_7°. In Figure 4, we plot the probability that the T^3 model is consistent with the data as a function of the ratio L/y and the normalization σ_7° (bottom). Confidence limits of 68%, 95% and 99.7% are shown in the contour plot (top). We found the highest consistency probability (minimum χ²) at (L/y, σ_7°) = (1.2, 49.7 µK), represented by a cross in the contour plot. Removing the quadrupole, we obtained similar results; see Table 1 for the lower limits on cell sizes. We obtain the constraint L/y = 1.2 (+∞, −0.48) at 95% confidence. We cannot place an upper limit on the cell size: all large cells are equally probable.

CONCLUSIONS

The strong constraint from our analysis comes from the predicted power spectrum of the T^3 universe; see Figure 1. According to this plot, a reduction in the cell size to values below the horizon scale should suppress the quadrupole and low multipole anisotropies, while the suppression is negligible if the cell is very large, at least larger than the horizon. It is possible to notice these properties in Figures 3 and 4: both favor large cell sizes. The observed presence of the quadrupole and other low order anisotropies automatically constrains our cell to be very large. In other words, even before making the χ² fit, we expect to obtain very large cells. We remind the reader that our analysis is for n = 1. We made this assumption because the results of fitting the T^3 model seem to be relatively insensitive to changes in n and the "bumps", not the overall slope, are responsible for the poor fit between the model and the data. In other words, our results are independent of any particular inflationary model.
From the COBE/DMR data, we obtain the best χ² fit for a toroidal universe with L/y = 1.2, which corresponds to a cell size of L = 7200 h^{-1} Mpc. A cell size below 72% of the size of the horizon (L/y < 0.72) is incompatible with the COBE measurements at 95% confidence, and a cell size below the size of the horizon (L/y < 0.98) is ruled out at 68% confidence. Since the T^3 topology is interesting if the cell size is considerably smaller than the horizon, this model loses most of its appeal.

We would like to thank Jon Aymon, Douglas Scott, Joseph Silk, Daniel Stevens and Max Tegmark for many useful comments and help with the manuscript. AOC acknowledges SCT-PR/CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for her financial support under process No. 201330/93-8(NV). This work was supported in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under contract No. DE-AC03-76SF00098.

FIGURE CAPTIONS

Figure 1: Expected power spectrum for the T^3 universe model with n = 1 for different cell sizes with L/y from 0.1 to 3.0.
Figure 2: Power spectrum of the 2-year combined 53+90 GHz COBE/DMR data with bias removed and the error bars given by the diagonal terms of the covariance matrix M.
Figure 3: The likelihood function L(L/y, σ_7°) for the T^3 universe model with n = 1.
Figure 4: The probability that the T^3 model is consistent with the data is plotted as a function of the ratio L/y and the normalization σ_7° (bottom). Confidence limits of 68%, 95% and 99.7% are shown in the contour plot (top). We found the highest consistency probability (minimum χ²) at L/y = 1.2, represented by a cross in the contour plot.

Table 1: Lower limits on L/y

Confidence Level | L/y with C_2 | L/y without C_2
68%              | 0.98         | 0.97
90%              | 0.75         | 0.68
95%              | 0.72         | 0.65
99.7%            | 0.61         | 0.60

REFERENCES

Bond, J.R. & Efstathiou, G. 1987, Mon. Not. R. Astr. Soc., 226, 655.
Broadhurst, T.J. et al. 1990, Nature, 343, 726.
Bunn, E. & Sugiyama, N. 1994, preprint (astro-ph/9407069).
Ellis, G.F.R. 1971, Gen. Rel. and Grav., 2(1), 7.
Ellis, G.F.R. & Schreiber, G. 1986, Phys. Lett. A, 115(3), 97.
Fairall, A.P. 1985, Mon. Not. R. Astr. Soc. of South. Africa, 44(11), 114.
Fang, L.Z. & Liu, Y.L. 1988, Mod. Phys. Lett. A, 3(13), 1221.
Fang, L.Z. & Mo, H. 1987, Mod. Phys. Lett. A, 2(4), 229.
Fang, L.Z. & Sato, H. 1985, Gen. Rel. and Grav., 17(11), 1117.
Górski, K.M. 1994, Ap. J. Lett., 430, L85.
Jing, Y.P. & Fang, L.Z. 1994, Phys. Rev. Lett., 73(14), 1882.
Kolb, E.W. & Turner, M.S. 1990, The Early Universe, Addison-Wesley.
Lineweaver, C. et al. 1994, Ap. J., 436, 452.
Peebles, P.J.E. 1982, Ap. J. Lett., 263, L1.
Smoot, G.F. et al. 1992, Ap. J. Lett., 396, L1.
Sokolov, I.Y. 1993, JETP Lett., 57(10), 617.
Starobinsky, A.A. 1993, JETP Lett., 57(10), 622.
Stevens, D. et al. 1993, Phys. Rev. Lett., 71(1), 20.
Wright, E.L. et al. 1994a, Ap. J., 420, 1.
Wright, E.L. et al. 1994b, Ap. J., 436, 433.
Zel'dovich, Ya.B. 1973, Comm. Astrophys. Space Sci., 5(6), 169.
Zel'dovich, Ya.B. & Starobinsky, A.A. 1984, Sov. Astron. Lett., 10(3), 135.
From Shadow Segmentation to Shadow Removal

Hieu Le and Dimitris Samaras
Stony Brook University, Stony Brook, NY 11794, USA
The requirement for paired shadow and shadow-free images limits the size and diversity of shadow removal datasets and hinders the possibility of training large-scale, robust shadow removal algorithms. We propose a shadow removal method that can be trained using only shadow and non-shadow patches cropped from the shadow images themselves. Our method is trained via an adversarial framework, following a physical model of shadow formation. Our central contribution is a set of physics-based constraints that enables this adversarial training. Our method achieves competitive shadow removal results compared to state-of-the-art methods that are trained with fully paired shadow and shadow-free images. The advantages of our training regime are even more pronounced in shadow removal for videos. Our method can be fine-tuned on a testing video with only the shadow masks generated by a pre-trained shadow detector and outperforms state-of-the-art methods on this challenging test. We illustrate the advantages of our method on our proposed video shadow removal dataset.
DOI: 10.1007/978-3-030-58621-8_16
arXiv: 2008.00267
Keywords: Shadow Removal, GAN, Weakly-supervised, Illumination model, Unpaired, Image-to-Image

Introduction

Shadows are present in most natural images. Shadow effects make objects harder to detect or segment [23], and scenes with shadows are harder to process and analyze [20]. Realistic shadow removal is an integral part of image editing [3] and can greatly improve performance on various computer vision tasks [32,41,56,24,21], getting increased attention in recent years [37,13,11]. Data-driven approaches using deep learning models have achieved remarkable performance on shadow removal [5,22,17,15,47,55] thanks to recent large-scale datasets [45,47].
Most of the current deep-learning shadow removal approaches are end-to-end mapping functions trained in a fully supervised manner. Such systems require pairs of shadow images and their shadow-free counterparts as training signals. However, this type of data is cumbersome to obtain, lacks diversity, and is error-prone: all current shadow removal datasets exhibit color mismatches between the shadow images and their shadow-free ground truth (see Fig. 1, left panel). Moreover, there are no images with self-cast shadows because the occluders are never visible in the image in the current data acquisition setups [47,37,15]. This dependency on paired data significantly hinders building large-scale, robust shadow-removal systems. A recent method trying to overcome this issue is MaskShadow-GAN [15], which learns shadow removal from unpaired shadow and shadow-free images. However, such Cycle-GAN [58] based systems usually require enough statistical similarity between the two sets of images [25,2]. This requirement can be hard to satisfy when capturing shadow-free images is tricky, such as shadow-free images of urban areas [4] or moving objects [18,36]. In this paper, we propose an alternative solution to the data dependency issue. We first observe that image patches alongside the shadow boundary contain critical information for shadow removal, including non-shadow, umbra and penumbra areas.

Fig. 1: (Left) Training with paired {shadow, shadow-free} images, which are expensive to collect, lack diversity, and are sensitive to errors due to possible color mismatches between the two images. Note the slightly different color tone between the two images. (Right) In this paper, we propose to learn shadow removal from unpaired shadow and non-shadow patches cropped from the same shadow image. This eliminates the need for shadow-free images.
They sufficiently reflect the characteristics of the shadowing effects, including the color differences between shadow and non-shadow areas as well as the gradual changes of the shadow effects across the shadow boundary [34,33,14]. If we further assume that the shadow effects are fairly consistent in the umbra areas, a patch-based shadow removal can be used to remove shadows in the whole image. Based on this observation, we propose training a patch-based shadow removal system for which we use unpaired shadow and non-shadow patches directly cropped from the shadow images themselves as training data. This approach eliminates the dependency on paired training data and opens up the possibility of handling different types of shadows, since it can be trained with any kind of shadow image. Compared to MaskShadow-GAN, shadow and non-shadow patches cropped from the same image naturally ensure significant statistical similarity. The only supervision required in this data processing scheme are the shadow masks, which are relatively easy to obtain, either manually, semi-interactively [45,11], or automatically using shadow detection methods [5,59,57,23]. Automatic shadow detection is improving, with the main challenge being generalization across datasets. At some point, one can expect to get very good shadow masks automatically, which would allow training our shadow removal method with very little annotation cost. In particular, to obtain shadow and shadow-free patches, we crop the shadow images into small overlapping patches of size n × n with a step size of m. Based on the shadow masks, we group these patches into three sets: a non-shadow set (N) containing patches having no shadow pixels, a shadow-boundary set (B) containing patches lying on the shadow boundaries, and a full-shadow set (F) containing patches where all pixels are in shadow. With small enough patch size n and step size m, we can obtain enough training patches in each set.
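The patch-grouping step above can be sketched as follows (our own minimal version; the patch size n and stride m are illustrative, and we classify patches by the fraction of shadow pixels under the mask):

```python
import numpy as np

def make_patch_sets(image, shadow_mask, n=32, m=8):
    """Crop an H x W image into overlapping n x n patches with stride m and
    group them by the binary shadow mask (1 = shadow): the non-shadow set N,
    the shadow-boundary set B (mixed pixels), and the full-shadow set F."""
    N, B, F = [], [], []
    H, W = shadow_mask.shape
    for i in range(0, H - n + 1, m):
        for j in range(0, W - n + 1, m):
            patch = image[i:i + n, j:j + n]
            frac = shadow_mask[i:i + n, j:j + n].mean()
            if frac == 0.0:
                N.append(patch)       # no shadow pixels
            elif frac == 1.0:
                F.append(patch)       # fully in shadow
            else:
                B.append(patch)       # straddles the shadow boundary
    return N, B, F
```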
With this training set, we train a shadow removal system to learn a mapping from patches in the shadow-boundary set B to patches in the non-shadow set N. Essentially, this mapping needs to infer the color difference alongside the shadow edges, including the chromatic attributes of the light source and the smooth change of the shadow effects across the shadow boundary, in order to transform a shadow patch into a non-shadow patch. This is, in spirit, similar to early shadow removal approaches that focus on shadow edges to remove shadows [38,9,8,44,46]. By simply cropping shadow images into patches, we pose shadow removal as an unpaired image-to-image cross-domain mapping [54,2,29] that can be estimated via an adversarial framework. In particular, we seek a mapping function G that takes as input a shadow-boundary patch x from the set B, and outputs an image patch x̂, such that a critic function D cannot distinguish whether x̂ was drawn from the non-shadow set N or generated by G. Note that one potential solution here is to use Cycle-GAN or MaskShadow-GAN to estimate this transformation. However, the mapping functions learned by these methods are not able to remove shadows from patches in the full-shadow set F. Training such an unpaired image-to-image mapping for shadow removal is challenging: the mapping is under-constrained and training can collapse easily [12,28,27,30,42,31]. Here, we propose to systematically constrain the shadow removal process by a physical model of shadow formation [39] and incorporate a number of physical properties of shadows into the framework. We show that these physics-based priors define a transformation closely modelling shadow removal. Driven by an adversarial signal, our framework effectively learns physically-plausible shadow removal without any direct supervision from paired data.
Specifically, we constrain the shadow removal process to a shadow image decomposition model [22] that extracts a set of parameters and a matting layer from the shadow image. This set of shadow parameters is responsible for removing shadows in the umbra areas of the shadows via a linear function. Thus, once we estimate these shadow parameters from shadow-boundary patches, we can use them to remove shadows from patches fully covered by the same shadow, under the assumption that they share the same set of shadow parameters. Based on the physical properties of shadows, we apply the following constraints to the model:

- We limit the search space of the shadow parameters and shadow matte to the appropriate value ranges that correspond to shadow removal.
- Our matting and smoothness losses ensure that shadow removal only happens in the shadow areas and transitions smoothly across shadow boundaries.
- Our boundary loss on the generated shadow-free image enforces color similarity between the inner and outer areas alongside shadow boundaries.

With these constraints and the adversarial signal, our method achieves shadow removal results that are competitive with state-of-the-art methods that were trained in a fully supervised manner with paired shadow and non-shadow images [22,47,37]. We further compare our method to state-of-the-art methods on a novel and challenging video shadow removal dataset including static videos with various scenes and shadow conditions. This test exposes the weaknesses of data-driven methods trained on datasets lacking diversity. Our patch-based method seems to generalize better than other methods when evaluated on this video shadow removal test. Most importantly, we can easily fine-tune our pre-trained model on a single testing video to further improve shadow removal results, showcasing this advantage of our training scheme.
In short, our contributions are:

- We propose the use of an adversarial critic to train a shadow remover from unpaired shadow and non-shadow patches, providing an alternative solution to the paired-data dependency issue.
- We propose a set of physics-based constraints that define a transformation closely modelling shadow removal, which enables shadow remover training with only an adversarial training signal.
- Our system, trained without any shadow-free images, has competitive results compared to fully-supervised state-of-the-art methods on the ISTD dataset.
- We collect a novel video shadow removal dataset. Our shadow removal system can be fine-tuned for free to better remove shadows on testing videos.

Related Work

Shadows are physical phenomena. Early shadow removal works, without much training data, usually focused on studying different physical shadow properties [8,7,9,6,1,10,26,53]. Many works look for cues to remove shadows starting from shadow edges. Finlayson et al. [9] used shadow edges to estimate a scaling factor that differentiates shadow areas from their non-shadow counterparts. Wu & Tang [51] imposed a smoothness constraint alongside the shadow boundaries to handle penumbra areas. Wu et al. [50] detected strong shadow edges to remove shadows in the whole image. Shor & Lischinski [39] defined an affine relationship between shadow and non-shadow pixels, where they used the areas surrounding the shadow edges to estimate the parameters of such affine transforms. Shadow boundary effects can also be modeled via image matting [14]. Wu et al. [52] estimated a matte layer representing the pixel-wise shadow probability to estimate a color transfer function to remove shadows. Chuang et al. [3] computed a shadow matte from video for shadow editing. They computed the lit and shadow images by finding min-max values at each pixel location throughout all frames of a video captured by a static camera.
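The per-pixel min-max extraction from a static-camera video, as just described, can be sketched as follows (our own minimal version, assuming the shadow sweeps across the scene so that every pixel is both lit and shadowed in some frame):

```python
import numpy as np

def lit_and_shadow_images(frames):
    """frames: T x H x W x C array from a static camera. The per-pixel maximum
    over time approximates the fully lit image and the per-pixel minimum the
    fully shadowed image, in the spirit of Chuang et al. [3]."""
    frames = np.asarray(frames, dtype=float)
    return frames.max(axis=0), frames.min(axis=0)
```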
We use this technique to create a video dataset for testing shadow removal methods in Sec. 4.4. Current shadow removal methods [22,17,55,5,47] use deep-learning models trained with full supervision on large-scale datasets [47,37] of paired shadow and shadow-free images. Pairs are obtained by taking a photo with shadows, then removing the occluders from the scene to take the photo without shadows. DeshadowNet [37] extracted multi-context features to predict a matte layer that removes shadows. Some works use adversarial frameworks to train their shadow removal. In [47] a unified adversarial framework predicted shadow masks and removed shadows. Similarly, Ding et al. [5] used an adversarial signal to improve shadow removal in an iterative manner. Note that these methods use the shadow-free image as the main training signal, while our method is trained only through an adversarial loss. In prior work [22] we constrained shadow removal by a physical model of shadow formation. We trained networks to extract shadow parameters and a matte layer to remove shadows. We adapt this model to patch-based shadow removal. Note that in [22], all shadow parameters and matting layers were pre-computed using paired training images and the network was trained to simply regress those values, whereas our model automatically estimates them through adversarial training. MaskShadow-GAN [17] is the only deep-learning method that learns shadow removal from just unpaired training data.

Method

We describe our patch-based shadow removal in Sec. 3.1. Our whole-image pipeline for shadow removal is described in Sec. 3.2. For both image-level and patch-level shadow removal, we use shadow matting [3,35,40,49] to express a shadow-free image I_shadow-free by:

I_shadow-free = I_relit · α + I_shadow · (1 − α)    (1)

with I_shadow the shadow image, α the matting layer, and I_relit the relit image.
The relit image contains shadow pixels relit to their non-shadow values, computed via a linear function following a physical shadow formation model [22,39]:

I_relit_i = w · I_shadow_i + b    (2)

The unknown factors in this shadow matting formula are the set of shadow parameters (w, b), which define the linear function that removes the shadow effects in the umbra areas of the shadow, and the matte layer α, which models the shadow effects on the shadow boundaries. We train a system of three networks to estimate these unknown factors via adversarial training. We use the annotated shadow segmentation masks for training. For testing, we obtain a segmentation mask for the image using the shadow detector proposed by Zhu et al. [59].

Patch-based Shadow Removal

Fig. 2 summarizes our framework to remove shadows from a single image patch, which consists of three networks: Param-Net, Matte-Net, and D-Net. Param-Net and Matte-Net predict the shadow parameters (w, b) and the matte layer α respectively to jointly remove shadows. D-Net is the critic distinguishing between the generated image patches and the real shadow-free patches.

Fig. 2: Weakly-supervised shadow decomposition (w ∈ [1,10]; b ∈ [−25,25]; α ∈ [0,1]). Our framework consists of three networks: Param-Net, Matte-Net, and D-Net. Param-Net and Matte-Net predict the shadow parameters (w, b) and the matte layer α respectively to jointly remove the shadow. Param-Net takes as input the input image patch and its shadow mask to predict three sets of shadow parameters (w, b) for the three color channels, which are used to obtain a relit image. The input image patch, shadow mask, and relit image are input into Matte-Net to predict a matte layer. D-Net is the critic function distinguishing between the generated image patches and the real shadow-free patches. The only supervision signal is the set of shadow-free patches. The four losses guiding this training are the matting loss, smoothness loss, boundary loss, and adversarial loss.

With Param-Net and Matte-Net being the generators and D-Net being the discriminator, the three networks form an adversarial training framework where the main source of training signal is the set of shadow-free patches. In theory, as D-Net is trained to distinguish patches containing shadow boundaries from patches without any shadows, a natural solution to fool D-Net is to remove the shadows in the input shadow patches to make them indistinguishable from shadow-free patches. However, such an adversarial signal from D-Net alone often cannot guide the generators (Param-Net and Matte-Net) to actually remove shadows. The parameter search space is very large and the mapping is extremely under-constrained. In practice, we observe that without any constraints, Param-Net tends to output consistently high values of (w, b), as they directly increase the overall brightness of the image patches, and Matte-Net tends to introduce artifacts similar to visual patterns frequently appearing in the non-shadow areas. Thus, our main idea is to constrain this framework with physical shadow properties. Constraining the output shadow parameters, shadow mattes, and combined shadow-free images forces the networks to only transform the input images in a manner consistent with shadow removal. First, Param-Net estimates a scaling factor w and an additive constant b, for each R, G, B color channel, to remove the shadow effects on the shadowed pixels in the umbra areas of the shadows via Eq. (2). Here we hypothesize that the main component that explains the shadow effects is the scaling factor w. Accordingly, we bound its search space to the range [1, s_max]. The minimum value of w = 1 ensures that the transformation always scales up the values of the shadowed pixels.
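The decomposition of Eqs. (1)-(2) and the bounded search space for (w, b) can be sketched in NumPy. The tanh-based mapping below is one plausible realization of the "Tanh functions together with scaling and additive constants" mentioned in the implementation details, not necessarily the authors' exact code; the bound of ±25 on b follows the figure annotation.

```python
import numpy as np

S_MAX = 10.0   # upper bound on the scaling factor w (the paper's s_max)
B_MAX = 25.0   # bound on the additive constant b (the paper's c)

def bound_params(w_raw, b_raw):
    """Map unconstrained network outputs into the constrained search spaces:
    w in [1, s_max], b in [-c, c]. One plausible tanh-based mapping."""
    w = 1.0 + (S_MAX - 1.0) * (np.tanh(w_raw) + 1.0) / 2.0
    b = B_MAX * np.tanh(b_raw)
    return w, b

def relight(shadow, w, b):
    """Eq. (2): linear relighting of shadowed pixels."""
    return w * shadow + b

def compose(shadow, relit, alpha):
    """Eq. (1): alpha-blend the relit image with the shadow image."""
    return relit * alpha + shadow * (1.0 - alpha)
```

With alpha = 1 (umbra) the output equals the relit value; with alpha = 0 (non-shadow) it equals the input, matching the constraints described below.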
We set the search space for b to the range [−c, c], where we choose a relatively small value of c = 25 (the pixel intensity varies in the range [0,255]). Our intuition is to force the network to define the mapping mainly via the scaling factor w. We choose s_max = 10. This upper bound of w prevents the network from collapsing as w increases. As we show in the ablation study, the network fails to learn shadow removal without proper search space limitation. Matte-Net estimates a blending layer α that combines the shadow image patch and the relit image patch into a shadow-free image patch via Eq. (1). The value of a pixel i in the output image patch, I_output_i, is computed as:

I_output_i = I_relit_i · α_i + I_shadow_i · (1 − α_i)    (3)

We map the output of Matte-Net to [0,1], as α is being used as a matting layer, and constrain the value of α_i as follows:
- If i indicates a non-shadow pixel, we enforce α_i = 0 so that the value of the output pixel I_output_i equals its value in the input image I_shadow_i.
- If i indicates a pixel in the umbra areas of the shadows, we enforce α_i = 1 so that the value of the output pixel I_output_i equals its relit value I_relit_i.
- We do not control the value of α in the penumbra areas of the shadows and rely on the training of the network to estimate these values.
Here the umbra, non-shadow, and penumbra areas can be roughly specified using the shadow mask. We define two areas alongside the shadow boundary, denoted as M_in and M_out (see Fig. 3). M_out is the area right outside the boundary, computed by subtracting the shadow mask M from its dilated version M_dilated. The inside area M_in is computed similarly by subtracting an eroded shadow mask from the shadow mask. These two areas M_in and M_out roughly define a small area surrounding the shadow boundary, which can be considered as the penumbra area of the shadow. Then the above constraints are implemented as the matting loss L_mat−α, computed by the following formula over every pixel i:

L_mat−α = Σ_{i ∈ (M − M_in)} |α_i − 1| + Σ_{i ∉ M_dilated} |α_i|    (4)

Moreover, since the shadow effects are assumed to vary smoothly across the shadow boundaries, we enforce an L1 smoothness loss on the spatial gradients of the matte layer α.
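The M_in/M_out bands and the matting loss of Eq. (4) can be sketched with plain NumPy morphology. A 3×3 structuring element is assumed here for dilation and erosion; the paper does not specify the kernel used.

```python
import numpy as np

def dilate(mask, it=1):
    """3x3 binary dilation implemented with shifted copies (no SciPy needed)."""
    m = mask.astype(bool)
    h, w = m.shape
    for _ in range(it):
        p = np.pad(m, 1)
        out = np.zeros_like(m)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        m = out
    return m

def erode(mask, it=1):
    # Erosion is dilation of the complement, complemented.
    return ~dilate(~mask.astype(bool), it)

def boundary_bands(mask, it=1):
    """M_out = dilated(M) - M and M_in = M - eroded(M), as in Fig. 3."""
    m = mask.astype(bool)
    m_out = dilate(m, it) & ~m
    m_in = m & ~erode(m, it)
    return m_in, m_out

def matting_loss(alpha, mask, it=1):
    """Eq. (4): push alpha to 1 on (M - M_in) and to 0 outside M_dilated."""
    m = mask.astype(bool)
    m_in, _ = boundary_bands(mask, it)
    umbra = m & ~m_in        # i in (M - M_in)
    outside = ~dilate(m, it) # i not in M_dilated
    return np.abs(alpha[umbra] - 1.0).sum() + np.abs(alpha[outside]).sum()
```

The loss vanishes exactly when α is 1 inside the (eroded) shadow region and 0 outside the dilated mask, leaving the boundary band unconstrained.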
This smoothness loss L_sm also prevents Matte-Net from producing undesired artifacts, since it enforces local uniformity. This loss is:

L_sm−α = |∇α|    (5)

Then, given a set of estimated parameters (w, b) and a matte layer α, we obtain an output image I_output via the image decomposition formula (1). We penalize the L1 difference between the average intensity of pixels lying right outside and inside the shadow boundary, which are the two areas M_in and M_out.

Fig. 3: The penumbra area of the shadow. Given the input image and shadow mask, we define two areas alongside the shadow boundary, denoted as M_in (shown in green) and M_out (shown in red). These two areas roughly define a small region surrounding the shadow boundary, which can be considered as the penumbra area of the shadow.

This shadow boundary loss L_bd is computed by:

L_bd = | Σ_{i ∈ M_in} I_output_i / |M_in| − Σ_{i ∈ M_out} I_output_i / |M_out| |    (6)

Last, we compute the adversarial loss with the feedback from D-Net:

L_GAN = log(1 − D(I_output))    (7)

where D(·) denotes the output of D-Net. The final objective function to train Param-Net and Matte-Net is to minimize a weighted sum of the above losses:

L_final = λ_sm L_sm−α + λ_mat L_mat−α + λ_bd L_bd + λ_adv L_GAN    (8)

All these losses are essential for training our networks, as shown in our ablation study in Sec. 4.3. By using all the proposed losses together, our method is able to automatically extract a set of shadow parameters and an α layer from an input image patch. Fig. 4 visualizes the components extracted by our framework for two challenging input patches. In the first row, a dark shadow area is lit correctly to its non-shadow value. In the second row, the matte layer α is not affected by the dark material of the surface.

Image Shadow Removal using a patch-based model

We estimate a set of shadow parameters and a matte layer for the input image to remove shadows via Eq. (1). First, we obtain a shadow mask using the shadow detector of Zhu et al. [59].
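A minimal NumPy sketch of the remaining losses, Eqs. (5), (6), and (8). The adversarial term of Eq. (7) is passed in as a precomputed scalar, since it depends on the D-Net network; the default weights are the (λ_sm, λ_mat, λ_bd, λ_adv) values reported in the implementation details.

```python
import numpy as np

def smoothness_loss(alpha):
    """Eq. (5): L1 norm of the spatial gradients of the matte layer alpha."""
    gy = np.abs(np.diff(alpha, axis=0)).sum()
    gx = np.abs(np.diff(alpha, axis=1)).sum()
    return gy + gx

def boundary_loss(output, m_in, m_out):
    """Eq. (6): L1 difference between the mean intensity just inside (M_in)
    and just outside (M_out) the shadow boundary."""
    return abs(output[m_in].mean() - output[m_out].mean())

def final_loss(l_sm, l_mat, l_bd, l_gan,
               lambdas=(10.0, 100.0, 0.5, 0.5)):
    """Eq. (8): weighted sum of the four losses."""
    lam_sm, lam_mat, lam_bd, lam_adv = lambdas
    return lam_sm * l_sm + lam_mat * l_mat + lam_bd * l_bd + lam_adv * l_gan
```

A uniform matte yields zero smoothness loss, and a constant-intensity output yields zero boundary loss, as expected from the definitions.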
We crop the input shadow image into overlapping patches. All patches containing the shadow boundaries are then input into the three networks. We approximate the whole-image shadow parameters from the patch shadow parameters, under the assumption that they share the same or very similar parameters. We simply compute the image shadow parameters as a linear combination of the patch shadow parameters. Similarly, we compute the values of each pixel in the matte layer by combining the overlapping matte patches. We set the matte layer pixels in the non-shadow area to 0 and those in the umbra area to 1. We observe that the classification scores obtained from the critic function D-Net correlate with the quality of the generated image patches. Thus, we normalize these scores to sum to 1 and use them as coefficients for the linear combinations that form the image shadow parameters and matte layer.

Fig. 4: Weakly-supervised shadow image decomposition. With only shadow mask supervision, our method automatically learns to decompose the shadow effect in the input image patch I_sd into a matte layer α and a relit image I_relit. The matte layer α combines I_sd and I_relit to obtain a shadow-free image patch I_output via Eq. (1). (Columns: I_sd, α, I_relit, I_relit · α, I_sd · (1 − α), I_output.)

Experiments

Network Architectures and Implementation Details. We use a VGG-19 architecture for Param-Net and a U-Net architecture for Matte-Net. D-Net is a simple 5-layer convolutional network. To map the outputs of the networks to a certain range, we use Tanh functions together with scaling and additive constants. We use stochastic gradient descent with the Adam solver [19] to train our model. The initial learning rate for Matte-Net and D-Net is 0.0002 and for Param-Net is 0.00002. All networks were trained from scratch. We experimentally set our training parameters (λ_bd, λ_mat−α, λ_sm−α, λ_adv) to (0.5, 100, 10, 0.5). We train our network with batch size 96 for 150 epochs.
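The score-weighted merging of patch parameters described above can be sketched as follows. The (N, 6) parameter layout (one (w, b) pair per RGB channel per patch) and plain sum-normalization of positive critic scores are illustrative assumptions.

```python
import numpy as np

def merge_patch_params(patch_params, critic_scores):
    """Combine per-patch shadow parameters into image-level parameters,
    weighting each patch by its normalized D-Net score.
    patch_params:  (N, 6) array of (w, b) per color channel for N patches.
    critic_scores: (N,) positive quality scores from the critic."""
    weights = critic_scores / critic_scores.sum()  # normalize to sum to 1
    return (patch_params * weights[:, None]).sum(axis=0)
```

With equal scores this reduces to a plain average; higher-scoring patches pull the image-level parameters toward their own estimates.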
We use the ISTD dataset [47] for training. Each original training image of size 640×480 is cropped into patches of size 128×128 with a step size of 32. This creates 311,220 image patches from 1,330 training shadow images. This training set includes 151,327 non-shadow patches, 147,312 shadow-boundary patches, and 12,581 full-shadow patches.

Table 1: Shadow removal results of our networks compared to state-of-the-art shadow removal methods on the adjusted ISTD testing set [22,47]. The metric is RMSE (the lower, the better). Best results are in bold. (Columns: Methods, Training data, RMSE.)

Shadow Removal Evaluation

We first evaluate our method on the adjusted testing set of the ISTD dataset [47,22]. Following previous work [47,14,37,22], we compute the root-mean-square error (RMSE) in the LAB color space on the shadow area, non-shadow area, and the whole image, where all shadow removal results are re-sized to 256×256. Note that our method can take any size image as input. We used the Zhu et al. [59] shadow detector, pre-trained on the SBU dataset and fine-tuned on the ISTD dataset, to obtain the shadow masks for our testing, as in [22]. In Table 1, we compare our weakly-supervised method with the recent state-of-the-art methods of Guo et al. [14], Gong et al. [11], Yang et al. [53], ST-CGAN [47], DeshadowNet [37], MaskShadow-GAN [15], and SP+M-Net [22]. The second column shows the training data of each method. All other deep-learning methods require paired shadow-free images as training signal except MaskShadow-GAN, which is trained on unpaired shadow and shadow-free images from the ISTD dataset. ST-CGAN and SP+M-Net also require the training shadow masks. Our method, trained without any shadow-free image, obtains 9.7 RMSE on the shadow areas, which is competitive with SP+M-Net. However, SP+M-Net requires full supervision. Our method outperforms MaskShadow-GAN by 22%, reducing the RMSE in the shadow area from 12.4 to 9.7, while also achieving lower RMSE on the non-shadow area.
We outperform DeshadowNet and ST-CGAN, two methods that were trained with paired shadow and shadow-free images, reducing the RMSE by 38% and 26% respectively. Fig. 5 compares qualitative shadow removal results from our method with other state-of-the-art methods on the ISTD dataset. Our method, trained with just an adversarial signal, produces clean shadow-free images with very few artifacts. On the other hand, ST-CGAN and MaskShadow-GAN tend to produce blurry images, introduce artifacts, and often relight the wrong image parts.

Fig. 5: Comparison of shadow removal on the ISTD dataset. Qualitative comparison between our method and the state-of-the-art methods ST-CGAN [47], MaskShadow-GAN [15], and SP+M-Net [22], shown alongside the input and ground truth (GT). Our method, trained without any shadow-free images, produces clean shadow-free images with very few artifacts.

Our method generates images which are visually similar to those of SP+M-Net. While SP+M-Net is less affected by errors in the shadow masks (shown in the 2nd row), our method generates images with more consistent colors between areas inside and outside the shadow boundaries (3rd and 4th rows). In all cases, our method preserves almost perfectly the textures beneath the shadows (last row).

Ablation Studies

We conduct ablation studies to better understand the effects of each proposed component in our framework. Starting from the original model with all the proposed features and losses, we train new models removing the proposed components one at a time. Table 2 summarizes these experiments. The first row shows the results of our model when we set the search space of the scaling factor w to [−10, 10] and the search space of the additive constant b to [−255, 255]. In this case, the model collapses and consistently outputs uniformly dark images. Similarly, the model collapses when we omit the boundary loss L_bd.
We observe that this loss is essential in stabilizing the training, as it prevents Param-Net from outputting consistently high values. The matting loss L_mat−α and the L_GAN loss are critical for learning proper shadow removal. We observe that without the matting loss L_mat−α, the model behaves similarly to an image inpainting model, where it tends to modify all parts of the images to fool the discriminator. Last, dropping the smoothness loss L_sm only results in a slight drop in shadow removal performance, from 9.7 to 10.2 RMSE on the shadow areas. However, we observe more visible boundary artifacts on the output images without this loss.

Video Shadow Removal

Video shadow removal is challenging for shadow removal methods. A video sequence has hundreds of frames with changing shadows. It is even harder for videos with a moving camera, moving objects, and illumination changes. To better evaluate the performance of shadow removal methods in videos, we collected a set of 8 videos, each containing a static scene without visible moving objects. We cropped those videos to obtain clips where the only dominant motions are caused by the shadows (either by direct light motion or motion of the unseen occluders). As can be seen from the top row of Fig. 6, the dataset includes videos containing shadows cast by close-up occluders and far-distance occluders, videos with simple-to-complex shadows, and shadows on various types of backgrounds and materials. Inspired by [3], we propose a "max-min" technique to obtain a single pseudo shadow-free frame for each video: since the camera is static and there is no visible moving object in the frames, the changes in the video are caused by the moving shadows. We first obtain two images V_max and V_min by taking the maximum and minimum intensity values at each pixel location across the whole video. V_max is then the image that contains the shadow-free values of pixels if they ever go out of the shadows.
Similarly, their shadowed values, if they ever go into the shadows, are captured in V_min. Fig. 6 shows these two images for a video named "plant". From these two images, we can trivially obtain a mask, namely the moving-shadow mask M, marking the pixels appearing in both the shadow and non-shadow areas in the video:

M_i = 1 if V_max,i > V_min,i + ε, and 0 otherwise,    (9)

where we set a small threshold of ε = 40. This method allows us to obtain pairs of shadow and non-shadow pixel values in the moving-shadow mask M for free. To measure shadow removal performance, we input the frames of these videos into the shadow removal algorithm and measure the RMSE in the LAB color space between the output frame and the image V_max on the moving-shadow area M. We compute the RMSE on each video and take their average to measure the shadow removal performance on the whole dataset. Table 3 summarizes the performance of our method compared to MaskShadow-GAN [15] and SP+M-Net [22] on these videos. Our method outperforms SP+M-Net and MaskShadow-GAN, reducing the RMSE by 5% and 11% respectively. As our method only needs shadow segmentation masks for training, we use a pre-trained shadow detection model [59] to obtain a set of shadow masks for each video. While these shadow mask sets are imperfect, fine-tuning our model using this free supervision results in a 10% error reduction, showing the advantage of our training scheme. Fig. 7 visualizes two example shadow removal results for different methods. We show a single input frame of each video. From left to right are the input frame, the shadow removal results of MaskShadow-GAN [15], the results of SP+M-Net [22], the results of our model trained on the ISTD dataset, and the results of our model fine-tuned on each testing video for 1 epoch. The top row shows an example where all methods perform relatively well.
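The max-min construction, the moving-shadow mask of Eq. (9), and the masked RMSE metric can be sketched as follows. Plain intensity values are used here instead of the LAB color space for simplicity.

```python
import numpy as np

EPS = 40  # intensity threshold from Eq. (9)

def max_min_decomposition(frames):
    """Per-pixel max/min over all frames of a static video: V_max holds the
    lit (pseudo shadow-free) values, V_min the shadowed values."""
    v_max = frames.max(axis=0)
    v_min = frames.min(axis=0)
    return v_max, v_min

def moving_shadow_mask(v_max, v_min, eps=EPS):
    """Eq. (9): pixels that appear both lit and shadowed during the video."""
    return v_max > v_min + eps

def video_rmse(output_frame, v_max, mask):
    """Error of one output frame against the pseudo shadow-free frame,
    measured only on the moving-shadow mask."""
    diff = output_frame[mask].astype(float) - v_max[mask].astype(float)
    return float(np.sqrt((diff ** 2).mean()))
```

A perfect shadow remover would reproduce V_max on the moving-shadow pixels and thus score an RMSE of 0 on this metric.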
Our method seems to have better color balance between the relit pixels and the non-shadow pixels, although there is a visible boundary artifact due to imperfect shadow masks. After 1 epoch of fine-tuning, these artifacts are greatly suppressed. The bottom row shows a challenging case where all methods fail to remove shadows properly.

Table 3: Shadow removal results on our proposed Video Shadow Removal dataset. The metric is RMSE (the lower, the better), compared to the pseudo shadow-free frame on the moving-shadow mask. All methods were pretrained on the ISTD dataset. Ours+ denotes our model fine-tuned for one epoch on each video using the shadow masks generated by a shadow detector [59] pre-trained on the SBU dataset [43].

Conclusion

We presented a novel patch-based deep-learning model to remove shadows from images. This method can be trained on patches cropped directly from the shadow images, using the shadow segmentation mask as the only supervision signal. This obviates the dependency on paired training data and allows us to train this system on any kind of shadow image. The main contribution of this paper is a set of physics-based constraints that enable the training of this mapping. We have illustrated the effectiveness of our method on the standard ISTD dataset [47] and on our novel Video Shadow Removal dataset. As shadow detection methods mature with the aid of recently proposed shadow detection datasets [48,16], our method can be trained to remove shadows for a very low annotation cost.

Fig. 1: Paired training data (left) consists of training examples of {shadow, shadow-free} image pairs, in contrast to our training from unpaired patches.
Fig. 6: Examples from the Video Shadow Removal dataset. The dataset consists of videos where both the scene and the visible objects remain static. The top row shows frames of different videos in our dataset. The second row visualizes our method to obtain the shadow-free frames for evaluating shadow removal.

Fig. 7: Shadow removal on videos. We visualize the shadow removal results of different methods on two frames extracted from our video dataset. "Ours+" denotes the results of our model fine-tuned on each testing video for 1 epoch. The top row shows an example where all methods perform relatively well. The bottom row shows a challenging case where all methods fail to remove shadows properly.

Table 2: Ablation studies. We train our network without a certain loss or feature and report the shadow removal performance on the ISTD dataset [47]. The metric is RMSE (the lower, the better). The table shows that all the proposed features in our model are essential in training for shadow removal.

Methods | Shadow | Non-Shadow | All
Input Image | 40.2 | 2.6 | 8.5
Ours w/o limiting search space | 47.5 | 2.9 | 9.9
Ours w/o L_bd | 41.7 | 3.9 | 9.8
Ours w/o L_mat−α | 38.7 | 3.1 | 9.0
Ours w/o L_sm−α | 10.2 | 2.8 | 4.0
Ours w/o L_GAN | 26.9 | 2.9 | 6.8
Ours | 9.7 | 3.0 | 4.0

Table 3 (results):

Methods | Input Frame | MaskShadow-GAN [15] | SP+M-Net [22] | Ours | Ours+
RMSE | 32.9 | 23.5 | 22.2 | 20.9 | 18.0

All code, trained models, and data are available at: https://www3.cs.stonybrook.edu/~cvl/projects/FSS2SR/index.html

Acknowledgements. This work was partially supported by the Partner University Fund, the SUNY2020 ITSC, and a gift from Adobe. Computational support provided by IACS and a GPU donation from NVIDIA. We thank Kumara Kahatapitiya and Cristina Mata for assistance with the manuscript.

References

1. Arbel, E., Hel-Or, H.: Shadow removal using intensity surfaces and texture anchor points. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 1202-1216 (2011)
2. Choi, Y., Choi, M.J., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8789-8797 (2018)
3. Chuang, Y.Y., Goldman, D.B., Curless, B., Salesin, D.H., Szeliski, R.: Shadow matting and compositing. ACM Transactions on Graphics 22(3), 494-500 (July 2003), special issue of the SIGGRAPH 2003 Proceedings
4. Dare, P.: Shadow analysis in high-resolution satellite imagery of urban areas. Photogrammetric Engineering and Remote Sensing 71, 169-177 (2005)
5. Ding, B., Long, C., Zhang, L., Xiao, C.: ARGAN: Attentive recurrent generative adversarial network for shadow detection and removal. In: IEEE/CVF International Conference on Computer Vision (ICCV). pp. 10212-10221 (2019)
6. Drew, M.S.: Recovery of chromaticity image free from shadows via illumination invariance. In: IEEE Workshop on Color and Photometric Methods in Computer Vision, ICCV'03. pp. 32-39 (2003)
7. Finlayson, G., Drew, M.S.: 4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities. In: Proceedings of the International Conference on Computer Vision. vol. 2, pp. 473-480 (July 2001)
8. Finlayson, G., Hordley, S., Lu, C., Drew, M.: On the removal of shadows from images. IEEE Transactions on Pattern Analysis and Machine Intelligence (2006)
9. Finlayson, G., Hordley, S.D., Drew, M.S.: Removing shadows from images. In: Proceedings of the European Conference on Computer Vision. pp. 823-836. ECCV '02, Springer-Verlag, London, UK (2002)
10. Fredembach, C., Finlayson, G.D.: Hamiltonian path based shadow removal. In: BMVC (2005)
11. Gong, H., Cosker, D.: Interactive removal and ground truth for difficult shadow scenes. J. Opt. Soc. Am. A 33(9), 1798-1811 (2016)
12. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems. pp. 5767-5777 (2017)
13. Guo, R., Dai, Q., Hoiem, D.: Single-image shadow detection and removal using paired regions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2011)
14. Guo, R., Dai, Q., Hoiem, D.: Paired regions for shadow detection and removal. IEEE Transactions on Pattern Analysis and Machine Intelligence (2012)
15. Hu, X., Jiang, Y., Fu, C.W., Heng, P.A.: Mask-ShadowGAN: Learning to remove shadows from unpaired data. In: ICCV (2019)
16. Hu, X., Wang, T., Fu, C.W., Jiang, Y., Wang, Q., Heng, P.A.: Revisiting shadow detection: A new benchmark dataset for complex world. arXiv:1911.06998 (2019)
17. Hu, X., Zhu, L., Fu, C.W., Qin, J., Heng, P.A.: Direction-aware spatial context features for shadow detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
18. KaewTrakulPong, P., Bowden, R.: An improved adaptive background mixture model for real-time tracking with shadow detection (2002)
19. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of the International Conference on Learning Representations (2015)
20. Le, H., Goncalves, B., Samaras, D., Lynch, H.: Weakly labeling the Antarctic: The penguin colony case. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (June 2019)
21. Le, H., Nguyen, V., Yu, C.P., Samaras, D.: Geodesic distance histogram feature for video segmentation. In: ACCV (2016)
22. Le, H., Samaras, D.: Shadow removal via shadow image decomposition. In: Proceedings of the International Conference on Computer Vision (2019)
23. Le, H., Vicente, T.F.Y., Nguyen, V., Hoai, M., Samaras, D.: A+D Net: Training a shadow detector with adversarial shadow attenuation. In: Proceedings of the European Conference on Computer Vision (2018)
24. Le, H., Yu, C.P., Zelinsky, G., Samaras, D.: Co-localization with category-consistent features and geodesic distance propagation. In: ICCV 2017 Workshop on CEFRL: Compact and Efficient Feature Representation and Learning in Computer Vision (2017)
25. Li, Y., Tang, S., Zhang, R., Zhang, Y., Li, J., Yan, S.: Asymmetric GAN for unpaired image-to-image translation. IEEE Transactions on Image Processing 28, 5881-5896 (2019)
26. Liu, F., Gleicher, M.: Texture-consistent shadow removal. In: ECCV (2008)
27. Liu, H., Gu, X., Samaras, D.: Wasserstein GAN with quadratic transport cost. In: IEEE International Conference on Computer Vision (ICCV) (October 2019)
28. Liu, H., Gu, X., Samaras, D.: A two-step computation of the exact GAN Wasserstein distance. In: International Conference on Machine Learning. pp. 3165-3174 (2018)
29. Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. arXiv:1703.00848 (2017)
30. Mescheder, L., Nowozin, S., Geiger, A.: Which training methods for GANs do actually converge? In: International Conference on Machine Learning (2018)
31. Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. In: International Conference on Machine Learning (2018)
32. Müller, T., Erdnüeß, B.: Brightness correction and shadow removal for video change detection with UAVs. In: Defense + Commercial Sensing (2019)
33. Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.: Estimating shadows with the bright channel cue (2010)
34. Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.: Simultaneous cast shadows, illumination and geometry inference using hypergraphs. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(2), 437-449 (2013)
35. Porter, T., Duff, T.: Compositing digital images. Proceedings of the ACM SIGGRAPH Conference on Computer Graphics 18(3) (January 1984)
36. Prati, A., Mikic, I., Trivedi, M.M., Cucchiara, R.: Detecting moving shadows: Algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 25, 918-923 (2003)
37. Qu, L., Tian, J., He, S., Tang, Y., Lau, R.W.H.: DeshadowNet: A multi-context embedding deep network for shadow removal. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
38. Shiting, W., Hong, Z.: Clustering-based shadow edge detection in a single color image. In: International Conference on Mechatronic Sciences, Electric Engineering and Computer. pp. 1038-1041 (Dec 2013)
39. Shor, Y., Lischinski, D.: The shadow meets the mask: Pyramid-based shadow removal. Computer Graphics Forum 27(2), 577-586 (April 2008)
40. Smith, A.R., Blinn, J.F.: Blue screen matting. In: Proceedings of the ACM SIGGRAPH Conference on Computer Graphics (1996)
41. Su, N., Zhang, Y., Tian, S., Yan, Y., Miao, X.: Shadow detection and removal for occluded object information recovery in urban high-resolution panchromatic satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9, 2568-2582 (2016) Improving generalization and stability of generative adversarial networks. H Thanh-Tung, T Tran, S Venkatesh, International Conference on Learning Representations. Thanh-Tung, H., Tran, T., Venkatesh, S.: Improving generalization and stability of generative adversarial networks. In: International Conference on Learning Rep- resentations (2019) Noisy label recovery for shadow detection in unfamiliar domains. T F Y Vicente, M Hoai, D Samaras, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionVicente, T.F.Y., Hoai, M., Samaras, D.: Noisy label recovery for shadow detection in unfamiliar domains. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016) Leave-one-out kernel optimization for shadow detection and removal. T F Y Vicente, M Hoai, D Samaras, IEEE Transactions on Pattern Analysis and Machine Intelligence. 403Vicente, T.F.Y., Hoai, M., Samaras, D.: Leave-one-out kernel optimization for shadow detection and removal. IEEE Transactions on Pattern Analysis and Ma- chine Intelligence 40(3), 682-695 (2018) Large-scale training of shadow detectors with noisily-annotated shadow examples. T F Y Vicente, L Hou, C P Yu, M Hoai, D Samaras, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionVicente, T.F.Y., Hou, L., Yu, C.P., Hoai, M., Samaras, D.: Large-scale training of shadow detectors with noisily-annotated shadow examples. In: Proceedings of the European Conference on Computer Vision (2016) Single image shadow removal via neighbor-based region relighting. T F Y Vicente, D Samaras, Proceedings of the European Conference on Computer Vision Workshops. the European Conference on Computer Vision WorkshopsVicente, T.F.Y., Samaras, D.: Single image shadow removal via neighbor-based region relighting. 
In: Proceedings of the European Conference on Computer Vision Workshops (2014) Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. J Wang, X Li, J Yang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionWang, J., Li, X., Yang, J.: Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018) T Wang, X Hu, Q Wang, P A Heng, C W Fu, Instance shadow detection. CVPR. Wang, T., Hu, X., Wang, Q., Heng, P.A., Fu, C.W.: Instance shadow detection. CVPR (2020) Digital compositing for film and video. S Wright, Focal PressWright, S.: Digital compositing for film and video. In: Focal Press (2001) Strong shadow removal via patch-based shadow edge detection. Q Wu, W Zhang, B V K V Kumar, IEEE International Conference on Robotics and Automation pp. Wu, Q., Zhang, W., Kumar, B.V.K.V.: Strong shadow removal via patch-based shadow edge detection. 2012 IEEE International Conference on Robotics and Au- tomation pp. 2177-2182 (2012) A bayesian approach for shadow extraction from a single image. T P Wu, C K Tang, Tenth IEEE International Conference on Computer Vision (ICCV'05. 1Wu, T.P., Tang, C.K.: A bayesian approach for shadow extraction from a single im- age. Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1 1, 480-487 Vol. 1 (2005) Natural shadow matting. T P Wu, C K Tang, M S Brown, H Y Shum, http:/doi.acm.org/10.1145/1243980.1243982ACM Trans. Graph. 262Wu, T.P., Tang, C.K., Brown, M.S., Shum, H.Y.: Natural shadow matting. ACM Trans. Graph. 26(2) (June 2007). https://doi.org/10.1145/1243980.1243982, http: //doi.acm.org/10.1145/1243980.1243982 Shadow removal using bilateral filtering. Q Yang, K Tan, N Ahuja, IEEE Transactions on Image Processing. 
21Yang, Q., Tan, K., Ahuja, N.: Shadow removal using bilateral filtering. IEEE Trans- actions on Image Processing 21, 4361-4368 (2012) Dualgan: Unsupervised dual learning for image-to-image translation. Z Yi, H Zhang, P Tan, M Gong, IEEE International Conference on Computer Vision (ICCV. Yi, Z., Zhang, H., Tan, P., Gong, M.: Dualgan: Unsupervised dual learning for image-to-image translation. 2017 IEEE International Conference on Computer Vi- sion (ICCV) pp. 2868-2876 (2017) Ris-gan: Explore residual and illumination with generative adversarial networks for shadow removal. L Zhang, C Long, X Zhang, C Xiao, AAAI Conference on Artificial Intelligence (AAAI). Zhang, L., Long, C., Zhang, X., Xiao, C.: Ris-gan: Explore residual and illumina- tion with generative adversarial networks for shadow removal. In: AAAI Conference on Artificial Intelligence (AAAI) (2020) Improving shadow suppression for illumination robust face recognition. W Zhang, X Zhao, J M Morvan, L Chen, IEEE Transactions on Pattern Analysis and Machine Intelligence. 41Zhang, W., Zhao, X., Morvan, J.M., Chen, L.: Improving shadow suppression for illumination robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 41, 611-624 (2019) Distraction-aware shadow detection. Q Zheng, X Qiao, Y Cao, R W H Lau, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR. Zheng, Q., Qiao, X., Cao, Y., Lau, R.W.H.: Distraction-aware shadow detec- tion. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 5162-5171 (2019) Unpaired image-to-image translation using cycle-consistent adversarial networks. J Y Zhu, T Park, P Isola, A A Efros, 2017 IEEE International Conference on. Computer Vision (ICCVZhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. 
In: Computer Vision (ICCV), 2017 IEEE International Conference on (2017) Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. L Zhu, Z Deng, X Hu, C W Fu, X Xu, J Qin, P A Heng, Proceedings of the European Conference on Computer Vision. the European Conference on Computer VisionZhu, L., Deng, Z., Hu, X., Fu, C.W., Xu, X., Qin, J., Heng, P.A.: Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In: Proceedings of the European Conference on Computer Vision (2018)
Surface tension and the mechanics of liquid inclusions in compliant solids

Robert W. Style (Yale University, New Haven, CT 06520, USA; Mathematical Institute, University of Oxford, Oxford OX1 3LB, UK)
John S. Wettlaufer (Yale University, New Haven, CT 06520, USA; Mathematical Institute, University of Oxford, Oxford OX1 3LB, UK)
Eric R. Dufresne (Yale University, New Haven, CT 06520, USA)

Abstract: Eshelby's theory of inclusions has wide-reaching implications across the mechanics of materials and structures, including the theories of composites, fracture, and plasticity. However, it does not include the effects of surface stress, which has recently been shown to control many processes in soft materials such as gels, elastomers and biological tissue. To extend Eshelby's theory of inclusions to soft materials, we consider liquid inclusions within an isotropic, compressible, linear-elastic solid. We solve for the displacement and stress fields around individual stretched inclusions, accounting for the bulk elasticity of the solid and the surface tension (i.e. isotropic, strain-independent surface stress) of the solid-liquid interface. Surface tension significantly alters the inclusion's shape and stiffness as well as its near- and far-field stress fields. These phenomena depend strongly on the ratio of the inclusion radius, R, to an elastocapillary length, L. Surface tension is significant whenever inclusions are smaller than 100L. While Eshelby theory predicts that liquid inclusions generically reduce the stiffness of an elastic solid, our results show that liquid inclusions can actually stiffen a solid when R < 3L/2. Intriguingly, surface tension cloaks the far-field signature of liquid inclusions when R = 3L/2. These results have far-reaching applications, from measuring local stresses in biological tissue to determining the failure strength of soft composites.

DOI: 10.1039/c4sm02413c
arXiv: 1409.1998
PDF: https://arxiv.org/pdf/1409.1998v1.pdf
(Dated: September 9, 2014)

I.
INTRODUCTION

Eshelby's theory of inclusions [1] provides a fundamental result underpinning a wide swath of phenomena in composite mechanics [2-5], fracture mechanics [6,7], dislocation theory [8], plasticity [9,10] and even seismology [11]. The theory describes how an inclusion of one elastic material deforms when it is embedded in an elastic host matrix. At the level of an individual inclusion, it predicts how the inclusion will deform in response to far-field stresses applied to the matrix. It also allows the prediction of the macroscopic material properties of a composite from a knowledge of its microstructure.

Eshelby's theory does not include the effects of surface stresses at the inclusion/matrix boundary. However, recent work has suggested that surface stresses need to be accounted for in soft materials. This has been suggested both by theoretical models of nanoscale inclusions [12-14], and by recent experiments which have shown that surface tension (isotropic, strain-independent surface stress) can also significantly affect soft solids at micron and even millimetric scales. For example, solid capillarity limits the resolution of lithographic features [15-18], drives pearling and creasing instabilities [19-22], causes the Young-Dupré relation to break down for sessile droplets [23-28], and leads to a failure of the Johnson-Kendall-Roberts theory of adhesion [29-33]. Of particular relevance are our recent experiments embedding droplets in soft solids, where we found that Eshelby's predictions could not describe the response of inclusions below a critical, micron-scale elastocapillary length [34]. A similar breakdown was also seen in recent experiments that embedded bubbles in soft, elastic foams [35].

* [email protected]

To apply Eshelby's theory to a broad class of mechanical phenomena in soft materials, we need to reformulate it to account for surface tension.
Here, we derive analytic expressions for the deformation of individual inclusions, the deformation and stress fields around the inclusions, and the elastic moduli of soft composites. Our approach builds upon previous theoretical works that have: focused on strain-dependent surface stresses [14,36-39] (which are relevant to nano-inclusions in stiffer materials, but not for softer materials such as gels [40]), only considered isotropic loadings [12], used incorrect boundary conditions [13] (cf. [41]), or only considered incompressible solids and employed a dipole approximation to calculate composite properties [42].

II. STRETCHING INDIVIDUAL INCLUSIONS

We begin by considering how surface tension affects Eshelby's solution for the deformation of individual inclusions embedded in elastic solids subjected to far-field stresses [1]. We consider an isolated, incompressible, spherical droplet of radius R embedded in a linear-elastic solid that is deformed by a constant uniaxial far-field stress, as shown in Figure 1. The displacement field u in the solid satisfies

    (1 − 2ν)∇²u + ∇(∇·u) = 0,    (1)

where ν is Poisson's ratio of the solid. For far-field boundary conditions, the stress σ in the solid is given by the applied uniaxial stress σ_zz = σ^∞, σ_xx = σ_yy = 0 in Cartesian coordinates. Stress and strain are related by

    ε_ij = (1/E)[(1 + ν)σ_ij − ν δ_ij σ_kk],    (2)

where δ_ij is the Kronecker delta, and E is Young's modulus. Thus, the far-field boundary conditions can also be written ε_zz = ε^∞_zz = σ^∞/E, ε_xx = ε_yy = −ν ε^∞_zz. At the surface of the droplet the elastic stress satisfies a generalised Young-Laplace equation, which states that the difference in normal stress across an interface depends on its surface stress, Υ, and curvature K (equal to twice the mean curvature, or the sum of the principal curvatures) via

    σ·n = −p n + Υ K n    (3)

(e.g. [20,23]).
Here n is the normal to the deformed droplet surface, σ·n is the normal stress on the solid side, and p is the pressure in the droplet. The assumption that the surface stress is simply an isotropic, strain-independent surface tension is appropriate for many soft materials including gels and elastomers [40]. Expressions for n and K in terms of surface displacements are given in the Appendix; these differ from the expressions used in [13], which ignored inclusion deformation and assumed that K = 2Υ/R [14].

We exploit symmetry and solve the problem in spherical polar coordinates by adopting as an ansatz the solution

    u_r = Fr + G/r² + P₂(cos θ)[12νAr³ + 2Br + 2(5 − 4ν)C/r² − 3D/r⁴],

and

    u_θ = [dP₂(cos θ)/dθ][(7 − 4ν)Ar³ + Br + 2(1 − 2ν)C/r² + D/r⁴],    (4)

as described by [14]. The surface displacements in the radial and θ directions (θ is the polar angle from the z-axis) are u_r and u_θ, respectively; P₂ is the Legendre polynomial of order 2, and A through G will be determined from the boundary conditions.

Applying the far-field strain condition, we find that A = 0, F = (1 − 2ν)ε^∞_zz/3 and B = (1 + ν)ε^∞_zz/3. Droplet incompressibility requires that

    ∫_S u·n dS = ∫₀^2π ∫₀^π R² u_r sin θ dθ dφ = 0,

where S is the boundary of the stretched droplet, and the area integral is evaluated using results from differential geometry summarized in the Appendix. This gives G = −(1 − 2ν)R³ε^∞_zz/3. Finally, by applying the boundary condition (3), using equation (2) to convert stresses to strains and displacements, we obtain

    C = [5R³(1 + ν)(R/L − (1 + ν))] / {6[(R/L)(7 − 5ν) + (17 − 2ν − 19ν²)]} ε^∞_zz    (5)

and

    D = [R⁵(1 + ν)(R/L − (−1 + ν + 2ν²))] / [(R/L)(7 − 5ν) + (17 − 2ν − 19ν²)] ε^∞_zz.    (6)
Here L ≡ Υ/E is the elastocapillary length, a material property of the solid/liquid interface. For perturbations of wavelength λ ≪ L, surface deformations are primarily opposed by surface tension, whereas for λ ≫ L, bulk elasticity suppresses deformation of the surface (e.g. [19,25,43,44]).

With the expressions for A through G, Equation (4) gives us the exact displacement solutions. These also allow calculation of stresses in the solid: we convert displacements to strains (e.g. [45]) and then use Equation (2) to find the non-zero stress components σ_rr, σ_rθ, σ_θθ and σ_φφ.

While these results are for uniaxial stress, they are readily extended to provide the solution for general far-field stresses. In the appropriate coordinate frame, the far-field stress matrix is diagonalisable, so the only non-zero far-field stresses are σ_1, σ_2 and σ_3. Then, from linearity of the governing equations, we can calculate the resulting displacements by simply summing the solutions for uniaxial far-field stresses σ_1, σ_2 and σ_3.

A. Inclusion shape

While Eshelby's results are scale-free, surface tension makes the response of a liquid inclusion strongly size-dependent. For large droplets, R ≫ L, the fluid droplet deforms more than the surrounding solid. In this limit, the droplet shape only depends on σ^∞/E and ν, in agreement with Eshelby's theory [1]. However, as R approaches L, high interfacial curvatures are suppressed by surface tension. For R ≪ L, u_r(R, θ) = 0 and the droplets remain spherical. This is visualized in Figure 2 for a uniaxially-stressed solid where the far-field stress and strain are σ^∞ = 0.3E and ε^∞_zz = 0.3, respectively. Here, the radial and polar displacements in the top two rows are measured relative to the far-field displacement field: u^∞_r = Fr + 2P₂(cos θ)Br and u^∞_θ = [dP₂(cos θ)/dθ]Br. Changes in droplet shape are captured with an effective droplet strain ε_d = (ℓ − 2R)/R, where ℓ is the long-axis of the droplet. This is plotted in Figure 3(a). In both extremes of droplet size, the droplet shape is independent of size.
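As a quick numerical check (our own sketch, not part of the paper; the function and variable names are ours), the surface displacement built from Equations (4)-(6) reproduces both limits: for R ≪ L the radial surface displacement vanishes and the droplet stays spherical, while for R ≫ L the droplet strain approaches the Eshelby value 10ε^∞_zz/3.

```python
import math

def P2(x):
    # Legendre polynomial of order 2
    return 0.5 * (3 * x**2 - 1)

def u_r_surface(theta, R, R_over_L, eps_inf, nu=0.5):
    # Radial surface displacement u_r(r = R, theta) from the ansatz (4),
    # with A = 0 and B, F, G, C, D fixed by the boundary conditions (5)-(6).
    F = (1 - 2 * nu) * eps_inf / 3
    B = (1 + nu) * eps_inf / 3
    G = -(1 - 2 * nu) * R**3 * eps_inf / 3
    denom = R_over_L * (7 - 5 * nu) + (17 - 2 * nu - 19 * nu**2)
    C = 5 * R**3 * (1 + nu) * (R_over_L - (1 + nu)) / (6 * denom) * eps_inf
    D = R**5 * (1 + nu) * (R_over_L - (-1 + nu + 2 * nu**2)) / denom * eps_inf
    return (F * R + G / R**2
            + P2(math.cos(theta)) * (2 * B * R + 2 * (5 - 4 * nu) * C / R**2 - 3 * D / R**4))

R, eps_inf = 1.0, 0.01
u_small = u_r_surface(0.0, R, 1e-8, eps_inf)            # capillary limit: droplet stays spherical
eps_d_large = 2 * u_r_surface(0.0, R, 1e8, eps_inf) / R  # Eshelby limit: -> 10*eps_inf/3
```

Evaluating at the droplet tip (θ = 0) suffices here, since the angular dependence is carried entirely by P₂(cos θ).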
In the capillary-dominated regime (R ≪ L) the droplet stays spherical (ε_d = 0). In the large-droplet limit (R ≫ L), surface tension does not play a role, and Eshelby's results are recovered (ε_d = 10ε^∞_zz/3). There is a smooth crossover between these limits in the vicinity of R ∼ L. Surface tension makes significant changes to droplet shape for droplet radii up to about 100L.

Although we only consider the uniaxial stress case above, similar results are obtained for more general triaxial far-field stresses. For example, for an incompressible solid in plane-stress conditions (σ^∞_1, σ^∞_2 ≠ 0, σ^∞_3 = 0, e.g. [34]), the length of the droplet in the 1-direction is

    ℓ₁ = 2R[1 + 5(2σ^∞_1 − σ^∞_2)/(E(6 + 15L/R))].    (8)

We recently compared this result to experimental measurements of individual liquid inclusions in soft, stretched solids. We found fairly good agreement over a wide range of droplet sizes, substrate stiffnesses and applied strains [34].

B. Stress focussing by inclusions

The macroscopic strength of composites can be reduced due to stress focusing by inclusions. According to the Tresca yield condition, the solid will yield when the shear stress exceeds a critical value τ_c. Figure 2 (bottom row) shows the maximum shear stress, τ_max, for an incompressible solid with a uniaxial far-field stress for various values of R/L. The maximum shear stress is greatest at the tip of the inclusion, and the value there increases significantly as R is reduced below L. In fact, at the inclusion tip,

    τ_max(r = R, θ = 0) = τ^∞_max · 5(2 + 9L/R)/(6 + 15L/R).    (9)
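For a concrete sense of Equations (8) and (9), here is a small sketch (ours, not from the paper) that evaluates the tip shear-stress concentration in the two limits, along with the plane-stress droplet length:

```python
def tip_shear_concentration(L_over_R):
    # Eq. (9): tau_max(r = R, theta = 0) / tau_inf_max for an incompressible solid
    return 5 * (2 + 9 * L_over_R) / (6 + 15 * L_over_R)

def droplet_length_1(R, s1, s2, E, L):
    # Eq. (8): droplet length in the 1-direction under plane stress
    # (s1, s2 stand for the far-field stresses sigma_1, sigma_2)
    return 2 * R * (1 + 5 * (2 * s1 - s2) / (E * (6 + 15 * L / R)))

eshelby = tip_shear_concentration(0.0)    # large-droplet limit, 5/3
capillary = tip_shear_concentration(1e9)  # surface-tension limit, -> 3
```

The monotonic rise from 5/3 to 3 as L/R grows is exactly the capillary stress focussing described in the text.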
This also means that the applied strain at which yielding is expected to occur is no longer independent of the size of the liquid inclusion, as would be predicted from Eshelby's results, but depends on the parameter R/L. These results hint at the potential role of surface tension for fracture mechanics in soft materials where a critical value is the crack-tip stress. The capillary-induced stress focussing seen here shows how surface tension could potentially significantly alter this value [46]. C. Dipole signature of inclusions At finite concentrations, inclusions interact at a distance through their far-field stresses. This can be important for determining mechanical properties of dilute composites (e.g. [42,47]). The far-field solutions are conveniently expressed by a multipole expansion. Here, inclusions appear as force dipoles in the far-field. From Equations (4), we find the leading order terms in the inclusion-induced displacements (u r − u ∞ r , u θ − u ∞ θ ) are proportional to 1/r 2 . This corresponds to a force dipole in an elastic body P ij = Pẑ iẑj + P e δ ij ,(10) withẑ being the unit vector in the z-direction. The displacement fields due to the dipoles are [48] u r = (1 + ν) [(1 − 2ν) (P + 3P e ) + P 2 (cos θ)(5 − 4ν)P ] 12πE(1 − ν)r 2 ,(11) and u θ = (1 + ν)(1 − 2ν) 12πEr 2 (1 − ν) dP 2 (cos θ) dθ P.(12) Thus, from comparison with Equation (4), P = 24CπE(1 − ν)/(1 + ν),(13) and P e = 4πE(1 − ν) G − 2C(1 − 2ν) (1 + ν)(1 − 2ν) .(14) The first dipole, P , is a force dipole of two point forces on the z-axis which also act along the z-axis -i.e. parallel to the applied far-field stress. The second term P e is an isotropic centre of expansion [48]. When ν = 1/2, the displacement field due to P e vanishes, and P = 8πCE. Intriguingly, the dipole strength, P , can be positive or negative. Figure 3(c) shows the normalised dipole strength P/σ ∞ R 3 of a liquid inclusion in incompressible solid with a uniaxial applied stress. 
For large inclusions (R > 1.5L), P > 0 and the dipole is a pair of outward-pointing point forces. This increases solid displacements, consistent with a weak point in the solid. For small inclusions (R < 1.5L), P < 0 and so the dipole opposes the applied far-field stress, acting like a stiff point in the solid. The sign switch is clearly seen in the displacement fields of Figure 2. At R = 1.5L, the inclusion has no effect on the far-field elasticity field and is effectively invisible (e.g., see [49]).

III. SOFT COMPOSITES

We have shown that the surface tension of a small liquid droplet in a soft linear-elastic solid resists deformation imposed by far-field stretch. Therefore, we expect that the dispersion of small liquid droplets within a solid can increase its apparent macroscopic stiffness. We calculate the effective Young's modulus E_c of a composite containing a dilute quantity of monodisperse droplets by following Eshelby's original approach [1,13]. First, we calculate the excess energy W due to the presence of a single inclusion when a solid is uniaxially stretched. Then, we consider uniaxial stretching of a dilute composite of non-interacting inclusions. If the applied stress is σ_zz = σ^∞, the strain energy density of the composite is

    ℰ = (σ^∞)²/(2E) + WΦ/(4πR³/3) = (σ^∞)²/(2E_c),    (15)

where Φ is the volume fraction of inclusions. The second equality comes from the relationship between the strain energy density and the effective modulus of the material, allowing calculation of E_c from W. The excess energy due to the presence of a single elastic inclusion in a uniaxially-stressed solid is

    W = (1/2)∫_{V_i}(σ_ij ε_ij − σ^∞_ij ε^∞_ij) dV + (1/2)∫_{V_m}(σ_ij ε_ij − σ^∞_ij ε^∞_ij) dV + Υ∆S.    (16)

Here we assume that the inclusion is an elastic solid for generality; the droplet is the limiting case of zero shear modulus.
The volumes of the elastic matrix outside of the inclusion and of the inclusion are V_m and V_i, respectively, the far-field stresses and strains are σ^∞_ij and ε^∞_ij, respectively, and the change in surface area of the droplet upon stretching is ∆S. Using the divergence theorem, the stress boundary condition (3), and the fact that in the far field σ^∞_ij = σ_ij,

    W = (1/2)∫_{V_m}(σ^∞_ij ε_ij − σ_ij ε^∞_ij) dV + (1/2)∫_{S⁺}(n_i σ^∞_ij u_j − n_i σ_ij u^∞_j) dS − (Υ/2)∫_S K u_i n_i dS + Υ∆S.    (17)

Integration on the matrix side of the droplet surface S is denoted by S⁺. From Equation (2), the first term is zero, so W depends only upon displacements and stresses at the droplet surface. Using our earlier results (e.g. Equations (4)), along with second-order (in the displacement) versions of the expressions for n, K, dS and ∆S shown in the Appendix, we obtain W for the case of a uniaxial far-field stress σ^∞:

    W = 2πR³(σ^∞)²(1 − ν)[(R/L)(1 + 13ν) − (9 − 2ν + 5ν² + 16ν³)] / {E(1 + ν)[(R/L)(7 − 5ν) + (17 − 2ν − 19ν²)]}.    (18)

Finally, from Equation (15),

    E_c/E = {1 + 3(1 − ν)[(R/L)(1 + 13ν) − (9 − 2ν + 5ν² + 16ν³)] / ((1 + ν)[(R/L)(7 − 5ν) + (17 − 2ν − 19ν²)]) Φ}⁻¹.    (19)

For an incompressible solid, ν = 1/2 and we have

    E_c/E = [1 + (5/2)(L/R)] / [(5/2)(L/R)(1 − Φ) + (1 + (5/3)Φ)].    (20)

Figure 4 plots the results of Equation (20) and shows the dramatic influence of capillarity on soft composite stiffness. When surface tension is negligible (R ≫ L), the composite becomes more compliant as the density of droplets increases, in exact agreement with Eshelby's prediction of E_c/E = (1 + 5Φ/3)⁻¹ (dotted curve), and in qualitative agreement with other classical composite laws (e.g. [2,3]). However, Eshelby's predictions break down when R ≲ 100L. In fact, when R < 1.5L, increasing the density of droplets causes the solid to stiffen, consistent with the dipole sign-switching seen earlier. In the surface-tension-dominated limit, R ≪ L, the droplets stay spherical, and we find the maximum achievable composite stiffness E_c = E/(1 − Φ) (dash-dotted curve). Note that the droplets do not behave like rigid particles in this limit, for which E_c = E/(1 − 5Φ/2) [1] (dashed curve). Although the droplets remain spherical due to capillarity, there are non-zero tangential displacements, unlike the case of rigid particles.

These results agree with experiments. Recently, we made soft composites of glycerol droplets embedded in soft silicone solids. In quantitative agreement with the theory, we saw stiffening of solids by droplets in compliant solids, and softening in stiffer solids [34]. In the dilute limit (Φ → 0), Equation (20) matches recent theoretical predictions (derived using the dipole approximation for inclusions in incompressible solids) that describe experimental measurements of the shear moduli of emulsions containing monodisperse bubbles [35,42].

IV. CONCLUSIONS

We have modified Eshelby's inclusion theory to include surface tension for liquid inclusions in a linear-elastic solid, giving both the microscopic behaviour and the macroscopic effects of inclusions in composites. We have shown that surface tension stiffens small inclusions and focusses shear stresses at the inclusion tips. Thus composites with small, capillary-dominated inclusions will be stiffer but may be weaker. This stress concentration illustrates the potentially strong role of surface tension in the failure of soft solids, highlighting the relevance of this work to emerging fields like fracture mechanics and plasticity in soft materials (e.g. [46,50,51]).

Inclusions with surface tension can be viewed, at leading order, as elastic dipoles in a solid. The sign of the dipole captures the stiffening behaviour due to capillarity. Treating inclusions as dipoles also offers a simplified picture of the interactions between features in elastic bodies, and can streamline calculations of bulk composite properties via standard theories.
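A short sketch (ours, not from the paper) of Equation (20) makes the limits explicit: with no surface tension the Eshelby softening (1 + 5Φ/3)⁻¹ is recovered, the capillary-dominated limit gives E/(1 − Φ), and the composite modulus is unchanged (E_c = E) exactly at R = 1.5L:

```python
def Ec_over_E(L_over_R, phi):
    # Eq. (20): effective stiffness of an incompressible solid
    # containing a volume fraction phi of liquid droplets
    return (1 + 2.5 * L_over_R) / (2.5 * L_over_R * (1 - phi) + 1 + (5 / 3) * phi)

phi = 0.1
soft = Ec_over_E(0.0, phi)         # Eshelby limit: 1/(1 + 5*phi/3) < 1
stiff = Ec_over_E(1e9, phi)        # capillary limit: -> 1/(1 - phi) > 1
neutral = Ec_over_E(1 / 1.5, phi)  # R = 1.5 L: droplets neither stiffen nor soften
```

Rearranging Eq. (20) shows why the crossover is exact: E_c/E − 1 is proportional to −Φ[(5/3)(R/L) − 5/2], which vanishes at R/L = 3/2 for any Φ.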
The analytic theory presented for bulk composite stiffness, which incorporates the entire elastic field around inclusions, validates the dipole approach by recovering previous results for incompressible materials in the limit of dilute composites [35,42].

Our work is applicable to a wide variety of soft-material problems. Most obviously, it can be directly applied to composites comprising soft materials such as gels and elastomers. As a specific example, we have shown how surface tension effects allow elastic cloaking, with inclusions of size R = 1.5L being mechanically invisible. Our work also has interesting uses in mechanobiology, as biological tissue is predominantly soft. For example, a recent study embedded droplets in biological tissue and observed their deformations to extract local anisotropic stresses [52]. The coupling between microscopic and macroscopic stress also plays an important role in the tensional homeostasis of soft tissues [53,54]. Although we have only considered liquid inclusions here, our analysis can be repeated for more general soft composites with elastic inclusions in place of liquid droplets. In that case, we expect that capillary effects similar to those presented here will be seen whenever R ≲ 100Υ/E_i, 100Υ/E_m, with E_i and E_m being the inclusion and matrix stiffnesses, respectively.

V. ACKNOWLEDGEMENTS

We thank Peter Howell and Alain Goriely for helpful conversations. We are grateful for funding from the National Science Foundation (CBET-1236086) to ERD, the Yale University Bateman Interdepartmental Postdoctoral Fellowship to RWS, and the John Simon Guggenheim Foundation, the Swedish Research Council, and a Royal Society Wolfson Research Merit Award to JSW.

VI. APPENDIX

To calculate the effect of surface tension on the shape of a droplet embedded in a soft solid, we need expressions for the normal to the droplet surface, its curvature, and its surface area in terms of the surface displacements.
We consider an initially spherical droplet with the position of its surface given by $\mathbf{x}=(R,0,0)$ in spherical coordinates $(r,\theta,\phi)$, and apply a uniaxial stretch so that $\mathbf{x}\to\mathbf{x}'=(R+u_r,\,u_\theta,\,0)$. By axisymmetry, $u_r$ and $u_\theta$ are independent of the angle $\phi$. We calculate the normal to the droplet surface, $\mathbf{n}$, by taking the cross product of the surface tangent vectors $\partial\mathbf{x}'/\partial\theta$ and $\partial\mathbf{x}'/\partial\phi$ [55],

$$\mathbf{n}=\frac{\frac{\partial\mathbf{x}'}{\partial\theta}\wedge\frac{\partial\mathbf{x}'}{\partial\phi}}{\left|\frac{\partial\mathbf{x}'}{\partial\theta}\wedge\frac{\partial\mathbf{x}'}{\partial\phi}\right|}, \tag{21}$$

with

$$\frac{\partial\mathbf{x}'}{\partial\theta}=\left(\frac{\partial u_r}{\partial\theta}-u_\theta,\; R+u_r+\frac{\partial u_\theta}{\partial\theta},\; 0\right) \tag{22}$$

and

$$\frac{\partial\mathbf{x}'}{\partial\phi}=\left(0,\; 0,\; (R+u_r)\sin\theta+u_\theta\cos\theta\right). \tag{23}$$

At leading order in $\mathbf{u}$ we find

$$\mathbf{n}=\left(1,\; \frac{u_\theta}{R}-\frac{1}{R}\frac{\partial u_r}{\partial\theta},\; 0\right). \tag{24}$$

The droplet surface curvature, $K$, can be calculated from differential geometry using the first and second fundamental forms [55]:

$$K=\frac{e_f G_f-2f_f F_f+g_f E_f}{E_f G_f-F_f^2}, \tag{25}$$

where

$$E_f=\frac{\partial\mathbf{x}'}{\partial\theta}\cdot\frac{\partial\mathbf{x}'}{\partial\theta},\quad F_f=\frac{\partial\mathbf{x}'}{\partial\theta}\cdot\frac{\partial\mathbf{x}'}{\partial\phi},\quad G_f=\frac{\partial\mathbf{x}'}{\partial\phi}\cdot\frac{\partial\mathbf{x}'}{\partial\phi}, \tag{26}$$

and

$$e_f=\mathbf{n}\cdot\frac{\partial^2\mathbf{x}'}{\partial\theta^2},\quad f_f=\mathbf{n}\cdot\frac{\partial^2\mathbf{x}'}{\partial\theta\,\partial\phi},\quad g_f=\mathbf{n}\cdot\frac{\partial^2\mathbf{x}'}{\partial\phi^2}. \tag{27}$$

Thus, at leading order in $\mathbf{u}$,

$$K=\frac{2}{R}-\frac{1}{R^2}\left(2u_r+\cot\theta\,\frac{\partial u_r}{\partial\theta}+\frac{\partial^2 u_r}{\partial\theta^2}\right). \tag{28}$$

Using the results above, we also obtain the area element $dS=\sqrt{E_f G_f-F_f^2}\,d\theta\,d\phi$ [55]. At leading order in $\mathbf{u}$,

$$dS=\left[R^2\sin\theta+R\left(u_\theta\cos\theta+2u_r\sin\theta+\frac{\partial u_\theta}{\partial\theta}\sin\theta\right)\right]d\theta\,d\phi, \tag{29}$$

and after integration we obtain the droplet surface area

$$S=4\pi R^2+\int_0^{2\pi}\!\!\int_0^\pi R\left(u_\theta\cos\theta+2u_r\sin\theta+\frac{\partial u_\theta}{\partial\theta}\sin\theta\right)d\theta\,d\phi=4\pi R^2+\int_0^{2\pi}\!\!\int_0^\pi 2Ru_r\sin\theta\,d\theta\,d\phi. \tag{30}$$

FIG. 2. Examples of droplets embedded in an incompressible solid under uniaxial strain with $\epsilon^\infty_{zz}=0.3$. Top: excess radial displacements $(u_r-u_r^\infty)/R$ caused by the presence of the inclusion. The elastic dipole around the inclusion changes sign as $R/L$ increases. Middle: excess tangential displacements $(u_\theta-u_\theta^\infty)/R$; $\theta$ is the polar angle from the $z$-axis. Bottom: shear-stress concentration factor $\tau_{\max}/\tau^\infty_{\max}$. When surface tension dominates, $\tau_{\max}$ is significantly increased at the inclusion tip. The black arrows denote the stretch of the host material.

FIG. 3.
Liquid-inclusion characteristics as a function of size $R/L$ for inclusions in an incompressible solid with an applied uniaxial far-field stress, as shown in Figure 1. a) Droplet strain $\epsilon_d=(\ell-2R)/R$ divided by the far-field strain $\epsilon^\infty$ depends only on $R/L$. When $R/L\ll 1$, surface tension dominates and there is no droplet deformation. When $R/L\gg 1$, surface tension is negligible and the shape prediction is that of classical Eshelby theory, given by the dash-dotted line. The dashed line shows the material stretch, $(\ell-2R)/R=\epsilon^\infty$. b) The shear-stress concentration factor at the inclusion tip ($r=R$, $\theta=0$). This corresponds to the highest shear stress in the solid around the inclusion (see Figure 2, bottom row). Dash-dotted lines show the surface-tension-dominated and Eshelby limits, $\tau_{\max}/\tau^\infty_{\max}=3$ and $5/3$ respectively. c) The far-field dipole caused around the inclusion. Note that this dipole changes sign at $R=1.5L$, indicating the transition between inclusion stiffening and inclusion softening of the composite.

FIG. 4. The stiffness of soft composites. Young's modulus $E_c$ of composites of droplets embedded in linear-elastic solids as a function of liquid content. The dotted curve shows Eshelby's prediction without surface tension. The dash-dotted curve shows the surface-tension-dominated limit, $R/L\ll 1$. The dashed curve shows Eshelby's prediction for rigid spheres embedded in an elastic solid.

References

[1] J. D. Eshelby, Proc. Roy. Soc. Lond. A 241, 376 (1957).
[2] Z. Hashin and S. Shtrikman, J. Mech. Phys. Solids 11, 127 (1963).
[3] T. Mori and K. Tanaka, Acta Metall. 21, 571 (1973).
[4] R. Hill, J. Mech. Phys. Solids 11, 357 (1963).
[5] B. Budiansky, J. Mech. Phys. Solids 13, 223 (1965).
[6] J. R. Rice, J. Appl. Mech. 35, 379 (1968).
[7] B. Budiansky and R. J. O'Connell, Int. J. Solids Struct. 12, 81 (1976).
[8] T. Mura, Micromechanics of Defects in Solids, Vol. 3 (Springer, 1987).
[9] J. Hutchinson, Proc. Roy. Soc. London A 319, 247 (1970).
[10] M. Berveiller and A. Zaoui, J. Mech. Phys. Solids 26, 325 (1978).
[11] H. Kanamori and D. L. Anderson, Bull. Seismol. Soc. Am. 65, 1073 (1975).
[12] P. Sharma and S. Ganti, J. Appl. Mech. 71, 663 (2004).
[13] F. Yang, J. Appl. Phys. 95, 3516 (2004).
[14] H. L. Duan, J. Wang, Z. P. Huang, and B. L. Karihaloo, Proc. Roy. Soc. A 461, 3335 (2005).
[15] C. Y. Hui, A. Jagota, Y. Y. Lin, and E. J. Kramer, Langmuir 18, 1394 (2002).
[16] A. Jagota, D. Paretkar, and A. Ghatak, Phys. Rev. E 85, 051602 (2012).
[17] S. Mora, C. Maurini, T. Phou, J.-M. Fromental, B. Audoly, and Y. Pomeau, Phys. Rev. Lett. 111, 114301 (2013).
[18] D. Paretkar, X. Xu, C.-Y. Hui, and A. Jagota, Soft Matter 10, 4084 (2014).
[19] S. Mora, T. Phou, J.-M. Fromental, L. M. Pismen, and Y. Pomeau, Phys. Rev. Lett. 105, 214301 (2010).
[20] S. Mora, M. Abkarian, H. Tabuteau, and Y. Pomeau, Soft Matter 7, 10612 (2011).
[21] A. Chakrabarti and M. K. Chaudhury, Langmuir 29, 6926 (2013).
[22] D. L. Henann and K. Bertoldi, Soft Matter 10, 709 (2014).
[23] R. W. Style and E. R. Dufresne, Soft Matter 8, 7177 (2012).
[24] R. W. Style, Y. Che, J. S. Wettlaufer, L. A. Wilen, and E. R. Dufresne, Phys. Rev. Lett. 110, 066103 (2013).
[25] R. W. Style, Y. Che, S. J. Park, B. M. Weon, J. H. Je, C. Hyland, G. K. German, M. P. Power, L. A. Wilen, J. S. Wettlaufer, and E. R. Dufresne, Proc. Nat. Acad. Sci. 110, 12541 (2013).
[26] N. Nadermann, C.-Y. Hui, and A. Jagota, Proc. Nat. Acad. Sci. 110, 10541 (2013).
[27] J. B. Bostwick, M. Shearer, and K. E. Daniels, Soft Matter (2014).
[28] S. Karpitschka, S. Das, B. Andreotti, and J. Snoeijer, "Dynamic contact angle of a soft solid" (2014), arXiv:1406.5547 [physics.flu-dyn].
[29] K. Johnson, K. Kendall, and A. Roberts, Proc. Roy. Soc. A 324, 301 (1971).
[30] R. W. Style, C. Hyland, R. Boltyanskiy, J. S. Wettlaufer, and E. R. Dufresne, Nature Commun. 4, 2728 (2013).
[31] T. Salez, M. Benzaquen, and E. Raphael, Soft Matter 9, 10699 (2013).
[32] X. Xu, A. Jagota, and C.-Y. Hui, Soft Matter 10, 4625 (2014).
[33] Z. Cao, M. J. Stevens, and A. V. Dobrynin, Macromolecules 47, 3203 (2014).
[34] R. W. Style, R. Boltyanskiy, B. Allen, K. E. Jensen, H. P. Foote, J. S. Wettlaufer, and E. R. Dufresne, arXiv preprint arXiv:1407.6424 (2014).
[35] L. Ducloue, O. Pitois, J. Goyon, X. Chateau, and G. Ovarlez, Soft Matter 10, 5093 (2014).
[36] H. L. Duan, X. Yi, Z. P. Huang, and J. Wang, Mech. Mater. 39, 81 (2007).
[37] H. L. Duan, X. Yi, Z. P. Huang, and J. Wang, Mech. Mater. 39, 94 (2007).
[38] S. Brisard, L. Dormieux, and D. Kondo, Comp. Mater. Sci. 50, 403 (2010).
[39] S. Brisard, L. Dormieux, and D. Kondo, Comp. Mater. Sci. 48, 589 (2010).
[40] C.-Y. Hui and A. Jagota, Langmuir 29, 11310 (2013).
[41] H. Duan, J. Wang, Z. Huang, and B. Karihaloo, J. Mech. Phys. Solids 53, 1574 (2005).
[42] J. F. Palierne, Rheologica Acta 29, 204 (1990).
[43] J. Wang, H. Duan, Z. Huang, and B. Karihaloo, Proc. Roy. Soc. A 462, 1355 (2006).
[44] R. W. Style, R. Boltyanskiy, G. K. German, C. Hyland, C. W. MacMinn, A. F. Mertz, L. A. Wilen, Y. Xu, and E. R. Dufresne, Soft Matter 10, 4047 (2014).
[45] L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics, Vol. 7: Theory of Elasticity, 3rd ed. (Pergamon Press, London, 1986).
[46] T. Liu, R. Long, and C.-Y. Hui, Soft Matter, accepted (2014).
[47] U. S. Schwarz and S. A. Safran, Rev. Mod. Phys. 85, 1327 (2013).
[48] A. Lurie and A. Belyaev, Theory of Elasticity (Springer, Berlin, 2005), p. 246.
[49] G. W. Milton, M. Briane, and J. R. Willis, New J. Phys. 8, 248 (2006).
[50] S. Kundu and A. J. Crosby, Soft Matter 5, 3963 (2009).
[51] J. Cui, C. H. Lee, A. Delbos, J. J. McManus, and A. J. Crosby, Soft Matter 7, 7827 (2011).
[52] O. Campàs, T. Mammoto, S. Hasso, R. A. Sperling, D. O'Connell, A. G. Bischof, R. Maas, D. A. Weitz, L. Mahadevan, and D. E. Ingber, Nature Methods 11, 183 (2014).
[53] R. Brown, R. Prajapati, D. McGrouther, I. Yannas, and M. Eastwood, J. Cell. Physiol. 175, 323 (1998).
[54] A. Zemel, I. Bischofs, and S. Safran, Phys. Rev. Lett. 97, 128103 (2006).
[55] E. Abbena, S. Salamon, and A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica (Taylor & Francis, 2006).
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs

Dasol Hwang, Jinyoung Park, Sunyoung Kwon, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
Clova AI Research, NAVER Corp.; Korea University

arXiv:2007.08294 (https://arxiv.org/pdf/2007.08294v1.pdf)

Abstract: Graph neural networks have shown superior performance in a wide range of applications by providing a powerful representation of graph-structured data. Recent works show that the representation can be further improved by auxiliary tasks. However, auxiliary tasks for heterogeneous graphs, which contain rich semantic information with various types of nodes and edges, have been less explored in the literature. In this paper, to learn graph neural networks on heterogeneous graphs, we propose a novel self-supervised auxiliary learning method using meta-paths, which are composite relations of multiple edge types. Our proposed method learns to learn a primary task by predicting meta-paths as auxiliary tasks; this can be viewed as a type of meta-learning. The proposed method can identify an effective combination of auxiliary tasks and automatically balance them to improve the primary task. Our method can be applied to any graph neural network in a plug-in manner without manual labeling or additional data. Experiments demonstrate that the proposed method consistently improves the performance of link prediction and node classification on heterogeneous graphs.

Preprint. Under review.
1 Introduction

Graph neural networks [1-3] have been proven effective for learning representations for various tasks such as node classification [4], link prediction [5, 6], and graph classification [7, 8].
The powerful representations yield state-of-the-art performance in a variety of applications including social network analysis [9, 4, 10], citation network analysis [11, 12], visual understanding [13-15], recommender systems [16-18], physics [19, 20], and drug discovery [21, 22]. Despite the wide operating range of graph neural networks, employing auxiliary (pretext) tasks to further improve graph representation learning has been less explored. Pre-training with an auxiliary task is a common technique for deep neural networks. Indeed, it is the de facto standard step in natural language processing and computer vision to learn powerful backbone networks such as BERT [23] and ResNet [24] by leveraging large datasets such as BooksCorpus [25], English Wikipedia, and ImageNet [26]. Models trained on an auxiliary task are often beneficial for the primary (target) task of interest. Despite the success of pre-training, few approaches have been generalized to graph-structured data due to fundamental challenges. First, graph structure (e.g., the number of nodes/edges, and diameter) and its meaning can differ significantly between domains, so a model trained on an auxiliary task can harm generalization on the primary task, i.e., negative transfer [27]. Also, many graph neural networks are transductive approaches, which often makes transfer learning between datasets inherently infeasible. Hence, pre-training on the target dataset has been proposed using auxiliary tasks: graph kernels [28], graph reconstruction [29], and attribute masking [21]. These approaches assume that the auxiliary tasks for pre-training are carefully selected, with substantial domain knowledge and expertise in graph characteristics, to assist the primary task.
Since most graph neural networks operate on homogeneous graphs, which have a single type of nodes and edges, the previous pre-training/auxiliary tasks are not specifically designed for heterogeneous graphs, which have multiple types of nodes and edges. Heterogeneous graphs commonly occur in real-world applications; for instance, a music dataset has multiple types of nodes (e.g., user, song, artist) and multiple types of relations (e.g., user-artist, song-film, song-instrument). In this paper, we propose a framework to train graph neural networks with automatically selected self-supervised auxiliary tasks that assist the target task without additional data or labels. Our approach first generates meta-paths from heterogeneous graphs without manual labeling, then trains a model with meta-path prediction to assist the primary task, such as link prediction or node classification. This can be formulated as a meta-learning problem. Furthermore, our method can be adopted by existing GNNs in a plug-in manner, enhancing model performance. Our contribution is threefold: (i) we propose a self-supervised learning method on heterogeneous graphs via meta-path prediction without additional data; (ii) our framework automatically selects meta-paths (auxiliary tasks) to assist the primary task via meta-learning; (iii) we develop a Hint Network that helps the learner network benefit from challenging auxiliary tasks. To the best of our knowledge, this is the first auxiliary task with meta-paths specifically designed for leveraging heterogeneous graph structure. Our experiments show that meta-path prediction improves representational power, and that the gain can be further increased by explicitly optimizing the auxiliary tasks for the primary task via meta-learning and the Hint Network, built on various state-of-the-art GNNs.

2 Related Work

Graph Neural Networks have provided promising results for various tasks [16-18, 30-32]. Bruna et al.
[33] proposed a neural network that performs convolution in the graph domain using the Fourier basis from spectral graph theory. In contrast, non-spectral (spatial) approaches have been developed [12, 11, 4, 34]. Inspired by self-supervised learning [35-38] and pre-training [23, 39] in computer vision and natural language processing, pre-training for GNNs has recently been proposed [21, 28]. Recent works [34, 40] show promising results indicating that transfer learning can be successful on graphs, but they require additional manually labeled data. To avoid the need for manual labeling, self-supervised learning on the target domain, such as graph kernels [28], graph reconstruction [29], and attribute masking [21], has been proposed. These auxiliary tasks must be manually chosen with domain knowledge, and they are not optimized for the primary task.

Auxiliary Learning is a learning strategy that employs auxiliary tasks to assist a primary task. It is similar to multi-task learning, but auxiliary learning cares only about the performance of the primary task. A number of auxiliary learning methods have been proposed for a wide range of tasks [41-43]. AC-GAN [44] proposed an auxiliary classifier for generative models. Recently, Meta-Auxiliary Learning [45] proposed an elegant solution that generates new auxiliary tasks by collapsing existing classes. However, it is not applicable to some tasks, such as link prediction, which has only one positive class. Our approach instead generates meta-paths on heterogeneous graphs to make new labels and trains models to predict meta-paths as auxiliary tasks.

Meta-learning aims at learning to learn models efficiently and effectively, and generalizes the learning strategy to new tasks.
Meta-learning includes black-box methods that approximate gradients without any information about the model [46], optimization-based methods that learn an optimal initialization for adapting to new tasks [47-50], methods that learn loss functions [48, 51], and metric-learning or non-parametric methods for few-shot learning [52-54]. In contrast to classical learning algorithms that generalize across samples, meta-learning generalizes across tasks. In this paper, we use meta-learning to learn a concept across tasks and transfer the knowledge from auxiliary tasks to the primary task.

3 Method

The goal of our framework is to learn with multiple auxiliary tasks to improve the performance of the primary task. In this work, we demonstrate our framework with meta-path predictions as auxiliary tasks, but the framework could be extended to include other auxiliary tasks. Meta-paths capture diverse and meaningful relations between nodes on heterogeneous graphs [55]. However, learning with auxiliary tasks poses multiple challenges: identifying useful auxiliary tasks, balancing the auxiliary tasks with the primary task, and converting challenging auxiliary tasks into solvable (and relevant) tasks. To address these challenges, we propose SELf-supervised Auxiliary Learning (SELAR). Our framework consists of two main components: 1) learning weight functions to softly select auxiliary tasks and balance them with the primary task via meta-learning, and 2) learning Hint Networks to convert challenging auxiliary tasks into tasks that are more relevant and solvable for the primary-task learner.

3.1 Meta-path Prediction as a Self-supervised Task

Most existing graph neural networks have been studied with a focus on homogeneous graphs, which have a single type of nodes and edges. However, in real-world applications, heterogeneous graphs [56], which have multiple types of nodes and edges, commonly occur.
Learning models on heterogeneous graphs requires different considerations to effectively represent their node and edge heterogeneity.

Heterogeneous graph [57]. Let $G=(V,E)$ be a graph with a set of nodes $V$ and edges $E$. A heterogeneous graph is a graph equipped with a node-type mapping function $f_v: V\to T_v$ and an edge-type mapping function $f_e: E\to T_e$, where $T_v$ is a set of node types and $T_e$ is a set of edge types. Each node $v_i\in V$ (and each edge $e_{ij}\in E$, resp.) has one node type, i.e., $f_v(v_i)\in T_v$ (and one edge type $f_e(e_{ij})\in T_e$, resp.). In this paper, we consider heterogeneous graphs with $|T_e|>1$ or $|T_v|>1$. When $|T_e|=1$ and $|T_v|=1$, the graph becomes homogeneous.

Meta-path [55, 58]. A meta-path is a path on a heterogeneous graph $G$, i.e., a sequence of nodes connected by heterogeneous edges, $v_1\xrightarrow{t_1}v_2\xrightarrow{t_2}\cdots\xrightarrow{t_\ell}v_{\ell+1}$, where $t_l\in T_e$ denotes the $l$-th edge type of the meta-path. The meta-path can be viewed as a composite relation $R=t_1\circ t_2\circ\cdots\circ t_\ell$ between nodes $v_1$ and $v_{\ell+1}$, where $R_1\circ R_2$ denotes the composition of relations $R_1$ and $R_2$. The definition of a meta-path generalizes multi-hop connections and is known to be useful for analyzing heterogeneous graphs. For instance, in the Book-Crossing dataset, 'user-item-written.series-item-user' is a meta-path that connects users who like the same book series.

We introduce meta-path prediction as a self-supervised auxiliary task to improve the representational power of graph neural networks. To our knowledge, meta-path prediction has not been studied in the context of self-supervised learning for graph neural networks in the literature. Meta-path prediction is similar to link prediction, but meta-paths allow heterogeneous composite relations, and prediction can be achieved in the same manner as link prediction. If two nodes $u$ and $v$ are connected by a meta-path $p$ with heterogeneous edges $(t_1, t_2, \dots, t_\ell)$, then $y^p_{u,v}=1$; otherwise $y^p_{u,v}=0$. The labels can be generated from a heterogeneous graph without any manual labeling: they are obtained by $A_p=A_{t_\ell}\cdots A_{t_2}A_{t_1}$, where $A_t$ is the adjacency matrix of edge type $t$. The binarized value at $(u,v)$ in $A_p$ indicates whether $u$ and $v$ are connected by the meta-path $p$.

In this paper, we use meta-path prediction as a self-supervised auxiliary task. Let $X\in\mathbb{R}^{|V|\times d}$ and $Z\in\mathbb{R}^{|V|\times d'}$ be the input features and their hidden representations learnt by a GNN $f$, i.e., $Z=f(X;w,A)$, where $w$ is the parameter of $f$ and $A\in\mathbb{R}^{|V|\times|V|}$ is the adjacency matrix. Link prediction and meta-path prediction are then obtained by a simple operation:

$$\hat{y}^t_{u,v}=\sigma\!\left(\Phi_t(z_u)^{\top}\Phi_t(z_v)\right), \tag{1}$$

where $\Phi_t$ is the task-specific network for task $t\in T$, and $z_u$ and $z_v$ are the embeddings of nodes $u$ and $v$, e.g., $\Phi_0$ for link prediction (and $\Phi_1$ for the first type of meta-path prediction, resp.). The architecture is shown in Fig. 1. To optimize the model, cross entropy is used, as for link prediction. The graph neural network $f$ is shared by the link prediction and meta-path predictions. As in any auxiliary learning method, the meta-paths (auxiliary tasks) should be carefully chosen and properly weighted so that meta-path prediction does not compete with link prediction, especially when the capacity of the GNN is limited. To address these issues, we propose a framework that automatically selects meta-paths and balances them with link prediction via meta-learning.

3.2 Self-Supervised Auxiliary Learning

Our framework, SELAR, learns to learn a primary task with multiple auxiliary tasks that assist it; this is formalized below.

Figure 1: The SELAR framework for self-supervised auxiliary learning. Our framework learns how to balance (or softly select) auxiliary tasks to improve the primary task via meta-learning.
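Before formalizing the learning problem, the label construction of Section 3.1 can be made concrete. The sketch below builds meta-path labels by multiplying and binarizing per-edge-type adjacency matrices, and scores a node pair as in Eq. (1). The toy graph, the edge-type names, the row convention $A[u,v]=1$ for an edge $u\to v$, and the random stand-in embeddings are all our illustrative assumptions, not the paper's datasets or architecture:

```python
import numpy as np

# Toy heterogeneous graph with 4 nodes and two edge types (hypothetical):
# t1 = "user-likes-item", t2 = "item-written-by-author"; A[u, v] = 1 iff u -> v.
A_t1 = np.array([[0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 0, 0]])
A_t2 = np.array([[0, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])

# Meta-path p = (t1, t2): chain the per-type adjacencies and binarize.
# Y_p[u, v] = 1 iff u reaches v via a t1 edge followed by a t2 edge,
# so these auxiliary labels require no manual annotation.
Y_p = ((A_t1 @ A_t2) > 0).astype(int)   # here nodes 0 and 3 reach node 2

# Eq. (1)-style scoring with a task-specific head Phi_p applied to GNN
# embeddings Z (random stand-ins here):
rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 8))             # node embeddings
W_p = rng.normal(size=(8, 8))           # a linear task-specific network Phi_p

def score(u, v):
    zu, zv = Z[u] @ W_p, Z[v] @ W_p
    return 1.0 / (1.0 + np.exp(-zu @ zv))  # sigmoid(Phi(z_u)^T Phi(z_v))
```

With labels `Y_p` and scores from `score`, each meta-path becomes an ordinary binary-classification auxiliary task trained with cross entropy.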
In this paper, the primary task is link prediction (or node classification) and the auxiliary tasks are meta-path predictions that capture rich information of a heterogeneous graph. This can be formally written as

$$\min_{w,\Theta}\ \mathbb{E}_{(x,y)\sim\mathcal{D}^{pr}}\left[\mathcal{L}^{pr}(w^*(\Theta))\right]\quad\text{s.t.}\quad w^*(\Theta)=\operatorname*{argmin}_{w}\ \mathbb{E}_{(x,y)\sim\mathcal{D}^{pr+au}}\left[\mathcal{L}^{pr+au}(w;\Theta)\right], \tag{2}$$

where $\mathcal{L}^{pr}(\cdot)$ is the primary-task loss that evaluates the trained model $f(x;w^*(\Theta))$ on the meta-data $\mathcal{D}^{pr}$, and $\mathcal{L}^{pr+au}$ is the loss used to train a model on the training data $\mathcal{D}^{pr+au}$ with the primary and auxiliary tasks. To avoid cluttered notation, $f$, $x$, and $y$ are omitted. Each task $\mathcal{T}_t$ has $N_t$ samples, and $\mathcal{T}_0$ and $\{\mathcal{T}_t\}_{t=1}^{T}$ denote the primary and auxiliary tasks, respectively. The proposed formulation in Eq. (2) learns how to assist the primary task by optimizing $\Theta$ via meta-learning. The nested optimization problem given $\Theta$ is regular training with properly adjusted loss functions to balance the primary and auxiliary tasks. The formulation can be written more specifically as

$$\min_{w,\Theta}\ \sum_{i=1}^{M_0}\frac{1}{M_0}\,\ell_0\!\left(y_i^{(0,meta)},f\big(x_i^{(0,meta)};w^*(\Theta)\big)\right) \tag{3}$$

$$\text{s.t.}\quad w^*(\Theta)=\operatorname*{argmin}_{w}\ \sum_{t=0}^{T}\sum_{i=1}^{N_t}\frac{1}{N_t}\,\mathcal{V}\big(\xi_i^{(t,train)};\Theta\big)\,\ell_t\!\left(y_i^{(t,train)},f_t\big(x_i^{(t,train)};w\big)\right), \tag{4}$$

where $\ell_t$ and $f_t$ denote the loss function and the model for task $t$. We overload $\ell_t$ with its function value, i.e., $\ell_t=\ell_t\big(y_i^{(t,train)},f_t(x_i^{(t,train)};w)\big)$. Here $\xi_i^{(t,train)}$ is the embedding vector of the $i$-th sample of task $t$. In our experiments, $\xi_i^{(t,train)}$ is the concatenation of the loss value, a one-hot representation of the task type, and the sample's label (positive/negative), i.e., $\xi_i^{(t,train)}=\big[\ell_t;\,e_t;\,y_i^{(t,train)}\big]\in\mathbb{R}^{T+2}$. To derive our learning algorithm, we first shorten the objective functions in Eq. (3) and Eq. (4) to $\mathcal{L}^{pr}(w^*(\Theta))$ and $\mathcal{L}^{pr+au}(w;\Theta)$; this is equivalent to Eq. (2) without the expectation. Then our formulation is given as

$$\min_{w,\Theta}\ \mathcal{L}^{pr}(w^*(\Theta))\quad\text{s.t.}$$
$$w^*(\Theta)=\operatorname*{argmin}_{w}\ \mathcal{L}^{pr+au}(w;\Theta). \tag{5}$$

To circumvent the difficulty of this bi-level optimization, as in previous meta-learning works [47, 48], we approximate $w^*(\Theta)$ with the updated parameters $\hat{w}$ obtained from a gradient-descent step:

$$w^*(\Theta)\approx\hat{w}_k(\Theta)=w_k-\alpha\nabla_w\mathcal{L}^{pr+au}(w_k;\Theta), \tag{6}$$

where $\alpha$ is the learning rate for $w$. We do not numerically evaluate $\hat{w}_k(\Theta)$; instead, we plug the computational graph of $\hat{w}_k$ into $\mathcal{L}^{pr}(w^*(\Theta))$ to optimize $\Theta$. Let $\nabla_\Theta\mathcal{L}^{pr}(\hat{w}_k(\Theta_k))$ be the gradient evaluated at $\Theta_k$. Then $\Theta$ is updated as

$$\Theta_{k+1}=\Theta_k-\beta\nabla_\Theta\mathcal{L}^{pr}(\hat{w}_k(\Theta_k)), \tag{7}$$

where $\beta$ is the learning rate for $\Theta$. This update softly selects useful auxiliary tasks (meta-paths) and balances them with the primary task to improve its performance. Without balancing tasks via the weighting function $\mathcal{V}(\cdot;\Theta)$, auxiliary tasks can dominate training and degrade the performance of the primary task. The model parameters $w_k$ are then updated with the optimized $\Theta_{k+1}$ from (7):

$$w_{k+1}=w_k-\alpha\nabla_w\mathcal{L}^{pr+au}(w_k;\Theta_{k+1}). \tag{8}$$

Remarks. The proposed formulation can suffer from meta-overfitting [59, 60], meaning that the parameters $\Theta$, which softly select meta-paths and balance them with the primary task, can overfit to the small meta-dataset. In our experiments, we found that this overfitting can be alleviated by meta-validation sets [59]. To learn a $\Theta$ that generalizes across meta-training sets, we optimize $\Theta$ across $C$ different meta-datasets, as in $C$-fold cross-validation:

$$\Theta_{k+1}=\Theta_k-\beta\,\mathbb{E}_{\mathcal{D}^{pr}\sim CV}\left[\nabla_\Theta\mathcal{L}^{pr}(\hat{w}_k(\Theta_k))\right], \tag{9}$$

where $\mathcal{D}^{pr}\sim CV$ denotes a meta-dataset drawn by cross-validation. We used 3-fold cross-validation, and the gradients of $\Theta$ w.r.t. the different meta-datasets are averaged to update $\Theta_k$; see Algorithm 1. The cross-validation is crucial to alleviate meta-overfitting; more discussion is in Section 4.3.
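The alternating updates (6)-(8) can be illustrated with a deliberately tiny example. In the sketch below everything is scalar, the auxiliary losses are quadratic toys (one aligned with the primary target, one conflicting), the task weights are sigmoids of logits `th`, and all gradients are written out by hand. None of this is the paper's actual weighting-network parameterization; it only mimics the one-step-lookahead bi-level structure:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dl(w, target):                      # gradient of the loss (w - target)^2
    return 2.0 * (w - target)

alpha, beta = 0.1, 0.5                  # inner / outer learning rates
w = 0.0                                 # model parameter
th = [0.0, 0.0]                         # logits of the auxiliary-task weights V
targets = [1.0, -1.0]                   # aux 1 agrees with the primary target
                                        # (1.0); aux 2 conflicts (-1.0)

def train_grad(w, th):                  # gradient of the weighted train loss
    return dl(w, 1.0) + sum(sigmoid(t) * dl(w, tg)
                            for t, tg in zip(th, targets))

for _ in range(200):
    w_hat = w - alpha * train_grad(w, th)        # Eq. (6): one-step lookahead
    for i in range(2):                           # Eq. (7): meta-gradient on th
        s = sigmoid(th[i])
        dwhat_dth = -alpha * s * (1.0 - s) * dl(w, targets[i])
        th[i] -= beta * dl(w_hat, 1.0) * dwhat_dth
    w -= alpha * train_grad(w, th)               # Eq. (8): update w

# The aligned auxiliary task keeps a high weight, the conflicting one is
# suppressed, and w is pulled toward the primary optimum w = 1.
```

Running the loop drives `th[0]` up and `th[1]` down, which is exactly the soft task selection described above: the meta-gradient rewards weights whose lookahead step reduces the primary (meta) loss.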
Algorithm 1 Self-supervised Auxiliary Learning
Input: training data for primary/auxiliary tasks D^pr, D^au; mini-batch sizes N^pr, N^au
Input: max iterations K; number of folds for cross-validation C
Output: network parameters w_K for the primary task
1: for k = 1 to K do
2-6:  sample mini-batches D^pr_m ⊂ D^pr, D^au_m ⊂ D^au and iterate over the folds c = 1, …, C
7:      g_c ← ∇_Θ L^pr(ŵ_k(Θ_k)) with D^pr(meta)_m      ▹ Eq. (7)
8:    end for
9:    Update Θ_{k+1} ← Θ_k − (β/C) Σ_c g_c      ▹ Eq. (9)
10:   w_{k+1} ← w_k − α ∇_w L^{pr+au}(w_k; Θ_{k+1}) with D^pr_m ∪ D^au_m      ▹ Eq. (8)
11: end for

3.3 Hint Networks

Figure 2: HintNet helps the learner network to learn even from challenging and only remotely relevant auxiliary tasks. While our framework selects effective auxiliary tasks, the framework with HintNet additionally learns V_H, via meta-learning, to decide whether to use the hint ŷ_H (orange line) from HintNet; ŷ (blue line) denotes the prediction of the learner network.

Meta-path prediction is generally more challenging than link prediction and node classification, since it requires understanding long-range relations across heterogeneous nodes. Meta-path prediction becomes even more difficult when mini-batch training is inevitable due to the size of datasets or models: within a mini-batch, the nodes and edges important for a meta-path may not be available. Also, a small learner network, e.g., a two-layer GNN with a limited receptive field, inherently cannot capture long-range relations. These challenges can hinder representation learning and damage the generalization of the primary task. We propose a Hint Network (HintNet), which makes challenging tasks more solvable by correcting the learner's answer with more information, at the learner's need. Specifically, in our experiments, the HintNet corrects the answer of the learner with its own answer obtained from the graph augmented with hub nodes; see Fig. 2. The amount of help (correction) by HintNet is optimized to maximize the learner's gain. Let V_H(·) be a weight function that determines the amount of hint, and Θ_H its parameters, which are optimized by meta-learning.
Then, our formulation with HintNet is given as min w,Θ M0 i=1 1 M 0 0 (y (0,meta) i , f (x (0,meta) i ; w * (Θ, Θ H ))(10)s.t. w * (Θ) = argmin w T t=0 Nt i=1 1 N t V(ξ (t,train) i , t ; Θ) t (y (t,train) i ,ŷ (t,train) i (Θ H )),(11) whereŷ (t,train) i (Θ H ) denotes the convex combination of the learner's answer and HintNet's answer, i.e., V H (ξ (t,train) i ; Θ H )f t (x (t,train) i ; w) + (1 − V H (ξ (t,train) i ; Θ H ))f t H (x (t,train) i ; w). The sample embedding is ξ (t,train) i = e t ; y (t,train) i ; t ; t H ∈ R T +3 . Experiments We evaluate our proposed methods on four public benchmark datasets on heterogeneous graphs. Our experiments answer the following research questions: Q1. Is meta-path prediction effective for representation learning on heterogeneous graphs? Q2. Can the meta-path prediction be further improved by the proposed methods (e.g., SELAR, HintNet)? Q3. Why are the proposed methods effective, any relation with hard negative mining? Datasets. We use two public benchmark datasets from different domains for link prediction: Music dataset Last-FM and Book dataset Book-Crossing, released by KGNN-LS [61], RippleNet [30]. We use two datasets for node classification: citation network datasets ACM and Movie dataset IMDB, used by HAN [55] for node classification tasks. ACM has three types nodes (Paper ( Baselines. We evaluate our methods with four graph neural networks: GCN [12], GAT [11], GIN [34] and SGConv [62]. We compare four learning strategies: Vanilla, standard training of base models only with the primary task samples; w/o meta-path, learning a primary task with sample weighting function V(ξ; Θ); w/ meta-path, training with the primary task and auxiliary tasks (meta-path prediction) with a standard loss function; SELAR proposed in Section 3.2, learning the primary task with optimized auxiliary tasks by meta-learning; SELAR+Hint introduced in Section 3.3. Implementation details are in the supplement. 
Learning Link Prediction with meta-path prediction

We used five types of meta-paths of length 2 to 4 for the auxiliary tasks. Table 1 shows that our methods consistently improve link prediction performance for all the GNNs, compared to Vanilla and to the method using Meta-Weight-Net alone without meta-paths (denoted w/o meta-path). Overall, standard training with meta-paths yields a 2% average improvement on Last-FM and about 3% on Book-Crossing, whereas meta-learning the sample weights alone improves only 0.4% and 0.6% on average, and two cases (GCN on Last-FM and SGC on Book-Crossing) degrade compared to standard training (Vanilla). As expected, SELAR and SELAR with HintNet provide better-optimized auxiliary learning, resulting in 2.2% and 2.5% absolute improvements on Last-FM and 4.1% and 4.4% on Book-Crossing. In particular, for GIN on Book-Crossing, SELAR+HintNet provides a ∼8.1% absolute improvement over the vanilla algorithm.

Learning Node Classification with meta-path prediction

Similar to link prediction above, our SELAR consistently enhances the node classification performance of all the GNN models, and the improvements are more significant on IMDB, which is larger than the ACM dataset. We believe that the ACM dataset is already saturated and the room for improvement is limited; however, our methods still show small yet consistent improvements over all the architectures on ACM. We conjecture that the efficacy of our proposed methods differs depending on the graph structure. It is worth noting that introducing meta-path prediction as auxiliary tasks consistently and remarkably improves the performance of primary tasks such as link prediction and node classification, compared to the existing methods. In contrast, "w/o meta-path", i.e., meta-learning a sample-weighting function on the primary task alone, shows marginal degradation in five out of eight settings (highlighted with * in Tables 1 and 2).
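A meta-path auxiliary task labels node pairs connected by a fixed sequence of edge types. The construction can be sketched on a toy typed graph; the graph, edge-type names, and helper below are illustrative, not the paper's sampling code:

```python
# Sketch: extracting meta-path training pairs from a small heterogeneous graph.
from collections import defaultdict

# typed edges: (source, edge_type, target)
edges = [
    ("u1", "user-item", "i1"),
    ("u2", "user-item", "i1"),
    ("u2", "user-item", "i2"),
    ("i1", "item-user", "u1"),
    ("i1", "item-user", "u2"),
    ("i2", "item-user", "u2"),
]

adj = defaultdict(list)
for src, etype, dst in edges:
    adj[(src, etype)].append(dst)

def meta_path_pairs(meta_path, start_nodes):
    """Endpoint pairs reachable by following the given edge-type sequence."""
    pairs = set()
    for start in start_nodes:
        frontier = {start}
        for etype in meta_path:
            frontier = {d for n in frontier for d in adj[(n, etype)]}
        pairs.update((start, end) for end in frontier)
    return pairs

# user-item-user: users who interacted with a common item
pairs = meta_path_pairs(["user-item", "item-user"], ["u1", "u2"])
print(sorted(pairs))
```

Reachable pairs become positive examples for the meta-path prediction task; negatives can be sampled from non-reachable pairs.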
Remarkably, SELAR improved the F1-score of GAT on IMDB by 6.54% compared to the vanilla learning scheme.

Analysis of Weighting Function and Meta-overfitting

The effectiveness of meta-path prediction and the proposed learning strategies is addressed above. To answer the last research question (Q3, why the proposed method is effective), we provide an analysis of the weighting function V(ξ; Θ) learned by our framework. We also show evidence that meta-overfitting occurs and can be addressed by cross-validation, as in Algorithm 1.

Weighting function. Our proposed methods can automatically balance multiple auxiliary tasks to improve the primary task. To understand this ability, we analyze the weighting function and the loss it adjusts, i.e., V(ξ; Θ) and V(ξ; Θ)ℓ_t(y, ŷ). Positive and negative samples are drawn as solid and dashed lines, respectively. We present the weighting function learnt by SELAR+HintNet for GAT, the best-performing configuration on Last-FM, taken from the epoch with the best validation performance. Fig. 3 shows that the learnt weighting function attends more to hard examples than to easy ones whose loss lies in the small range from 0 to 1. Also, the positive samples of the primary task are down-weighted relatively less than those of the auxiliary tasks, even when the samples are easy (i.e., the loss lies between 0 and 1). Our adjusted loss V(ξ; Θ)ℓ_t(y, ŷ) is closely related to the focal loss, −(1 − p_t)^γ log(p_t). When ℓ_t is the cross-entropy, it becomes −V(ξ; Θ) log(p_t), where p is the model's prediction for the correct class and p_t is defined as p if y = 1 and 1 − p otherwise, as in [63]. The weighting function evolves differently over iterations: at the early stage of training it often focuses on easy examples first, then shifts its focus over time. Also, the loss values adjusted by the weighting function learnt by our method differ across tasks.
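The relation to the focal loss can be checked numerically: if the learned weight happens to equal (1 − p_t)^γ, the adjusted cross entropy −V · log(p_t) coincides with the focal loss exactly. A small self-contained check (plain Python, illustrative only):

```python
# Numeric check: weighted cross entropy with weight (1 - p_t)^gamma
# reproduces the focal loss -(1 - p_t)^gamma * log(p_t).
import math

def p_t(p, y):
    return p if y == 1 else 1.0 - p

def focal_loss(p, y, gamma=2.0):
    pt = p_t(p, y)
    return -((1.0 - pt) ** gamma) * math.log(pt)

def weighted_ce(p, y, weight):
    return -weight * math.log(p_t(p, y))

p, y, gamma = 0.8, 1, 2.0
w = (1.0 - p_t(p, y)) ** gamma   # the weight a learned V(.) could emit
print(focal_loss(p, y, gamma), weighted_ce(p, y, w))
```

As with the focal loss, easy examples (p_t near 1) are strongly down-weighted, while V(ξ; Θ) is more flexible: it is learned per sample and per task rather than being a fixed function of p_t.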
To analyze the contribution of each task, we calculate the average of the task-specific weighted loss on the Last-FM and Book-Crossing datasets. In particular, on Book-Crossing, our method attends more to 'user-item' (the primary task) and 'user-item-literary.series-item-user' (an auxiliary task), a meta-path that connects users who like the same book series. This implies that two users who like a book series are likely to have similar preferences. More results and discussion are available in the supplement.

Meta cross-validation, i.e., cross-validation for meta-learning, keeps the weighting function from overfitting to the meta data. Table 3 shows evidence that our algorithms, like other meta-learning methods, can overfit to the meta data. As in Algorithm 1, our proposed methods, both SELAR and SELAR with HintNet, with cross-validation (denoted '3-fold') alleviate the meta-overfitting problem and provide a significant performance gain, whereas without meta cross-validation (denoted '1-fold') the proposed methods can underperform the vanilla training strategy.

Conclusion

We proposed meta-path prediction as a self-supervised auxiliary task on heterogeneous graphs. Our experiments show that representation learning on heterogeneous graphs can benefit from meta-path prediction, which encourages the model to capture rich semantic information. The auxiliary tasks can be further improved by our proposed method SELAR, which automatically balances auxiliary tasks to assist the primary task via a form of meta-learning. The learnt weighting function identifies the meta-paths that are most beneficial for the primary task. Within a task, the weighting function can adjust the cross entropy like the focal loss, focusing on hard examples by decreasing the weights of easy samples. Moreover, for challenging and only remotely relevant auxiliary tasks, our HintNet helps the learner by dynamically correcting the learner's answer, further improving the gain from auxiliary tasks.
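The C-fold meta cross-validation used against meta-overfitting (the '3-fold' setting in Table 3) amounts to splitting each primary mini-batch into train/meta parts per fold and averaging the resulting meta-gradients. A minimal sketch over a finite list, with illustrative names (not the paper's implementation):

```python
# Sketch of CVSplit from Algorithm 1: each fold holds out a disjoint meta part,
# so the weighting function is always evaluated on data unseen by the inner step.
def cv_split(batch, c, n_folds):
    """Return (train_part, meta_part) for fold index c."""
    meta = [x for i, x in enumerate(batch) if i % n_folds == c]
    train = [x for i, x in enumerate(batch) if i % n_folds != c]
    return train, meta

batch = list(range(9))
folds = [cv_split(batch, c, 3) for c in range(3)]
for train, meta in folds:
    print(train, meta)
```

Each sample serves as meta data in exactly one fold, which is what prevents the weighting function from memorizing a single fixed meta set.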
Our framework based on meta-learning provides learning strategies to balance the primary task and auxiliary tasks, as well as easy/hard (and positive/negative) samples. Interesting future directions include applying our framework to other domains and to various auxiliary tasks.

A Summary

We provide additional experimental results and implementation details that are not included in the main paper due to the space limit. This supplement includes (1) additional experimental results showing that our methods can be further improved by regularization that alleviates meta-overfitting, (2) details of the datasets, (3) implementation details, (4) task selection, and (5) the behaviour of the weighting function at different training stages.

B Meta-Learning and Regularization

We compare the following learning strategies: Vanilla, standard training of the base models with the primary task only; Graph-MW w/o mp, MW-Net [48] modified for graph neural networks, which learns to weight the primary-task samples; Graph-MW w/ mp, MW-Net [48] for graph neural networks, trained with the primary and auxiliary tasks; SELAR and SELAR+Hint, our models introduced in the main paper. Regularized SELAR+Hint is exactly the same model as SELAR+Hint, but trained with a regularizer added to HintNet (Section 3 of the main paper). Avg. Gain denotes the averaged gain of all GNNs over Vanilla.

C Details of datasets

We use two datasets (Last-FM, Book-Crossing) for link prediction tasks and two datasets (ACM, IMDB) for node classification tasks. Last-FM and Book-Crossing do not have node features, while ACM and IMDB have node features, which are bags-of-words of keywords and plots. The Last-FM dataset with a knowledge graph has 122 types of edges, e.g., "artist.origin", "musician.instruments.played", "person.or.entity.appearing.in.film", and "film.actor.film".
Book-Crossing with a knowledge graph has 52 types of edges, e.g., "book.genre", "literary.series", "date.of.first.publication", and "written.work.translation". ACM has three types of nodes (Paper (P), Author (A), Subject (S)), four types of edges (PA, AP, PS, SP), and labels (categories of papers). IMDB contains three types of nodes (Movie (M), Actor (A), Director (D)), four types of edges (MA, AM, MD, DM), and labels (genres of movies). Statistics of the datasets are in Table 5.

D Implementation details

All the models are randomly initialized and optimized using the Adam optimizer [64]. Hyperparameters such as the learning rate and weight-decay rate are tuned on the validation sets for all models. For a fair comparison, the number of layers is fixed to two and the dimension of the output node embedding is the same across models. The node embedding z has 16 dimensions for Last-FM and 64 dimensions for the rest of the datasets. Since the datasets have different numbers of samples, we train the models for different numbers of epochs: Last-FM (100),

E Task selection

Our proposed methods identify useful auxiliary tasks and balance them with the primary task. In other words, the loss functions of the tasks are differentially adjusted by the weighting function learnt by SELAR+HintNet. To analyze the weights of the tasks, we calculate the average of the task-specific weighted loss. Table 6 shows the tasks in descending order of their weights. On Last-FM, 'user-item-actor-item' has the largest weight, followed by 'user-item' (the primary task), 'user-item-appearing.in.film-item', 'user-item-instruments-item', 'user-item-user-item', and 'user-item-artist.origin-item'. This indicates that the preference of a given user is closely related to other items connected by an actor, e.g., the edge type 'film.actor.film' in the knowledge graph. Moreover, our method focuses on the 'user-item' interaction for the primary task.
On the Book-Crossing data, our method attends more to 'user-item' (the primary task) and 'user-item-literary.series-item-user', which suggests that users who like a book series have similar preferences.

| Meta-paths (Last-FM) | Avg. | Meta-paths (Book-Crossing) | Avg. |
|---|---|---|---|
| user-item-actor-item | 7.675 | user-item* | 6.439 |
| user-item* | 7.608 | user-item-literary.series-item-user | 6.217 |
| user-item-appearing.in.film-item | 7.372 | item-genre-item | 6.163 |
| user-item-instruments-item | 7.049 | user-item-user-item | 6.126 |
| user-item-user-item | 6.878 | user-item-user | 6.066 |
| item-user-item | 6.727 | item-user-item | 6.025 |

* primary task

F Weighting function at different training stages

The weighting functions of our methods change dynamically over time. In Fig. 4, each row shows the weighting function learnt by SELAR+HintNet for GCN, GAT, GIN, and SGC on Last-FM. From left to right, the columns correspond to the first epoch, the epoch with the best validation performance, and the last epoch. Positive and negative samples are illustrated with solid and dashed lines, respectively, in Fig. 4. At the beginning of training (the first epoch), one noticeable pattern is that the weighting function focuses more on easy samples. At the epoch with the highest performance, easy samples are down-weighted and the weight is large when the loss is large, implying that hard examples receive more focus. At the last epoch, most weights converge to zero when the loss is extremely small or large; since learning is almost done, the weighting function comes to consider both easy and difficult examples less. In particular, for GCN and GAT at the epoch with the highest performance, the weights are increasing, meaning that our weighting function assigns smaller importance to easy samples and more attention to hard samples. Among all tasks, the scale of the weights for the primary task is relatively high compared to that of the auxiliary tasks. This indicates that our method focuses more on the primary task.
(a) Weighting function V(ξ; Θ). (b) Adjusted cross entropy V(ξ; Θ)ℓ_t(y, ŷ).
Figure 3: Weighting function V(·) learnt by SELAR+HintNet. V(·) gives overall high weights to the primary-task positive samples (red) in (a). V(·) decreases the weights of easy samples with a loss in the range 0 to 1. In (b), the adjusted cross entropy −V(ξ; Θ) log(ŷ) acts like the focal loss, which focuses on hard examples via −(1 − p_t)^γ log(ŷ).

Figure 4: Weighting function V(·) learnt by SELAR+HintNet for GCN, GAT, GIN and SGC on Last-FM.

Table 1: Link prediction performance (AUC) of GNNs trained by various learning strategies.

| Dataset | Base GNNs | Vanilla | w/o meta-path | w/ meta-path | SELAR | SELAR+Hint |
|---|---|---|---|---|---|---|
| Last-FM | GCN | 0.7898* | 0.7850 | 0.8135 | 0.8163 | 0.8162 |
| | GAT | 0.8090 | 0.8100 | 0.8184 | 0.8319 | 0.8349 |
| | GIN | 0.7895 | 0.8081 | 0.8304 | 0.8211 | 0.8255 |
| | SGC | 0.7725 | 0.7759 | 0.7801 | 0.7803 | 0.7857 |
| | Avg. Gain | – | +0.0046 | +0.0204 | +0.0222 | +0.0253 |
| Book-Crossing | GCN | 0.6918 | 0.6967 | 0.6970 | 0.7081 | 0.7075 |
| | GAT | 0.6704 | 0.6759 | 0.7026 | 0.7136 | 0.7247 |
| | GIN | 0.6782 | 0.6968 | 0.7442 | 0.7554 | 0.7587 |
| | SGC | 0.6781* | 0.6732 | 0.6933 | 0.7070 | 0.7039 |
| | Avg. Gain | – | +0.0061 | +0.0297 | +0.0414 | +0.0441 |

Table 2: Node classification performance (F1-score) of GNNs trained by various learning schemes.

| Dataset | Base GNNs | Vanilla | w/o meta-path | w/ meta-path | SELAR | SELAR+Hint |
|---|---|---|---|---|---|---|
| ACM | GCN | 0.9034* | 0.9025 | 0.9147 | 0.9031 | 0.9160 |
| | GAT | 0.9179* | 0.9092 | 0.9188 | 0.9198 | 0.9188 |
| | GIN | 0.9060 | 0.9130 | 0.9101 | 0.9076 | 0.9135 |
| | SGC | 0.9138* | 0.9115 | 0.9202 | 0.9120 | 0.9171 |
| | Avg. Gain | – | -0.0013 | +0.0057 | +0.0003 | +0.0061 |
| IMDB | GCN | 0.5826 | 0.5952 | 0.6189 | 0.6072 | 0.5970 |
| | GAT | 0.5587* | 0.5543 | 0.6013 | 0.6197 | 0.6017 |
| | GIN | 0.5965* | 0.5856 | 0.5974 | 0.5994 | 0.5974 |
| | SGC | 0.5675 | 0.5944 | 0.5894 | 0.6147 | 0.5779 |
| | Avg. Gain | – | +0.0061 | +0.0255 | +0.0340 | +0.0172 |

Table 3: Comparison between 1-fold and 3-fold as meta-data on the Last-FM dataset.

| Model | Vanilla | SELAR 1-fold | SELAR 3-fold | SELAR+Hint 1-fold | SELAR+Hint 3-fold |
|---|---|---|---|---|---|
| GCN | 0.7898 | 0.7885 | 0.8163 | 0.7716 | 0.8162 |
| GAT | 0.8090 | 0.8293 | 0.8319 | 0.8002 | 0.8349 |
| GIN | 0.7895 | 0.8182 | 0.8211 | 0.8176 | 0.8255 |
| SGC | 0.7725 | 0.7391 | 0.7803 | 0.7416 | 0.7857 |

Table 4 shows that SELAR, SELAR+Hint, and Regularized SELAR+Hint consistently improve link prediction performance on the Last-FM and Book-Crossing datasets, compared to Vanilla and Graph-MW. Graph-MW with meta-paths shows a 0.25% average improvement on Last-FM, while our SELAR+Hint provides a 2.5% average improvement; in particular, our Regularized SELAR+Hint gains 2.9% over Vanilla. On Book-Crossing, Graph-MW without and with meta-paths shows 0.6% and 3.8% improvements over Vanilla, respectively, indicating that the auxiliary tasks are helpful for the primary task on Book-Crossing. Also, our Regularized SELAR+Hint achieves a 4.8% absolute improvement over Vanilla. The regularization, applied to alleviate overfitting, improves the overall performance of SELAR+Hint.

Table 4: Link prediction performance (AUC) of GNNs.

| Dataset | Base GNNs | Vanilla | Graph-MW w/o mp | Graph-MW w/ mp | SELAR | SELAR+Hint | Regularized SELAR+Hint |
|---|---|---|---|---|---|---|---|
| Last-FM | GCN | 0.7898 | 0.7850 | 0.7861 | 0.8163 | 0.8162 | 0.8206 |
| | GAT | 0.8090 | 0.8100 | 0.8244 | 0.8319 | 0.8349 | 0.8280 |
| | GIN | 0.7895 | 0.8081 | 0.8204 | 0.8211 | 0.8255 | 0.8262 |
| | SGC | 0.7725 | 0.7759 | 0.7400 | 0.7803 | 0.7857 | 0.8029 |
| | Avg. Gain | – | +0.0046 | +0.0025 | +0.0222 | +0.0253 | +0.0292 |
| Book-Crossing | GCN | 0.6918 | 0.6967 | 0.7047 | 0.7081 | 0.7075 | 0.7109 |
| | GAT | 0.6704 | 0.6759 | 0.7075 | 0.7136 | 0.7247 | 0.7290 |
| | GIN | 0.6782 | 0.6968 | 0.7543 | 0.7554 | 0.7587 | 0.7602 |
| | SGC | 0.6781 | 0.6732 | 0.7038 | 0.7070 | 0.7039 | 0.7088 |
| | Avg. Gain | – | +0.0061 | +0.0380 | +0.0414 | +0.0441 | +0.0476 |

Table 5: Datasets on heterogeneous graphs.

| Task | Datasets | # Nodes | # Edges | # Edge types | # Features |
|---|---|---|---|---|---|
| Link prediction | Last-FM | 15,084 | 73,382 | 122 | N/A |
| | Book-Crossing | 110,739 | 442,746 | 52 | N/A |
| Node classification | ACM | 8,994 | 25,922 | 4 | 1,902 |
| | IMDB | 12,772 | 37,288 | 4 | 1,256 |

ACM (200), and IMDB (200). The model with the best validation-set performance is chosen for the test. For link prediction, the neighborhood sampling algorithm [4] is used, with neighborhood sizes of 8 and 16 for Last-FM and Book-Crossing, respectively. For node classification, the neighborhood size is 8 for all datasets. The test performance is reported with the best models on the validation sets.

Table 6: The average of the task-specific weighted loss on the Last-FM and Book-Crossing datasets.

References

[1] William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
[2] Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE, 34(4):18–42, 2017.
[3] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE, 2020.
[4] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. CoRR, abs/1706.02216, 2017.
[5] Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607, 2018.
[6] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In NeurIPS, pages 5165–5175, 2018.
[7] David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NeurIPS, pages 2224–2232, 2015.
[8] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. CoRR, abs/1806.08804, 2018.
[9] Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast learning with graph convolutional networks via importance sampling. In ICLR, 2018.
[10] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In SIGKDD, pages 1225–1234, 2016.
[11] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
[12] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR. OpenReview.net, 2017.
[13] Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In CVPR, pages 5410–5419, 2017.
[14] Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, and Devi Parikh. Graph R-CNN for scene graph generation. In ECCV, pages 670–685, 2018.
[15] Yunpeng Chen, Marcus Rohrbach, Zhicheng Yan, Yan Shuicheng, Jiashi Feng, and Yannis Kalantidis. Graph-based global reasoning networks. In CVPR, pages 433–442, 2019.
[16] Rianne van den Berg, Thomas N. Kipf, and Max Welling. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263, 2017.
[17] Federico Monti, Michael Bronstein, and Xavier Bresson. Geometric matrix completion with recurrent multi-graph neural networks. In NeurIPS, pages 3697–3707, 2017.
[18] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In SIGKDD, pages 974–983, 2018.
[19] Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. arXiv preprint arXiv:1806.01242, 2018.
[20] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In NeurIPS, pages 4502–4510, 2016.
[21] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In ICLR, 2020.
[22] Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9(2):513–530, 2018.
[23] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[25] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, pages 19–27, 2015.
[26] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
[27] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2009.
[28] Nicolò Navarin, Dinh V. Tran, and Alessandro Sperduti. Pre-training graph neural networks with kernels. arXiv preprint arXiv:1811.06930, 2018.
[29] Jiawei Zhang, Haopeng Zhang, Li Sun, and Congying Xia. Graph-BERT: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.
[30] Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. RippleNet: Propagating user preferences on the knowledge graph for recommender systems. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 417–426, 2018.
[31] Gusi Te, Wei Hu, Amin Zheng, and Zongming Guo. RGCNN: Regularized graph CNN for point cloud segmentation. In ACM, pages 746–754. ACM, 2018.
[32] Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. Graph convolutional encoders for syntax-aware neural machine translation. arXiv preprint arXiv:1704.04675, 2017.
[33] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
[34] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
[35] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, pages 1422–1430, 2015.
[36] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, pages 69–84. Springer, 2016.
[37] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, pages 1422–1430, 2015.
[38] Bo Dai and Dahua Lin. Contrastive learning for image captioning. In NeurIPS, pages 898–907, 2017.
[39] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
[40] Travers Ching, Daniel S. Himmelstein, Brett K. Beaulieu-Jones, Alexandr A. Kalinin, Brian T. Do, Gregory P. Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M. Hoffman, et al. Opportunities and obstacles for deep learning in biology and medicine. Journal of The Royal Society Interface, 15(141):20170387, 2018.
[41] Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. arXiv preprint arXiv:1704.01631, 2017.
[42] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. DeepStereo: Learning to predict new views from the world's imagery. In CVPR, pages 5515–5524, 2016.
[43] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, pages 1851–1858, 2017.
[44] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, pages 2642–2651. JMLR.org, 2017.
[45] Shikun Liu, Andrew Davison, and Edward Johns. Self-supervised generalisation with meta auxiliary learning. In NeurIPS, pages 1677–1687, 2019.
[46] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR. OpenReview.net, 2017.
[47] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pages 1126–1135. JMLR.org, 2017.
[48] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-Weight-Net: Learning an explicit mapping for sample weighting. In NeurIPS, 2019.
[49] Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
[50] Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 2933–2942. PMLR, 2018.
[51] Yunhun Jang, Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. Learning what and where to transfer. In ICML, volume 97 of Proceedings of Machine Learning Research, pages 3030–3039. PMLR, 2019.
[52] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2. Lille, 2015.
[53] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, pages 4077–4087, 2017.
[54] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, pages 1199–1208. IEEE Computer Society, 2018.
[55] Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Peng Cui, Philip Yu, and Yanfang Ye. Heterogeneous graph attention network. CoRR, abs/1903.07293, 2019.
[56] Yizhou Sun and Jiawei Han. Mining heterogeneous information networks: A structural analysis approach. SIGKDD Explorations, 14:20–28, 2012.
[57] Chuan Shi, Yitong Li, Jiawei Zhang, Yizhou Sun, and Philip S. Yu. A survey of heterogeneous information network analysis. IEEE Transactions on Knowledge and Data Engineering, 29(1):17–37, 2016.
[58] Yizhou Sun, Jiawei Han, Xifeng Yan, Philip S. Yu, and Tianyi Wu. PathSim: Meta path-based top-k similarity search in heterogeneous information networks. Proceedings of the VLDB Endowment, 4(11):992–1003, 2011.
[59] Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.
[60] Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. arXiv preprint arXiv:1810.03642, 2018.
[61] Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, and Zhongyuan Wang. Knowledge-aware graph neural networks with label smoothness regularization for recommender systems. In SIGKDD, pages 968–977, 2019.
[62] Felix Wu, Tianyi Zhang, Amauri Holanda de Souza Jr., Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. Simplifying graph convolutional networks. arXiv preprint arXiv:1902.07153, 2019.
[63] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, pages 2980–2988, 2017.
[64] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Title: SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans
Authors: Angela Dai (Technical University of Munich), Yawar Siddiqui (Technical University of Munich), Justus Thies (Technical University of Munich), Julien Valentin (Google), Matthias Nießner (Technical University of Munich)
Abstract: We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion. Our self-supervised approach learns to jointly inpaint geometry and color by correlating an incomplete RGB-D scan with a more complete version of that scan. Notably, rather than relying on 3D reconstruction losses to inform our 3D geometry and color reconstruction, we propose adversarial and perceptual losses operating on 2D renderings in order to achieve high-resolution, high-quality colored reconstructions of scenes. This exploits the high-resolution, self-consistent signal from individual raw RGB-D frames, in contrast to fused 3D reconstructions of the frames, which exhibit inconsistencies from view-dependent effects such as color balancing or pose inconsistencies. Thus, by informing our 3D scene generation directly through 2D signal, we produce high-quality colored reconstructions of 3D scenes, outperforming state of the art on both synthetic and real data. (Preprint. Under review.)
DOI: 10.1109/cvpr46437.2021.00179
PDF: https://arxiv.org/pdf/2006.14660v1.pdf
arXiv: 2006.14660
Corpus ID: 220127945
Introduction

The wide availability of consumer range cameras has propelled research in 3D reconstruction of real-world environments, with applications ranging from content creation to indoor robotic navigation and autonomous driving.
While state-of-the-art 3D reconstruction approaches have demonstrated robust camera tracking and large-scale reconstruction [20,15,29,6], occlusions and sensor limitations lead these approaches to yield reconstructions that are incomplete in both geometry and color, making them ill-suited for the aforementioned applications.

In recent years, geometric deep learning has made significant progress in learning to reconstruct complete, high-fidelity 3D models of shapes from RGB or RGB-D observations [18,7,23,19,22], leveraging synthetic 3D shape data to provide supervision for the geometric completion task. Recent work has also advanced generative 3D approaches towards operating on larger-scale scenes [26,5,8]. However, producing complete, colored 3D reconstructions of real-world environments remains challenging; in particular, for real-world observations we do not have complete ground truth data available. Several promising approaches have been proposed to produce geometric and color reconstructions of 3D shapes, but they tend to rely on single-object domain specificity [24] or synthetic 3D data for supervision [27], rendering them unsuitable for reconstructing colored 3D models of real-world scenes due to the significantly larger contextual scale and the domain gap with synthetic data.

We introduce SPSG, a generative 3D approach that creates high-quality 3D models of real-world scenes from partial RGB-D scan observations in a self-supervised fashion. Our self-supervised approach uses incomplete RGB-D scans as targets, generating a more incomplete version as input by removing frames. This allows correlating more-incomplete with less-incomplete scans while ignoring unobserved regions. However, the target scan reconstruction from the given RGB-D scan suffers from inconsistencies in camera alignment and view-dependent effects, resulting in significant color artifacts.
Moreover, the success of adversarial approaches in 2D image generation [10,16] cannot be directly adopted when the target scan is incomplete, as the 'real' examples for the discriminator would then take on incomplete characteristics. Our key observation is that while a 3D scan is incomplete, each individual 2D frame is complete from its viewpoint. We therefore leverage the 2D signal provided by the raw RGB-D frames, which offer high-resolution, self-consistent observations as well as photo-realistic examples for adversarial and perceptual losses in 2D. Our generative 3D model predicts a 3D scene reconstruction represented as a truncated signed distance function with per-voxel colors (TSDF), and we leverage a differentiable renderer to compare the predicted geometry and color to the original RGB-D frames. In addition, we employ a 2D adversarial and a 2D perceptual loss between the rendering and the original input in order to achieve sharp, high-quality, complete colored 3D reconstructions.

Our experiments show that our 2D-based self-supervised approach to inferring complete geometric and colored 3D reconstructions produces significantly improved performance in comparison to state-of-the-art methods, both quantitatively and qualitatively, on both synthetic and real data. We additionally analyze the effect of the 2D rendering losses in contrast to using 3D reconstruction, adversarial, and perceptual losses, and demonstrate that our 2D loss formulation avoids various artifacts introduced by a 3D loss formulation. This enables our self-supervised approach to generate compelling colored 3D models for real-world scans of large-scale scenes.

Related Work

RGB-D based 3D Reconstruction

3D reconstruction of objects and scenes using RGB-D data is a well-explored field [20,15,29,6]. For a detailed overview of 3D reconstruction methods, we refer to the state-of-the-art report of Zollhöfer et al. [33].
In addition, our work is related to surface texturing techniques which optimize for texture in observed regions [12,13]; in contrast, our goal is to target incomplete scans where color data is missing in the 3D scans.

Learned Single Object Reconstruction

The reconstruction of single objects given RGB or RGB-D input is an active field of research. Many works have explored a variety of geometric shape representations, including occupancy grids [30], volumetric truncated signed distance fields [7], point clouds [32], and, recently, deep networks that model implicit surface representations [22,19,31]. While such methods have shown impressive geometric reconstruction, generating colored objects has been far less explored. Im2Avatar [27] predicts an occupancy grid to represent the shape, followed by a color volume. PIFu [24] proposes to estimate a pixel-aligned implicit function representing both the shape and appearance of an object, focusing on the reconstruction of humans. While Texture Fields [21] does not reconstruct 3D geometry, this approach predicts the color for a shape by estimating a function mapping a surface position to a color value. These approaches make significant progress in estimating colored reconstructions, but focus on the limited domain of objects, which are both limited in volume and far more structured than full scenes.

Figure 1: Our SPSG approach formulates the problem of generating a complete, colored 3D model from an incomplete scan observation to be self-supervised, enabling training on incomplete real-world scan data. Our key idea is to leverage a 2D view-guided synthesis for self-supervision, comparing rendered views of our predicted model to the original RGB-D frames of the scan.

Learned Scene Completion

While there is a large corpus of work on single-object reconstruction, fewer efforts have focused on reconstructing scenes.
SSCNet [26] introduces a method to jointly predict the geometric occupancy and semantic segmentation of a scene from an RGB-D image. ScanComplete [5] introduces an autoregressive approach to complete partial scans of large-scale scenes. These approaches focus on geometric and semantic predictions and rely on synthetic 3D data to provide complete ground truth scenes for training, resulting in a loss of quality due to the synthetic-real domain gap when applied to real-world scans. In contrast, SG-NN [8] proposes a self-supervised approach for geometric completion of partial scans, allowing training on real data. Our approach is inspired by SG-NN; however, we find that its 3D self-supervision formulation is insufficient for compelling color generation, and instead propose to guide our self-supervision through 2D renderings of our 3D predictions.

Method Overview

Our aim is to generate a complete 3D model, with respect to both geometry and color, from an incomplete RGB-D scan. We take as input a series of RGB-D frames and estimated camera poses, fused into a truncated signed distance field (TSDF) representation through volumetric fusion [4]. The input TSDF is represented in a volumetric grid, with each voxel storing both distance and color values. We then learn to generate a TSDF representing the complete geometry and color, from which we extract the final mesh using Marching Cubes [17].

To effectively generate compelling color and geometry for real scan data, we develop a self-supervised approach that learns from incomplete target scans. From an incomplete target scan, we generate a more incomplete version by removing a subset of its RGB-D frames, and learn the generation process between the two levels of incompleteness while ignoring the unobserved space in the target scan.
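As a concrete illustration of the volumetric fusion step [4] that builds such TSDF grids, the following is a minimal sketch. It is not the paper's pipeline: it assumes a toy orthographic depth image aligned with the grid columns, a single frame, and omits color averaging and camera poses.

```python
# Minimal TSDF fusion sketch (assumptions: orthographic depth along +z, one
# depth image, no color; the real pipeline fuses many posed RGB-D frames and
# also averages per-voxel colors).
def fuse_depth(tsdf, weight, depth, trunc, voxel_size):
    nx, ny, nz = len(tsdf), len(tsdf[0]), len(tsdf[0][0])
    for x in range(nx):
        for y in range(ny):
            d = depth[x][y]            # observed surface depth for this column
            if d is None:
                continue               # no measurement: voxels stay unobserved
            for z in range(nz):
                voxel_depth = (z + 0.5) * voxel_size
                sdf = d - voxel_depth  # positive in front of the surface
                if sdf < -trunc:
                    continue           # far behind surface (occluded): skip
                tsdf_val = max(-1.0, min(1.0, sdf / trunc))
                # weighted running average across frames (Curless-Levoy style)
                w = weight[x][y][z]
                tsdf[x][y][z] = (tsdf[x][y][z] * w + tsdf_val) / (w + 1)
                weight[x][y][z] = w + 1

# toy 2x2 depth image over a 2x2x8 grid; voxel size 0.1 m, truncation 0.3 m
nx, ny, nz = 2, 2, 8
tsdf = [[[1.0] * nz for _ in range(ny)] for _ in range(nx)]
weight = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
depth = [[0.45, 0.45], [0.35, None]]
fuse_depth(tsdf, weight, depth, trunc=0.3, voxel_size=0.1)
```

After fusion, the surface lies at the zero crossing of the stored distances; for the first column above it falls at the voxel whose center depth equals the measured 0.45 m.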
Notably, rather than relying on the incomplete target 3D colored TSDF, which contains inconsistencies from view-dependent effects and micro-misalignments in camera pose estimation, and is often lower resolution than that of the color sensor (to account for the lower resolution and noise in the depth capture), we instead propose a 2D view-guided synthesis, relying on losses formulated on 2D renderings of our predicted TSDF. As each individual image is self-consistent and high resolution, we mitigate such artifacts by leveraging this image information to guide our predictions. That is, we render our predicted TSDF to the views of the original images, with which we can then compare our rendered predictions and the original RGB-D frames. This allows us to exploit the consistency of each individual frame during training, as well as to employ not only a reconstruction loss for geometry and color but also adversarial and perceptual losses, where the 'real' target images are the raw RGB-D frames. Each of these views is complete, high-resolution, and photo-realistic, which guides our approach to learn to generate complete, high-quality, colored 3D models.

Self-supervised Photometric Generation

The key idea of our method for photometric scene generation from incomplete RGB-D scan observations is a self-supervised formulation based on 2D view-guided synthesis, leveraging rendered views of our predicted 3D model. Since training on real-world scan data is crucial for realistic color generation, we must be able to learn from incomplete target scan data, as complete ground truth is unavailable for real-world scans. Thus, we learn a generative process from the correlation of an incomplete target scan composed of RGB-D frames {f_k} with a more incomplete version of that scan constructed from a subset of the frames {f_i} ⊂ {f_k}.
The input scan S_i during training is then created by volumetric fusion of {f_i} into a volumetric TSDF with per-voxel distances and colors. This is inspired by the SG-NN approach [8]; crucially, however, rather than relying on the fused incomplete target TSDF, we formulate 2D rendering losses to guide our geometry and color predictions. This avoids smaller-scale artifacts from inconsistencies in camera pose estimation as well as view-dependent lighting and color balancing, and, importantly, allows formulating adversarial and perceptual losses with the raw RGB-D frames, which are individually complete views in image space. These losses are critical for producing compelling photometric scene generation results. Additionally, our self-supervision exploits the different patterns of incompleteness seen across a variety of target scans: each individual target scan remains incomplete, but learning across a diverse set of patterns enables generating output 3D models with more complete, consistent geometry and color than any single target scan seen during training.

Differentiable Rendering

To formulate our 2D losses, we render our predicted TSDF S_p in a differentiable fashion to generate color, depth, and world-space normal images C_v, D_v, and N_v for a given view v. We then operate on C_v, D_v, and N_v to formulate our reconstruction, adversarial, and perceptual losses. Specifically, for S_p comprising per-voxel distances and colors, and a camera view v with intrinsics (focal length, principal point), extrinsics (rotation, translation), and image dimensions, we generate C_v, D_v, and N_v by raycasting, as shown in Figure 2. For each pixel of the output image, we construct a ray r from the view v and march along r through S_p, using trilinear interpolation to determine TSDF values.

Figure 2: Differentiable rendering of our 3D predicted TSDF geometry and color.
To locate the surface at the zero-crossing of S_p, we look for sign changes between current and previous TSDF values. For efficient search, we first use a fixed increment to march along the ray (half of the truncation value); once a zero-crossing has been detected, we use an iterative line search to refine the estimate. The refined zero-crossing location then provides the depth, normal, and color values for D_v, N_v, and C_v as the distance from the camera, the negative gradient of the TSDF, and the associated color value, respectively. Our differentiable TSDF rendering is implemented in CUDA as a PyTorch extension for efficient runtime, with the backward pass similarly implemented through ray marching, using atomic add operations to accumulate gradient information when multiple pixels correspond to the same voxel.

2D View-Guided Synthesis / Re-rendering Loss

Our self-supervised approach is based on 2D losses operating on the depth, normal, and color images D_v, N_v, and C_v rendered from the predicted TSDF S_p. This enables comparison to the original RGB-D frame data D^t_v, N^t_v (normals are computed in world space from the depth images), and C^t_v, avoiding explicit view inconsistencies in the targets as well as providing complete target view information. For the task of generating a complete photometric reconstruction from an incomplete scan, we employ a reconstruction loss to anchor geometry and color predictions, as well as an adversarial and a perceptual loss to capture a more realistic appearance in the final prediction.

Reconstruction Loss. We use an $\ell_1$ loss to guide depth and color to the target depth and color:

$$L^R_D = \frac{1}{N}\sum_p \|D_v(p) - D^t_v(p)\|_1, \qquad L^R_C = \frac{1}{3N}\sum_p \|C_v(p) - C^t_v(p)\|_1. \tag{1}$$

Since the rendered D_v and C_v may not have valid values for all pixels (where no surface geometry was seen), these losses operate only on the valid pixels p, normalized by the number of valid pixels N.
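The zero-crossing search along a ray can be illustrated in one dimension. This is a sketch under stated assumptions: `tsdf_at` stands in for trilinearly interpolated TSDF lookups along the ray, and plain bisection stands in for the paper's iterative line search refinement.

```python
# 1D sketch of the renderer's surface search: march along a ray in steps of
# half the truncation distance; once the TSDF changes sign, refine the
# crossing (bisection here is an illustrative stand-in for the line search).
def find_zero_crossing(tsdf_at, t_start, t_end, trunc, iters=30):
    step = 0.5 * trunc
    t_prev, v_prev = t_start, tsdf_at(t_start)
    t = t_start + step
    while t <= t_end:
        v = tsdf_at(t)
        if v_prev > 0.0 and v <= 0.0:        # sign change: surface in (t_prev, t)
            lo, hi = t_prev, t
            for _ in range(iters):           # refine the crossing
                mid = 0.5 * (lo + hi)
                if tsdf_at(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        t_prev, v_prev = t, v
        t += step
    return None                              # ray exits the volume without a hit

# toy TSDF along a ray: a surface at t = 1.237, truncation 0.2
surface_t = 1.237
tsdf_at = lambda t: max(-1.0, min(1.0, (surface_t - t) / 0.2))
hit = find_zero_crossing(tsdf_at, 0.0, 3.0, trunc=0.2)
```

The coarse step guarantees the truncation band is never skipped over, and the refinement converges to the crossing to well below voxel precision.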
The color loss operates on the 3 channels of the CIELAB color space, which we empirically found to perform better than RGB space. Note that these reconstruction losses have a trivial solution: generating no surface geometry in S_p incurs no loss. We therefore employ a 3D geometric reconstruction loss $L^R_G$ on the predicted 3D TSDF distances, weighted by a small value $w_g$, to discourage the lack of surface geometry prediction. For $L^R_G$, we mask out any voxels which were unobserved in the target scan. The final reconstruction loss is then

$$L^R = w_g L^R_G + L^R_D + L^R_C.$$

Adversarial Loss. To capture a more realistic photometric scene generation, we employ an adversarial loss on both N_v and C_v. Since depth values are completely view-dependent, we do not use this information in the adversarial loss. In particular, this helps avoid the averaging artifacts that appear when only the reconstruction loss is used, and helps markedly in addressing color imbalance in the training set (e.g., color dominated by wall/floor colors, which typically have little diversity). We use the conditional adversarial loss:

$$L^A = \mathbb{E}_{x,N_v,C_v}\big(\log D(x,[N_v,C_v])\big) + \mathbb{E}_{x,N^t_v,C^t_v}\big(\log(1 - D(x,[N^t_v,C^t_v]))\big), \tag{2}$$

where $[\cdot,\cdot]$ denotes concatenation and x is the condition, with $x = [N^i_v, C^i_v]$, where $N^i_v, C^i_v$ are the rendered normal and color images of the input scan S_i from view v. Note that although $N^t_v$ and $C^t_v$ can be considered complete in the image view, N_v and C_v may contain invalid pixels; for these invalid pixels we copy the corresponding values from $D^t_v$ and $C^t_v$ to avoid trivially recognizing real from synthesized by the number of invalid pixels. Similar to Pix2Pix [14], we use a patch-based discriminator on 94 × 94 patches of 320 × 256 images.

Figure 3: Network architecture overview. Our approach is fully-convolutional, operating on an input TSDF volume and predicting an output TSDF, from which we apply our 2D view-guided synthesis.
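The masked reconstruction loss of Eq. (1) can be sketched as follows. This is a minimal, illustrative version: invalid pixels are marked `None`, color is a 3-channel tuple, and the CIELAB conversion used in the paper is omitted.

```python
# Masked L1 reconstruction loss of Eq. (1): average only over pixels where
# the rendered prediction produced a valid surface hit (None = invalid).
def masked_l1_depth(pred, target):
    valid = [(p, t) for p, t in zip(pred, target) if p is not None]
    if not valid:
        return 0.0
    return sum(abs(p - t) for p, t in valid) / len(valid)

def masked_l1_color(pred, target):
    # pred/target entries are 3-channel tuples (CIELAB in the paper; the
    # RGB-to-CIELAB conversion is omitted in this sketch)
    valid = [(p, t) for p, t in zip(pred, target) if p is not None]
    if not valid:
        return 0.0
    total = sum(abs(pc - tc) for p, t in valid for pc, tc in zip(p, t))
    return total / (3 * len(valid))

pred_depth = [1.0, None, 2.5, 3.0]
target_depth = [1.2, 0.9, 2.5, 2.0]
loss_d = masked_l1_depth(pred_depth, target_depth)  # (0.2 + 0.0 + 1.0) / 3
```

Normalizing by the count of valid pixels (rather than the image size) keeps the loss magnitude comparable across views with very different surface coverage.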
Perceptual Loss. We additionally employ a loss to penalize perceptual differences in the rendered color images of our predicted TSDF. We use a pretrained VGG network [25] and a content loss [9], where feature maps from the eighth convolutional layer are compared with an $\ell_2$ loss:

$$L^P = \|\mathrm{VGG}_8(C_v) - \mathrm{VGG}_8(C^t_v)\|_2. \tag{3}$$

Data Generation

To generate the input and target scans S_i and S_t used during training, we use a random subset of the target RGB-D frames (in our experiments, 50%) to construct S_i. Both S_i and S_t are then constructed through volumetric fusion [4]; we use a voxel resolution of 2 cm. In order to realize efficient training, we train on cropped chunks of the input-target pairs of size 64 × 64 × 128 voxels. For each training chunk, we associate up to five RGB-D frames based on their geometric overlap with the chunk. These frames are used as targets for the 2D losses on the rendered predictions.

Network Architecture

Our network, visualized in Figure 3, is designed to produce a 3D volumetric TSDF representation of a scene from an input volumetric TSDF. We predict both geometry and color in a fully-convolutional, end-to-end-trainable fashion. We first predict geometry, followed by color, so that the color predictions are directly informed by the geometric structure. The geometry is predicted with an encoder-decoder structure, then the color with an encoder-decoder followed by a series of convolutions which maintain spatial resolution. The encoder-decoder for geometry prediction spatially subsamples to 1/4 of the original resolution and outputs a feature map f_g from which the final geometry is predicted. The geometric predictions then inform the color prediction, with f_g input to the next encoder-decoder. The color prediction is structured similarly to the geometry encoder-decoder, with a series of additional convolutions maintaining the spatial resolution.
We found that avoiding spatial subsampling before the color prediction helps to avoid checkering artifacts in the predicted color outputs. Our discriminator architecture is composed of a series of 2D convolutions, each spatially subsampling its input by a factor of 2. For a detailed architecture specification, we refer to the appendix.

Training Details

We train our approach on a single NVIDIA GeForce RTX 2080. We weight the loss term $L^R_G$ with $w_g = 0.1$.

Results

To evaluate our SPSG approach, we consider the real-world scans from the Matterport3D dataset [3], where no complete ground truth is available for color and geometry, and additionally provide further analysis on synthetic data from the chair class of ShapeNet [2], where complete ground truth data is available. To enable quantitative evaluation on Matterport3D scenes, we consider input scans generated with 50% of all available RGB-D frames for each scene and evaluate against the target scan composed of all available RGB-D frames (ignoring unobserved space). For ShapeNet, we consider single RGB-D frame input and the complete shape as the target.

Evaluation Metrics

To evaluate our color reconstruction quality, we adopt several metrics that evaluate rendered views of the predicted meshes in comparison to the original views (as we do not have complete 3D color data available for real-world scenarios). First, we consider the Fréchet Inception Distance (FID) [11], which is commonly used to evaluate the quality of images synthesized by 2D generative techniques and captures a distance between the distributions of synthesized and real images. The structural similarity image metric (SSIM) [1] is often used to measure more local characteristics when comparing a synthesized image directly to the target image, but can tend to favor averaging over sharp detail. Finally, we capture a perceptual metric, Feature-$\ell_1$, following the metric proposed in Oechsle et al.
[21], which evaluates the $\ell_1$ distance between the feature embeddings of the synthesized and target images under an InceptionV3 network [28].

To measure the geometric quality of our reconstructed shapes and scenes, we use an intersection-over-union (IoU) metric as well as a Chamfer distance metric. IoU is computed over the voxelization of the output meshes of all approaches, with a voxel size of 2 cm for Matterport3D data and 0.01 (relative to the unit-normalized space) for ShapeNet data. For the Chamfer distance, we sample 30K points from the output meshes as well as the ground truth meshes, and compute the distance in metric space for Matterport3D and in normalized space for ShapeNet. Note that for real scans, all unobserved space in the target is ignored in the geometric evaluation. For all comparisons to state-of-the-art approaches predicting both color and geometry, we provide as input the incomplete TSDF and color, and, if necessary, adapt the method's input (denoted by +).

Self-supervised photometric scene generation. We demonstrate our self-supervised approach to generate reconstructions of scenes from incomplete scan data, using scan data from Matterport3D [3] with the official train/test split (72/18 trainval/test scenes comprising 1788/394 rooms). Tables 1 and 4 show a comparison of our approach to state-of-the-art methods for color and geometry reconstruction: PIFu [24] and Texture Fields [21]. Since Texture Fields predicts only color, we provide our predicted geometry as input; for test scenes, since it is designed for fixed volume sizes, we apply it in sliding-window fashion. We additionally show qualitative results in Figure 4. All methods were trained on the generated input-target pairs of scans from Matterport3D, with frames removed from the target scan to create the corresponding inputs, and with the respective proposed loss functions used for training.
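The two geometric metrics above can be sketched as follows. This is a brute-force illustration with toy point sets and voxel sets, not the evaluation code (which samples 30K points and would use a spatial acceleration structure for nearest-neighbor queries).

```python
import math

# Symmetric Chamfer distance between two point sets: for each point, find its
# nearest neighbor in the other set, then average the two directions.
def chamfer(a, b):
    def nn_avg(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (nn_avg(a, b) + nn_avg(b, a))

# Voxel IoU: intersection over union of two sets of occupied voxel coordinates.
def voxel_iou(a, b):
    union = len(a | b)
    return len(a & b) / union if union else 1.0

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
d = chamfer(a, b)  # 0.5: one point matches exactly, the other is 1.0 away
```

Chamfer distance rewards surfaces that lie close to the target anywhere, while voxel IoU additionally penalizes predicted geometry in unoccupied space; the two together give a more complete picture of geometric quality.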
Note that the prior methods have all been developed for the single-object scenario with full supervision available (e.g., using synthetic ground truth), and are limited in capturing the diversity in geometry and color of real-world scenes. Our self-supervised formulation with rendering losses enables capturing a more realistic distribution of geometry and color when generating complete 3D scenes.

What is the effect of the 2D view-guided synthesis? In Table 2, we analyze the effects of our various 2D rendering-based losses, and show qualitative results in Figure 6. We first replace our rendering-based losses with analogous 3D losses, i.e., $L^R$, $L^A$, and $L^P$ use the 3D incomplete target TSDF instead of 2D views (Baseline-3D). This approach learns to reflect the inconsistencies present in the fused 3D target scan (e.g., striping artifacts where one frame ends and another begins) and, moreover, suffers from the incompleteness of the target scan data when used as 'real' examples for the discriminator and the perceptual loss (resulting in black artifacts in some missing regions). Thus, our approach of leveraging rendering-based losses using the original RGB-D frames produces more consistent, compelling reconstructions.

Table 4: Evaluation of geometric reconstruction from Matterport3D [3] scans (left) and ShapeNet [2] chairs (right). Note that for real scans, unobserved regions in the target are ignored for evaluation.

Additionally, we evaluate the effect of our adversarial and perceptual losses on the output color quality, evaluating our approach with the adversarial loss removed (Ours (no adversarial)), the perceptual loss removed (Ours (no perceptual)), and both losses removed (Ours ($\ell_1$ only)). Using only an $\ell_1$ loss results in blurry, washed-out colors.
With the adversarial loss, the colors are less washed out, and with the perceptual loss, colors become sharper; using all losses combines these advantages to achieve compelling scene generation.

Evaluation on synthetic 3D shapes. We additionally evaluate our approach in comparison to state-of-the-art methods on synthetic 3D data, using the chairs category of ShapeNet (5563/619 trainval/test shapes). All methods are provided a single RGB-D frame as input and, for training, the complete shape as target. Tables 4 and 3 show quantitative evaluations for geometry and color predictions, respectively. Our approach predicts more accurate geometry, and our adversarial and perceptual losses provide more compelling color generation.

Conclusion

We introduce SPSG, a self-supervised approach to generate complete, colored 3D models from incomplete RGB-D scan data. Our 2D view-guided formulation enables self-supervision as well as compelling color generation through 2D adversarial and perceptual losses. We can thus train and test on real-world scan data where complete ground truth is unavailable, avoiding the large domain gap incurred by using synthetic color and geometry data. We believe this is an exciting avenue for future research, and provides an interesting alternative to synthetic data generation or domain transfer.

A Network Architecture

We detail our network architecture specifications in Figure 7. Convolution parameters are given as (nf_in, nf_out, kernel_size, stride, padding). Each convolution (except those producing final outputs for geometry and color) is followed by a Leaky ReLU and batch normalization.

Figure 7: Network architecture specification. Given an incomplete RGB-D scan, we take its 3D geometry and color as input, and leverage a fully-convolutional neural network to predict the complete 3D model, represented volumetrically for both geometry and color.
B Additional Results

B.1 Additional Ablation Studies

We additionally evaluate the effect of the CIELAB color space that our approach uses for color generation, in comparison to RGB space. Table 5 quantitatively evaluates the color generation, showing that CIELAB space is more effective, and Figure 8 shows that using CIELAB space allows our approach to capture a greater diversity of colors in our output predictions.

Table 5: Comparison of our approach using the CIELAB color space to using RGB on Matterport3D [3] scans. CIELAB produces more effective color generation.

Figure 8: Qualitative comparison of our approach using the CIELAB color space vs. the RGB color space on Matterport3D [3] scans. Using CIELAB space allows us to capture more diversity in output color generation.

B.2 Runtime Performance

Since our network architecture is composed of 3D convolutions, we can generate an output prediction in a single forward pass for an input scan, with runtime dependent on the 3D volume of the test scene as O(dim_x × dim_y × dim_z). For a small scene of size 1.5 × 3.0 × 2.6 meters (72 × 152 × 128 voxels), inference time is 0.33 seconds; a medium scene of size 3.7 × 3.1 × 2.6 meters (184 × 156 × 128 voxels) takes 0.84 seconds; and a large scene of size 6.0 × 6.6 × 2.6 meters (300 × 328 × 128 voxels) takes 2.4 seconds.

B.3 Qualitative Results

We provide additional qualitative results of colored reconstructions of Matterport3D [3] scans and ShapeNet [2] chairs in Figures 9 and 10, respectively. As can be seen, our method consistently generates sharper results than the baseline methods. Figure 9 shows the comparison to Oechsle et al. [21]; since that approach does not complete geometry, we provide our predicted geometry as input. In contrast to our method, it does not properly estimate color tones, e.g., for the green chair in the bottom row of the figure. Figure 10 shows more examples from our experiments on the ShapeNet dataset in comparison to Im2Avatar [27], PIFu [24], and Texture Fields [21].
Figure 4: Qualitative evaluation of colored reconstruction on Matterport3D [3] scans.

Figure 5: Qualitative evaluation of colored reconstruction on ShapeNet [2] chairs.

Figure 6: Qualitative evaluation of our design choices on Matterport3D [3] scans.

Figure 9: Additional qualitative evaluation of colored reconstruction on Matterport3D [3] scans.

…by 1 and the adversarial loss for the generator by 0.005; all other terms in the loss have a weight of 1.0. We use the Adam optimizer with a learning rate of 0.0001 and batch size of 2, and train our model for ≈ 48 hours until convergence. For efficient training, we train on 64 × 64 × 128 cropped chunks of scans; at test time, since our model is fully-convolutional, we operate on entire incomplete scans of varying sizes as input.

Method                                    SSIM (↑)   Feature-1 (↓)   FID (↓)
PIFu+ [24]                                0.67       0.25            81.5
Texture Fields [21] (on Ours Geometry)    0.70       0.23            68.4
Ours                                      0.71       0.22            56.0

Table 1: Evaluation of colored reconstruction from incomplete scans of Matterport3D [3] scenes. We evaluate rendered views of the outputs of all methods against the original color images.

Table 2: Ablation study of our design choices on Matterport3D [3] scans.

Method                                    SSIM (↑)   Feature-1 (↓)   FID (↓)
Im2Avatar [27]                            0.85       0.25            59.7
PIFu+ [24]                                0.86       0.24            70.3
Texture Fields [21] (on Ours Geometry)    0.93       0.20            30.3
Ours                                      0.93       0.19            29.0

Table 3: Evaluation of colored reconstruction from incomplete scans of ShapeNet [2] chairs.

Acknowledgments and Disclosure of Funding

Figure 10: Additional qualitative evaluation of colored reconstruction on ShapeNet [2] chairs.

References

[1] D. Brunet, E. R. Vrscay, and Z. Wang. On the mathematical properties of the structural similarity index. IEEE Transactions on Image Processing, 21(4):1488-1499, 2011.
[2] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.

[3] A. X. Chang, A. Dai, T. A. Funkhouser, M. Halber, M. Nießner, M. Savva, S. Song, A. Zeng, and Y. Zhang. Matterport3D: Learning from RGB-D data in indoor environments. In 2017 International Conference on 3D Vision, 3DV 2017, Qingdao, China, October 10-12, 2017, pages 667-676, 2017.

[4] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pages 303-312. ACM, 1996.

[5] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner. ScanComplete: Large-scale scene completion and semantic segmentation for 3D scans. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4578-4587, 2018.
[6] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Trans. Graph., 36(3):24:1-24:18, 2017.

[7] A. Dai, C. R. Qi, and M. Nießner. Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6545-6554, 2017.

[8] A. Dai, C. Diller, and M. Nießner. SG-NN: Sparse generative neural networks for self-supervised scene completion of RGB-D scans. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2020.

[9] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414-2423, 2016.

[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[11] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626-6637, 2017.

[12] J. Huang, A. Dai, L. J. Guibas, and M. Nießner. 3DLite: Towards commodity 3D scanning for content creation. ACM Trans. Graph., 36(6):203:1, 2017.

[13] J. Huang, J. Thies, A. Dai, A. Kundu, C. M. Jiang, L. Guibas, M. Nießner, and T. Funkhouser. Adversarial texture optimization from RGB-D scans. 2020.

[14] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125-1134, 2017.

[15] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. A. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. J. Davison, and A. W. Fitzgibbon. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, October 16-19, 2011, pages 559-568, 2011.
[16] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

[17] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1987, Anaheim, California, USA, July 27-31, 1987, pages 163-169, 1987.

[18] D. Maturana and S. Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany, September 28 - October 2, 2015, pages 922-928, 2015.

[19] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
[20] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. W. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, Basel, Switzerland, October 26-29, 2011, pages 127-136, 2011.

[21] M. Oechsle, L. Mescheder, M. Niemeyer, T. Strauss, and A. Geiger. Texture fields: Learning texture representations in function space. In Proceedings of the IEEE International Conference on Computer Vision, pages 4531-4540, 2019.

[22] J. J. Park, P. Florence, J. Straub, R. A. Newcombe, and S. Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 165-174, 2019.

[23] G. Riegler, A. O. Ulusoy, and A. Geiger. OctNet: Learning deep 3D representations at high resolutions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6620-6629, 2017.
[24] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li. PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proceedings of the IEEE International Conference on Computer Vision, pages 2304-2314, 2019.

[25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[26] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. A. Funkhouser. Semantic scene completion from a single depth image. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 190-198, 2017.

[27] Y. Sun, Z. Liu, Y. Wang, and S. E. Sarma. Im2Avatar: Colorful 3D reconstruction from a single image. arXiv preprint arXiv:1804.06375, 2018.

[28] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016.
[29] T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker, and A. J. Davison. ElasticFusion: Dense SLAM without a pose graph. In Robotics: Science and Systems XI, Sapienza University of Rome, Rome, Italy, July 13-17, 2015, 2015.

[30] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1912-1920, 2015.

[31] Q. Xu, W. Wang, D. Ceylan, R. Mech, and U. Neumann. DISN: Deep implicit surface network for high-quality single-view 3D reconstruction. In Advances in Neural Information Processing Systems 32, pages 492-502, 2019.

[32] G. Yang, X. Huang, Z. Hao, M.-Y. Liu, S. Belongie, and B. Hariharan. PointFlow: 3D point cloud generation with continuous normalizing flows. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[33] M. Zollhöfer, P. Stotko, A. Görlitz, C. Theobalt, M. Nießner, R. Klein, and A. Kolb. State of the art on 3D reconstruction with RGB-D cameras. Computer Graphics Forum, 37:625-652, 2018. doi: 10.1111/cgf.13386.
Periodicity, Thermal Effects, and Vacuum Force: Rotation in Random Classical Zero-Point Radiation

Yefim S. Levin
Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215

arXiv:1003.4458, March 24, 2010

Abstract

Thermal effects of acceleration through a vacuum have been investigated in the past from different perspectives, with both quantum and classical methods. However, the existence of the thermal effects associated with rotation in a flat vacuum requires a deeper analysis. In this work we show that for a detector rotating in random classical zero-point electromagnetic or massless scalar radiation at zero temperature such thermal effects exist. Analysis and calculations are carried out in terms of correlation functions of the random classical electromagnetic or massless scalar field in the rotating reference system. This system is constructed as an infinite set of Frenet-Serret tetrads µ_τ defined so that the detector is at rest in a tetrad at each proper time τ. In particular, (1) the correlation functions, more exactly their frequency spectra, contain the Planck thermal factor 1/(exp(ℏω/k_B T_rot) − 1), and (2) the energy density the rotating detector observes is proportional to the sum of the energy densities of Planck's spectrum at the temperature T_rot = ℏΩ/(2πk_B) and of zero-point radiation. The proportionality factor is (2/3)(4γ² − 1) for an electromagnetic field and (2/9)(4γ² − 1) for a massless scalar field, where γ = (1 − (Ωr/c)²)^(−1/2) and r is the detector rotation radius. The origin of these thermal effects is the periodicity of the correlation functions and their discrete spectrum, both following from rotation with angular velocity Ω. Correlation functions without periodicity properties do not display thermal features.
The thermal energy can also be interpreted as the source of a force, f_vac, applied to the rotating detector by the vacuum field, a "vacuum force". The f_vac depends on the size of neither the charge nor the mass, like the force in the Casimir model for a charged particle, but, contrary to the latter, it is directed toward the center of the circular orbit. The f_vac grows infinitely in magnitude as r → r_0 = c/Ω. Therefore the radius of circular orbits with fixed Ω is bounded. Orbits with a radius greater than r_0 do not exist simply because the returning vacuum force becomes infinite. On the outermost orbit, of radius r_0, the linear velocity of the rotating particle would reach c. The f_vac becomes very small and proportional to r when r is small, r ≪ c/Ω. Such dependence of the vacuum force on radius, at large and small r, can be associated respectively with the so-called confinement and asymptotic freedom known in quantum chromodynamics, and provides a new explanation for them.

1 Introduction

This work is focused on thermal effects hypothetically associated with rotation through a vacuum of a massless scalar or electromagnetic field in flat (Minkowski) space, and is performed in a classical approach. Investigations of rotation are mostly based on the ideas developed for linear acceleration through a vacuum [1]-[9]. For example, in [1] the authors write: "... in the Rindler case, a set of uniformly accelerated particle detectors ... will give zero response in the Rindler vacuum state, and will give a consistent thermal response to the Minkowski vacuum state." And later on: "We might therefore expect a set of rotating detectors to similarly reveal the state of a rotating vacuum field". This program for the rotation case was used, for example, in [3] and [10], and after that in [2]. Below we discuss some results of [2].
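Before turning to [2], it is worth a quick numerical aside: the temperature quoted in the abstract, T_rot = ℏΩ/(2πk_B), is extraordinarily small for laboratory rotation rates. The following sketch (our illustration, not part of the paper; SI constants) makes this concrete:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def t_rot(omega):
    """Temperature T_rot = hbar*Omega/(2*pi*k_B) for angular velocity omega in rad/s."""
    return HBAR * omega / (2.0 * math.pi * K_B)

# Even a fast mechanical rotation, f = 1 kHz (omega = 2*pi*1e3 rad/s),
# gives a temperature of only a few nanokelvin times 1e-3.
T = t_rot(2.0 * math.pi * 1.0e3)  # ~7.6e-9 K
```

This is the rotational analogue of the smallness of the Unruh temperature for achievable linear accelerations, and it explains why such thermal effects are discussed theoretically rather than measured directly.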
A rotating 4-space with the Trocheries-Takeno (T) coordinates and a non-static, non-diagonal metric, together with the associated quantum Fock space (referred to below as the T-F space) of a massless scalar field, is considered in [2], along with Minkowski (M) space and its associated Fock (M-F) quantum space. The T-coordinates (t, r, θ, z) are connected with the M-coordinates (t̃, r̃, θ̃, z̃) as

    t = t̃ cosh Ωr̃ − r̃θ̃ sinh Ωr̃,   r = r̃,   θ = θ̃ cosh Ωr̃ − (t̃/r̃) sinh Ωr̃,   z = z̃.   (1)

The main motivation to use T-coordinates is that a particle at rest in T-space, with constant values of (r, θ, z), has a velocity v(r) = tanh(Ωr) in M-space, which is less than the speed of light for any r̃. The "rotating vacuum" of a massless scalar field in the T-F space is not the Minkowski vacuum. Based on this fact, the response function, R(E), of a rotating Unruh-De Witt detector in a massless scalar field is obtained in [2]. It describes the probability of excitation of the detector with energy E per unit proper time. The authors consider R(E) for three different situations, depending on the motion of the detector and the state in which the quantized field is prepared.

1. The response function, referred to as R_M^(r)(E, R_0) [2], for the field in the Minkowski vacuum state, |0_M⟩, and the detector rotating (r) in Minkowski space on a circular orbit with radius R_0.

2. The response function, R_T^(i)(E, R_0), for the field in the rotating vacuum state, |0_T⟩, of the T-F quantum space, and the inertial (i), non-rotating, detector at the distance R_0 from the field rotation center in M-space.

3. The response function, R_T^(r)(E, R_0), for the field in the rotating vacuum state, |0_T⟩, of the T-F quantum space and the detector rotating (r) in Minkowski space on an orbit with radius R_0.

The results obtained for the first two scenarios look self-consistent and meet the expectations based on the experience gained from the Rindler case of a uniformly accelerated detector, at least in part.
They still do not reveal Planck's thermal properties of a vacuum associated with rotation. The third situation is less clear. This is what the authors say about it [2]: "...we once again arrive at the same confrontation between canonical quantum field theory and the detector formalism, which was settled by Letaw and Pfautsch and Padmanabhan: how is it possible for the orbiting detector to be excited in the rotating vacuum". The non-null excitation rate in the third scenario, the authors say in [2], can be attributed to two independent origins: (1) the non-staticity of the Trocheries-Takeno metric, and (2) the Unruh-De Witt detector model adopted in [2]. A Glauber model detector would not be excited in this situation. The authors in [2] give this problem the following explanation and solution: "Because the rotating vacuum excites even a rotating detector, we consider this as a noise which will be measured by any other state of motion of the detector." And: "This amounts to saying that the inertial detector will also measure this noise, and we normalize the rate in this situation by subtracting from it the value of R_T^(r)(E, R_0), resulting in a normalized excitation rate for the inertial detector in interaction with the field in the rotating vacuum."
In this work we do not use the concept of "rotating vacuum". The word "rotation" is associated with a detector moving on a circle only. Our approach to the problem is based on the concept of "measurements" made by a point-like detector, rotating in a scalar or electromagnetic vacuum. Bernard suggested in [11] "to represent measurements by an observable, without describing the detection process," with a "transformation law which tells us how this observable is modified when the same detector is forced to move along some other world line". This should be applicable to both quantum and classical theory. Nevertheless, analysis of local measurements in terms of local observables only, without any references to a detector features, turns out to have some restrictions. Indeed, the character of the motion of a detector implies some detector features and therefore determines a detection process. For example, in the frame of special relativity theory, a rotating detector should have a charge to be rotated and held on a circle. Therefore it behaves like a rotating oscillator and should be selective to frequencies. We will show that the angular velocity of the observer is a key parameter to describe the thermal properties of the rotation in random zero-point classical radiation. Regarding the transformation law of an observable, mentioned above [11], to represent a measurement, both quantum and classical, the simplest assumption is that the observable is an invariant for all possible world lines and coordinate systems. Mathematically an observable with such properties can be described in a tetrad formalism, because tetrad components of vectors and tensors are invariants with respect to coordinate transformations [12] - [17] A similar approach has been used in [4] for a uniformly accelerated observer, even though the tetrad formalism was not used explicitly. Inertial systems, local in terms of time and each defined at an observer proper time, were used in [4]. 
The tetrad formalism was used in [7] to describe the interaction between two uniformly accelerated oscillators in a vacuum, located in a plane perpendicular to the direction of motion. In this work, the measurements made by the rotating detector are described in the rotating reference system consisting of an infinite number of instantaneous inertial reference frames, mathematically defined as tetrads at each moment of the detector proper time. Along with such a reference system, the two-point correlation functions of the electromagnetic and massless scalar fields and the energy density of these fields are defined and analyzed for zero-point radiation. The article is organized as follows.

Let the detector be a particle moving through an electromagnetic field in Minkowski space-time, and let the detector measure the field on its world line in a locally inertial reference frame. We assume that the field is classical and in a vacuum state. The mathematical definition of the vacuum field state is given in the next subsection. The quantities associated with such local measurements can be described in the 4-orthogonal tetrad (OT) formalism [14], [18]. Any vector or tensor may be resolved along the four tetrad vectors µ^i_(a), a = 1, 2, 3, 4 (the tetrad vectors are described in Appendix A). For example, the 4-vector velocity of a detector and the tensor of the electromagnetic field are respectively

    U^i = U^(a)(µ) µ^i_(a),   F_ik = µ^(a)_i µ^(b)_k F_(ab)(µ).   (3)

The components

    U_(a)(µ) = µ^i_(a) U_i   (4)

and

    F_(ab)(µ) = µ^i_(a) µ^k_(b) F_ik   (5)

are invariants in the tensorial sense (i.e., with respect to coordinate transformations) and are defined in a local reference frame with the locally Lorentz-invariant metric tensor η_ab = η^ab = diag(1, 1, 1, −1) (Appendix A).
In this work the OTs are defined as Frenet-Serret orthogonal tetrads associated with each point of the world line of the rotating detector with 4-vector velocity

    U^i = c(−βγ sin α, βγ cos α, 0, γ),   (6)

where β = v/c = Ωa/c, γ = (1 − β²)^(−1/2), α = Ωγτ, and

    µ^i_(1) = (cos α, sin α, 0, 0),
    µ^i_(2) = (−γ sin α, γ cos α, 0, βγ),
    µ^i_(3) = (0, 0, 1, 0),
    µ^i_(4) = U^i/c.   (7)

In the local reference frames defined by these tetrads, the detector is at rest:

    U_(a) = µ^i_(a) U_i = µ^i_(a) U^k g_ik = (0, 0, 0, −c).   (8)

The 3-vector acceleration of the detector in them is constant in both magnitude and direction,

    U̇_(a) = µ^i_(a) U̇_i = µ^i_(a) U̇^k g_ik = (−aΩ²γ², 0, 0, 0),   g_ik = diag(1, 1, 1, −1),   (9)

as it would be in the case of a uniformly accelerated detector. That is why we preferred to use Frenet-Serret tetrads, and not Fermi-Walker ones: Fermi-Walker tetrads do not have this feature (see Appendix A).

Following formulas (5) and (7), the electric field E_(k)(µ|τ) and the magnetic field H_(k)(µ|τ), which denote local observable quantities in the Frenet-Serret reference frame µ_τ at the proper time τ of the rotating detector, can be given in terms of the electric field E_k and the magnetic field H_k in the inertial laboratory coordinate system:

    E_(1)(µ|τ) = F_(41)(µ|τ) = E_1 γ cos α + E_2 γ sin α − H_3 βγ,
    E_(2)(µ|τ) = F_(42) = −E_1 sin α + E_2 cos α,
    E_(3)(µ|τ) = F_(43) = E_3 γ + H_1 βγ cos α + H_2 βγ sin α,
    H_(1)(µ|τ) = F_(23) = H_1 γ cos α + H_2 γ sin α + E_3 βγ,
    H_(2)(µ|τ) = F_(31) = −H_1 sin α + H_2 cos α,
    H_(3)(µ|τ) = F_(12) = H_3 γ − E_1 βγ cos α − E_2 βγ sin α,   (10)

where α = Ωγτ.

The mathematical subject of this work is bilinear combinations of the local fields, taken in two tetrads and averaged over the field in a vacuum state defined in the laboratory coordinate system.
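As a consistency check on Eqs. (6)-(10), the tetrad vectors can be verified to be orthonormal in the metric g_ik = diag(1, 1, 1, −1), and the frame fields (10) can be verified to preserve the Lorentz invariants E² − H² and E·H. The following sketch is our own illustration (c = 1, arbitrary sample values), not part of the paper:

```python
import math

G = (1.0, 1.0, 1.0, -1.0)  # metric g_ik = diag(1, 1, 1, -1), index order (x, y, z, t)

def mdot(u, v):
    """Minkowski inner product g_ik u^i v^k."""
    return sum(gi * ui * vi for gi, ui, vi in zip(G, u, v))

def tetrad(beta, alpha):
    """Frenet-Serret tetrad vectors mu^i_(a) of Eq. (7)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    mu1 = (math.cos(alpha), math.sin(alpha), 0.0, 0.0)
    mu2 = (-gamma * math.sin(alpha), gamma * math.cos(alpha), 0.0, beta * gamma)
    mu3 = (0.0, 0.0, 1.0, 0.0)
    mu4 = (-beta * gamma * math.sin(alpha), beta * gamma * math.cos(alpha), 0.0, gamma)  # U^i/c
    return (mu1, mu2, mu3, mu4)

def frame_fields(E, H, beta, alpha):
    """Local fields of Eq. (10) from laboratory fields E = (E1,E2,E3), H = (H1,H2,H3)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    c, s = math.cos(alpha), math.sin(alpha)
    E1, E2, E3 = E
    H1, H2, H3 = H
    Ef = (gamma * (E1 * c + E2 * s) - beta * gamma * H3,
          -E1 * s + E2 * c,
          gamma * E3 + beta * gamma * (H1 * c + H2 * s))
    Hf = (gamma * (H1 * c + H2 * s) + beta * gamma * E3,
          -H1 * s + H2 * c,
          gamma * H3 - beta * gamma * (E1 * c + E2 * s))
    return Ef, Hf
```

The orthonormality check confirms that the transformation to the local frame is a combination of a rotation by α and a boost with velocity β, which is why the standard field invariants survive in Eq. (10).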
Formulas (10) can be used to calculate the following two-field correlation functions (CFs) of the electromagnetic field at the rotating detector:

    I^E_(ab) ≡ ⟨E_(a)(µ_1|τ_1) E_(b)(µ_2|τ_2)⟩,
    I^EH_(ab) ≡ ⟨E_(a)(µ_1|τ_1) H_(b)(µ_2|τ_2)⟩,
    I^H_(ab) ≡ ⟨H_(a)(µ_1|τ_1) H_(b)(µ_2|τ_2)⟩,   (11)

where a, b = 1, 2, 3. In these expressions µ_1 and µ_2 are two reference frames (tetrads) on the circle of the rotating detector at the proper times τ_1 and τ_2 respectively. For example,

    I^E_(11) = ⟨E_1(τ_1)E_1(τ_2)⟩ γ² cos α_1 cos α_2 + ⟨E_1(τ_1)E_2(τ_2)⟩ γ² cos α_1 sin α_2
             + ⟨E_2(τ_1)E_1(τ_2)⟩ γ² sin α_1 cos α_2 + ⟨E_1(τ_1)H_3(τ_2)⟩ (−1)βγ² cos α_1
             + ⟨H_3(τ_1)E_1(τ_2)⟩ (−1)βγ² cos α_2 + ⟨E_2(τ_1)E_2(τ_2)⟩ γ² sin α_1 sin α_2
             + ⟨E_2(τ_1)H_3(τ_2)⟩ (−1)βγ² sin α_1 + ⟨H_3(τ_1)E_2(τ_2)⟩ (−1)βγ² sin α_2
             + (βγ)² ⟨H_3(τ_1)H_3(τ_2)⟩.   (12)

The expressions for some other CFs are given in Appendix B; they follow from (10). When τ_1 → τ_2, these expressions can be used to calculate expectation values of the energy density. Here ⟨ ⟩ means averaging over a vacuum state of the electromagnetic field in the laboratory coordinate system. In the next section we consider averaging for the situation when the vacuum state of the electromagnetic field is random classical zero-point radiation.

2.2 Correlation Function Calculation Scheme: Example for I^E_(11)
In the classical case, the electric and magnetic field components E k and H k in (10) and (12) represent the random zero-point radiation in the laboratory coordinate system [4](47), (48) at a space-time position (t, r) of the rotating detector: E(τ ) = 2 λ=1 d 3 kǫ( k, λ)h 0 (ω) cos[ k r(τ ) − ωγτ − θ( k, λ)], H(τ ) = 2 λ=1 d 3 k[k,ǫ ( k, λ)]h 0 (ω) cos[ k r(τ ) − ωγτ − θ( k, λ)],(13) where, in distinction from [4], the laboratory coordinates r(t) and time t are taken in terms of the proper time τ of the rotating observer: r(τ ) = (a cos Ωγτ, a sin Ωγτ, 0), t = γτ,(14) the θ( k, λ) describe random phases distributed uniformly on the interval (0, 2π) and independently for each wave vector k and polarization λ of a plane wave, and π 2 h 2 0 (ω) = (1/2)hω.(15) Averaging in (12) means averaging over the random phases θ(k, λ). To illustrate the technique of CF calculation, we will compute the CF I E (11) as an example. This technique is very similar to the one in [4] developed for the uniformly accelerated case, as opposed to rotation, though the tetrad formalism is not used there. The < > expressions in (12) contain double integrals and double sums d k d k ′ λ λ ′ . Using the known θ-function properties [4] < cos θ( k, λ) cos θ( k ′ , λ ′ ) >=< sin θ( k, λ) sin θ( k ′ , λ ′ ) >= 1 2 δ λ λ ′ δ 3 ( k − k ′ ), < cos θ( k, λ) sin θ( k ′ , λ ′ ) >= 0(16) and the sum over polarizations 2 λ=1 ǫ i ( k, λ)ǫ j ( k ′ , λ ′ ) = δ ij − k i k j /k 2 ≡ δ ij −k ikj ,(17) they can be reduced to an integral-sum of the type d k λ .
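Two small numerical cross-checks of the scheme so far (illustrative, not from the paper): the expansion (12) of E_(1)(τ1)E_(1)(τ2) is a bilinear identity that holds for every field realization even before averaging, and the phase averages (16) can be reproduced by Monte Carlo for a single mode. A sketch in Python:

```python
import math, random

def E1_local(E1, E2, H3, beta, alpha):
    """E_(1) in the comoving frame, first line of Eq. (10)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return E1*gamma*math.cos(alpha) + E2*gamma*math.sin(alpha) - H3*beta*gamma

def expansion_12(f1, f2, beta, a1, a2):
    """Right-hand side of Eq. (12) for one field realization (no averaging)."""
    g2 = 1.0 / (1.0 - beta**2)                 # gamma^2
    c1, s1, c2, s2 = math.cos(a1), math.sin(a1), math.cos(a2), math.sin(a2)
    E1a, E2a, H3a = f1
    E1b, E2b, H3b = f2
    return (E1a*E1b*g2*c1*c2 + E1a*E2b*g2*c1*s2 + E2a*E1b*g2*s1*c2
            - E1a*H3b*beta*g2*c1 - H3a*E1b*beta*g2*c2 + E2a*E2b*g2*s1*s2
            - E2a*H3b*beta*g2*s1 - H3a*E2b*beta*g2*s2 + beta*beta*g2*H3a*H3b)

random.seed(1)
f1 = tuple(random.uniform(-1, 1) for _ in range(3))   # (E1, E2, H3) at tau_1
f2 = tuple(random.uniform(-1, 1) for _ in range(3))   # (E1, E2, H3) at tau_2
beta, a1, a2 = 0.5, 0.3, 1.1
identity_gap = abs(E1_local(*f1, beta, a1) * E1_local(*f2, beta, a2)
                   - expansion_12(f1, f2, beta, a1, a2))

# Monte Carlo check of the phase averages (16) for one mode:
# <cos th cos th> = 1/2, <cos th sin th> = 0, independent phases uncorrelated.
random.seed(0)
N = 200_000
cc = cs = cc_ind = 0.0
for _ in range(N):
    t1 = random.uniform(0.0, 2.0*math.pi)
    t2 = random.uniform(0.0, 2.0*math.pi)
    cc += math.cos(t1)**2
    cs += math.cos(t1)*math.sin(t1)
    cc_ind += math.cos(t1)*math.cos(t2)
cc, cs, cc_ind = cc/N, cs/N, cc_ind/N
```

The delta functions in (16) appear only in the continuum limit; the single-mode averages above capture the 1/2 normalization and the vanishing cross terms.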
Then using variable change in the integrands, from k to k ′ , k x cos α +k y sin α =k ′ x , −k x sin α +k y cos α =k ′ y ,(18)with α = α 1 + α 2 2 = Ωγ(τ 2 + τ 1 ) 2 ,k i = k i /k, i = x, y, z,(19) we come to the following expressions for the <> terms in (12): < E 1 (τ 1 )E 1 (τ 2 ) >= d 3 k R + (− cos 2 α) d 3 kk 2 x R + (− sin 2 α) d 3 kk 2 y R, < E 1 (τ 1 )E 2 (τ 2 ) >=< E 2 (τ 1 )E 1 (τ 2 ) >= − sin 2α 2 d 3 kk 2 x R + sin 2α 2 d 3 kk 2 y R, < E 1 (τ 1 )H 3 (τ 2 ) >=< E 1 (τ 2 )H 3 (τ 1 ) >= − cos α d 3 kk y R, < E 2 (τ 1 )E 2 (τ 2 ) >= d 3 k R + (− sin 2 α) d 3 kk 2 x R + (− cos 2 α) d 3 kk 2 y R, < E 2 (τ 1 )H 3 (τ 2 ) >=< E 2 (τ 2 )H 3 (τ 1 ) >= (− sin α) d 3 kk y R, < H 3 (τ 1 )H 3 (τ 2 ) >= d 3 kk 2 x R + d 3 kk 2 y R.(20) In these expressions, the prime symbol of the "dummy" variable k ′ is omitted for simplicity, and we use the following notations: R = h 2 0 (ω) 1 2 cos kF, F = cγ(τ 2 − τ 1 )[1 −k y v c sin δ/2 δ/2 ], δ = α 2 − α 1 = Ωγ(τ 2 − τ 1 ).(21) After some simplifications we come to the following expression for I E (11) : I E (11) = E (1) (µ 1 |τ 1 )E (1) (µ 2 |τ 2 ) = γ 2 cos δ d 3 k h 2 0 (ω) 1 2 cos kF + 2βγ 2 cos δ 2 d 3 kk y h 2 0 (ω) 1 2 cos kF + γ 2 [β 2 − cos 2 δ 2 ] d 3 kk 2 x h 2 0 (ω) cos kF + γ 2 [β 2 + sin 2 δ 2 ] d 3 kk 2 y h 2 0 (ω) 1 2 cos kF.(22) This function clearly depends only on the proper time interval τ 2 − τ 1 and is not dependent on (τ 1 + τ 2 )/2 that is I E (11) = I E (11) (τ 2 − τ 1 ).(23) General expressions for other CFs can be found in Appendix B. They have the same properties and also depend only on the proper time interval τ 2 − τ 1 . 2.3 The Correlation Function I E (11) in Terms of Elementary Functions. The CF I E (11) ≡ E (1) (µ 1 |τ 1 )E (1) (µ 2|τ 2 ) defined and discussed above can be represented in terms of elementary functions. 
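The θ-integrals that appear in this reduction, Eqs. (25)-(27) below, can be spot-checked numerically. A minimal sketch (pure-Python Simpson rule, illustrative value k = 0.5):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1))
         + 2*sum(f(a + 2*i*h) for i in range(1, n//2)))
    return s*h/3.0

def theta_integral(p, k):
    """Numerical value of the integral of sin^p(theta) (1 - k^2 sin^2 theta)^(-7/2)
    over theta in [0, pi], for p = 1, 3, 5."""
    return simpson(lambda t: math.sin(t)**p / (1.0 - k*k*math.sin(t)**2)**3.5,
                   0.0, math.pi)

def closed_form(p, k):
    """Closed forms (25)-(27)."""
    u = 1.0 - k*k
    if p == 1:
        return 2/(5*u) + 8/(15*u**2) + 16/(15*u**3)
    if p == 3:
        return 4/(15*u**2) + 16/(15*u**3)
    if p == 5:
        return 16/(15*u**3)
    raise ValueError(p)
```

At k = 0 the closed forms reduce to the elementary values 2, 4/3, and 16/15, which is a quick consistency check; |k| < 1 is required, consistent with v < c.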
After integration of (22) in spherical coordinates, over k and then over φ, we come to the expression: I E (11) = 3hc 2π 2 [c(t 2 − t 1 )] 4 γ 2 {+[2π cos δ] π 0 dθ sin θ (1 − k 2 sin 2 θ) 7/2 +[3πk 2 cos δ − 2π cos 2 (δ/2) + 2πβ 2 − 8πβk cos(δ/2) + π] π 0 dθ sin 3 θ (1 − k 2 sin 2 θ) 7/2 +[−3πk 2 cos 2 (δ/2) + 3πβ 2 k 2 − 2πβk 3 cos(δ/2) + 4πk 2 ] π 0 dθ sin 5 θ (1 − k 2 sin 2 θ) 7/2 }(24) (see Appendix C for details). The integrals over θ in this expression are : π 0 dθ sin θ (1 − k 2 sin 2 θ) 7/2 = 2 5(1 − k 2 ) + 8 15(1 − k 2 ) 2 + 16 15(1 − k 2 ) 3 ,(25)π 0 dθ sin 3 θ (1 − k 2 sin 2 θ) 7/2 = 4 15(1 − k 2 ) 2 + 16 15(1 − k 2 ) 3 ,(26)π 0 dθ sin 5 θ (1 − k 2 sin 2 θ) 7/2 = 16 15(1 − k 2 ) 3 .(27)k = − v c sin δ/2 δ/2 , δ = Ωγ(τ 2 − τ 1 ). Other CFs can also be expressed in terms of elementary functions. In this form the CFs do not display thermal features. In the next section we will investigate under what conditions they can display thermal properties. We will show that periodic CFs have thermal features. 2.4 Periodicity of Correlation Functions: Example for I E (11) . We assume that CFs at a rotating detector should be periodic because CF measurements is one of the tools the detector can use to justify the periodicity of its motion. Mathematically it means that I E (11) (t 2 − t 1 ) = I E (11) ( (t 2 − t 1 ) + 2π Ω n )(28) or I E (11) (τ 2 − τ 1 ) = I E (11) ( (τ 2 − τ 1 ) + 2π Ωγ n )(29) Here Ω = 2π T is an angular velocity of the rotating detector and n = 0, 1, 2, 3, ... . Breaking down cos kF in (22) into odd and even powers of k y and taking into consideration that the odd part of the integrand gives zero after integration over k y it is easy to show that the CF is periodic if in its integrand ω = ck = Ωn. It means that the rotating detector observes not the entire random electromagnetic radiation spectrum but only a discrete part of it. We could also expect the same result based on the following consideration. 
Even though no assumptions about the structure of the rotating detector have been made so far, it should have some common features connected with the type of its motion. First of all, it should carry a charge, simply because a neutral, uncharged detector can neither be used to observe the electromagnetic field nor be kept on a circular orbit. The charge of the rotating detector then behaves as an oscillator with frequency Ω and resonance frequencies nΩ. Of course, this discrete spectrum is the same as the radiation spectrum of a rotating electric charge [20](39.29). The expression (24) for I E (11) cannot be used to analyze the consequences of periodicity because the integration over the entire continuous spectrum of ω has already been carried out in it. That is why we use the expression (22), taken before the integration over ω. Let us now consider the correlation function I E (11) , periodic over τ , with the discrete spectrum. There are two ways to do this. The first is simpler: modify the formula (22) for I E (11) directly for the discrete spectrum. It is described below in the next subsection. The second is identical to the approach used above for the continuous spectrum, but with the equations (13) and relationships (16) modified for the discrete spectrum. It is described in Appendix D. 2.5 Correlation Functions With the Discrete Spectrum: Example for I E (11) . The integrals in (22) can be represented as d 3 k[ ] 1 2 h 2 0 (ω) cos kF = chk 4 0 4π 2 dO[ ]S,(31) where S = dκ κ 3 cos κF d , dO = dθdφ sin θ, κ = k k 0 , k 0 = Ω/c,(32) and F d = k 0 F = δ[1 −k y v c sin δ/2 δ/2 ].(33) The expressions in [ ], namely 1,k y = ky k ,k 2 x = ( kx k ) 2 , andk 2 y = ( ky k ) 2 , do not depend on κ. In the discrete-spectrum case the integration in (31) over κ should be replaced by summation over n, so the only term to be changed is S → S d .
It becomes S d = ∞ 0 n 3 cos nF d .(34) Then the periodical CF, corresponding to (22), with the discrete spectrum can be defined in the form I E (11)d ≡ E (1) (µ 1 |τ 1 )E (1) (µ 2 |τ 2 ) d = chk 4 0 4π 2 { γ 2 cos δ dO S d + 2βγ 2 cos δ 2 dOk y S d + γ 2 [β 2 − cos 2 δ 2 ] dOk 2 x S d + γ 2 [β 2 + sin 2 δ 2 ] dOk 2 y S d },(35) where integration is held on angular variables only, and S d is a series sum which is analyzed in the next section using the Abel-Plana formula. The Abel-Plana Formula and Thermal Properties of Correlation Functions With the Discrete Spectrum: Example for I E (11)d . Using Abel-Plana summation formula [21], [22], [23] ∞ n=0 f (n) = ∞ 0 f (x) dx + f (0) 2 + i ∞ 0 dt f (it) − f (−it) e 2πt − 1 ,(36) with f (n) = n 3 cos nF d we come to the following expression for S d (34): Ω 4 S d = ∞ 0 d ωω 3 cos(ωF ) + ∞ 0 dω 2ω 3 cosh(ωF ) e 2πω/Ω − 1 ,F = F d Ω ,(38) and the CF (35) becomes I E (11)d = E (1) (µ 1 |τ 1 )E (1) (µ 2 |τ 2 ) d = dO K(θ, φ, δ) × 2 3h πc 3 { ∞ 0 d ωω 3 cos(ωF ) + ∞ 0 dω 2ω 3 cosh(ωF ) e 2πω/Ω − 1 },(39) where K(θ, φ, δ) = 3 8π { γ 2 cos δ + 2βγ 2 cos δ 2k y + γ 2 [β 2 − cos 2 δ 2 ]k 2 x + γ 2 [β 2 + sin 2 δ 2 ]k 2 y .(40) Expressions for S d after integration over ω are given in (96), Appendix E, and further discussion could have been made in terms of obtained elementary functions. But it is simpler to consider the structure of the integrand in the expression for S d explicitly. 
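The Abel-Plana formula (36) itself is easy to validate numerically on a convergent test function such as f(n) = e^{-n}, for which i[f(it) − f(−it)] = 2 sin t. A sketch (illustrative, pure Python):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1))
         + 2*sum(f(a + 2*i*h) for i in range(1, n//2)))
    return s*h/3.0

# Left side of (36) for f(n) = exp(-n): a geometric series.
lhs = 1.0 / (1.0 - math.exp(-1.0))

# Right side: integral of f over [0, inf) + f(0)/2 + the boundary term
# with i*(e^{-it} - e^{it}) = 2 sin t.  The integrand tends to 1/pi as t -> 0,
# so starting just above zero with expm1 is numerically safe.
tail = simpson(lambda t: 2.0*math.sin(t) / math.expm1(2.0*math.pi*t), 1e-9, 30.0)
rhs = 1.0 + 0.5 + tail
```

The exponentially suppressed integrand is why only the boundary term survives as a "thermal" correction when the formula is applied to S_d below.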
The CF I E (11)d resembles the CF of the thermal radiation, with Planck's spectrum and zero-point radiation included, observed by a detector at rest in an inertial frame [4](73) : E T i (0, s − t/2)E T i (0, s + t/2) = 2 3h πc 3 { ∞ 0 dωω 3 cos ωt + ∞ 0 dω 2ω 3 cos ωt eh ω kT − 1 }(41) which corresponds to the spectral function π 2h2 T (ω) = 1 2h ω cothh ω 2kT =hω( 1 2 + 1 eh ω/kT − 1 ).(42)T rot =h Ω 2πk B .(43) The Planck factor is an indication that some thermal effects accompany the detector rotation in the random classical zero-point electromagnetic radiation though there is also a significant distinction between them. In (39),F = t(1 −k y v c sin(Ωt/2) Ωt/2 ) and cosh are used instead of t and cos respectively in (41). The coefficientF depends on both θ and φ becausek y = sin θ sin φ. Besides the expression (39), compared to (41), contains coefficient K(θ, φ, δ) and integration over θ and φ. So the CF I E (11)d at a rotating detector explores some thermal properties but does not coincide with the CF (41) at an inertial observer put in the radiation with Planck's radiation. Partly it occurs because radiation, isotropic in the laboratory system, looks anisotropic for a rotating detector. Is there any situation when operands in (39) and (41) are identical ? It is easy to see that in the limit t → 0 and thereforeF → 0 , when two observation points, τ 1 and τ 2 (or t 1 and t 2 in the laboratory system) coincide, both expressions are identical. This observation brings up the idea that the energy density ( one-observation-point quantity and consisting of diagonal elements of the CF ) of the random classical electromagnetic radiation measured by a detector, rotating through a zero point radiation, has the Planck spectrum at the temperature T rot (43). This issue will be discussed in the next section. The Energy Density of Random Classical Electromagnetic Radiation Observed by a Rotating detector: Periodicity and Planck's Spectrum. 
In any reference frame µ τ , with Minkowsky metrics η (ab) , local lorentz coordinates can be introduced [13], section 9.6. The local reference frame, defined this way, is an inertial system, and all laws of Special Relativity should be true in this locally inertial reference frame. Then the energy density measured by the rotating observer at µ τ will be of the form: w = 1 8π 3 a=1 ( E 2 (a) (µ|τ ) + H 2 (a) (µ|τ ) )(44) or, in terms of electric and magnetic fields measured in the laboratory coordinate system (10), w = 1 4π { [ E 2 1 + E 2 3 ]γ 2 (1 + β 2 ) + E 2 2 } + 1 8π 4 γ 2 β ( E 1 H 3 − E 3 H 1 ),(45) where as we will show below E 2 i = H 2 i , i = 1, 2, 3, and w does not depend on the choice of a tetrad µ. We have already seen that the correlation functions with a periodicity have a discrete spectrum. Effectively, in calculations, it means that integral expressions for zero-point random radiation fields E i and H i in the laboratory coordinates should be modified and presented as series over frequencies. Explicit expressions for the fields E i and H i with discrete spectrum are given in Appendix D and could be used in ( 45) to take into consideration periodicity. With the help of these formulas and using the technique for discrete spectrum described above we come to the following expressions E 1 H 3 − E 3 H 1 = 0,(46) and E 2 i = H 2 i = k 4 0h c 2π 2 dO(1 −k 2 i ) ∞ n=0 n 3 , k 0 = Ω/c(47) for i = 1, 2, 3. Finally, after integration over θ and φ, we have w = (4γ 2 − 1) 3h c 3 π 2 Ω 4 ∞ n=0 n 3 .(48) Using the Abel-Plana formula (38) with F d = 0 this expression can be given in the form: w = 2 (4γ 2 − 1) 3 (w ZP + w T ),(49) where w ZP =h c 3 π 2 ∞ 0 d ω 1 2 ω 3 , w T =h c 3 π 2 ∞ 0 dω ω 3 eh ω/k B Trot − 1 = 4 π 2 k 4 B 60(ch) 3 T 4 rot = 4σ c T 4 rot ,(50) k B is the Boltzman constant, and σ is the Stefan-Boltzman constant. 
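The thermal term in (50) reduces to the Stefan-Boltzmann form because the dimensionless Planck integral equals π⁴/15; a short numerical confirmation (illustrative):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4*sum(f(a + (2*i - 1)*h) for i in range(1, n//2 + 1))
         + 2*sum(f(a + 2*i*h) for i in range(1, n//2)))
    return s*h/3.0

# Substituting x = hbar*omega/(k_B*T_rot) in the w_T integral of (50) leaves
# the integral of x^3/(e^x - 1) over [0, inf), whose value pi^4/15 yields
# w_T = (4 sigma / c) T_rot^4.
planck_integral = simpson(lambda x: x**3 / math.expm1(x), 1e-9, 60.0)
exact = math.pi**4 / 15.0
```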
Thus, due to the periodicity of the motion, the detector rotating in the zero-point radiation under the temperature T = 0 observes not only original zero-point radiation, w ZP , but also the radiation, w T , with Planck's spectrum if parameter T rot is interpreted as the temperature associated with the detector rotation. Expression w T is exactly the energy density of the black radiation at the temperature T rot [ [24], (60, 14)]. The factor 2 3 (4γ 2 − 1) comes from integration in (47) over angles due to anisotropy of the electromagnetic field measured by the rotating observer. All this consideration is true when Ωr < c. The first term of (49), corresponding to ZP radiation, is divergent for any r and Ω. The second one, describing the thermal properties, is convergent, though it is growing to infinity if r → c/Ω for a fixed Ω or Ω → c/r for a fixed r. Random Classical Massless Zero-Point Scalar Field at a Rotating Detector. Correlation Function. The scalar field ψ s (µ τ |τ ) in a tetrad µ τ has the same form as in the laboratory coordinate system, ψ s (τ ) , taken in the location of the tetrad, because it is a scalar. Then the correlation function measured by an observer rotating through a random classical massless zero-point scalar field radiation has the form [4]: ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) = ψ s (τ 1 )ψ s (τ 2 ) ,(51) where ψ s (τ i ) = d 3 k i f (ω i ) cos{ k i r(τ i ) − ω i γτ i − θ(k i )},(52) and (instead ofh 0 (ω) in (13) ) f 2 (ω i ) =h c 2 2π 2 ω i , ω i = ck 0i , i = 1, 2.(53) The θ-functions, r(τ i ), and t(τ i ) are defined in (14) and (16). Using these expressions and variable change (18) in the double-integral (51) we get the expression : ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) = d 3 kf 2 (ω) 1 2 cos kF,(54) where F is defined in (21). 
Having integrated over k, φ, and θ, we arrive at the expression for the CF of the random classical massless scalar field at the detector rotating through zero-point massless scalar radiation (see details in Appendix F): ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) = −h c π 1 (γ(τ 2 − τ 1 )c) 2 − 4r 2 sin 2 Ωγ(τ 2 −τ 1 ) 2 .(55) This correlation function is also identical, up to a constant, to the positive-frequency Wightman function [1](3). It does not exhibit thermal features. Nevertheless, the situation changes if the periodicity of the CF is taken into consideration. For the scalar field, the CF can be considered periodic for the same reasons it is periodic for the electromagnetic field. This issue is investigated below. Periodicity of the Correlation Function, the Abel-Plana Formula, and the Planck Factor. To take the periodicity of the CF into consideration we have to use its expression (54) before integration over ω. The equation (54) can be written in the form ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) =h ck 2 0 4π 2 dO dκκ cos κF d , dO = sin θdθdφ, κ = k k 0 , k 0 = Ω/c, F d = k 0 F.(56) If this function of τ = τ 2 − τ 1 is periodic then, as we saw for the CF I E (11) above, κ = ck ck 0 = n = 0, 1, 2, ..
and the integral over κ becomes an infinite series: ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) d =h ck 2 0 4π 2 dO ∞ n=0 n cos nF d .(57) Expression (57) defines a new, periodic correlation function of the scalar massless field at the rotating detector. Applying the Abel-Plana summation formula to the series over n, we obtain ψ s (µ 1 |τ 1 )ψ s (µ 2 |τ 2 ) d =h 4π 2 c dO { ∞ 0 dω ω cos ωF − ∞ 0 dω 2ω cosh ωF eh ω kT rot − 1 } (60) The expression in { } is similar to the right side of the expression [4], (27) for the correlation function of the scalar massless zero-point field at a detector at rest in radiation with Planck's spectrum at the temperature T. The appearance of the Planck factor (eh ω kT rot − 1) −1 shows the similarity between the radiation spectrum observed at the rotating detector in the massless scalar zero-point field and the radiation spectrum observed by an inertial observer placed in a thermostat filled with radiation at the temperature T = T rot . But there is also a difference between them: F̄ and cosh are used in the first expression, whereas t and cos are used in the second, respectively. The F̄ is a function of θ and φ. This means that the thermal radiation observed by the rotating detector moving through the massless scalar zero-point radiation is anisotropic. The resemblance between the two expressions becomes closer if t = 0 and F̄ = 0, so that the two observation points coincide; then both expressions are identical. But in the case of a one-point observation, which occurs when F̄ = 0, it is better to consider the energy density of the massless scalar field, as is done in the next section. The Energy Density and Planck's Spectrum. The energy density T (44) of the massless scalar field at the detector rotating through the zero-point massless scalar field can be expressed in terms of the energy-momentum tensor T ik at the location of the detector in the laboratory coordinate system [14] as T (44) = µ i (4) µ k (4) T ik ,(62) where µ i (a) are the tetrads.
The energy-momentum tensor is [25](2.27) T ik = ψ ,i ψ ,k − 1 2 η ik η rs ψ ,r ψ ,s , η ik = η ik = diag(1, 1, 1, −1)(63) Using (52), (53), and Frenet-Serret tetrads it is easy to show that T 11 = T 22 = T 33 = 1 3 T 44 =h c 3π dkk 3 =h Ω 4 3πc 3 dκ κ 3(64) and T (44) = 4γ 2 − 1 3 T 44 = 4γ 2 − 1 3h Ω 4 πc 3 dκ κ 3(65) With periodical features taken into consideration this expression has the following form T (44) d = 4γ 2 − 1 3h πc 3 Ω 4 ∞ n=0 n 3(66) ( It has an additional factor n 2 compared with (57) because T ik have derivatives of ψ-functions. ) or T (44) d = 4γ 2 − 1 3h πc 3 2 ( ∞ 0 d ω 1 2 ω 3 + ∞ 0 dω ω 3 eh ω/kTrot − 1 . )(67) Let us compare this expression and the expression for the energy density of the massless scalar field with Planck's spectrum of random thermal radiation at the temperature T, along with the zero-point radiation in an inertial reference frame, T 44 T = 1 2 [( ∂ψ T ∂(ct) ) 2 + ( ∂ψ T ∂x ) 2 + ( ∂ψ T ∂y ) 2 + ( ∂ψ T ∂z ) 2 ],(68) where [4] ψ T = d 3 k f T (ω) cos [ k r − ωt − θ( k) ](69) and f 2 T (ω) = c 2 π 2h ω [ 1 2 + 1 exp(hω/kT ) − 1 ].(70) It is easy to show that T (44) d = 2(4γ 2 − 1) 9 T 44 T =Trot .(71) So, due to periodicity of the motion, an observer rotating through a zero point radiation of a massless random scalar field should see the same energy density as an inertial observer would see, moving in a thermal bath at the temperature T rot =h Ω 2πk , multiplied by the factor 2 9 (4γ 2 − 1). This factor comes from integration over angles and is a consequence of anisotropy of the scalar field measured by an observer with angular velocity Ω. Conclusion and Perspectives. The thermal effects of non inertial motion investigated in the past for uniform acceleration through classical random zero-point radiation of electromagnetic and massless scalar field are shown to exist in the case of rotation motion as well. 
The rotating reference system {µ τ }, along with the two-point correlation functions (CFs) and energy density, are defined and used as the basis for investigating effects observed by a detector rotating through random classical zero-point radiation. The reference system consists of Frenet -Serret orthogonal tetrads µ τ . At each proper time τ the rotating detector is at rest and has a constant acceleration vector at the µ τ . The two-point CFs and the energy density at the rotating reference system should be periodic with the period T = 2π Ω , where Ω is an angular detector velocity, because CF and energy density measurements are one of the tools the detector can use to justify the periodicity of its motion. The CFs have been calculated for both electromagnetic and massless scalar fields in two cases, with and without taking this periodicity into consideration. It was found that only periodic CFs have some thermal features and particularly the Planck factor with the temperature T rot =h Ω 2πk B (k B is the Boltzman constant). Mathematically this property is connected with the discrete spectrum of the periodic CFs, and its interpretation is based on the Abel-Plana summation formula. It is also shown that energy densities of the electromagnetic and massless scalar fields observed by the detector rotating through classical zero-point radiation at zero temperature are respectively w = 2 (4γ 2 − 1) 3 w em (T rot ) and T (44) d = 2(4γ 2 − 1) 9 T 44 Trot . Each of them consists of two terms. The first term, corresponding to zero-point radiation energy density, is divergent, and the second one, describing the thermal effect, is convergent. Let us discuss the convergent electromagnetic thermal energy density w em,T = 2 (4γ 2 − 1) 3 × 4σ c T 4 rot , γ 2 = (1 − (Ωr) 2 /c 2 ) −1 .(72) It includes factor 2 3 (4γ 2 − 1). 
The appearance of this factor is connected with the fact that rotation is defined by two parameters, the angular velocity and the radius of rotation, in contrast with uniformly accelerated linear motion, which is defined by only one parameter, the acceleration a. If, for a fixed Ω, the radius of the circular orbit grows, r → c/Ω, the second factor does not change but the first one grows. Such behaviour of the convergent term may have a mechanical interpretation. Let several small particles with charges of the same sign move through the vacuum field on a circular orbit. Let us further assume that the repulsive interaction of the particles results in a shift of the particles to another circular orbit with a slightly greater radius r but the same angular velocity Ω. Then the thermal energy density w em,T , observed locally by each of the particles, would increase. This increase demands additional work against the vacuum field and therefore gives rise to a force, which we call the vacuum force, acting on these particles from the vacuum field. The volume density of this force is given by f vac = − dw em,T dr = − 8 3 Ω 2 c 2 × 2r (1 − (Ωr) 2 /c 2 ) 2 × 4σ c T 4 rot .(73) The force f vac depends on neither the magnitude of the charge nor the mass, and it originates from the thermal energy w em,T , even though that energy is positive. These three features make f vac similar to the force f cas in the Casimir model for a charged particle [28,29,30]: E(a) = −Ch c 2a , f cas = − dE da = −Ch c 2a 2 ,(74) where a is the radius. This model was designed to explain the stability of a charged particle. The force f cas likewise depends on neither the charge nor the mass, and the energy E(a) is positive (because C ≈ −0.09) [29]. Nevertheless, f vac and f cas are significantly different. Indeed, 1. The f vac is applied to the dynamical system of a particle (or particles) moving on a circular orbit, not to a static one as f cas is in the Casimir model. 2.
The f vac is attractive because it is directed from the location with the greater positive energy density w em,T toward the location with the smaller one, that is, toward the center of the circular orbit. Thus we could expect it to balance the repulsive force associated with the interaction of the charged particles. In contrast, f cas is known to be repulsive, directed from the center of the shell outward, and therefore cannot balance repulsive electrical forces. 3. The f vac grows without bound as r → r 0 = c/Ω and acts as a restoring spring force. Therefore the radius of circular orbits with a fixed Ω is bounded. Orbits with a radius greater than r 0 do not exist because the vacuum force there becomes infinite. On the outermost orbit, of radius r 0 , the linear velocity of the rotating particle would reach c. 4. The f vac becomes very small, and proportional to r, when r is small, r ≪ c/Ω. The last two features of f vac mean that the farther the rotating particle is from the center, the more bound, in other words confined, it becomes; the closer to the center it is, the freer it becomes. This is reminiscent of two significant concepts in quantum chromodynamics (QCD), the theory of strong interactions: asymptotic freedom and confinement. The confinement of quarks and gluons is still a challenge for strong-interaction physics [31]. Therefore the newly introduced concept of a vacuum force may be useful for understanding the confinement phenomenon, even though the concept is introduced within stochastic electrodynamics while confinement is realized in strong interactions. Moreover, quarks, besides the strong interaction between them, carry an electric charge, can interact with the electromagnetic vacuum field, and can experience the vacuum force. In Appendix G we make rough, preliminary estimates of f vac and T rot , just to understand what order of magnitude they could have in a hadron.
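Features 3 and 4 above, and the relation f_vac = −dw_em,T/dr itself, can be checked numerically from (72)-(73). A minimal sketch with standard SI constants and an arbitrary illustrative angular velocity Ω = 10^20 rad/s (so that c/Ω ≈ 3 × 10^-12 m):

```python
import math

c     = 2.99792458e8        # m/s
hbar  = 1.054571817e-34     # J*s
k_B   = 1.380649e-23        # J/K
sigma = 5.670374419e-8      # W m^-2 K^-4 (Stefan-Boltzmann constant)

def w_T(r, Omega):
    """Thermal part of the energy density, Eq. (72)."""
    gamma2 = 1.0 / (1.0 - (Omega*r/c)**2)
    T_rot = hbar*Omega / (2.0*math.pi*k_B)
    return (2.0/3.0)*(4.0*gamma2 - 1.0)*(4.0*sigma/c)*T_rot**4

def f_vac(r, Omega):
    """Analytic vacuum-force density, Eq. (73)."""
    T_rot = hbar*Omega / (2.0*math.pi*k_B)
    return (-(8.0/3.0)*(Omega/c)**2 * 2.0*r / (1.0 - (Omega*r/c)**2)**2
            * (4.0*sigma/c)*T_rot**4)

Omega = 1.0e20                      # rad/s, illustrative value
r = 0.5 * c / Omega                 # a point well inside the limiting orbit
h = 1e-6 * r
fd = -(w_T(r + h, Omega) - w_T(r - h, Omega)) / (2.0*h)   # -dw_T/dr numerically
```

The finite difference reproduces (73), and evaluating f_vac near r = c/Ω versus r ≪ c/Ω exhibits the divergence and the small-r suppression described in features 3 and 4.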
This is only one of possible directions of the vacuum force f vac applications. More detailed discussion of the vacuum force will be given in a different publication. The same consideration is true for the convergent thermal part of massless scalar field. Some of the results discussed in this paper have been obtained in [32]. An orthogonal tetrad ( OT ) is a set of four orthogonal and normalized 4-vectors µ i (a) , labeled by a=1,2,3,4, so that µ i (a) µ (b)i = η (ab) .(75) Co-vectors µ (b) i are defined as µ (a)i = η (ab) µ i (b) , µ i (a) = η (ab) µ (b)i .(76) The η (ab) is a diagonal matrix η (ab) = η (ab) = diag(1, 1, 1, −1).(77) Frenet-Serret OTs satisfy the formulas [14](55): Dµ i (4) = bµ i (1) , Dµ i (1) =cµ i (2) + bµ i (4) , Dµ i (2) = dµ i (3) −cµ i (1) , Dµ i (3) = −dµ i (2) ,(78) where D = d dτ , τ is a proper time of the detector, in the flat space-time with metric g ik =diag(1,1,1,-1). Solution of this system is given in (7) In (80) , as in [13] ( 4.167 ), the metric is chosen in the form g ik = (1, 1, 1, 1). We preferred to use Frenet-Serret tetrads, and not Fermi-Walker ones, because in a reference frame associated with a Fermi-Walker tetrad e (a)i the 3-vector acceleration is not constant in both direction and magnitudeU (a) = e (a)lUl = (−aΩ 2 γ 2 cos αγ, −aΩ 2 γ 2 sin αγ, 0, 0), and the acceleration depends on proper time τ . B Some Correlation Functions of an Electromagnetic Field at a Rotating Detector as 3-Dimensional Integrals over( k, θ, φ ). 
Two correlation functions mentioned at the end of Section 2.1 are the following: I E (22) = E (2) (µ 1 |τ 1 ) E (2) (µ 2 |τ 2 ) = E 1 (τ 1 )E 1 (τ 2 ) sin α 1 sin α 2 + E 1 (τ 1 )E 2 (τ 2 ) (−1) sin α 1 cos α 2 + E 2 (τ 1 )E 1 (τ 2 ) (−1) cos α 1 sin α 2 + E 2 (τ 1 )E 1 (τ 2 ) cos α 1 cos α 2 , I E (33) = E (3) (µ 1 |τ 1 ) E (3) (µ 2 |τ 2 ) = γ 2 E 3 (τ 2 )E 3 (τ 1 ) − γ 2 v c cos α 1 E 3 (τ 2 )H 1 (τ 1 ) − γ 2 v c cos α 2 H 1 (τ 2 )E 3 (τ 1 ) − γ 2 v c sin α 1 E 3 (τ 2 )H 2 (τ 1 ) − γ 2 v c sin α 2 H 2 (τ 2 )E 3 (τ 1 ) + γ 2 ( v c ) 2 cos α 2 cos α 1 H 1 (τ 2 )H 1 (τ 1 ) + γ 2 ( v c ) 2 sin α 2 sin α 1 H 2 (τ 2 )H 2 (τ 1 ) + γ 2 ( v c ) 2 cos α 2 sin α 1 H 1 (τ 2 )H 2 (τ 1 ) + γ 2 ( v c ) 2 sin α 2 cos α 1 H 2 (τ 2 )H 1 (τ 1 ) .(83) Is is easy to show that they depend on the difference δ = α 2 − α 1 only. I E (22) = cos δ d 3 k R + sin 2 δ 2 d 3 kk 2 x R + (−1) cos 2 δ 2 d 3 kk 2 y R.(84)I E (33) = γ 2 v 2 c 2 cos δ d 3 k R + γ 2 v c (−2) cos δ 2 d 3 kk y R + γ 2 [1 − v 2 c 2 cos 2 δ 2 ] d 3 kk 2 x R + γ 2 [1 + v 2 c 2 sin 2 δ 2 ] d 3 kk 2 y R.(85) Expressions for R and δ are given in (21). The non diagonal components of the correlation function are zeroes : E (1) (µ 1 |τ 1 ) E (2) (µ 2 |τ 2 ) = E (1) (µ 2 |τ 2 ) E (2) (µ 1 |τ 1 ) = 0, E (1) (µ 1 |τ 1 ) E (3) (µ 2 |τ 2 ) = E (1) (µ 2 |τ 2 ) E (3) (µ 1 |τ 1 ) = 0, E (2) (µ 1 |τ 1 ) E (3) (µ 2 |τ 2 ) = E (2) (µ 2 |τ 2 ) E (3) (µ 1 |τ 1 ) = 0,(86) Similar expressions have been received for the CF with magnetic field components. So all CFs can be given as 3-dimensional integrals over (k, θ, φ). C Integral calculations: final expression for I E (11) . All non zero expressions for the CF in subsection (2.3) should be integrated over k, θ , and φ. 
The integral over k can be easily calculated: ∞ 0 dkk 3 cos{k(2r sin δ 2 sin θ sin φ − c(t 2 − t 1 ))} = 6 {2r sin δ 2 sin θ sin φ − c(t 2 − t 1 )} 4 = = 6 [c(t 2 − t 1 )] 4 1 [1 − v c sin δ/2 δ/2 sin θ sin φ] 4 .(87) The integrals over θ and φ can be represented in terms of elementary functions. Let us show it for I E (11) ≡ E (1) (µ 1 |τ 1 )E (1) (µ 1 |τ 2 ) : I E (11) = 3hc 2π 2 [c(t 2 − t 1 )] 4 γ 2 π 0 dθ × { (cos δ sin θ + (− cos 2 δ 2 + v 2 c 2 ) sin 3 θ) 2π 0 dφ 1 (1 + b sin φ) 4 +(−2 v c cos δ 2 )sin 2 θ 2π 0 dφ sin φ (1 + b sin φ) 4 + sin 3 θ 2π 0 dφ sin 2 φ (1 + b sin φ) 4 },(88) We have taken into consideration here that k x = sin θ cos φ,k y = sin θ sin φ,k z = cos θ (89) and used notations b ≡ k sin θ, k ≡ − v c sin δ/2 δ/2 . So k is a constant, not a wave vector. The next step is to calculate the integral over φ. Because [26], 2π 0 dφ 1 (1 + b sin φ) 4 = π(2 + 3b 2 ) (1 − b 2 ) 7/2 ,(90)2π 0 dφ sin φ (1 + b sin φ) 4 = −bπ(4 + b 2 ) (1 − b 2 ) 7/2 ,(91) and 2π 0 dφ sin 2 φ (1 + b sin φ) 4 = π(1 + 4b 2 ) (1 − b 2 ) 7/2 ,(92) the correlation function takes the form (24). D Another Way to Receive the I E 11) for the Discrete Spectrum. In section (2.5) we have obtained the general expression for the CF I E (11)d ≡ E (1) (µ 1 |τ 1 )E (1) (µ 2 |τ 2 ) d with discrete spectrum, based on its periodicity. This also could be done directly using the following expressions for the fields E i and H i , with discrete spectrum, instead of the equations (13): E( r, t) = a do k 2 n [k,ǫ(k, λ)] h 0 (ω n ) cos[ k n r − ω n t − Θ( k n , λ)], k n = k nk , k n = k 0 n, k 0 = Ω c , ω n = c k n , do = dθ dφ sin θ, k = (k x ,k y ,k z ) = (sin θ cos φ, sin θ sin φ, cos θ ), a = k 0 . The unit vectork defines a direction of the wave vector k in a spherical momentum 3-space and does not depend on its value, n. The right side of the first equation in the relation (16) should be modified. We do this in two steps. 
First we rewrite them in a spherical momentum space [27], p.656 as : cos θ( k 1 λ 1 ) cos θ( k 2 λ 2 ) = sin θ( k 1 λ 1 ) sin θ( k 2 λ 2 ) = 1 2 δ λ 1 λ 2 δ 3 ( k 1 − k 2 ) = 1 2 δ λ 1 λ 2 2 k 2 1 δ(k 1 − k 2 )δ(k 1 −k 2 ).(94) And then, in the case of the discrete spectrum, it will be the following: cos θ( k n 1 λ 1 ) cos θ( k n 2 λ 2 ) = sin θ( k n 1 λ 1 ) sin θ( k n 2 λ 2 ) = 1 2 δ λ 1 λ 2 2 k 0 (k 0 n 1 ) 2 δ n 1 n 2 δ(k 1 −k 2 ). (95) The equation E Expression for S d after Integration over ω. The expressions for S d in ( 38) mentioned in subsection 2.6 , after integration over ω, are the following: 4 ]. S d = 6 F 4 d + [ 3 − 2 sin 2 (F d /2) 8 sin 4 (F d /2) − 6 F 4 d ](96) (97) The first integral of S d in ( 38) is divergent and the second one is convergent as can seen from (96). where B = γτ c, E = 2a sin θ sin Ωγτ 2 , and τ = τ 2 − τ 1 . Because B − |E| = cγτ {1 − v c | sin θ sin π(γτ /T ) π(γτ /T ) |} > cγτ (1 − v/c) > 0, and using [19] we obtain : 2π 0 dφ 1 [E sin φ − B] 2 = 2πB (B 2 − E 2 ) 3/2 .(99) Finally, integrating it over θ we come to (55). G The Force f vac and Temperature T rot Estimations. The only purpose of the following estimations is to figure out the order of a magnitude of the vacuum force f vac and rotation temperature T rot which can be associted with a proton size r 0 ≈ 10 −15 m [33]. The radial component of the force acting on a spherical particle of the radius a rotating through an electromagnetic zero-point field on a circular orbit with radius r ≈ r 0 and with angular velocity Ω = c/r 0 can be given in the form F = f vac 4 3 πa 3 = − x (1 − x 2 ) 2 4ch 135 π a 3 r 5 0 , x = r r 0 ≤ 1,(100) where f vac is given in (73). Just for estimation purposes, let us take a ≈ 10 −18 m. Then F ≈ − x (1 − x 2 ) 2 × (2.8) × 10 −7 J m = − x (1 − x 2 ) 2 × (1.75) × 10 −12 GeV F ermi .(101) For 1 − x ≈ 10 −6 , F ≈ −(4.4) × 10 −1 GeV /F ermi = .7 × 10 5 newtons in a good agreement with an order of magnitude of strong interaction forces. 
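The estimate (101) can be reproduced directly from (100) with standard SI constants, taking a = 10^-18 m and r0 = 10^-15 m as in the text; small rounding differences from the quoted 2.8 × 10^-7 J/m prefactor are expected:

```python
import math

c    = 2.99792458e8       # m/s
hbar = 1.054571817e-34    # J*s

r0 = 1.0e-15              # m, proton-scale orbit radius from the text
a  = 1.0e-18              # m, assumed particle radius from the text

# Prefactor of Eq. (100): 4 c hbar / (135 pi) * a^3 / r0^5, about 3e-7 J/m.
prefactor = 4.0*c*hbar / (135.0*math.pi) * a**3 / r0**5

def F(x):
    """Radial force of Eq. (100) as a function of x = r/r0 < 1."""
    return -x / (1.0 - x*x)**2 * prefactor

F_near = F(1.0 - 1.0e-6)   # the 1 - x ~ 1e-6 case of Eq. (101), ~ -0.7e5 N
```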
Similarly, T rot =h Ω 2πk B =h c 2πk B 1 r 0 ,(102) and, for distances r 0 ≈ 10 −15 m, corresponds to the temperature T rot ≈ 3.4 × 10 11 K, a little less than the temperature (1.90 ± 0.02) × 10 12 K needed for quark-gluon plasma creation [34]. These estimates provide good motivation for further investigation. |0 M ≠ |0 T , because the Bogoliubov coefficients of the transformation between the creation-annihilation operators (b + M , b M ) and (a + T , a T ) of the M-F and T-F quantum spaces respectively are not equal to zero. Section 2 is dedicated to detector motion through random classical zero-point electromagnetic radiation. Subsection 2.1, Appendixes A and B: The expressions for the components of the electromagnetic field measured at a Frenet-Serret tetrad are found in terms of the field components in the laboratory inertial coordinate system, and correlation functions of the electromagnetic field at a rotating detector are constructed. Subsection 2.2: The correlation function calculation scheme is described. Subsection 2.3 and Appendix C: The final expressions of the correlation functions in terms of elementary functions are given. These correlation functions turned out not to display any thermal features. Subsection 2.4: An assumption about the existence of the periodicity of the correlation functions and a discrete spectrum associated with it is discussed and justified. To the best of our knowledge, this idea has not been discussed in the literature yet. Subsection 2.5, Appendix D: New correlation functions with the discrete spectrum are constructed. Subsection 2.6, Appendix E: An example of the correlation functions with the discrete spectrum is calculated and discussed, with the use of the Abel-Plana formula. The temperature T rot associated with rotation is introduced. Section 3: An expression for the energy density of the random classical zero-point electromagnetic field measured by a rotating detector is constructed.
It is explicitly shown to display thermal features, following the spectrum discreteness observed by the detector. Section 4 is dedicated to detector rotation in massless zero-point scalar field radiation. Subsection 4.1: a correlation function of the massless zero-point scalar field is calculated with the use of the tetrad formalism. Subsection 4.2: the correlation function of the massless zero-point scalar field for a discrete spectrum following its periodicity is defined; its spectrum contains the Planck factor. Subsection 4.3: the energy density of the massless scalar field measured by a rotating detector, and its thermal properties connected with the detector rotation and periodicity, are obtained and discussed. Section 5: Conclusion and Perspectives.

Here Ω and a are the angular velocity and circumference radius of the rotating detector, respectively. The 4-vectors of the Frenet-Serret orthogonal tetrad, solutions of equations (78), have the form given below. This is a definition of a new correlation function, with periodicity, of the scalar massless field at the rotating detector. The Abel-Plana summation formula in this case is applied with F_d/Ω, and T_rot is defined in (43).

I am very thankful to Prof. T.H. Boyer and Prof. D.C. Cole for their encouraging comments while reading my manuscript. In response to their recommendations I have included the part about the physical sense of the thermal properties in the section Conclusion and Perspectives.

APPENDIX A. Orthogonal Tetrads

..., with b = −βΩγ^2, c = Ωγ^2, d = 0.  (79)

Fermi-Walker tetrad vectors are defined as ([13], (9.148, 4.139, and 4.167))

de_(a)k/dτ = (e_(a)l U̇^l) U_k/c^2 − (e_(a)l U^l) U̇_k/c^2,  (80)

and can be given in the form ([13], 4.167):

e_(1)k = (cos α cos αγ + γ sin α sin αγ, sin α cos αγ − γ cos α sin αγ, 0, −i(vγ/c) sin αγ),
e_(2)k = (cos α sin αγ − γ sin α cos αγ, sin α sin αγ + γ cos α cos αγ, 0, +i(vγ/c) cos αγ),
e_(...)k = (..., α, 0, γ).

The field expansion contains terms of the form Σ_{(k,λ)} h_0(ω_n) cos[k_n·r − ω_n t − Θ(k_n, λ)] for the fields, with the polarization sum

Σ_{λ=1}^{2} ε_i(k̂ λ) ε_j(k̂ λ) = δ_ij − k̂_i k̂_j,

which does not depend on n.
The correlation function finally takes the form (35), (34).

F. Correlation Function Calculation for Random Zero-Point Radiation of a Scalar Massless Field

The expression (54), after integration over positive k = ω/c, becomes:

⟨ψ_s(μ_1|τ_1) ψ_s(μ_2|τ_2)⟩ ∝ ∫ dφ [E sin φ − B]^{-2}

(see formulas 1.5.23 and 1.2.43 of [19]). In these expressions, k is not the module of a wave vector k, but a constant for the CFs. The integrands acquire the Planck factor 1/(exp(ħω/k_B T) − 1) if, in (39), we define a new constant, a rotation temperature, T_rot. Indeed, from (39) and (41) we can see that the integrands in both expressions have this Planck factor, and

S_d = 6/F_d^4 + 6 Σ_{n=1}^{∞} (1/(2πn)^4) [ 1/(1 + F_d/2πn)^4 + 1/(1 − F_d/2πn)^4 ].

References

P.C.W. Davies, T. Dray, C.A. Manogue, Detecting the rotating quantum vacuum, Phys. Rev. D 53, 4382 (1996).
V.A. De Lorenci, R.D.M. De Paola, N.F. Svaiter, The rotating detector and vacuum fluctuations, Class. Quantum Grav. 17, 4241-4253 (2000).
N.D. Birrell, P.C.W. Davies, Quantum Fields in Curved Space, Cambridge University Press, 1982.
T.H. Boyer, Thermal effects of acceleration through classical radiation, Phys. Rev. D 21, 2137 (1980).
T.H. Boyer, Thermal effects of acceleration for a classical dipole oscillator in classical electromagnetic zero-point radiation, Phys. Rev. D 29, 1089 (1984).
V.M. Mostepanenko, N.N. Trunov, The Casimir Effect and its Applications, Oxford Science Publications, 1996.
D.C. Cole, Thermal effects of acceleration for a spatially extended electromagnetic system in classical electromagnetic zero-point radiation: Transversely positioned classical oscillators, Phys. Rev. D 35, 562 (1987).
N. Sanchez, Analytic mappings: A new approach to quantum field theory in accelerated frames, Phys. Rev. D 24, 2100 (1981).
Y.S. Levin, Inertia as a zero-point-field force: Critical analysis of the Haisch-Rueda-Puthoff theory, Phys. Rev. A 79, 012114 (2009).
J.D. Pfautsch, Phys. Rev. D 24, 1491 (1981).
D. Bernard, Quantum effects in non-inertial frames and quantum covariance, pp. 82-106 in Lecture Notes in Physics, v. 246: Field Theory, Quantum Gravity, and Strings, edited by H.J. de Vega and N. Sanchez, 1986.
C. Møller, The Theory of Relativity, Oxford, Clarendon Press, 1952; second edition, 1972.
J.L. Synge, Relativity: The General Theory, North Holland Publishing Co., Amsterdam; Interscience Publishers, New York, 1960.
W.M. Irvine, Electrodynamics in a Rotating System of Reference, Physica 30, 1160-1170 (1964).
O.S. Ivanitskaya, Generalized Lorentz Transformations and Their Use, Minsk, Nauka i Technika, 1969 (in Russian).
C.W. Misner, K.S. Thorne, J.A. Wheeler, Gravitation, 1973.
L.D. Landau, E.M. Lifschitz, Field Theory, 1973 (in Russian).
A.P. Prudnikov, Y.A. Brichkov, O.I. Marichev, Integrals and Series, Moscow, Nauka, 1981 (in Russian), formula 2.5.16.35.
D. Ivanenko, A. Sokolov, Classical Field Theory (Klassicheskaya Teoriya Polya), Moscow, 1951 (in Russian).
H. Bateman, Higher Transcendental Functions, v. 1, McGraw-Hill Book Company, Inc., 1953, formula 1.9(11), p. 22.
V.M. Mostepanenko, N.N. Trunov, The Casimir Effect and its Applications, Usp. Fiz. Nauk 156, 385-426 (November 1988).
M.A. Evgrafov, Analytic Functions, Nauka, Moscow, 1968 (in Russian), p. 264.
L.D. Landau, E.M. Lifshits, Statistical Physics, 1964 (in Russian).
N.D. Birrell, P.C.W. Davies, Quantum Fields in Curved Space, Cambridge: Cambridge University Press, 1982.
I.S. Gradstein, I.M. Ryzhik, Tables of Integrals, Series, and Products, Academic, New York, 1965, 2.551.
A.S. Davydov, Quantum Mechanics, Pergamon Press, 1968.
P.W. Milonni, The Quantum Vacuum: An Introduction to Quantum Electrodynamics, Academic Press, 1994.
T.H. Boyer, Quantum Electromagnetic Zero-Point Energy of a Conducting Spherical Shell and the Casimir Model for a Charged Particle, Phys. Rev. 174 (1968).
B. Davies, Quantum Electromagnetic Zero-Point Energy of a Conducting Spherical Shell, J. Math. Phys. 13, 1324 (1972).
V.N. Gribov, The theory of quark confinement, arXiv:hep-ph/9902279v1.
Y.S. Levin, Rotation in Classical Zero-Point Radiation and in Quantum Vacuum, arXiv:math-ph/0606009v, 2006.
S.G. Karshenboim, What do we actually know on the proton radius?, arXiv:hep-ph/9712347v1, 11 Dec 1997.
Z. Fodor, S.D. Katz, Critical point of QCD at finite T and μ, lattice results for physical quark masses, Journal of High Energy Physics 2004: 50, doi:10.1088/1126-6708/2004/04/050.
Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots

Vinitha Ranganeni, Noah Ponto, Maya Cakmak

arXiv:2304.02771; DOI: 10.48550/arXiv.2304.02771
Abstract: Mobile manipulator platforms, like the Stretch RE1 robot, make the promise of in-home robotic assistance feasible. For people with severe physical limitations, like those with quadriplegia, the ability to tele-operate these robots themselves means that they can perform physical tasks they cannot otherwise do themselves, thereby increasing their level of independence. In order for users with physical limitations to operate these robots, their interfaces must be accessible and cater to the specific needs of all users. As physical limitations vary amongst users, it is difficult to make a single interface that will accommodate all users. Instead, such interfaces should be customizable to each individual user. In this paper we explore the value of customization of a browser-based interface for tele-operating the Stretch RE1 robot. More specifically, we evaluate the usability and effectiveness of a customized interface in comparison to the default interface configurations from prior work. We present a user study involving participants with motor impairments (N=10) and without motor impairments, who could serve as caregivers (N=13), that use the robot to perform mobile manipulation tasks in a real kitchen environment. Our study demonstrates that no single interface configuration satisfies all users' needs and preferences. Users perform better when using the customized interface for navigation, but not for manipulation, due to the higher complexity of learning to manipulate through the robot. All participants are able to use the robot to complete all tasks, and participants with motor impairments believe that having the robot in their home would make them more independent.

I.
INTRODUCTION

Physically assistive robots have the potential to assist people with motor limitations to complete activities of daily living independently. However, these robots do not yet have robust autonomous capabilities for completing tasks in a wide variety of environments. Tele-operation can make these robots more readily available and satisfy users' desire for having control. Many existing tele-operation interfaces provide a single control configuration which may not be accessible to all users. In this work we explore customization of remote tele-operation interfaces for operating a Stretch RE1. More specifically, we build on prior work done by Cabrera et al. [1] by adding additional control features and analyzing user preferences and performance when using different interface configurations. We run two studies with users without motor impairments, who could serve as caregivers (N=13), and users with motor impairments (N=10). In these studies (Fig. 1) users learn how to use the various control settings in the interface and are asked to complete a series of tasks using default settings determined by prior work and their own customized settings.

All authors are associated with the Paul G. Allen School of Computer Science & Engineering, University of Washington, {vinitha, ponton, mcakmak}@cs.washington.edu

Fig. 1: An overview of the study design. Users go through three exploration phases to learn how to use the various control display modes (action overlay and predictive display) and action modes (step actions, press-release, and click-click) in the interface. After each exploration phase they complete a task with their customized settings and the default settings that are highlighted in orange.

We have three hypotheses:

H1 There is no single interface configuration that will satisfy all users' needs and preferences.
H2 Users with motor impairments will have different preferences than people without.
H3 Users' task completion time, number of errors, and clicks will be lower when using their customized settings.

Our findings show that users' preferences in interface configurations vary and there is no single configuration that is the "winner". All users perform better when using their customized settings for the control of navigation. When controlling manipulation, they perform better when completing the task the second time, irrespective of interface configuration, likely due to the complexity of manipulation through the robot. Additionally, users found the interface to be intuitive, easy to learn, and easy to use in all configurations, and found the robot useful. Participants with motor impairments believe that having the robot in their home would make them more independent, demonstrating the utility of tele-operated assistive robots in the near future.

II. RELATED WORKS

Prior work has explored the potential for assistive robots to assist people with motor impairments [2], [3]. Tele-operation of these robots has been shown to be a viable solution to enable individuals with motor limitations to complete activities of daily living (ADL) independently [4], [5], [6]. Additionally, tele-operation allows robots to be practical without requiring full autonomy and satisfies users' desire to have control. To reduce the burden on the user while still giving them control, semi-autonomous tele-operation systems have been widely studied. A significant amount of this work studies inferring user intent during tele-operation and providing autonomous assistance accordingly [7], [8], [9], [10], [11], [12]. However, these tele-operation interfaces provide a single control configuration that may not be accessible to users with different abilities or necessarily satisfy users' preferences. Making control interfaces customizable will make them accessible to users with unique physical abilities and fit user preferences.
A participant with motor impairments in a user study from prior work specifically emphasized the need for flexible interfaces that cater to people with motor impairments [1]. Furthermore, prior work has shown that allowing people to customize tele-operation interfaces impacts their task completion time and subjective preferences [13]. Much of the work on interface customization has been for GUIs [14], [15], [16]. Jain et al. have explored customization of the level of assistance provided by the robot [17] but not the control interface itself. In this work, we explore customization of a cursor-based web interface for remotely tele-operating a mobile manipulator. We build on work done by Cabrera et al. [1], who developed a web interface for remotely tele-operating a Stretch RE1. We developed additional control features for tele-operating Stretch and analyze subjective preferences, task performance, and success.

III. ROBOT SYSTEM

A. Hardware

The Stretch RE1 mobile manipulator is developed by Hello Robot. Stretch has a telescoping arm that extends 50 cm horizontally and is attached to a prismatic lift that reaches 110 cm vertically. The arm has a 1-degree-of-freedom gripper attached to a rotational joint. The movement of the arm is orthogonal to the movement of the differential drive base. Stretch also has a Realsense camera attached to a pan-tilt head and two fixed fish-eye cameras: one with an overhead view of the base and arm, and the other with a view of the gripper. Stretch's affordability, safety, and physical capabilities make it feasible to deploy long-term in novice users' homes. Stretch's software is open-source and is based on ROS1. It has a suite of autonomous features, but our work focuses on remote tele-operation of the robot through a web interface.

B. Remote Tele-operation Interface Design

The remote tele-operation interface (Fig. 2) for Stretch has two distinct modes that can be toggled by switching tabs on the top left corner of the interface.
Each mode has controls for a different subset of the robot's actuators. The Navigation mode controls the mobile base and the Manipulation mode controls the arm height, extension, and gripper. The Navigation mode has two camera views: (1) a fixed overhead fish-eye camera view and (2) an overhead camera view with pan/tilt controls. The Manipulation mode has the same two camera views as the Navigation mode but also has an additional fish-eye camera view from the gripper's perspective. Each mode has its own subset of control displays and action modes.

1) Action Overlay Control Display: This control display has buttons overlaid on each camera view. The buttons control different actuators on the robot. The Navigation mode has two translation and two rotation actions. The Manipulation mode has two buttons to control each of the following degrees of freedom: the arm's height, the arm's extension, gripper rotation in/out, opening/closing the gripper, and translation of the mobile base (for a total of 10 buttons). When the cursor hovers over a button, an overlaid icon indicates the action and tooltip text appears with an explanation. Additionally, in the Manipulation mode, the icon turns red when the robot's arm or gripper is in collision with an object, and a red stop sign appears over the icon when the arm and gripper have reached their respective joint limits. The user can control the speed of the robot by selecting from five preset speeds. The button outline turns red while the robot executes the corresponding action.

2) Predictive Display Control Display: This control display overlays a trajectory on the fixed overhead fish-eye view of the robot's base and is only applicable in the Navigation mode. The length and curve of the trajectory affect the speed and heading of the robot, respectively. The longer the trajectory, the faster the robot will move, and the shorter the trajectory, the slower the robot will move.
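To make the trajectory-to-motion mapping concrete, the sketch below models how a click in the overhead view could be converted into a drive command. All names, units, and thresholds here are hypothetical illustrations of the behavior described in the text, not the interface's actual implementation:

```python
# Sketch of a predictive-display style click-to-velocity mapping.
# Click positions are in a base-centered frame: +x forward (toward the top
# of the overhead image), +y to the robot's left, in pixels. All constants
# are assumptions for illustration.

MAX_SPEED = 0.4          # m/s, assumed top preset speed
MAX_TURN = 0.8           # rad/s
REVERSE_SPEED = 0.1      # fixed backward speed for clicks behind the base
FORWARD_RANGE_PX = 200.0 # click distance mapped to full forward speed

def click_to_command(x_px, y_px):
    """Map a click in the overhead view to (linear m/s, angular rad/s)."""
    if x_px <= 0:
        # Click behind the base: fixed-speed reverse, as described.
        return (-REVERSE_SPEED, 0.0)
    # Longer trajectory -> faster forward motion, capped at MAX_SPEED.
    linear = MAX_SPEED * min(x_px / FORWARD_RANGE_PX, 1.0)
    # Lateral offset curves the trajectory: left click -> turn left.
    angular = MAX_TURN * max(min(y_px / FORWARD_RANGE_PX, 1.0), -1.0)
    return (linear, angular)

print(click_to_command(100.0, 0.0))   # straight ahead, partial speed
print(click_to_command(-50.0, 0.0))   # behind the base -> reverse
print(click_to_command(150.0, 80.0))  # ahead and to the left -> curve left
```

A design like this keeps the mapping monotonic and bounded, so small clicks near the base always produce slow, easily corrected motion.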
If the user presses anywhere behind the base, the robot will move at a fixed speed backwards. If the user presses on the left side of the base, the robot will rotate to the left, and it will rotate to the right if the user presses on the right side of the base. The trajectory turns red when the robot is moving.

C. Action Modes

All action modes are applicable to both control displays.

• Step Actions: The robot moves for two seconds when the user presses the button or trajectory. The distance the robot moves is determined by the speed.
• Press-Release: The robot moves when the user presses and holds the button or trajectory and stops when they release.
• Click-Click: The robot moves when the user clicks the button or trajectory and stops when they click again.

IV. USER STUDY DESIGN

A. Environment & Tasks

The study was conducted in a kitchen setting with a working area of roughly 2.15x4 meters. The tasks involve driving the robot to a specific position and orientation, picking up a cube, and recycling trash. Tasks involve a combination of observing the environment, navigation, manipulation, and collision avoidance.

Task 1 - The user drives the robot from a starting position into a square in front of the fridge. They must orient the robot to face the fridge (Fig. 3a).
Task 2 - The user must control the robot to pick a cube up off a table. The robot is positioned next to the table (Fig. 3b).
Task 3 - The user must drive the robot from the fridge to the stove, pick up a piece of trash on the stove, drive to the recycling bin, and drop the trash in the bin (Fig. 3c).

B. Procedure

Participants join a video conferencing call through Zoom with screen sharing capabilities. They then log into the web interface for controlling the robot. Participants were located all around the U.S. and were not physically present. The user begins by watching an overview video of how the robot and interface work.
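The three action modes above can be viewed as different policies for turning pointer events into start/stop motion commands. The minimal model below uses hypothetical event names and is only a sketch of the behavior described, not the interface's actual code:

```python
# Sketch: how the three action modes turn pointer events into motion
# commands (hypothetical event model).

def step_actions(events):
    """Each press triggers a fixed two-second move; one command per press."""
    return [("move_for", 2.0) for e in events if e == "press"]

def press_release(events):
    """Move while held: start on press, stop on release."""
    cmds = []
    for e in events:
        if e == "press":
            cmds.append("start")
        elif e == "release":
            cmds.append("stop")
    return cmds

def click_click(events):
    """Toggle: first click starts motion, the next click stops it."""
    cmds, moving = [], False
    for e in events:
        if e == "press":  # a click is modeled as a press here
            cmds.append("stop" if moving else "start")
            moving = not moving
    return cmds

print(step_actions(["press", "press"]))
print(press_release(["press", "release"]))
print(click_click(["press", "press"]))
```

Framing the modes this way highlights why they suit different abilities: press-release requires sustained input, click-click requires only discrete taps, and step actions bound each motion in time regardless of input.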
In the first phase of the study, the user explores how to use the action overlay control display in the navigation mode. They watch video tutorials on how to use each of the action modes. After each video they have a chance to become comfortable with the controls. They then complete Task 1 with the default settings and customized settings. They customize their settings by selecting their preferred action mode in the settings menu. The default setting is the step actions mode, which was the original action mode provided by the interface developed by Hello Robot.

In the next phase of the study, the user explores the predictive display control display, inspired by the Beam tele-presence robot interface. Similar to the first phase, the user watches video tutorials on how to use each of the action modes in this control display. After each video they have a chance to become comfortable with the controls. They then complete Task 1 with the default and customized settings. The default setting is the press-release mode, the original action mode provided by the Beam's interface. Task 1 is considered a success when the user successfully drives the robot to the goal region and has it face the fridge.

Next, the user explores the manipulation mode. They watch video tutorials on how to use each of the action modes and have a chance to become comfortable with the controls. They then complete Task 2 with the default and customized settings. The default setting is the step actions mode. Task 2 is considered a success when the robot picks the cube up off the table.

In the last phase of the study the user completes Task 3 using both the default and customized settings for both the navigation and manipulation modes. Task 3 is considered a success when the trash is dropped in the recycling bin. The default settings for the action overlay and predictive display control displays are step actions and press-release, respectively. The default setting for the manipulation mode is step actions.
After selecting their preferred settings, the user fills out a questionnaire on the customization process. Additionally, the user fills out a questionnaire after completing the task with the default settings and then again with the customized settings. After completing the task twice, the user fills out a series of questionnaires about their experience and provides suggestions, recommendations, and demographic information. Note that we counterbalance the order in which the task is completed with default and customized settings.

C. Measurements

During the study the users share their screen and we record both the users' verbally expressed thoughts and their use of the interface. For each task we record the number of clicks, task completion time, whether or not the task was successfully completed, and the number and type of errors, including the user missing when grabbing an object or dropping an object. After completing each attempt of Task 3, we asked users to state their agreement with a series of statements on a 5-point Likert scale on the usability, accessibility, and efficiency of the interface and their satisfaction with the settings. Additionally, the user answered a series of open-ended questions on whether they found the robot useful, if they would use the robot in their homes, any modifications they would need to make to their home to use it, and recommendations for improving the interface.

V. FINDINGS

A. Study 1: Users without motor impairments

Our study was completed by 13 individuals from the general population (6 Male, 6 Female, 1 Other) with ages ranging from 20-55 (M=26, SD=9). We asked participants to rate their proficiency with technology on a 7-point Likert scale. The average rating was 5.31 with a standard deviation of 1.97. The study took 90 minutes and participants were compensated with a $50 Amazon gift card. We present our findings summarizing user setting preferences and task performance (success and efficiency), and then describe the tele-operation interface usage.
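The per-task measurements listed in Section IV-C (clicks, completion time, errors, success) can be derived from a timestamped event log of each session. The sketch below uses a hypothetical log format and event names; the study's actual logging pipeline may differ:

```python
# Sketch: computing per-task measurements (clicks, completion time,
# errors, success) from a timestamped event log. Event names are
# hypothetical examples of the error types described in Sec. IV-C.

def task_metrics(log):
    """log: ordered list of (timestamp_seconds, event) tuples."""
    start = log[0][0]
    end = log[-1][0]
    clicks = sum(1 for _, e in log if e == "click")
    errors = sum(1 for _, e in log if e in ("missed_grasp", "dropped_object"))
    success = any(e == "task_success" for _, e in log)
    return {
        "completion_time_s": end - start,
        "clicks": clicks,
        "errors": errors,
        "success": success,
    }

example_log = [
    (0.0, "task_start"),
    (3.2, "click"),
    (8.5, "click"),
    (12.0, "missed_grasp"),
    (15.1, "click"),
    (21.7, "task_success"),
]
print(task_metrics(example_log))
```

Deriving all measures from one log keeps the click counts, error counts, and timing mutually consistent for the per-task comparisons reported below.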
1) Setting Preferences: The settings preferred by participants for Task 3 are shown in Fig. 4. In the navigation mode, 46% of participants chose the action overlay control display and 54% of participants chose the predictive display control display. Participants who chose predictive display liked its simplicity in comparison to the action overlay mode: "Obviously, the predictive display is very nice, because it gets rid of buttons" (M, 25). We asked participants to rate their proficiency with technology on a 7-point Likert scale. The majority of participants who chose action overlay rated themselves lower (M=3.83, SD=1.86) than participants who chose predictive display (M=6.57, SD=0.49). Overall, the press-release action mode was largely preferred in both the navigation and manipulation modes: "I like [press-release] mode better. In the [step-actions] mode it was to touch, stop, touch, stop" (M, 29). However, there is no subset of settings that is a clear "winner", as there is a spread across preferred control displays and action modes. Note that 7.69% of participants, across the different modes, chose their customized setting to be exactly the same as the default setting.

2) Task Success: All participants successfully completed Tasks 1 and 2 with both default and customized settings. All participants successfully completed Task 3 with the default settings, and 11 participants successfully completed Task 3 with the customized settings. Despite the high success rate, we observed errors during Tasks 2 and 3 (Table I). The average number of errors for Task 2 was higher when participants used their customized settings. We noticed most errors occurred when users selected either press-release or click-click as their preferred mode. For Task 3 the average number of errors was lower when users used their customized settings. We did not see a correlation between proficiency with technology and number of errors.
3) Task Performance: We show the time taken and number of clicks across participants when using customized and default settings for all tasks in Fig. 7.

• Task 1 - Action Overlay: The majority of participants had fewer clicks and faster task completion times when using their customized settings.
• Task 1 - Predictive Display: The majority of participants had fewer clicks and faster task completion times when using their customized settings.
• Task 2: The majority of participants had faster task completion times when using the default settings (step actions), irrespective of ordering. There is no clear trend for the number of clicks.
• Task 3 - Manipulation: The majority of participants spent less time in the manipulation mode when completing the task a second time, irrespective of ordering. Participants generally had fewer clicks when they completed the task with their customized settings.
• Task 3 - Navigation: The majority of participants completed the task faster when using their customized settings but had fewer clicks the second time they completed the task, irrespective of ordering.

Overall, participants had faster task completion times when using customized settings for navigation, but had faster task completion times when doing the tasks a second time in the manipulation mode, regardless of whether they used custom or default settings first. The manipulation mode is more difficult to use than the navigation mode as there are more buttons and degrees of freedom to control. One participant even found the manipulation mode to be overwhelming: "Having so many buttons makes me nervous". This possibly resulted in a learning curve for the manipulation mode irrespective of the task setting order.

4) Task Workload and Subjective Evaluation: The task load index was assessed after the completion of Task 3 with
Interface intuitiveness, learnability, efficiency, error recovery, accessibility to participants, and satisfaction with interface settings rated similarly between the default and customized settings (Fig. 8). 5) Utility of Robot: All participants found the robot to be useful. Some said that the robot would not be useful in their own lives but could be useful to someone with motor impairments. Other participants said that the robot could be useful to complete tasks when they are not physically present or in hard to reach places such as overhead cabinets or shelves. One participant said the robot would be useful if they were sick and unable to get out of bed. They would use the robot to fetch things for them in this situation. B. Study 2: Users with motor impairments Next, our study was completed by people with motor impairments who are our representative target population. The study setup and procedure is identical to the first study. We had 10 participants(4 Male, 6 Female) with varying levels of motor limitations (Table II) and ages ranging from 21-46 (M=30, SD=7.5). We asked participants to rate their proficiency with technology on a 7-point Likert scale. The average rating was 6 with a standard deviation of 0.87. The study took 90 minutes and participants were compensated with a $100 amazon gift card. 1) Setting Preferences: The preferred settings by participants for Task 3 are shown in Fig. 5. In the navigation mode, 44% of participants chose the action overlay control display and 56% chose the predictive display control display. Majority of participants preferred the press-release mode over other action modes for both the action overlay and predictive display control display: "The [press-release mode] is way easier than the [step actions mode]" (P3). 
Participants also noted that they liked the ability to take small steps within the press-release mode (as in the step actions mode), allowing for both continuous and step-wise control: "I like the [press-release mode] because you could just click, click, click for step-wise movement" (P8). None of the participants preferred the step actions mode, because of the fatigue caused by repetitive clicking. Overall, similar to the preferences of people without motor impairments, the press-release action mode was largely preferred in both the navigation and manipulation modes. Again, there is no subset of settings that is a clear "winner", as there is a spread across the preferred control displays and action modes. 2) Task Success: All participants successfully completed all three tasks. We observed errors in Task 2 with both the default and customized settings, and in Task 3 with the customized settings (Table I). With the default setting in Task 2 (i.e., step actions), we noticed that some participants had difficulty estimating how far the arm would move based on their speed setting. This caused the robot to overshoot when reaching for the cube and collide with the table. One participant missed grabbing the cube twice in Task 2 with the customized settings, and 2 participants missed grabbing the trash in Task 3 with the customized settings, as they had trouble with depth perception. Overall, the number of errors was low, and participants recovered from errors and eventually succeeded in completing the tasks. 3) Task Performance: We show the time taken and the number of clicks across participants when using customized and default settings for Task 3 in Fig. 7. P1 is not included in this plot, as they were not able to complete the task with the default settings for the predictive display control display (press-release): their head array was not capable of performing the press-and-hold cursor action.
• Task 1 - Action Overlay: All participants had faster task completion times and fewer clicks when using their customized settings.
• Task 1 - Predictive Display: The majority of participants completed the tasks faster when using their customized settings, and all participants had fewer clicks when using their customized settings, irrespective of ordering.
• Task 2: The majority of participants had faster task completion times and fewer clicks when completing the task a second time, irrespective of the interface settings.
• Task 3 - Manipulation: The majority of participants completed the task faster the second time, irrespective of the interface settings, but they had fewer clicks when using the customized settings.
• Task 3 - Navigation: The majority of participants had faster task completion times and fewer clicks when using the customized settings.
Overall, participants completed the navigation portions faster with their customized settings, while in the manipulation mode they were faster the second time they did the tasks, irrespective of ordering. This is possibly a result of the manipulation mode being more difficult to learn, for the aforementioned reasons. P8 referred to this learning curve: "It's really fun. I think it's more a matter of you keep doing it and getting used to it. You're just figuring it out. It's like when you get a new phone, and you don't know where things are." (P8). 4) Task Workload and Subjective Interface Evaluation: The task load index was assessed after the completion of Task 3 with both customized and default settings. The averages are shown in Fig. 6. Overall, all TLX ratings were very similar between the default and customized settings. Interface intuitiveness, learnability, efficiency, error recovery, accessibility to participants, and satisfaction with the interface settings were also rated similarly between the default and customized settings (Fig. 6). Additionally, the ratings in all categories are higher than the ratings by the participants without motor impairments.
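The workload assessment uses the NASA task load index (TLX) over its six standard subscales. A minimal sketch of a raw (unweighted) TLX score; whether the study uses the raw or weighted variant is not stated, and the ratings below are made up:

```python
# Raw (unweighted) NASA-TLX: the mean of the six subscale ratings.
# The example ratings are illustrative, not the study's data.

SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Average the six TLX subscale ratings; raises if any subscale is missing."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

example = {"mental": 40, "physical": 20, "temporal": 30,
           "performance": 25, "effort": 35, "frustration": 10}
print(raw_tlx(example))
```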
The difference in ratings between users with and without motor impairments is possibly due to the direct impact this platform could have on the users' lives. 5) Utility of Robot: All participants said that the robot is useful and that their homes could accommodate it. All participants said that they would use the robot to retrieve items around the household, such as water (P6, P9), cooking utensils (P7), food (P7), and medical supplies (P5). Participants also said they would use it for tasks such as scratching their forehead (P6), unloading laundry (P3), putting groceries away (P10), and organization (P10). Participants with arm function but no leg function said that they would specifically use the robot to fetch items that are beyond their reach, or when they were in bed instead of their wheelchair: "The last place I was staying at I literally, did not get out of bed for like a month, and this would have been nice to get my water out of my fridge." (P9). Participants with no arm or leg function said they would use it more frequently, so that they would not need to ask anyone for help. Most participants did not find utility for the robot outside of their home, but P3, P8 and P10 said that they could use the robot when grocery shopping. We asked participants to rate their independence on a 7-point Likert scale (M=2.6, SD=1.02). We then asked participants to state their agreement with the statement "Having the robot in my home will make me more independent" on a 7-point Likert scale (M=5.8, SD=1.6). Overall, participants believe the robot is useful and will make them more independent.
VI. DISCUSSION & CONCLUSION
This paper explored the customization of tele-operation interfaces for assisting individuals with severe motor limitations and their potential caregivers. We believe that we would see greater benefits of customization if the robot were deployed long-term in someone's home (e.g. [18]).
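The Likert summaries quoted above (e.g. M=5.8, SD=1.6) are means and standard deviations over participants. A minimal sketch with made-up ratings; the paper does not state whether it reports the population or sample standard deviation, so the population form is assumed here:

```python
import math

# Summarize 7-point Likert responses as (mean, standard deviation).
# The ratings below are illustrative, not the study's responses.

def likert_summary(ratings):
    n = len(ratings)
    mean = sum(ratings) / n
    # Population standard deviation (divide by n); this convention is an assumption.
    sd = math.sqrt(sum((r - mean) ** 2 for r in ratings) / n)
    return round(mean, 2), round(sd, 2)

ratings = [7, 6, 5, 7, 4, 6]
print(likert_summary(ratings))
```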
Nevertheless, this work confirms the utility of such a robot and the benefits of customizing tele-operation interfaces for both user groups. 1) Settings Preferences: User preferences in interface configurations varied; no single interface configuration was strongly preferred over another. Users with motor impairments did not choose the step actions mode, due to the fatigue of repeated clicks. Some participants without motor impairments chose the step actions mode in the action overlay control display because they were worried that they would damage the robot in the continuous control modes. Additionally, P1 was not able to use the press-release mode with his head array. These findings confirm our hypotheses that there is no single interface configuration that satisfies all users' abilities and preferences (H1) and that there are differences in preferences between participants with and without motor impairments (H2). 2) Task Performance: All users had faster task completion times and fewer clicks with the customized settings in the navigation mode, but performed better the second time in the manipulation mode, irrespective of which interface configuration they started with. This suggests that there was a learning curve, possibly due to the complexity of the interface controls in the manipulation mode. We believe that if users had more time to familiarize themselves with the manipulation mode, they would have performed better with their customized settings. Additionally, users did not have a practice task that combined both the navigation and manipulation modes, and we noticed a learning curve associated with using both modes to complete a task. Overall, the number of errors was very low: all participants successfully completed Tasks 1 and 2, and all but two participants without motor impairments successfully completed Task 3.
These findings partially confirm our hypothesis (H3) that users' task completion times, number of errors, and number of clicks would be lower when using their customized settings. 3) Context Adaptation: Several participants wanted to switch between different settings while completing Task 3. For example, some participants wanted to use the step actions when trying to pick up the trash or drop it in the recycling bin. Some participants who selected the press-release mode realized that they could use it like the step actions mode with shorter clicks. This suggests that settings preferences can vary depending on the context of the task, and that interfaces should allow for easy adaptation of settings to different contexts.
Fig. 2: (Top) The interface in navigation mode. There are two possible control displays: action overlay and predictive display. (Middle) The interface in manipulation mode. (Bottom) The settings menu.
Fig. 3: Overview of all tasks. (b) Task 2: pick up the cube on the table; (c) Task 3: toss the trash on the stove into the recycling bin.
Fig. 4: Settings preferences of users without motor impairments for Task 3.
Fig. 5: Settings preferences of users with motor impairments for Task 3.
Fig. 6: Task 3 workload for users with and without motor impairments.
Fig. 7: (Top) The time taken when using default settings versus customized settings. (Bottom) The number of clicks when using default settings versus customized settings. Points under the line show that users had fewer clicks or completed the task faster with the customized settings; points above the line show the same for the default settings. We only plot points for users that chose settings different from the defaults.
Fig. 8: Participant agreement with statements about interface intuitiveness, efficiency, and accessibility.
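The context-adaptation suggestion above can be sketched as per-context settings profiles. The class fields, profile names, and default values are illustrative assumptions, not the actual configuration schema of the study's interface:

```python
from dataclasses import dataclass

# Hypothetical per-context interface settings: the same user might want
# press-release control while navigating but step actions for fine
# placement. All names and values here are illustrative.

@dataclass(frozen=True)
class InterfaceSettings:
    control_display: str  # e.g. "action_overlay" or "predictive_display"
    action_mode: str      # e.g. "step_actions" or "press_release"
    speed: float          # speed scaling for the robot

PROFILES = {
    "navigate": InterfaceSettings("predictive_display", "press_release", 1.0),
    "fine_placement": InterfaceSettings("action_overlay", "step_actions", 0.4),
}

def settings_for(context: str) -> InterfaceSettings:
    # Fall back to the navigation profile for unrecognized contexts.
    return PROFILES.get(context, PROFILES["navigate"])

print(settings_for("fine_placement").action_mode)
```

Switching contexts then amounts to a single lookup rather than the user re-editing each setting mid-task.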
TABLE I: Number of errors across each task for participants with motor impairments.
TABLE II: Demographic information of participants with motor impairments.
REFERENCES
[1] M. E. Cabrera, T. Bhattacharjee, K. Dey, and M. Cakmak, "An exploration of accessible remote tele-operation for assistive mobile manipulators in the home," in 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). IEEE, 2021, pp. 1202-1209.
[2] T. L. Chen, M. Ciocarlie, S. Cousins, P. M. Grice, K. Hawkins, K. Hsiao, C. C. Kemp, C.-H. King, D. A. Lazewatsky, A. E. Leeper, et al., "Robots for humanity: using assistive robotics to empower people with disabilities," IEEE Robotics & Automation Magazine, vol. 20, no. 1, pp. 30-39, 2013.
[3] S. W. Brose, D. J. Weber, B. A. Salatin, G. G. Grindle, H. Wang, J. J. Vazquez, and R. A. Cooper, "The role of assistive robotics in the lives of persons with disability," American Journal of Physical Medicine & Rehabilitation, vol. 89, no. 6, pp. 509-521, 2010.
[4] M. Ciocarlie, K. Hsiao, A. Leeper, and D. Gossow, "Mobile manipulation through an assistive home robot," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5313-5320.
[5] P. M. Grice and C. C. Kemp, "In-home and remote use of robotic body surrogates by people with profound motor deficits," PloS one, vol. 14, no. 3, p. e0212904, 2019.
[6] D. Park, Y. Hoshi, H. P. Mahajan, H. K. Kim, Z. Erickson, W. A. Rogers, and C. C. Kemp, "Active robot-assisted feeding with a general-purpose mobile manipulator: Design, evaluation, and lessons learned," Robotics and Autonomous Systems, vol. 124, p. 103344, 2020.
[7] A. D. Dragan, S. S. Srinivasa, and K. C. Lee, "Teleoperation with intelligent and customizable interfaces," Journal of Human-Robot Interaction, vol. 2, no. 2, pp. 33-57, 2013.
[8] K. Hauser, "Recognition, prediction, and planning for assisted teleoperation of freeform tasks," Autonomous Robots, vol. 35, pp. 241-254, 2013.
[9] K. Khokar, R. Alqasemi, S. Sarkar, K. Reed, and R. Dubey, "A novel telerobotic method for human-in-the-loop assisted grasping based on intention recognition," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 4762-4769.
[10] D. Gopinath, M. N. Javaremi, and B. Argall, "Customized handling of unintended interface operation in assistive robots," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 10406-10412.
[11] L. V. Herlant, R. M. Holladay, and S. S. Srinivasa, "Assistive teleoperation of robot arms via automatic time-optimal mode switching," in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2016, pp. 35-42.
[12] H. Admoni and S. Srinivasa, "Predicting user intent through eye gaze for shared autonomy," in 2016 AAAI Fall Symposium Series, 2016.
[13] M. E. Cabrera, K. Dey, K. Krishnaswamy, T. Bhattacharjee, and M. Cakmak, "Cursor-based robot tele-manipulation through 2d-to-se2 interfaces," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 4230-4237.
[14] A. Hurst, S. E. Hudson, J. Mankoff, and S. Trewin, "Automatically detecting pointing performance," in Proceedings of the 13th International Conference on Intelligent User Interfaces, 2008, pp. 11-19.
[15] S. Carter, A. Hurst, J. Mankoff, and J. Li, "Dynamically adapting GUIs to diverse input devices," in Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, 2006, pp. 63-70.
[16] K. Z. Gajos, J. O. Wobbrock, and D. S. Weld, "Automatically generating user interfaces adapted to users' motor and vision capabilities," in Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, 2007, pp. 231-240.
[17] S. Jain and B. Argall, "An approach for online user customization of shared autonomy for intelligent assistive devices," in Proc. of the IEEE Int. Conf. on Robot. and Autom., Stockholm, Sweden, 2016.
[18] V. Nguyen, S. Olatunji, M. Cakmak, A. Edsinger, C. C. Kemp, and R. W. A., "Improving robot-assisted care using participatory design," in 66th Human Factors and Ergonomics Society (HFES) International Annual Meeting, 2022.
[]
[ "Long-term multi-band photometric monitoring of Mrk 501", "Long-term multi-band photometric monitoring of Mrk 501" ]
[ "Axel Arbet-Engels \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n", "Dominik Baack \nFakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany\n", "Matteo Balbo \nDepartment of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland\n", "Adrian Biland \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n", "Thomas Bretz \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n\nPhysikalisches Institut III A\nRWTH Aachen University\nOtto Blumenthal StrD-52074AachenGermany\n", "Jens Buss \nFakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany\n", "Daniela Dorner \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Laura Eisenberger \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Dominik Elsaesser \nFakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany\n", "Dorothee Hildebrand \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n", "Roman Iotov \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Adelina Kalenski \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Karl Mannheim \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 
3197074WürzburgGermany\n", "Alison Mitchell \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n", "Dominik Neise \nInstitute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland\n", "Maximilian Noethe \nFakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany\n", "Aleksander Paravac \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Wolfgang Rhode \nFakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany\n", "Bernd Schleicher \nInstitut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany\n", "Vitalii Sliusar 3⋆ \nDepartment of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland\n", "Roland Walter \nDepartment of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland\n" ]
[ "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Fakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany", "Department of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland", "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Physikalisches Institut III A\nRWTH Aachen University\nOtto Blumenthal StrD-52074AachenGermany", "Fakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Fakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany", "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Institute for Particle Physics and Astrophysics\nETH Zurich\nOtto Stern Weg 5CH-8093ZurichSwitzerland", "Fakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 4aD-44227DortmundGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Fakultät Physik\nTechnische Universität Dortmund\nOtto Hahn Str. 
4aD-44227DortmundGermany", "Institut für Theoretische Physik und Astrophysik\nUniversität Würzburg\nEmil Fischer Str. 3197074WürzburgGermany", "Department of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland", "Department of Astronomy\nUniversity of Geneva\nChemin Pegasi 51CH-1290VersoixSwitzerland" ]
[]
Aims. Radio-to-TeV observations of the bright nearby (z=0.034) blazar Markarian 501 (Mrk 501), performed from December 2012 to April 2018, are used to study the emission mechanisms in its relativistic jet. Methods. We examined the multi-wavelength variability and the correlations of the light curves obtained by eight different instruments, including the First G-APD Cherenkov Telescope (FACT), observing Mrk 501 in very high-energy (VHE) gamma-rays at TeV energies. We identified individual TeV and X-ray flares and found a sub-day lag between variability in these two bands.Results. Simultaneous TeV and X-ray variations with almost zero lag are consistent with synchrotron self-Compton (SSC) emission, where TeV photons are produced through inverse Compton scattering. The characteristic time interval of 5-25 days between TeV flares is consistent with them being driven by Lense-Thirring precession.
10.1051/0004-6361/202141886
[ "https://arxiv.org/pdf/2109.03205v2.pdf" ]
237,434,291
2109.03205
7c1942ab1bdb83914d149de364da2469b6b1e8af
Long-term multi-band photometric monitoring of Mrk 501
Astronomy & Astrophysics manuscript no. sliusar_fact_mrk501, Wednesday 1st December, 2021
arXiv:2109.03205v2 [astro-ph.HE] 30 Nov 2021
Key words: astroparticle physics - relativistic jets - radiation mechanisms: non-thermal - radiative transfer - BL Lacertae objects: individual: Mrk 501

1. Introduction

Blazars are active galactic nuclei (AGN) with a relativistic jet pointed towards the observer.
These sources emit from the radio up to TeV energies, and they are the most populous group of objects detected above 100 GeV. Their emission is non-thermal and is typically characterised by two broad bumps, peaking in the infrared to the X-rays and in the γ-rays, respectively. The low-energy bump is associated with synchrotron radiation of relativistic electrons. Depending on the peak energy of this component, blazars can be subdivided into low (peaking in the infrared), intermediate, or high (peaking in the X-rays) synchrotron-peaked blazars (LBL, IBL, and HBL, respectively). Multi-wavelength variability studies provide important insight into the inner structure of the jet and its emission mechanisms. In addition to electromagnetic radiation, neutrinos have also been associated with some blazars and were used to probe the radiation processes within the jet (Mücke & Protheroe 2001a,b; Petropoulou et al. 2016; Mannheim 1993). Markarian 501 (Mrk 501) is one of the most frequently studied nearby (z = 0.034) bright blazars and was discovered to be a TeV source by the Whipple imaging atmospheric Cherenkov telescope (Quinn et al. 1996). Mrk 501 has been monitored extensively in the radio (Richards et al. 2011; Piner et al. 2010), V-band (Smith et al. 2009), X-rays (MAGIC Collaboration et al. 2020; Abdo et al. 2011b), and γ-rays (Abdo et al. 2011b; Dorner et al. 2015; Ahnen et al. 2017). Although variability is present in all energy bands, it peaks in the TeV, with a flux varying from 0.3 to 10 times that of the Crab nebula (Crab Unit, CU) (Aharonian et al. 1999). In 2012, Mrk 501 was simultaneously observed by 25 different instruments during a three-month campaign (Ahnen et al. 2018) and was found in an intermediate state, with a TeV flux of about 0.5 CU that reached 4.9 CU on June 9. The TeV outburst was also accompanied by X-ray flares detected by Swift/XRT. This campaign reported the hardest VHE spectra ever measured, with a power-law index close to 2.
The two bumps of the spectral energy distribution (SED) peaked above 5 keV and at 0.4 TeV, respectively, indicating that the source is an extreme HBL (EHBL). This extreme spectrum could be transient; the usual harder-when-brighter TeV behaviour was not observed either, in contrast to previous multi-wavelength campaigns (Acciari et al. 2011b; Aharonian et al. 2001; Albert et al. 2007b; Abdo et al. 2011a). A one-zone SSC model reasonably explains the overall shape of the SED (Bartoli et al. 2012; Aleksić et al. 2015a; Ahnen et al. 2018) and accounts for most of its emission. Mismatches were pointed out during some flares (e.g. Ahnen et al. 2018), with proposed explanations including reconnections (Ahnen et al. 2018) or first-order Fermi acceleration (Shukla et al. 2015). Another approach is to consider a photohadronic model (Sahu et al. 2020b; Shukla et al. 2015), where Fermi-accelerated protons produce the VHE γ-rays through the photo-pion process. As the most significant gamma-ray variations occur during the high state of Mrk 501 (Abdo et al. 2011c; Zech et al. 2017), comparing models applied to different source states is also difficult. In this paper, we report observations taken between 2012 and 2018, from the radio to the TeV, as part of a long-term multi-wavelength campaign carried out with the FACT telescope. The paper is structured as follows. In Sect. 2 we briefly describe the instruments and the data reduction techniques used to obtain the light curves. In Sect. 3 we report investigations of the variability, the auto- and cross-correlations of the light curves, the identification and correlations of individual TeV and X-ray flares, and the analysis of the correlations between GeV and radio variations. Section 4 discusses the main results, with an emphasis on the likely underlying physical processes. Finally, Sect. 5 provides a summary of the results and conclusions.
2. Data and analysis

To characterise the long-term variability of Mrk 501, we gathered data taken between December 14, 2012, and April 18, 2018, with eight different instruments observing from the radio to the TeV. The instruments and data reduction techniques are described in the following subsections, and the resulting light curves are presented in Fig. 1. The light curves obtained for FACT and Swift/BAT include negative count rates, as expected when the emission from a source is lower than the sensitivity of a background-dominated instrument. The Fermi-LAT light curve is always positive because of the use of a positively defined model in the maximum likelihood fitting. The correlation and variability analyses presented in Sect. 3 disregard negative or low signal-to-noise ratio (<2 sigma) data points. Flare identification used the complete light curves, because uncertainties are properly taken into account by the Bayesian block algorithm.

2.1. Radio to GeV observations

Mrk 501 was observed by the Owens Valley Radio Observatory (OVRO) 40-meter telescope as part of the Fermi-LAT high-cadence blazar monitoring program (Richards et al. 2011). The radio receiver of the OVRO telescope has a centre frequency of 15 GHz with a 3 GHz bandwidth and an aperture efficiency of ∼0.25. The radio flux densities were obtained using the ON-ON technique in a dual-beam optics configuration to minimise ground and atmospheric contamination of the signal. The noise level of 55 K, which includes an actual thermal noise of 30 K as well as CMB, atmospheric, and ground contributions, leads to a ∼5% systematic flux uncertainty. Observations were mostly performed twice per week. The data are publicly available online from the Owens Valley Radio Observatory archive. A complete description of the instrument, calibrations, and analysis is provided in Richards et al. (2011).
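The data selection applied before the correlation and variability analyses (dropping negative fluxes and points below 2 sigma) can be sketched as a simple mask; the fluxes and errors below are synthetic, not FACT or Swift/BAT measurements:

```python
import numpy as np

# Keep only light-curve points that are positive and at least `min_snr`
# sigma significant, as described for the correlation/variability analyses.

def select_significant(flux, flux_err, min_snr=2.0):
    """Boolean mask keeping points with flux > 0 and flux/flux_err >= min_snr."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    return (flux > 0) & (flux / flux_err >= min_snr)

# Synthetic example: one negative point, two significant, one low-SNR.
flux = np.array([-0.3, 0.5, 1.8, 0.1])
err = np.array([0.2, 0.2, 0.3, 0.2])
mask = select_significant(flux, err)
print(mask)
```

Note that flare identification, in contrast, would run on the full arrays, since the Bayesian block algorithm weights each point by its uncertainty.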
Regular observations of Mrk 501 were performed in the V-band as part of the optical monitoring of γ-ray blazars. These data are publicly available, and for the time period used in this paper, we used data from Cycle 5 to Cycle 10, spanning from September 9, 2012, until March 2018 2. Mrk 501 observations in three ultraviolet bands (W1, M2, and W2 filters) are available from the Swift Ultra-Violet and Optical Telescope (UVOT) (Roming et al. 2005). Standard aperture photometry analysis was performed using the Swift/UVOT software tools from the HEASOFT package (version 6.24). Calibration data were obtained from the CALDB (version 20170922). An aperture of 6 arcsec radius was used for the flux extraction for all the filters. The background flux level was estimated in an annulus with radii of 11 and 19 arcsec centred on the location of Mrk 501. The two regions used for source and background estimation were verified not to include light from any nearby sources, stray light, or elements of the UVOT supporting structures. Dereddening of the fluxes was performed following the prescription from Roming et al. (2009) using E(B − V) = 0.0164 (Schlafly & Finkbeiner 2011). Observations in the X-rays (0.2-10 keV) were performed by the Swift/XRT X-ray telescope (Burrows et al. 2005). Considering the SED of Mrk 501, the X-ray light curves from Swift/XRT were built for the 0.3-2 keV and 2-10 keV bands separately to probe the emission process below and above the cutoff energy. Swift/XRT performed regular high-quality observations of Mrk 501. The light curve is publicly available using the Swift-XRT products generation tool 3. This tool uses HEASOFT software version 6.22, and the complete analysis pipeline is described in Evans et al. (2009). The Burst Alert Telescope (BAT, Krimm et al. 2013) on board the Swift satellite allows monitoring of the complete sky at hard X-rays every few hours. The Swift/BAT reduction pipeline is described in Tueller et al. (2010) and Baumgartner et al.
(2013). Our pipeline is based on the BAT analysis software HEASOFT version 6.13. A first analysis was performed to derive background detector images. We created sky images (task batsurvey) in the eight standard energy bands (in keV: 14-20, 20-24, 24-35, 35-50, 50-75, 75-100, 100-150, and 150-195) using an input catalogue of 86 bright sources that have the potential of being detected in single pointings. The detector images were then cleaned by removing the contribution of all detected sources (task batclean) and averaged to obtain one background image per day. The variability of the background detector images was smoothed pixel by pixel, fitting the daily background values with a polynomial model. The BAT image analysis was then run using these smoothed averaged background maps. The result of our processing was compared to the standard results presented by the Swift team (light curves and spectra of bright sources from the Swift/BAT 70-month survey catalogue 4), and a very good agreement was found. The Swift/BAT light curves of Mrk 501 were built in several energy bands. For each time bin and energy band, a weighted mosaic of the selected data was first produced, and the source region count rate was extracted assuming a fixed source position and shape of the point spread function. The signal-to-noise ratio of the source varies regularly because of intrinsic variability, its position in the BAT field of view, and the distance to the Sun. The 15-50 keV two-day-bin light curve presented in Fig. 1 spans from December 14, 2012, to April 18, 2018, that is, over 29344 orbital periods or almost 5.5 years. The Large Area Telescope on board the Fermi Gamma-ray Space Telescope (hereafter Fermi-LAT) is a γ-ray detector sensitive from 20 MeV to 300 GeV (Atwood et al. 2009; Abdo et al. 2009; Ackermann et al. 2012a,b). Data are publicly available and have been analysed with the Fermitools 5.
We performed a binned analysis following the Fermi-LAT team recommendations. We used only photons with energy between 1 GeV and 300 GeV within a square of 21° × 21° centred on Mrk 501, flagged with evclass=128 and evtype=3, and with a maximum zenith angle of 90°. Data were processed with PASS8 6, using the P8R3_SOURCE_V2 instrument response function. To model the region surrounding Mrk 501, we used the sources in the 4FGL catalogue (Abdollahi et al. 2020), up to 5° outside the box, and we described the Galactic diffuse emission and the isotropic extra-Galactic component with gll_iem_v07 and iso_P8R3_SOURCE_V2_v1, respectively 7. The best fit over the entire time data sample, describing Mrk 501 with a log-parabola model dN/dE = N (E/E_b)^{−(α + β log(E/E_b))}, yielded N = (4.07 ± 0.06) × 10^−12 cm^−2 s^−1 MeV^−1, α = 1.73 ± 0.01, and β = 0.013 ± 0.005, with E_b = 1476.73 MeV. The same fit also yielded a normalisation of 0.97 ± 0.01 for the Galactic diffuse emission and 0.97 ± 0.01 for the isotropic one. We also verified that there were no significant residuals in the test-statistic map of the whole time range. To obtain the light curve, we fitted the model in each time bin (3, 7, and 30 days), keeping the Galactic diffuse and the extragalactic isotropic normalisations fixed to their average values, and freeing the normalisation of the brightest sources within 8° from the centre and with more than ten predicted photons. When Mrk 501 was detected with a TS < 25, we calculated 95% flux upper limits. The distribution of the α parameter and the integrated flux for Mrk 501 varied by less than 0.3% and 1%, respectively, when we let the Galactic and extragalactic normalisation parameters free in the fit. If instead we freed more sources, α and the integrated flux might vary by up to 3% and 10%, respectively.

TeV observations by FACT

The First G-APD Cherenkov Telescope (FACT) is located in the Observatorio del Roque de los Muchachos on the island of La Palma at 2.2 km a.s.l.
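As a quick numerical check, the fitted log-parabola spectrum quoted above can be evaluated directly (a minimal sketch; the function name `dnde` and the choice of base-10 logarithm follow the standard Fermi-LAT LogParabola convention, which is an assumption of this sketch):

```python
import math

# Best-fit log-parabola parameters quoted in the text (E in MeV)
N0 = 4.07e-12      # cm^-2 s^-1 MeV^-1
ALPHA = 1.73
BETA = 0.013
E_B = 1476.73      # MeV

def dnde(e_mev):
    """dN/dE = N (E/E_b)^-(alpha + beta * log10(E/E_b))."""
    x = e_mev / E_B
    return N0 * x ** -(ALPHA + BETA * math.log10(x))
```

By construction `dnde(E_B)` returns N0 exactly, and the positive curvature term β makes the spectrum steepen slowly above E_b.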
(Anderhub et al. 2013). Using the imaging air Cherenkov technique, it has been operational since October 2011. With its 9.5 m² segmented mirror, FACT detects gamma-rays with energies above a few hundred GeV by observing the Cherenkov light produced in extensive air showers induced by gamma and cosmic rays in the Earth's atmosphere. Typically, no human interaction is needed during the night: the shift is performed fully remotely and automatically by software. In combination with an unbiased observing strategy, this allows for consistent long-term monitoring at TeV energies. This is facilitated by the excellent and stable performance of the camera, which uses silicon-based photosensors (SiPMs, also known as Geiger-mode avalanche photodiodes, G-APDs) and a feedback system keeping the gain of the photosensors stable (Biland et al. 2014). On the one hand, the stable and automatic operation maximises the data-taking efficiency, and with this, the duty cycle of the instrument. On the other hand, the usage of SiPMs also minimises the gaps in the light curves (Dorner et al. 2017), as these photosensors allow for observations with bright ambient light. While the telescope can operate during full-Moon conditions (Knoetig et al. 2013), observations are typically interrupted for 4-5 days every month for safety reasons associated with the operational conditions at the observatory site. Between December 14, 2012, and April 18, 2018, a total of 1953 hours of physics data from Mrk 501 were collected during 889 nights, with up to 5.5 hours per night and an average nightly observation time of 2.2 hours. To ensure sufficient event statistics, nights with an observation time of less than 20 minutes were removed from the data sample. After this and after data quality selection (see below), 1344 hours from 633 nights remain. The FACT light curve is shown in Fig.
1 (uppermost panel). To derive the light curve, the data were processed using the modular analysis and reconstruction software (MARS, revision 19203; Bretz & Dorner 2010; https://trac.fact-project.org/browser/trunk/Mars/). The feedback system ensures a stable and homogeneous gain that does not need to be recalibrated. After signal extraction, the images were cleaned using a two-step process, as described in Arbet-Engels et al. (2021). Based on a principal component analysis of the remaining distribution, a set of parameters describing the shower image was obtained. A detailed description of the image reconstruction and background suppression cuts for light-curve and spectral extraction is provided in Beck et al. (2019). The signal from the source was determined by cutting on the angular distance θ between the reconstructed source position and the real source position in the camera plane (at θ² < 0.037 deg²). To estimate the background, the same cut was applied for five off-source regions, each located 0.6 deg from the camera centre (observations are typically carried out in wobble mode (Fomin et al. 1994) with a distance of 0.6 deg from the camera centre). The excess rate was calculated by subtracting the scaled signal in these off-regions from the signal at the source position and dividing it by the effective exposure time. To study and correct for the effect of the zenith distance and trigger threshold (which changes with ambient light conditions), the excess rate of the Crab nebula was used. While flares of the Crab nebula have been seen in the MeV/GeV range, similar flux changes have not been found at TeV energies; therefore the Crab nebula was used as a standard candle at TeV energies. The dependences of the excess rate on observing conditions are similar to those of the cosmic-ray rate described in Bretz (2019), and details can be found in Beck et al. (2019).
For the studied data sample, the corrections in zenith distance are smaller than 10% for more than 97% of the nights, and the corrections in trigger threshold are lower than 10% for more than 75% of the nights. The maximum correction in zenith distance is 44%, and only two nights need a correction larger than 40%. In threshold, the largest correction is 72%, but 99% of the nights have a correction smaller than 60%. To verify the effect of the different spectral slopes of Mrk 501, the spectra of 35 time ranges between February 2013 and August 2018, determined using the Bayesian Block algorithm (see Sect. 3), were extracted (see details in Temme et al. 2015, using the background suppression cuts from Beck et al. 2019) and fitted with a simple power law. Within the uncertainties, no obvious correlation was found between index and flux, so that the harder-when-brighter behaviour reported in Aleksić et al. (2015b) is not confirmed for the TeV energy band. The distribution of indices yields an average spectral slope of 2.96 ± 0.26, which is compatible with some previously published results of other telescopes, taking the different energy ranges and instrument systematics into account (Albert et al. 2007a). Assuming different slopes from 2.7 to 3.22, the corresponding energy thresholds (E_Th ∼ 750 GeV, Cologna et al. 2017) and integral fluxes were determined to estimate the systematic error of a varying slope of the spectrum of Mrk 501. This resulted in a systematic flux uncertainty lower than 7%, which was added quadratically to the statistical uncertainties. For the light curve of Mrk 501, a data quality selection cut based on the cosmic-ray rate (Hildebrand et al. 2017) was applied to the data sample from December 14, 2012, until April 18, 2018. For this, the artificial trigger rate R750 was calculated above a threshold of 750 DAC counts, which was found to be independent of the actual trigger threshold.
The remaining dependence of R750 on the zenith distance was determined (as described in Mahlke et al. 2017 and Bretz 2019), and a corrected rate R750_cor was calculated based on this. Seasonal changes of the cosmic-ray rate due to variations in the Earth's atmosphere were taken into account by determining a reference value R750_ref for each moon period. The distribution of the ratio R750_cor/R750_ref can be described with a Gaussian distribution for the good-quality data, while data of poor quality lie outside the Gaussian distribution. A cut was applied at the points at which the distribution of all data starts to deviate from the Gaussian distribution. Therefore, data with good quality were selected using a cut of 0.93 < R750_cor/R750_ref < 1.3 (Arbet-Engels et al. 2021). This resulted in a total of 1344 hours of good-quality data from Mrk 501 during the discussed time period, distributed over 633 nights, as shown in Fig. 1 (uppermost panel).

Timing analysis

In this section, we investigate the variability of Mrk 501 band by band using fractional variability and auto-correlation functions. We also use the Bayesian Block analysis to identify individual flares and distinguish different source states. Temporal relations between different spectral bands are probed using cross-correlation functions and convolution techniques.

Variability

The variability in light curves can be compared and quantified by the unit-less fractional variability F_var = sqrt((S² − σ²_err) / ⟨x⟩²) (Vaughan et al. 2003), where S² is the variance of the light curve, σ²_err is the mean squared flux error, and ⟨x⟩² is the square of the average flux. Uncertainties of the fractional variability were estimated using the prescriptions of Poutanen et al. (2008) and Vaughan et al. (2003). The dependence of the fractional variability on frequency, calculated for the light curves spanning about 5.5 years, is depicted in Fig. 2.
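With these definitions, the excess-variance estimator of F_var reduces to a few lines of code (a minimal sketch of the Vaughan et al. estimator; the example light curve is made up for illustration):

```python
import math
from statistics import mean, variance

def fractional_variability(flux, err):
    """F_var = sqrt((S^2 - <err^2>) / <x>^2), clipped to 0 if noise dominates."""
    s2 = variance(flux)                  # sample variance S^2 of the light curve
    sigma2 = mean(e * e for e in err)    # mean squared measurement error
    xbar = mean(flux)
    excess = (s2 - sigma2) / (xbar * xbar)
    return math.sqrt(excess) if excess > 0 else 0.0
```

For example, fluxes [1, 2, 3, 4] with negligible errors give sqrt((5/3)/6.25) ≈ 0.516, while errors large enough to swallow the intrinsic scatter drive F_var to zero.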
The fractional variability is lowest in the radio and optical, increases up to the X-rays, drops in the GeV, and again increases up to the TeV, where the highest variability is found. These two maxima indicate that the falling edges of the two SED components are more variable than the rising ones and that the component peaking in the TeV varies more than the component peaking in the X-rays. Similar behaviour was observed in Mrk 421 (Aleksić et al. 2015b,c; Arbet-Engels et al. 2021). Some previous studies of Mrk 501 focusing on large flares found a fractional variability that monotonically increased with frequency (Aliu et al. 2016; Ahnen et al. 2017). Different binning, cadence, and coverage in the different wavebands may play a role in this discrepancy (Schleicher et al. 2019). We calculated the structure functions (SF, Simonetti et al. 1985) of the various light curves and found in the X-rays an average SF slope of 0.13 and a first break at around 20 days. In the TeV range, the average slope of the SF is 0.09. In the radio, the SF slope is close to zero, suggesting either white noise rather than well-structured flares or numerous overlapping flares.

Light-curve correlations

We correlated the light curves either with themselves (auto-correlation) or with other light curves using the discrete correlation function (DCF, Edelson & Krolik 1988), which properly takes the irregular sampling of the data into account. All the correlations were calculated by filtering out the data of low significance (<2σ) and additionally discarding all upper limits from the Fermi-LAT light curve, as is often done in the literature (e.g. Acciari et al. 2011a; Ahnen et al. 2018). The uncertainties were calculated with the DCF algorithm. A time bin of one day was used in the discrete auto-correlation function (DACF) for most data, except in the UV, optical, and radio (three days) and in the GeV.
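A bare-bones version of the DCF for unevenly sampled light curves might look as follows (a simplified sketch: the measurement-error terms in the Edelson & Krolik normalisation are omitted, and all names are illustrative):

```python
from statistics import mean, pstdev

def dcf(t1, a, t2, b, lags, half_width):
    """Discrete correlation function of two unevenly sampled series.

    For each trial lag, average the unbinned correlation terms
    (a_i - <a>)(b_j - <b>) / (sigma_a * sigma_b) over all pairs whose
    time difference t2[j] - t1[i] falls within +/- half_width of the lag.
    """
    abar, bbar = mean(a), mean(b)
    norm = pstdev(a) * pstdev(b)
    out = []
    for lag in lags:
        terms = [(ai - abar) * (bj - bbar) / norm
                 for ti, ai in zip(t1, a)
                 for tj, bj in zip(t2, b)
                 if abs((tj - ti) - lag) <= half_width]
        out.append(mean(terms) if terms else float("nan"))
    return out
```

Correlating a series with itself at zero lag gives exactly 1 under this normalisation, and correlating it with a copy shifted by 3 time units produces a peak of 1 at lag 3.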
As the Fermi-LAT flux uncertainties are correlated, because the same sky model was used for the different time bins, we had to use a time bin of 20 days (see the discussion in Edelson & Krolik 1988). Shorter lags would result in a correlation above unity (see the grey curve in Fig. 3). For the TeV and X-ray cross-correlations, we used a one-day lag binning, while for the radio and optical correlations with the GeV, we used three days. The auto-correlations (Fig. 3) indicate the presence of flares developing on a timescale of a few days, mostly in the TeV and hard X-rays, and of variations occurring on timescales of many weeks. As the shortest variability timescale of Mrk 501 in the X-rays (∼ one day, Tanihata et al. 2001) is shorter than our time bins, the correlation peaks characterise flaring patterns longer than the binning period. We tested the cross-correlations between all light curves and discuss the significant ones (TeV/X-rays, UV/V-band, and GeV/radio) in Sect. 3.3, 3.4, and 3.5.

TeV / X-ray correlation

A strong correlation reaching 0.7 at 0 lag is found between the TeV (FACT) and X-ray (Swift/XRT or Swift/BAT) light curves (Fig. 4). The correlation peak is wider for Swift/XRT because of the larger (1-2 days) relative time distances between the FACT and Swift/XRT observations. In the case of Swift/BAT, observations are coincident (within a day) with the FACT light curve. The lag probability distributions reported in Fig. 4 are obtained from Monte Carlo simulations (as in Peterson et al. 1998).
We generated 10^4 subsets for each pair of light curves using flux randomisation (FR) and random subset selection (RSS) and calculated the resulting DCFs to obtain representative time-lag distributions, using a centroid threshold of 80% of the DCF maximum (Peterson et al. 2004). The lag uncertainty corresponds to the standard deviation of the distribution of the lag values obtained for the random subsets. The best estimate of the TeV/X-ray lag of (0.17 ± 0.2) days is obtained by correlating the TeV and hard X-ray Swift/BAT light curves. Summing the time-lag distributions obtained from soft and hard X-rays increases the uncertainty to (0.3 ± 0.4) days. This TeV/X-ray correlation in Mrk 501 was already reported using shorter and sparser data sets (Pandey et al. 2017; Ahnen et al. 2018). As noted earlier, the TeV variability is stronger than in the X-rays, with a flux correlation slope (Fig. 5) increasing from 1.3 ± 0.1 (with reference to the variability observed by Swift/XRT 0.3-2 keV) to 1.89 ± 0.15 if Swift/XRT 2-10 keV is used. The dispersion of the data points in Fig. 5 around the main correlation, obtained using the orthogonal distance regression method, is compatible with intra-day variability in these bands and/or with spectral variations between flares. To build the correlation plots (Fig. 5 and Fig. 10), only coincident FACT and Swift/XRT observations with a time separation of less than 12 hours were considered. The Bayesian Block algorithm was used to identify individual flares in the TeV and X-ray bands. The algorithm properly takes the significance of the data points into account, so that we applied it to the unfiltered data (see Sect. 2 and Fig. 1, where the uncleaned light curves are shown). The Bayesian Block algorithm was tuned for a false-positive probability of 1% (Scargle et al. 2013). A flare is defined as a significant change in flux (3σ) with a duration of at least one binning point. Thirty-seven TeV flares were identified.
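The FR/RSS resampling step described above can be sketched like this (illustrative names; a full analysis would recompute the DCF centroid for each realisation):

```python
import random

def fr_rss_sample(t, flux, err, rng):
    """One FR/RSS realisation of a light curve.

    RSS: draw len(t) epochs with replacement, then keep each selected
    epoch once.  FR: perturb every kept flux by its Gaussian error.
    """
    idx = sorted(set(rng.choices(range(len(t)), k=len(t))))
    return ([t[i] for i in idx],
            [rng.gauss(flux[i], err[i]) for i in idx])
```

Repeating this 10^4 times, recomputing the DCF centroid lag for each realisation, and histogramming the results yields lag distributions like those in Fig. 4; the standard deviation of that distribution is the quoted lag uncertainty.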
To enhance the detection of X-ray flares, we compiled a list of individual flares detected by Swift/BAT and Swift/XRT separately. Unfortunately, as the X-ray data were still often either too sparse or too noisy, we finally considered only 15 TeV flares (MJD 56330-56355, 56462-56465, 56467-56472, 56491-56498, 56536-56540, 56541-56545, 56815-56818, 56831-56833, 56856-56862, 56874-56883, 56887-56893, 56936-56944, 57156-57171, 57230-57235, and 57483-57489) with simultaneous coverage in X-rays. All of them coincide with individual X-ray flares or with periods with some level of X-ray flaring activity. TeV and X-ray flares have a similar duration. Most flares last less than seven days, confirming the cross-correlation results.

GeV / soft X-ray correlation

The GeV and soft X-ray variations appear strongly correlated on long timescales, as depicted in Fig. 6 (top panel). The GeV light curve is not correlated with the hard X-ray (Swift/BAT) or TeV light curves, however. This indicates that the GeV variability is not simultaneous with the flares lasting a few days, but is dominated by the long-lasting variations. This behaviour may be biased by the low sensitivity in the GeV band and should only be considered along with other temporal data.

Correlations at/with longer wavelengths

We find a strong (reaching 0.85) but broad correlation between the optical and ultraviolet variations (Fig. 7), and both are broadly correlated with the radio light curve. Lags between the bands cannot be reliably estimated from the cross-correlations alone (Fig. 3). The correlation between the GeV and radio light curves also appears strong (reaching 0.8), with the radio lagging behind the GeV by 170-250 days. GeV-radio delays have also been reported in various sources by Max-Moerbeck et al. (2014) and were interpreted as shocks propagating in the jet. The radio-GeV behaviour of Mrk 501 appears very different before the period considered in this study, however.
Before MJD 56800, no correlation could be found (Fig. 8, right, using the data from Max-Moerbeck et al. 2014). All the other flux correlations derived above for the full data set do not change notably when data either before or after MJD 56800 are selected. The source may experience an internal change (misalignment due to the precession of a mini-jet, a change of the emission-region properties, etc.), which yields different behaviour and inter-band connections before and after MJD 56800. It is important to note that this change is not related to the source flaring activity state; there is no such change after MJD 57600, for example, when Mrk 501 reaches a quiescent state in most bands (Fig. 1).

Spectral variations

The long duration of our campaign allows us to study the connection between spectral components. In the X-rays, we find a clear indication of a long-term harder-when-brighter behaviour (Fig. 9). The variability amplitude increases at harder X-rays, to the right of the low-energy hump maximum in the SED (Fig. 9). Along with Fig. 5, this indicates a higher variability beyond the cut-off energies of both spectral components. The colour-colour diagram, showing the ratios of the TeV and GeV fluxes and of the hard and soft X-ray fluxes, respectively (Fig. 10), confirms the relation between the two spectral components and that the variability amplitude of the high-energy component is higher than that of the low-energy component.

Discussion

Summary of results

We studied the broadband variability of Mrk 501 from the end of 2012 to the middle of 2018. Data from eight instruments were considered. During this period, the source experienced numerous flaring periods in the TeV. The highest flux was measured in 2014. Variability was detected in all wave bands. The fractional variability is lowest in the radio (F_var ∼ 0.05) and highest in the TeV (F_var ∼ 1.1), and it monotonically increases from the radio to the X-rays and from the GeV to the TeV.
A similar fractional variability pattern was reported for Mrk 421 on monthly (Aleksić et al. 2015b) and yearly (Ahnen et al. 2016) timescales. In the context of the one-zone synchrotron self-Compton (SSC) model, this variability pattern indicates that the electron spectrum is more variable at higher energies. For the same period of observations, Mrk 501 is less variable than Mrk 421, another nearby and bright HBL (Arbet-Engels et al. 2021). To determine correlations between the spectral bands, we calculated the discrete correlation functions for pairs of light curves and found that the X-ray and TeV emission is well correlated, with a lag < 0.4 days (1σ). All the flares detected in the TeV band were coincident with X-ray flares. The fractional variability and the correlated TeV and X-ray emission are likely produced by a synchronous change of the spectral shape of the low- (X-rays) and high-energy (TeV) components. A zero-lag correlation between the X-ray and TeV light curves suggests that the variations in the two bands are driven by a common physical process operating in the same emission region. We compare the observational constraints to different models in the next subsections, assuming that the flares are related to particles accelerated in shocks. The radio and GeV emissions were correlated during the majority of the campaign (after about MJD 56800), but not in older data (Fig. 8, left and right), indicating that the long-term radio emission, which shows the least variability, originates from several components. The VLBA high-resolution radio images at 43 GHz indeed reveal an off-axis jet component that may explain why the GeV and radio are not always correlated (see also Giroletti et al. 2008). In July 2014, the radio-optical emission of Mrk 501 was also interpreted as a separate spectral component with little contribution in the GeV (MAGIC Collaboration et al. 2020).
Synchrotron self- or external Compton emission

In the quiescent state, several multi-wavelength campaigns of Mrk 501 reported that its spectral energy distribution was compatible with a one-zone SSC model with a Doppler beaming factor δ ≈ 12-21 (MAGIC Collaboration et al. 2020; Lei et al. 2018; Abdo et al. 2011a; Albert et al. 2007b). Additional spectral features detected during flares (MAGIC Collaboration et al. 2020; Lei et al. 2018; Ahnen et al. 2017) led to higher values, δ ≈ 10-50. The wide range of δ values can be explained by a deceleration of the jet (Georganopoulos & Kazanas 2003) or by a radial structure with an inner fast spine and a slower outer layer where the radio emission is produced (Ghisellini et al. 2005). In both cases, the expected variability increases with frequency and should be lowest in the radio. One of the most significant constraints on the size of the emitting region can be derived from the variability timescale. As expected from the geometrical properties of a homogeneous source of incoherent, isotropic comoving-frame emission with a comoving size R, the minimum expected variability timescale can be written as t_var,min = (1 + z)R/(δc), where z is the source redshift. The shortest doubling timescale observed in Mrk 501 is 2 minutes (Abdo et al. 2011a), and the delay between various VHE bands is 4 minutes (an indication of progressive electron acceleration); together these indicate a gamma-ray source size < 10^12 km. Variability can also be constrained by the synchrotron cooling timescale. The synchrotron cooling time in the observer's frame of reference can be written as t_cool,e ≈ 15.86 × 10^11 ((1 + z)/δ)^{1/2} (B/1 G)^{−3/2} (ν/1 Hz)^{−1/2} seconds (Zhang et al. 2019), where ν is the emission frequency. Using typical values of B from 0.01 to 0.31 G and δ from 12 to 25 (Abdo et al. 2011a; Ahnen et al.
2017), we find that the synchrotron cooling times in the observer frame become shorter than 9 minutes, 2 hours, 7 hours, and 176 days at 50 keV, at 0.3 keV, in the V-band, and in the radio, respectively. These timescales are much shorter than those shown in the auto-correlations. The duration of the flares should therefore be driven by physical changes of the emitting source(s) occurring on dynamical timescales. The reported delay < 0.4 days between the TeV and X-ray fluxes agrees with the self-Compton or external Compton frameworks, as electrons cool rapidly (< 0.5 hour) at these energies. The observed connection between the X-ray and TeV emission (Fig. 5) also supports the SSC scenario of inverse-Compton production of VHE gamma-rays by the same population of electrons. The relation between the X-ray and gamma-ray spectral breaks (Fig. 10) finally indicates that the cut-off energies of both spectral components are also related. Assuming self-Compton emission for the TeV γ-rays (Mrk 501 is always in the Thomson regime in the TeV, e.g. Murase et al. 2012), the scenario with the stronger magnetic field produces a minute-scale variability timescale, which is close to the observations (Abdo et al. 2011a). Taking the compactness of the VHE source region into account, high Doppler factors (δ_flaring > 40, δ_low-state > 30) (Cologna et al. 2017) are needed to avoid γ-ray absorption. An explanation for the fast TeV variability may also come from MHD turbulent flows or phase transitions (Krishan & Wiita 1994), instabilities of hydrodynamic flows (Subramanian et al. 2012), or jet or mini-jet beaming turbulence on different scales (Crusius-Waetzel & Lesch 1998; Giannios et al. 2010). In the one-zone synchrotron regime, the energy cutoff and the flux normalisation are controlled by B, δ, and the particle spectrum. Variations in these parameters can lead to independent GeV and TeV variability (as indicated by the observations, see Fig. 6).
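The cooling-time estimate is easy to reproduce numerically. The sketch below evaluates the Zhang et al. (2019) scaling for the fastest-cooling corner of the quoted parameter ranges (B = 0.31 G, δ = 25); the value z = 0.034 for Mrk 501 is an assumption of this sketch, as the redshift is not restated in the text:

```python
# Observer-frame synchrotron cooling time, following the quoted scaling
# t_cool ~ 15.86e11 * ((1+z)/delta)^(1/2) * B^(-3/2) * nu^(-1/2) seconds
# (B in Gauss, nu in Hz).

H_PLANCK = 6.62607015e-34   # Planck constant, J s
EV = 1.602176634e-19        # J per eV

def t_cool_sync(nu_hz, b_gauss, delta, z=0.034):
    return 15.86e11 * ((1 + z) / delta) ** 0.5 * b_gauss ** -1.5 * nu_hz ** -0.5

def kev_to_hz(e_kev):
    """Photon energy in keV to frequency in Hz."""
    return e_kev * 1e3 * EV / H_PLANCK

# Fastest-cooling corner of the quoted B and delta ranges
t_50kev = t_cool_sync(kev_to_hz(50), b_gauss=0.31, delta=25)
t_radio = t_cool_sync(15e9, b_gauss=0.31, delta=25)
```

With these numbers, the 50 keV cooling time comes out near 540 s (about 9 minutes) and the 15 GHz one near 1.5 × 10^7 s (about 176 days), matching the limits quoted above.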
In addition, in the shock-in-jet model, the regions emitting X-rays and TeVs are closer to the black hole than those responsible for the bulk of the optical-radio and GeV emission. In the SSC frame, the observed correlations between the TeV and X-rays (fluxes and spectral slopes) and between the radio and GeV fluxes, and the lack of correlation between the TeV/X-rays and the GeV, would require that some of the above physical parameters change along the jet.

Hadronic and lepto-hadronic radiation scenarios

In addition to pure leptonic scenarios, in which all radiation is produced by relativistic electrons, purely hadronic models (Mücke & Protheroe 2001a) and lepto-hadronic models have also been proposed (Mastichiadis et al. 2013; Petropoulou et al. 2016; Mücke & Protheroe 2001b,a; Galanti et al. 2020; Mannheim 1993). In lepto-hadronic models, the low-energy SED component is still dominated by the synchrotron emission of primary accelerated electrons, while the high-energy SED component is dominated by proton synchrotron emission and/or pion-decay-induced pair cascades. The correlation between the TeV and the X-ray flares indicates that electrons and protons are accelerated in the same region. As the proton acceleration time t_acc = 20 ξ γ m_p c / (3eB) ≥ 10 ξ days (where ξ ≥ 1 is the mean free path in units of the Larmor radius; Kusunose et al. 2000; Inoue & Takahara 1996) is much longer than the delay observed between the TeV and X-ray light curves, the driving mechanism of the variability should be linked to dynamical processes (e.g. a changing orientation or beaming of the relativistically moving emission region) in combination with propagation effects. However, the absence of a correlation between the GeV and TeV bands on the same timescale as observed for the TeV/X-ray correlation suggests that the variability of the hadronic component is not driven by the process behind the variability of the leptonic component.
Hadronic emission models propose proton synchrotron emission as responsible for the low-energy SED component and pion-induced cascades for the high-energy component (Mastichiadis et al. 2013). These models require strong magnetic fields (> 10 G), such that the Larmor radius of the protons remains smaller than the jet radius (Sikora 2011). Under these conditions, and assuming that the X-ray emission is proton synchrotron, the expected cooling timescale of tens of hours (Zhang et al. 2019) is also inconsistent with the observations. Variability on shorter timescales may again be driven by dynamical processes, which affect the whole population of relativistic particles, but as above, this is not consistent with the absence of a GeV/TeV correlation and with the radio lagging behind the GeV by about 200 days. The mechanisms responsible for the broadband emission of blazar jets, while accelerating electrons, should in principle also accelerate protons and heavier nuclei, possibly related to the observed ultra-high-energy cosmic rays (UHECRs). When these particles interact with the surrounding photons, γ-rays and neutrinos may be produced (Padovani et al. 2018; Ansoldi et al. 2018; Mannheim 1993). Muons and pions created in these processes are also affected by the proton acceleration time, so that the timescales observed in the TeV cannot be reproduced with this mechanism (Zech et al. 2017). In addition, pion photoproduction is generally inefficient because protons mostly lose energy through synchrotron radiation (but see Rachen & Mészáros 1998).

Flare timing

The mass of the supermassive black hole (SMBH) of Mrk 501 is estimated as 0.9-3.4 × 10^9 M_⊙ (Barth et al. 2002). The distribution of the time delays between the maxima of the identified TeV flares peaks between 5 and 25 days (Fig. 11); undetected flares could mimic longer delays.
These delays correspond to ∼ 10^4 R_G/c ∼ 17 days, which could be expected if the flares were related to Lense-Thirring (Thirring 1918) precession of a misaligned accretion disk (Bardeen & Petterson 1975; Liska et al. 2017). The inter-flare period originating from such a precession is shortened in the source rest frame by a factor Γ; then, for the observer, it is scaled by a factor Γ^{-1}, and the two factors compensate each other.

Conclusions

The analysis of about 5.5 years of light curves from eight instruments yields the following main observational results:
1. The strongest variations were found in TeV and X-rays. The TeV and X-ray fluxes measured simultaneously (within 24 hours) are correlated, as are the X-ray and gamma-ray spectral breaks, as expected from SSC models. The lag between the TeV and X-ray variations could be estimated as < 0.4 days (1σ).
2. The characteristic time interval between TeV flares is comparable with the expectation if these flares are triggered by a Lense-Thirring precession of the accretion disk around the SMBH.

The observed variability of Mrk 501 was compared with the predictions of leptonic, lepto-hadronic, and hadronic models. We found that purely hadronic models are incompatible with the observations due to the extremely long synchrotron cooling time in the radio and the relatively short GeV-radio delay. The lepto-hadronic models are also incompatible with observations because they fail to reproduce simultaneously the short lag observed between X-rays and TeV and the absence of correlation between the GeV and TeV. Electron synchrotron self- or external Compton processes match the observations of Mrk 501 in general, although individual flares may require more complex models, including the introduction of a second emission zone (Abdo et al. 2011b; Sahu et al. 2020a; Ahnen et al. 2018).

Fig. 2: Fractional variability F_var as a function of frequency. X-axis error bars indicate the energy band of the instrument.
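As a rough order-of-magnitude aid for the flare-timing argument, the light-crossing time of one gravitational radius, R_G/c = GM/c^3, can be evaluated for the quoted SMBH mass range. This sketch (SI units) computes only that basic scale; it does not include the multiplier or the relativistic factors discussed in the text.

```python
# Light-crossing time of one gravitational radius R_G = G*M/c^2 for the
# quoted Mrk 501 SMBH mass range of 0.9-3.4 x 10^9 solar masses.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]
DAY = 86400.0      # seconds per day

def rg_over_c_days(mass_in_msun):
    """R_G / c in days for a black hole of the given mass (in solar masses)."""
    return G * mass_in_msun * M_SUN / C**3 / DAY

for m in (0.9e9, 3.4e9):
    print(f"M = {m:.1e} M_sun: R_G/c = {rg_over_c_days(m):.3f} days")
```

For this mass range R_G/c is of order 0.05-0.2 days, so delays of days to weeks correspond to some 10^2-10^4 gravitational radii depending on the mass and on the relativistic corrections applied.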
Y-axis error bars denote the uncertainty of F_var, which for some instruments is smaller than the marker.

Fig. 3: Light-curve auto-correlations. From top to bottom: FACT, Swift/BAT, Swift/XRT (2-10 keV), Swift/XRT (0.3-2 keV).

Fig. 4: DCF cross-correlations of light curves (from top to bottom panel): FACT with Swift/BAT, Swift/XRT (2-10 keV), and Swift/XRT (0.3-2 keV). One-day binning was used. Left: DCF values as a function of lag. Grey error bars are 1σ uncertainties. Right: lag distributions derived from FR/RSS simulations (more details are provided in Sect. 3.3). A Gaussian fit (black lines) was used to derive the lag indicated in the plots.

Fig. 5: TeV (FACT) flux and X-ray (Swift/XRT 2-10 keV) count rate measured within 24 hours (slope 1.89 ± 0.15).

Fig. 6: Cross-correlation of the GeV Fermi-LAT light curve with the data from Swift/XRT 0.3-2 keV (top right), Swift/XRT 2-10 keV (top left), FACT (bottom right), and Swift/BAT (bottom left). The time resolution is seven days. The grey error bars denote 1σ uncertainties.

Fig. 7: Multi-wavelength correlations for different bands. Left: cross-correlation of the V-band and Swift/UVOT light curves. Right: cross-correlation of the V-band and radio light curves. Grey error bars denote 1σ uncertainties.

Fig. 8: Multi-wavelength correlations for different bands. Left: cross-correlation of the Fermi-LAT and radio light curves in the time range [56800, 58226] MJD. Right: cross-correlation of the Fermi-LAT and radio light curves in the time range [54693, 56800] MJD, using the data from Max-Moerbeck et al. (2014). Grey error bars denote 1σ uncertainties.

Fig. 9: Ratio of Swift/XRT hard-to-soft vs Swift/XRT 0.3-2 keV count rate. The slope is 0.43 ± 0.04.

Fig. 10: Ratio of TeV to GeV fluxes vs the ratio of Swift/XRT hard-to-soft flux. The slope is 1.4 ± 0.3.

Fig. 11: Period between peaks of 37 identified TeV flares, 15 of which have good X-ray coverage and are listed in Sect. 3.3.

Corresponding authors, e-mails: [email protected] & [email protected]
⋆ https://sites.astro.caltech.edu/ovroblazars/data.php. As of submission of this paper, the policy has changed and the data is only available upon request to the OVRO collaboration.
http://james.as.arizona.edu/~psmith/Fermi/DATA/photdata.html
3 http://www.Swift.ac.uk/user_objects/
4 http://Swift.gsfc.nasa.gov/results/bs70mon/
https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/
6 http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/ /Pass8_usage.html
7 http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels
Article number, page 3 of 10

Acknowledgements. We acknowledge the important contributions from ETH Zurich (grants ETH-10.08-2, ETH-27.12-1) and by the Swiss SNF, and the German BMBF (Verbundforschung Astro- und Astroteilchenphysik) and HAP (Helmholtz Alliance for Astroparticle Physics) to the FACT telescope project. Part of the reported work was supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 876 "Providing Information by Resource-Constrained Analysis", project C3. We would like to express our gratitude to E. Lorenz, D. Renker and G. Viertel for their invaluable contribution to the FACT project in its early stages. We would also like to thank the Instituto de Astrofísica de Canarias (IEC) for hosting the FACT telescope over the years and supporting its operations at the Observatorio del Roque de los Muchachos in La Palma. We thank the Max-Planck-Institut für Physik for the provided HEGRA CT3 mount, which was refurbished and reused for FACT. We express our sincere gratitude to the whole MAGIC collaboration for taking care of FACT and for help during its remote operations. This research used public data from the Bok Telescope on Kitt Peak and the 1.54 m Kuiper Telescope on Mt. Bigelow (Smith et al.
2009), Fermi-LAT (Smith et al. 2009) and Swift (Gehrels & Swift Team 2004). This research has made use of data from the OVRO 40-m monitoring program (Richards, J. L. et al. 2011, ApJS, 194, 29), supported by private funding from the California Institute of Technology and the Max Planck Institute for Radio Astronomy, and by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911. Data from the OVRO 40-m telescope used for this study was publicly available when we started the study, though as of the paper submission, the policy has changed and it is only available upon request.

A&A proofs: manuscript no. sliusar_fact_mrk501

References
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011a, ApJ, 727, 129
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011b, ApJ, 727, 129
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2009, Astroparticle Physics, 32, 193
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2011c, ApJ, 736, 131
Abdollahi, S., Acero, F., Ackermann, M., et al. 2020, ApJS, 247, 33
Abeysekara, A. U., Albert, A., Alfaro, R., et al. 2017, ApJ, 841, 100
Acciari, V. A., Aliu, E., Arlen, T., et al. 2011a, ApJ, 738, 25
Acciari, V. A., Arlen, T., Aune, T., et al. 2011b, ApJ, 729, 2
Ackermann, M., Ajello, M., Albert, A., et al. 2012a, ApJS, 203, 4
Ackermann, M., Ajello, M., Allafort, A., et al. 2012b, Astroparticle Physics, 35, 346
Aharonian, F., Akhperjanian, A., Barrio, J., et al. 2001, ApJ, 546, 898
Aharonian, F. A., Akhperjanian, A. G., Barrio, J. A., et al. 1999, A&A, 349, 11
Ahnen, M. L., Ansoldi, S., Antonelli, L. A., et al. 2016, A&A, 593, A91
Ahnen, M. L., Ansoldi, S., Antonelli, L. A., et al. 2017, A&A, 603, A31
Ahnen, M. L., Ansoldi, S., Antonelli, L. A., et al. 2018, A&A, 620, A181
Albert, J., Aliu, E., Anderhub, H., et al. 2007a, ApJ, 663, 125
Albert, J., Aliu, E., Anderhub, H., et al. 2007b, ApJ, 669, 862
Aleksić, J., Ansoldi, S., Antonelli, L. A., et al. 2015a, A&A, 573, A50
Aleksić, J., Ansoldi, S., Antonelli, L. A., et al. 2015b, A&A, 576, A126
Aleksić, J., Ansoldi, S., Antonelli, L. A., et al. 2015c, A&A, 578, A22
Aliu, E., Archambault, S., Archer, A., et al. 2016, A&A, 594, A76
Anderhub, H., Backes, M., Biland, A., et al. 2013, Journal of Instrumentation, 8, P06008
Ansoldi, S., Antonelli, L. A., Arcaro, C., et al. 2018, ApJ, 863, L10
Arbet-Engels, A., Baack, D., Balbo, M., et al. 2021, A&A, 647, A88
Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, ApJ, 697, 1071
Bardeen, J. M. & Petterson, J. A. 1975, ApJ, 195, L65
Barth, A. J., Ho, L. C., & Sargent, W. L. W. 2002, The Astrophysical Journal, 566, L13
Bartoli, B., Bernardini, P., Bi, X. J., et al. 2012, ApJ, 758, 2
Baumgartner, W. H., Tueller, J., Markwardt, C. B., et al. 2013, ApJS, 207, 19
Beck, M., Arbet-Engels, A., Baack, D., et al. 2019, 36, 630
Biland, A., Bretz, T., Buß, J., et al. 2014, Journal of Instrumentation, 9, P10012
Bretz, T. 2019, Astroparticle Physics, 111, 72
Bretz, T., Biland, A., Buß, J., et al. 2013, arXiv e-prints [arXiv:1308.1516]
Bretz, T. & Dorner, D. 2010, in Astroparticle, Particle and Space Physics, Detectors and Medical Physics Applications, ed. C. Leroy, P.-G. Rancoita, M. Barone, A. Gaddi, L. Price, & R. Ruchti, 681-687
Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, Space Sci. Rev., 120, 165
Cologna, G., Chakraborty, N., Jacholkowska, A., et al. 2017, in American Institute of Physics Conference Series, Vol. 1792, 6th International Symposium on High Energy Gamma-Ray Astronomy, 050019
Crusius-Waetzel, A. R. & Lesch, H. 1998, A&A, 338, 399
Dorner, D., Adam, J., Ahnen, L. M., et al. 2017, International Cosmic Ray Conference, 35, 609
Dorner, D., Ahnen, M. L., Bergmann, M., et al. 2015, arXiv e-prints [arXiv:1502.02582]
Edelson, R. A. & Krolik, J. H. 1988, ApJ, 333, 646
Evans, P. A., Beardmore, A. P., Page, K. L., et al. 2009, MNRAS, 397, 1177
Fomin, V. P., Stepanian, A. A., Lamb, R. C., et al. 1994, Astroparticle Physics, 2, 137
Galanti, G., Tavecchio, F., & Landoni, M. 2020, MNRAS, 491, 5268
Gehrels, N. & Swift Team. 2004, New Astronomy Reviews, 48, 431
Georganopoulos, M. & Kazanas, D. 2003, New A Rev., 47, 653
Ghisellini, G., Tavecchio, F., & Chiaberge, M. 2005, A&A, 432, 401
Giannios, D., Uzdensky, D. A., & Begelman, M. C. 2010, MNRAS, 402, 1649
Giroletti, M., Giovannini, G., Cotton, W. D., et al. 2008, A&A, 488, 905
Hildebrand, D., Ahnen, M. L., Balbo, M., et al. 2017, International Cosmic Ray Conference, 35, 779
Inoue, S. & Takahara, F. 1996, ApJ, 463, 555
Knoetig, M. L., Biland, A., Bretz, T., et al. 2013, in International Cosmic Ray Conference, Vol. 33, 1132
Koyama, S., Kino, M., Giroletti, M., et al. 2016, A&A, 586, A113
Krimm, H. A., Holland, S. T., Corbet, R. H. D., et al. 2013, The Astrophysical Journal Supplement Series, 209, 14
Krishan, V. & Wiita, P. J. 1994, ApJ, 423, 172
Kusunose, M., Takahara, F., & Li, H. 2000, ApJ, 536, 299
Lei, M., Yang, C., Wang, J., & Yang, X. 2018, PASJ, 70, 45
Liska, M., Hesp, C., Tchekhovskoy, A., et al. 2017, Monthly Notices of the Royal Astronomical Society: Letters, 474, L81
MAGIC Collaboration, Acciari, V. A., Ansoldi, S., et al. 2020, A&A, 637, A86
Mahlke, M., Bretz, T., Adam, J., et al. 2017, International Cosmic Ray Conference, 35, 612
Mannheim, K. 1993, A&A, 269, 67
Mastichiadis, A., Petropoulou, M., & Dimitrakoudis, S. 2013, MNRAS, 434, 2684
Max-Moerbeck, W., Hovatta, T., Richards, J. L., et al. 2014, MNRAS, 445, 428
Mücke, A. & Protheroe, R. J. 2001a, Astroparticle Physics, 15, 121
Mücke, A. & Protheroe, R. J. 2001b, International Cosmic Ray Conference, 3, 1153
Murase, K., Dermer, C. D., Takami, H., & Migliori, G. 2012, ApJ, 749, 63
Padovani, P., Giommi, P., Resconi, E., et al. 2018, MNRAS, 480, 192
Pandey, A., Gupta, A. C., & Wiita, P. J. 2017, ApJ, 841, 123
Peterson, B. M., Ferrarese, L., Gilbert, K. M., et al. 2004, ApJ, 613, 682
Peterson, B. M., Wanders, I., Horne, K., et al. 1998, PASP, 110, 660
Petropoulou, M., Coenders, S., & Dimitrakoudis, S. 2016, Astroparticle Physics, 80, 115
Piner, B. G., Pant, N., & Edwards, P. G. 2010, ApJ, 723, 1150
Poutanen, J., Zdziarski, A. A., & Ibragimov, A. 2008, MNRAS, 389, 1427
Quinn, J., Akerlof, C. W., Biller, S., et al. 1996, ApJ, 456, L83
Rachen, J. P. & Mészáros, P. 1998, Phys. Rev. D, 58, 123005
Richards, J. L., Max-Moerbeck, W., Pavlidou, V., et al. 2011, ApJS, 194, 29
Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95
Roming, P. W. A., Koch, T. S., Oates, S. R., et al. 2009, ApJ, 690, 163
Sahu, S., López Fortín, C. E., Castañeda Hernández, L. H., Nagataki, S., & Rajpoot, S. 2020a, ApJ, 901, 132
Sahu, S., López Fortín, C. E., Iglesias Martínez, M. E., Nagataki, S., & Fernández de Córdoba, P. 2020b, MNRAS, 492, 2261
Scargle, J. D., Norris, J. P., Jackson, B., & Chiang, J. 2013, ApJ, 764, 167
Schlafly, E. F. & Finkbeiner, D. P. 2011, ApJ, 737, 103
Schleicher, B., Arbet-Engels, A., Baack, D., et al. 2019, Galaxies, 7, 62
Shukla, A., Chitnis, V. R., Singh, B. B., et al. 2015, ApJ, 798, 2
Sikora, M. 2011, in Jets at All Scales, ed. G. E. Romero, R. A. Sunyaev, & T. Belloni, Vol. 275, 59-67
Simonetti, J. H., Cordes, J. M., & Heeschen, D. S. 1985, ApJ, 296, 46
Smith, P. S., Montiel, E., Rightley, S., et al. 2009, arXiv e-prints [arXiv:0912.3621]
Subramanian, P., Shukla, A., & Becker, P. A. 2012, MNRAS, 423, 1707
Tanihata, C., Urry, C. M., Takahashi, T., et al. 2001, ApJ, 563, 569
Temme, F., Ahnen, M. L., Balbo, M., et al. 2015, in International Cosmic Ray Conference, Vol. 34, 34th International Cosmic Ray Conference (ICRC2015), 707
Thirring, H. 1918, Physikalische Zeitschrift, 19, 33
Tueller, J., Baumgartner, W. H., Markwardt, C. B., et al. 2010, ApJS, 186, 378
Vaughan, S., Edelson, R., Warwick, R. S., & Uttley, P. 2003, MNRAS, 345, 1271
Zech, A., Cerruti, M., & Mazin, D. 2017, A&A, 602, A25
Zhang, Z., Gupta, A. C., Gaur, H., et al. 2019, ApJ, 884, 125
[]
[ "Finite basis problem for identities with involution", "Finite basis problem for identities with involution" ]
[ "Irina Sviridova \nDepartamento de Matemática\nUniversidade de Brasília\n70910-900BrasíliaDFBrazil\n" ]
[ "Departamento de Matemática\nUniversidade de Brasília\n70910-900BrasíliaDFBrazil" ]
[]
We consider associative algebras with involution over a field of characteristic zero. We prove that any algebra with involution satisfies the same identities with involution as the Grassmann envelope of some finite dimensional (Z/4Z)-graded algebra with graded involution. As a consequence we obtain the positive solution of the Specht problem for identities with involution: any associative algebra with involution over a field of characteristic zero has a finite basis of identities with involution. These results are analogs of Kemer's theorems for ordinary identities [28]. Similar results were proved also for associative algebras graded by a finite group in [1], and for the abelian case in [33]. MSC: Primary 16R50; Secondary 16W20, 16W55, 16W10, 16W50
null
[ "https://arxiv.org/pdf/1410.2233v2.pdf" ]
119,319,273
1410.2233
9c0d18c06394e2828dd752b9c6938d1187b0b801
Finite basis problem for identities with involution

Irina Sviridova, Departamento de Matemática, Universidade de Brasília, 70910-900 Brasília, DF, Brazil

6 Dec 2014 (October 26, 2014)

Keywords: associative algebras, algebras with involution, identities with involution

We consider associative algebras with involution over a field of characteristic zero. We prove that any algebra with involution satisfies the same identities with involution as the Grassmann envelope of some finite dimensional (Z/4Z)-graded algebra with graded involution. As a consequence we obtain the positive solution of the Specht problem for identities with involution: any associative algebra with involution over a field of characteristic zero has a finite basis of identities with involution. These results are analogs of Kemer's theorems for ordinary identities [28]. Similar results were proved also for associative algebras graded by a finite group in [1], and for the abelian case in [33].

MSC: Primary 16R50; Secondary 16W20, 16W55, 16W10, 16W50

Introduction

The interest in involutions on associative algebras can be partially explained by their natural interconnections with various interesting and important classes of algebras which appear in different fields of mathematics and physics (see, e.g., [29]). In particular, associative algebras with involution are the natural background for important classes of Lie and Jordan algebras ([25], [30], [37]). Identities with involution have also been intensively studied in recent years. In the theory of identities one of the central problems is the Specht problem: the problem of the existence of a finite basis for any system of identities. Originally this problem was formulated by W. Specht for ordinary polynomial identities of associative algebras over a field of characteristic zero [32]. This problem was positively solved by A. Kemer [28]. The solution is based on Kemer's classification theorems.
They state that any associative algebra over a field of characteristic zero is equivalent in terms of identities (PI-equivalent) to the Grassmann envelope of a finite dimensional superalgebra, and that any finitely generated PI-algebra is PI-equivalent to a finite dimensional algebra. The classification theorems are significant in their own right: they have turned out to be the key tool in the study of polynomial identities over the last several years. The proof of the main classification theorem of Kemer consists of two principal steps: the supertrick and the PI-representability of finitely generated PI-superalgebras. In the first step, the study of polynomial identities of any associative algebra is reduced to the study of identities of the Grassmann envelope of a finitely generated PI-superalgebra. The second step is to prove that a finitely generated PI-superalgebra has the same (Z/2Z)-graded identities as some finite dimensional superalgebra. Later, results similar to some of Kemer's theorems were also obtained for various classes of algebras and identities. A review of results concerning the Specht problem can be found in [11]. One of the most recent results is a positive solution of the local Specht problem for associative algebras over an associative commutative Noetherian ring with unit [7]-[14]. Graded algebras and algebras with involution were also considered with regard to this problem. The positive solution of the Specht problem and analogs of the classification theorems were obtained for graded identities of graded associative algebras over a field of characteristic zero ([1] for a grading by a finite group, and [33] for a grading by a finite abelian group). The equivalence in terms of identities with involution was proved for finitely generated and finite dimensional PI-algebras with involution [34]. The main purpose of this paper is a positive solution of the Specht problem for identities with involution.
This problem can be formulated in various forms: in terms of a finite basis of identities, and in terms of the Noetherian property for ideals of the free algebra which are invariant under free algebra endomorphisms. The positive answer to this question for identities with involution is equivalent to any of the following statements. Any associative algebra with involution over a field of characteristic zero has a finite basis of identities with involution (all identities with involution of a *-algebra follow from a finite family of *-identities). Any *T-ideal of the free associative algebra with involution of infinite rank over a field of characteristic zero is finitely generated as a *T-ideal. Any ascending chain of *T-ideals of the free associative algebra with involution of infinite rank over a field of characteristic zero eventually stabilizes. Here a *T-ideal is a *-invariant two-sided ideal of the free associative algebra with involution, closed under all free algebra endomorphisms which commute with the involution (see Lemma 1.1 for the structure of the *T-ideal generated by a set S). We prove in this work that any associative algebra with involution over a field of characteristic zero satisfies the same identities with involution as the Grassmann (Z/4Z)-envelope of some finitely generated (Z/4Z)-graded PI-algebra with graded involution (Theorem 4.1). This is an analog of the supertrick in the classical case. Using the recent result of the author on the PI-representability of finitely generated (Z/4Z)-graded PI-algebras with graded involution [35], we obtain a version of Kemer's main classification theorem for identities with involution (Theorem 4.2). As a consequence we obtain the positive solution of the Specht problem for identities with involution of associative *-algebras over a field of characteristic zero (Theorem 5.1). Throughout the paper we consider associative algebras over a field F of characteristic zero.
An involution of an F-algebra A is an anti-automorphism of A of the second order. If we fix an involution * of an associative F-algebra A, then the pair (A, *) is called an associative algebra with involution (or associative *-algebra). Note that an algebra with involution can be considered as an algebra with the supplementary unary linear operation * satisfying the identities (a · b)* = b* · a*, (a*)* = a for all a, b ∈ A. Observe that any *-algebra can be decomposed into the sum of its symmetric and skew-symmetric parts. An element a ∈ A is called symmetric if a* = a, and skew-symmetric if a* = −a. Thus a + a* is symmetric and a − a* skew-symmetric for any a ∈ A, and we have A = A^+ ⊕ A^−, where A^+ is the subspace formed by all symmetric elements (the symmetric part), and A^− is the subspace of all skew-symmetric elements of A (the skew-symmetric part). We also use the notations a ∘ b = ab + ba and [a, b] = ab − ba. It is clear that the symmetric part A^+ of a *-algebra A with the operation ∘ is a Jordan algebra (a Hermitian Jordan algebra). The skew-symmetric part A^− with the operation [, ] is a Lie algebra. All classical finite-dimensional simple Lie algebras over an algebraically closed field, except sl_n(F), are of this type [25]. Suppose that A, B are algebras with involution. An ideal I of A invariant with respect to the involution is called a *-ideal. If I ◁ A is a *-ideal then A/I inherits the involution of A. A homomorphism ϕ : A → B is called a *-homomorphism (a homomorphism of algebras with involution) if it commutes with the involution. We denote by A_1 × · · · × A_ρ the direct product of algebras A_1, . . . , A_ρ, and by A_1 ⊕ · · · ⊕ A_ρ ⊆ A the direct sum of subspaces A_i of an algebra A. If τ_i is the involution of A_i (i = 1, . . . , ρ) then A_1 × · · · × A_ρ is an algebra with the involution * defined by the rule (a_1, . . . , a_ρ)* = (τ_1(a_1), . . . , τ_ρ(a_ρ)), a_i ∈ A_i.
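The decomposition A = A^+ ⊕ A^− and the anti-automorphism property can be illustrated with the transpose involution on 2×2 matrices. This is a toy sketch of the general definitions above, not a construction used in the paper; the helper functions are ours.

```python
# The transpose involution on 2x2 matrices (as nested lists) and the
# decomposition a = (a + a*)/2 + (a - a*)/2 into symmetric and
# skew-symmetric parts.
def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b, sign=1):
    return [[a[i][j] + sign * b[i][j] for j in range(2)] for i in range(2)]

def scale(a, s):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

a = [[1, 2], [3, 4]]
b = [[0, 1], [5, 7]]

# * is an anti-automorphism: (a*b)* == b* a*
assert transpose(mul(a, b)) == mul(transpose(b), transpose(a))

# a = sym + skew with sym in A^+ and skew in A^-
sym = scale(add(a, transpose(a)), 0.5)
skew = scale(add(a, transpose(a), sign=-1), 0.5)
assert transpose(sym) == sym
assert transpose(skew) == scale(skew, -1)
assert add(sym, skew) == a
```

The same two lines defining `sym` and `skew` are exactly the elements (a + a*)/2 and (a − a*)/2 from the text.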
We study identities with involution (*-identities) of associative algebras with involution. The notion of an identity with involution is a formal extension of the notion of an ordinary polynomial identity (see, e.g., [24], [34]). A brief introduction to the notion is given in Section 1. The definition of a *-identity can also be found in [34], or in [24] with some more details. We refer the reader to the textbooks [17], [18], [24], and to [27], [28] concerning basic definitions, facts and properties of ordinary polynomial identities. In the proof of the classification theorem we also use the concept of a graded identity with involution (graded *-identity). This concept was developed in [35]. The principal definitions concerning this notion are also given in Section 1. In general, the concept of a graded *-identity is the union of the concepts of an identity with involution and of a graded identity. Information about graded identities can be found in [23], [24] and in [1], [33]. Besides the notions of the free algebra with involution, identities with involution, the free graded algebra with involution and graded identities with involution, Section 1 also contains the necessary information about graded algebras. Properties of multilinear *-polynomials and multilinear graded *-polynomials alternating or symmetrizing in some set of variables are discussed in Section 3. Such polynomials appear in the study of identities as a result of the application of techniques of symmetric group representations. Basic facts and notions concerning applications of representation theory to *-identities can be found in [19], [22], [20], [21], [24]. Observe that in our case the application of representation theory to *-identities is similar to the case of ordinary polynomial identities, due to the fact that the symmetric group acts by renaming of variables on a homogeneous subset of variables (on a set of variables symmetric with respect to the involution, or skew-symmetric).
Thus in many situations we can apply the same results and arguments as in the case of ordinary polynomial identities. The book [24] contains a very detailed and complete exposition of the facts and methods related to the application of symmetric group representations in the theory of polynomial identities. We appeal to this book when we need facts which can be directly applied in our case, or arguments which can be literally repeated. We also refer the reader to [26], [16] concerning the principal definitions and facts of representation theory. Section 2 is devoted to the definition of the Grassmann envelope of a (Z/4Z)-graded algebra. Section 4 contains the classification theorems for ideals of identities with involution (Theorems 4.1, 4.2, 4.3). They are analogs of Kemer's theorems [28] for polynomial identities of associative algebras over a field of characteristic zero. The proof of Theorem 4.1 follows the scheme of the proof of the classical Kemer theorem about Grassmann envelopes given in [24]. We adapt this proof to the case of identities with involution. Theorem 4.2 is a corollary of Theorem 4.1 and Theorem 6.2 [35]. The Specht problem solution (Theorem 5.1) for *-identities is given in Section 5. The proof of Theorem 5.1 is the involution version of the original Kemer proof [28]. Observe that the principal tool of the proof is the Grassmann envelope. Our conception of the Grassmann envelope in this work is different from the usual one. Usually one considers the Grassmann envelope E(A) = A_0 ⊗ E_0 ⊕ A_1 ⊗ E_1 for a (Z/2Z)-graded algebra A = A_0 ⊕ A_1 (a superalgebra). This gives the super-theory. In this case a graded involution on E(A) induces a superinvolution on A. A (Z/2Z)-graded linear transformation ⋆ of the second order of a superalgebra A is called a superinvolution if (a · b)^⋆ = (−1)^{i·j} b^⋆ a^⋆ for all a ∈ A_i, b ∈ A_j, i, j ∈ {0, 1}. And vice versa, one needs a superinvolution on A to guarantee the corresponding involution on E(A).
We use a slight generalization of the traditional construction based on the natural (Z/4Z)-grading of the Grassmann algebra E. We call it the Grassmann Z_4-envelope to distinguish it from the traditional Grassmann envelope. This construction is compatible with the usual graded involution. We think that the Specht problem for * -identities can also be solved using the traditional approach based on superinvolutions. It is even possible that the traditional approach could be more natural. But the author assumes that the new construction and its connection with graded involutions on associative algebras is rather curious and worth studying.

1 Identities with involution and graded identities with involution.

Let F be a field of characteristic zero. Consider two countable sets Y = {y_i | i ∈ N}, Z = {z_i | i ∈ N} of pairwise different letters, and the free associative non-unitary algebra F⟨Y, Z⟩ generated by Y ∪ Z. We can define an involution on F⟨Y, Z⟩ assuming that the variables from Y are symmetric, and those from Z skew-symmetric:

(Σ_w α_w a_{i_1} · · · a_{i_n})* = Σ_w α_w a*_{i_n} · · · a*_{i_1} = Σ_w (−1)^{deg_Z w} α_w a_{i_n} · · · a_{i_1},    (1)

where y*_j = y_j, z*_j = −z_j, w = a_{i_1} · · · a_{i_n}, a_j ∈ Y ∪ Z, α_w ∈ F. Then F⟨Y, Z⟩ is the free associative algebra with involution. Its elements are called * -polynomials. The free associative algebra F⟨X*⟩ generated by the set X* = {x_i, x*_i | i ∈ N} also has an involution, defined by (Σ_w α_w a_{i_1} · · · a_{i_n})* = Σ_w α_w a*_{i_n} · · · a*_{i_1}, where (x_j)* = x*_j, (x*_j)* = x_j, w = a_{i_1} · · · a_{i_n}, a_j ∈ X*, α_w ∈ F. The equalities

y_i = (x_i + x*_i)/2,  z_i = (x_i − x*_i)/2;  x_i = y_i + z_i,  x*_i = y_i − z_i    (2)

induce the isomorphism of the algebras with involution F⟨X*⟩ and F⟨Y, Z⟩. We use the algebra F⟨Y, Z⟩ as the free associative * -algebra. An algebra with involution A satisfies the * -identity (or identity with involution) f = 0 for a non-trivial * -polynomial f = f(y_1, . . . , y_n, z_1, . . . , z_m) ∈ F⟨Y, Z⟩ whenever f (a 1 , . . .
, a_n, b_1, . . . , b_m) = 0 for all elements a_i ∈ A+ and b_i ∈ A−. Let Id*(A) be the ideal of all identities with involution of A. Then Id*(A) is a two-sided * -ideal of F⟨Y, Z⟩ closed under all * -endomorphisms of F⟨Y, Z⟩. Such ideals are called * T-ideals (see [34]). Conversely, any * T-ideal I of F⟨Y, Z⟩ is the ideal of * -identities of the algebra with involution F⟨Y, Z⟩/I. We denote by * T[S] the * T-ideal generated by a set S ⊆ F⟨Y, Z⟩. The next statement is clear due to the definition and elementary properties of a * T-ideal.

Lemma 1.1 Let F be a field of characteristic zero. Given a set S ⊆ F⟨Y, Z⟩, a polynomial f ∈ F⟨Y, Z⟩ belongs to the * T-ideal * T[S] generated by S iff f is a finite linear combination of the form

f = Σ_{(u),j} α_{(u),j} v_1 g_j(ũ_{j1}, . . . , ũ_{jn_j}) v_2,  α_{(u),j} ∈ F,    (3)

where g_j = g̃_j or g_j = g̃*_j for the full linearization g̃_j of a multihomogeneous component of a polynomial g ∈ S; ũ_{jl} = u_{jl} ± u*_{jl} for a monomial u_{jl} ∈ F⟨Y, Z⟩ (ũ_{jl} = u_{jl} + u*_{jl} if the corresponding variable x_{jl} of the polynomial g_j is symmetric with respect to the involution (x_{jl} ∈ Y), and ũ_{jl} = u_{jl} − u*_{jl} if x_{jl} ∈ Z is skew-symmetric); v_l ∈ F⟨Y, Z⟩ are monomials, possibly empty; (u) = (v_1, ũ_{j1}, . . . , ũ_{jn_j}, v_2).

Proof. It is clear that the set of all polynomials of the form (3) is a * T-ideal containing S. The characteristic of the base field is zero. Therefore, any * T-ideal Γ contains all multihomogeneous components of its elements and their full linearizations. In particular, if a * T-ideal Γ contains S then it also contains all multihomogeneous components of any g ∈ S and their full linearizations. Moreover, any * -invariant evaluation of the variables of a homogeneous polynomial g ∈ Γ can be realized by a * -invariant evaluation of the full linearization of g, up to a non-zero coefficient.
Since the polynomials g_j are multilinear, a linear basis of all their * -invariant evaluations is formed by their evaluations with the symmetric and skew-symmetric parts of monomials. ✷
We say that a * -polynomial f is a consequence of a set S ⊆ F⟨Y, Z⟩ if f ∈ * T[S]. We also have that Id*(A_1 × A_2) = Id*(A_1) ∩ Id*(A_2) for the direct product A_1 × A_2 of arbitrary * -algebras A_i. Suppose that Γ is a * T-ideal. The * -variety defined by Γ is the family of all associative * -algebras that satisfy f = 0 for any f ∈ Γ. It is denoted by V_Γ. A * -algebra A generates V_Γ if Γ = Id*(A); then we write V_Γ = V(A). The * -algebra F⟨Y, Z⟩/Γ is the relatively free algebra of the * -variety V_Γ. Any * -variety is closed under taking * -subalgebras, * -homomorphic images, and direct products. The free * -algebra F⟨Y_ν, Z_ν⟩ of rank ν, and the relatively free algebra F⟨Y_ν, Z_ν⟩/(Γ ∩ F⟨Y_ν, Z_ν⟩) of rank ν for the * -variety V_Γ, are also considered (Y_ν = {y_i | i = 1, . . . , ν}, Z_ν = {z_i | i = 1, . . . , ν}). Let G be a finite abelian group. An algebra A is G-graded if A = ⊕_{θ∈G} A_θ is the direct sum of its subspaces A_θ satisfying A_θ A_ξ ⊆ A_{θξ} for all θ, ξ ∈ G. An element a ∈ A_θ is called G-homogeneous of degree deg_G a = θ. A subspace V of A is graded if V = ⊕_{θ∈G} (V ∩ A_θ).
Example 1.2 The free associative algebra F = F⟨X⟩ generated by X = {x_1, x_2, . . . } has the natural (Z/nZ)-grading F_m̄ = Span_F {x_{i_1} x_{i_2} · · · x_{i_s} | s ≡ m mod n}, m̄ ∈ Z/nZ. The Grassmann algebra of countable rank E = ⟨e_i, i ∈ N | e_i e_j = −e_j e_i, ∀i, j⟩ has homogeneous defining relations. Thus it inherits the (Z/nZ)-grading of the free algebra: E_m̄ = Span_F {e_{i_1} e_{i_2} · · · e_{i_s} | s ≡ m mod n, i_1 < · · · < i_s}. This grading is called natural. Consider a G-graded algebra A with involution. We assume that the involution is a graded anti-automorphism of A, i.e. A*_θ = A_θ for any θ ∈ G.
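For illustration, with n = 4 the natural grading of Example 1.2 assigns to products of the generators e_i of E the following degrees (the example products are arbitrary):

```latex
% Degrees in the natural Z/4Z-grading of the Grassmann algebra E:
\deg e_1 = \bar{1}, \quad
\deg (e_1 e_2) = \bar{2}, \quad
\deg (e_1 e_2 e_3) = \bar{3}, \quad
\deg (e_1 e_2 e_3 e_4) = \bar{0},
% and the grading is multiplicative, e.g.
(e_1 e_2)(e_3 e_4 e_5) = e_1 e_2 e_3 e_4 e_5
  \in E_{\bar{2} + \bar{3}} = E_{\bar{1}},
% consistent with 5 \equiv 1 \pmod 4.
```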
This is equivalent to the condition (see for instance [6]) that the subspaces A+, A− are graded. In particular, we have that A = ⊕_{θ∈G} (A+_θ ⊕ A−_θ), where A^δ = ⊕_{θ∈G} A^δ_θ (δ ∈ {+, −}), and A_θ = A+_θ ⊕ A−_θ (θ ∈ G). We say that an element a ∈ A^δ_θ (δ ∈ {+, −}, θ ∈ G) is homogeneous of complete degree deg_G a = (δ, θ), or simply G-homogeneous.
Example 1.3 Consider the natural (Z/4Z)-grading on the Grassmann algebra of countable rank E = ⊕_{m̄∈Z/4Z} E_m̄ described in Example 1.2. Define on E the involution *_E by the equalities (e_i)^{*_E} = e_i for all i ∈ N. This involution is called canonical. It is clear that this involution is graded. Moreover, E+ = E_0̄ ⊕ E_1̄, and E− = E_2̄ ⊕ E_3̄.
A homomorphism ϕ : A → B of two G-graded * -algebras A, B is called a graded * -homomorphism if ϕ is graded (ϕ(A_θ) ⊆ B_θ for any θ ∈ G) and commutes with the involution. An ideal (a subalgebra) I ✂ A of a graded algebra with involution A is a graded * -ideal (graded * -subalgebra) if it is graded and invariant under the involution. For graded algebras with involution we consider only graded * -ideals and graded * -homomorphisms. In this case the quotient algebra A/I is also a graded * -algebra, with the grading and the involution induced from A. It is clear that the direct product of graded algebras with involution is also a graded algebra with involution (the grading and the involution are component-wise). We can also define the notion of a graded * -identity for a G-graded algebra with a graded involution. The free associative non-unitary algebra F^G = F⟨Y^G, Z^G⟩ generated by the set Y^G ∪ Z^G = {y_{iθ} | θ ∈ G, i ∈ N} ∪ {z_{iθ} | θ ∈ G, i ∈ N} has the involution defined by (1) for monomials in Y^G ∪ Z^G. We assume that y*_{jθ} = y_{jθ} and z*_{jθ} = −z_{jθ} (for all θ ∈ G, j ∈ N). The G-grading on F^G is defined naturally by the rule deg_G(a_{i_1} a_{i_2} · · · a_{i_n}) = deg_G a_{i_1} · · · deg_G a_{i_n}, where deg_G y_{iθ} = deg_G z_{jθ} = θ, a_j ∈ Y^G ∪ Z^G. It is clear that the involution (1) is graded.
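The decomposition E+ = E_0̄ ⊕ E_1̄, E− = E_2̄ ⊕ E_3̄ of Example 1.3 can be checked directly: reversing a product of s anticommuting generators produces the sign (−1)^{s(s−1)/2}, which equals +1 iff s ≡ 0, 1 (mod 4). A worked instance:

```latex
% The canonical involution on a product of s generators of E:
(e_{i_1} e_{i_2} \cdots e_{i_s})^{*_E}
  = e_{i_s} \cdots e_{i_2} e_{i_1}
  = (-1)^{s(s-1)/2}\, e_{i_1} e_{i_2} \cdots e_{i_s};
% s = 2: (e_1 e_2)^{*_E} = e_2 e_1 = -\, e_1 e_2,
%   so e_1 e_2 \in E^- \cap E_{\bar{2}};
% s = 4: (e_1 e_2 e_3 e_4)^{*_E} = +\, e_1 e_2 e_3 e_4,
%   so e_1 e_2 e_3 e_4 \in E^+ \cap E_{\bar{0}}.
```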
The algebra F^G is the free associative G-graded algebra with graded involution. Its elements are called graded * -polynomials. The variables y_{iθ} ∈ Y^G, z_{jθ} ∈ Z^G are G-homogeneous. Their complete degrees are deg_G y_{iθ} = (+, θ), deg_G z_{iθ} = (−, θ), θ ∈ G. Let us also denote Y_θ = {y_{iθ} | i ∈ N} and Z_θ = {z_{iθ} | i ∈ N} for any θ ∈ G. Let f = f(x_1, . . . , x_n) ∈ F⟨Y^G, Z^G⟩ be a non-trivial graded * -polynomial (x_i ∈ Y^G ∪ Z^G). We say that a graded * -algebra A satisfies the graded * -identity (or graded identity with involution) f = 0 iff f(a_1, . . . , a_n) = 0 for all G-homogeneous elements a_i ∈ A^{δ_i}_{θ_i} of the corresponding complete degrees deg_G a_i = deg_G x_i = (δ_i, θ_i), δ_i ∈ {+, −}, θ_i ∈ G (i = 1, . . . , n). Denote by Id_gi(A) ✂ F⟨Y^G, Z^G⟩ the ideal of all graded identities with involution of a graded * -algebra A. It is clear that Id_gi(A) is a two-sided graded * -ideal of F⟨Y^G, Z^G⟩ closed under graded * -endomorphisms of F⟨Y^G, Z^G⟩. We call such ideals giT-ideals (see [35]). Conversely, any giT-ideal I of F⟨Y^G, Z^G⟩ is the ideal of graded * -identities of the graded algebra with involution F⟨Y^G, Z^G⟩/I. Given a set S ⊆ F⟨Y^G, Z^G⟩ of graded * -polynomials, denote by giT[S] the giT-ideal generated by S. Similarly to the case of non-graded * -identities, we have that Id_gi(A_1 × · · · × A_ρ) = ∩_{i=1}^{ρ} Id_gi(A_i) for the direct product A_1 × · · · × A_ρ of graded * -algebras. Given a giT-ideal Γ, consider the family V^G_Γ of all associative G-graded * -algebras that satisfy f = 0 for any f ∈ Γ. We call V^G_Γ the graded * -variety defined by Γ. If Γ = Id_gi(A) then we say that the graded * -algebra A generates the graded * -variety V^G_Γ = V^G(A). In particular, V^G_Γ = V^G(F⟨Y^G, Z^G⟩/Γ). Moreover, the algebra F_Γ = F⟨Y^G, Z^G⟩/Γ is the relatively free algebra of the graded * -variety V^G_Γ. It is clear that B ∈ V^G(A) for a graded * -algebra B whenever Id_gi(A) ⊆ Id_gi(B).
Any graded * -variety is closed under taking graded * -subalgebras, graded * -homomorphic images, and direct products. Let Y^G_ν = {y_{iθ} | θ ∈ G, 1 ≤ i ≤ ν}, Z^G_ν = {z_{iθ} | θ ∈ G, 1 ≤ i ≤ ν} be two finite sets, ν ∈ N. We also consider the free G-graded algebra with involution F⟨Y^G_ν, Z^G_ν⟩ of rank ν generated by Y^G_ν ∪ Z^G_ν, and the relatively free algebra F_{ν,Γ} = F⟨Y^G_ν, Z^G_ν⟩/(Γ ∩ F⟨Y^G_ν, Z^G_ν⟩) of rank ν for the graded * -variety V^G_Γ. Given a giT-ideal ( * T-ideal) Γ and graded (non-graded) * -polynomials f, g, we write f ≡ g (mod Γ) if f − g ∈ Γ. If we have a graded * -algebra A then we assume that Id*(A) ⊆ Id_gi(A). Namely, for a non-graded * -polynomial f(y_1, . . . , y_n, z_1, . . . , z_m) ∈ F⟨Y, Z⟩ we assume f ∈ Id_gi(A) whenever f(Σ_{θ∈G} y_{1θ}, . . . , Σ_{θ∈G} y_{nθ}, Σ_{θ∈G} z_{1θ}, . . . , Σ_{θ∈G} z_{mθ}) ∈ Id_gi(A). In particular, for a multilinear non-graded * -polynomial f(y_1, . . . , y_n, z_1, . . . , z_m) ∈ F⟨Y, Z⟩ we have f ∈ Id_gi(A) if and only if f(y_{1θ_1}, . . . , y_{nθ_n}, z_{1θ_{n+1}}, . . . , z_{mθ_{n+m}}) ∈ Id_gi(A) for all (θ_1, . . . , θ_{n+m}) ∈ G^{n+m}. Thus if A ∼_gi B for G-graded * -algebras A, B, then we also have A ∼_* B. Note that the set X^G = {x_{iθ} = y_{iθ} + z_{iθ} | i ∈ N, θ ∈ G} generates in F^G a G-graded subalgebra F⟨X^G⟩ which is isomorphic to the free associative G-graded algebra ([33]). Thus the ideal Id_G(A) of graded identities of A also lies in Id_gi(A). Recall that an algebra A is called a PI-algebra if it satisfies a non-trivial ordinary polynomial identity (non-graded and without involution) (see [17], [18], [24], [27], [28]). It is clear that for a G-graded PI-algebra A with involution the T-ideal of ordinary polynomial identities Id(A) also lies in Id_gi(A). Moreover, we have that Id(A) ⊆ Id*(A) ⊆ Id_gi(A). Here for a polynomial f(x_1, . . . , x_n) ∈ Id(A) we assume that f ∈ Id*(A) iff f(y_1 + z_1, . . . , y_n + z_n) ∈ Id*(A). This is the natural relation induced by the isomorphism (2) of F⟨X*⟩ and F⟨Y, Z⟩ and the inclusion F⟨X⟩ ⊆ F⟨X*⟩.
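In the multilinear case the criterion above amounts, for G = Z_4 and a polynomial in one symmetric and one skew-symmetric variable, to checking all graded substitutions:

```latex
% Multilinear case of Id^*(A) \subseteq Id_{gi}(A), for G = \mathbb{Z}_4:
f(y_1, z_1) \in \mathrm{Id}_{gi}(A)
  \iff
f(y_{1\theta_1},\, z_{1\theta_2}) \in \mathrm{Id}_{gi}(A)
  \quad \text{for all } (\theta_1, \theta_2) \in \mathbb{Z}_4^{\,2},
% i.e. 4^2 = 16 substitutions, coming from the decompositions
y_1 = \sum_{\theta \in \mathbb{Z}_4} y_{1\theta}, \qquad
z_1 = \sum_{\theta \in \mathbb{Z}_4} z_{1\theta}
% and the multilinearity of f.
```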
By Amitsur's theorem [2], [3] (see also [24]), any * -algebra satisfying a non-trivial * -identity is a PI-algebra. Thus any non-trivial * T-ideal contains a non-trivial T-ideal. A G-graded * -algebra cannot be a PI-algebra in general (see for instance the comments after Theorem 1 [33]). In the general case a graded * -algebra A is a PI-algebra iff the neutral component A_e satisfies a non-trivial * -identity, where e is the unit element of G (this follows from [2], [3], and [4], [15]). This is equivalent to the condition that A satisfies a non-trivial non-graded * -identity. The notion of the degree of a graded or non-graded * -polynomial is defined in the usual way. Using the multilinearization process as in the case of ordinary identities ([17], [18], [24]), we can show that any giT-ideal or * T-ideal over a field of characteristic zero is generated by multilinear polynomials (see also Lemma 1.1). Thus in our case it is enough to consider only multilinear identities. The space of multilinear * -polynomials of degree n has the form P_n = Span_F {x_{σ(1)} · · · x_{σ(n)} | σ ∈ S_n, x_i ∈ Y ∪ Z}. Thus P_n is the direct sum of subspaces of multihomogeneous and multilinear polynomials depending on a fixed set of symmetric and skew-symmetric variables. When we consider * -identities we can assume that a multilinear * -identity depends on the variables {y_1, . . . , y_k} and {z_1, . . . , z_{n−k}}, k = 0, . . . , n. Denote by P_{k,n−k} the subspace of all multilinear * -polynomials f(y_1, . . . , y_k, z_1, . . . , z_{n−k}) for a fixed number k. Given a * T-ideal Γ ✂ F⟨Y, Z⟩, the vector spaces Γ_{k,n−k} = Γ ∩ P_{k,n−k} and P_{k,n−k}(Γ) = P_{k,n−k}/Γ_{k,n−k} ⊆ F⟨Y, Z⟩/Γ have the natural structure of (F S_k ⊗ F S_{n−k})-modules. Here S_k and S_{n−k} act on the symmetric and skew-symmetric variables independently, renaming the variables (see, e.g., [20]). Further we consider (Z/4Z)-graded algebras with involution and (Z/4Z)-graded * -identities. We assume that G = Z/4Z, and use for it the additive notation.
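For illustration, the smallest mixed case of the spaces P_{k,n−k} just defined is:

```latex
% Multilinear *-polynomials in one symmetric and one skew-symmetric variable:
P_{1,1} = \operatorname{Span}_F \{\, y_1 z_1,\; z_1 y_1 \,\},
  \qquad \dim_F P_{1,1} = 2! = 2;
% in general, for the fixed variables y_1,\dots,y_k, z_1,\dots,z_{n-k},
% a basis of P_{k,n-k} consists of all n! orderings of the n variables, so
\dim_F P_{k,\,n-k} = n! .
```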
We also denote for brevity the group Z/4Z by Z 4 , and the free Z 4 -graded * -algebra F Y Z 4 , Z Z 4 by F (4) . Let us define the function η : 2 Grassmann Z 4 -envelope of a graded * -algebra. Assume that G = Z 4 . Consider a Z 4 -graded algebra A = θ∈Z 4 A θ . Definition 2.1 The algebra E 4 (A) = θ∈Z 4 A θ ⊗ F E θ is called Grassmann Z 4 - envelope of A. Where E = θ∈Z 4 E θ is the natural Z 4 -grading of E defined in Example 1.2. The algebra E 4 (A) is also Z 4 -graded with the grading ( E 4 (A)) θ = A θ ⊗ F E θ , θ ∈ Z 4 . If A has a graded involution * A then the F -linear involution * on E 4 (A) is defined by the rules (a ⊗ g) * = a * A ⊗ g * E , where * E is the canonic involution on E (see Example 1.3). Hence (a θ ⊗ g θ ) * = (−1) η(θ) a * A θ ⊗ g θ for any a θ ∈ A θ , g θ ∈ E θ , θ ∈ Z 4 . It is clear that E 4 (A) δ = θ∈G (E 4 (A)) δ θ , δ ∈ {+, −}, where (E 4 (A)) + θ = Span F {a θ ⊗ g θ |a θ ∈ A + θ , g θ ∈ E θ } and (E 4 (A)) − θ = Span F {a θ ⊗ g θ |a θ ∈ A − θ , g θ ∈ E θ } if θ ∈ {0,1};(4)(E 4 (A)) + ξ = Span F {a ξ ⊗ g ξ |a ξ ∈ A − ξ , g ξ ∈ E ξ } and (E 4 (A)) − ξ = Span F {a ξ ⊗ g ξ |a ξ ∈ A + ξ , g ξ ∈ E ξ } if ξ ∈ {2,3}. Let us define some transformations of multilinear Z 4 -graded * -polynomials. Denote by X od = Y1 ∪ Z1 ∪ Y3 ∪ Z3 the subset of all variables, odd in respect to the Z 4 -grading, and by X ev = Y0 ∪ Z0 ∪ Y2 ∪ Z2 the subset of all Z 4 -even variables. Fix on X od the linear order y 11 < y 21 < · · · < z 11 < z 21 < · · · < y 13 < y 23 < · · · < z 13 < z 23 < . . . Assume that f ∈ F (4) is a multilinear graded * -polynomial. Then f is uniquely represented in the form f = u σ∈S k α σ,u u 1 x σ(1) u 2 x σ(2) · · · x σ(k) u k+1 ,(5) where x j ∈ X od , and u = u 1 u 2 · · · u k+1 is a multilinear monomial over X ev , possibly empty, k ≥ 0. Then we assume that s(f ) = u σ∈S k (−1) σ α σ,u u 1 x σ(1) u 2 x σ(2) · · · x σ(k) u k+1 .(6) Consider a collection of variables (y θ , z θ ) = (y 1θ , . . . , y n θ θ , z 1θ , . . . 
, z m θ θ ) of Z 4 -degree θ. Then for a multilinear graded * polynomial f = f (y0, z0, y1, z1, y2, z2, y3, z3) t(f ) = f y i2 :=z i2 ,y i3 :=z i3 , z i2 :=y i2 ,z i3 :=y i3 = f (y0, z0, y1, z1, z2, y2, z3, y3)(7) is the respective exchange of the variables y ∈ Y θ by z ∈ Z θ , and z by y of Z 4 -degrees θ =2 and3. Observe that t(y 1θ , . . . , y n θ θ , z 1θ , . . . , z m θ θ ) = (z 1θ , . . . , z n θ θ , y 1θ , . . . , y m θ θ ). It is clear that s, t are linear operators on the space of multilinear * Z 4 -polynomials. These operators satisfy the relations s 2 = t 2 = id, st = ±ts, where id is the identical transformation, and the sign in the second formula is defined by the permutation of variables y3, z3 induced by applying of t. Then we denote f = st(f )(8) for a multilinear * Z 4 -polynomial f ∈ F (4) . It is clear that f = ±f for any multilinear f ∈ F (4) . Moreover, we have the next Lemma. u w1 x σ w (1) u w2 x σ w (2) · · · x σ w (k) u wk+1 . The last formula gives the representation (5) of the monomial w; here σ w ∈ S k , x j ∈ X od , and u wj are monomials over X ev , possibly empty. Since f is multilinear then it is enough to consider its evaluations by elements a⊗g, where a ∈ A θ , g ∈ E θ . Taking into account (4) we need to consider evaluations of the form y i 10 = b i 10 ⊗ h i 10 , z i 20 = c i 20 ⊗h i 20 , y i 31 = b i 31 ⊗ g i 31 , z i 41 = c i 41 ⊗g i 41 , y i 62 = c i 62 ⊗ h i 62 , z i 52 = b i 52 ⊗h i 52 , y i 83 = c i 83 ⊗ g i 83 , z i 73 = b i 73 ⊗g i 73 ,(9) where b jθ ∈ A + θ , c jθ ∈ A − θ , and elements h jθ , g jθ ,h jθ ,g jθ ∈ E θ involve disjoint sets of generators of E. Assume that (a 1 ⊗ g 1 , . . . , a n ⊗ g n ) is an evaluation of f of the type (9) (for corresponding elements a i ∈ A, g i ∈ E). Observe that the elements h iθ ,h jξ ∈ E0 ∪ E2 commute with any element of E, and the elements g iθ , g jξ ∈ E1 ∪ E3 anti-commute among themselves. Then we obtain w(a 1 ⊗ g 1 , . . . , a n ⊗ g n ) = w( (ã 1 , . . . ,ã n ) ⊗ g 1 · · · g n . 
). Therefore, f (a 1 ⊗ g 1 , . . . , a n ⊗ g n ) = w (−1) σ w α w w(a 1 ⊗ g 1 , . . . , a n ⊗ g n ) = w (−1) σ w (−1) σ w α w w(ã 1 , . . . ,ã n ) ⊗ g 1 · · · g n = f (ã 1 , . . . ,ã n ) ⊗ g 1 · · · g n . (b i 10 ⊗ h i 10 ), (c i 20 ⊗h i 20 ), (b i 31 ⊗ g i 31 ), (c i 41 ⊗g i 41 ), (b i 52 ⊗h i 52 ), (c i 62 ⊗ h i 62 ), (b i 73 ⊗g i 73 ), (c i 83 ⊗ g i 83 )) = w((b i 10 ), (c i 20 ), (b i 31 ), (c i 41 ), (b i 52 ), (c i 62 ), (b i 73 ), (c i 83 )) ⊗ w(g 1 , . . . , g n ) = w(ã 1 , . . . ,ã n ) ⊗ u w1 (h,h) g ′ σ w (1) u w2 (h,h) g ′ σ w (2) · · · g ′ σ w (k) u wk+1 (h,h) = (−1) σ w w Thus f (a 1 ⊗g 1 , . . . , a n ⊗g n ) = 0 for any evaluation (9) if and only if f (ã 1 , . . . ,ã n ) = 0 for all appropriateã i ∈ A δ i θ i , δ i ∈ {+, −}, θ i ∈ Z 4 . ✷ Definition 2. 3 Given a giT-ideal Γ ⊆ F (4) denote by Γ the giT-ideal generated by the set S = { f |f ∈ Γ ∩ (∪ n≥1 P n ) } of st-images of all multilinear polynomials from Γ. Lemma 2.2 along with properties of the operators s, t immediately implies the following. Lemma 2.4 Given a giT-ideal Γ ⊆ F (4) we have that Γ = Id gi (A) for a Z 4 -graded * -algebra A iff Γ = Id gi (E 4 (A)). Besides that, Γ = Γ. Hence, we have that A ∼ gi B for Z 4 -graded * -algebras A, B if and only if E 4 (A) ∼ gi E 4 (B). And E 4 (E 4 (A)) ∼ gi A for any Z 4 -graded algebra A with involution. The last property is also a simple consequence of the facts that E 4 (E 4 (A)) = θ∈Z 4 A θ ⊗ F E θ ⊗ F E θ , and the algebra E 4 (E) = θ∈Z 4 E θ ⊗ F E θ is commutative and non-nilpotent. Remark 2.5 Since E 4 (A) is a subalgebra of A ⊗ F E then by Regev's theorem [31] we have that E 4 (A) is a PI-algebra if and only if A is a PI-algebra. Particularly, consider a * -variety V. Assume that V is defined by a * T-ideal Γ ⊆ F Y, Z , and Γ = Id * (A) for an algebra with involution A. Denote by V Z 4 the class of all associative Z 4graded F -algebras B with involution such that E 4 (B) ∈ V. 
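As a sketch of how the sign (−1)^{η(θ)} in the involution of Definition 2.1 arises, one can combine the involution of A with the canonical involution of E on a homogeneous tensor; e.g. for θ = 2̄ (the element a below is an arbitrary homogeneous element, taken for illustration):

```latex
% Induced involution on E_4(A) for a homogeneous element of degree \bar{2}:
(a \otimes e_1 e_2)^{*} = a^{*_A} \otimes (e_1 e_2)^{*_E}
  = a^{*_A} \otimes (-\, e_1 e_2) = -\, a^{*_A} \otimes e_1 e_2,
% in agreement with
% (a_\theta \otimes g_\theta)^* = (-1)^{\eta(\theta)} a_\theta^{*_A} \otimes g_\theta
% and \eta(\bar{2}) = 1. For \theta = \bar{1} no sign appears:
(a \otimes e_1)^{*} = a^{*_A} \otimes e_1, \qquad \eta(\bar{1}) = 0 .
```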
It is clear from Lemma 2.4 that V^{Z_4} is a Z_4-graded * -variety defined by the giT-ideal Γ_1 of Z_4-graded * -identities of the Z_4-graded algebra with involution A ⊗_F E = ⊕_{θ∈Z_4} A ⊗_F E_θ. The giT-ideal Γ_1 = Γ̃_2, where Γ_2 is the giT-ideal generated by Γ, i.e.

Γ_2 = Γ^{Z_4} = giT[S_Γ] for S_Γ = { f |_{y_i := Σ_{θ∈Z_4} y_{iθ}, z_i := Σ_{θ∈Z_4} z_{iθ}, ∀i} : f ∈ Γ }.    (10)

3 Alternating and symmetrizing polynomials.

Let f = f(s_1, . . . , s_k, x_1, . . . , x_n) ∈ F⟨Y, Z⟩ be a multilinear polynomial. Assume that S = {s_1, . . . , s_k} ⊆ Y or S ⊆ Z. We say that f is alternating in S if f(s_{σ(1)}, . . . , s_{σ(k)}, x_1, . . . , x_n) = (−1)^σ f(s_1, . . . , s_k, x_1, . . . , x_n) holds for any permutation σ ∈ S_k. For any multilinear polynomial with involution g(s_1, . . . , s_k, x_1, . . . , x_n) we construct a multilinear polynomial f alternating in S = {s_1, . . . , s_k} by setting f(s_1, . . . , s_k, x_1, . . . , x_n) = A_S(g) = Σ_{σ∈S_k} (−1)^σ g(s_{σ(1)}, . . . , s_{σ(k)}, x_1, . . . , x_n). The corresponding mapping A_S is a linear transformation of multilinear * -polynomials. We call it the alternator. Any * -polynomial f alternating in S can be decomposed as f = Σ_{i=1}^{m} α_i A_S(u_i), where the u_i's are monomials, α_i ∈ F. We say that a multilinear * -polynomial f(s_1, . . . , s_k, x_1, . . . , x_n) is symmetrizing in the set S = {s_1, . . . , s_k} (S ⊆ Y or S ⊆ Z) if f(s_{σ(1)}, . . . , s_{σ(k)}, x_1, . . . , x_n) = f(s_1, . . . , s_k, x_1, . . . , x_n) for any σ ∈ S_k. For any multilinear * -polynomial g(s_1, . . . , s_k, x_1, . . . , x_n) the multilinear * -polynomial f(s_1, . . . , s_k, x_1, . . . , x_n) = E_S(g) = Σ_{σ∈S_k} g(s_{σ(1)}, . . . , s_{σ(k)}, x_1, . . . , x_n) is symmetrizing in S. E_S is also a linear transformation of multilinear * -polynomials. It is called the symmetrizator.
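For k = 2 the alternator and symmetrizator act, for instance, as follows on the monomial g = s_1 x s_2 (a monomial chosen only for illustration):

```latex
% Alternator and symmetrizator for S = {s_1, s_2}:
A_S(s_1 x\, s_2) = s_1 x\, s_2 - s_2 x\, s_1, \qquad
E_S(s_1 x\, s_2) = s_1 x\, s_2 + s_2 x\, s_1;
% the first is alternating, the second symmetrizing, in {s_1, s_2}.
% In characteristic zero E_S(A_S(g)) = 0 for any g when |S| \ge 2,
% since \bigl(\sum_{\tau} \tau\bigr)\bigl(\sum_{\sigma} (-1)^{\sigma}\sigma\bigr)
%   = \sum_{\rho} \rho \cdot \sum_{\sigma} (-1)^{\sigma} = 0 .
```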
Any multilinear * -polynomial f symmetrizing in S can be written as f = m i=1 α i E S (u i ), where the u i 's are monomials of f, and α i ∈ F. Properties of alternating and symmetrizing polynomials with involution are similar to that of ordinary polynomials (see, e.g., [17], [24], [28]). Particularly, a multilinear * -polynomial f (s 1 , . . . , s k , x 1 , . . . , x n ) is symmetrizing in variables S = {s 1 , . . . , s k } iff f (s 1 , . . . , s k , x 1 , . . . , x n ) is the full linearization in the variable s of the non-zero polynomialf = 1 k! f (s, . . . , s, x 1 , . . . , x n ). Moreover, f = (v) α (v) v 0 sv 1 s · · · sv k , whenever f = (v) σ∈S k α (v) v 0 s σ(1) v 1 s σ(1) · · · s σ(k) v k , where monomials v 0 , v 1 , . . . , v k (possibly empty) do not depend on S, α (v) ∈ F. Similarly we can consider graded * -polynomials alternating or symmetrizing in a set of variables S ⊆ Y θ or S ⊆ Z θ for any fixed θ ∈ G (see [35]). Proof. It is clear that f = s(f ). Also the polynomial f can be decomposed as f = (v), τ ∈Sr σ i ∈Sn i , 1≤i≤k α (v),τ (−1) σ 1 · · · (−1) σk (σ 1 · · · σk τ ) v 0 x 1 v 1 x 2 · · · x r v k , where x j ∈ Y1 ∪Z1 , v j are monomials (possibly empty) over Y0 ∪Z0, the permutation τ acts on the variables x j , and the permutations σ i acts on disjoint subsets of the set {x 1 , . . . , x r } corresponding to the sets of variables {t i1 , . . . , t in i }, α (v),τ ∈ F. Then f = (v), τ ∈Sr σ i ∈Sn i , 1≤i≤k α (v),τ (−1) σ 1 ···σk (−1) σ 1 ···σkτ (σ 1 · · · σk τ ) v 0 x 1 v 1 x 2 · · · x r v k = (v), τ ∈Sr (−1) τ α (v),τ σ i ∈Sn i , 1≤i≤k (σ 1 · · · σk) v 0 x τ (1) v 1 x τ (2) · · · x τ (r) v k . Thus f is symmetrizing in any {t i1 , . . . , t in i }. ✷ Given a * T-ideal Γ ✂ F Y, Z the vector space Γ n,m = Γ ∩ P n,m of multilinear *polynomials f (y 1 , . . . , y n , z 1 , . . . , z m ) ∈ Γ has the structure of (F S n ⊗ F S m )-module defined by (σ ⊗ τ )f (y 1 , . . . , y n , z 1 , . . . , z m ) = f (y σ(1) , . . . , y σ(n) , z τ (1) , . . . 
, z τ (m) ) for any (σ, τ ) ∈ S n × S m . The character of the quotient module P n,m (Γ) = P n,m /Γ n,m ⊆ F Y, Z /Γ can be decomposed as χ n,m (Γ) = λ⊢n µ⊢m m λ,µ (χ λ ⊗ χ µ ), where χ λ ⊗ χ µ is the irreducible S n ×S m -character associated to the pair (λ, µ) of partitions λ ⊢ n, µ ⊢ m, m λ,µ ∈ Z is a multiplicity (see for instance [20], [21], [24], [26]). An irreducible submodule of P n,m (Γ) corresponding to the pair (λ, µ) is generated by a non-zero polynomial f λ,µ = (e T λ ⊗ e Tµ )f, where f ∈ P n,m , and e T λ ∈ F S n , e Tµ ∈ F S m are the essential idempotents corresponding to the Young tableaux T λ , and T µ respectively (see Definition 2.2.12 [24]). We say that a multilinear * -polynomial f corresponds to the pair of partitions (λ, µ) if (F S n ⊗ F S m ) f = (F S n ⊗ F S m ) f λ,µ . Particularly, the next observation holds. (j 1 = 1, . . . , ν), and at most ν sets of alternated variables {s j 2 1 , . . . , s j 2mj 2 } (j 2 = 1, . . . , ν). Thus, f = f ( y, t, z, s), where y = (y 11 , . . . , y 1n 1 , . . . , y ν1 , . . . , y νnν ) ⊆ Y, t = (t 11 , . . . , t 1n 1 , . . . , t ν1 , . . . , t νnν ) ⊆ Y, z = (z 11 , . . . , z 1m 1 , . . . , z ν1 , . . . , z νmν ) ⊆ Z, s = (s 11 , . . . , s 1m 1 , . . . , z ν1 , . . . , z νmν ) ⊆ Z (12) are disjoint collections of variables, and f is symmetrizing in any {y i1 , . . . , y in i }, and {z i1 , . . . , z im i }, and alternating in any {t i1 , . . . , t in i }, and {s i1 , . . . , s im i } (i = 1, . . . , ν). Since f ∈ Id * (E 4 (R)) then f is equal to zero in E 4 (R) for y ij 1 =ȳ i0 ⊗ h n·i+j 10 , t ij 2 =ȳ i1 ⊗ g n·i+j 21 , z ij 3 =z i0 ⊗h m·i+j 30 , s ij 4 =z i1 ⊗g m·i+j 41 ,(13)i = 1, . . . 
, ν, 1 ≤ j 1 ≤ n i , 1 ≤ j 2 ≤n i , 1 ≤ j 3 ≤ m i , 1 ≤ j 4 ≤m i , whereȳ iθ = y iθ + I,z iθ = z iθ + I, y iθ ∈ Y θ , and z iθ ∈ Z θ are graded variables from Y Z 4 ν ∪ Z Z 4 ν of Z 4 -degree θ ∈ {0,1}, I = Γ 2 ∩ F Y Z 4 ν , Z Z 4 ν , h l0 ,h l0 ∈ E0, g l1 ,g l1 ∈ E1 are elements of the Grassmann algebra depending on disjoint sets of generators. Let us denote a (k) = a, . . . , a k for any element a. Therefore, we obtain in the algebra E 4 (R) the equalities f | (13) =f 3 ⊗ g = 0, where (14) f 3 = f 2 (ȳ Here f 2 = f 1 . Where the graded multilinear polynomial f 1 = f ( y0, y1, z0, z1), with y0 = (y (1,1)0 , . . . , y (1,n 1 )0 , . . . , y (ν,1)0 , . . . , y (ν,nν )0 ) ⊆ Y0, y1 = (y (1,1)1 , . . . , y (1,n 1 )1 , . . . , y (ν,1)1 , . . . , y (ν,nν )1 ) ⊆ Y1, z0 = (z (1,1)0 , . . . , z (1,m 1 )0 , . . . , z (ν,1)0 , . . . , z (ν,mν )0 ) ⊆ Z0, z1 = (z (1,1)1 , . . . , z (1,m 1 )1 , . . . , z (ν,1)1 , . . . , z (ν,mν )1 ) ⊆ Z1, is the result of the evaluation of the variables y ij , z ij of the polynomial f by the corresponding graded variables of the degree0, and of the variables t ij , s ij by the graded variables y (i,j)1 , z (i,j)1 of the degree1 respectively. The element g in (14) is the product of all elements h l0 ,h l0 , g l1 ,g l1 of the Grassmann algebra from (13). Observe that by Lemma 3.1 the polynomial f 2 = f 1 is symmetrizing in any set of variables y (i,1)θ , . . . , y (i,n ′ i )θ , and z (i,1)θ , . . . , z (i,m ′ i )θ , for all i = 1, . . . , ν, θ ∈ {0,1} (if θ =0 then n ′ i = n i , m ′ i = m i , otherwise n ′ i =n i , m ′ i =m i ). The equality (14) means that the graded * -polynomial f 3 = f 2 (yν1 ) belongs to Γ 2 ∩ F Y Z 4 ν , Z Z 4 ν . Thus, f 3 ∈ Γ 2 . The polynomial f 2 = f 1 ( y0, y1, z0, z1) is the full linearization of 1 α · f 3 , where α ∈ F is some nonzero coefficient which appears as the result of identifying of symmetrized variables. The variables of f 2 as in (15). Hence f 2 ∈ Γ 2 . 
Take the relatively free * -algebra L = F Y, Z /Γ of the * -variety V Γ , and consider the Z 4 -graded * -algebra L ⊗ E = θ∈Z 4 L ⊗ E θ . By Remark 2.5 L ⊗ E satisfies the graded * -identity f 2 ( y0, y1, z0, z1) = 0. Particularly, the evaluation y (i,j 1 )0 =ȳ ij 1 ⊗ h n·i+j 10 , y (i,j 2 )1 =t ij 2 ⊗ g n·i+j 21 , z (i,j 3 )0 =z ij 3 ⊗h m·i+j 30 , z (i,j 4 )1 =s ij 4 ⊗g m·i+j 41 , i = 1, . . . , ν, 1 ≤ j 1 ≤ n i , 1 ≤ j 2 ≤n i , 1 ≤ j 3 ≤ m i , 1 ≤ j 4 ≤m i gives the result f 2 | (16) = f 1 ( ȳ, t , z, s) ⊗ g ′ = f ( ȳ, t , z, s) ⊗ g ′ = 0. Here ȳ, t , z, s is the sequence formed as in (12) by the elementsȳ ij 1 = y ij 1 + Γ,t ij 2 = t ij 2 + Γ, z ij 3 = z ij 3 + Γ,s ij 4 = s ij 4 + Γ (i = 1, . . . , ν, 1 ≤ j 1 ≤ n i , 1 ≤ j 2 ≤n i , 1 ≤ j 3 ≤ m i , 1 ≤ j 4 ≤m i ), where the variables y ij 1 , t ij 2 , z ij 3 , s ij 4 are the same as in (12). The element g ′ is the product of all elements of the Grassmann algebra from (16) depending on disjoint sets of generators. Therefore, f ( ȳ, t , z, s) = 0 in L, and f ∈ Γ. Hence Id * (E 4 (R)) = Γ. ✷ We can reinforce the result similarly to the classical case of Kemer's theorems for PI-algebras [28] using Theorem 6.2 [35]. Theorem 4.2 Let F be a field of characteristic zero. Any proper * T-ideal of the free associative F -algebra with involution is the ideal of identities with involution of the Grassmann Z 4 -envelope of some associative Z 4 -graded algebra with graded involution, finite dimensional over F . Proof. If Γ is a proper * T-ideal of F Y, Z then by Theorem 4.1 we have Γ = Id * (E 4 (B)) for some associative finitely generated Z 4 -graded PI-algebra B with graded involution. Theorem 6.2 [35] states that there exists a finite dimensional over F Z 4 -graded algebra C with graded involution which has the same graded * -identities as B. Hence E 4 (B) ∼ gi E 4 (C). Particularly, Id * (E 4 (B)) = Id * (E 4 (C)) = Γ. ✷ For a finitely generated associative PI-algebra with involution we also have the next theorem. 
Theorem 4.3 (see [34]) Let F be a field of characteristic zero. Then a non-zero * T-ideal of * -identities of a finitely generated associative F-algebra with involution coincides with the * T-ideal of * -identities of some finite dimensional associative F-algebra with involution.
Observe that Theorem 4.3 can be considered as a special case of Theorem 4.2, assuming that the Z_4-grading is trivial.

5 Specht problem.

Since f_k is multilinear, it follows that f_k ∈ Id*(E_4(C)) = Γ. This contradicts the construction of Γ. Therefore, Γ is finitely generated as a * T-ideal. ✷
Observe that the usual Grassmann envelope of superalgebras with superinvolution can also be considered in the context of the Specht problem and the classification theorems for identities with involution. We expect that results similar to Theorem 6.2 [35] and Theorems 4.1, 4.2 can also be obtained in this case.
Conjecture 5.1 Let F be a field of characteristic zero, and A = A_0̄ ⊕ A_1̄ a finitely generated associative PI-superalgebra over F with superinvolution. Then there exists a finite dimensional over F associative superalgebra C = C_0̄ ⊕ C_1̄ with superinvolution which satisfies the same identities with superinvolution as A.
Conjecture 5.2 Let F be a field of characteristic zero. Then any associative F-algebra with involution satisfies the same * -identities as the Grassmann envelope E(C) = C_0̄ ⊗ E_0̄ ⊕ C_1̄ ⊗ E_1̄ of some associative superalgebra C = C_0̄ ⊕ C_1̄ with superinvolution, finite dimensional over F.
The confirmation of these conjectures could yield another solution of the Specht problem for * -identities.
The author is deeply thankful to Ivan Shestakov and Antonio Giambruno for useful discussions and inspiration, and grateful to FAPESP for the financial support of this work.
Observe that, omitting the indices by the elements of the group G in the structures of the free graded * -algebra, graded * -identities and graded * -varieties, we obtain the notions of non-graded identities with involution and non-graded * -varieties. Notice that in both cases (graded and non-graded) the variables of the set Y are reserved for symmetric elements, and the variables of Z for skew-symmetric ones. Two G-graded algebras with involution A and B are called gi-equivalent, A ∼_gi B, if Id_gi(A) = Id_gi(B). Non-graded algebras with involution A and B are * PI-equivalent, A ∼_* B, if Id*(A) = Id*(B). The function η : Z_4 → {0, 1} is defined by the rules η(0̄) = η(1̄) = 0, η(2̄) = η(3̄) = 1. The next elementary properties of η can be checked directly: η(x) + η(y) = η(x + y) + 1 (mod 2) if x, y ∈ {1̄, 3̄}; η(x) + η(y) = η(x + y) (mod 2) if x or y is even.
Lemma 2.2 A Z_4-graded algebra A with involution satisfies a multilinear Z_4-graded * -identity f = 0 if and only if E_4(A) satisfies f̃ = 0.
Proof. Assume that f is a multilinear Z_4-graded * -polynomial. Then f = Σ_w α_w w((y_{i_1 0̄}), (z_{i_2 0̄}), (y_{i_3 1̄}), (z_{i_4 1̄}), (y_{i_5 2̄}), (z_{i_6 2̄}), (y_{i_7 3̄}), (z_{i_8 3̄})), α_w ∈ F, where w is a multilinear monomial, y_θ = (y_{iθ}), z_θ = (z_{iθ}), θ ∈ Z_4. Therefore, f̃ = Σ_w (−1)^{σ_w} α_w w̃, where w̃ = t(w) = w((y_{i_1 0̄}), (z_{i_2 0̄}), (y_{i_3 1̄}), (z_{i_4 1̄}), (z_{i_5 2̄}), (y_{i_6 2̄}), (z_{i_7 3̄}), (y_{i_8 3̄})) = Here (ã_1, . . . , ã_n) = ((b_{i_1 0̄}), (c_{i_2 0̄}), (b_{i_3 1̄}), (c_{i_4 1̄}), (b_{i_5 2̄}), (c_{i_6 2̄}), (b_{i_7 3̄}), (c_{i_8 3̄})) are arbitrary G-homogeneous elements of A, u_{wj}(h, h̄) are the monomials u_{wj} evaluated by elements h_{iθ}, h̄_{jξ} ∈ E_0̄ ∪ E_2̄, and the k-tuple (g′_1, . . . , g′_k) = ((g_{i_3 1̄}), (ḡ_{i_4 1̄}), (ḡ_{i_7 3̄}), (g_{i_8 3̄})
Lemma 3.1 Consider disjoint collections of variables ȳ = {y_1, . . . , y_n} ⊆ Y_0̄ ∪ Y_1̄, z̄ = {z_1, . . . , z_m} ⊆ Z_0̄ ∪ Z_1̄, and t̄ = {t_11, . . .
, t 1n 1 , . . . , tk 1 , . . . , tknk }, where {t i1 , . . . , t in i } ⊆ Y1, or {t i1 , .. . , t in i } ⊆ Z1 for any i = 1, . . . ,k. Let f (ȳ,z,t) ∈ F (4) be a multilinear graded * -polynomial, which is alternating in any collection {t i1 , . . . , t in i }, i = 1, . . . ,k. Then the polynomial f depends on the same variables as f, and f is symmetrizing in {t i1 , . . . , t in i } for any i = 1, . . . ,k. (n 1 1) 10 , . . . , y (nν ) ν0 , y (n 1 ) 11 , . . . , y (nν ) ν1 , z (m 1 ) 10 , . . . , z (mν ) ν0 , z (m 1 ) 11 , . . . , z (mν ) Remark 3.2 Given a multilinear * -polynomial f ∈ P n,m there exist a finite set of pairs (λ j , µ j ) (not necessary different) of partitions λ j ⊢ n, µ j ⊢ m (j = 1, . . . , k) and multilinear * -polynomials g λ j ,µ j ∈ P n,m such that g λ j ,µ j corresponds to (λ j , µ j ), and the * T-ideal generated by f can be decomposed as * TMoreover, by Theorem 5.9[22]χ n,m (Γ) =where H Γ = (H(k 1 , l 1 ), H(k 2 , l 2 )) is a double hook corresponding to Γ. The hook H(k, l) is the set of all partitions λ = (λ 1 , . . . , λ s ) satisfying the condition λ k+1 ≤ l. Applying arguments of Lemma 2.5.6[24]we always can assume that for any (λ, µ) ∈ H Γ the set of variables of a polynomial f λ,µ can be decomposed into disjoint unions, and alternating in any. Notice that m λ,µ = 0 in (11) means that f λ,µ = (e T λ ⊗ e Tµ )f ∈ Γ for any Young tableaux T λ , T µ and for any * -polynomial f ∈ P n,m , (see, e.g., Theorem 2.4.5[24]).Classification theorems.Theorem 4.1 Let F be a field of characteristic zero. Any proper * T-ideal of the free associative F -algebra with involution is the ideal of identities with involution of the Grassmann Z 4 -envelope of some finitely generated associative Z 4 -graded PI-algebra with graded involution.Proof. Let Γ be a proper * T-ideal of F Y, Z , and V Γ the * -variety defined by Γ. Consider the Z 4 -graded * -variety V Z 4 Γ of all associative Z 4 -graded * -algebras B such that E 4 (B) ∈ V Γ . 
Assume that $H_\Gamma = (H(k_1, l_1), H(k_2, l_2))$ is the double hook corresponding to $\Gamma$ [22]. Take $\nu = \max\{k_1, l_1, k_2, l_2\}$, and the relatively free algebra R of rank $\nu$ of the $\mathbb{Z}_4$-graded *-variety $V_\Gamma^{\mathbb{Z}_4}$. Then, as in Remark 2.5, we have the structure defined by (10). By Remark 2.5 and Amitsur's theorem [2], [3], R is a PI-algebra. Let us prove that $\mathrm{Id}^{*}(E_4(R)) = \Gamma$.

It is clear that $\mathrm{Id}^{*}(E_4(R)) \supseteq \Gamma$. Take a multilinear polynomial with involution $f(y_1, \ldots, y_n, z_1, \ldots, z_m) \in \mathrm{Id}^{*}(E_4(R)) \cap P_{n,m}$. By Remark 3.2 we can assume that f corresponds to a pair of partitions $(\lambda, \mu)$, where $\lambda \vdash n$ and $\mu \vdash m$. If $(\lambda, \mu) \notin H_\Gamma$ then $f \in \Gamma$ by Theorem 5.9 [22]. Suppose that $(\lambda, \mu) \in H_\Gamma$. Then, similarly to Lemma 2.5.6 [24], we can assume that the set $\{y_1, \ldots, y_n\} \subseteq Y$ of the variables of f is divided into at most $\nu$ sets of symmetrized variables $\{y_{i_1 1}, \ldots, y_{i_1 n_{i_1}}\}$ ($i_1 = 1, \ldots, \nu$) and at most $\nu$ sets of alternated variables $\{t_{i_2 1}, \ldots, t_{i_2 n_{i_2}}\}$ ($i_2 = 1, \ldots, \nu$). Similarly, the set $\{z_1, \ldots, z_m\}$ consists of at most $\nu$ sets of symmetrized variables $\{z_{j_1 1}, \ldots, z_{j_1 m_{j_1}}\}$

Theorem. Let F be a field of characteristic zero. Any *T-ideal of the free associative F-algebra with involution $F\langle Y, Z\rangle$ is finitely generated as a *T-ideal.
Proof. It is clear that $F\langle Y, Z\rangle$ is generated as a *T-ideal by the set $\{y_1, z_1\}$, and the zero ideal is generated by the zero polynomial. Hence it is enough to prove the theorem for proper *T-ideals. Suppose that there exists a proper *T-ideal $\Gamma \subseteq F\langle Y, Z\rangle$ which cannot be finitely generated as a *T-ideal. Then there exists an infinite sequence of multilinear *-polynomials $\{f_i(x_1, \ldots, x_{n_i})\}_{i \in \mathbb{N}} \subseteq \Gamma$ such that $\deg f_i < \deg f_j$ for any $i < j$, and $f_i \notin {*T}[f_1, \ldots, f_{i-1}]$ for any $i \in \mathbb{N}$, where $x_j \in Y \cup Z$. Given $i \in \mathbb{N}$, let us take the *T-ideal $\Gamma_i \subseteq F\langle Y, Z\rangle$ generated by all consequences dimensional over F $\mathbb{Z}_4$-graded algebra C with graded involution. By Lemma 3.1 [35], $C = B \oplus J$, where B is a $\mathbb{Z}_4$-graded semisimple algebra with a graded involution, and $J = J(C)$ is a $\mathbb{Z}_4$-graded nilpotent ideal of C. By [5], [36], B has the unit $1_B \in B_0$, and $1_B$ is symmetric with respect to the involution. Therefore, $E_4(C) = E_4(B) \oplus E_4(J)$, where $E_4(B)$ is a *-subalgebra of $E_4(C)$, and $E_4(J)$ is a

References

[1] E. Aljadeff, A. Kanel-Belov, Representability and Specht problem for G-graded algebras, Adv. Math., 225 (2010), 2391-2428.
[2] S.A. Amitsur, Rings with involution, Israel J. Math., 6 (1968), 99-106.
[3] S.A. Amitsur, Identities in rings with involution, Israel J. Math., 7 (1968), 63-68.
[4] Yu. Bakhturin, A. Giambruno, D. Riley, Group-graded algebras with polynomial identities, Israel J. Math., 104 (1998), 145-155.
[5] Yu.A. Bakhturin, S.K. Sehgal, M.V. Zaicev, Finite-dimensional simple graded algebras, Sb. Math., 199(7) (2008), 965-983.
[6] Y.A. Bakhturin, I.P. Shestakov, M.V. Zaicev, Gradings on simple Jordan and Lie algebras, J. Algebra, 283 (2005), 849-868.
[7] A.Ya. Belov, Local finite basis property and local representability of varieties of associative rings (Russian), Izv. Ross. Akad. Nauk, Ser. Mat., 74 (2010), no. 1, 3-134; English transl. in Izv. Math., 74 (2010), no. 1, 1-126.
[8] A.Ya. Belov, Local finite basis property and local finite representability of varieties of associative rings (Russian), Dokl. Akad. Nauk, 432 (2010), no. 6, 727-731; English transl. in Dokl. Math., 81 (2010), no. 3, 458-461.
[9] A. Belov-Kanel, L. Rowen, U. Vishne, Structure of Zariski-closed algebras, Trans. Amer. Math. Soc., 362 (2010), no. 9, 4695-4734.
[10] A. Belov, L.H. Rowen, U. Vishne, Application of full quivers of representations of algebras to polynomial identities, Comm. Algebra, 39 (2011), no. 12, 4536-4551.
[11] A. Belov, L.H. Rowen, U. Vishne, Full exposition of Specht's problem, Serdica Math. J., 38 (2012), no. 1-3, 313-370.
[12] A. Belov, L.H. Rowen, U. Vishne, Full quivers of representations of algebras, Trans. Amer. Math. Soc., 364 (2012), no. 10, 5525-5569.
[13] A. Belov, L.H. Rowen, U. Vishne, PI-varieties associated to full quivers of representations of algebras, Trans. Amer. Math. Soc., 365 (2013), no. 5, 2681-2722.
[14] A. Kanel-Belov, L.H. Rowen, U. Vishne, Specht's problem for associative affine algebras over commutative Noetherian rings, Trans. Amer. Math. Soc., in press.
[15] J. Bergen, M. Cohen, Actions of commutative Hopf algebras, Bull. London Math. Soc., 18 (1986), 159-164.
[16] C.W. Curtis, I. Reiner, Representation Theory of Finite Groups and Associative Algebras, Reprint of the 1962 original, AMS Chelsea Publishing, Providence, RI, 2006.
[17] V. Drensky, Free Algebras and PI-Algebras, Springer-Verlag Singapore, Singapore, 2000.
[18] V. Drensky, E. Formanek, Polynomial Identity Rings, Birkhauser Verlag, Basel-Boston-Berlin, 2004.
[19] V. Drensky, A. Giambruno, Cocharacters, codimensions and Hilbert series of the polynomial identities for 2 x 2 matrices with involution, Canad. J. Math., 46 (1994), 718-733.
[20] A. Giambruno, S. Mishchenko, On star-varieties with almost polynomial growth, Algebra Colloquium, 8(1) (2001), 33-42.
[21] A. Giambruno, S. Mishchenko, Super-cocharacters, star-cocharacters and multiplicities bounded by one, Manuscripta Math., 128 (2009), 483-504.
[22] A. Giambruno, A. Regev, Wreath products and P.I. algebras, J. Pure Appl. Algebra, 35 (1985), 133-149.
[23] A. Giambruno, A. Regev, M. Zaicev, Polynomial Identities and Combinatorial Methods, Marcel Dekker Inc., New York, Basel, 2003.
[24] A. Giambruno, M. Zaicev, Polynomial Identities and Asymptotic Methods, Math. Surveys and Monographs, 122, Amer. Math. Soc., Providence, RI, 2005.
[25] J. Humphreys, Introduction to Lie Algebras and Representation Theory, Grad. Texts, second ed., Springer-Verlag, Berlin, 2003.
[26] G. James, A. Kerber, The Representation Theory of the Symmetric Group, Encyclopedia of Mathematics and Its Applications 16, Addison-Wesley, London, 1981.
[27] A. Kanel-Belov, L.H. Rowen, Computational Aspects of Polynomial Identities, A K Peters Ltd., Wellesley, MA, 2005.
[28] A.R. Kemer, Ideals of Identities of Associative Algebras, Translations of Math. Monographs, 87, Amer. Math. Soc., Providence, RI, 1991.
[29] M.-A. Knus, A.A. Merkurjev, M. Rost, J.-P. Tignol, The Book of Involutions, Colloquium Publications, 44, AMS, 1998.
[30] K. McCrimmon, A Taste of Jordan Algebras, Universitext, Springer-Verlag, Berlin, New York, 2004.
[31] A. Regev, Existence of identities in A ⊗ B, Israel J. Math., 11 (1972), 131-152.
[32] W. Specht, Gesetze in Ringen I, Math. Z., 52(5) (1950), 557-589.
[33] I. Sviridova, Identities of PI-algebras graded by a finite abelian group, Comm. Algebra, 39(9) (2011), 3462-3490.
[34] I. Sviridova, Finitely generated algebras with involution and their identities, J. Algebra, 383 (2013), 144-167.
[35] I. Sviridova, Identities of finitely generated graded algebras with involution, arXiv:1410.2222 [math.RA], preprint.
[36] M.V. Zaicev, S.K. Sehgal, Finite gradings on simple Artinian rings, Vestnik Mosk. Univ., Matem., Mechan., 3 (2001), 21-24 (Russian).
[37] K.A. Zhevlakov, A.M. Slin'ko, I.P. Shestakov, A.I. Shirshov, Rings That Are Nearly Associative, Pure and Applied Mathematics, 104, Academic Press, Inc., New York-London, 1982.
[]
[ "Generic Dynamic Scaling in Kinetic Roughening", "Generic Dynamic Scaling in Kinetic Roughening" ]
[ "José J Ramasco \nInstituto de Física de Cantabria\nCSIC-UC\nE-39005SantanderSpain\n\nDepartamento de Física Moderna\nUniversidad de Cantabria\nE-39005SantanderSpain\n", "Juan M López \nDipartamento di Fisica and Unità INFM\nUniversità di Roma \"La Sapienza\"\nI-00185RomaItaly\n", "Miguel A Rodríguez \nInstituto de Física de Cantabria\nCSIC-UC\nE-39005SantanderSpain\n" ]
[ "Instituto de Física de Cantabria\nCSIC-UC\nE-39005SantanderSpain", "Departamento de Física Moderna\nUniversidad de Cantabria\nE-39005SantanderSpain", "Dipartamento di Fisica and Unità INFM\nUniversità di Roma \"La Sapienza\"\nI-00185RomaItaly", "Instituto de Física de Cantabria\nCSIC-UC\nE-39005SantanderSpain" ]
[]
We study the dynamic scaling hypothesis in invariant surface growth. We show that the existence of power-law scaling of the correlation functions (scale invariance) does not determine a unique dynamic scaling form of the correlation functions, which leads to the different anomalous forms of scaling recently observed in growth models. We derive all the existing forms of anomalous dynamic scaling from a new generic scaling ansatz. The different scaling forms are subclasses of this generic scaling ansatz associated with bounds on the roughness exponent values. The existence of a new class of anomalous dynamic scaling is predicted and compared with simulations.
10.1103/physrevlett.84.2199
[ "https://export.arxiv.org/pdf/cond-mat/0001111v1.pdf" ]
206,326,340
cond-mat/0001111
389018b8ba8493d39a01f3336a70f93a20c3b421
Generic Dynamic Scaling in Kinetic Roughening
10 Jan 2000

José J. Ramasco (Instituto de Física de Cantabria, CSIC-UC, E-39005 Santander, Spain; Departamento de Física Moderna, Universidad de Cantabria, E-39005 Santander, Spain), Juan M. López (Dipartimento di Fisica and Unità INFM, Università di Roma "La Sapienza", I-00185 Roma, Italy), Miguel A. Rodríguez (Instituto de Física de Cantabria, CSIC-UC, E-39005 Santander, Spain)

We study the dynamic scaling hypothesis in invariant surface growth. We show that the existence of power-law scaling of the correlation functions (scale invariance) does not determine a unique dynamic scaling form of the correlation functions, which leads to the different anomalous forms of scaling recently observed in growth models. We derive all the existing forms of anomalous dynamic scaling from a new generic scaling ansatz. The different scaling forms are subclasses of this generic scaling ansatz associated with bounds on the roughness exponent values. The existence of a new class of anomalous dynamic scaling is predicted and compared with simulations.

The theory of kinetic roughening deals with the fate of surfaces growing in nonequilibrium conditions [1,2].
In a typical situation an initially flat surface grows and roughens continuously as it is driven by some external noise. The noise term can be of thermal origin (like, for instance, fluctuations in the flux of particles in a deposition process) or a quenched disorder (like in the motion of driven interfaces through porous media). A rough surface may be characterized by the fluctuations of the height around its mean value. So, a basic quantity to look at is the global interface width,
$$W(L,t) = \left\langle \overline{[h(x,t) - \overline{h}]^2} \right\rangle^{1/2},$$
where the overbar denotes an average over all $x$ in a system of size $L$ and the brackets denote an average over different realizations. Rough surfaces then correspond to situations in which the stationary width $W(L, t \to \infty)$ grows with the system size. Alternatively, one may calculate other quantities related to correlations over a distance $l$, such as the height-height correlation function, $G(l,t) = \langle \overline{[h(x+l,t) - h(x,t)]^2} \rangle$, or the local width, $w(l,t) = \langle \overline{\langle [h(x,t) - \langle h \rangle_l]^2 \rangle_l} \rangle^{1/2}$, where $\langle \cdots \rangle_l$ denotes an average over $x$ in windows of size $l$. In the absence of any characteristic length in the problem, growth processes are expected to show power-law behaviour of the correlation functions in space and time, and the Family-Vicsek dynamic scaling ansatz [3,1,2],
$$W(L,t) = t^{\alpha/z} f(L/\xi(t)), \qquad (1)$$
ought to hold. The scaling function $f(u)$ behaves as
$$f(u) \sim \begin{cases} u^{\alpha} & \text{if } u \ll 1 \\ \mathrm{const.} & \text{if } u \gg 1, \end{cases} \qquad (2)$$
where $\alpha$ is the roughness exponent and characterizes the stationary regime, in which the horizontal correlation length $\xi(t) \sim t^{1/z}$ ($z$ is the so-called dynamic exponent) has reached a value larger than the system size $L$. The ratio $\beta = \alpha/z$ is called the growth exponent and characterizes the short-time behavior of the surface. As occurs in equilibrium critical phenomena, the corresponding critical exponents do not depend on microscopic details of the system under investigation.
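As a concrete reading of these definitions, here is a minimal numerical sketch (ours, not part of the original Letter) of how $W$, $w$ and $G$ can be estimated from a single sampled height profile $h(x)$ with periodic boundaries; the function names are our own.

```python
import numpy as np

def global_width(h):
    """W(L): rms fluctuation of the height about its spatial mean."""
    return np.sqrt(np.mean((h - h.mean()) ** 2))

def local_width(h, l):
    """w(l): rms height fluctuation inside windows of size l,
    averaged over all (periodic) window positions."""
    L = len(h)
    var = []
    for x0 in range(L):
        win = h[np.arange(x0, x0 + l) % L]
        var.append(np.mean((win - win.mean()) ** 2))
    return np.sqrt(np.mean(var))

def height_corr(h, l):
    """G(l) = <[h(x+l) - h(x)]^2>, with periodic boundaries."""
    return np.mean((np.roll(h, -l) - h) ** 2)
```

The ensemble average denoted by the brackets would simply add an outer loop over independent realizations of the growth process.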
This has made it possible to divide growth processes into universality classes according to the values of these characteristic exponents [1,2]. A most intriguing feature of some growth models is that the above standard scaling of the global width differs substantially from the scaling behaviour of the local interface fluctuations (measured either by the local width or by the height-height correlation). More precisely, in some growth models the local width (and the height-height correlation) scales as in Eq. (1), i.e. $w(l,t) = t^{\beta} f_A(l/\xi(t))$, but with the anomalous scaling function
$$f_A(u) \sim \begin{cases} u^{\alpha_{loc}} & \text{if } u \ll 1 \\ \mathrm{const.} & \text{if } u \gg 1, \end{cases} \qquad (3)$$
where the new independent exponent $\alpha_{loc}$ is called the local roughness exponent. This is what has been called anomalous roughening in the literature, and it has been found to occur in many growth models [4-10] as well as experiments [11-15]. Moreover, it has recently been shown [16,17] that anomalous roughening can take two different forms. On the one hand, there are super-rough processes, i.e. $\alpha > 1$, for which always $\alpha_{loc} = 1$. On the other hand, there are intrinsically anomalous roughened surfaces, for which $\alpha_{loc} < 1$ and $\alpha$ can actually be any $\alpha > \alpha_{loc}$. Anomalous scaling implies that one more independent exponent, $\alpha_{loc}$, may be needed in order to assess the universality class of the particular system under study. In other words, some growth models may have exactly the same $\alpha$ and $z$ values, seemingly indicating that they belong to the same universality class; however, they may have different values of $\alpha_{loc}$, showing that they actually belong to distinct classes of growth. As for the experiments, only the local roughness exponent is measurable by direct methods, since the system size normally remains fixed. Fracture experiments [14] in systems of varying sizes have succeeded in measuring both the local and global roughness exponents, in good agreement with the scaling picture described above.
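A quick synthetic illustration of the super-rough case (our own toy example, not from the Letter): a doubly integrated white-noise profile has global roughness exponent $\alpha = 3/2 > 1$, yet its small-scale fluctuations are dominated by the mean local slope, so $G(l) \propto l^2$ at small $l$, i.e. $\alpha_{loc} = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8192
# doubly integrated white noise: a profile with global exponent alpha = 3/2
h = np.cumsum(np.cumsum(rng.standard_normal(L)))

def G(h, l):
    """Height-height correlation <[h(x+l) - h(x)]^2> (open boundaries)."""
    d = h[l:] - h[:-l]
    return np.mean(d ** 2)

# Slope-dominated local fluctuations: doubling l quadruples G, so G ~ l^2
ratio = G(h, 16) / G(h, 8)
print(ratio)   # ≈ 4, i.e. alpha_loc = 1 even though alpha = 3/2
```

The point of the check is that the same profile would give $W(L) \sim L^{3/2}$ globally, so the local and global exponents genuinely differ.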
In this Letter we introduce a new anomalous dynamics in kinetic roughening. We show that, by adopting more general forms of the scaling functions involved, a generic theory of dynamic scaling can be constructed. Our theory incorporates all the different forms that dynamic scaling can take, namely Family-Vicsek, super-rough and intrinsic, as subclasses, and predicts the existence of a new class of growth models with novel anomalous scaling properties. Simulations of the Sneppen model (rule A) [18] of self-organized depinning (and other related models) are presented as examples of the new dynamics.

Firstly, let us consider the Fourier transform of the height of the surface in a system of size $L$, which is given by $\hat{h}(k,t) = L^{-1/2} \sum_x [h(x,t) - \overline{h}(t)] \exp(ikx)$, where the spatial average of the height has been subtracted. The scaling behaviour of the surface can now be investigated by calculating the structure factor or power spectrum
$$S(k,t) = \langle \hat{h}(k,t)\, \hat{h}(-k,t) \rangle, \qquad (4)$$
which is related to the height-height correlation function $G(l,t)$ defined above by
$$G(l,t) = \frac{4}{L} \sum_{2\pi/L \leq k \leq \pi/a} [1 - \cos(kl)]\, S(k,t) \propto \int_{2\pi/L}^{\pi/a} \frac{dk}{2\pi}\, [1 - \cos(kl)]\, S(k,t), \qquad (5)$$
where $a$ is the lattice spacing and $L$ is the system size. In order to explore the most general form that kinetic roughening can take, we study the scaling behaviour of surfaces satisfying what we will call a generic dynamic scaling form of the correlation functions. We will consider that a growing surface satisfies generic dynamic scaling when there exists a correlation length $\xi(t)$, i.e. the distance over which correlations have propagated up to time $t$, with $\xi(t) \sim t^{1/z}$, $z$ being the dynamic exponent. If no characteristic scale exists but $\xi$ and the system size $L$, then power-law behaviour in space and time is expected, and the growth saturates when $\xi \sim L$, at which point the correlations (and, from Eq. (5), also the structure factor) become time-independent.
The global roughness exponent $\alpha$ can now be calculated in this regime from $G(l = L, t \gg L^z) \sim L^{2\alpha}$ (or $W(L, t \gg L^z) \sim L^{\alpha}$). In general, as we will see below, the scaling function that enters the dynamic scaling of the local width (or the height-height correlation) takes different forms depending on further restrictions and/or bounds on the roughness exponent values. These kinds of restrictions are very often assumed and are not valid for every growth model. For instance, only if the surface were self-affine would saturation of the correlation function $G(l,t)$ also occur for intermediate scales $l$ at times $t \sim l^z$, and with the very same roughness exponent. However, the latter does not hold when anomalous roughening takes place, as can be seen from the scaling of the local width in Eq. (3). Our aim here is to investigate all the possible forms that the scaling functions can exhibit when solely the existence of generic scaling is assumed. So, if the roughening process under consideration shows generic dynamic scaling (in the sense explained above), and no further assumptions (like, for instance, surface self-affinity or implicit bounds on the exponent values) are imposed, then we propose that the structure factor is given by
$$S(k,t) = k^{-(2\alpha+1)}\, s(k t^{1/z}), \qquad (6)$$
where the scaling function has the general form
$$s(u) \sim \begin{cases} u^{2(\alpha - \alpha_s)} & \text{if } u \gg 1 \\ u^{2\alpha + 1} & \text{if } u \ll 1, \end{cases} \qquad (7)$$
and the exponent $\alpha_s$ is what we will call the spectral roughness exponent. This scaling ansatz is a natural generalization of the scaling proposed for the structure factor in Refs. [16,17] for anomalous scaling. In the case of the global width, one can make use of
$$W^2(L,t) = \frac{1}{L} \sum_k S(k,t) = \int \frac{dk}{2\pi}\, S(k,t) \qquad (8)$$
to prove easily that the global width scales as in Eqs. (1) and (2), independently of the values of the exponents $\alpha$ and $\alpha_s$. However, the scaling of the local width is much more involved.
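The structure factor and the sum rule (8) are straightforward to implement with an FFT. The sketch below is ours; the normalization is chosen so that $W^2 = (1/L)\sum_k S(k)$ holds exactly for a single configuration:

```python
import numpy as np

def structure_factor(h):
    """S(k) for one configuration, with
    h(k) = L^{-1/2} sum_x [h(x) - hbar] exp(ikx)."""
    L = len(h)
    hk = np.fft.fft(h - h.mean()) / np.sqrt(L)
    S = np.abs(hk) ** 2              # = h(k) h(-k) for a real profile
    k = 2.0 * np.pi * np.fft.fftfreq(L)
    return k, S

# Sum rule, Eq. (8): W^2(L) = (1/L) * sum_k S(k)
rng = np.random.default_rng(1)
h = rng.standard_normal(256)
k, S = structure_factor(h)
W2 = np.mean((h - h.mean()) ** 2)
print(np.isclose(S.sum() / len(h), W2))   # True, by Parseval's theorem
```

Averaging $S(k)$ over realizations (the brackets in Eq. (4)) would again be an outer loop over independent configurations.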
The existence of a generic scaling behaviour like (7) for the structure factor always leads to a dynamic scaling behaviour
$$w(l,t) \sim G(l,t) = t^{\beta} g(l/\xi) \qquad (9)$$
of the height-height correlation (and local width), but the corresponding scaling function $g(u)$ is not unique. When substituting Eqs. (7) and (6) into (5), one can see that the various limits involved ($a \to 0$, $\xi(t)/L \to \infty$ and $L \to \infty$) do not commute [16,17]. This results in a different scaling behaviour of $g(u)$ depending on the value of the exponent $\alpha_s$. Let us now summarize how all the scaling behaviours reported in the literature are obtained from the generic dynamic scaling ansatz (7). We shall also show how a new roughening dynamics naturally appears in this scaling theory. Two major cases can be distinguished, namely $\alpha_s < 1$ and $\alpha_s > 1$. On the one hand, for $\alpha_s < 1$ the integral in Eq. (5) has already been computed [16,17] and one gets
$$g_{\alpha_s < 1}(u) \sim \begin{cases} u^{\alpha_s} & \text{if } u \ll 1 \\ \mathrm{const.} & \text{if } u \gg 1. \end{cases} \qquad (10)$$
So the corresponding scaling function is $g_{\alpha_s < 1} \sim f_A$ with $\alpha_s = \alpha_{loc}$, i.e. the intrinsic anomalous scaling function in Eq. (3). Moreover, in this case the interface would satisfy Family-Vicsek scaling (for the local as well as the global width) only if $\alpha = \alpha_s$ were satisfied for the particular growth model under study. Thus, the standard Family-Vicsek scaling turns out to be one of the possible scaling forms compatible with generic scale-invariant growth, but not the only one. On the other hand, a new anomalous dynamics shows up for growth models in which $\alpha_s > 1$. In this case one finds that, in the thermodynamic limit $L \to \infty$, the integral in Eq. (5) has a divergence coming from the lower integration limit. To avoid the divergence one has to compute the integral keeping $L$ fixed. We then obtain the scaling function
$$g_{\alpha_s > 1}(u) \sim \begin{cases} u & \text{if } u \ll 1 \\ \mathrm{const.} & \text{if } u \gg 1, \end{cases} \qquad (11)$$
so that in this case one always gets $\alpha_{loc} = 1$ for any $\alpha_s > 1$. Thus, for growth models in which $\alpha = \alpha_s$, one recovers the super-rough scaling behaviour [16,17].
However, it is worth noting that neither the spectral exponent $\alpha_s$ nor the global exponent $\alpha$ is fixed by the scaling in Eqs. (7) and (11) and, in principle, they could be different. Therefore, growth models in which $\alpha_s > 1$ but $\alpha \neq \alpha_s$ could also be possible and would represent a new type of dynamics with anomalous scaling. The main feature of this new type of anomalous roughening is that it can be detected only by determining the scaling of the structure factor. Whenever such a scaling takes place in the problem under investigation, the new exponent $\alpha_s$ will only show up when analyzing the scaling behaviour of $S(k,t)$ and will not be detectable in either $W(L,t)$, $w(l,t)$ or $G(l,t)$. In fact, as we have shown, the stationary regime of a surface exhibiting this kind of anomalous scaling will be characterized by $W(L) \sim L^{\alpha}$ and $w(l,L) \sim G(l,L) \sim l L^{\alpha-1}$; however, the structure factor scales as $S(k,L) \sim k^{-(2\alpha_s+1)} L^{2(\alpha-\alpha_s)}$, where the spectral roughness exponent $\alpha_s$ is a new and independent exponent. We can summarize our analytical results as follows:
$$\begin{cases} \text{if } \alpha_s < 1 \;\Rightarrow\; \alpha_{loc} = \alpha_s & \begin{cases} \alpha_s = \alpha \Rightarrow \text{Family-Vicsek} \\ \alpha_s \neq \alpha \Rightarrow \text{Intrinsic} \end{cases} \\[2ex] \text{if } \alpha_s > 1 \;\Rightarrow\; \alpha_{loc} = 1 & \begin{cases} \alpha_s = \alpha \Rightarrow \text{Super-rough} \\ \alpha_s \neq \alpha \Rightarrow \text{New class} \end{cases} \end{cases} \qquad (12)$$
In the following we present simulations of a one-dimensional growth model that is a nice example of the new dynamics. We have performed numerical simulations of the Sneppen model of self-organized depinning (model A) [18]. We have found that this model exhibits anomalous roughening of the type described by Eq. (7), with $\alpha_s > 1$ and $\alpha_s \neq \alpha$. In this model the height of the interface $h(i,t)$ is taken to be an integer defined on a one-dimensional discrete substrate $i = 1, \cdots, L$. A random pinning force $\eta(i,h)$ is associated with each lattice site $[i, h(i)]$. The quenched disorder $\eta(i,h)$ is uniformly distributed in $[0,1]$ and uncorrelated. The growth algorithm is then as follows.
At every time step $t$, the site $i_0$ with the smallest pinning force is chosen and its height is updated, $h(i_0, t+1) = h(i_0, t) + 1$, provided that the conditions $|h(i_0,t) - h(i_0 \pm 1, t)| < 2$ are satisfied. Periodic boundary conditions are assumed. We have studied the behaviour of the model in systems of different sizes, from $L = 2^6$ up to $L = 2^{13}$. From calculations of the saturated global width $W(L)$ for various system sizes we find a global roughness exponent $\alpha = 1.000 \pm 0.005$, in agreement with previous simulations [18]. We have checked that the scaling of the global width is given by Eq. (1) with a scaling function like (2). Also in agreement with previous work [18], we find that the time exponent $\alpha/z = 0.95 \pm 0.05$. The local width $w(l,t)$ scales as $w(l,t) = t^{\alpha/z} g(l/\xi)$, where the scaling function is given by Eq. (11), and also $\alpha_{loc} = 1$. From these simulation results one could conclude that the behaviour of the Sneppen growth model is rather trivial and that the exponents $\alpha = \alpha_{loc} = z = 1$ describe its scaling properties. Quite the opposite: this model exhibits nontrivial features that can be noticed when the structure factor is calculated. In Figure 1 we show our numerical results for the structure factor $S(k,t)$ in a system of size $L = 2048$. Note that in Figure 1 the curves $S(k,t)$ for different times are shifted downwards, reflecting that $\alpha < \alpha_s$. This contrasts with the case of intrinsic anomalous roughening [16,17], where $\alpha_s = \alpha_{loc}$ and $\alpha_{loc} \leq 1$. The slope of the continuous line is $-3.7$ and indicates that a new exponent $\alpha_s = 1.35$ enters the scaling. This can be better appreciated in the data collapse shown in Figure 2, where one can observe that, instead of being constant, the scaling function $s(u)$ has a negative exponent, $u^{-0.7}$, for $u \gg 1$. The exponents used for the data collapse are $\alpha = 1$, $z = 1$, and the scaling function obtained is in excellent agreement with Eq. (7) and a spectral exponent $\alpha_s = 1.35 \pm 0.03$.
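The update rule just described can be transcribed almost literally; the sketch below is ours. How a minimal site that violates the slope condition should be handled is not spelled out in the text, so redrawing its pinning force is an assumption made here, and the lazily filled dictionary stands in for the quenched array $\eta(i,h)$:

```python
import random

def sneppen_A(L, steps, seed=0):
    """Sketch of Sneppen model A as described in the text
    (treatment of blocked sites is our assumption, not from the paper)."""
    rng = random.Random(seed)
    h = [0] * L
    eta = {}                                   # quenched disorder eta(i, h)

    def force(i):
        key = (i, h[i])
        if key not in eta:
            eta[key] = rng.random()
        return eta[key]

    for _ in range(steps):
        i0 = min(range(L), key=force)          # site with smallest pinning force
        left, right = h[(i0 - 1) % L], h[(i0 + 1) % L]
        if abs(h[i0] - left) < 2 and abs(h[i0] - right) < 2:
            h[i0] += 1                         # advance the interface
        else:
            eta[(i0, h[i0])] = rng.random()    # blocked: redraw (assumption)
    return h

h = sneppen_A(L=64, steps=2000)
```

By construction the update only fires when both neighbour differences are at most 1, so the interface keeps $|h(i) - h(i \pm 1)| \leq 2$ at all times, consistent with the faceted profiles discussed below.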
The interface in the Sneppen model A is formed by facets with constant slope $\pm 1$ [18]. The value of the exponents $\alpha = \alpha_{loc} = 1$ and $\alpha_s = 1.35$ is related to the faceted form of the interface at saturation. It is easy to understand how the anomalous spectral roughness exponent appears due to the faceted form of the interface. For the simpler (and trivial) case of a faceted interface formed by a finite number $N$ of identical segments of constant slope $\pm m$, one can show analytically that the global width is $W(L) \sim m^2 L^2 / N^2$ and the height-height correlation function is $G(l) \sim l^2 m^2 - N m^2 l^3 / L$, which leads to $\alpha = \alpha_{loc} = 1$, while the spectrum behaves as $S(k,L) \sim k^{-4} L^{-1}$ as $k \to 0$. A simple comparison with the anomalous scaling form for the stationary spectrum, $S(k,L) \sim k^{-(2\alpha_s+1)} L^{2(\alpha-\alpha_s)}$, leads to $\alpha_s = 1.5$. Actually, the facets occurring in the Sneppen model are not formed by identical segments, but rather follow a random distribution [20], which leads to a spectral exponent different from that of the trivial case.

In summary, we have presented a generic theory of scaling for invariant surface growth. We have shown that the existence of power-law scaling of the correlation functions (scale invariance) does not determine a unique form of the scaling functions involved. This leads to the different dynamic scaling forms recently observed in growth models [4-10] and experiments [11-15] exhibiting anomalous roughening. In particular, interface scale invariance does not necessarily imply Family-Vicsek dynamic scaling. We have derived all the types of scaling (Family-Vicsek, super-rough and intrinsic anomalous) from a unique scaling ansatz, which is formulated in Fourier space. The different types of scaling are subclasses of our generic scaling ansatz, associated with bounds on the values that the new spectral roughness exponent $\alpha_s$ may take.
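As a numerical footnote to the identical-facet estimate above (our own check, not part of the Letter): for a periodic triangular profile of slope $\pm 1$, the power spectrum at the odd harmonics indeed decays as $k^{-4}$:

```python
import numpy as np

P, L = 256, 1024                                        # facet period, system size
x = np.arange(L)
h = np.abs(((x + P // 2) % P) - P // 2).astype(float)   # triangular facets, slope +-1

hk = np.fft.fft(h - h.mean()) / np.sqrt(L)
S = np.abs(hk) ** 2

n1 = L // P                                 # index of the fundamental mode
odd = n1 * np.array([1, 3, 5])              # only odd harmonics carry weight
slope = np.polyfit(np.log(odd), np.log(S[odd]), 1)[0]
print(round(slope, 2))                      # close to -4, i.e. S(k) ~ k^{-4}
```

For facets of random lengths, as in the Sneppen model, the cancellation between segments is imperfect and the effective spectral exponent shifts away from this trivial value.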
This generalization has allowed us to predict the existence of a new kind of anomalous scaling with interesting features. Simulations of a model for self-organized interface depinning have been shown to be in excellent agreement with the new anomalous dynamics. It has recently been shown [19] that anomalous roughening stems from a non-trivial dynamics of the mean local slopes (∇h)^2. In contrast, the new anomalous dynamics can be pinned down to growth models in which the stationary state consists of faceted interfaces. The authors would like to thank R. Cuerno for an earlier collaboration that led to Refs. [16,17]. This work has been supported by the DGES of the Spanish Government (project No. PB96-0378-C02-02). JJR is supported by the Ministerio de Educación y Cultura (Spain). JML is supported by a TMR Network of the European Commission (contract number FMRXCT980183). Electronic addresses: [email protected], [email protected].

FIG. 1. Structure factor of the Sneppen model for interface depinning at different times. The continuous straight line is a guide to the eye and has a slope −3.7. Note the anomalous downwards shift of the curves for increasing times.

FIG. 2. Data collapse of the graphs in Fig. 1 (curves for t = 7.8125, 3 × 7.8125, 8 × 7.8125, 20 × 7.8125, and 100 × 7.8125). The exponents used for the collapse are α = 1.0 and z = 1.0. The straight lines have slopes −0.7 (solid) and 3.0 (dashed) and are a guide to the eye. The scaling function is given by Eq. (7) with a spectral roughness exponent α_s = 1.35 ± 0.03. The deviations from the scaling for large values of the argument k t^{1/z} are due to the finite lattice spacing.

[1] A.-L. Barabási and H. E. Stanley, Fractal Concepts in Surface Growth (Cambridge University Press, Cambridge, 1995).
[2] J. Krug, Adv. Phys. 46, 139 (1997).
[3] F. Family and T. Vicsek, J. Phys. A 18, L75 (1985).
[4] J. Krug, Phys. Rev. Lett. 72, 2907 (1994).
[5] M. Schroeder et al., Europhys. Lett. 24, 563 (1993).
[6] J. M. López and M. A. Rodríguez, Phys. Rev. E 54, R2189 (1996).
[7] S. Das Sarma et al., Phys. Rev. E 53, 359 (1996).
[8] C. Dasgupta, S. Das Sarma and J. M. Kim, Phys. Rev. E 54, R4552 (1996).
[9] J. M. López and M. A. Rodríguez, J. Phys. I 7, 1191 (1997).
[10] M. Castro et al., Phys. Rev. E 57, R2491 (1998).
[11] H. Yang, G.-C. Wang and T.-M. Lu, Phys. Rev. Lett. 73, 2348 (1994).
[12] J. H. Jeffries, J.-K. Zuo and M. M. Craig, Phys. Rev. Lett. 76, 4931 (1996).
[13] J. M. López and J. Schmittbuhl, Phys. Rev. E 57, 6405 (1998).
[14] S. Morel et al., Phys. Rev. E 58, 6999 (1998).
[15] A. Bru et al., Phys. Rev. Lett. (1998).
[16] J. M. López, M. A. Rodríguez and R. Cuerno, Phys. Rev. E 56, 3993 (1997).
[17] J. M. López, M. A. Rodríguez and R. Cuerno, Physica A 246, 329 (1997).
[18] K. Sneppen, Phys. Rev. Lett. 69, 3539 (1992).
[19] J. M. López, Phys. Rev. Lett. 83, 4594 (1999).
[20] J. J. Ramasco, J. M. López and M. A. Rodríguez, unpublished.
[]
[ "Investigation of transition frequencies of two acoustically coupled bubbles using a direct numerical simulation technique", "Investigation of transition frequencies of two acoustically coupled bubbles using a direct numerical simulation technique" ]
[ "Masato Ida \nCollaborative Research Center of Frontier Simulation Software for Industrial Science\nInstitute of Industrial Science\nUniversity of Tokyo\n4-6-1 Komaba, Meguro-Ku153-8505Tokyo\n" ]
[ "Collaborative Research Center of Frontier Simulation Software for Industrial Science\nInstitute of Industrial Science\nUniversity of Tokyo\n4-6-1 Komaba, Meguro-Ku153-8505Tokyo" ]
[]
The theoretical results regarding the "transition frequencies" of two acoustically interacting bubbles have been verified numerically. The theory provided by Ida [Phys. Lett. A 297 (2002) 210] predicted the existence of three transition frequencies per bubble, at each of which the phase difference between a bubble's pulsation and the external sound field becomes π/2, while previous theories predicted only two natural frequencies causing such phase shifts. Namely, two of the three transition frequencies correspond to the natural frequencies, while the remaining one does not. In a subsequent paper [M. Ida, Phys. Rev. E 67 (2003) 056617], it was shown theoretically that transition frequencies other than the natural frequencies may cause the sign reversal of the secondary Bjerknes force acting between pulsating bubbles. In the present study, we employ a direct numerical simulation technique that uses the compressible Navier-Stokes equations with a surface-tension term as the governing equations to investigate the transition frequencies of two coupled bubbles by observing their pulsation amplitudes and directions of translational motion, both of which change as the driving frequency changes.
10.1143/jpsj.73.3026
[ "https://export.arxiv.org/pdf/physics/0111138v3.pdf" ]
207,154,644
physics/0111138
1a98199e82ba496038d116c6eed0a39edfc32ce0
Investigation of transition frequencies of two acoustically coupled bubbles using a direct numerical simulation technique

Masato Ida
Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505

8 Oct 2004. arXiv:physics/0111138v3 [physics.flu-dyn]. Typeset with jpsj2.cls <ver.1.2>. Full Paper.
Keywords: bubble dynamics, secondary Bjerknes force, direct numerical simulation, natural frequency, transition frequency.

The theoretical results regarding the "transition frequencies" of two acoustically interacting bubbles have been verified numerically. The theory provided by Ida [Phys. Lett. A 297 (2002) 210] predicted the existence of three transition frequencies per bubble, at each of which the phase difference between a bubble's pulsation and the external sound field becomes π/2, while previous theories predicted only two natural frequencies causing such phase shifts. Namely, two of the three transition frequencies correspond to the natural frequencies, while the remaining one does not. In a subsequent paper [M. Ida, Phys. Rev. E 67 (2003) 056617], it was shown theoretically that transition frequencies other than the natural frequencies may cause the sign reversal of the secondary Bjerknes force acting between pulsating bubbles. In the present study, we employ a direct numerical simulation technique that uses the compressible Navier-Stokes equations with a surface-tension term as the governing equations to investigate the transition frequencies of two coupled bubbles by observing their pulsation amplitudes and directions of translational motion, both of which change as the driving frequency changes.
The numerical results reproduce the recent theoretical predictions, validating the existence of the transition frequencies not corresponding to the natural frequency.

1. Introduction

The secondary Bjerknes force is an interaction force acting between pulsating gas bubbles in an acoustic field. [1-3] The classical theory originated by Bjerknes predicts either attraction only or repulsion only, depending on whether the driving frequency stays outside or inside, respectively, the frequency region between the partial (or monopole) natural frequencies of the two bubbles. However, recent studies show that the force sometimes reverses its own direction as the distance between the bubbles changes. [4-6] The first theoretical study on this subject was performed by Zabolotskaya. [4] Employing a linear coupled oscillator model, she showed that the radiative interaction between the bubbles, which results in the change in the natural frequencies of the bubbles, could cause this reversal. In the mid-1990s, Doinikov and Zavtrak arrived at the same conclusion by employing a linear theoretical model in which the multiple scattering between the bubbles is more rigorously taken into account. [5] These theoretical results are considered to explain the stable structure formation of bubbles in a sound field, called "bubble cluster" or "bubble grape," which has been observed experimentally by several researchers in different fields. [3, 7-11] In both of the theoretical studies mentioned above, it was assumed that the reversal is due to the change in the natural (or the resonance) frequencies of bubbles, caused by the radiative interaction between bubbles. However, those authors had differing interpretations of how the natural frequencies change.
The theoretical formula for the natural frequencies, used by Zabolotskaya [4] and given previously by Shima, [12] shows that the higher and the lower natural frequencies (converging to the partial natural frequencies of the smaller and the larger bubble, respectively, when the distance between the bubbles is infinite) reveal an upward and a downward shift, respectively, as the bubbles come closer to one another. [12,13] In contrast, Doinikov and Zavtrak assumed intuitively that both natural frequencies rise. [5,6] This assumption seems to explain well the sign reversal occurring not only when both bubbles are larger than the resonance size but also when one bubble is larger and the other is smaller than the resonance size. The sign reversal in the latter case, for instance, is thought to occur when the resonance frequency of the larger bubble, increasing as the bubbles come closer to one another, surpasses the driving frequency, resulting in the change in the pulsation phase of the larger bubble, and leading to the change in the phase difference between the bubbles. However, this assumption is obviously inconsistent with the previous theoretical result for the natural frequencies. [4,12] Recently, Ida [14] proposed an alternative theoretical explanation for this phenomenon, also using the linear model Zabolotskaya used. He claimed that this phenomenon cannot be interpreted by only observing the natural frequencies, and that it is useful to define the transition frequencies that make the phase difference between a bubble's pulsation and an external sound π/2 (or 3π/2). It has been pointed out theoretically that the maximum number of natural frequencies and that of transition frequencies are, in general, not in agreement in multibubble cases, [13,15] while they are, as is well known, consistent in single-bubble cases, where the phase difference between a bubble's pulsation and an external sound becomes π/2 only at its natural frequency.
(This is not true in strongly nonlinear cases, in which the phase reversal can take place even in frequency regions far from the bubble's natural frequency; see, e.g., Refs. [16,17].) In a double-bubble case, for instance, that theory predicts three transition frequencies per bubble, two of which correspond to the natural frequencies. [13] A preliminary discussion of an N-bubble system [15] showed that a bubble in this system has up to 2N − 1 transition frequencies, only N of which correspond to the natural frequencies. More specifically, the number of transition frequencies is in general larger than that of natural frequencies. The transition frequencies not corresponding to the natural frequency have physical meanings differing from those of the natural frequencies; they do not cause the resonance response of the bubbles. [13,15] The theory for the sign reversal of the force, constructed based on the transition frequencies, predicts that the sign reversal takes place around those frequencies, not around the natural frequencies, and can explain the sign reversal in both cases mentioned above. [14] Moreover, the theory does not contradict the theory for the natural frequencies described previously, because all the natural frequencies are included in the transition frequencies. The aim of this paper is to verify the theoretical prediction of the transition frequencies by direct numerical simulation (DNS). In a recent paper, [18] Ida proposed a DNS technique, based on a hybrid advection scheme, [19] a multi-time-step integration technique, [18] and the Cubic-Interpolated Propagation/Combined, Unified Procedure (CIP-CUP) algorithm, [20,21] which allows us to compute the dynamics (pulsation and translational motion) of deformable bubbles in a viscous liquid even when the separation distance between the bubbles is small.
[18, 22] In that DNS technique, the compressible Navier-Stokes equations with a surface-tension term are selected as the governing equations; the convection term in these equations is solved by an explicit advection scheme employing both interpolation and extrapolation functions, which realizes a discontinuous description of interfaces between different materials, [19] while the acoustic and surface-tension terms are handled by the implicit procedure and the volume-force surface-tension solver summarized at the end of this paper. The following sections are organized as follows. In §2, the previously expounded theories are reviewed and reexamined, including those for the transition frequencies and the sign reversal, and in §3, the numerical results and a discussion are provided. Section 4 presents concluding remarks.

2. Theories

2.1 A single-bubble problem

It is well known that, when the wavelength of an external sound is sufficiently large compared to the radius of a bubble (named "bubble 1," immersed in a liquid) and the sphericity of the bubble is maintained, the following second-order differential equation [3,24] describes the linear pulsation of the bubble:

$$\ddot e_1 + \omega_{10}^2 e_1 + \delta_1 \dot e_1 = -\frac{p_{ex}}{\rho_0 R_{10}}, \qquad (1)$$

$$\omega_{10} = \sqrt{\frac{1}{\rho_0 R_{10}^2}\left[3\kappa P_0 + (3\kappa-1)\frac{2\sigma}{R_{10}}\right]},$$

where it is assumed that the bubble's time-dependent radius can be represented by R_1 = R_{10} + e_1 (|e_1| ≪ R_{10}), with R_{10} being its equilibrium radius and e_1 the deviation in the radius; ω_{10} is the bubble's natural frequency, δ_1 is the damping factor, [25,26] p_{ex} is the sound pressure at the bubble position, ρ_0 is the equilibrium density of the liquid, P_0 is the static pressure, κ is the polytropic exponent of the gas inside the bubble, σ is the surface tension, and the overdots denote differentiation with respect to time. Assuming p_{ex} = −P_a sin ωt (where P_a is a positive constant), the harmonic steady-state solution of Eq. (1) is determined as e_1 = K_{S1} sin(ωt − φ_{S1}), where

$$K_{S1} = \frac{P_a}{\rho_0 R_{10}}\,\frac{1}{\sqrt{(\omega_{10}^2-\omega^2)^2 + \delta_1^2\omega^2}}, \qquad \phi_{S1} = \tan^{-1}\frac{\delta_1\omega}{\omega_{10}^2-\omega^2}.$$
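The steady-state amplitude and phase above are straightforward to evaluate. The following sketch is my own (the water constants are those quoted in §3, and the damping is the viscous-plus-radiation form given later in §2); it confirms that the phase difference passes through π/2 exactly at ω = ω10:

```python
import math

rho0, P0, sigma, kappa = 998.0, 101325.0, 0.0728, 1.33
mu, c = 1.137e-3, 1500.0

def natural(R0):
    # omega_10 of eq. (1)
    return math.sqrt((3*kappa*P0 + (3*kappa - 1)*2*sigma/R0)/rho0)/R0

def steady_state(R0, w, Pa):
    """Amplitude K_S1 and phase phi_S1 of e1 = K_S1 sin(wt - phi_S1)."""
    w0 = natural(R0)
    delta = 4*mu/(rho0*R0**2) + w**2*R0/c     # viscous + radiation damping
    K = (Pa/(rho0*R0))/math.sqrt((w0**2 - w**2)**2 + (delta*w)**2)
    phi = math.atan2(delta*w, w0**2 - w**2)
    return K, phi

R0, Pa = 5e-6, 0.001*101325.0
w0 = natural(R0)
K_res, phi_res = steady_state(R0, w0, Pa)
assert abs(phi_res - math.pi/2) < 1e-9        # phase is pi/2 at w = w0
K_lo, _ = steady_state(R0, 0.5*w0, Pa)
K_hi, _ = steady_state(R0, 1.5*w0, Pa)
assert K_res > K_lo and K_res > K_hi          # resonance peak near w0
```

For a single bubble the phase reversal and the resonance peak thus occur at (essentially) the same frequency, which is the behaviour the multibubble case breaks.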
This result reveals that the phase reversal (or the phase difference φ_{S1} = π/2) appears only at ω = ω_{10}. Moreover, if δ_1 ≪ ω_{10}, the bubble's resonance response takes place at almost the same frequency. Though the resonance frequency shifts away from ω_{10} as δ_1 increases, it can in many cases be assumed to be almost the same as ω_{10}. (Also, the nonlinearity in bubble pulsation is known to alter a bubble's resonance frequency. [2,3,27])

2.2 A double-bubble problem

When one other bubble (bubble 2) exists, the pulsation of the previous bubble is driven not only by the external sound but also by the sound wave that bubble 2 radiates. Assuming that the surrounding liquid is incompressible and that the time-dependent radius of bubble 2 can be represented by R_2 = R_{20} + e_2 (|e_2| ≪ R_{20}), the radiated pressure field is found to be [15]

$$p(r, t) \approx \frac{\rho_0 R_{20}^2}{r}\,\ddot e_2,$$

where r is the distance measured from the center of bubble 2. The total sound pressure at the position of bubble 1 (p_{d1}) is, thus,

$$p_{d1} \approx p_{ex} + \frac{\rho_0 R_{20}^2}{D}\,\ddot e_2, \qquad (2)$$

where D is the distance between the centers of the bubbles. This total pressure drives the pulsation of bubble 1. Replacing p_{ex} in Eq. (1) with p_{d1} yields the modified equation for bubble 1,

$$\ddot e_1 + \omega_{10}^2 e_1 + \delta_1\dot e_1 = -\frac{p_{ex}}{\rho_0 R_{10}} - \frac{R_{20}^2}{R_{10} D}\,\ddot e_2, \qquad (3)$$

and exchanging 1 and 2 (or 10 and 20) in the subscripts in this equation yields that for bubble 2,

$$\ddot e_2 + \omega_{20}^2 e_2 + \delta_2\dot e_2 = -\frac{p_{ex}}{\rho_0 R_{20}} - \frac{R_{10}^2}{R_{20} D}\,\ddot e_1. \qquad (4)$$

This kind of system of differential equations is called a (linear) coupled oscillator model or a self-consistent model, [28,29] and is known to be accurate to third order in 1/D. [30] The same (or essentially the same) formulation has been employed in several studies considering acoustic properties of coupled bubbles.
[4, 12, 28-32] Shima [12] and Zabolotskaya, [4] assuming δ_j ≈ 0 (for j = 1 and 2), derived the theoretical formula for the natural frequencies, represented as

$$(\omega_{10}^2-\omega^2)(\omega_{20}^2-\omega^2) - \frac{R_{10}R_{20}}{D^2}\,\omega^4 = 0. \qquad (5)$$

This equation predicts the existence of up to two natural frequencies (or eigenvalues of the system (3) and (4)) per bubble. The theoretical formula for the transition frequencies, derived by Ida, [13] can be obtained from the harmonic steady-state solution of the system (3) and (4). Assuming p_{ex} = −P_a sin ωt, the solution is determined as follows:

$$e_1 = K_1 \sin(\omega t - \phi_1), \qquad (6)$$

where

$$K_1 = \frac{P_a}{R_{10}\rho_0}\sqrt{A_1^2+B_1^2}, \qquad (7)$$

$$\phi_1 = \tan^{-1}\frac{B_1}{A_1}, \qquad (8)$$

with

$$A_1 = \frac{H_1 F + M_2 G}{F^2+G^2}, \qquad B_1 = \frac{H_1 G - M_2 F}{F^2+G^2},$$

$$F = L_1 L_2 - \frac{R_{10}R_{20}}{D^2}\omega^4 - M_1 M_2, \qquad G = L_1 M_2 + L_2 M_1, \qquad H_1 = L_2 + \frac{R_{20}}{D}\omega^2,$$

$$L_1 = \omega_{10}^2-\omega^2, \qquad L_2 = \omega_{20}^2-\omega^2, \qquad M_1 = \delta_1\omega, \qquad M_2 = \delta_2\omega.$$

Exchanging 1 and 2 (or 10 and 20) in the subscripts in these equations yields the formula for bubble 2. Based on the definition, the transition frequencies are given by [13]

$$H_1 F + M_2 G = 0. \qquad (9)$$

This equation predicts the existence of up to three transition frequencies per bubble; [13] this number is greater than that of the natural frequencies given by Eq. (5). This result means that in a multibubble case, the phase shift can take place not only around the bubbles' natural frequencies but also around some other frequencies. Moreover, it was shown in Ref. [13] that the transition frequencies other than the natural frequencies do not cause the resonance response; namely, one of the three transition frequencies has physical meanings different from those of the natural frequency. The damping factors are determined by the sum of the viscous and radiation contributions as δ_j = 4µ/(ρ_0 R_{j0}^2) + ω^2 R_{j0}/c, since the DNS technique does not consider thermal conduction, [18] where the viscosity of the liquid is µ = 1.137 × 10^{-3} kg/(m s) and its sound speed is c = 1500 m/s. As a reference, in Fig.
1(b) we display the transition frequencies for δ_j → 0. As discussed previously, [13] the highest and the second highest transition frequencies of the larger bubble tend to vanish when the damping effect is sufficiently strong; in the present case, they disappear completely. The second highest and the lowest transition frequencies of the smaller bubble cross and vanish at a certain distance, and only one transition frequency remains for sufficiently large D. For D → ∞, the remaining transition frequencies converge to the partial natural frequencies of the corresponding bubble. The solid lines displayed in Fig. 1(b) denote the transition frequencies that correspond to the natural frequencies given by Eq. (5). We clarify here the physical meanings of the transition frequencies that do not accompany resonance, which have not yet been described in the literature. Let us consider the bubbles under the condition indicated by the dots (the origin of the arrows) shown in Fig. 1. Bubble 2 under this condition emits a strong sound (denoted below by p_2(D)) whose oscillation phase is (almost) out-of-phase with the external sound p_{ex}, because the driving frequency stays near, and slightly above, the natural frequency of this bubble. If p_2(D) is measured at a point sufficiently near bubble 2, its amplitude can be larger than that of p_{ex}, and hence the phase of the total sound pressure, p_{ex} + p_2(D), may be almost the same as that of p_2(D), i.e., almost out-of-phase with p_{ex}. As the driving frequency shifts toward a higher range along the arrows, the absolute value of p_2(D) decreases and, at a certain frequency, becomes lower than |p_{ex}|; the phase of the total sound pressure finally becomes (almost) in-phase with p_{ex}. This transition of the power balance of the two sounds results in the phase reversal of bubble 1 without accompanying resonance response.
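Both statements above — that Eq. (9) has three roots for the smaller bubble as δ_j → 0, and that |p_2(D)| drops below |p_ex| precisely at the non-resonant transition frequency — can be checked numerically. The sketch below is my own evaluation of the linear model, with the bubble parameters quoted in §3 (R10 = 5 µm, R20 = 9 µm, D = 20 µm) and assumed water constants; the steady-state amplitude ratio |p_2(D)|/|p_ex| = (R20 ω²/D)|H2/F| for δ_j → 0 is my derivation from Eqs. (2)-(4), not a formula from the paper:

```python
import math

rho0, P0, sigma, kappa = 998.0, 101325.0, 0.0728, 1.33
R10, R20, D = 5e-6, 9e-6, 20e-6

def natural(R0):
    # Partial (monopole) natural frequency, cf. eq. (1)
    return math.sqrt((3*kappa*P0 + (3*kappa - 1)*2*sigma/R0)/rho0)/R0

w10, w20 = natural(R10), natural(R20)

def F(w):                      # F of eqs. (6)-(9) with M1 = M2 = 0
    return (w10**2 - w**2)*(w20**2 - w**2) - (R10*R20/D**2)*w**4

def H1(w):
    return (w20**2 - w**2) + (R20/D)*w**2

def H2(w):
    return (w10**2 - w**2) + (R10/D)*w**2

# Transition frequencies of bubble 1 for delta_j -> 0: zeros of H1(w)*F(w)
ws = [(0.4 + 0.9*i/4000)*w10 for i in range(4001)]
vals = [H1(w)*F(w) for w in ws]
roots = [0.5*(ws[i] + ws[i + 1]) for i in range(4000) if vals[i]*vals[i + 1] < 0]
assert len(roots) == 3                       # three transition frequencies
assert roots[0] < w20 and roots[2] > w10     # natural freqs shift down / up
# Middle root is the non-resonant one, w20/sqrt(1 - R20/D) ~ 0.719 w10
assert abs(roots[1] - w20/math.sqrt(1 - R20/D)) < 0.01*w10

def ratio(w):
    """|p2(D)| / |p_ex| at bubble 1 in the undamped steady state (my derivation)."""
    return (R20/D)*w**2*abs(H2(w)/F(w))

assert ratio(0.70*w10) > 1.0 > ratio(0.74*w10)   # crosses unity near 0.719 w10
```

The middle root reproduces the value (ω1/ω10) = 0.719 quoted later in the paper, and the radiated-pressure ratio crosses unity at the same frequency, which is exactly the power-balance transition described above.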
2.3 The secondary Bjerknes force

The secondary Bjerknes force acting between two pulsating bubbles is represented by [1-3]

$$\mathbf{F} \propto \langle \dot V_1 \dot V_2 \rangle \frac{\mathbf{r}_2-\mathbf{r}_1}{|\mathbf{r}_2-\mathbf{r}_1|^3}, \qquad (10)$$

where V_j and r_j denote the volume and the position vector, respectively, of bubble j, and ⟨···⟩ denotes the time average. Using Eqs. (6)-(8), this equation can be rewritten as [1]

$$\mathbf{F} \propto K_1 K_2 \cos(\phi_1-\phi_2)\,\frac{\mathbf{r}_2-\mathbf{r}_1}{|\mathbf{r}_2-\mathbf{r}_1|^3}. \qquad (11)$$

The sign reversal of this force occurs only when the sign of cos(φ_1 − φ_2) (or of ⟨V̇_1V̇_2⟩) changes, because K_1 > 0 and K_2 > 0. Namely, the phase property of the bubbles plays an important role in the determination of the sign. Roughly speaking, the force is attractive when the bubbles pulsate in-phase with each other, while it is repulsive when they pulsate out-of-phase. In Ref. [14], it was shown theoretically that both sign reversals are due to the transition frequencies that do not correspond to the natural frequencies; that is, the sign reversals take place near (or, when δ_j → 0, at) those frequencies. The present result follows that theoretical prediction. The respective reversals are observed near the second highest transition frequency of bubble 1 and near the highest of bubble 2 for δ_j → 0. Meanwhile, we can observe φ_2 > π, which should not be observed in a single-bubble case. Ida [14] explained that such a large phase delay is realizable in a multibubble case, by the radiative interaction. We make here a remark regarding the estimation of the points at which the sign reversal of the force takes place. It was shown previously [14] that the transition points of the force are hardly changed by the damping effects, even when bubbles are small (R_{j0} ∼ 1 µm) or relatively large (R_{j0} ∼ 1 mm). The present result displayed in Fig. 2 follows that finding.
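As a quick check of Eq. (11), cos(φ1 − φ2) can be evaluated from the steady-state solution (6)-(8) over a range of driving frequencies. This is my own evaluation with the §3 parameters (R10 = 5 µm, R20 = 9 µm, D = 20 µm), assumed water constants, and the viscous-plus-radiation damping factors quoted in §2:

```python
import math

rho0, P0, sigma, kappa = 998.0, 101325.0, 0.0728, 1.33
mu, c = 1.137e-3, 1500.0
R10, R20, D = 5e-6, 9e-6, 20e-6

def natural(R0):
    return math.sqrt((3*kappa*P0 + (3*kappa - 1)*2*sigma/R0)/rho0)/R0

w10, w20 = natural(R10), natural(R20)

def cos_dphi(w):
    """cos(phi1 - phi2) from eqs. (6)-(8); positive = attraction."""
    d1 = 4*mu/(rho0*R10**2) + w**2*R10/c
    d2 = 4*mu/(rho0*R20**2) + w**2*R20/c
    L1, L2, M1, M2 = w10**2 - w**2, w20**2 - w**2, d1*w, d2*w
    F = L1*L2 - (R10*R20/D**2)*w**4 - M1*M2
    G = L1*M2 + L2*M1
    H1, H2 = L2 + (R20/D)*w**2, L1 + (R10/D)*w**2
    phi1 = math.atan2(H1*G - M2*F, H1*F + M2*G)   # common factor dropped
    phi2 = math.atan2(H2*G - M1*F, H2*F + M1*G)
    return math.cos(phi1 - phi2)

# Attraction below and above the two reversal points, repulsion in between
assert cos_dphi(0.60*w10) > 0
assert cos_dphi(0.90*w10) < 0
assert cos_dphi(1.30*w10) > 0

# Locate the reversals: near 0.72 w10 and 1.15 w10, cf. Fig. 2
ws = [(0.5 + 0.8*i/4000)*w10 for i in range(4001)]
flips = [0.5*(ws[i] + ws[i + 1])/w10 for i in range(4000)
         if cos_dphi(ws[i])*cos_dphi(ws[i + 1]) < 0]
assert any(0.68 < f < 0.76 for f in flips)
assert any(1.10 < f < 1.20 for f in flips)
```

Even with realistic damping, the reversals stay very close to the undamped transition frequencies 0.719ω10 and 1.155ω10, consistent with the remark above that the transition points are hardly changed by damping.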
These results may allow us to consider that the simple theoretical formulas for the transition frequencies not causing resonance, derived for δ_1 → 0 and δ_2 → 0, [13] can be a good approximation of the transition points even when δ_j ≠ 0.

3. Numerical results and discussion

In this section, the DNS technique [18,19] is employed to verify the theoretical results for the transition frequencies. The governing equations are the compressible Navier-Stokes equations

$$\frac{\partial \rho}{\partial t} + \mathbf{u}\cdot\nabla\rho = -\rho\,\nabla\cdot\mathbf{u},$$

$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = -\frac{\nabla p}{\rho} + \frac{1}{\rho}\left[2\nabla\cdot(\mu \mathbf{T}) - \frac{2}{3}\nabla(\mu\nabla\cdot\mathbf{u})\right] + \frac{\mathbf{F}_{st}}{\rho},$$

$$\frac{\partial p}{\partial t} + \mathbf{u}\cdot\nabla p = -\rho C_S^2\,\nabla\cdot\mathbf{u},$$

where ρ, u, p, T, µ, F_{st}, and C_S denote the density, the velocity vector, the pressure, the deformation tensor, the viscosity coefficient, the surface tension as a volume force, and the local sound speed, respectively. This system of equations is solved over the whole computational domain, divided by grids. The materials (the bubbles and the liquid surrounding them) moving on the computational grids are identified by a scalar function [19] obeying a pure advection equation whose characteristic velocity is u. The bubbles' radii and the initial center-to-center distance between them are set to the same values as used for Fig. 2, that is, R_{10} = 5 µm, R_{20} = 9 µm, and D(t = 0) = 20 µm. The content inside the bubbles is assumed to be an ideal gas with a specific heat ratio of 1.33, an equilibrium density of 1.23 kg/m^3, and a viscosity of 1.78 × 10^{-5} kg/(m s). The surrounding liquid is water, whose sound speed C_S is determined by the equation of state given at the end of this paper. As shown in Figs. 3 and 4 below, the smaller bubble exhibits two resonance frequencies: one is higher than ω_{10}, and the other is lower than ω_{20} (≈ 0.53ω_{10}). These respective resonances are obviously due to the highest and the lowest transition frequencies of bubble 1, both of which correspond to the natural frequencies. The same figure shows that the larger bubble may have a resonance frequency at ω ≈ ω_{10}/2.2. The sign of the interaction force changed twice in the frequency region considered.
In the region between ω = ω_{10}/0.8 and ω_{10}/0.9, being near to but above the higher resonance frequency of the smaller bubble discussed above, the attractive force turns into repulsion as ω decreases, and at ω ≈ ω_{10}/1.2 (≈ 0.83ω_{10}) the repulsive force turns back into attraction. (See also Fig. 5, which shows D(t) in the cases where the deviation in it is small.) It may be difficult to say, using only these numerical results, that the former reversal is not due to the higher natural frequency of the smaller bubble, because the reversal took place near it and the highest transition frequency of the larger bubble is close to it. Therefore, in the following we will focus our attention on the latter sign reversal, which occurred at ω between ω_{20} and ω_{10}. The latter reversal indicates that a kind of characteristic frequency should exist in the frequency region between the partial natural frequencies of the bubbles. It is evident that this characteristic frequency is not the resonance frequency of the larger bubble, which is, as already discussed, much lower. This result is in opposition to the assumption described by Doinikov and Zavtrak. [5,6] Also, the theory for the natural frequencies (Eq. (5)) cannot explain this reversal, because it predicts no natural frequency in the frequency region between ω_{20} and ω_{10}. This characteristic frequency is, arguably, the second highest transition frequency of the smaller bubble, as was predicted by Ida. [14] As was proved theoretically in Ref. [13], resonance response is not observed around this characteristic frequency. In order to confirm that this characteristic frequency is not of the larger bubble, we display in Fig. 6 the R_2-time and p_{ex}-time curves for the region around ω = ω_{10}/1.2. Comparing the DNS results with the linear coupled oscillator model (Fig. 7(a)), a noticeable quantitative discrepancy can be observed between these results, whereas they are in agreement in a qualitative sense. We attempt here to identify what caused this discrepancy, using a nonlinear model. Mettin et al.
[35] proposed a nonlinear coupled oscillator model for a double-bubble system,

$$\left(1-\frac{\dot R_1}{c}\right)R_1\ddot R_1 + \left(\frac{3}{2}-\frac{\dot R_1}{2c}\right)\dot R_1^2 = \frac{1}{\rho_0}\left(1+\frac{\dot R_1}{c}\right)p_{k1} + \frac{R_1}{\rho_0 c}\frac{dp_{k1}}{dt} - \frac{1}{D}\frac{d}{dt}\!\left(\dot R_2 R_2^2\right), \qquad (12)$$

$$\left(1-\frac{\dot R_2}{c}\right)R_2\ddot R_2 + \left(\frac{3}{2}-\frac{\dot R_2}{2c}\right)\dot R_2^2 = \frac{1}{\rho_0}\left(1+\frac{\dot R_2}{c}\right)p_{k2} + \frac{R_2}{\rho_0 c}\frac{dp_{k2}}{dt} - \frac{1}{D}\frac{d}{dt}\!\left(\dot R_1 R_1^2\right), \qquad (13)$$

with

$$p_{kj} = \left(P_0 + \frac{2\sigma}{R_{j0}}\right)\left(\frac{R_{j0}}{R_j}\right)^{3\kappa} - \frac{2\sigma}{R_j} - \frac{4\mu\dot R_j}{R_j} - P_0 - p,$$

where p denotes the driving sound pressure. Figure 7(c) shows the numerical results given using the nonlinear model with a very low driving pressure (P_a = 0.001P_0). These results are also in good agreement with the DNS results, proving that, in the present case, the nonlinearity in the pulsations is not dominant for the sign reversals. Furthermore, it can be proved that the bubbles' shape oscillation is also not dominant for the sign reversal. Figure 8 shows the bubble surfaces for ω = ω_{10}/1.2 at selected times. Only a small deformation of the bubbles can be found in this figure. (The numerical results provided previously in Ref. [18], given using the same DNS technique, reveal that the bubbles' sphericity is well maintained even for ω = ω_{10}, whereas a noticeable deformation is observed for ω ≈ ω_{20}.) These results allow us to consider that the transient is one of the dominant origins of the noticeable quantitative discrepancy found in Fig. 7(a). There are some other physical factors that may also be able to cause the quantitative discrepancy but are not taken into account in either the linear or the nonlinear coupled oscillator model. It is known, for example, that the translational motion of bubbles can alter the bubbles' pulsation. As has been proved by several researchers, [30,37,38] if the translational motion is taken into consideration in deriving a theoretical model, high-order terms appear in the equations of radial motion, in which not only the bubbles' radii but also their translational velocities are involved.
Also, in certain cases the viscosity of the surrounding liquid can alter the magnitude and sign of the interaction force, because acoustic streaming is induced around the bubbles. [39] Investigating the influences of those factors on the quantitative discrepancy using a higher-order model would be an interesting and important subject.

4. Concluding remarks

In summary, we have verified the recent theoretical results regarding the transition frequencies of two acoustically interacting bubbles [13] and the sign reversal of the secondary Bjerknes force, [14] using a DNS technique. [18,19] The present numerical results, given by the DNS technique, support those theoretical results at least in a qualitative sense. The most important point validated by DNS is that at least the sign reversal occurring when the driving frequency stays between the bubbles' partial natural frequencies is obviously not due to the natural (or the resonance) frequencies of the double-bubble system. This conclusion is in opposition to the previous explanation described by Doinikov and Zavtrak, [5,6] but is consistent with the most recent interpretation by Ida, [14] described based on analyses of the transition frequencies, [13,15] thus validating the assertion that the transition frequencies not corresponding to the natural frequencies exist and that the notion "transition frequency" is useful for understanding the sign reversal of the force.

Acknowledgment

This work was supported by the Ministry of Education, Culture, Sports, Science, and Technology of Japan (Monbu-Kagaku-Sho) under an IT research program "Frontier Simulation Software for Industrial Science."

In the DNS technique, the acoustic terms are solved by the Combined, Unified Procedure (CUP), an implicit finite difference method for all-Mach-number flows, [20,21] and the surface-tension term by the Continuum Surface Force (CSF) model, a finite difference solver that treats surface tension as a volume force.
[23] Efficient and accurate time integration of the compressible Navier-Stokes equations under a low-Mach-number condition is achieved by the multi-time-step integration technique, [18] which solves the different-nature terms in these equations with different time steps. Further details of this DNS technique can be found in Refs. [18,19]. Employing this DNS technique, in the present study we perform numerical experiments involving two acoustically coupled bubbles in order to investigate the recent theories by observing the bubbles' pulsation amplitudes and the directions of their translational motion. In particular, we focus our attention on the existence of the transition frequencies that do not cause the resonance response. Equation (5) predicts up to two natural frequencies per bubble; exchanging 10 and 20 in the subscripts in this equation yields the same equation, namely, when δ_j ≈ 0, both bubbles have the same natural frequencies. The higher and the lower natural frequencies reveal an upward and a downward shift, respectively, as D decreases. [12,13]

Fig. 1. Transition frequencies as functions of the distance between the bubbles. The lower figure shows the transition frequencies in the case where the damping effect is neglected. The dashed lines denote the transition frequencies that do not cause resonance. The arrows are used for a discussion described in the text.

Figure 1(a) shows the transition frequencies of bubbles 1 and 2, ω_1 and ω_2, for R_{10} = 5 µm and R_{20} = 9 µm, as functions of D/(R_{10} + R_{20}). These bubble radii are chosen to be small enough so that the sphericity of the bubbles is maintained sufficiently. The parameters are those specified in the text.

Fig. 2. Sign of the secondary Bjerknes force [cos(φ_1 − φ_2)] and phase differences [φ_1 and φ_2], determined theoretically, as functions of the driving angular frequency. A positive value of cos(φ_1 − φ_2) indicates attraction, while a negative one indicates repulsion. The dashed lines denote the results for δ_j ≈ 0.

The solid lines displayed in Fig.
2 denote cos(φ_1 − φ_2), φ_1, and φ_2 as functions of ω/ω_10, and the dashed lines denote those for δ_j ≈ 0. The physical parameters are the same as those used for Fig. 1, except for the separation distance, which is fixed to D = 20 µm [D/(R_10 + R_20) ≈ 1.43]. In this figure, we can observe sign reversals of cos(φ_1 − φ_2) at ω/ω_10 ≈ 0.72 and ω/ω_10 ≈ 1.15. The approximate formulas for the transition frequencies derived for δ_1 → 0 and δ_2 → 0, 13 which involve the factor 1 − R_10/D, can be a good approximation of the transition points even in the cases of δ_j ≠ 0. These formulas yield (ω_1/ω_10) = 0.719 and (ω_2/ω_10) = 1.155, which are consistent with the result shown in Fig. 2. The sound speed in the liquid is given by C_S = √[7(p + 3172P_0)/ρ], with p and ρ being the local pressure and density, respectively. The other parameters are set to the same values as those used previously. The axisymmetric coordinate (r, z) is selected for the computational domain, and the mass centers of the bubbles are located on the central axis of the coordinate. The grid widths are set to constants, ∆r = ∆z = 0.25 µm, and the numbers of grid points in the r and z coordinates are 100 and 320, respectively. The sound pressure, applied as the boundary condition to the pressure, is assumed to be of the form p_ex = P_a sin ωt, where the amplitude P_a is fixed to 0.3P_0 and the driving frequency is selected from the frequency range around the bubbles' partial natural frequencies. (For the sound amplitude assumed, nonlinear effects may not be completely negligible, especially near the natural frequencies. However, we unfortunately cannot use a lower sound pressure because of a numerical problem called "parasitic currents,"

Fig. 3. DNS results: Bubble radii [(a)-(j)] and corresponding positions [(a')-(j')] as functions of time for different driving frequencies. The lower lines in (a')-(j') denote the position of the smaller bubble.
The bubbles coalesce at the time where the number of lines becomes one; see panels (i), (i'), (j), and (j').

...which rise to the surface when a small bubble or drop is in a (nearly) steady state where the amplitudes of volume and shape oscillations are very low. Effects of nonlinearity are briefly discussed later.) The boundary condition for the velocity is free.

Figure 3 displays the bubbles' (mean) radii and mass centers as functions of time for different ω, and Fig. 4 displays the sign of the force (a), determined by observing the direction of the bubbles' translation, and the bubbles' pulsation amplitudes (b and c). From these figures, we know that the smaller bubble has two resonance frequencies, one at ω ≈ ω_10/0.9 (≈ 1.1ω_10) and the other at ω ≈ ω_10/2.2 (≈ 0.45ω_10), though the former seems to decrease with time because of the repulsion of the bubbles (see Fig. 1, which reveals that the highest transition frequency of bubble 1, causing resonance, decreases as D increases).

Fig. 4. DNS results: (a) Sign of the force, (b) pulsation amplitude of bubble 1, and (c) that of bubble 2, as functions of the driving frequency. The amplitudes were measured for t < 5 µs (•), and for t < 10 µs but until coalescence has been observed (•). The result for ω = ω_10/0.85, not shown in Fig. 3, is presented in (a); see also Fig. 5.

Fig. 5. Time-dependent distances between the mass centers of the bubbles in the cases where the deviation in the distance is small (for ω = ω_10/0.8, ω_10/0.85, ω_10/1.2, and ω_10/1.3). Note that the result for ω = ω_10/0.85, not shown in Fig. 3, is added. [The low-amplitude, high-frequency oscillations observed in these curves may be due to a numerical error originating in the calculation of such a small deviation (comparable to the grid width) on a discrete computational domain.]

Fig. 6 presents p_ex-time curves for the area around ω = ω_10/1.2.
This figure shows clearly that the pulsation phase of the larger bubble does not reverse in this frequency region; the bubble maintains its out-of-phase pulsation with the external sound (i.e., the bubble's radius is large when the sound amplitude is positive), although other modes, which may result from the transient, appear.

Fig. 6. Radius of the larger bubble (solid lines) and sound pressure as functions of time, for around ω = ω_10/1.2.

Here, we discuss how this transient and a nonlinear effect act on the quantitative nature of the sign distribution. In Fig. 7(a), we show the sign of the force as a function of ω, determined by the linear theory (the solid line) and by the DNS (the circles). A positive value denotes attraction, while a negative value denotes repulsion. A noticeable quantitative discrepancy ...ex for j = 1 or 2, based on the Keller-Miksis model, 36 taking into account the viscosity and compressibility of the surrounding liquid with first-order accuracy. In this system of equations, the last terms of eqs. (12) and (13) represent the radiative interaction between the bubbles. Using this model, R_j and Ṙ_j are calculated by the fourth-order Runge-Kutta method, and subsequently the time average in eq. (10) is performed to determine the sign of the force. Although the quantitative accuracy of this model may not be guaranteed for a small D, 35 a rough estimation of the influences of the transient and nonlinearity might be achieved. The physical parameters are the same as those used for Fig. 1, and D is fixed to 20 µm; that is, the translational motion is neglected.

Fig. 7. Comparison of the DNS results with (a) the linear harmonic solution, (b) the nonlinear numerical results for P_a = 0.3P_0, and (c) those for P_a = 0.001P_0. The circles denote the DNS results, where a positive value denotes attraction while a negative value denotes repulsion.

Fig. 8. Bubble surfaces for ω = ω_10/1.2 at selected times, given by the DNS technique.
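The sign diagnostic used for the nonlinear comparison, the time-averaged product of the bubbles' volume velocities, is easy to evaluate from any pair of sampled radius-time curves. A minimal sketch (our own hypothetical helper, not the authors' code):

```python
import numpy as np

def g_sign(t, R1, R2):
    """sgn of the time-averaged product of volume velocities V1' V2'.

    V_j = (4/3) * pi * R_j^3. A positive result corresponds to
    time-averaged attraction, a negative one to repulsion, mirroring
    the sign convention used for the force diagrams.
    """
    V1 = 4.0 / 3.0 * np.pi * R1**3
    V2 = 4.0 / 3.0 * np.pi * R2**3
    dV1 = np.gradient(V1, t)                 # volume velocity of bubble 1
    dV2 = np.gradient(V2, t)                 # volume velocity of bubble 2
    return np.sign(np.mean(dV1 * dV2))       # the 1/T factor does not change the sign
```

Two bubbles pulsating in phase give a positive (attractive) sign, and anti-phase pulsation gives a negative (repulsive) one, which is the classical secondary-Bjerknes picture this diagnostic encodes.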
Similar figures for ω = ω_10 and ω = ω_10/1.8 can be found in our previous paper. 18

The solid and the dashed lines displayed in Fig. 7(b) show

G(T) ≡ sgn[(1/T) ∫₀ᵀ V̇₁ V̇₂ dt]

for T = 5 µs and 10 µs, respectively, given using the nonlinear model, where sgn[f] = 1 for f > 0 and sgn[f] = −1 otherwise. These numerical results are in good agreement with the DNS results (the circles).

References

1) L. A. Crum: J. Acoust. Soc. Am. 57 (1975) 1363.
2) A. Prosperetti: Ultrasonics 22 (1984) 115.
3) W. Lauterborn, T. Kurz, R. Mettin and C. D. Ohl: Adv. Chem. Phys. 110 (1999) 295.
4) E. A. Zabolotskaya: Sov. Phys. Acoust. 30 (1984) 365.
5) A. A. Doinikov and S. T. Zavtrak: Phys. Fluids 7 (1995) 1923.
6) A. A. Doinikov and S. T. Zavtrak: J. Acoust. Soc. Am. 99 (1996) 3849.
7) Y. A. Kobelev, L. A. Ostrovskii and A. M. Sutin: JETP Lett. 30 (1979) 395.
8) P. L. Marston, E. H. Trinh, J. Depew and T. J. Asaki, in: Bubble Dynamics and Interface Phenomena, edited by J. R. Blake et al. (Kluwer Academic, Dordrecht, 1994), pp. 343-353.
9) P. C. Duineveld: J. Acoust. Soc. Am. 99 (1996) 622.
10) P. A. Dayton, K. E. Morgan, A. L. Klibanov, G. Brandenburger, K. R. Nightingale and K. W. Ferrara: IEEE Trans. Ultrason. Ferroelect. & Freq. Control 44 (1997) 1264.
11) P. Dayton, A. Klibanov, G. Brandenburger and K. Ferrara: Ultrasound Med. Biol. 25 (1999) 1195.
12) A. Shima: Trans. ASME J. Basic Eng. 93 (1971) 426.
13) M. Ida: Phys. Lett. A 297 (2002) 210.
14) M. Ida: Phys. Rev. E 67 (2003) 056617.
15) M. Ida: J. Phys. Soc. Jpn. 71 (2002) 1214.
16) T. J. Matula, S. M. Cordry, R. A. Roy and L. A. Crum: J. Acoust. Soc. Am. 102 (1997) 1522.
17) I. Akhatov, R. Mettin, C. D. Ohl, U. Parlitz and W. Lauterborn: Phys. Rev. E 55 (1997) 3747.
18) M. Ida: Comput. Phys. Commun. 150 (2003) 300; Erratum in 150 (2003) 323.
19) M. Ida: Comput. Phys. Commun. 132 (2000) 44.
20) T. Yabe and P. Y. Wang: J. Phys. Soc. Jpn. 60 (1991) 2105.
21) S. Ito: 43rd Nat. Cong. of Theor. & Appl. Mech. (1994) p. 311 [in Japanese].
22) M. Ida and Y. Yamakoshi: Jpn. J. Appl. Phys. 40 (2001) 3846.
23) J. U. Brackbill, D. B. Kothe and C. Zemach: J. Comput. Phys. 100 (1992) 335.
24) T. G. Leighton: The Acoustic Bubble (Academic Press, London, 1994), p. 291.
25) C. Devin: J. Acoust. Soc. Am. 31 (1959) 1654.
26) A. Prosperetti: Ultrasonics 22 (1984) 69.
27) S. Hilgenfeldt, D. Lohse and M. Zomack: Eur. Phys. J. B 4 (1998) 247.
28) C. Feuillade: J. Acoust. Soc. Am. 98 (1995) 1178.
29) Z. Ye and C. Feuillade: J. Acoust. Soc. Am. 102 (1997) 798.
30) A. Harkin, T. J. Kaper and A. Nadim: J. Fluid Mech. 445 (2001) 377.
31) H. Takahira, S. Fujikawa and T. Akamatsu: JSME Int. J. Ser. II 32 (1989) 163.
32) P.-Y. Hsiao, M. Devaud and J.-C. Bacri: Eur. Phys. J. E 4 (2001) 5.
33) B. Lafaurie, C. Nardone, R. Scardovelli, S. Zaleski and G. Zanetti: J. Comput. Phys. 113 (1994) 134.
34) S. Popinet and S. Zaleski: Int. J. Numer. Methods Fluids 30 (1999) 775.
35) R. Mettin, I. Akhatov, U. Parlitz, C. D. Ohl and W. Lauterborn: Phys. Rev. E 56 (1997) 2924.
36) J. B. Keller and M. Miksis: J. Acoust. Soc. Am. 68 (1980) 628.
37) H. Oguz and A. Prosperetti: J. Fluid Mech. 218 (1990) 143.
38) H. Takahira, T. Akamatsu and S. Fujikawa: JSME Int. J. Ser. B 37 (1994) 297.
Retrieve in Style: Unsupervised Facial Feature Transfer and Retrieval

Min Jin Chong (University of Illinois at Urbana-Champaign), Wen-Sheng Chu (Google Research), Abhishek Kumar (Google Research), David Forsyth (University of Illinois at Urbana-Champaign)

Abstract: We present Retrieve in Style (RIS), an unsupervised framework for facial feature transfer and retrieval on real images. Recent work shows capabilities of transferring local facial features by capitalizing on the disentanglement property of the StyleGAN latent space. RIS improves existing art on the following: 1) Introducing more effective feature disentanglement to allow for challenging transfers (i.e., hair, pose) that were not shown possible in SoTA methods. 2) Eliminating the need for per-image hyperparameter tuning, and for computing a catalog over a large batch of images. 3) Enabling fine-grained face retrieval using disentangled facial features (e.g., eyes). To our best knowledge, this is the first work to retrieve face images at this fine level. 4) Demonstrating robust, natural editing on real images. Our qualitative and quantitative analyses show RIS achieves both high-fidelity feature transfers and accurate fine-grained retrievals on real images. We also discuss the responsible applications of RIS. Our code is available at https://github.com/mchong6/RetrieveInStyle.

DOI: 10.1109/iccv48922.2021.00386
arXiv: 2107.06256
PDF: https://arxiv.org/pdf/2107.06256v3.pdf
Figure 1: We propose an unsupervised method to transfer local facial appearance from real reference images to a real source image, e.g., (a) eyes, nose, and mouth. Compared to the state-of-the-art [10], our method enables photo-realistic transfers for (b) hair and (c) pose, and can be naturally extended for (d) semantic retrieval according to different facial features.
Introduction

Recent advancements in Generative Adversarial Networks (GANs) [6,18,19] have shown capabilities to generate realistic high-resolution images, particularly for faces. Under unconditional settings, it is often hard to interpret or control the outputs of GANs. Conditional GANs are more naturally amenable to semantic editing. However, the degree of meaningful control over the output images is largely dependent on how detailed the annotations are. This presents a challenge for fine-grained face editing, as it is often difficult or impossible to annotate datasets with the degree of detail needed for fine-grained editing. Existing works on face editing typically leverage additional information to guide conditional generation, such as manual labels [3,8,21,42,44,45], segmentation masks [12,22], attribute classifiers [14], rendering models [20,38], etc. However, the additional information requires extra computation and is not always available in practice. In addition, fine-grained facial features (e.g., a distinctive shape of eyes) are difficult to describe as labels or features. As an alternative, unsupervised discovery of latent directions in a pretrained GAN [13,31,39] allows for finding meaningful latent representations in a computationally efficient way. However, such approaches are less effective for fine-grained editing compared to supervised approaches. Recently, Editing in Style (EIS) [10] proposed a mostly unsupervised method for facial feature transfer. While EIS allows semantic editing of spatially coherent facial features (e.g., eyes, nose and mouth), it requires computing a semantic catalog over the whole dataset and separate hyperparameter tuning for each image. Such requirements make EIS non-scalable to large datasets as commonly encountered in retrieval domains.
In addition, it remains challenging for EIS to control facial features that are difficult to describe as a spatial map, such as hair and head pose. More importantly, EIS works only on synthetic images and remains untested on how real images could be manipulated. In this study, we propose Retrieve in Style (RIS), a simple and efficient unsupervised framework that tackles both fine-grained facial feature transfer and retrieval. Fig. 1 illustrates the capabilities offered by RIS. RIS improves EIS in several aspects. First, we discover the "submembership" property in the style space, showing that style channels corresponding to a particular feature (e.g., eyes) are different for every image and thus must be computed individually instead of over the entire dataset. As the discovered channels are image-specific, RIS achieves more precise face editing for not only spatially coherent facial features (e.g., eyes, nose, mouth) but also challenging ones (i.e., hair, pose). Second, with the discovered "submembership", we show it is possible to eliminate EIS's requirements on a per-image semantic catalog and per-image hyperparameter tuning, and offer better scalability to larger problems. Third, the image-specific representations naturally extend RIS to fine-grained facial feature retrieval, which was not shown possible in EIS. Lastly, we demonstrate that RIS offers editing and retrieval of real images when combined with GAN inversion methods, while EIS worked only with synthetic images. Although RIS is general and can be applied to a wide range of datasets, this study focuses on faces, as there are established conventions on facial parts and given their relevance in face retrieval applications (e.g., [4,11,23,25]). Our contributions are:

1. RIS improves over EIS based on our finding of "submembership", obtaining better controllability over facial features that are spatially coherent (eyes, nose, mouth) and incoherent (hair, pose), while requiring no hyperparameter tuning.

2. We obtain feature-specific representations (e.g., eyes, nose, mouth, hair), which enable face retrieval by fine-grained features that are difficult to describe or annotate even for humans. To our best knowledge, this is the first work to address the fine-grained retrieval problem without supervision.

3. We show that RIS generalizes to GAN-inverted images, allowing transfer and retrieval on real images that was not shown possible in earlier studies. Results on CelebA-HQ validate that RIS achieves high-quality retrieval on large, real-world datasets.

Related Work

StyleGAN: StyleGAN1 [18] and StyleGAN2 [19] achieve state-of-the-art unconditional image generation. StyleGAN's unique architecture is inspired by the style transfer work of Huang et al. [15]. Contrary to previous GAN architectures that map a random noise vector z to an image, StyleGAN maps z to w ∈ W via a non-linear mapping network. Feature maps in the generator are then controlled by w in the AdaIN module [15]. The W+ latent space of StyleGAN has been shown to exhibit disentangled feature representations [1,2,19,31]. Xu et al. [43] further showed that the style coefficients σ, where σ = FC(w) with FC being an affine layer, demonstrate more disentangled visual features compared to w. The style coefficients σ directly scale the layer-wise activations in the generator.

Latent space image editing: Radford et al. [28] show that the latent space of GANs is semantically meaningful: latent directions can be associated with semantics (e.g., pose, smile), with directions obtained by either supervised (e.g., a pretrained attribute classifier, InterFaceGAN [31]) or unsupervised means (e.g., zooms and shifts, Jahanian et al. [16]). Voynov and Babenko [39] find directions corresponding to changes that can be observed by a classifier. GANSpace [13] uses PCA to identify meaningful latent directions. Shen and Zhou [32] propose a closed-form factorization to obtain directions.
Feature activation image editing: Local edits can follow from manipulating GAN feature activations. GAN Dissection [5] uses a segmentation model to correspond internal GAN activations to semantic concepts, allowing them to add or remove objects. Feature Blending [34] recursively blends feature activations between source and reference images to allow local semantics transfer. These methods require a pretrained segmentation model or user-provided masks. One might also obtain edits as image-to-image translations. AttGAN [14] allows multi-attribute facial editing via a conditional GAN setup. StarGAN [8] proposes a single-generator, multi-domain approach that uses conditional generation to achieve facial editing. GANimation [27] conditions the generator with Action Unit annotations to allow smooth facial expression editing. MaskGAN [22] uses segmentation masks to enable interactive spatial image editing.

Face retrieval: Current facial retrieval systems generally match faces based on identities and lack the granularity to match on a facial-feature level. Non-deep-learning-based retrieval systems such as Photobook [26] and CAFI-IRIS [40] use features such as Eigenfaces [37], textual descriptions, and/or facial landmarks; but we expect learned features to have advantages. FaceNet [30] learns embeddings via a triplet loss, where the Euclidean distances between embeddings correspond to facial similarity by training with identities. Other works [33,35] formulate the problem as a classification task between identities. But these methods perform retrieval at the level of identity and, by design, are invariant to details such as expressions and hairstyles. In contrast, RIS aims to improve the granularity of face retrieval. Instead of asking to "retrieve faces with similar features", we ask to "retrieve faces with similar eyes, nose, mouth, etc.".

GAN Inversion: GAN inversion encodes a real image to the latent space of a GAN.
It is commonly done via gradient descent in the latent space [2,19,41], which leads to accurate reconstruction at the expense of scalability. An encoder-based approach [29,43,46] instead allows scalable GAN inversion.

Retrieve in Style

In this section, we describe the proposed Retrieve in Style (RIS) for both facial feature transfer and retrieval. We first review Editing in Style (EIS) [10], which our method is built upon. Then, we propose improvements to EIS for a more controllable and intuitive transfer, and show that our method can be naturally extended for fine-grained face retrieval, which was not possible in EIS.

Editing in Style

Unlike methods that manipulate the latent space via vector arithmetic [13,16,31,32,39], EIS formulates the semantic editing problem as copying style coefficients σ of StyleGAN [18] from a reference image to a source image, i.e., the output image carries facial features from the reference image while preserving the remaining features from the source image. The authors show that semantic local transfer is possible on images generated by a pretrained StyleGAN with minimal supervision.

One key insight of EIS is that spatial feature activations of a StyleGAN generator can be grouped into clusters that correspond to semantically meaningful concepts such as eyes, nose, mouth, etc. Specifically, let $A \in \mathbb{R}^{N \times C \times H \times W}$ be the activation tensor at a particular layer of StyleGAN, where N is the number of images, C the number of channels, H the height, and W the width. Spherical K-way k-means [7] is applied spatially over A, i.e., clustering over N × H × W vectors of size C. Each spatial location of A is associated with cluster memberships $U \in \{0,1\}^{N \times K \times H \times W}$, which are then used to compute a contribution score

$$M_{k,c} = \frac{1}{NHW} \sum_{n,h,w} A^2_{n,c,h,w}\, U_{n,k,h,w}. \qquad (1)$$

Intuitively, $M_{k,c}$ tells how much the c-th channel of the style coefficients $\sigma \in \mathbb{R}^{C}$ contributes to the generation of facial feature k.
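Eq. (1) is a masked, squared average over the activation tensor and can be written as a single einsum. The sketch below is our own illustration with toy shapes (the function name and variables are ours, not from the released code):

```python
import numpy as np

def contribution_scores(A, U):
    """Eq. (1): M[k, c] = (1/(N*H*W)) * sum_{n,h,w} A[n,c,h,w]^2 * U[n,k,h,w].

    A: activations of shape (N, C, H, W);
    U: one-hot cluster memberships of shape (N, K, H, W).
    Returns the contribution scores M with shape (K, C).
    """
    N, C, H, W = A.shape
    # sum squared activations over all spatial locations assigned to cluster k
    return np.einsum('nchw,nkhw->kc', A**2, U.astype(A.dtype)) / (N * H * W)
```

With a single image and two clusters covering disjoint spatial locations, each row of M simply averages the squared activations inside that cluster's region, which is the sense in which M attributes a style channel to a facial feature.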
Note that σ directly scales the activations A in the modulation module: the larger the activations, the more k is affected by the channel c. Transferring a facial feature k across two images is then performed via interpolation between the style coefficients $\sigma^S, \sigma^R$ of the source and the reference images. The style coefficient of the edited image, $\sigma^G_k$, can be obtained by rewriting the style interpolation in Eq. (3) of [10]:

$$\sigma^G_k = (1 - q_k)\,\sigma^S + q_k\,\sigma^R, \qquad (2)$$

where $q_k \in [0,1]^C$ is the interpolation vector for a given facial feature k. EIS finds $q_k$ using a greedy optimization derived from $M_{k,c}$ and manual hyperparameter tuning to determine which channels to ignore. Such hyperparameters can be sensitive to different reference images and lead to suboptimal transfers, as shown in Sec. 4. In addition, $M_{k,c}$ is computed over N images and is fixed for all feature transfers. We argue in Sec. 3.2 that having a fixed $M_{k,c}$ may not be ideal for transfer, as not all images share the same channels to describe the same facial feature.

Improving EIS for Facial Feature Transfer

Submemberships: EIS assumes that the channels that make a high contribution for a particular feature (say, eyes) are the same for each image. So to compute $M_k$ in Eq. (1), EIS averages the scores over a large collection of images of size N. We hypothesize that the high-contribution channels may vary from image to image. This means averaging over N images can lose details specific to the source or reference. We visualize the presence of this effect in Fig. 2: performing spherical k-means clustering over the per-image $M_{\mathrm{hair}}$ (N = 1) of images in a dataset yields semantically meaningful clusters. Images in each row belong to the same cluster. The hairstyles within the same row are similar, while hairstyles across rows are distinctively different.
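The per-pair recipe that this observation motivates (pairwise contribution scores taken as a max over the two images, a temperature softmax across features, then a masked step from the source style toward the reference style, formalized in Eqs. (3), (4) and (6) below) can be sketched end to end. Shapes and names here are our own toy assumptions:

```python
import numpy as np

def per_image_scores(A, U):
    # sum_{h,w} A[c,h,w]^2 * U[k,h,w] for a single image -> (K, C)
    return np.einsum('chw,khw->kc', A**2, U.astype(A.dtype))

def transfer(sigma_s, sigma_r, A_s, U_s, A_r, U_r, k, tau=0.1, alpha=1.3):
    """Sketch of the RIS edit: pairwise scores (max over the two images),
    softmax over features with temperature tau, then
    sigma_G = sigma_S + alpha * q_k * (sigma_R - sigma_S)."""
    M = np.maximum(per_image_scores(A_s, U_s), per_image_scores(A_r, U_r))
    M = M - M.max(axis=0, keepdims=True)      # shift for numerical stability
    q = np.exp(M / tau) / np.exp(M / tau).sum(axis=0, keepdims=True)
    return sigma_s + alpha * q[k] * (sigma_r - sigma_s)
```

With a small temperature, q approaches a hard per-channel assignment: channels whose activity sits entirely inside feature k's region are copied from the reference, and all other channels stay at the source values.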
We further analyze the top active channels (each channel corresponds to a dimension of $M_k$) for each cluster, and observe that each cluster has its own set of top active channels that are unique to it. Please refer to the supplementary materials for more detailed analyses. This validates our hypothesis that the high-contribution channels for a semantic feature are not the same across images. That is, the same feature k of different images is controlled by different groups of channels. We term these groups "submemberships", which is a crucial motivation for this work. With "submembership" in mind, instead of computing $M_{k,c}$ over a large batch of N images, we show that the responsible channels are more accurately computed over only the source and reference images, i.e., N = 2. Specifically,

$$M_{k,c} = \max\!\left(\sum_{h,w} A[s]^2_{c,h,w}\, U[s]_{k,h,w},\ \sum_{h,w} A[r]^2_{c,h,w}\, U[r]_{k,h,w}\right), \qquad (3)$$

where s and r indicate the particular source and reference images of interest, respectively. Intuitively, to transfer from a reference to a source image, we are interested in channels that are important to the source, the reference, or both.

Obtaining the interpolation vector: Instead of obtaining the interpolation vector $q_k$ from the greedy optimization process (as in EIS), which is dependent on per-image hyperparameters ρ and ϵ, we assume each channel of the style coefficient σ corresponds to one facial feature. This follows from the disentangled style space of StyleGAN and, in practice, works well. Under this assumption, we obtain a soft class assignment for each style coefficient channel with a softmax over all classes (rows of M):

$$q = \underset{k}{\mathrm{Softmax}}\!\left(\frac{M}{\tau}\right), \qquad (4)$$

where $M \in [0,1]^{K \times C}$ is the stacked contribution score of all facial features, τ is the temperature, and $q \in [0,1]^{K \times C}$ is the interpolation vector. The interpolation vector for a particular feature k, $q_k$, can be indexed from the corresponding row of q. $q_k$ can be thought of as the mask for k that allows interpolation between $\sigma^S$ and $\sigma^R$.

Pose transfer: Karras et al. [19] have shown that the first few layers of StyleGAN2 capture high-level features such as pose. In Fig. 3, we show that copying the style coefficients of the first 4 layers of StyleGAN2 (which correspond to the first 2048 style coefficient channels) transfers mostly pose and hair information from the reference to the source image, leaving other features like eyes and mouth untouched. By assuming that the first 4 layers only contain pose and hair information, we simply derive

$$q_{\mathrm{pose}} = 1 - q_{\mathrm{hair}} \qquad (5)$$

for only the first 4 layers, with the rest zeroed out. Similarly, for all facial features other than hair, the first 4 layers are zeroed out to prevent pose changes. As shown in Fig. 3, $q_{\mathrm{pose}}$ captures pose information without affecting hair. One significant advantage of our pose transfer is that it requires no labels or manual tuning. For example, GANSpace [13] requires manually choosing layer subsets; AttGAN [14] and InterFaceGAN [31] require attribute labels; StyleRig [36] requires a 3D face model. Fig. 4 illustrates our full capability of facial feature transfer.

Latent direction: Unlike EIS, which limits facial feature transfer to style interpolation as in Eq. (3) of [10], we formulate the problem as traversing along a latent direction, based on work showing StyleGAN's latent-space vector arithmetic property [28]. We thus revise the interpolation to

$$\sigma^G_k = \sigma^S + \alpha\, q_k\,(\sigma^R - \sigma^S), \qquad (6)$$

where the latent direction is $n = q_k(\sigma^R - \sigma^S)$ and the scalar step size is α. If we restrict α ∈ [0, 1], we perform a style interpolation. Under the property of vector arithmetic, we can instead use α ∈ ℝ, which allows style extrapolation. We show in Fig. 5 that scaling α allows an increase or decrease in the particular facial property. For example, we are able to do smooth pose interpolation.

Facial Feature Retrieval

This section shows that the style representation in Eq. (6) can be adapted to fine-grained facial feature retrieval, which is defined as follows.
Given a query image $I^Q$ and a retrieval dataset $\mathcal{X}$, we aim to retrieve the top-K closest images $T_k \subset \mathcal{X}$ with respect to a facial feature (e.g., eyes). As described in the previous section, RIS identifies the style channels that mediate the appearance of facial features for particular images. This suggests the style channels can be used to retrieve faces with appearance similar to the facial features in a query face. Face retrieval is usually done by matching on an identity embedding [30,33,35]. However, fine-grained facial feature retrieval is relatively unexplored, as it is difficult to collect and annotate training data with fine granularity (e.g., the shape of the eyes or nose). For each facial feature k, we have $q_k \in [0,1]^{1 \times C}$ to encode, for a particular image, how much each channel contributes to that feature. Since $q_k$ can be considered a mask, we construct a feature-specific representation:

$$v^Q_k = q^Q_k\, \sigma^Q. \qquad (7)$$

Feature retrieval can then be performed by matching $v_k$, as two images with similar $v_k$ suggest a lookalike feature k. We compute the representations $v^R_k = q^R_k\, \sigma^R$, where $\sigma^R \in \Sigma$ and Σ are the style coefficients for the images in $\mathcal{X}$. We then define the distance between the facial features of two style coefficients/face images as

$$\mathrm{Distance}_k(I^Q, I^R) = d(v^Q_k, v^R_k), \qquad (8)$$

where d is a distance metric (cosine distance in this study). We then rank the distances for nearest neighbor search for facial feature k. Intuitively, if there is an $M_k$ and, consequently, a $q_k$ mismatch between two images, their distance will be large. Since Fig. 2 shows that similar features have similar $M_k$, and vice versa, it follows that smaller distances will reflect more similar features. We show this is true empirically and that RIS works as expected in Fig. 7. Additionally, we observe better results if we normalize $\sigma^Q$ and $\sigma^R$ using the layer-wise mean and standard deviation from Σ.
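Eqs. (7) and (8) amount to cosine-distance ranking of masked style vectors. A minimal sketch (names are ours; the masks and style coefficients would come from Eq. (4) and the inverted dataset, and the layer-wise normalization mentioned above is omitted for brevity):

```python
import numpy as np

def retrieve(sigma_q, q_q, styles, q_masks, top_k=5):
    """Rank dataset images by Eq. (8) for one facial feature.

    sigma_q: query style coefficients, shape (C,);
    q_q:     query mask for the feature, shape (C,);
    styles:  dataset style coefficients, shape (N, C);
    q_masks: per-image masks for the same feature, shape (N, C).
    Returns indices of the top_k nearest images by cosine distance
    between the masked embeddings v = q * sigma (Eq. (7)).
    """
    v_q = q_q * sigma_q                                  # Eq. (7), query side
    V = q_masks * styles                                 # Eq. (7), dataset side
    cos = (V @ v_q) / (np.linalg.norm(V, axis=1) * np.linalg.norm(v_q) + 1e-12)
    return np.argsort(1.0 - cos)[:top_k]                 # smallest distance first
```

An image whose masked style vector points in the same direction as the query's is ranked first, regardless of how the unmasked channels differ, which is exactly the feature-level (rather than identity-level) matching the section describes.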
Comparison between SoTA EIS [10] and RIS (ours): Both EIS and RIS perform unsupervised local face editing by attributing transfers to reference images; they differ in how they accomplish it. (1) EIS computes the contribution score M by averaging over a batch of N images. Based on the finding of submemberships in M, RIS uses N = 2, which avoids manual per-image hyperparameter tuning and thus allows a more scalable and intuitive transfer. As a result, RIS yields more precise transfer of eyes, nose, and mouth, and enables transferring novel features such as hair and pose that were not shown possible in EIS. (2) RIS redefines M as an image-specific representation, which allows for unsupervised fine-grained face feature retrieval. EIS assumes an averaged representation of M, which our experiments show to be less effective for feature retrieval.

Experiments

While other work based on StyleGAN, including EIS [10, 13], focuses on manipulating generated images, we focus on the more relevant problem of manipulating real images. This is a more difficult problem, as there is no guarantee that GANs performing well on generated images are stable enough to generalize to real images. To show that RIS generalizes to real datasets, we use CelebA-HQ [17] with 30k images for all our experiments. Since feature-based retrieval requires inverting the entire dataset, we use pSp [29], a SoTA encoder-based GAN inversion method, for all our experiments.

Facial Feature Transfer

In this section, we provide qualitative and quantitative analyses of facial feature transfer on real images. We fixed tau = 0.1 and alpha = 1.3 for all experiments, as we observed that the temperature tau in Eq. (4) is insensitive to different source and reference images. We used N = 200 for EIS [10], following the authors' implementation.

Qualitative analysis: Fig. 6 shows a qualitative comparison between RIS (our method) and EIS on real images.
It can be observed that RIS offers better localization. EIS (Fig. 6(a)) affects skin tone heavily across all transfers, notably changing the lighting during hair transfer. In contrast, RIS maintains relatively similar skin tones while transferring the targeted features. EIS also changes the eyes and nose of the source image while transferring the mouth (Fig. 6(a)), indicating entanglement in its representations. While transferring the mouth (which includes the chin region), EIS fails to reproduce the beard in the image Reference 2 (Fig. 6(c)); RIS, on the other hand, reproduces the beard faithfully (Fig. 6(d)). It is noteworthy that RIS is able to generate a female face with a beard, an out-of-distribution generation absent from the training set. Please refer to the supplementary materials for more comparisons.

Method          FID∞
StyleGAN2 [19]  2.44
EIS [10]        3.47
RIS (ours)      3.73

Table 1: Image fidelity comparison: RIS achieves an FID∞ comparable to EIS and only slightly worse than the base StyleGAN2. The larger FID∞ can be attributed to our capability of OOD generation, e.g., long-haired males or bald females, as in the image on the right.

Quantitative analysis: To quantitatively validate our transfer results, we computed FID∞ [9], an unbiased estimate of FID, for the baseline StyleGAN2 [19], EIS [10], and RIS. Details of the setup are provided in the supplementary. Table 1 shows the FID∞ comparison. Both EIS and RIS achieve small FID∞ differences relative to the base StyleGAN2. However, RIS yields a slightly larger FID∞, which can be explained by the ability of our method to generate out-of-distribution samples when needed for transferring features. Such samples are uncommon in the FFHQ dataset used to train the base StyleGAN2, and thus contribute to a larger FID∞; e.g., our method is capable of transferring long hair to a bearded male, or baldness to a female, as shown on the right of Table 1.
Facial Feature Retrieval

We evaluate our retrieval performance qualitatively and quantitatively. We use GAN-inverted CelebA-HQ images as the retrieval dataset and cosine distance as the metric.

Qualitative analysis: As fine-grained facial retrieval is relatively unexplored, to the best of our knowledge there are no established metrics for this task. Instead, we repurpose the averaged M_k from EIS for retrieval and use it as a baseline. Specifically, when computing the retrieval representations in Eq. (7), we replace the individual q_k with the q_k derived from EIS's averaged M_k. Since large-scale hyperparameter tuning for every reference image is infeasible for EIS, we obtain q_k with a fixed hyperparameter choice that may not generalize to all images.

Fig. 7 shows qualitative comparisons between RIS and EIS. RIS has observably more disentangled representations. Specifically, for eyes retrieval, although the query has distinct eyes, RIS retrieves images with the same eyes but different identities, while EIS retrieves only the same identity. This suggests that EIS representations entangle eyes and identity. In addition, EIS retrieves almost the same images for different features (i.e., eyes and nose), again suggesting entanglement. For mouth retrieval, RIS recognizes the wide-open mouth of the query, retrieving semantically similar (w.r.t. the mouth feature) yet diverse images; EIS, on the other hand, retrieves images with the same skin tone, suggesting a lack of feature localization. For hair retrieval, RIS retrieves images with similar hair but different genders, while EIS retrieves only female images. Finally, the furthest neighbors for RIS differ semantically from the query image. Overall, RIS nearest neighbors exhibit significant variance on non-matching features, while EIS nearest neighbors do not. Along with our superior results in Fig. 6, this further reinforces that our individual M_k yields better disentanglement and feature focus than the averaged M_k in EIS. This also validates our hypothesis of submemberships.

Table 2: We compare AMS between RIS and EIS to measure retrieval accuracy w.r.t. a given facial feature using a pretrained attribute classifier. RIS outperforms EIS in all classes, with mouth retrieval being noticeably better.

TRSI-IoU. We use retrieval to evaluate how well RIS disentangles facial features. We use the two retrieved set identity IoU (TRSI-IoU): retrieve two sets of images using two facial feature queries on the same face; TRSI-IoU is the intersection-over-union of the identities between these two sets. A full-face retrieval method should have a TRSI-IoU close to 1 if the two queries are of the same person, and 0 otherwise. Suppose a method does not disentangle features; then it is possible to approximately predict (say) mouth from eyes. In turn, retrieving using eyes (resp. mouth) will implicitly constrain mouth (resp. eyes), so the two retrieved sets will have many individuals in common, and TRSI-IoU will be relatively large. On the other hand, if a method properly disentangles (say) eyes and mouth, the retrieved identities should not overlap much, and TRSI-IoU will be relatively small. The minimum obtainable value of TRSI-IoU is difficult to know, but a lower TRSI-IoU is good evidence that a method disentangles better. Fig. 8 shows boxplots of TRSI-IoU for RIS and EIS, evaluated over 100 queries and all pairs of facial features (chosen from eyes, nose, mouth, hair). RIS shows significantly lower TRSI-IoU, and the difference is statistically significant.

Attribute Matching Score. We use attribute classifiers pretrained on CelebA attributes [24] to further evaluate the quality of our retrieval. Note that these attributes are binary and not sufficiently detailed for fine-grained purposes.
There is also a distinct lack of diversity in CelebA and its attributes (e.g., a lack of head coverings, curly hair, etc.), which makes it impossible to evaluate RIS on generating faces of diverse and inclusive people. The intuition of our procedure is as follows: for retrieval of the k-th feature, say hair, the hair-related attributes A_k (e.g., "black hair", "wavy hair", etc.) should remain similar between the query and retrieved images. Please see the supplementary for the full list of attributes associated with each k. We retrieve the top-5 images T_5^(i) for each query image I_Q^(i) and feature k. We take an attribute classifier F and binarize its prediction for the a-th attribute as F̂_a(·) = [F_a(·) > T], i.e., F̂_a(·) = 1 if the prediction is larger than the threshold T = 0.5 and 0 otherwise. The Attribute Matching Score (AMS) for the k-th facial feature is then defined as

AMS_k = ( Σ_{I_Q^(i) ∈ X} Σ_{t^(i) ∈ T_5^(i)} Σ_{a ∈ A_k} [ F̂_a(I_Q^(i)) = F̂_a(t^(i)) ] ) / ( |X| · |T_5^(i)| · |A_k| ).

Table 2(b) compares AMS between EIS and RIS. As the classifier is trained on predefined attributes that lack fine granularity, it may be less descriptive for our particular task of fine-grained retrieval. Still, RIS outperforms EIS in all classes under this less granular setting, with mouth retrieval being noticeably better.

Conclusion

We presented Retrieve in Style (RIS), a simple and efficient unsupervised method for facial feature transfer that works across both short-scale features (eyes, nose, mouth) and long-scale features (hair, pose) on real images without any hyperparameter tuning. RIS produces realistic, accurate feature transfers without modifying the rest of the image, and extends naturally to fine-grained facial feature retrieval. Note that techniques for photorealistically manipulating images could be misused to produce fake or misleading information, and researchers should be aware of these risks.
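The AMS definition above amounts to the fraction of (query, retrieved image, attribute) triples whose binarized classifier predictions agree. A minimal sketch, assuming the thresholded predictions are precomputed as binary NumPy arrays (the function name and array layout are illustrative):

```python
import numpy as np

def ams(preds_query, preds_retrieved):
    """Attribute Matching Score for one facial feature k.

    preds_query     : (N, |A_k|) binarized attribute predictions
                      F̂_a for the N query images.
    preds_retrieved : (N, K, |A_k|) binarized predictions for the
                      top-K images retrieved for each query.
    Returns the fraction of matching (query, retrieved, attribute)
    triples, i.e., the triple sum normalized by |X|·|T_5|·|A_k|.
    """
    q = preds_query[:, None, :]       # broadcast over the K retrieved
    matches = (q == preds_retrieved)  # Iverson bracket [F̂_a(I_Q) = F̂_a(t)]
    return float(matches.mean())
```

Taking the mean over the full (N, K, |A_k|) boolean array performs all three sums and the normalization at once.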
To the best of our knowledge, this is the first work that enables unsupervised, fine-grained facial retrieval, especially on real images. Our qualitative and quantitative analyses verify the effectiveness of RIS.

Figure 9: Intersection Ratio: This figure shows the intersection ratio (y-axis) computed against K, the number of clusters (x-axis), for eyes, nose, mouth, and hair. The common channels shared by all clusters decrease as the number of clusters increases. This means that, for the same facial feature, images do not share the same contributing channels, validating the "submembership" effect discussed in Sec. 3.1 of the main paper.

Interpolation of Transfers

In this section, we show that the proposed RIS allows smooth interpolation of facial feature transfers for generated images, in addition to the results shown in Fig. 5 of the original paper. Fig. 10 shows natural and smooth transitions for our interpolation of the target facial features, i.e., eyes, nose, mouth, hair, and pose. Note that hair and pose transfers were not shown possible in the state-of-the-art EIS approach [10].

More results: Similar to the figures shown for facial feature transfer and retrieval in the main paper, Figs. 11 and 12 provide more examples of facial feature transfer and retrieval, respectively, on generated images.

Attribute Classifier for the AMS Score

In this section, we provide details about the attribute classifiers used to evaluate our Attribute Matching Score (AMS) in Sec. 4.2 of the original paper. In particular, we pretrained an attribute classifier on the 40 attributes of the CelebA dataset [24]. Subsets of attributes were manually selected to associate with the facial features that the proposed method attempts to retrieve. Table 3 shows the full list of binary attributes for each facial feature. For completeness, Fig. 13 reports the accuracy for each of the 40 attributes of our pretrained model, with an overall average accuracy of 85.27%.
TRSI-IoU metric

The goal of TRSI-IoU is to measure how disentangled the facial feature representations are, not the accuracy of retrieval (which is evaluated by the Attribute Matching Score). For the task of fine-grained feature retrieval, it is pertinent to sufficiently disentangle the feature representations, i.e., the retrieval results for eyes should not predict the retrieval results for nose. In the extreme case where features are fully entangled, the identities retrieved across different features become the same, and the task trivially reduces to conventional identity retrieval, a simpler and well-researched task compared to our goal of fine-grained feature retrieval. We observe that EIS retrieves the same images and identities for different features (as shown in Fig. 7(a) and (b) for EIS), which signifies significant entanglement between facial features. TRSI-IoU is thus introduced to quantify this entanglement. The combination of AMS and TRSI-IoU gives a comprehensive evaluation of both accuracy and entanglement.

Inference speed

For both EIS and RIS, we perform 100 inference runs (each including both computing M and generating the edited image), and compute the mean and standard deviation of the runs on a single Titan Xp GPU. Measured in seconds, we observe 0.0394 ± 0.00289 for EIS and 0.234 ± 0.00633 for RIS. Although computing instance-level M adds ~0.2 s of latency, we believe RIS remains suitable for real-world applications. Computing M for a retrieval dataset of 50K images takes less than 10 minutes on a single Titan Xp GPU (avg. 0.12 s per image).

Effects of noise input

In all experiments, we fix the noise input to prevent variations caused by random noise. We perform an experiment showcasing the effect of varied noise input on RIS, as shown in Fig. 14. From the absolute difference between different random runs, we observe that their delta is negligible.
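As the TRSI-IoU definition above suggests, the metric reduces to a plain set intersection-over-union computed on the identity labels returned by the two feature queries. A minimal sketch (helper name hypothetical):

```python
def trsi_iou(ids_a, ids_b):
    """TRSI-IoU between two retrieved sets: intersection-over-union of
    the identity labels returned by two different facial-feature queries
    on the same query face. 1.0 means the queries retrieve exactly the
    same people (full entanglement); values near 0 indicate the features
    are well disentangled."""
    a, b = set(ids_a), set(ids_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

The boxplots in Fig. 8 would then summarize this value over many query faces and all pairs of features.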
Figure 10: We scale q_k according to different alpha to allow interpolation between the source image (leftmost column) and the reference image (rightmost column) on a particular facial feature. With side-by-side comparisons across different alpha, we observe that RIS produces smooth and realistic transitions between the transfers: the larger the alpha, the more closely the facial features resemble the reference image. Note that hair and pose transfers were not shown possible in the state-of-the-art EIS [10].

Figure 2: Submembership: Contribution scores M_k from our method allow meaningful clustering. In this figure, each row is a cluster for k = hair; images within a row are similar, showing that clustering is effective. Across rows, the images differ, showing that there is real variation in the hair.

Figure 3: Pose transfer from (a) reference to (b) source. (c) Naively copying style coefficients from the first 4 layers of StyleGAN2 [19] transfers primarily pose and partially hair (shorter hair on the left, flatter hair on top), showing their style coefficients are entangled in the early layers. (d) Our method matches the pose of the reference image and faithfully preserves the hair of the source.

Figure 4: Facial feature transfer: Our method performs effective semantic editing on real images by transferring facial features from (b) a reference image to (a) a source image. Our method transfers spatially coherent features (i.e., eyes, nose, mouth) as well as the challenging features hair and pose. Note that real image editing is not possible with SoTA EIS [10].

Figure 5: Latent direction: The alpha variable in RIS controls interpolation between the source and the reference images, showing a smooth transition of mouth (top row), hair (middle row), and pose (bottom row).

Figure 6: Comparison with EIS [10]: (a, b) show transfers from Reference 1 to the source image; (c, d) from Reference 2.
Our method (RIS) generates visually more accurate and natural results; e.g., EIS changed the skin tone in (a) and the shirt color in (c), while RIS does not. RIS also achieves beard transfer around the mouth in (d), even though beards on female faces are rare or absent in the training data.

Figure 7: Facial feature retrieval: We compare fine-grained retrieval between our method RIS (submembership M_k) and EIS [10] (universal M_k) on real faces. We show 3 faces each from the nearest and furthest retrievals (NR and FR). RIS retrieves semantically similar NRs on all facial features while showing variance on non-matching features. Note that EIS retrieves very similar NRs for eyes and nose with the same query image, indicating a lack of feature localization.

Figure 8: TRSI-IoU measures the extent of overlapping identities between two different feature queries on the same face. Methods that disentangle facial features better are expected to have smaller TRSI-IoU (see text). We compare boxplots of TRSI-IoU for RIS and EIS. RIS shows a noticeable improvement in the median (red line) with a much smaller interquartile range (boxes). This suggests our method better disentangles facial features.

Figure 11: Results of facial feature transfer on generated images.

Figure 12: Results of retrieval on generated images.

Figure 13: Accuracy on the 40 CelebA attributes (in %).

Figure 14: Hair transfer with random noise input: The effect of noise on our results is negligible even at 100x magnification.

Attribute Matching Score (%)
Class   Ours    EIS
Eyes    96.3    95.4
Nose    100.0   100.0
Mouth   81.1    75.8
Hair    97.5    97.1

Facial Feature  CelebA Attributes
Eyes    Arched Eyebrows, Bags Under Eyes, Bushy Eyebrows, Narrow Eyes.
Nose    Big Nose, Pointy Nose.
Mouth   5 o'Clock Shadow, Big Lips, Goatee, Mouth Slightly Open, Mustache, No Beard, Smiling, Wearing Lipstick.
Hair    Bald, Bangs, Black Hair, Blond Hair, Brown Hair, Gray Hair, Receding Hairline, Sideburns, Straight Hair, Wavy Hair.
Table 3: The relationship between facial features and CelebA attributes used to evaluate the Attribute Matching Score (AMS) in Sec. 4.4 of the main paper.

Supplementary Material for Retrieve in Style: Unsupervised Facial Feature Transfer and Retrieval

Overview

Even though the RIS framework is built upon a pretrained StyleGAN, which generates fake images, we focus on applying RIS to real images in the main paper. For completeness, we show RIS on fake images in this supplementary. We further provide results that could not fit in the main paper due to space constraints. In particular, we offer deeper discussion of the following aspects:

1. We elaborate the submembership analysis of the contribution scores M_k [10] with respect to overlapping channels across different clusters.
2. We show latent interpolation between the source and reference images, verifying the smooth transition of facial feature transfer.
3. We enumerate the attribute classifier accuracies on the CelebA attribute dataset and their correspondence to facial features, confirming that the retrieval accuracy results are meaningful.

Submemberships

A central claim of the proposed method, Retrieve in Style (RIS), is the concept of submemberships, i.e., highly contributing channels that vary from image to image. To validate the existence of submemberships, as discussed in Sec. 3.1 of the main paper, we conducted the following experiment. We generated N = 5000 images and computed their M_k for a particular feature k. Then, we performed spherical K-way clustering for K ∈ {2, 5, 10, 20, 50, 100} and averaged each cluster's M_k. Denote by M_k^i the average contribution score of feature k over all images belonging to cluster i. With a slight abuse of notation, we obtain

Z_k^i = argsort_n(M_k^i),

where argsort_n is a sorting operator that returns the indices of the top-n leading values of M_k^i (n = 100 in our case). That is, Z_k^i represents the set of the top-n most contributing channels for feature k and cluster i.
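The top-n channel selection above, and the fraction of channels shared by all clusters (the intersection ratio reported in Fig. 9), can be sketched as follows. This is an illustrative sketch with hypothetical helper names; it assumes the per-cluster averaged scores are stacked into a (K, C) array.

```python
import numpy as np

def top_n_channels(M_cluster, n=100):
    """Indices of the n highest-contributing channels for one cluster,
    i.e., the argsort_n operator producing Z_k^i."""
    return set(np.argsort(M_cluster)[::-1][:n])

def intersection_ratio(cluster_scores, n=100):
    """Fraction of the top-n channels shared by ALL clusters.

    cluster_scores : (K, C) averaged contribution scores M_k^i,
                     one row per cluster.
    A ratio near 1 would support a universal M_k; a ratio falling
    toward 0 as K grows supports the submembership hypothesis.
    """
    sets = [top_n_channels(row, n) for row in cluster_scores]
    common = set.intersection(*sets)
    return len(common) / n
```

Repeating this for each K in {2, 5, 10, 20, 50, 100} and each feature yields the curves of Fig. 9.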
If there existed a universal M_k for all images, the Z_k^i would have a high degree of intersection, since the important channels would be the same for all clusters. We thus define the intersection ratio as the number of channels common to all Z_k^i divided by n. From Fig. 9, the intersection ratio for the different features progressively decreases as the number of clusters increases. This means that as the clusters become more specific, the number of overlapping channels decreases, validating our hypothesis on submemberships.

References

[1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In ICCV, 2019.
[2] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN++: How to edit the embedded images? In CVPR, 2020.
[3] Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Towards open-set identity preserving face synthesis. In CVPR, 2018.
[4] James C. Bartlett, Susan Hurry, and Warren Thorley. Typicality and familiarity of faces. Memory & Cognition, 12(3):219-228, 1984.
[5] David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, and Antonio Torralba. GAN dissection: Visualizing and understanding generative adversarial networks. arXiv:1811.10597, 2018.
[6] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv:1809.11096, 2018.
[7] Christian Buchta, Martin Kober, Ingo Feinerer, and Kurt Hornik. Spherical k-means clustering. Journal of Statistical Software, 50(10):1-22, 2012.
[8] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
[9] Min Jin Chong and David Forsyth. Effectively unbiased FID and Inception Score and where to find them. In CVPR, 2020.
[10] Edo Collins, Raja Bala, Bob Price, and Sabine Susstrunk. Editing in Style: Uncovering the local semantics of GANs. In CVPR, 2020.
[11] Michael R. Courtois and John H. Mueller. Target and distractor typicality in facial recognition. Journal of Applied Psychology, 66(5):639, 1981.
[12] Shuyang Gu, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen, and Lu Yuan. Mask-guided portrait editing with conditional GANs. In CVPR, 2019.
[13] Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. GANSpace: Discovering interpretable GAN controls. arXiv:2004.02546, 2020.
[14] Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. AttGAN: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing, 28(11):5464-5478, 2019.
[15] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
[16] Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. In ICLR, 2020.
[17] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv:1710.10196, 2017.
[18] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
[19] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, 2020.
[20] Marek Kowalski, Stephan J. Garbin, Virginia Estellers, Tadas Baltrušaitis, Matthew Johnson, and Jamie Shotton. CONFIG: Controllable neural face image generation. In ECCV, 2020.
[21] Guillaume Lample, Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, and Marc'Aurelio Ranzato. Fader networks: Manipulating images by sliding attributes. In NeurIPS, 2017.
[22] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. MaskGAN: Towards diverse and interactive facial image manipulation. In CVPR, 2020.
[23] Eric Lee, Thomas Whalen, John Sakalauskas, Glen Baigent, Chandra Bisesar, Andrew McCarthy, Glenda Reid, and Cynthia Wotton. Suspect identification by facial features. Ergonomics, 47(7):719-747, 2004.
[24] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[25] Ahmed M. Megreya and A. Mike Burton. Matching faces to photographs: Poor performance in eyewitness memory (without the memory). Journal of Experimental Psychology: Applied, 14(4):364, 2008.
[26] Alex Pentland, Rosalind W. Picard, and Stan Sclaroff. Photobook: Content-based manipulation of image databases. International Journal of Computer Vision, 18(3):233-254, 1996.
[27] Albert Pumarola, Antonio Agudo, Aleix M. Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer. GANimation: Anatomically-aware facial animation from a single image. In ECCV, 2018.
[28] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
[29] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: A StyleGAN encoder for image-to-image translation. arXiv:2008.00951, 2020.
[30] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
[31] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of GANs for semantic face editing. In CVPR, 2020.
[32] Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in GANs. arXiv:2007.06600, 2020.
[33] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deeply learned face representations are sparse, selective, and robust. In CVPR, 2015.
[34] Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, and Huachun Zhu. Spatially controllable image synthesis with internal representation collaging. arXiv:1811.10153, 2018.
[35] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the gap to human-level performance in face verification. In CVPR, 2014.
[36] Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zöllhofer, and Christian Theobalt. StyleRig: Rigging StyleGAN for 3D control over portrait images. In CVPR, 2020.
[37] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[38] Ben Usman, Nick Dufour, Kate Saenko, and Chris Bregler. PuppetGAN: Cross-domain image manipulation by demonstration. In ICCV, 2019.
[39] Andrey Voynov and Artem Babenko. Unsupervised discovery of interpretable directions in the GAN latent space. arXiv:2002.03754, 2020.
[40] Jian-Kang Wu, Yew Hock Ang, P. C. Lam, S. K. Moorthy, and A. Desai Narasimhalu. Facial image retrieval, identification, and inference system. In Proceedings of the First ACM International Conference on Multimedia, pages 47-55, 1993.
[41] Jonas Wulff and Antonio Torralba. Improving inversion and generation diversity in StyleGAN using a Gaussianized latent space. arXiv:2009.06529, 2020.
[42] Taihong Xiao, Jiapeng Hong, and Jinwen Ma. ELEGANT: Exchanging latent encodings with GAN for transferring multiple face attributes. In ECCV, 2018.
[43] Yinghao Xu, Yujun Shen, Jiapeng Zhu, Ceyuan Yang, and Bolei Zhou. Generative hierarchical features from synthesizing images. arXiv e-prints, 2020.
[44] Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. Towards large-pose face frontalization in the wild. In CVPR, 2017.
[45] Gang Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Generative adversarial network with spatial attention for face attribute editing. In ECCV, 2018.
[46] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain GAN inversion for real image editing. In ECCV, 2020.
Generating High-fidelity, Synthetic Time Series Datasets with DoppelGANger

Zinan Lin (Carnegie Mellon University), Alankar Jain (Carnegie Mellon University), Chen Wang (IBM), Giulia Fanti (Carnegie Mellon University), Vyas Sekar (Carnegie Mellon University)

arXiv:1909.13403 (https://arxiv.org/pdf/1909.13403v1.pdf)

Abstract

Limited data access is a substantial barrier to data-driven networking research and development. Although many organizations are motivated to share data, privacy concerns often prevent the sharing of proprietary data, including between teams in the same organization and with outside stakeholders (e.g., researchers, vendors). Many researchers have therefore proposed synthetic data models, most of which have not gained traction because of their narrow scope. In this work, we present DoppelGANger, a synthetic data generation framework based on generative adversarial networks (GANs). DoppelGANger is designed to work on time series datasets with both continuous features (e.g., traffic measurements) and discrete ones (e.g., protocol name). Modeling time series and mixed-type data is known to be difficult; DoppelGANger circumvents these problems through a new conditional architecture that isolates the generation of metadata from time series, but uses metadata to strongly influence time series generation. We demonstrate the efficacy of DoppelGANger on three real-world datasets. We show that DoppelGANger achieves up to 43% better fidelity than baseline models, and captures structural properties of data that baseline methods are unable to learn. Additionally, it gives data holders an easy mechanism for protecting attributes of their data without substantial loss of data utility.
1 Introduction

Data-driven research is a centerpiece of networking and systems research and development, as well as many other fields [9, 14, 17, 35, 46, 59, 60, 67, 85]. Realistic datasets allow engineers to build a better understanding of existing systems, motivate design choices from actual needs, and prove that proposed new systems can indeed work in practice.

Unfortunately, the potential benefits of data-driven research and development have been somewhat restricted: generally, only select players who possess data can reliably perform research or develop products. Many of these players are reluctant to share datasets for fear of revealing business secrets, running afoul of data regulations, or violating customers' privacy; this is certainly true of data sharing for academic research, but it is also often true of sharing data across teams within the same organization, as well as business partnerships (e.g., vendors). Notable exceptions aside (e.g., [1, 62]), the issue of data access has been and continues to be a substantial concern in the networking and systems communities.

An appealing alternative is to create synthetic datasets that can be safely released to enable research and cross-stakeholder collaboration. Such datasets should ideally have three (sometimes conflicting) properties:

Fidelity: The synthetic data should be drawn from the same distribution as (or one similar to) an underlying real dataset.

Flexibility: The models should allow researchers to generate different classes of data. For example, we may wish to augment the amount of data representing anomalous or sparse events such as hardware failures or flash crowds.
Privacy: The data release technique should be compatible with anonymization and/or data privatization techniques that do not destroy the utility of the data [8, 53, 68, 72, 86].

In this paper, we consider an important and broad class of networking/systems datasets: time series measurements of multi-dimensional features, associated with multi-dimensional attributes. Many networking and systems datasets fall in this category, including traffic traces [1, 76, 90], measurements of physical network properties [9, 10], and datacenter/compute cluster usage measurements [13, 41, 73]. For example, measurements from a compute cluster might include a separate time series for each task measuring per-epoch CPU usage and memory usage, as well as categorical attributes like the exit code of the task (e.g., KILL, FAIL, FINISH).

While there have been decades of work on building synthetic data models for time series data, including for networking and systems applications (§2.2), existing solutions typically provide only a subset of the desired properties. For example, anonymized data can provide high fidelity but is susceptible to common privacy concerns, including membership inference attacks (§5.3.1), leakage of sensitive attributes (§5.3.2), and even deanonymization [8, 68, 72, 86]. Even combinations of these baselines do not solve the problem; we show in §5.1 that all of the generative model baselines have inadequate fidelity on time series data, whereas anonymized raw data has inadequate privacy, so combinations thereof will exhibit at least one of these weaknesses.

In this paper, we explore if and how we can leverage recent advances in the space of Generative Adversarial Networks (GANs) [33] to facilitate data-driven network research. GANs have spurred much excitement in the machine learning community due to their ability to generate photorealistic images [49], previously considered a challenging problem.
This suggests the potential promise of using them to synthesize high-fidelity networking and systems datasets that stakeholders can release to the research community and/or commercial data users, either inside or outside the stakeholders' enterprise. We compare the potential of GAN-based solutions with other baselines in Table 1.

However, we find that naive GAN implementations are unable to model the data with high fidelity because of the following challenges:

(1) Complex correlations between time series and their associated attributes. For instance, in a cluster dataset [73], as the memory usage of a task increases over time, its likelihood of failure increases. Existing GAN architectures do not learn such correlations well.

(2) Long-term correlations within time series, such as diurnal patterns. These correlations are qualitatively very different from those found in images, which have a fixed dimension and do not need to be generated pixel-by-pixel. Indeed, existing GANs struggle to generate long time series.

Our main algorithmic contribution is a GAN-based framework called DoppelGANger that addresses both challenges. Key features include:

(1) Decoupled attribute generation: To learn better correlations between time series and their attributes (e.g., metadata like ISP name or location), DoppelGANger decouples the generation of attributes from time series and feeds attributes to the time series generator at each time step. This contrasts with conventional approaches, where attributes and features are generated jointly. This conditional generation architecture also offers us the flexibility to change the attribute distribution without sacrificing fidelity, and enables us to hide the real attribute distribution when it is a privacy concern.

(2) Batched generation: To strengthen temporal correlations in time series, DoppelGANger outputs batched samples rather than singletons.
This idea has been used widely in Markov modeling [32], but its effect on GANs is still an active research topic [56, 75] that has not been studied in the context of time series generation (to the best of our knowledge).

(3) Decoupled normalization: We observe that traditional GANs trained on datasets with a highly variable dynamic range across samples tend to exhibit severe mode collapse, where the generator always outputs very similar samples. We believe this phenomenon has not yet been documented in the GAN literature, which typically experiments with a narrow class of signals (e.g., images); in contrast, networking time series exhibit much more variability across each sample's max/min limits. To address this, our architecture separately generates normalized time series and realistic max and min limits conditioned on the sample attributes.

We evaluate DoppelGANger on three real-world datasets, including web traffic time series [34], geographically distributed broadband measurements [20], and compute cluster usage measurements [73]. To demonstrate fidelity on these datasets, we first show that DoppelGANger is able to learn structural microbenchmarks of each dataset better than baseline approaches. We use this exploration to systematically evaluate how each component in DoppelGANger affects performance, and provide recommended hyper-parameter choices. We then test DoppelGANger-generated data on downstream tasks, such as training prediction algorithms. We find that DoppelGANger consistently outperforms baseline algorithms; predictors trained on DoppelGANger-generated data have test accuracies on real data that are up to 43% higher than when trained on baseline-generated data. We also highlight how DoppelGANger allows end users the flexibility to re-train models to emphasize certain classes of data.

Our results on privacy are mixed and suggest more active research is needed.
On the positive side, we show that DoppelGANger is able to seamlessly obfuscate the distribution of sensitive data attributes without sacrificing utility; this task was specifically highlighted as a privacy concern by a company we talked with. Similarly, we find that an important class of membership inference attacks on privacy can be mitigated by training DoppelGANger on larger datasets (§5.3.1). This counters conventional data release practices, which advocate releasing smaller datasets to avoid leaking user data [74].

That said, DoppelGANger does not fully solve the privacy problem. To our surprise, we find that recently proposed techniques for training GANs with differential privacy guarantees [2, 12, 24] have a poor fidelity-privacy trade-off on our datasets, almost completely destroying temporal correlations for moderate privacy guarantees (§5.3.1). This suggests that existing differentially private machine learning techniques may be inadequate for networking applications and require further research.

This work is only a first step towards using GANs to generate realistic and privacy-preserving synthetic data. For instance, as mentioned above, the anonymity and privacy properties of GAN-generated data leave much room for future exploration. Nonetheless, we view it as an important first step to demonstrate that GANs can achieve acceptable fidelity and basic privacy on a broad class of datasets, and can thus serve as a basis for broadening the benefits and opportunities of data-driven network design and modeling.

2 Motivation and Related Work

In this section, we highlight some motivating scenarios and why existing solutions fail to achieve our goals.
2.1 Use cases

While there are many potential use cases that can benefit from a system like DoppelGANger, we highlight two key types of stakeholder interactions and three representative data-driven tasks.

Stakeholder interactions: There are two natural scenarios:

• Collaboration across enterprises: Enterprises often impose restrictions on data access between their own divisions and/or with external vendors due to privacy concerns. DoppelGANger can be used to share representative data models between collaborators without violating privacy restrictions on user data.

• Reproducible, open research: Many research ideas rely on access to proprietary datasets on which to test and develop new ideas. However, the provider's policies and/or business considerations may preclude these datasets from being available and thus render the resulting research irreproducible. If the data providers could release a DoppelGANger model of the shared dataset, researchers could independently reproduce results without requiring access to user data.

Data-driven tasks: In such interactions, we can consider three representative tasks:

1. Algorithm design: The design of many algorithms, such as cluster scheduling, resource allocation, and transport protocol design (e.g., [17, 35, 46, 59, 60, 67]), often needs workload data to tune control parameters. As such, a key property for generated data is that if algorithm A performs better than algorithm B on the real data, then the same should hold on the generated data.

2. Structural characterization: Many system designers also need to understand structural temporal and/or geographic trends in systems, e.g., to understand the shortcomings and/or resource mismatches in existing systems and to suggest remedial solutions [9, 14, 46, 85]. In this case, generated data should preserve trends and distributions well enough to reveal such structural insights.

3.
Predictive modeling: A third use case for time series data is to learn predictive models, especially for rare or anomalous events such as network anomalies [31, 47, 55]. For these models to be useful, they should have enough fidelity that a predictor trained on generated data makes meaningful predictions on real data.

These use cases highlight the need for models that exhibit fidelity, privacy, and flexibility. Without fidelity, data recipients cannot draw meaningful conclusions (or may draw incorrect ones). Without privacy, the data provider may be liable for violations of privacy policies. Without flexibility, the data recipients may be limited in the kinds of experiments or analytics they can run.

2.2 Related work and limitations

Our focus in this work is on multi-dimensional time series datasets, which are common in networking and systems applications. Examples include:

1. Web traffic traces of temporal web page views, with attributes such as web page names, that can be used to predict future daily views, analyze page correlations [84], or generate page recommendations [28, 69];

2. Network measurements of packet loss rate, bandwidth, and delay from Internet-connected devices, with attributes such as location or device type, that are useful for network management [47]; and

3. Cluster usage measurements of metrics such as CPU/memory usage, associated with attributes (e.g., server and job type), that can inform resource provisioning [15] and job scheduling [61].

Each of the listed examples consists of time series data with (potentially) high-dimensional data points and associated attributes (metadata) that can be either numeric or categorical. Generative models for such time series have a rich literature (e.g., [42, 58, 63-66, 80, 89]). However, the primary shortcomings of prior work are low fidelity, low flexibility, and/or a need for detailed domain knowledge.
For example, prior works on network trace generation assume parametric models for entities like networks, users, traffic patterns, and file sizes, and use data to infer those parameters [42, 80, 89]. Since we do not limit ourselves to network traces, we want to avoid the time-intensive task of deriving parametric physical models for many possible data sources (e.g., compute cluster, WAN, web ecosystem). At a high level, most prior efforts for modeling time series data in the networking and systems community are based on one of the following statistical models.

Dynamic stationary processes represent each point in the time series (R_i, i ≥ 0) as R_i = X_i + W_i, where X_i is a deterministic process and W_i is a noise function. This is a widely used approach for modeling traffic time series in the networking community [4, 52, 63-66, 70, 81]. Some of these efforts rely critically on a priori knowledge of key patterns (e.g., diurnal trends) to constrain the deterministic process, while also using naive noise models (e.g., Gaussian). This is infeasible for modeling datasets with complex, unknown correlations. Others, such as the Transform-Expand-Sample (TES) methodology [63-66], instead use an empirical estimate of each time series' marginal distribution and autocorrelation (assuming stationarity). Unfortunately, empirical histograms are known to be poor estimates of high-dimensional distributions [77].

Markov models (MMs) are a popular approach for modeling categorical time series, representing system dynamics as a conditional probability distribution satisfying P(R_i | R_{i-1}, . . . , R_1) = P(R_i | R_{i-1}). Variants such as hidden Markov models (HMMs) [27] from the text generation literature [21, 45] have also been used for modeling the distributions of time series [32, 39, 54]. More recently, neural network-based approaches have been shown to outperform Markov models in many settings [44, 51].
The key weakness of MMs is their inability to encode long-term correlations in data.

Auto-regressive (AR) models improve on dynamic stationary processes for time series modeling [25]. In AR models, each point in the time series (R_i) is represented as a function of the previous p points: R_i = f(R_{i-1}, R_{i-2}, . . . , R_{i-p}) + W_i, where W_i is white noise. Nonlinear AR models (e.g., parameterized by neural networks) have gained traction and are the baseline used in this work [3, 88, 95, 96]. The main problem with AR models is fidelity: like Markov models, they only use a limited history to predict the next sample in a time series, leading to over-simplified temporal correlations.

Recurrent neural networks (RNNs) have been more recently used for time series modeling in deep learning [43]. Like AR and Markov models, they use the previous sample to determine the next sample, but RNNs also store an internal state variable that captures the entire history of the time series. RNNs have had great success in learning discriminative models of time series data, which predict a label conditioned on a sample [44, 51]. However, generative modeling is harder than discriminative modeling. Indeed, we find RNNs are unable to learn certain simple time series distributions.

GAN-based methods: GANs have emerged as a popular technique for generating or augmenting datasets, especially medical images or patient records [19, 29, 36, 38, 78]. As discussed in §3.3, the architectures used for those tasks give poor fidelity on networking data, which has both complex temporal correlations and mixed discrete-continuous data types. Although GAN-based time series generation exists (e.g., text [22, 98], medical time series [12, 24]), we find that such techniques fail on networking data, exhibiting poor fidelity on longer sequences and severe mode collapse.
This is partially because networking data tends to be more heavy-tailed and variable in length than medical time series, which seems to affect GANs in ways that have not been carefully explored before.

In summary, prior approaches fail to capture long- and/or short-term temporal correlations, or do so only with extensive prior knowledge. To illustrate this, we run the above baselines on the Wikipedia Web Traffic dataset, which contains daily page views of web pages (details in §5). Figure 1 shows the autocorrelation of time series samples (averaged over all samples) for real and synthetic datasets. Two patterns emerge in the real data: a short-term weekly correlation pattern, indicated by the periodic spikes in autocorrelation, and a long-term annual correlation, indicated by the local peak at roughly the 1-year mark (365 days). Notably, all of our baselines fail to capture both patterns simultaneously. HMMs (purple line) and AR models (red line) do not store enough states to learn even the weekly correlations, and the state space required to learn long-term correlations would be prohibitively large. RNNs learn the short-term correlation correctly, but do not capture the long-term correlation. Even naive GANs (§3.3) fail to learn a meaningful autocorrelation.

3 Problem Statement and Scope

Figure 2: Workflow for DoppelGANger usage.

We abstract our datasets as follows: a dataset D = {O_1, . . . , O_n} is a set of data objects O_i = (A_i, R_i), where A_i is a vector of attributes and R_i = (R_i,1, . . . , R_i,T_i) is a time series of records with timestamps satisfying t_i,j < t_i,j+1 ∀ 1 ≤ j < T_i. In our work, we treat the timestamps as equally spaced and hence do not generate them explicitly. However, we can easily extend this to unequally spaced timestamps by treating time as a continuous feature and generating inter-arrival times along with other features.

This abstraction fits many classes of data that appear in networking applications. For example, Table 6 maps a web traffic measurement dataset to our terminology.
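The abstraction above can be written down as a small schema. The class and field names below are illustrative (they are not from the paper's released code); the sketch only mirrors the text: each object pairs fixed attributes with a variable-length time series, and timestamps are implicit because records are equally spaced.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class DataObject:
    """One object O_i = (A_i, R_i): fixed attributes plus a time series of records."""
    attributes: List[Union[str, float]]  # A_i: categorical or numeric metadata
    records: List[List[float]]           # R_i = [R_i,1, ..., R_i,T_i], one feature vector per step

    @property
    def length(self) -> int:
        """T_i: number of records; objects in one dataset may differ in length."""
        return len(self.records)

# A toy dataset D = {O_1, ..., O_n} in the web-traffic style of Table 6
# (page name and access type as attributes, daily page views as the feature):
dataset = [
    DataObject(attributes=["en.wikipedia.org", "mobile"],
               records=[[120.0], [98.0], [143.0]]),
    DataObject(attributes=["de.wikipedia.org", "desktop"],
               records=[[40.0], [55.0]]),
]
```

Timestamps are deliberately absent from the schema, matching the equally-spaced assumption; supporting unequal spacing would amount to adding an inter-arrival-time column to each record.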
It does not apply to other classes of networking and systems datasets, including lists of items (e.g., blacklisted domains) and one-shot measurements (e.g., network topology). Our problem is to take any such dataset as input and learn a model that can generate a new dataset D′ as output. D′ should exhibit fidelity, flexibility, and privacy, and the methodology should be general enough to handle datasets in our abstraction.

3.1 Interface

Our intended interface for DoppelGANger users is illustrated in Figure 2. DoppelGANger requires a small amount of tuning, enabled by a few auxiliary inputs:

Data schema: DoppelGANger needs several properties of the data schema, including attribute/feature dimensionality and whether they are categorical or numeric.

Time series collection frequency: Although this input is optional, we show in §5 that DoppelGANger benefits from knowing the timescale at which data was collected (e.g., minutes, seconds, days) and the total time series duration.

Privacy constraints: Data holders should input a list of sensitive attributes, whose distribution can be masked.

Once the model is trained and released, the client can generate a synthetic dataset with the following inputs:

Desired data quantity: The client can generate as much synthetic data as desired.

Desired attribute distribution: The client can optionally specify which attributes should be present in the generated dataset, and with what distribution. This facilitates exploration of anomalous events from the original data, e.g., flash crowds.

3.2 GANs: Background and promise

GANs are a data-driven generative modeling technique based on adversarial training [33]. GANs take as input a set of training data samples and output a model that can produce new samples from the same distribution as the original data, without simply sampling the initial dataset.
GANs are seen as a breakthrough in generative modeling for their ability to generate photorealistic images [49], previously considered a difficult task due to the complex correlations in images.

More precisely, suppose we have a dataset consisting of n samples (or objects, in the language of our abstraction) O_1, . . . , O_n, where O_i ∈ R^p, and each sample is drawn i.i.d. from some distribution, O_i ∼ P_O. The goal of GANs is to use these samples to learn a model that can draw samples from distribution P_O [33]. The core idea of GANs is to train two components: a generator G and a discriminator D (Figure 3); in practice, both are instantiated with neural networks. In the original GAN design, the generator maps a noise vector z ∈ R^d to a sample O ∈ R^p, where p ≫ d. z is drawn from some pre-specified distribution P_z, usually a Gaussian. Simultaneously, we train the discriminator D : R^p → [0, 1], which takes samples as input (either real or fake) and classifies each sample as real (1) or fake (0). Errors in this classification task are used to train the parameters of both the generator and discriminator through backpropagation. The loss function for GANs can be written as

min_G max_D E_{x∼p_x}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))].    (1)

The generator and discriminator are trained alternately, i.e., adversarially. Unlike prior generative modeling approaches, which typically involve likelihood maximization of parametric models (e.g., §2.2), GANs train models with fewer assumptions about data structure, and may be better suited to the generative modeling of time series data than the baselines.

3.3 Challenges

Despite GANs' success at modeling images, naively applying GANs to networking and systems time series datasets gives poor results. To show this, we implemented the first GAN architecture one might think of. The key aspects of a GAN's design are the generator architecture, the discriminator architecture, and the loss function.
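The objective in Eq. (1) can be evaluated directly from a batch of discriminator scores. The sketch below is only a numerical illustration of that formula (the scores are made up, and the networks themselves are elided); it shows why the discriminator ascends this value while the generator descends it.

```python
import numpy as np

def gan_objective(d_real, d_fake):
    """Monte Carlo estimate of E[log D(x)] + E[log(1 - D(G(z)))]
    from batches of discriminator outputs in (0, 1).
    The discriminator maximizes this value; the generator minimizes it."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator that is confident on both real samples (scores near 1)
# and fake samples (scores near 0) attains a higher objective than one
# that outputs 0.5 everywhere, i.e., one that cannot tell them apart:
confident = gan_objective(d_real=[0.9, 0.95], d_fake=[0.1, 0.05])
unsure = gan_objective(d_real=[0.5, 0.5], d_fake=[0.5, 0.5])
assert confident > unsure
```

The generator's training signal is the mirror image: it updates its parameters to push D(G(z)) upward, which drives this same quantity down.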
For this test, we used a fully connected multilayer perceptron (MLP) generator that generates features and attributes jointly, an MLP discriminator, and Wasserstein loss.¹

¹ Details of this experiment can be found in Appendix B.

We train our naive GAN on the Wikipedia Web Traffic dataset. The gray curve in Figure 1 shows the autocorrelation of generated samples from this naive GAN, which captures neither weekly nor annual correlations. Intuitively, this happens because naive architectures cannot generate long time series; during training, the MLP must jointly learn all cross-correlations and cannot exploit the fact that patterns often recur in time series. Moreover, this architecture does not learn the correlations between features and attributes, and it does not allow flexible generation of time series conditioned on attributes; in fact, training does not even reliably converge. Additional evaluations are in §5. Next, we describe how DoppelGANger addresses these problems.

4 Detailed Design

Our naive GAN experiments highlight the difficulty of learning long temporal correlations and correlations between features and attributes. During these experiments, we also find that mode collapse, a phenomenon where a trained GAN outputs homogeneous samples despite being trained on a diverse dataset [56, 83], can happen even if we use Wasserstein loss.² In this section, we discuss design choices in the generator (§4.1), discriminator (§4.2), and loss function (§4.3) that address these challenges; some of these choices are unconventional, emerging precisely because of the class of problems we consider. We summarize the big-picture design in §4.4.

4.1 Generator

Our generator is designed to address three key problems: capturing temporal correlations, capturing correlations between features and attributes, and alleviating mode collapse.

4.1.1 Temporal correlations

DoppelGANger learns temporal correlations by both using RNNs in the generator and generating batched data samples.
The generator architecture is shown in Figure 6.

RNN generator: One reason the naive architecture fails is that MLPs are too weak to capture long-term correlations. As mentioned in §2.2, RNNs are specifically designed to model time series. RNNs generate one record R_i,j (e.g., the CPU rate at a particular second) at a time, and take T_i passes to generate the entire time series. Unlike traditional neural units, RNN neural units have an internal state that is meant to encode all past states of the signal, so that when generating R_i,j, the RNN unit can incorporate the patterns in R_i,1, . . . , R_i,j-1. We use a widely used RNN called long short-term memory (LSTM) [43], which is known to memorize history well.

Note that the output of an RNN has fixed dimensionality, depending on the number of RNN units. To generate data with the desired dimensionality and data types (categorical vs. continuous), we append an MLP network with the desired dimensionality to the output of the RNN. This is how we use the data schema from §3.1. Also, notice that the RNN is a single neural network, even though Figure 6 suggests that there are many units. This unrolled representation commonly conveys that the RNN is being used many times to generate samples.

Generating batched points: Even with an RNN generator, GANs still struggle to capture temporal correlations when the time series length exceeds about 500. We observe a similar phenomenon in prior work using GANs to generate text [26, 94]. Improving the learning capability of RNNs is an important research area [16, 48] beyond the scope of our paper. Instead of entirely relying on RNNs to capture temporal correlations, we seek a tradeoff between the utilization of RNNs and MLPs. Specifically, at generation step j, the MLP network reads the output from the RNN and generates a batch of samples R_i,j, R_i,j+1, . . . , R_i,j+S, where S is a tunable parameter.

² Wasserstein loss is believed to help with mode collapse [6, 37].
For example, if the time series length is 500 and S = 10, then the RNN only needs to iterate 50 times, and at each step the MLP outputs 10 consecutive records. This way, we decrease the difficulty of learning for the RNN and exploit MLPs in a domain where they perform well. Figure 4 shows the mean square error between the autocorrelation of our generated signals and that of the real data on the WWT dataset. Even a small (but larger than 1) batch size gives substantial improvements in signal quality (more plots in Appendix G).

Generation flag for variable length: Besides capturing temporal correlations, another challenge is that time series may have different lengths. To generate samples with variable lengths, we add a generation flag that indicates whether the time series has ended: if the time series does not end at this time step, the generation flag is [1, 0]; if the time series ends exactly at this time step, the generation flag is [0, 1]. We pad the rest of the time series (including all features) with 0's. The generator outputs the generation flag [p1, p2] through a softmax output layer, so that p1, p2 ∈ [0, 1] and p1 + p2 = 1. [p1, p2] is used to determine whether we should continue unrolling the RNN to the next time step. One way to interpret this is that p2 gives the probability that the time series ends at this time step. Therefore, if p1 < p2, we stop generation and pad all future features with 0's; if p1 > p2, we continue unrolling the RNN to generate features for the next time step(s). The generation flags are also fed to the discriminator (§4.2) as part of the features, so the discriminator can also learn sample length characteristics.

Figure 5: Ten random, synthetic WWT data samples generated by DoppelGANger with a specific attribute, before and after adding auto-normalization to mitigate mode collapse.
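The flag logic above can be sketched as follows; keeping the record at the step where p1 < p2 first occurs, and zero-padding everything after it, is our reading of the scheme:

```python
def apply_generation_flags(records, flags):
    """Truncate a generated series at the first step whose flag has
    p1 < p2 (the 'ended' probability exceeds the 'continuing' one),
    padding every later step with 0's, as the features are padded."""
    out, ended = [], False
    for record, (p1, p2) in zip(records, flags):
        if ended:
            out.append(0.0)          # series already ended: pad with 0's
        else:
            out.append(record)
            if p1 < p2:              # series ends exactly at this step
                ended = True
    return out

flags = [(0.9, 0.1), (0.8, 0.2), (0.3, 0.7), (0.6, 0.4), (0.9, 0.1)]
trimmed = apply_generation_flags([1.0, 2.0, 3.0, 4.0, 5.0], flags)
```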
Correlations between features and attributes

Recall that naive GANs fail to capture the correlation between features R^i and attributes A^i when the data objects O^i = (A^i, R^i) are generated all at once. To address this, we partition data object generation into two phases: first generate the attributes, then generate the features conditioned on the attributes.

Decoupling feature and attribute generation: Our decoupled architecture factorizes the data distribution as follows:

P(O^i) = P(A^i, R^i) = P(A^i) · P(R^i | A^i)

For attribute generation, we use a dedicated MLP network that maps an input noise vector to the attributes A^i. For feature generation, we use the RNN-based architecture from §4.1.1. To explicitly encourage learning of the attribute-time series correlation, we feed the generated A^i as an input to the RNN at every step. This design choice separates the hard problem into two easier sub-problems, each solved by an appropriate network architecture. Experiments show that this architecture successfully captures the correlation between features and attributes (§5.1). Additionally, model users may want to augment the number of samples with a particular set of attributes to simulate rare events (flexibility). In other cases, users may want to hide the attribute distribution (e.g., hardware types in a compute cluster) when it represents a business secret (privacy). This design allows users to change the attribute distribution without hurting the conditional distribution or temporal correlations, by training on the original dataset and then retraining only the attribute-generation MLP network to a different, desired distribution. Hence, in addition to fidelity benefits, this architecture also aids in providing flexibility and privacy.
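A minimal sketch of this two-phase factorization, with hypothetical stand-ins for the attribute MLP and the feature RNN:

```python
import random

def attribute_generator(rng):
    """Stand-in for the attribute MLP: sample from P(A)."""
    return rng.choice(["DSL", "cable"])

def feature_generator(attr, length, rng):
    """Stand-in for the RNN: sample from P(R | A); the generated
    attribute is (conceptually) fed in at every step."""
    base = 2.0 if attr == "cable" else 1.0
    return [base + rng.random() for _ in range(length)]

def generate_object(length, rng):
    a = attribute_generator(rng)           # phase 1: attributes
    r = feature_generator(a, length, rng)  # phase 2: features | attributes
    return a, r

rng = random.Random(0)
attr, feats = generate_object(5, rng)
```

Because the two phases are separate, retargeting `attribute_generator` leaves the conditional feature distribution untouched.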
Alleviating mode collapse

We find that when the range of feature values varies widely across samples in the training data (e.g., one web page has 1k-5k daily page views, while another has only 0-100 daily page views), naive implementations exhibit severe mode collapse (Figure 5, left). This cause of mode collapse is undocumented in the GAN literature, to the best of our knowledge. One possible reason is that in natural images (the most widely used data type in the GAN literature) pixel ranges are similar, whereas networking data tends to exhibit more dramatic fluctuations. We experimented with known state-of-the-art techniques for mitigating mode collapse [56], but found that they did not fully resolve the problem.

Auto-normalization: To prevent this mode collapse, we normalize the real data features prior to training and add (max{f^i} + min{f^i})/2 and (max{f^i} - min{f^i})/2 as two additional attributes of the i-th sample. In the generated data, we can use these two attributes to scale features back to a realistic range. However, if we generate these two fake attributes along with the real ones, we lose the ability to condition feature generation on a particular attribute (§4.1.2). Therefore, we further divide generation into three steps: (1) generate attributes using the MLP generator (as in §4.1.2); (2) with the generated attributes as inputs, generate the two "fake" (max/min) attributes using another MLP; (3) with the generated real and fake attributes as inputs, generate features using the architecture in §4.1.1. All of this can be inferred automatically from the data. With this design, we can alleviate mode collapse (Figure 5, right) while preserving flexibility and privacy.

Discriminator

Our discriminator design makes two key choices: an MLP discriminator, and the use of a second discriminator to improve fidelity.

MLP discriminator: A key design decision is whether to use an RNN or an MLP in the discriminator.
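Returning to auto-normalization for a moment: the min/max trick can be sketched as below, reading the ± above as the attribute pair (max+min)/2 and (max-min)/2 (our interpretation):

```python
def auto_normalize(features):
    """Rescale features to [-1, 1] and return the two extra attributes
    (max+min)/2 and (max-min)/2 that let us undo the scaling later."""
    mid = (max(features) + min(features)) / 2.0
    half_range = (max(features) - min(features)) / 2.0 or 1.0  # avoid /0
    return [(f - mid) / half_range for f in features], mid, half_range

def auto_denormalize(scaled, mid, half_range):
    """Scale generated features back to a realistic range."""
    return [s * half_range + mid for s in scaled]

views = [1000.0, 3000.0, 5000.0]           # e.g., daily page views
scaled, mid, half_range = auto_normalize(views)
restored = auto_denormalize(scaled, mid, half_range)
```

The round trip is exact, so samples with very different dynamic ranges all present the generator with values in [-1, 1].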
The answer depends in part on the choice of loss function. As mentioned in §3.3, Wasserstein loss has been shown to improve the stability of training, and we find empirically that it is especially helpful for generating categorical data. However, optimizing the regularized Wasserstein loss requires calculating a second derivative of the loss function. At the time of writing, leading deep learning frameworks (TensorFlow and PyTorch) did not include tools for computing this second derivative; as such, any solutions that rely on such functionality would be less likely to gain traction. For this reason, we decided to use an MLP discriminator. However, we find that when the average length of R^i is long, the data fidelity is low, as anticipated from our results on the naive GAN architecture. This may be because an MLP discriminator that discriminates on the entire object O^i = (A^i, R^i) cannot provide precise feedback to the generator on how to improve sample quality when O^i is large. We therefore introduce the following novel approach.

Auxiliary discriminator for data fidelity: To make the feedback more effective, we introduce a second discriminator that splits up the problem: it discriminates only on the attributes A^i. The generator gets feedback from both discriminators to improve itself during training (which we make precise in the next section). Since generating good attributes A^i is much easier than generating the entire object O^i, the generator can learn from the second discriminator's feedback to generate high-fidelity attributes. Then, with the help of the original discriminator on O^i, the generator can learn to generate O^i well.

Figure 6: Architecture of DoppelGANger. The generator consists of three parts, for generating attributes, min/max, and features. Besides the discriminator that evaluates the entire sample, there is an auxiliary discriminator that evaluates attributes and min/max.
Empirically, we find that this architecture improves data fidelity significantly (Appendix G). Note that the idea of introducing a second discriminator/component in GANs is not new, but usually those new components extend GANs with new functionality (e.g., [18,57,87,99]). To the best of our knowledge, the idea of introducing a second discriminator purely for fidelity is new.

Loss function

As mentioned in §3.3, Wasserstein loss has been widely adopted for improving training stability and alleviating mode collapse. In our own empirical explorations, we find that Wasserstein loss is better than the original GAN loss for generating categorical variables. Because categorical variables are prominent in our domain, we use Wasserstein loss. In order to train the two discriminators simultaneously, we combine the loss functions of the two discriminators with a weighting parameter α. More specifically, the loss function is

min_G max_{D1,D2} L1(G, D1) + α L2(G, D2)        (2)

where L_i, i ∈ {1, 2}, is the Wasserstein loss of the original and the second discriminator, respectively:

L_i = E_{x∼p_x}[D_i(T_i(x))] − E_{z∼p_z}[D_i(T_i(G(z)))] − λ E_{x̂∼p_x̂}[(‖∇_{x̂} D_i(T_i(x̂))‖_2 − 1)^2].

Here T_1(x) = x, T_2(x) = the attribute part of x, and x̂ := t x + (1 − t) G(z), where t ∼ Unif[0, 1]. As with all GANs, the generator and discriminators are trained alternately until convergence. Unlike naive GAN architectures, we did not observe problems with training instability, and on our datasets convergence required only up to 200,000 batches (400 epochs when the number of training samples is 50,000).

Putting it all together

The overall DoppelGANger architecture is shown in Figure 6. The data holder first trains DoppelGANger on the data they want to share.

[Table 2: Properties of the WWT [34], MBA [20], and GCUT [73] datasets, indicating which challenges each exhibits, including variable-length signals.]

Evaluation

We evaluate DoppelGANger for fidelity, flexibility, and privacy on three datasets, whose properties are summarized in Table 2.
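The combined objective in (2) can be illustrated numerically with trivial stand-in critics; the gradient penalty term is omitted in this sketch for brevity:

```python
def wasserstein_critic_loss(critic, real, fake):
    """Empirical Wasserstein critic objective E[D(x)] - E[D(G(z))]
    (gradient penalty omitted in this sketch)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean([critic(x) for x in real]) - mean([critic(x) for x in fake])

def combined_loss(d1, d2, real, fake, real_attrs, fake_attrs, alpha):
    """L1(G, D1) + alpha * L2(G, D2): D1 scores whole objects while
    D2 scores only the attribute part (the role of T2)."""
    l1 = wasserstein_critic_loss(d1, real, fake)
    l2 = wasserstein_critic_loss(d2, real_attrs, fake_attrs)
    return l1 + alpha * l2

identity = lambda x: x  # trivial stand-in critics
loss = combined_loss(identity, identity, [1.0, 2.0, 3.0], [0.0, 1.0, 2.0],
                     [2.0], [0.0], alpha=0.5)
```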
These datasets are chosen to exhibit different combinations of the challenges in §1-3. In particular, they exhibit correlations within time series and across attributes, and/or multi-dimensional features and variable feature lengths. Here we outline these datasets and our baseline algorithms. The Google Cluster Usage Trace (GCUT) [73] contains usage traces of a Google cluster of 12.5k machines over a period of 29 days in May 2011. We focus on the task resource usage logs, containing measurements of task resource usage and the exit code of each task. Once a task starts running, the system measures its resource usage (e.g., CPU usage, memory usage) every second, and every 5 minutes it logs the aggregated statistics of the measurements (e.g., mean, maximum). Those resource usage values are the features. When the task ends, its end event type (e.g., FAIL, FINISH, KILL) is also logged. Each task has one end event type, which we treat as an attribute.

Baselines

We compare DoppelGANger to a number of representative baselines (§2.2): hidden Markov models (HMM), auto-regressive (AR) models, RNNs, and naive GANs. Configuration details for each are in Appendix B. For methods that emit generation flags, the generated time series after the first occurrence of p1 < p2 is discarded.

Fidelity

In line with prior recommendations [64], we explore how DoppelGANger captures structural data properties like temporal correlations and attribute-feature joint distributions.

Temporal correlations: To show how DoppelGANger captures temporal correlations, Figure 1 shows the average autocorrelation of the WWT dataset for real and synthetic datasets (discussed in §2.2). As mentioned before, the real data exhibits a short-term weekly correlation and a long-term annual correlation. DoppelGANger captures both, as evidenced by the periodic weekly spikes and the local peak at roughly the 1-year mark, unlike our baseline approaches. It also exhibits a 95.8% lower mean square error from the true data autocorrelation than the closest baseline (naive GAN).
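The autocorrelation metric used above can be computed as in this sketch; a signal with a weekly period shows the expected peak at lag 7:

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# A signal with period 7 (a weekly pattern) correlates strongly with
# itself at lag 7, and much less so at other lags.
weekly = [float(i % 7) for i in range(70)]
```

Comparing such autocorrelation curves for real versus generated series (and their mean square error) is the fidelity measure discussed above.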
The fact that DoppelGANger captures these correlations is surprising, particularly since we are using an RNN generator. Typically, RNNs are able to reliably generate time series of around 20 samples, while the WWT dataset has over 500 samples. We believe this is due to a combination of adversarial training (not typically used for RNN training) and our batched sample generation. Empirically, eliminating either feature hurts the learned autocorrelation (Appendix G). Another aspect of learning temporal correlations is generating time series of the right length. Figure 7 shows the duration of tasks in the GCUT dataset for real data and for synthetic datasets generated by DoppelGANger and the RNN baseline. DoppelGANger's length distribution fits the real data well, capturing the bimodal pattern in the real data, whereas the RNN fails. Other baselines are even worse at capturing the length distribution (Appendix C). We observe this regularly: while DoppelGANger captures multiple data modes, our baselines tend to capture one at best. This may be due to the naive randomness in the other baselines. RNNs and AR models incorporate too little randomness, causing them to learn simplified duration distributions; HMMs instead are too random: they maintain too little state to generate meaningful results.

Feature-attribute correlations: Learning correct attribute distributions is necessary for learning feature-attribute correlations. As mentioned in §5.0.1, for our HMM, AR, and RNN baselines, attributes are randomly drawn from the multinomial distribution of the training data because there is no clear way to jointly generate attributes and features. Hence, they trivially learn a perfect attribute distribution. Figure 8 shows that DoppelGANger is also able to mimic the real distribution of end event types in the GCUT dataset, while naive GANs miss a category entirely; this appears to be due to mode collapse, which we mitigate with our second discriminator. Results on other datasets are in Appendix C.
Although our HMM, AR, and RNN baselines learn perfect attribute distributions, it is substantially more challenging to learn the joint attribute-feature distribution. To illustrate this, we compute the CDF of total bandwidth for DSL and cable users in the MBA dataset. Table 3 shows the Wasserstein-1 distance between the generated CDFs and the ground truth, showing that DoppelGANger is closest to the real distribution. To make sense of this result, Figures 9(a) and 9(b) plot the full CDFs. Most of the baselines capture the fact that cable users consume more bandwidth than DSL users. However, DoppelGANger appears to excel in regions of the distribution with less data, e.g., very small bandwidth levels.

DoppelGANger does not just memorize: A common concern with GANs is whether they memorize training data [7,71]. To evaluate this, we ran the following experiment: for a given generated DoppelGANger sample, we find its nearest samples in the training data. We consistently observe significant differences (both in squared error and qualitatively) between the generated samples and their nearest neighbors. Typical samples can be found in Appendix C.

Downstream case studies

Predictive modeling: Given a time series of records, users may want to predict whether an event E occurs in the future, or even forecast the time series itself. For example, in the GCUT dataset, we could predict whether a particular job will complete successfully. In this use case, we want to show that models trained on generated data generalize to real data. We first partition our dataset, as shown in Figure 10. We split our real data into two sets of equal size: a training set A and a test set A'. We then train a generative model (e.g., DoppelGANger or a baseline) on training set A. We generate datasets B and B' for training and testing. Finally, we evaluate event prediction algorithms by training a predictor on A and/or B, and testing on A' and/or B'.
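The train/test protocol just described can be sketched as follows, with toy stand-ins for the generative model and the predictor:

```python
def evaluation_matrix(real, generate, fit, score):
    """Split real data into train half A and test half A', draw synthetic
    sets B and B' from a generative model trained on A, then score a
    predictor under every train/test combination."""
    half = len(real) // 2
    A, A_prime = real[:half], real[half:]
    B, B_prime = generate(A), generate(A)   # synthetic train and test sets
    results = {}
    for train_name, train_set in (("A", A), ("B", B)):
        model = fit(train_set)
        for test_name, test_set in (("A'", A_prime), ("B'", B_prime)):
            results[(train_name, test_name)] = score(model, test_set)
    return results

# Toy predictor: the "model" is the training mean, scored by negative MAE.
fit = lambda data: sum(data) / len(data)
score = lambda m, data: -sum(abs(m - d) for d in data) / len(data)
generate = lambda data: [d + 0.5 for d in data]  # stand-in generative model

results = evaluation_matrix([1.0, 2.0, 3.0, 4.0], generate, fit, score)
```

The (train on B, test on A') cell is the one that measures whether models trained on generated data generalize to real data.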
This allows us to compare the generalization abilities of the prediction algorithms both within a class of data (real/generated) and across classes (train on generated, test on real) [24]. We first predict the task end event type on GCUT data (e.g., EVICT, KILL) from time series observations. Such a predictor may be useful for cluster resource allocators. This prediction task reflects the correlation between the time series and the underlying attributes (namely, end event type). For the predictor, we trained various algorithms to demonstrate the generality of our results: a multilayer perceptron (MLP), Naive Bayes, logistic regression, decision trees, and a linear SVM. Figure 11 shows the test accuracy of each predictor when trained on generated data and tested on real data. Real data expectedly has the highest test accuracy. However, we find that DoppelGANger performs better than the other baselines for all five classifiers. For instance, with the MLP predictive model, DoppelGANger-generated data achieves 43% higher accuracy than the next-best baseline (AR), and 80% of the real data accuracy. Due to space constraints, we cover experiments on the remaining datasets in Appendix D.

Algorithm comparison: We evaluate whether algorithm rankings are preserved on generated data on the GCUT dataset by training different classifiers (MLP, SVM, Naive Bayes, decision tree, and logistic regression) to do end event type classification. We also evaluate this on the WWT dataset by training different regression models (MLP, linear regression, and kernel regression) to do time series forecasting (details in Appendix D). For this use case, users have only generated data, so we want the ordering (accuracy) of algorithms on real data to be preserved when we train and test them on generated data. In other words, for each class of generated data, we train each of the predictive models on B and test on B'. This is different from Figure 11, where we trained on generated data (B) and tested on real data (A'). We compare this ranking with the ground-truth ranking, in which the predictive models are trained on A and tested on A'. We then compute Spearman's rank correlation coefficient [82], which measures how well the ranking on generated data correlates with the ground-truth ranking.

Table 4: Rank correlation of prediction algorithms on the GCUT and WWT datasets. Higher is better.

Table 4 shows that DoppelGANger and AR achieve the best rank correlations. This result is misleading, however: because AR models exhibit minimal randomness, all predictors achieve the same high accuracy, so the AR model achieves near-perfect rank correlation despite producing low-quality samples. This highlights the importance of considering rank correlation together with other fidelity metrics. More results (e.g., exact prediction numbers) are in Appendix D.

Flexibility

A typical task in data-driven research is to develop algorithms for handling events that are poorly represented in the data. For example, data consumers may want to generate more failure events. DoppelGANger provides a natural mechanism for doing so. Suppose a user wants to output more data with a particular attribute vector A = [A_1, ..., A_m] corresponding to, say, failure events in a cluster. The user can, starting with the pre-trained attribute generator MLP, retrain it with attribute samples drawn from the desired distribution. To do so, we feed the generated attribute vectors into the same discriminator from Figure 6, but feed zeros to the time series inputs. This allows us to train the MLP generator adversarially without introducing more parameters. Notice that the user does not change the time series generator, nor does she provide additional time series samples. The conditional distribution of time series given a particular attribute combination stays the same. We give an example in Appendix E.
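A minimal sketch of this attribute retargeting, with a weighted sampler standing in for the retrained attribute MLP (all distributions and feature samplers below are hypothetical):

```python
import random

def make_generator(attribute_sampler, feature_sampler):
    """Compose an attribute sampler with a fixed conditional feature
    sampler: retargeting P(A) leaves P(R | A) untouched."""
    def generate(rng):
        a = attribute_sampler(rng)
        return a, feature_sampler(a, rng)
    return generate

# Stand-in for retraining the attribute MLP: swap the empirical
# attribute distribution (rare FAIL events) for a target distribution
# that oversamples them.
empirical = lambda rng: rng.choices(["FINISH", "FAIL"], weights=[0.95, 0.05])[0]
retarget  = lambda rng: rng.choices(["FINISH", "FAIL"], weights=[0.50, 0.50])[0]
features  = lambda a, rng: [rng.random() + (1.0 if a == "FAIL" else 0.0)
                            for _ in range(3)]

rng = random.Random(0)
gen = make_generator(retarget, features)
samples = [gen(rng) for _ in range(1000)]
fail_frac = sum(a == "FAIL" for a, _ in samples) / len(samples)
```

After retargeting, roughly half the samples are FAIL events, yet FAIL features still follow the same conditional distribution as before.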
Note that simply altering the marginal attribute distribution does not always give realistic results; for example, it is possible that introducing more failure events should change the conditional feature distribution. To our knowledge, no data-driven generative models capture such dependencies; doing so is an interesting question for future work. However, our approach gives more variability than simply replicating anomalous data events.

Privacy

Data holders' privacy concerns often fall into two categories: user privacy and business secrets. The first stems from regulatory restrictions and public-relations implications, while the second affects data holders' competitive advantages. Understanding the privacy properties of GANs is an active area of research, and a full exploration is outside the scope of this (or any single) paper. We set more pragmatic goals, as follows.

User Privacy

We focus on a narrow but common definition of user privacy in ML algorithms: the trained model should not depend too much on any individual user's data. To this end, we examine a common privacy attack (membership inference) and a privacy metric (differential privacy). While we cannot conclusively say that DoppelGANger (or any GAN-based system) inherently protects user privacy, we observe two surprising findings that inform future work in this space.

1: Subsetting hurts privacy. A common practice for protecting privacy is subsetting, or releasing only a subset of the dataset to prevent adversaries from inferring sensitive data properties [74]. We find that subsetting actually worsens a recently proposed class of attacks called membership inference attacks [40,79]. Given a trained machine learning model and a set of data samples, the goal of such an attack is to infer whether those samples were in the training data. The attacker does this by training a classifier to output whether each sample was in the training data.
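The metric used to score such an attack in the experiments below is simply the fraction of correct membership guesses:

```python
def attack_success_rate(guesses, membership):
    """Fraction of correct 'was this sample in the training set?'
    guesses; random guessing on a balanced set scores 0.5."""
    correct = sum(g == m for g, m in zip(guesses, membership))
    return correct / len(membership)

membership = [True, True, False, False]   # ground truth: in training set?
```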
Notice that anonymized datasets are trivially susceptible to membership inference attacks. We measure DoppelGANger's vulnerability to membership inference attacks [40] on the WWT dataset. As in prior literature [40], our metric is the success rate, or the percentage of successful trials in guessing whether a sample is in the training dataset. Naive random guessing gives 50%, whereas we found an attack success rate of only 51% (experimental details in Appendix F). This suggests robustness to membership inference attacks in this case. However, when we decrease the number of training samples, the attack success rate increases (Figure 12). For instance, with 200 training samples, the attack success rate is as high as 99.5%. We find similar trends in other datasets (Appendix F). From a machine learning point of view, this result makes sense: fewer training samples imply weaker generalization [97] and stronger overfitting. Our results suggest a practical guideline: to be more robust against membership attacks, use more training data. This contradicts the common practice of subsetting for better privacy.

2: Differentially-private GANs may destroy fidelity. Differential privacy (DP) [23] has emerged as the de facto standard for privacy-preserving data analysis. It has been applied to deep learning [2] by clipping and adding noise to the gradient updates in stochastic gradient descent. This technique has also been used with GANs [30,92,93] to generate privacy-preserving time series [12,24], ensuring that no single example in the training dataset disproportionately influences the model parameters. These papers argue that DP gives privacy at minimal cost to utility. To evaluate the efficacy of DP in our context, we trained DoppelGANger with DP on the WWT dataset using TensorFlow Privacy [5]. Figure 13 shows the autocorrelation of the resulting time series for different values of the privacy budget, ε.
Note that smaller values of ε denote more privacy; typically, ε ≈ 1 is considered a reasonable operating point. As ε is reduced (stronger privacy guarantees), the autocorrelations become progressively worse. In this figure, we show results only for the 19th epoch, as the results only worsen as training proceeds. Complete results can be found in Appendix F. These results highlight an important point: although DP seems to destroy our autocorrelation plots, this was not always evident from downstream metrics, such as predictive accuracy. This highlights the need to evaluate generative time series models at both a qualitative and a quantitative level; prior work has focused mainly on the latter [12,24]. These results also suggest that current DP mechanisms for GANs require significant improvements for privacy-preserving time series generation.

Business Secrets

In our discussions with major data holders, a primary concern about data sharing is leaking information about the types of resources available and in use at the enterprise. Many such business secrets tend to be embedded in the attributes. For instance, a dataset's location attribute could leak market information to competitors. DoppelGANger trivially allows data holders to obfuscate the attribute generator distribution to any desired distribution using the same techniques introduced in §5.2. Notice that this is actually a stronger privacy guarantee than differential privacy on the attribute distribution (equivalently, it corresponds to a perfect privacy level ε = 0): the data holder can choose any distribution to mask the original.

Discussion

The design of DoppelGANger was the result of a non-linear trial-and-error process in applying GANs to networking and systems datasets. We conclude with some key lessons.
Domain insight is critical: Our design process highlighted a number of fairly domain-specific problems, such as mode collapse for signals with wide dynamic ranges and poor temporal correlations without batching.

Figure 13: Autocorrelation vs. time lag (in days) for real data, ε = +inf, and differential privacy (dp) with different values of ε.

These problems do not arise in prior work, possibly due to the constrained data types in other domains [11,24]. However, these domain-specific problems significantly affected our design, leading to components like the min/max generator and batched outputs. We expect that further insights may arise as we move on to progressively harder classes of time series, such as network traces.

Differentially-private ML is not ready for prime time: At first glance, the burgeoning literature at the intersection of privacy and machine learning/generative models appeared to offer a promising solution to the privacy problem in DoppelGANger. Unfortunately, our experiments suggest a significant gap between the "hype" surrounding these theoretical approaches and the reality. We find that for many of our datasets the fidelity-privacy tradeoff is far from desirable in practice, which serves as a call for further practical research in this space.

"Less is more" for privacy leakage: From anecdotal conversations, the instinct of systems operators interested in data sharing seems to be to release generative models learned on small datasets, in an effort to bound their "attack surface." Perhaps counterintuitively, we find that this seemingly natural strategy can be counterproductive, making the models susceptible to simple membership inference attacks! As DoppelGANger-like systems mature and become part of operational workflows that enable data-driven collaborations, we argue that releasing generative models learned on larger datasets serves the dual benefit of providing better fidelity and privacy.
Acknowledgments

This research was sponsored in part by the U.S. Army Combat Capabilities Development Command Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Combat Capabilities Development Command Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. This work was also supported in part by Siemens AG and by the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. We would like to thank Safia Rahmat, Martin Otto, and Abishek Herle for valuable discussions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).

[8] Lars Backstrom, Cynthia Dwork, and Jon Kleinberg. Wherefore art thou r3579x?: Anonymized social networks, hidden patterns, and structural steganography.

[18] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172-2180, 2016.

[19] Edward Choi, Siddharth Biswal, Bradley Malin, Jon Duke, Walter F. Stewart, and Jimeng Sun. Generating multi-label discrete patient records using generative adversarial networks. arXiv preprint arXiv:1703.06490, 2017.

[20] Federal Communications Commission. Raw data - measuring broadband america - seventh report, 2018.
https://www.fcc.gov/reports-research/reports/measuring-broadband-america/raw-data-measuring-broadband-america-seventh

A Datasets

Google Cluster Usage Trace: Due to the substantial computational requirements of training GANs and our own resource constraints, we did not use the entire dataset. Instead, we uniformly sampled a subset of 100,000 tasks and used their corresponding measurement records to form our dataset. This sample was collected after filtering out the following categories of objects:

• 197 (0.17%) tasks don't have corresponding end events (such events may end outside the data collection period)
• 1403 (1.25%) tasks have discontinuous measurement records (i.e., the end timestamp of the previous measurement record does not equal the start timestamp of the next measurement record)
• 7018 (6.25%) tasks have an empty measurement record
• 3754 (3.34%) tasks have mismatched end times (the timestamp of the end event does not match the ending timestamp of the last measurement)

The maximum feature length in this dataset is 2497; however, 97.06% of samples have length at most 50. The schema of this dataset is in Table 5.

Wikipedia Web Traffic Dataset: The original dataset consists of 145k objects. After removing samples with missing data, 117k objects are left, from which we sample 100k objects for our evaluation. All samples have feature length 550. The schema of this dataset is in Table 6.

Measuring Broadband America (MBA) Dataset: This small sample set makes it hard to understand the actual dynamic patterns in this dataset. To increase the number of valid objects, we take the average of measurements every 6 hours for each home. As long as there is at least one measurement in each 6-hour period, we regard it as a valid object. In this way, we get 739 valid objects with measurements from 10/01/2017 to 10/15/2017, from which we sample 600 objects for our evaluation. All samples have feature length 56. The schema of this dataset is in Table 7.
B Implementation Details

DoppelGANger: The attribute generator and min/max generator are MLPs with 2 hidden layers and 100 units in each layer. The feature generator is 1 layer of LSTM with 100 units. A softmax layer is applied for categorical feature and attribute outputs. Sigmoid or tanh is applied for continuous feature and attribute outputs, depending on whether the data is normalized to [0,1] or [-1,1] (this is configurable). The discriminator and auxiliary discriminator are MLPs with 4 hidden layers and 200 units in each layer. The gradient penalty weight was 10.0, as suggested in [37]. The network was trained using the Adam optimizer with a learning rate of 0.001 and batch size of 100 for both generators and discriminators.

AR: We used p = 3, i.e., we used the past three samples to predict the next. The AR model was an MLP with 4 hidden layers and 200 units in each layer. The MLP was trained using the Adam optimizer [50] with a learning rate of 0.001 and batch size of 100.

RNN: For this baseline, we used the LSTM (long short-term memory) [43] variant of RNN: 1 layer of LSTM with 100 units. The network was trained using the Adam optimizer with a learning rate of 0.001 and batch size of 100.

Naive GAN: The generator and discriminator are MLPs with 4 hidden layers and 200 units in each layer. The gradient penalty weight was 10.0, as suggested in [37]. The network was trained using the Adam optimizer with a learning rate of 0.001 and batch size of 100 for both the generator and discriminator.

C Additional Fidelity Results

Temporal length: Figure 14 shows the length distributions of DoppelGANger and the baselines on the GCUT dataset. It is clear that DoppelGANger has the best fidelity.

Attribute distribution: Figures 15, 16, and 17 show the histograms of Wikipedia domain, access type, and agent for the naive GAN and DoppelGANger. DoppelGANger learns the distributions well, whereas the naive GAN cannot. Figures 18, 19, and 22 show the histograms of ISP, technology, and state for DoppelGANger and all baselines.
Again, for the HMM, AR, and RNN baselines, the attributes are drawn directly from the empirical distribution on the training data, so they necessarily have the best fidelity. We compute the Jensen-Shannon divergence (JSD) between the generated distribution and the real distribution in Figures 20, 21, and 23. We see that DoppelGANger's JSD is actually very close to that of HMM, AR, and RNN. (This is in part because this dataset has only 600 samples to compare.)

DoppelGANger does not simply memorize training samples: Figures 24, 25, and 26 show some generated samples from DoppelGANger and their nearest (based on squared error) samples in the training data from the three datasets. The results show that DoppelGANger is not memorizing training samples. To achieve the good fidelity results we have shown before, DoppelGANger must indeed learn the underlying structure of the samples.

D Additional Case Study Results

Predictive modeling: For the WWT dataset, the predictive modeling task involves forecasting the page views for the next 50 days, given those for the first 500 days. We want to learn a (relatively) parsimonious model that can take an arbitrary length-500 time series as input and predict the next 50 time steps. For this purpose, we train various regression models: an MLP with five hidden layers (200 nodes each), an MLP with just one hidden layer (100 nodes), a linear regression model, and a kernel regression model using an RBF kernel. To evaluate each model, we compute the coefficient of determination, R², which captures how well a regression model describes a particular dataset. Figure 27 shows the R² for each of these models for each of our generative models and the real data. Here we train each regression model on generated data (B) and test it on real data (A'), so it is to be expected that real data performs best. It is clear that DoppelGANger performs better than the other baselines for all regression models.
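The R² score used here is one minus the ratio of residual to total sum of squares. A pure-Python sketch of that formula (a model that always predicts the mean scores 0; worse models score negative, which is relevant to the baselines below):

```python
# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
# R^2 = 1 for a perfect fit, 0 for always predicting the mean,
# and negative for models worse than the mean predictor.

def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot
```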
Note that the RNN, AR, and naive GAN baselines sometimes have large negative R², which are therefore not visualized in this plot.

Algorithm comparison: Figure 28 shows the ranking of prediction algorithms on DoppelGANger's and the baselines' generated data. Combined with Table 4, we see that DoppelGANger and AR are the best at preserving the ranking of prediction algorithms.

E Additional Flexibility Results

To illustrate the process outlined in Section 5.2, we show an example of how to generate an arbitrary attribute distribution based on the Wikipedia web traffic dataset. We start with the true joint attribute distribution of domain names and access types. We then impose an arbitrary desired joint distribution (e.g., uniform, discretized Gaussian, impulse). We then retrain our attribute generator to match the target distribution. Figure 30 shows the heatmap of our target joint (Gaussian) probability distribution compared to the (very similar) distribution of the retrained generator. This result demonstrates the ability of DoppelGANger to change the attribute distribution to an arbitrary one according to the needs of the data holder and consumer.

F Additional Privacy Results

Figure 32 shows the autocorrelation of generated page views from DoppelGANger with different differential privacy degrees (ε) and at different training epochs. We see that no matter what ε is, as training proceeds, the fidelity of the time series gets worse. This may be because the noise added for differential privacy during the training process disrupts the learning of the GAN. On the other hand, for a fixed epoch (e.g., 19), higher ε gives poorer fidelity, which we have highlighted in the main text §5.3.1.

G Additional Design Validation Results

Feature batch size S: One parameter in DoppelGANger is the feature batch size S (§4.1.1). In this section we explore how S influences the results, and thus support the recommendation we give in §4.4. We enumerate S = 1, 5, 10, 25, 50 on the WWT dataset.
Recall that in this dataset the feature (daily page views) has weekly and annual correlation patterns. We find that when 10 ≤ S ≤ 25, DoppelGANger stably captures both patterns during the training process. The full correlation plot is in Figure 33.

Auxiliary discriminator: Figures 34 and 35 show the generated (max±min)/2 distributions from DoppelGANger with and without the auxiliary discriminator on the WWT dataset. After adding the auxiliary discriminator, the distributions are learned much better.

Figure 1: Autocorrelation of daily page views for the Wikipedia Web Traffic dataset. DoppelGANger captures both the weekly and the annual correlation pattern.
Figure 3: Original GAN architecture from [33].
Figure 4: Batching parameter S vs. the MSE of generated and real sample autocorrelations on the WWT dataset.
Figure 7: Histogram of task duration for the Google Cluster Usage Traces. RNN-generated data misses the second mode, but DoppelGANger captures it.
Figure 8: Histograms of end event types from GCUT.
Figure 9: Total bandwidth usage over 2 weeks in the MBA dataset for (a) DSL and (b) cable users.
Figure 10: Evaluation setup. Our real data consists of A ∪ A'; using training data A, we generate a set of samples B ∪ B'. Subsequent experiments train models of downstream tasks on A or B, our training sets, and then test on A' or B'.
Figure 11: End event type prediction accuracy on GCUT.
Figure 12: Membership inference attack against DoppelGANger on the WWT dataset, when changing training set size.
Figure 35: Distribution of (max−min)/2 from DoppelGANger (a) without and (b) with the auxiliary discriminator (WWT data).

FCC MBA dataset: We used the latest cleaned data published by FCC MBA in December 2018 [20].
This dataset contains hourly traffic measurements from 4378 homes in September and October 2017. However, many measurements are missing in this dataset. Considering the period from 10/01/2017 to 10/15/2017, only 45 homes have complete network usage measurements every hour.

Figure 14: Histogram of task duration for the GCUT dataset. DoppelGANger gives the best fidelity.
Figure 16: Histograms of access type from the WWT dataset.
Figure 17: Histograms of agent from the WWT dataset.
Figure 19: Histograms of technology from the MBA dataset.
Figure 20: JSD between the generated and real ISP distributions from the MBA dataset.
Figure 21: JSD between the generated and real technology distributions from the MBA dataset.
Figure 22: Histograms of state from the MBA dataset.
Figure 23: JSD between the generated and real state distributions from the MBA dataset.
Figure 24: Three time series samples selected uniformly at random from the synthetic dataset generated using DoppelGANger, and the corresponding top-3 nearest neighbours (based on squared error) from the real WWT dataset. The time series shown here is daily page views (normalized).
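The nearest-neighbour lookup behind Figures 24-26 (find the training series closest to a generated sample by squared error, to check for memorization) can be sketched as follows. Series are assumed here to be equal-length lists of floats.

```python
# Sketch of the memorization check: for a generated sample, find the top-k
# training series with the smallest sum of squared errors.

def squared_error(a, b):
    """Sum of squared differences between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def top_k_nearest(generated, training_set, k=3):
    """Return the k training series closest to `generated` (smallest SSE)."""
    ranked = sorted(training_set, key=lambda t: squared_error(generated, t))
    return ranked[:k]
```

If the closest neighbours still differ visibly from the generated sample, the generator is not simply reproducing training data.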
Figure 25: Three time series samples selected uniformly at random from the synthetic dataset generated using DoppelGANger, and the corresponding top-3 nearest neighbours (based on squared error) from the real GCUT dataset. The time series shown here is CPU rate (normalized).
Figure 26: Three time series samples selected uniformly at random from the synthetic dataset generated using DoppelGANger, and the corresponding top-3 nearest neighbours (based on squared error) from the real MBA dataset. The time series shown here is traffic byte counter (normalized).
Figure 27: Coefficient of determination for WWT time series forecasting. Higher is better.
Figure 28: Ranking of end event type prediction algorithms on the GCUT dataset.
Figure 29: Ranking of traffic prediction algorithms on the WWT dataset.
Figure 30: Target vs. generated joint distributions of attributes from the Wikipedia web traffic dataset. We impose a higher probability mass on the attribute combination corresponding to desktop traffic to domain 'fr.wikipedia.org'.
Figure 31: Success rate of the membership inference attack against DoppelGANger on the GCUT dataset, when changing the number of training samples.
Figure 32: Autocorrelation vs. time lag (in days) for real data, ε = +inf, and differential privacy with different values of ε, at different epochs during training.
Figure 33: Autocorrelation vs. time lag (in days) for different S at different epochs during training.
Figure 34: Distribution of (max+min)/2 from DoppelGANger (a) without and (b) with the auxiliary discriminator (WWT data).

A dataset {O_1, O_2, ..., O_n} is defined as a set of objects O_i. In a network trace, for example, the time series of an object might contain the packets sent out from a specific client, with each packet consisting of a single record. Different objects may contain different numbers of records (i.e., time series of different lengths).
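The object/record structure described here can be sketched as plain data classes. This is an illustrative model, not the paper's implementation: an object pairs metadata attributes with a variable-length time series of records, and each record holds a timestamp plus K features.

```python
# Illustrative data classes for the object model: an object pairs m metadata
# attributes with a variable-length time series of records; each record
# holds a timestamp plus K features.
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    timestamp: float
    features: List[float]  # K features per record

@dataclass
class DataObject:
    attributes: List[str]  # m attributes (e.g., location, ISP)
    records: List[Record]  # T_i records; T_i may differ across objects

    def num_records(self):
        """T_i, the number of records in this object's time series."""
        return len(self.records)
```

Timestamps within one object's `records` are kept sorted, matching the definition in the text.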
The number of records for object O_i is given by T_i. Note that the timestamps are sorted, i.e., t^i_j < t^i_(j+1). Each object represents an atomic, high-dimensional data element (i.e., the combination of a single time series with its associated metadata). More precisely, each object O_i = (A_i, R_i) contains m attributes A_i = [A^i_1, A^i_2, ..., A^i_m]. For example, attribute A^i_j could represent user i's physical location, and A^i_k for k ≠ j the user's ISP. Note that we can support datasets in which multiple objects have the same set of attributes. The second component of each object is a time series of records R_i = [R^i_1, R^i_2, ..., R^i_(T_i)]. Each record R^i_j = (t^i_j, f^i_j) contains a timestamp t^i_j and K features f^i_j = [f^i_(j,1), f^i_(j,2), ..., f^i_(j,K)] (e.g., the features of the packet).

[Figure: workflow between data holder and data consumer — 1. the consumer requests data (providing the data schema, privacy constraints, collection statistics (optional), desired attribute distribution, and desired quantity (optional)); 2. the holder trains DoppelGANger on the original data; 3. the holder returns the model parameters; 4. the consumer generates synthetic data.]

Table 2: Challenging properties of the studied datasets.

During this process, some parameters can be tuned. We provide recommendations on how to set them as follows and show the supporting results in Appendix G. After training, DoppelGANger parameters can then be released to the data consumer, who can use them to generate a desired sample size and, optionally, a desired attribute distribution.

Feature batch size S (§4.1.1): This parameter controls the number of features generated at each RNN pass. Empirically, setting S so that T/S (the number of steps the RNN needs to take) is around 50 gives good results, whereas prior time series GANs use S = 1 [11, 24].

Min/Max Generator (§4.1.3): This generator can be turned on or off. When the range of feature values varies widely across samples, it can improve data fidelity. Traditional GANs lack this generator, possibly because the kinds of data they consider are more controlled and do not need it.

Auxiliary Discriminator (§4.2): This discriminator can be turned on or off. It helps regulate data fidelity for longer, complex signals. Traditional GAN systems lack such a discriminator because they do not separate the conditional generation of attributes from features; we do this for fidelity and flexibility.

Private attribute distributions: If the data holder wants to hide an attribute distribution, DoppelGANger can retrain its attribute generator to a different distribution without affecting other parts of the network. This again is unique to DoppelGANger because of its isolated attribute generation.

Wasserstein-1 distance is the integrated absolute error between two CDFs.

Table 3: Wasserstein-1 distance of the total bandwidth distribution of DSL and cable users. Lower is better.
        DoppelGANger  AR    RNN   HMM   Naive GAN
DSL     0.68          1.34  2.33  3.46  1.14
Cable   0.74          6.57  2.46  7.98  0.87
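For one-dimensional empirical distributions with the same number of samples, integrating the absolute CDF difference (the Wasserstein-1 distance reported in Table 3) reduces to averaging the gaps between sorted samples. A minimal sketch under that equal-count assumption:

```python
# Wasserstein-1 distance between two 1-D empirical distributions with equal
# sample counts: the integrated |CDF_x - CDF_y| equals the mean absolute
# difference of the order statistics.

def wasserstein_1(xs, ys):
    assert len(xs) == len(ys), "this shortcut requires equal sample counts"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Shifting one sample set by a constant c shifts the distance by |c|, which makes the metric easy to sanity-check.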
Table 4: Rank correlation between algorithm rankings on generated and real data. Higher is better.
        DoppelGANger  AR    RNN   HMM    Naive GAN
GCUT    1.00          1.00  1.00  0.01   0.90
WWT     0.80          0.80  0.20  -0.60  -0.60

Figure 18: Histograms of ISP from the MBA dataset.

Table 5: Schema of the GCUT dataset. Attributes and features are described in more detail in [73]. Different tasks may have different numbers of measurement records (i.e., T_i may differ).
Attributes:
  end event type — The reason that the task finishes (FAIL, KILL, EVICT, etc.)
Features (all float numbers):
  CPU rate — Mean CPU rate
  maximum CPU rate — Maximum CPU rate
  sampled CPU usage — The CPU rate sampled uniformly on all 1-second measurements
  canonical memory usage — Canonical memory usage measurement
  assigned memory usage — Memory assigned to the container
  maximum memory usage — Maximum canonical memory usage
  unmapped page cache — Linux page cache that was not mapped into any userspace process
  total page cache — Total Linux page cache
  local disk space usage — Runtime local disk capacity usage
Timestamp:
  The time of the measurement (2011-05-01 01:01, etc.)

Table 6: Schema of the WWT dataset.
Attributes:
  Wikipedia domain — The main domain name of the Wikipedia page (zh.wikipedia.org, commons.wikimedia.org, etc.)
  access type — The access method (mobile-web, desktop, all-access, etc.)
  agent — The agent type (spider, all-agent, etc.)
Features:
  views — The number of views (integers)
Timestamp:
  The date that the page view is counted on (2015-07-01, etc.)

Table 7: Schema of the MBA dataset.
Attributes:
  technology — The connection technology of the unit (cable, fiber, etc.)
  ISP — Internet service provider of the unit (AT&T, Verizon, etc.)
  state — The state where the unit is located (PA, CA, etc.)
Features (float numbers):
  ping loss rate — UDP ping loss rate to the server that has the lowest loss rate within the hour
  traffic byte counter — Total number of bytes sent and received in the hour (excluding the traffic due to the active measurements)
Timestamp:
  The timestamp that the measurement was conducted on

Footnotes:
Wasserstein loss has been widely shown to give better stability and mode coverage than the original cross-entropy loss, and it is now common practice to start with Wasserstein loss in GAN designs [24, 37, 49].
These design choices are consistent with prior work on GAN-based synthetic data generation [19, 29, 36, 38].
The code is available at https://github.com/fjxmlzn/DoppelGANger
https://www.kaggle.com/c/web-traffic-time-series-forecasting
Such properties are sometimes ignored in the ML literature in favor of downstream performance metrics; however, in systems and networking, we argue such microbenchmarks are important.
Since the attacker is assumed to already have the victim's data, it can trivially check if that data is in the anonymized dataset.
For a time series with points (x_i, y_i) for i = 1, ..., n and a regression function f(x), R² is defined as $R^2 = 1 - \frac{\sum_i (y_i - f(x_i))^2}{\sum_i (y_i - \bar{y})^2}$, where $\bar{y} = \frac{1}{n} \sum_i y_i$ is the mean y-value. Notice that −∞ ≤ R² ≤ 1, and a higher score indicates a better fit.

References
The internet topology data kit - 2011-04. http://www.caida.org/data/active/internet-topology-data-kit.
Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308-318. ACM, 2016.
Abdelnaser Adas. Traffic models in broadband networks. IEEE Communications Magazine, 35(7):82-89, 1997.
William Aiello, Anna Gilbert, Brian Rexroad, and Vyas Sekar. Sparse approximations for high fidelity compression of network traffic data. In Proceedings of the 5th ACM SIGCOMM conference on Internet measurement, pages 22-22. USENIX Association, 2005.
Galen Andrew, Steve Chien, and Nicolas Papernot. Tensorflow privacy. https://github.com/tensorflow/privacy, 2019.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
Sanjeev Arora and Yi Zhang. Do gans actually learn the distribution? an empirical study. arXiv preprint arXiv:1706.08224, 2017.
Lorenzo Frigerio, Anderson Santana de Oliveira, Laurent Gomez, and Patrick Duverger. Differentially private generative adversarial networks for time series, continuous, and discrete open data. In IFIP International Conference on ICT Systems Security and Privacy Protection, pages 151-164. Springer, 2019.
Song Fu and Cheng-Zhong Xu. Exploring event correlation for failure prediction in coalitions of clusters. In Proceedings of the 2007 ACM/IEEE conference on Supercomputing, page 41. ACM, 2007.
Alicia Mateo González, AM Son Roque, and Javier García-González. Modeling and forecasting electricity prices with input/output hidden markov models. IEEE Transactions on Power Systems, 20(1):13-24, 2005.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680, 2014.
Google. Web traffic time series forecasting, 2018. https://www.kaggle.com/c/web-traffic-time-series-forecasting.
Robert Grandl, Ganesh Ananthanarayanan, Srikanth Kandula, Sriram Rao, and Aditya Akella. Multi-resource packing for cluster schedulers. ACM SIGCOMM Computer Communication Review, 44(4):455-466, 2015.
John T Guibas, Tejpal S Virdi, and Peter S Li. Synthetic medical images from dual generative adversarial networks. arXiv preprint arXiv:1709.01872, 2017.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5767-5777, 2017.
Changhee Han, Hideaki Hayashi, Leonardo Rundo, Ryosuke Araki, Wataru Shimoda, Shinichi Muramatsu, Yujiro Furukawa, Giancarlo Mauri, and Hideki Nakayama. Gan-based synthetic brain mr image generation. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 734-738. IEEE, 2018.
Md Rafiul Hassan and Baikunth Nath. Stock market forecasting using hidden markov model: a new approach. In Intelligent Systems Design and Applications, 2005. ISDA'05. Proceedings. 5th International Conference on, pages 192-196. IEEE, 2005.
Jamie Hayes, Luca Melis, George Danezis, and Emiliano De Cristofaro. Logan: Membership inference attacks against generative models. Proceedings on Privacy Enhancing Technologies, 2019(1):133-152, 2019.
Keqiang He, Alexis Fisher, Liang Wang, Aaron Gember, Aditya Akella, and Thomas Ristenpart. Next stop, the cloud: Understanding modern web service deployment in ec2 and azure. In Proceedings of the 2013 conference on Internet measurement conference, pages 177-190. ACM, 2013.
Félix Hernández-Campos, Kevin Jeffay, and F Donelson Smith. Modeling and generating tcp application workloads. In 2007 Fourth International Conference on Broadband Communications, Networks and Systems (BROADNETS'07), pages 280-289. IEEE, 2007.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991, 2015.
Frederick Jelinek. Markov source modeling of text generation. In The Impact of Processing Techniques on Communications, pages 569-591. Springer, 1985.
Junchen Jiang, Rajdeep Das, Ganesh Ananthanarayanan, Philip A Chou, Venkata Padmanabhan, Vyas Sekar, Esbjorn Dominique, Marcin Goliszewski, Dalibor Kukoleca, Renat Vafin, et al. Via: Improving internet telephony call quality using predictive relay selection. In Proceedings of the 2016 ACM SIGCOMM Conference, pages 286-299. ACM, 2016.
Junchen Jiang, Vyas Sekar, Henry Milner, Davis Shepherd, Ion Stoica, and Hui Zhang. Cfa: A practical prediction system for video qoe optimization. In NSDI, pages 137-150, 2016.
Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. Gated orthogonal recurrent units: On learning to forget. Neural computation, 31(4):765-783, 2019.
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360, 2016.
Qu Li, Hu Jianming, and Zhang Yi. A flow volumes data compression approach for traffic network based on principal component analysis. In Intelligent Transportation Systems Conference, 2007. ITSC 2007. IEEE, pages 125-130. IEEE, 2007.
Tiancheng Li and Ninghui Li. On the tradeoff between privacy and utility in data publishing. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 517-526. ACM, 2009.
Yang Li, Yu-ning Dong, Hui Zhang, Hai-tao Zhao, Hai-xian Shi, and Xin-xing Zhao. Spectrum usage prediction based on high-order markov model for cognitive radio networks. In Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on, pages 2784-2788. IEEE, 2010.
Zhijing Li, Zihui Ge, Ajay Mahimkar, Jia Wang, Ben Y Zhao, Haitao Zheng, Joanne Emmons, and Laura Ogden. Predictive analysis in network function virtualization. In Proceedings of the Internet Measurement Conference 2018, pages 161-167. ACM, 2018.
Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. Pacgan: The power of two samples in generative adversarial networks. In Advances in Neural Information Processing Systems, pages 1505-1514, 2018.
Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, and Sewoong Oh. Infogan-cr: Disentangling generative adversarial networks with contrastive regularizers. arXiv preprint arXiv:1906.06034, 2019.
Bede Liu and DC Munson. Generation of a random sequence having a jointly specified marginal distribution and autocovariance. IEEE Transactions on Acoustics, Speech, and Signal Processing, 30(6):973-983, 1982.
Ning Liu, Zhe Li, Jielong Xu, Zhiyuan Xu, Sheng Lin, Qinru Qiu, Jian Tang, and Yanzhi Wang. A hierarchical framework of cloud resource allocation and power management using deep reinforcement learning. In ICDCS, pages 372-382. IEEE, 2017.
Hongzi Mao, Mohammad Alizadeh, Ishai Menache, and Srikanth Kandula. Resource management with deep reinforcement learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks, pages 50-56. ACM, 2016.
Mahmoud Maqableh, Huda Karajeh, et al. Job scheduling for cloud computing using neural networks. Communications and Network, 6(03):191, 2014.
Tony McGregor, Shane Alcock, and Daniel Karrenberg. The ripe ncc internet measurement data repository. In International Conference on Passive and Active Network Measurement, pages 111-120. Springer, 2010.
Benjamin Melamed. An overview of tes processes and modeling methodology. In Performance Evaluation of Computer and Communication Systems, pages 359-393. Springer, 1993.
Benjamin Melamed and Jon R Hill. Applications of the tes modeling methodology. In Proceedings of 1993 Winter Simulation Conference (WSC'93), pages 1330-1338. IEEE, 1993.
Benjamin Melamed, Jon R Hill, and David Goldsman. The tes methodology: Modeling empirical stationary time series. In Proceedings of the 24th conference on Winter simulation, pages 135-144. ACM, 1992.
Benjamin Melamed and Dimitrios E Pendarakis. Modeling full-length vbr video using markov-renewal-modulated tes models. IEEE Journal on Selected Areas in Communications, 16(5):600-611, 1998.
Behnam Montazeri, Yilong Li, Mohammad Alizadeh, and John Ousterhout. Homa: A receiver-driven low-latency transport protocol using network priorities. In SIGCOMM, 2018.
Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on, pages 111-125. IEEE, 2008.
Thi Thanh Sang Nguyen, Hai Yan Lu, and Jie Lu. Webpage recommendation based on web usage and domain knowledge. IEEE Transactions on Knowledge and Data Engineering, 26(10):2574-2587, 2013.
Antonio Nucci, Ashwin Sridharan, and Nina Taft. The problem of synthetically generating ip traffic matrices: initial recommendations. ACM SIGCOMM Computer Communication Review, 35(3):19-32, 2005.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 2642-2651. JMLR.org, 2017.
Paul Ohm. Broken promises of privacy: Responding to the surprising failure of anonymization. Ucla L. Rev., 57:1701, 2009.
Charles Reiss, John Wilkes, and Joseph L Hellerstein. Google cluster-usage traces: format + schema. Google Inc., White Paper, pages 1-14, 2011.
Charles Reiss, John Wilkes, and Joseph L Hellerstein. Obfuscatory obscanturism: making workload traces of commercially-sensitive systems safe to release. In 2012 IEEE Network Operations and Management Symposium, pages 1279-1286. IEEE, 2012.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234-2242, 2016.
On the fidelity of 802.11 packet traces.
Aaron Schulman, Dave Levin, Neil Spring, International Conference on Passive and Active Network Measurement. SpringerAaron Schulman, Dave Levin, and Neil Spring. On the fidelity of 802.11 packet traces. In International Con- ference on Passive and Active Network Measurement, pages 132-141. Springer, 2008. Multidimensional density estimation. Handbook of statistics. W David, Stephan R Scott, Sain, 24David W Scott and Stephan R Sain. Multidimensional density estimation. Handbook of statistics, 24:229-261, 2005. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. Hoo-Chang Shin, A Neil, Jameson K Tenenholtz, Rogers, G Christopher, Schwarz, L Matthew, Jeffrey L Senjem, Katherine P Gunter, Mark Andriole, Michalski, International Workshop on Simulation and Synthesis in Medical Imaging. SpringerHoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew L Senjem, Jeffrey L Gunter, Katherine P Andriole, and Mark Michalski. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In International Workshop on Simulation and Synthesis in Medical Imaging, pages 1-11. Springer, 2018. Membership inference attacks against machine learning models. Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov, 2017 IEEE Symposium on Security and Privacy (SP). IEEEReza Shokri, Marco Stronati, Congzheng Song, and Vi- taly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3-18. IEEE, 2017. Self-configuring network traffic generation. Joel Sommers, Paul Barford, Proceedings of the 4th ACM SIGCOMM conference on Internet measurement. the 4th ACM SIGCOMM conference on Internet measurementACMJoel Sommers and Paul Barford. Self-configuring net- work traffic generation. In Proceedings of the 4th ACM SIGCOMM conference on Internet measurement, pages 68-81. ACM, 2004. 
How to identify and estimate the largest traffic matrix elements in a dynamic environment. Augustin Soule, Antonio Nucci, Rene Cruz, Emilio Leonardi, Nina Taft, ACM SIGMETRICS Performance Evaluation Review. ACM32Augustin Soule, Antonio Nucci, Rene Cruz, Emilio Leonardi, and Nina Taft. How to identify and estimate the largest traffic matrix elements in a dynamic environ- ment. In ACM SIGMETRICS Performance Evaluation Review, volume 32, pages 73-84. ACM, 2004. The proof and measurement of association between two things. Charles Spearman, American journal of Psychology. 151Charles Spearman. The proof and measurement of as- sociation between two things. American journal of Psychology, 15(1):72-101, 1904. Veegan: Reducing mode collapse in gans using implicit variational learning. Akash Srivastava, Lazar Valkov, Chris Russell, Charles Michael U Gutmann, Sutton, Advances in Neural Information Processing Systems. Akash Srivastava, Lazar Valkov, Chris Russell, Michael U Gutmann, and Charles Sutton. Veegan: Reducing mode collapse in gans using implicit variational learning. In Advances in Neural Information Processing Systems, pages 3308-3318, 2017. Web usage mining: Discovery and applications of usage patterns from web data. Jaideep Srivastava, Robert Cooley, Mukund Deshpande, Pang-Ning Tan, Acm Sigkdd Explorations Newsletter. 12Jaideep Srivastava, Robert Cooley, Mukund Deshpande, and Pang-Ning Tan. Web usage mining: Discovery and applications of usage patterns from web data. Acm Sigkdd Explorations Newsletter, 1(2):12-23, 2000. Challenges in inferring internet congestion using throughput measurements. Srikanth Sundaresan, Xiaohong Deng, Yun Feng, Danny Lee, Amogh Dhamdhere, Proceedings of the 2017 Internet Measurement Conference. the 2017 Internet Measurement ConferenceACMSrikanth Sundaresan, Xiaohong Deng, Yun Feng, Danny Lee, and Amogh Dhamdhere. Challenges in inferring internet congestion using throughput measurements. 
In Proceedings of the 2017 Internet Measurement Confer- ence, pages 43-56. ACM, 2017. Simple demographics often identify people uniquely. Latanya Sweeney, Health. 671Latanya Sweeney. Simple demographics often identify people uniquely. Health (San Francisco), 671:1-34, 2000. Robustness of conditional gans to noisy labels. Ashish Kiran K Thekumparampil, Zinan Khetan, Sewoong Lin, Oh, Advances in Neural Information Processing Systems. Kiran K Thekumparampil, Ashish Khetan, Zinan Lin, and Sewoong Oh. Robustness of conditional gans to noisy labels. In Advances in Neural Information Pro- cessing Systems, pages 10271-10282, 2018. Comparison of the arma, arima, and the autoregressive artificial neural network models in forecasting the monthly inflow of dez dam reservoir. Mohammad Valipour, Mohammad Ebrahim Banihabib, Seyyed Mahmood Reza Behbahani, Journal of hydrology. 476Mohammad Valipour, Mohammad Ebrahim Banihabib, and Seyyed Mahmood Reza Behbahani. Comparison of the arma, arima, and the autoregressive artificial neural network models in forecasting the monthly inflow of dez dam reservoir. Journal of hydrology, 476:433-441, 2013. Swing: Realistic and responsive network traffic generation. Venkatesh Kashi, Amin Vishwanath, Vahdat, IEEE/ACM Transactions on Networking (TON). 173Kashi Venkatesh Vishwanath and Amin Vahdat. Swing: Realistic and responsive network traffic gen- eration. IEEE/ACM Transactions on Networking (TON), 17(3):712-725, 2009. Geant/abilene network topology data and traffic traces. Ning Wang, Ning Wang. Geant/abilene network topology data and traffic traces, 2004. A learning algorithm for continually running fully recurrent neural networks. J Ronald, David Williams, Zipser, Neural computation. 12Ronald J Williams and David Zipser. A learning al- gorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280, 1989. 
Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, Jiayu Zhou, arXiv:1802.06739Differentially private generative adversarial network. arXiv preprintLiyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. arXiv preprint arXiv:1802.06739, 2018. Ganobfuscator: Mitigating information leakage under gan via differential privacy. Chugui Xu, Ju Ren, Deyu Zhang, Yaoxue Zhang, Zhan Qin, Kui Ren, IEEE Transactions on Information Forensics and Security. 149Chugui Xu, Ju Ren, Deyu Zhang, Yaoxue Zhang, Zhan Qin, and Kui Ren. Ganobfuscator: Mitigating informa- tion leakage under gan via differential privacy. IEEE Transactions on Information Forensics and Security, 14(9):2358-2371, 2019. Seqgan: Sequence generative adversarial nets with policy gradient. Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu, AAAI. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Se- qgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852-2858, 2017. Mobility estimation for wireless networks based on an autoregressive model. R Zainab, Brian L Zaidi, Mark, Global Telecommunications Conference, 2004. GLOBECOM'04. IEEE6Zainab R Zaidi and Brian L Mark. Mobility estima- tion for wireless networks based on an autoregressive model. In Global Telecommunications Conference, 2004. GLOBECOM'04. IEEE, volume 6, pages 3405- 3409. IEEE, 2004.
[ "https://github.com/fjxmlzn/DoppelGANger", "https://github.com/tensorflow/" ]
[ "Spontaneous Capillarity-Driven Droplet Ejection", "Spontaneous Capillarity-Driven Droplet Ejection" ]
[ "Andrew Wollman \nPortland State University\n\n", "Trevor Snyder \nXerox Inc\n\n", "Donald Pettit \nNASA\nJohnson Space Center\n\n", "Mark Weislogel \nPortland State University\n\n" ]
[ "Portland State University\n", "Xerox Inc\n", "NASA\nJohnson Space Center\n", "Portland State University\n" ]
[]
The first large length-scale capillary rise experiments were conducted by R. Siegel using a drop tower at NASA LeRC shortly after the 1957 launch of Sputnik I. Siegel was curious whether the wetting fluid would expel from the end of short capillary tubes in a low-gravity environment. He observed that although the fluid partially left the tubes, it was always pulled back by surface tension, which caused the fluid to remain pinned to the tubes' ends. By exploiting tube geometry and fluid properties, we demonstrate that such capillary flows can in fact eject a variety of jets and drops. This fluid dynamics video provides a historical overview of such spontaneous capillarity-driven droplet ejection. Footage of terrestrial and low-earth-orbit experiments is also shown. Droplets generated in a microgravity environment are 10^6 times larger than those ejected in a terrestrial environment. The accompanying article provides a summary of the critical parameters and experimental procedures. Scaling the governing equations reveals the dimensionless groups that identify topological regimes of droplet behavior, which provides a novel perspective from which to further investigate jets, droplets, and other capillary phenomena over large length scales.
null
[ "https://arxiv.org/pdf/1209.3999v1.pdf" ]
118,588,879
1209.3999
2074756b853200d90becb00cbb97f2048f13ac95
Spontaneous Capillarity-Driven Droplet Ejection

May 2, 2014

Andrew Wollman (Portland State University), Trevor Snyder (Xerox Inc), Donald Pettit (NASA Johnson Space Center), Mark Weislogel (Portland State University)

The first large length-scale capillary rise experiments were conducted by R. Siegel using a drop tower at NASA LeRC shortly after the 1957 launch of Sputnik I. Siegel was curious whether the wetting fluid would expel from the end of short capillary tubes in a low-gravity environment. He observed that although the fluid partially left the tubes, it was always pulled back by surface tension, which caused the fluid to remain pinned to the tubes' ends. By exploiting tube geometry and fluid properties, we demonstrate that such capillary flows can in fact eject a variety of jets and drops. This fluid dynamics video provides a historical overview of such spontaneous capillarity-driven droplet ejection. Footage of terrestrial and low-earth-orbit experiments is also shown. Droplets generated in a microgravity environment are 10^6 times larger than those ejected in a terrestrial environment. The accompanying article provides a summary of the critical parameters and experimental procedures. Scaling the governing equations reveals the dimensionless groups that identify topological regimes of droplet behavior, which provides a novel perspective from which to further investigate jets, droplets, and other capillary phenomena over large length scales.

Introduction

"Spontaneous Capillarity-Driven Droplet Ejection" is a short video that provides a demonstration of the auto-ejection of a liquid from a tube under the influence of capillary forces alone. Attention is given to the historical significance and potential research applications of auto-ejection in terrestrial and low-g environments. NASA scientist R. Siegel was the first to ponder auto-ejection from cylindrical tubes [1]. We repeat Siegel's experiments.
Footage of the experiment shows the liquid meniscus rise up the partially submerged tube, pin at the lip of the tube, invert, retract, and remain pinned at the tube's end. Siegel concluded that auto-ejection was not possible. De Gennes et al. [2] use a pressure argument to confirm Siegel's results. However, by exploiting tube geometry, we demonstrate that liquids can in fact auto-eject from a tube's end provided sufficient inertia is generated to overpower surface tension forces. This article provides a brief summary of the scaling arguments used to identify critical parametric values for ejection, the experiment setup, and results. Scenes from the video are also discussed. Details are provided in [3,4]. Figure 1 introduces the nomenclature of the problem. The fluid wets the interior walls of the partially submerged tube, creating a pressure drop. In the absence of gravity the fluid is 'pumped' along the tube, accelerates in the nozzle, and, if sufficient velocity is achieved, can eject from the tube end.

Analysis

When $t_r/t_\mu \ll 1$, where $t_r \sim \forall_n/Q_t$ is the residence time of liquid in the nozzle and $t_\mu \sim \rho R_{avg}^2/\mu$ is the viscous diffusion time of the flow, with $\forall_n$ the volume of the nozzle, $Q_t$ the flow rate entering the nozzle, $R_{avg}$ the average nozzle radius, and $\mu$ the dynamic viscosity, all complexities of the flow in the nozzle due to developing boundary layers, significant viscous normal stresses leading to large dynamic contact angles, and capillary wave dynamics can be ignored, and the constricting flow through the nozzle can be assumed to be laminar and inviscid. Figure 2 depicts the events of ejection we consider for analysis. Since the velocity of the meniscus in Figure 2a at position 1, $W_{t1}$, is relatively simple to measure and model [5], the critical condition for ejection is written in terms of this velocity. As the flow accelerates through the nozzle the pressure decreases (see Figure 2a-b).
Applying continuity, the flow rate at each end of the nozzle must balance, such that

$W_{n3} = W_{t1} / [\alpha^2 (1 + K_n)^{1/2}]$,   (1)

where $K_n$ is the model loss coefficient ascribed to the nozzle and $\alpha = R_n/R_t$. The meniscus must invert as depicted in Figure 2c. The accompanying increase in pressure results in a reduction in velocity from position 3 to 4 given by

$W_{n4}^2 = W_{n3}^2 - 8\sigma/\rho R_n$,   (2)

where $\rho$ and $\sigma$ are the fluid density and surface tension, respectively. Substituting (1) into (2) yields

$W_{n4} = [W_{t1}^2/\alpha^4(1 + K_n) - 8\sigma/\rho R_n]^{1/2}$.   (3)

Velocities below this level 'cannot' invert in the nozzle. After the inversion the flow must still have sufficient inertia to overpower the surface tension force that resists the continued flow required to extend past the Rayleigh breakup length $\sim 2\pi R_n$. Balancing inertial and surface tension forces at the end of the nozzle yields the condition above which ejection is expected:

$\rho R_n W_{n4}^2 / 4\sigma \gtrsim 1$.   (4)

Substitution of (3) into (4) yields

$(\rho R_n/4\sigma)\,[W_{t1}^2/\alpha^4(1 + K_n) - 8\sigma/\rho R_n] \gtrsim 1$.   (5)

Introducing the modified Weber number $We^+ = \rho R_t W_{t1}^2/\sigma\alpha^4(1 + K_n)$, Eq. (5) reduces to $We^+ \gtrsim 12$, revealing a condition necessary for auto-ejection to occur. Similar scaling of Eq. (3) reveals that flows with $We^+ \lesssim 9$ are not likely to eject.

Figure 1: Schematic of tube with nozzle (nomenclature: submersion depth $L_0$, tube radius $R_t$, nozzle radius $R_n$, tube length $L_t$, nozzle length $L_n$, meniscus velocity $W(t)$, contact angle $\theta$).

Figure 2: Schematic illustrating the conditions necessary for droplet ejection: (a) flow entering the nozzle must (b) leave inertial, (c) inertia must be able to invert the meniscus from (3) to (4), and (d) remaining inertial forces must overpower surface tension forces sufficiently to exceed the (e) Rayleigh breakup length, where (f) at least one drop is pinched off.

Experiments

The video footage shows a selection of the 200 experiments we conducted using Portland State University's 2.1 s Dryden Drop Tower. Additional footage of experiments in a terrestrial lab and aboard the International Space Station (ISS) is also shown. We provide brief descriptions of the experiments' setup and procedures below.
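As a sketch of how this criterion might be applied, the chain of Eqs. (1)-(5) can be evaluated numerically. The short Python snippet below is not part of the original video article, and the input values are purely illustrative (roughly a water-like fluid in a millimetre-scale nozzle); it computes the post-inversion nozzle velocity $W_{n4}$ and the modified Weber number $We^+$ as written in the text:

```python
import math

def ejection_criterion(W_t1, R_t, R_n, K_n, rho, sigma):
    """Evaluate the capillarity-driven ejection condition of Eqs. (1)-(5).

    Returns (W_n4, We_plus): the nozzle-exit velocity after meniscus
    inversion in m/s (None if the meniscus cannot invert), and the
    modified Weber number We+ = rho*R_t*W_t1^2 / (sigma*alpha^4*(1+K_n)).
    """
    alpha = R_n / R_t                                   # contraction ratio
    W_n3 = W_t1 / (alpha**2 * math.sqrt(1.0 + K_n))     # Eq. (1): continuity
    W_n4_sq = W_n3**2 - 8.0 * sigma / (rho * R_n)       # Eq. (2): inversion cost
    W_n4 = math.sqrt(W_n4_sq) if W_n4_sq > 0.0 else None
    We_plus = rho * R_t * W_t1**2 / (sigma * alpha**4 * (1.0 + K_n))
    return W_n4, We_plus

# Illustrative numbers only (not from the article):
W_n4, We_plus = ejection_criterion(W_t1=0.15, R_t=5e-3, R_n=1e-3,
                                   K_n=0.2, rho=1000.0, sigma=0.02)
print(W_n4, We_plus, We_plus > 12)  # ejection expected only if We+ >~ 12
```

For these illustrative inputs the criterion $We^+ \gtrsim 12$ is comfortably met, so ejection would be expected; reducing $W_{t1}$ or widening the nozzle ($\alpha \to 1$) pushes $We^+$ below the threshold.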
Cameras image the ensuing capillary rise at 60 fps or 400 fps. Sample footage is provided in the video.

ISS experiments

Figure 5 shows an annotated schematic and photograph of the experiment setup used by United States astronaut Don Pettit, who demonstrated auto-ejection of water aboard the ISS. The demonstration includes a polymer sheet tube, a piece of paper, a camera, and a wire loop. The video footage shows Pettit slowly bringing the tube into contact with the spherical reservoir and the ensuing capillary 'rise' and auto-ejection. The water droplet ejected aboard the ISS is 10^6 times larger than the droplet ejected in the terrestrial experiment.

Figure 5 caption (continued): (b) photograph. The (1) camera is mounted orthogonally to (2) a white sheet of paper. The (3) spherical reservoir is actually a large ~1.5 l drop and is held in place by (4) a wire loop. The (5) tube is slowly brought into contact with the spherical 'reservoir'.

Results

Figure 6 is a map for auto-ejection in terms of the modified Suratman number $Su^+ = \rho\sigma R_t^2 / 8\mu^2 L_t^2$ and the modified Weber number $We^+$. Table 1 is the legend for Figure 6. Experiments show fair agreement with the scaling arguments, as ejection primarily occurs when $We^+ \gtrsim 12$ and is essentially absent for $We^+ \lesssim 9$.

Additional Footage Explained

The video footage shows a single drop tower experiment illustrating the variety of auto-ejection possibilities. Four 10 mm ID tubes partially submerged $L_0 = 10$ mm in 0.65 cS PDMS with four different nozzles auto-eject jets (which break up into 6, 4, and 2 droplets) and a single droplet. To illustrate repeatability, the video shows 10 experiments of nearly identical conditions side by side. A 20.3 mm ID tube with a 5 mm ID nozzle is partially submerged $L_0 = 5$ mm in a 5 cS PDMS reservoir. The ejected drop volumes from the experiments are $\forall_{drop} = 2.11 \pm 0.14$ ml. The video concludes with a montage of sample applications of the auto-ejection technique.
The first three clips show a cartridge with a 10 x 10 array of 5 mm tubes partially submerged in test liquid. The clips show > 300 auto-ejected drops shoot up into the air, impact a flat smooth surface (rebounds abound), and impact a textured surface designed to capture and hold the droplets. Work is continuing in this direction. To illustrate how auto-ejection is viable for droplet combustion experiments, a fourth clip shows an asymmetric u-tube with a nozzle pointed horizontally at a candle. The flammable 0.65 cS PDMS liquid rises in the smaller tube, with the higher curvature pushing a column of fuel vapor ahead of it. The candle back-ignites the combustible vapor and the flame resides at the nozzle exit. The spontaneously ejected droplets that follow ignite and fly through the air, leaving a trail of soot behind them. One drop is small enough to self-extinguish.

Table 1: Plot legend for Figure 6 (symbol/meaning key; the graphical symbols are not reproduced here; the line $We^+ = 12$ marks the ejection threshold).

Figure 6: $We^+$ vs $Su^+$ for droplet auto-ejection. For values of $We^+ \gtrsim 12$, capillarity-driven droplet ejection is expected. For $We^+ \lesssim 9$ it is not. See Table 1 for symbol meaning.

Conclusion

Contrary to conventional wisdom, capillarity-driven droplet ejection is possible, predictable, repeatable, and a viable method for drop-on-demand delivery requirements for further capillary fluidic research. The video shows only a few of the many examples of potential applications for auto-ejection. The technique described herein can be used as a means to study drop formation, drop-to-jet transition, jet breakup, and droplet combustion at larger characteristic lengths than are achievable at 1g_0. Droplet impact, splash, rebound, satellites, adhesion and coalescence are ripe fields that could benefit from this approach.

Figure 2: Schematic illustrating the conditions necessary for droplet ejection.

Figure 3 shows an annotated schematic and photograph of the drop tower experiment rig.
Simax glass tubes are employed with nozzles formed by heating, pulling, and grinding. The tubes are partially submerged in a liquid reservoir which is mounted to the experiment rig. The experiment rig is released for 2.1 s of freefall inside a drag shield within the drop tower.

Figure 3: Drop tower experiment: (a) schematic and (b) photograph. At one end of the (1) experiment rig main body a (2) camera is mounted. At the opposite end sit the (3) light panel and (4) splash shield. Inside the splash shield sit the (5) fluid reservoir and (6) light guide. The (7) glass tubes are partially submerged in the reservoir. An (8) onboard battery powers the light panel and (9) weights are adjusted to assure a level platform.

Figure 4 shows an annotated schematic and photograph of the 1g_0 experiment setup. Hardware for the experiments includes a high-speed camera, parallel light source, fluid reservoir, precision lab jack, vibration isolation table, and tubes with nozzles machined into acrylic blocks. The machined acrylic block is suspended above the fluid reservoir that sits on top of the lab jack. The jack is slowly raised toward the cylindrical capillary pores. Video footage shows the liquid make contact with the block, wick along the base of the block, rise up the pores, accelerate through the nozzles, and auto-eject.

Figure 4: 1g_0 experiment setup: (a) schematic and (b) photograph. The experiment is set up on a (4) vibration isolation table using a (1) high-speed camera opposite a (5) parallel light source. The (6) acrylic block is mounted between the camera and light source. Below the block the (2) reservoir sits on top of a (3) precision lab jack.

Figure 5: Space experiment setup aboard the ISS: (a) schematic and (b) photograph.

References

[1] R. Siegel. Transient capillary rise in reduced and zero gravity fields. Journal of Applied Mechanics, 83:165-170, 1961.
[2] P.G. de Gennes, F. Brochard-Wyart, and D. Quéré. Capillarity and Wetting Phenomena: Drops, Bubbles, Pearls, Waves. Springer Verlag, 2004.

[3] Andrew Wollman. Capillary-driven droplet ejection. Master's thesis, Portland State University, July 2012.

[4] Andrew Wollman and Mark M. Weislogel. New investigations in capillary fluidics using a drop tower. Experiments in Fluids, 2012 (in preparation).

[5] M. Stange, M.E. Dreyer, and H.J. Rath. Capillary driven flow in circular cylindrical tubes. Physics of Fluids, 15:2587, 2003.
[]
[ "SYNCHROTRON EMISSION FROM THE GALAXY", "SYNCHROTRON EMISSION FROM THE GALAXY" ]
[ "R D Davies \nDepartment of Physics & Astronomy\nJodrell Bank\nUniversity of Manchester\nM13 9PLManchesterEngland\n", "A Wilkinson \nDepartment of Physics & Astronomy\nJodrell Bank\nUniversity of Manchester\nM13 9PLManchesterEngland\n" ]
[ "Department of Physics & Astronomy\nJodrell Bank\nUniversity of Manchester\nM13 9PLManchesterEngland", "Department of Physics & Astronomy\nJodrell Bank\nUniversity of Manchester\nM13 9PLManchesterEngland" ]
[]
Galactic synchrotron emission is a potentially confusing foreground, both in total power and in polarization, to the Cosmic Microwave Background Radiation. It also contains much physical information in its own right. This review examines the amplitude, angular power spectrum and frequency spectrum of the synchrotron emission as derived from the presently available de-striped maps. There are as yet no maps at arcminute resolution at frequencies above 2.4 GHz. This incomplete information is supplemented with data from supernovae, which are thought to be the progenitors of the loops and spurs found in the Galactic emission. The possible variations of the frequency spectral index from pixel to pixel are highlighted. The relative contributions of free-free and synchrotron radiation are compared, and it is concluded that the free-free contribution may be smaller than had been predicted by COBE. New high resolution polarization surveys of the Galactic plane suggest detail on all scales so far observed. At high latitudes the large percentage polarisation means that the foreground contamination of the polarised CMB signal will be more serious than for the unpolarized radiation.
null
[ "https://arxiv.org/pdf/astro-ph/9804208v1.pdf" ]
7,175,505
astro-ph/9804208
7bffd5edadc2d8b203674462bfde0441361fddfa
SYNCHROTRON EMISSION FROM THE GALAXY

R. D. Davies and A. Wilkinson
Department of Physics & Astronomy, Jodrell Bank, University of Manchester, Manchester M13 9PL, England

arXiv:astro-ph/9804208v1, 21 Apr 1998

Galactic synchrotron emission is a potentially confusing foreground, both in total power and in polarization, to the Cosmic Microwave Background Radiation. It also contains much physical information in its own right. This review examines the amplitude, angular power spectrum and frequency spectrum of the synchrotron emission as derived from the presently available de-striped maps. There are as yet no maps at arcminute resolution at frequencies above 2.4 GHz. This incomplete information is supplemented with data from supernovae, which are thought to be the progenitors of the loops and spurs found in the Galactic emission. The possible variations of the frequency spectral index from pixel to pixel are highlighted. The relative contributions of free-free and synchrotron radiation are compared, and it is concluded that the free-free contribution may be smaller than had been predicted by COBE. New high resolution polarization surveys of the Galactic plane suggest detail on all scales so far observed. At high latitudes the large percentage polarisation means that the foreground contamination of the polarised CMB signal will be more serious than for the unpolarized radiation.

1 Introduction

Galactic emission at radio wavelengths is important to understand in its own right. Moreover it is crucial to be able to quantify and remove this component as a foreground to the cosmic microwave background (CMB). Both synchrotron and free-free emission contribute to this foreground, with the synchrotron emission dominating at low frequencies (≤1 GHz).
The synchrotron emissivity is a function of both the relativistic (cosmic ray) density and the local magnetic field strength. The luminosity at frequency $\nu$ is given by

$I(\nu) = \mathrm{const}\; L N_0 B^{(p+1)/2} \nu^{-(p-1)/2}$   (1)

where $N_0$ is the density of relativistic electrons, $L$ is the emission depth, $B$ is the magnetic field, and the relativistic electron energy spectrum [22] is given by $dN/dE = N_0 E^{-p}$. The radio spectral index is $\alpha = (p-1)/2$ in energy terms, or $2 + \alpha$ when expressed as a brightness temperature, $T_B \propto \nu^{-(2+\alpha)} = \nu^{-\beta}$. Within the interstellar magnetic field of 2 to 5 microgauss, emission at GHz frequencies is characteristically from relativistic electrons with an energy of 1 to 10 GeV. Both $B$ and $N_0$, as well as $p$, will vary from point to point in the Galactic disk and nearby halo.

The cosmic ray electrons are thought to originate mainly in supernovae and then diffuse outwards in the expanding remnant. Structure will be formed in the remnant as it collides with the nonuniform ambient medium. The magnetic field will likewise be amplified in compression regions and vary in strength and direction. The net effect is to produce elongated synchrotron emission structures on a wide range of scales. The spectral index of the emission will vary with position for two reasons. Firstly, the electron spectral index varies from one supernova to another; secondly, the spectrum steepens ($\Delta p = +1$) with time due to radiation energy loss, thus giving an age-dependent spectral index.

This paper will describe the synchrotron features in and near the Galactic plane which are believed to give rise to the structures seen at higher Galactic latitudes. All-sky and large-area surveys are assessed to give information about the amplitude and spectrum of the high-latitude emission, which is a potential confusing foreground to the CMB. Comments are given about the role of synchrotron polarization and of free-free emission.
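The index relations just quoted ($\alpha = (p-1)/2$, $\beta = 2 + \alpha$, and the aging step $\Delta p = +1$) are simple enough to encode. The small Python sketch below is purely illustrative (not from the review) and reproduces the canonical numbers used later in the text:

```python
def energy_to_flux_index(p):
    """Flux-density spectral index alpha for an electron spectrum dN/dE = N0 * E**-p."""
    return (p - 1.0) / 2.0

def flux_to_brightness_index(alpha):
    """Brightness-temperature index beta, with T_B proportional to nu**-beta."""
    return 2.0 + alpha

p = 2.4                                  # a typical injection spectrum
alpha = energy_to_flux_index(p)          # 0.7, the canonical synchrotron index
beta = flux_to_brightness_index(alpha)   # 2.7, the mean survey value
# Radiative aging steepens the electron spectrum by Delta p = +1:
beta_aged = flux_to_brightness_index(energy_to_flux_index(p + 1.0))  # 3.2
```

With $p = 2.4$ the brightness index is $\beta = 2.7$, the mean found in the surveys, and a fully aged spectrum gives $\beta = 3.2$, comparable to the steep indices reported below for the brightest loop features.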
2 Large area surveys at low frequencies

Radio surveys at frequencies less than 2 GHz are dominated by synchrotron emission. The well-known survey by Haslam et al. [7] at 408 MHz is the only all-sky map available. Large-area surveys with careful attention to baselines and calibration have also been made at 1420 MHz [15] and most recently at 2326 MHz [9]. All these investigations have been made with FWHP beamwidths of less than 1°. Before these surveys can be used to derive the angular power spectrum and the emission (frequency) spectral index, it is necessary to remove the baseline stripes [2] in the most commonly used radio maps at 408 and 1420 MHz. These stripes contain power on angular scales of a few to ten degrees. Lasenby [12] has used the 408 and 1420 MHz surveys to estimate the spatial power spectrum of the high-latitude region surveyed by the Tenerife CMB experiments; he found an angular power spectrum somewhat flatter than the $\ell^{-3}$ law derived for HI and for IRAS far-infrared emission.

The spectral index of Galactic synchrotron emission can be readily determined at frequencies less than 1 GHz, where the observational base-level uncertainty is much less than the total Galactic emission. Lasenby [12] used data covering the range 38 to 1420 MHz to determine the spectral index variation over the northern sky. Clear variations in spectral index of at least 0.3 about a mean value of 2.7 were found. There was a steepening in the spectral index at higher frequencies in the brighter features such as the loops and some SNRs. Up to 1420 MHz, no such steepening was found in the regions of weaker emission. At higher Galactic latitudes, where no reliable zero level is available at 1420 MHz, an estimate can be made of the spectral index of local features by using the T-T technique. The de-striped 408 and 1420 MHz maps gave spectral indices of β = 2.8 to 3.2 in the northern Galactic pole regions [2].
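The two-frequency spectral-index estimate used above can be written down directly. The following minimal Python sketch uses hypothetical pixel temperatures, not actual survey data; the T-T variant applies the same relation to temperature differences between pixels, which cancels an unknown zero-level offset in each map:

```python
import math

def brightness_spectral_index(T1, T2, nu1, nu2):
    """Temperature spectral index beta, assuming T_B(nu) = T1 * (nu/nu1)**-beta."""
    return math.log(T1 / T2) / math.log(nu2 / nu1)

def tt_spectral_index(dT1, dT2, nu1, nu2):
    """T-T form: same relation applied to temperature *differences* between
    pixels, so any constant baseline offset in each map drops out."""
    return math.log(dT1 / dT2) / math.log(nu2 / nu1)

# A hypothetical pixel: 30 K at 408 MHz and 1.0 K at 1420 MHz
beta = brightness_spectral_index(30.0, 1.0, 408e6, 1420e6)  # ~2.73
```

The T-T form matters here because, as noted in the text, no reliable zero level is available at 1420 MHz at high Galactic latitudes.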
The de-striped 408 MHz map shows that there are substantial areas (100° × 50° in RA × Dec) in both the northern and southern skies which are devoid of appreciable synchrotron structure and can be used for CMB studies. The Tenerife experiments have been based on the northern low emissivity band centred on Dec = 40°, RA = 130° to 250°, where the rms Galactic emission in a 5° beam is ≤ 4 µK at 33 GHz [7].

Supernovae are probably the progenitors of the main structures in the Galactic radio emission at intermediate latitudes. The supernova remnants (SNRs) are the easily recognisable early stages of the expansion phenomenon. A supernova releases about 10^51 erg into the interstellar medium (ISM). The remnant passes through the free expansion phase and then, after it has encountered its own mass in the ISM, moves into the Sedov (adiabatic) phase. The SNR shock ultimately disappears when the expansion velocity slows to the local sound velocity. This process takes 10^5 to 10^6 years to reach the point where the remnant is no longer clearly recognizable as a single entity, but it will still give rise to synchrotron emission from the residual CR electrons and magnetic fields.

An examination of the statistics and structure of SNRs gives some indication of the properties of the emission from their residual structures. Firstly, it is possible to recognise the various evolutionary phases of the SNR phenomenon in individual remnants. Objects like Cas A and the Crab Nebula (500 to 1000 years old) are in the early free expansion stage, while the Cygnus Loop (15 to 20,000 years old) is losing energy in multiple shocks. Secondly, the statistics of the spectral index of the integrated emission show a spread in α of ±0.1 about the mean value of 0.7 [18], with some objects such as the Crab Nebula (α = 0.3) having more extreme indices. This spread will presumably result in a spread in the spectral indices of their residual structures.
The Cygnus Loop provides an excellent case study of an SNR in its late phase of evolution. High sensitivity maps of arcmin resolution are available at a range of frequencies. This remnant is 2.5° × 2.5° (30 pc × 20 pc) in diameter, lying at l = 74°, b = −8.5°, at a distance of 500-800 pc. The 1.4 GHz synthesis map by Leahy, Roger and Ballantyne [13], made with a 1 × 1 arcmin² beam, shows both filamentary and diffuse structure on angular scales from a few to 30 arcmin. By comparing maps at 0.408 and 2.695 GHz, Green [6] finds significant variations (0.3) in spectral index between the major features of the remnant.

3 What we know from spurs and loops

Large features with a synchrotron spectrum extend far from the Galactic plane. The most prominent of these are the spurs and loops, which describe small circles on the sky with diameters in the range 60° to 120° [1]. Because of their association with HI and, in some cases, with X-ray emission, they are believed to be low surface brightness counterparts of the brighter SNRs seen at lower latitudes. Other more diffuse structure at higher latitudes may be even older remnants. Reich and Reich [16] find that Loops I (the North Polar Spur) and II have a steeper spectrum than the average Galactic emission. Between 408 MHz and 1420 MHz these two loops have temperature spectral indices of 2.93 and 2.85 respectively. Using T-T plots, Davies, Watson and Gutierrez [2] derived a spectral index of β = 3.2 for the brightest part of Loop I. Lawson et al. [12] claim that most of the large structures seen on the maps are related to Loops I and III, suggesting that they are evidence of diffusive shock acceleration of the CR electrons derived from the supernova. They derive a distance for Loop I of 130 ± 75 pc and a radius of 115 ± 68 pc; Loop III is thought to be of similar size.

A revealing high sensitivity survey of a substantial part of the southern Galactic plane (l = 238° to 365°, b = −5° to +5°) has been made with a resolution of 10 arcmin at 2.4 GHz by Duncan et al. [3]. These authors found a large amount of structure and detail, including many low surface brightness loops and spurs. They also list over 30 possible SNR candidates, a number of which have angular diameters of about 10°. Many of the spurs can be traced even further from the plane in the 2.326 GHz survey of Jonas, Baart and Nicolson [9]. The spectral index of these new spurs has still to be determined.

It is important to make Galactic surveys at intermediate and high latitudes at frequencies closer to those at which the CMB structure is being investigated. Because of the variation of spectral index from one region to another, the structure at 408 MHz will be quite different from that at 10 or 30 GHz. Two separate experiments have been established to address this problem. The first is a high sensitivity short baseline interferometer operating at 5 GHz at Jodrell Bank [14] (Fig. 1). By making observations at a range of baselines, the point source and the Galactic contribution to the microwave background can be separated. After correction for the point sources, the Galactic emission at 5 GHz can be compared with the published 408 MHz survey. The spectral index of Galactic features in the survey was found to be about 3.0 at intermediate and higher Galactic latitudes.

The 10, 15 and 33 GHz beam-switching radiometers at Teide Observatory, Tenerife, are scaled to give the same resolution (5° FWHP, beam-switched ±8°). Comparison of the surveys at the three frequencies should allow the Galactic emission to be separated from the intrinsic CMB component. Observations already available from this experiment can constrain the Galactic contribution at 33 GHz. The observed rms fluctuation level at 10 GHz in the Dec = +40° scan between RA = 160° and 230° is 29 +20/−30 µK [7]. This rms level will include both a Galactic and a CMB contribution. Using a 2σ upper limit and subtracting quadratically the detected CMB signal (54 µK), we derive an upper limit for the Galactic emission at 10 GHz of 43 µK. This would produce an upper limit at 33 GHz of 2 µK if it were synchrotron with β = 3.0, and an upper limit of 4 µK if it were free-free emission with β = 2.1. Hancock et al. [7] derive a spectral index for Galactic emission at the 3 frequencies of β = 3.1.

Figure 1: Comparison of 5 GHz interferometer narrow spacing (baseline 12 wavelengths) data with the Greenbank point source survey (dark line). The differences are due to source variability or Galactic synchrotron emission.

6 Synchrotron versus free-free dust emission

Free-free emission is not easily identified at radio frequencies except near the Galactic plane. At higher latitudes it must be separated from synchrotron emission by virtue of its different spectral index. At higher frequencies, where free-free emission might be expected to exceed the synchrotron component, the signals are weak and the survey zero levels are indeterminate. Most of the information on the thermal electron content currently available at intermediate and higher frequencies comes from Hα surveys. This diffuse Hα emission is thought to be a good tracer of diffuse free-free emission, since both are emitted by the same ionized medium and both intensities are proportional to the emission measure (EM = ∫ n_e² dl), the line of sight integral of the free electron density squared. Major Hα structures are a feature of the well-known Local (Gould Belt) System, which extends some 40° from the plane at positive b in the Galactic centre and at negative b in the anticentre. Other Hα features are known to extend 15° to 20° from the plane [19]. The intermediate latitude Hα distribution may be modelled (Reynolds 1992) as a layer parallel to the Galactic plane with a half-thickness intensity of 1.2 Rayleigh (R). The rms variation in this Hα emission is about 0.6 R on degree scales. In the context of the present discussion, 1 R will give a brightness temperature of about 10 µK at 45 GHz.

Further information about angular structure in the Hα emission can be derived from the North Celestial Pole (NCP) study by Gaustad et al. [5]. Veeraraghavan & Davies [22] used this material to derive a spatial power spectrum on angular scales of 10 arcmin to a few degrees. The spatial power law index is −2.3 ± 0.1 over this range, with an rms amplitude of 0.12 cosec(b) Rayleighs on 10 arcmin scales. This level is consistent with the limits derived from the Tenerife experiments and indicates that the free-free rms brightness becomes comparable with the synchrotron value at about 20 GHz, where it would comprise 10 to 20 percent of the CMB fluctuation amplitude.

Kogut et al. [10] compared the COBE DMR maps with the DIRBE maps and claimed a correlation between free-free and dust emission. They obtained free-free levels somewhat higher (in a 7° beam) than measured more directly in the Tenerife experiments.

A note on polarization

Extensive surveys of Galactic synchrotron polarization have been made at frequencies up to 1-2 GHz. Significant polarized signals are found over most of the surveyed sky. The percentage polarization increases with frequency, indicating the presence of a Faraday rotating medium with a rotation measure of 8 rad m⁻² [20]. The mean polarization amplitude at 1400 MHz lies between 20 and 30 percent at higher (|b| ≥ 30°) Galactic latitudes, as seen in a 0.6° beam. The polarization degree in Loop I reaches ∼72 percent at higher Galactic latitudes. This is close to the theoretical upper limit, where the fractional polarization is given by

π = (3β − 3)/(3β − 1),   (2)

where β is the temperature spectral index. The other loops have maximum polarization in the range 30-50 percent.

Theory indicates that the CMB radiation will be polarized at a level of 5 to 10 percent. On the other hand, the synchrotron emission can be 30 percent polarized at 1.4 GHz and probably higher at higher frequencies. Accordingly, this foreground Galactic polarization must be considered more seriously than in the total power case when measuring CMB polarization. A foreground feature which is 10 percent of an intrinsic CMB feature may have a polarization which is 30 to 50 percent of the polarized intensity of that feature.

A detailed study of the polarization of the Cygnus Loop has been made at 1.4 GHz by Leahy, Roger & Ballantyne [13]. They find that the bright filaments have the B field aligned along their length, with a maximum polarization in the remnant of 39 percent and a mean value of 7 percent. The lower values of polarization in some areas are most likely due to depolarization in the Faraday screen of the object, which has a rotation measure of −20 to −35 rad m⁻². The 5 GHz map of Kundu & Becker [11] shows a fractional polarization of 25 percent over the southern half of the source. The 2.4 GHz Galactic plane survey by Duncan et al. [4] shows considerable complex structure with their 10 arcmin beam. Bright, extended regions of polarized emission of the order of 5° across include the Vela SNR and a large structure appearing to the north of Sgr A. A quasi-uniform weak component of patchy polarization is seen over the length of the survey.

Acknowledgments

AW would like to acknowledge the receipt of a Daphne Jackson Research Fellowship, sponsored by PPARC.

References

1. Berkhuijsen E.M., Haslam C.G.T., Salter C.J., Astr. Astrophys. 14, 252 (1971).
2. Davies R.D., Watson R.A., Gutierrez C.M., Mon. Not. R. astr. Soc. 278, 925 (1996).
3. Duncan A.R., Stewart R.T., Haynes R.F., Jones K.L., Mon. Not. R. astr. Soc. 277, 36 (1995).
4. Duncan A.R., Haynes R.F., Jones K.L., Stewart R.T., Mon. Not. R. astr. Soc. 291, 279 (1997).
5. Gaustad J., McCullough P., van Buren D., Pub. Astron. Soc. Pacific 108, 351 (1996).
6. Green D., Astron. J. 100, 1927 (1990).
7. Hancock S., Davies R.D., Lasenby A.N., Gutierrez C.M., Watson R.A., Rebolo R., Beckman J.E., Nature 367, 333 (1994).
8. Haslam C.G.T., Salter C.J., Stoffel H., Wilson W.E., Astr. Astrophys. Suppl. 47, 1 (1982).
9. Jonas J.L., Baart E.E., Nicolson G.D., Mon. Not. R. astr. Soc., in press (1998).
10. Kogut A. et al., Astrophys. J. 460, 1 (1996).
11. Kundu M.R., Becker R.H., Astron. J. 77, 459 (1972).
12. Lawson K.D., Mayer C.J., Osborn J.L., Parkinson M.L., Mon. Not. R. astr. Soc. 225, 307 (1987).
13. Leahy D.A., Roger R.S., Ballantyne D., Astron. J. 114, 2081 (1997).
14. Melhuish S.J. et al., Mon. Not. R. astr. Soc. 286, 48 (1997).
15. Reich P., Reich W., Astr. Astrophys. Suppl. 63, 205 (1986).
16. Reich P., Reich W., Astr. Astrophys. Suppl. 74, 7 (1988).
17. Reynolds J., Astrophys. J. 392, L35 (1992).
18. Reynolds S., in Galactic and Extragalactic Radio Astronomy (Springer-Verlag, Berlin, 1988).
19. Sivan J.P., Astr. Astrophys. Suppl. 16, 163 (1974).
20. Spoelstra T.A.T., Astr. Astrophys. 135, 238 (1984).
21. Veeraraghavan S., Davies R.D., in Particle Physics and the Early Universe, eds. Bately R., Jones M.E., Green D.A. (CUP, 1997).
22. Verschuur G.L., Kellermann K.I., Galactic and Extragalactic Radio Astronomy (Springer-Verlag, Berlin, 1988).
Numerical analysis of a singularly perturbed convection diffusion problem with shift in space

Mirjana Brdar, Sebastian Franz, Lars Ludwig, Hans-Görg Roos

Abstract. We consider a singularly perturbed convection-diffusion problem that has in addition a shift term. We show a solution decomposition using asymptotic expansions and a stability result. Based upon this we provide a numerical analysis of a high order finite element method on layer adapted meshes. We also apply a new idea of using a coarser mesh in places where weak layers appear. Numerical experiments confirm our theoretical results.

DOI: 10.1016/j.apnum.2023.01.003
arXiv: 2207.09218 (https://arxiv.org/pdf/2207.09218v1.pdf)
19 Jul 2022

1 Introduction

In this paper we want to look at the static singularly perturbed problem given by

−εu″(x) − b(x)u′(x) + c(x)u(x) + d(x)u(x − 1) = f(x),  x ∈ Ω := (0, 2),   (1a)
u(2) = 0,   (1b)
u(x) = Φ(x),  x ∈ (−1, 0],   (1c)

where 0 < ε ≪ 1, b ≥ β > 0, d ≥ 0 and

c − b′/2 − ‖d‖_{L∞(1,2)}/2 ≥ γ > 0.

For the function Φ we assume Φ(0) = 0, which is not a restriction, as a simple transformation can always ensure this condition. Then it holds u ∈ U := H¹₀(Ω).

The literature on singularly perturbed problems is vast, see e.g. the book [13] and the references therein. But for problems that in addition have a shift operator, sometimes also called a delay operator, with a large shift, there are not many results. For the time-dependent case and a reaction-diffusion type problem there are e.g. [1, 3, 5-7]. Recently, we also investigated the time-dependent version of a singularly perturbed reaction-diffusion problem in [2], using a finite element method in time and space. For convection dominated singularly perturbed problems with an additional shift there are also publications in the literature, see e.g. [9, 14, 15]. They all consider a negative coefficient, here called d, which supports a maximum principle. Then finite differences on layer adapted meshes of rather low order are used.
In our paper we consider finite element methods of arbitrary order for positive coefficients d. Standard convection-diffusion problems with a fixed convection coefficient exhibit one boundary layer near the outflow boundary, in contrast to the two boundary layers of the reaction-diffusion problem. Therefore, we expect the problem with a shift to have a structure somewhat different from that of reaction-diffusion type. In e.g. [14] an asymptotic expansion of the solution is given for the case where the directions of shift and convection are opposite, but only to the lowest order. For the purpose of this paper we want a complete solution decomposition. Therefore, in Section 2 we provide a solution decomposition of u into various layers and a smooth part using a different approach. We prove it rigorously for the constant coefficient case. In Section 3 a numerical analysis is provided for the discretisation using finite elements of arbitrary order on a classical S-type mesh and a new, coarser type of mesh. Finally, Section 4 provides some numerical results supporting our analysis. We finish this paper with a technical appendix on some terms involving the Green's function.

Notation. For a set D, we use the notation ‖·‖_{Lᵖ(D)} for the Lᵖ-norm over D, where p ≥ 1. The standard scalar product in L²(D) is denoted by ⟨·,·⟩_D. If D = Ω we sometimes drop Ω from the notation. Throughout the paper, we write A ≲ B if there exists a generic positive constant C, independent of the perturbation parameter ε and the mesh, such that A ≤ CB. We also write A ∼ B if A ≲ B and B ≲ A.

2 Solution decomposition

The considered problem with a shift term has some properties different from those of a convection-diffusion problem without the shift. One of the major ones is that it is unknown whether a maximum principle holds for d ≥ 0. In the case d ≤ 0 a maximum principle is proved in e.g. [14], but the proof cannot be applied here.
For the following solution decomposition we need a stability result, provided in the next theorem for the case of constant coefficients and assumed to hold true in the general case of variable coefficients.

Theorem 2.1. Consider the problem: find u = u₁χ₍₀,₁₎ + u₂χ₍₁,₂₎, where χ_D is the characteristic function of D, such that

−εu₁″(x) − bu₁′(x) + cu₁(x) = f(x),  x ∈ (0, 1),  u₁(0) = 0,  u₁(1) = α,
−εu₂″(x) − bu₂′(x) + cu₂(x) = f(x) − du₁(x−1),  x ∈ (1, 2),  u₂(2) = β,  u₂(1) = α + δ,

with constants b > 0 and c > 0, arbitrary boundary value β ∈ ℝ and jump δ ∈ ℝ, and α ∈ ℝ chosen such that u₁′(1⁻) = u₂′(1⁺). Then we have

‖u‖_{L∞} ≲ ‖f‖_{L∞(0,2)} + |β| + |δ|.

Proof. Each of the two sub-problems is a standard convection-diffusion problem with a known Green's function G for the case of homogeneous boundary conditions on (0,1), which can be constructed as shown e.g. in [8, Chapter 1.1]. Thus, setting b̂(t) := b − ct and ĉ(t) := c + dt, we can represent the solutions as

u₁(x) = αx + ∫₀¹ G(x,t) (f(t) + α b̂(t)) dt,

u₂(x) = (α+δ)(2−x) + β(x−1)
  + ∫₀¹ G(x−1,t) [ f(t+1) − (α+δ)(b̂(t) + ĉ(t)) + β b̂(t) − d ∫₀¹ G(t,s) (f(s) + α b̂(s)) ds ] dt,

where in the second case we have used that G(x−1, t−1) is the Green's function for the problem on (1,2). The condition on α can now be written as

α = N/D,   (2)

where

N := β − δ + ∫₀¹ G_x(0,t) [ f(t+1) − δ ĉ(t) + (β−δ) b̂(t) − d ∫₀¹ G(t,s) f(s) ds ] dt − ∫₀¹ G_x(1,t) f(t) dt

and

D := 2 + ∫₀¹ G_x(1,t) b̂(t) dt + ∫₀¹ G_x(0,t) [ b̂(t) + ĉ(t) + d ∫₀¹ G(t,s) b̂(s) ds ] dt.

For the given problem we can compute all relevant information concerning the Green's function, see Appendix A, and can estimate

|N| ≲ (|β| + |δ| + ‖f‖_{L∞(0,2)}) · (1/ε)   and   D ≳ 1/ε.

Therefore, we have |α| ≲ |β| + |δ| + ‖f‖_{L∞(0,2)}. Using the representations of u₁ and u₂ together with ‖G(x,·)‖_{L¹(0,1)} ≲ 1 for all x ∈ (0,1), we have proved the assertion. □
Contrary to the reaction-diffusion case, where in addition to the boundary layers a strong inner layer forms, see [2], the convection-diffusion case has only a strong boundary layer at the outflow boundary and a weak inner layer.

Theorem 2.2. Let k ≥ 0 be a given integer and the data of (1) smooth enough. Then it holds u = S + E + W, where for any ℓ ∈ {0, 1, …, k}

‖S^(ℓ)‖_{L²(0,1)} + ‖S^(ℓ)‖_{L²(1,2)} ≲ 1,
|E^(ℓ)(x)| ≲ ε^(−ℓ) e^(−βx/ε),  x ∈ [0, 2],
W^(ℓ)(x) = 0,  x ∈ (0, 1),   and   |W^(ℓ)(x)| ≲ ε^(1−ℓ) e^(−β(x−1)/ε),  x ∈ (1, 2).

Proof. We prove this theorem using asymptotic expansions. For simplicity we assume b, c and d to be constant. Adjusting the proof for variable smooth coefficients is straightforward, using Taylor expansions and assuming Theorem 2.1 to hold for variable coefficients. We start by writing the problem in terms of u₁ and u₂, the solutions on (0,1) and (1,2) respectively:

−εu₁″(x) − bu₁′(x) + cu₁(x) = f(x) − dΦ(x−1),  x ∈ (0, 1),
−εu₂″(x) − bu₂′(x) + cu₂(x) = f(x) − du₁(x−1),  x ∈ (1, 2),
u₁(0) = 0,  u₁(1) = u₂(1),  u₁′(1) = u₂′(1),  u₂(2) = 0.

Let Σ_{i=0}^{k} ε^i (S_{i,−} χ_{[0,1)} + S_{i,+} χ_{[1,2]}) be the outer expansion. Substituting it into the differential system, we obtain

Σ_{i=0}^{k} ε^i ( −εS_{i,−}″(x) − bS_{i,−}′(x) + cS_{i,−}(x) ) = f(x) − dΦ(x−1),  x ∈ (0, 1),
Σ_{i=0}^{k} ε^i ( −εS_{i,+}″(x) − bS_{i,+}′(x) + cS_{i,+}(x) ) = f(x) − d Σ_{i=0}^{k} ε^i S_{i,−}(x−1),  x ∈ (1, 2),

plus boundary and continuity conditions. For the coefficient of ε⁰ (including some of the additional conditions) we obtain

−bS_{0,−}′(x) + cS_{0,−}(x) = f(x) − dΦ(x−1),  x ∈ (0, 1),  S_{0,−}(1) = S_{0,+}(1),
−bS_{0,+}′(x) + cS_{0,+}(x) = f(x) − dS_{0,−}(x−1),  x ∈ (1, 2),  S_{0,+}(2) = 0.

According to Lemma 2.3, after mapping the second line to (0,1), there exists a solution S₀ = S_{0,−}χ_{[0,1)} + S_{0,+}χ_{[1,2]} that is continuous and satisfies S₀(2) = 0, but in general S₀(0) ≠ 0. Thus, we correct this with a boundary correction using the stretched variable ξ = x/ε and the ansatz Σ_{i=0}^{k} ε^i Ẽ_i(ξ).
Substituting this into the differential equation yields

Σ_{i=0}^{k} ε^i ( −ε^(−1)(Ẽ_i″(ξ) + bẼ_i′(ξ)) + cẼ_i(ξ) + dẼ_i(ξ − 1/ε) χ_{(1/ε, 2/ε)} ) = 0.

We deal with the shift term later and obtain for the coefficient of ε^(−1) the boundary correction problem

Ẽ₀″(ξ) + bẼ₀′(ξ) = 0,  Ẽ₀(0) = −S_{0,−}(0),  lim_{ξ→∞} Ẽ₀(ξ) = 0   ⇒   Ẽ₀(ξ) = −S_{0,−}(0) e^(−bξ).

Furthermore, we correct the jump of the derivative of S₀ at x = 1 with an inner expansion in the variable η = (x−1)/ε. Using the ansatz Σ_{i=1}^{k+1} ε^i W̃_i(η) we have

Σ_{i=1}^{k+1} ε^i ( −ε^(−1)(W̃_i″(η) + bW̃_i′(η)) + cW̃_i(η) ) = 0.

For the coefficient of ε⁰ and initial conditions at η = 0 it follows

W̃₁″(η) + bW̃₁′(η) = dẼ₀(η),  W̃₁′(0) = −[[S₀′(1)]],  lim_{η→∞} W̃₁(η) = 0,

where we have included the shift of Ẽ₀ into the differential equation. We obtain a solution W̃₁(η) = Q̃₁(η) e^(−bη), where Q̃₁ is a polynomial of degree 1. Thus far we have

u⁰ = S₀ + E₀ + εW̃₁ χ_{[1,2]},  u⁰(0) = 0,  |u⁰(2)| ≲ e^(−b/ε),

and in addition [[(u⁰)′(1)]] = 0 and [[u⁰(1)]] = εW̃₁(1). Thus we have corrected the jump in the derivative, but introduced a jump in the function value of order ε. In order to correct this jump we continue with the same steps, now for the coefficients of ε^i with i > 0. We obtain the problems

−bS_{i,−}′(x) + cS_{i,−}(x) = S_{i−1,−}″(x),  x ∈ (0, 1),  S_{i,−}(1) = S_{i,+}(1) − W̃_i(1),
−bS_{i,+}′(x) + cS_{i,+}(x) = S_{i−1,+}″(x) − dS_{i,−}(x−1),  x ∈ (1, 2),  S_{i,+}(2) = 0,

which by Lemma 2.3 yield S_i = S_{i,−}χ_{[0,1)} + S_{i,+}χ_{[1,2]}, as well as

Ẽ_i″(ξ) + bẼ_i′(ξ) = cẼ_{i−1}(ξ),  Ẽ_i(0) = −S_{i,−}(0),  lim_{ξ→∞} Ẽ_i(ξ) = 0   ⇒   Ẽ_i(ξ) = P̃_i(ξ) e^(−bξ),

W̃_{i+1}″(η) + bW̃_{i+1}′(η) = cW̃_i(η) + dẼ_i(η),  W̃_{i+1}′(0) = −[[S_i′(1)]],  lim_{η→∞} W̃_{i+1}(η) = 0   ⇒   W̃_{i+1}(η) = Q̃_{i+1}(η) e^(−bη),

where P̃_i and Q̃_{i+1} are polynomials of degree i and i+1, respectively. The following Figure 1 shows in a diagram the dependence of the problems. Dotted lines represent influence on boundary values, while solid ones act via the differential equation.
Figure 1: Dependency diagram of the problems for S_i, W̃_{i+1} and Ẽ_i, i = 0, 1, 2, …

Thus, for the expansion

u_k := Σ_{i=0}^{k} ε^i S_i(x) + Σ_{i=0}^{k} ε^i P̃_i(x/ε) e^(−bx/ε) + Σ_{i=1}^{k+1} ε^i Q̃_i((x−1)/ε) e^(−b(x−1)/ε) χ_{[1,2]}(x)
     =: S̃(x) + E(x) + W(x),

we have

[[u_k(1)]] =: δ,  [[u_k′(1)]] = 0,  u_k(0) = 0,  u_k(2) =: β,   where |δ| ≲ ε^k and |β| ≲ e^(−b/ε),

and the same holds for the remainder R := u_k − u. Finally, it holds

−εR″ − bR′ + cR = ε^k (S_{k,−}″ + cẼ_k)   in (0, 1),
−εR″ − bR′ + cR = ε^k (S_{k,+}″ + cẼ_k + cW̃_{k+1}) − dR(· − 1)   in (1, 2).

Using the stability result of Theorem 2.1 we obtain ‖R‖_{L∞} ≲ ε^k, and we can set S := S̃ + R. □

Lemma 2.3. The ordinary differential system

−V′(x) + c₁(x)V(x) = g₁(x),  x ∈ (0, 1),  V(1) = W(0) + α,
−W′(x) + c₂(x)W(x) + d(x)V(x) = g₂(x),  x ∈ (0, 1),  W(1) = 0,

has for positive d and any c₁, c₂, g₁, g₂, α a unique solution.

Proof. For x ∈ (0, 1) the system can be written as

(V, W)′(x) = A(x) (V, W)(x) − (g₁, g₂)(x),  (V, W)(1) = (W(0) + α, 0),   with A = ( c₁  0 ; d  c₂ ).

With B(x) = ∫₀ˣ A(y) dy and the matrix exponential, the solution can be represented as

(V, W)(x) = e^{B(x)} ( e^{−B(1)} (W(0)+α, 0)ᵀ + ∫ₓ¹ e^{−B(y)} g(y) dy ) =: T(x).

Now, this solution is still recursively defined. In order to investigate this further, let

B(1) = ( C₁  0 ; D  C₂ ),   where D := ∫₀¹ d(x) dx,  C_i := ∫₀¹ c_i(x) dx,  i ∈ {1, 2}.

Denoting by b̃₂₁ the (2,1)-entry of e^{−B(1)}, we obtain

W(0) = b̃₂₁ (W(0) + α) + T₂(0)   ⇒   W(0) = (T₂(0) + b̃₂₁ α) / (1 − b̃₂₁),  if b̃₂₁ ≠ 1.

Due to the assumption d > 0 we have D > 0 and therefore

b̃₂₁ = −D e^{−C₁}  if C₁ = C₂,   b̃₂₁ = D (e^{−C₁} − e^{−C₂}) / (C₁ − C₂)  if C₁ ≠ C₂,

is always negative and thus never 1. □

Remark 2.4. The condition d > 0 is sufficient, but not necessary.
But some condition is needed, as can be seen from the example c₁ = c₂ = 1, d = −e, α = 0 and g₁(x) = g₂(x) = 1, for which no solution (V, W) exists:

V(x) = 1 + (W(0) − 1) e^(x−1),   W(x) = e^x ( (1 − x)W(0) + x − 2 − e^(−1) ) + e + 1

fulfils the system and the conditions W(1) = 0, V(1) = W(0), but the requirement W(0) = W(0) − 1 + e − e^(−1) cannot be satisfied, since −1 + e − e^(−1) ≠ 0.

Remark 2.5. The related problem

−εu″(x) − b(x)u′(x) + c(x)u(x) + d(x)u(x+1) = f(x),  x ∈ Ω := (0, 2),
u(0) = 0,  u(x) = Φ(x),  x ∈ [2, 3),

where the directions of shift and convection oppose each other, can be analysed quite similarly, yielding the same solution decomposition as in Theorem 2.2. Here the reduced problems are always solvable, independently of d, but the problems for the boundary correction have to be split into the two subregions.

3 Numerical analysis

3.1 Preliminaries

Using standard L²-products and integration by parts we define the bilinear and linear forms

B(u, v) := ε⟨u′, v′⟩_Ω + ⟨cu − bu′, v⟩_Ω + ⟨du(·−1), v⟩_{(1,2)},
F(v) := ⟨f, v⟩_Ω − ⟨dΦ(·−1), v⟩_{(0,1)},   (3)

for u, v ∈ U; the weak formulation reads B(u, v) = F(v) for all v ∈ U. With the integration-by-parts identity

−⟨bu′, u⟩_Ω = ⟨b′u, u⟩_Ω + ⟨bu′, u⟩_Ω

and

⟨du(·−1), u⟩_{(1,2)} ≤ (‖d‖_{L∞(1,2)}/2) ( ‖u‖²_{L²(0,1)} + ‖u‖²_{L²(1,2)} ) = (‖d‖_{L∞(1,2)}/2) ‖u‖²_{L²},

we have coercivity with respect to the energy norm |||·|||:

B(u, u) = ε‖u′‖²_{L²} + ⟨cu − bu′, u⟩_Ω + ⟨du(·−1), u⟩_{(1,2)}
        ≥ ε‖u′‖²_{L²} + ⟨(c − b′/2) u, u⟩_Ω − (‖d‖_{L∞(1,2)}/2) ‖u‖²_{L²}
        ≥ ε‖u′‖²_{L²} + γ ‖u‖²_{L²} =: |||u|||²,

due to our assumptions on the data.
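As a quick sanity check of the coercivity assumption, the snippet below evaluates c − b′/2 − ‖d‖_{L∞(1,2)}/2 on a grid, using the coefficients of the numerical example from Section 4 (b = 2 + x, c = 3 + x, shift coefficient 2 + sin(4πx) on (1,2)); only the magnitude of the shift coefficient enters the bound, and the grid resolution is an arbitrary choice.

```python
import numpy as np

# Check of the coercivity constant gamma for the coefficients of the example in
# Section 4. Only |d| enters the bound; the resolution 100001 is arbitrary.
x = np.linspace(0.0, 2.0, 100001)
b_prime = 1.0                                  # derivative of b(x) = 2 + x
c = 3.0 + x
x12 = x[(x >= 1.0) & (x <= 2.0)]
d_inf = np.max(np.abs(2.0 + np.sin(4.0 * np.pi * x12)))   # ||d||_inf on (1,2)
gamma = np.min(c - b_prime / 2.0 - d_inf / 2.0)
```

One finds γ = 1 > 0 (attained at x = 0), so the example indeed satisfies the assumption on the data.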
3.2 On standard S-type meshes

Then, using a monotonically increasing mesh defining function φ with φ(0) = 0 and φ(1/2) = ln N (see [12] for the precise conditions on φ), we construct the mesh nodes

x_i = (σε/β) φ(2i/N)            for 0 ≤ i ≤ N/4,
x_i = (4i/N)(1−λ) + 2λ − 1      for N/4 ≤ i ≤ N/2,
x_i = 1 + x_{i−N/2}             for N/2 ≤ i ≤ N.

We denote the smallest mesh width inside the layers by h, for which h ≲ ε holds. Associated with φ is the mesh characterising function ψ = e^(−φ), which classifies the convergence quality of the mesh via the quantity max|ψ′| := max_{t∈[0,1/2]} |ψ′(t)|. Two of the most common S-type meshes are the Shishkin mesh with

φ(t) = 2t ln N,  ψ(t) = N^(−2t),  max|ψ′| = 2 ln N,

and the Bakhvalov-S-mesh with

φ(t) = −ln(1 − 2t(1 − N^(−1))),  ψ(t) = 1 − 2t(1 − N^(−1)),  max|ψ′| = 2.

By definition it holds |E(λ)| ≲ N^(−σ) and |W(1+λ)| ≲ εN^(−σ). As discrete space we use

U_N := { v ∈ H¹₀(Ω) : v|_τ ∈ P_k(τ) },

where P_k(τ) is the space of polynomials of degree at most k on a cell τ of the mesh. Let I be the standard Lagrange interpolation operator into U_N, using equidistant points or any other suitable distribution of points. The derivation of the interpolation errors can be done as for a standard convection-diffusion problem, see e.g. [13]; we therefore skip the proof.

Lemma 3.1. It holds

‖u − Iu‖_{L²(Ω)} ≲ (h + N^(−1) max|ψ′|)^(k+1),
‖(u − Iu)′‖_{L²(Ω)} ≲ ε^(−1/2) (h + N^(−1) max|ψ′|)^k,

and additionally

‖E − IE‖_{L²((0,λ)∪(1,1+λ))} ≲ ε^(1/2) (N^(−1) max|ψ′|)^(k+1),
‖E − IE‖_{L²((λ,1)∪(1+λ,2))} ≲ N^(−(k+1)),
‖(W − IW)′‖_{L²(Ω)} ≲ ε^(1/2) (N^(−1) max|ψ′|)^k.

The numerical method is now given by: find u_N ∈ U_N such that

B(u_N, v) = F(v)  for all v ∈ U_N.   (4)

Obviously, we immediately have the Galerkin orthogonality B(u − u_N, v) = 0 for all v ∈ U_N.
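The node formula above is straightforward to implement. The following sketch generates the Shishkin and Bakhvalov-S variants; the concrete values of N, ε, β and σ are arbitrary test choices, not parameters from the paper's experiments.

```python
import numpy as np

# Sketch of the S-type mesh construction above. phi is the mesh-defining function
# (Shishkin: phi(t) = 2t ln N; Bakhvalov-S: phi(t) = -ln(1 - 2t(1 - 1/N))).
def s_type_mesh(N, eps, beta, sigma, phi):
    assert N % 4 == 0
    lam = sigma * eps / beta * np.log(N)     # transition point lambda
    assert lam <= 0.5, "requires sigma*eps/beta*ln(N) <= 1/2"
    x = np.zeros(N + 1)
    for i in range(N + 1):
        if i <= N // 4:
            x[i] = sigma * eps / beta * phi(2 * i / N)
        elif i <= N // 2:
            x[i] = 4 * i / N * (1 - lam) + 2 * lam - 1
        else:
            x[i] = 1 + x[i - N // 2]         # copy of the mesh on (0,1), shifted
    return x

N, eps, beta, sigma = 32, 1e-3, 1.0, 2.0
shishkin = s_type_mesh(N, eps, beta, sigma, lambda t: 2 * t * np.log(N))
bakhvalov = s_type_mesh(N, eps, beta, sigma,
                        lambda t: -np.log(1 - 2 * t * (1 - 1 / N)))
```

Both meshes hit the transition point x_{N/4} = λ exactly, and the second half is the shifted copy of the first, so the weak layer at x = 1 is resolved in the same way as the strong layer at x = 0.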
We start with the triangle inequality

|||u − u_N||| ≤ |||u − Iu||| + |||Iu − u_N|||,

where the first term can be estimated by Lemma 3.1. Let χ := Iu − u_N ∈ U_N and η := u − Iu. Then coercivity and Galerkin orthogonality yield

|||χ|||² ≤ B(η, χ) = ε⟨η′, χ′⟩_Ω + ⟨cη − bη′, χ⟩_Ω + ⟨dη(·−1), χ⟩_{(1,2)}
        ≲ (h + N^(−1) max|ψ′|)^k |||χ||| + |⟨b(E − IE), χ′⟩_Ω|,

where Cauchy-Schwarz inequalities and the interpolation error estimates were used for all terms except the convection term involving the strong layer, to which integration by parts was applied. For the remaining term we decompose the resulting scalar product into the fine and coarse regions:

|⟨b(E − IE), χ′⟩_Ω| ≤ |⟨b(E − IE), χ′⟩_{(0,λ)∪(1,1+λ)}| + |⟨b(E − IE), χ′⟩_{(λ,1)∪(1+λ,2)}|
  ≲ ε^(1/2) (N^(−1) max|ψ′|)^k ‖χ′‖_{L²((0,λ)∪(1,1+λ))} + N^(−(k+1)) ‖χ′‖_{L²((λ,1)∪(1+λ,2))}
  ≲ (N^(−1) max|ψ′|)^k |||χ||| + N^(−k) ‖χ‖_{L²((λ,1)∪(1+λ,2))},

where an inverse inequality was used. Combining the results finishes the proof. □

3.3 On a coarser mesh

Let us consider a mesh (see also [11], where a similar mesh is used for weak layers) that resolves the weak layer not by an S-type mesh but by an even simpler equidistant mesh with a specially chosen transition point, while the strong layer is still resolved by an S-type mesh. Thus let

λ := (σε/β) ln N ≤ 1/2   and   µ := ε^((k−1)/k)/β ≤ 1/2,

which still implies only the weak condition ε ≲ (ln N)^(−1). Note that in the case k = 1 we set µ = 1/2. By the same ideas as in the previous subsection we construct the mesh nodes

x_i = (σε/β) φ(2i/N)            for 0 ≤ i ≤ N/4,
x_i = (4i/N)(1−λ) + 2λ − 1      for N/4 ≤ i ≤ N/2,
x_i = 1 + µ(4i/N − 2)           for N/2 ≤ i ≤ 3N/4,
x_i = (4i/N)(1−µ) + 4µ − 2      for 3N/4 ≤ i ≤ N.

Note that for i ≥ N/4 the mesh is always piecewise equidistant, independent of the choice of φ. For the (minimal) mesh widths in the different regions it holds

h₁ ≲ ε,  H₁ ∼ N^(−1),  h₂ ∼ N^(−1) ε^((k−1)/k)  and  H₂ ∼ N^(−1).
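Analogously, the coarser mesh can be generated directly from the four-piece node formula above. Again this is a sketch with arbitrary parameter values; the clipping of µ at 1/2 mirrors the assumption µ ≤ 1/2.

```python
import numpy as np

# Sketch of the coarser mesh: S-type grading towards the strong layer at x = 0,
# equidistant pieces on (1, 1+mu) and (1+mu, 2) for the weak layer, with
# mu = eps^((k-1)/k)/beta (and mu = 1/2 for k = 1).
def coarser_mesh(N, eps, beta, sigma, k, phi):
    assert N % 4 == 0
    lam = sigma * eps / beta * np.log(N)
    mu = 0.5 if k == 1 else min(eps ** ((k - 1) / k) / beta, 0.5)
    x = np.zeros(N + 1)
    for i in range(N + 1):
        if i <= N // 4:
            x[i] = sigma * eps / beta * phi(2 * i / N)
        elif i <= N // 2:
            x[i] = 4 * i / N * (1 - lam) + 2 * lam - 1
        elif i <= 3 * N // 4:
            x[i] = 1 + mu * (4 * i / N - 2)            # equidistant on (1, 1+mu)
        else:
            x[i] = 4 * i / N * (1 - mu) + 4 * mu - 2   # equidistant on (1+mu, 2)
    return x

N, eps, beta, sigma, k = 32, 1e-4, 1.0, 3.0, 2
mesh = coarser_mesh(N, eps, beta, sigma, k, lambda t: 2 * t * np.log(N))
```

The generated nodes satisfy x_{N/2} = 1 and x_{3N/4} = 1 + µ, so the weak inner layer is covered by a fixed fraction of the cells without any graded refinement.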
The proof of the interpolation errors uses local interpolation error estimates, given on any cell τ_i of width h_i, for 1 ≤ s ≤ k + 1 and 1 ≤ t ≤ k, by

    ‖v − Iv‖_{L²(τ_i)} ≲ h_i^{s} ‖v^{(s)}‖_{L²(τ_i)},          (5a)
    ‖(v − Iv)′‖_{L²(τ_i)} ≲ h_i^{t} ‖v^{(t+1)}‖_{L²(τ_i)},      (5b)

for v smooth enough. In principle it is similar to proving interpolation error estimates on S-type meshes, but the different layout of the mesh makes some changes in the proof necessary.

Lemma 3.4. Let us assume σ ≥ k + 1 and

    e^{−ε^{−1/k}} ≤ N^{1−k}.        (6)

Then it holds

    ‖u − Iu‖_{L²(Ω)} ≲ (h₁ + N⁻¹ max|ψ′|)^{k+1/2},        (7a)
    |||u − Iu||| ≲ (h₁ + N⁻¹ max|ψ′|)^{k},                 (7b)

and, in more detail,

    ‖W − IW‖_{L²(Ω)} ≲ ε^{1/2} N^{−k},                            (7c)
    ‖E − IE‖_{L²((λ,1)∪(1+μ,2))} ≲ N^{−(k+1)},                    (7d)
    ‖E − IE‖_{L²((0,λ)∪(1,1+μ))} ≲ ε^{1/2} (N⁻¹ max|ψ′|)^{k}.     (7e)

Proof. Using (5a) and (5b) with s = k + 1 and t = k, respectively, we obtain

    ‖S − IS‖_{L²(Ω)} ≲ (h₁ + H₁ + h₂ + H₂)^{k+1} ≲ (h₁ + N⁻¹)^{k+1},
    ‖(S − IS)′‖_{L²(Ω)} ≲ (h₁ + H₁ + h₂ + H₂)^{k} ≲ (h₁ + N⁻¹)^{k}.

For E we can proceed as on a classical S-type mesh and obtain with (5a) and s = k + 1

    ‖E − IE‖_{L²(0,λ)} ≲ ε^{1/2} (N⁻¹ max|ψ′|)^{k+1},        (8)

while with a triangle inequality and the L∞-stability of I it follows

    ‖E − IE‖_{L²(λ,2)} ≲ ‖E‖_{L²(λ,2)} + ‖E‖_{L∞(λ,2)} ≲ N^{−(k+1)}.        (9)

With (5b) and t = k we obtain

    ‖(E − IE)′‖_{L²(0,λ)} ≲ ε^{−1/2} (N⁻¹ max|ψ′|)^{k},

and with a triangle and an inverse inequality

    ‖(E − IE)′‖_{L²((λ,1)∪(1+μ,2))} ≲ ‖E′‖_{L²(λ,2)} + N ‖E‖_{L∞(λ,2)} ≲ ε^{−1/2} N^{−k}.

In the remaining part, (5b) with t = k yields

    ‖(E − IE)′‖_{L²(1,1+μ)} ≲ h₂^{k} ‖E^{(k+1)}‖_{L²(1,1+μ)} ≲ N^{−k} ε^{k−1} ε^{−(k+1)} ε^{1/2} E(1)
                            ≲ N^{−k} ε^{−3/2} e^{−β/ε} ≲ ε^{−1/2} N^{−k},

due to

    ε^{−1} e^{−β/ε} ≤ 1/(eβ).        (10)

For the estimation of W we follow the idea given in [10] and apply (5a) with s = 1 and s = 2 in order to obtain

    ‖W − IW‖_{L²(1+μ,2)} ≲ N⁻¹ ‖W′‖_{L²(1+μ,2)} ≲ N⁻¹ ε^{−1/2} W(1 + μ) ≲ N⁻¹ ε^{1/2} e^{−ε^{−1/k}},   (11)
    ‖W − IW‖_{L²(1+μ,2)} ≲ N⁻² ‖W″‖_{L²(1+μ,2)} ≲ N⁻² ε^{−1/2} e^{−ε^{−1/k}}.

Combining these results we have

    ‖W − IW‖_{L²(1+μ,2)} ≲ N^{−3/2} e^{−ε^{−1/k}} ≲ N^{−(k+1/2)},

due to (6).
Note that for k = 1 this approach can also be done on the interval (1, 2), see [10]. For k > 1 we also have with (5a) and s = k + 1

    ‖W − IW‖_{L²(1,1+μ)} ≲ h₂^{k+1} ‖W^{(k+1)}‖_{L²(1,1+μ)} ≲ N^{−(k+1)} ε^{1/2 − 1/k}.

For the derivative we obtain, using (5b) with t = k and t = 1, respectively,

    ‖(W − IW)′‖_{L²(1,1+μ)} ≲ h₂^{k} ‖W^{(k+1)}‖_{L²(1,1+μ)} ≲ ε^{−1/2} N^{−k},
    ‖(W − IW)′‖_{L²(1+μ,2)} ≲ N⁻¹ ‖W″‖_{L²(1+μ,2)} ≲ ε^{−1/2} N⁻¹ e^{−ε^{−1/k}} ≲ ε^{−1/2} N^{−k},

due to (6). Collecting the individual results gives (7a) and (7b). With (5a) and s = k we also obtain

    ‖W − IW‖_{L²(1,1+μ)} ≲ h₂^{k} ‖W^{(k)}‖_{L²(1,1+μ)} ≲ ε^{1/2} N^{−k},

and together with (11) and (6) we have (7c). The result (7d) follows directly from (9). For the final results on E we apply (5a) with s = k and obtain

    ‖E − IE‖_{L²(1,1+μ)} ≲ h₂^{k} ‖E^{(k)}‖_{L²(1,1+μ)} ≲ ε^{−1/2} N^{−k} e^{−β/ε} ≲ ε^{1/2} N^{−k},

due to (10). Together with (8) we finish the proof.

Remark 3.5. Assumption (6) restricts the application of the method for k > 1 slightly. We can rewrite it as

    N ≤ e^{1/((k−1) ε^{1/k})},

and Table 1 shows the bounds on N obtained by this requirement. For small k and reasonably small ε the coarser-mesh approach can be used. For higher polynomial degrees, the weak layer should be resolved by a classical layer-adapted mesh like the S-type mesh. Here we could still increase the value of the transition point, because μ = (σε^{(k−1)/k}/β) ln N > (σε/β) ln N would still be enough.

Theorem 3.6. For the solution u of (1) and the numerical solution u^N of (4) it holds on the coarser S-type mesh with σ ≥ k + 1 and e^{−ε^{−1/k}} ≤ N^{1−k}

    |||u − u^N||| ≲ (h + N⁻¹ max|ψ′|)^{k}.

Proof. The proof follows that of Theorem 3.2 by considering E and W in the convective term separately and using the estimates of the previous lemma.

Numerical example

Let us consider as example the following problem:

    −ε u″(x) − (2 + x) u′(x) + (3 + x) u(x) − d(x) u(x − 1) = 3,   x ∈ (0, 2),
    u(2) = 0,   u(x) = x²,  x ∈ (−1, 0],

where

    d(x) = 1 − x for x < 1,   d(x) = 2 + sin(4πx) for x ≥ 1.

Here the exact solution is not known.
On a Bakhvalov-S-mesh with σ = k + 1 and ε = 10⁻⁶ we obtain the results listed in Table 2. For other values of ε the results are similar. Obviously we see the expected rates of N⁻ᵏ in |||u − u^N|||. For the computation of these results, a reference solution on a finer mesh with a higher polynomial degree was used instead of an exact solution. On the coarsened mesh we obtain the results shown in Table 3. We observe for ε = 10⁻⁶ almost the same numbers as for the Bakhvalov-S-mesh; here the conditions of Table 1 are fulfilled and we do not observe a reduction in the orders of convergence. But for the larger ε = 10⁻³ there is a visible reduction in the convergence orders for k = 4. This demonstrates clearly that for higher polynomial degrees and rather large ε a classical layer-adapted mesh should be chosen.

Figure 1: Dependence graph of the problems in the solution decomposition.

Remark 3.3. We could also have used a different layer-adapted mesh, like a Durán mesh, introduced in [4], modified to our problem. The proof of interpolation errors and finally of convergence follows again the standard ideas.
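The rate columns in Tables 2 and 3 can be recovered as observed orders of convergence. The helper below is a hypothetical sketch (the function name is ours), assuming the listed rates were computed as log₂ of the error ratio between the meshes with N and 2N cells; it reproduces the tabulated values up to rounding.

```python
import math

def observed_order(e_coarse, e_fine):
    """Observed order of convergence from errors on meshes with
    N and 2N cells: log2(e_N / e_{2N})."""
    return math.log2(e_coarse / e_fine)

# k = 2, N = 16 in Table 2: errors 2.17e-02 (N=16) and 5.68e-03 (N=32)
rate = observed_order(2.17e-02, 5.68e-03)  # ~1.93, listed as 1.94
```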
Table 1: Bounds on N for given ε and k > 1

    ε      k = 2      k = 3     k = 4   k = 5
    1e-2   2.2e+04    10        2       1
    1e-3   5.4e+13    148       6       2
    1e-4   2.7e+43    47675     28      4
    1e-5   2.7e+137   1.2e+10   375     12
    1e-6   2.0e+434   5.2e+21   37832   52
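The entries of Table 1 follow directly from rewriting assumption (6) as in Remark 3.5, N ≤ exp(1/((k − 1) ε^{1/k})). A small Python check (the function name is ours):

```python
import math

def max_N(eps, k):
    """Largest N allowed by assumption (6), e^{-eps^(-1/k)} <= N^(1-k),
    rewritten as N <= exp(1 / ((k - 1) * eps^(1/k))) for k > 1."""
    return math.exp(1.0 / ((k - 1) * eps ** (1.0 / k)))

bound = max_N(1e-3, 3)  # ~148.4, matching the entry 148 in Table 1
```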
451-03-68/2022-14/200134, while the first, second and third authors are supported by the bilateral project "Singularly perturbed problems with multiple parameters" between Germany and Serbia, 2021-2023 (DAAD project 57560935).A Expansion of terms involving Green's functionWhen estimating α in (2) we have α = N D , wherewhere v 1 is the solution of. We can expand the terms in D in powers of ε, here done using the symbolic math program MAPLE, and obtainCombining these expansions into the denominator D we haveand therefore, remember b, d > 0, it follows D 1 ε . Using above expansions again, we also haveand can estimate the numerator N |N| |β| + |δ| + |β| + |δ| + f L ∞ (0,2) · 1 ε . Numerical treatment for the class of time dependent singularly perturbed parabolic problems with general shift arguments. K Bansal, P Rai, K K Sharma, Differ. Equ. Dyn. Syst. 252K. Bansal, P. Rai, and K.K. Sharma. Numerical treatment for the class of time dependent singularly perturbed parabolic problems with general shift arguments. Differ. Equ. Dyn. Syst., 25(2):327-346, 2017. A time dependent singularly perturbed problem with shift in space. M Brdar, S Franz, L Ludwig, H.-G Roos, arXiv:2202.01601submittedM. Brdar, S. Franz, L. Ludwig, and H.-G. Roos. A time dependent singularly per- turbed problem with shift in space. submitted, arXiv:2202.01601. An adaptive mesh method for time dependent singularly perturbed differential-difference equations. P P Chakravarthy, K Kumar, Nonlinear Engineering. 8P.P. Chakravarthy and K. Kumar. An adaptive mesh method for time dependent singularly perturbed differential-difference equations. Nonlinear Engineering, 8:328- 339, 2019. Finite element approximation of convection diffusion problems using graded meshes. R G Durán, A L Lombardi, Appl. Numer. Math. 56R.G. Durán and A.L. Lombardi. Finite element approximation of convection diffusion problems using graded meshes. Appl. Numer. Math., 56:1314-1325, 2006. 
Higher order numerical approximation for time dependent singularly perturbed differential-difference convection-diffusion equations. V Gupta, M Kumar, S Kumar, Numer. Methods Partial Differential Equations. 34V. Gupta, M. Kumar, and S. Kumar. Higher order numerical approximation for time dependent singularly perturbed differential-difference convection-diffusion equations. Numer. Methods Partial Differential Equations, 34:357-380, 2018. A parameter-uniform numerical method for timedependent singularly perturbed differential-difference equations. D Kumar, M K Kadalbajoo, Appl. Math. Model. 35D. Kumar and M.K. Kadalbajoo. A parameter-uniform numerical method for time- dependent singularly perturbed differential-difference equations. Appl. Math. Model., 35:2805-2819, 2011. Parameter-uniform numerical treatment of singularly perturbed initial-boundary value problems with large delay. D Kumar, P Kumari, Appl. Numer. Math. 153D. Kumar and P. Kumari. Parameter-uniform numerical treatment of singularly perturbed initial-boundary value problems with large delay. Appl. Numer. Math., 153:412-429, 2020. Green's Functions: Construction and Applications. Y A Melnikov, M Y Melnikov, De GruyterY.A. Melnikov and M.Y. Melnikov. Green's Functions: Construction and Applica- tions. De Gruyter, 2012. Singularly perturbed convection-diffusion turning point problem with shifts. Pratima Rai, K Kapil, Sharma, Mathematical analysis and its applications. New DelhiSpringer143Pratima Rai and Kapil K. Sharma. Singularly perturbed convection-diffusion turning point problem with shifts. In Mathematical analysis and its applications, volume 143 of Springer Proc. Math. Stat., pages 381-391. Springer, New Delhi, 2015. Numerical analysis of a system of singularly perturbed convection-diffusion equations related to optimal control. Chr, H.-G Reibiger, Roos, NMTMA. 44Chr. Reibiger and H.-G. Roos. 
Numerical analysis of a system of singularly perturbed convection-diffusion equations related to optimal control. NMTMA, 4(4):562–575, 2011.
[11] H.-G. Roos. Layer-adapted meshes for weak boundary layers. 2022.
[12] H.-G. Roos and T. Linß. Sufficient conditions for uniform convergence on layer-adapted grids. Computing, 63:27–45, 1999.
[13] H.-G. Roos, M. Stynes, and L. Tobiska. Robust Numerical Methods for Singularly Perturbed Differential Equations, volume 24 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, second edition, 2008.
[14] V. Subburayan and N. Ramanujam. Asymptotic initial value technique for singularly perturbed convection-diffusion delay problems with boundary and weak interior layers. Appl. Math. Lett., 25(12):2272–2278, 2012.
[15] V. Subburayan and N. Ramanujam. An initial value technique for singularly perturbed convection-diffusion problems with a negative shift. J. Optim. Theory Appl., 158(1):234–250, 2013.
FedHC: A Scalable Federated Learning Framework for Heterogeneous and Resource-Constrained Clients
Min Zhang ([email protected]), Fuxun Yu ([email protected]), Yongbo Yu, Minjia Zhang ([email protected]), Ang Li, Xiang Chen
George Mason University, USA; Microsoft, USA; University of Maryland, USA
Federated Learning (FL) is a distributed learning paradigm that empowers edge devices to collaboratively learn a global model leveraging local data. Simulating FL on GPU is essential to expedite FL algorithm prototyping and evaluations. However, current FL frameworks overlook the disparity between algorithm simulation and real-world deployment, which arises from heterogeneous computing capabilities and imbalanced workloads, thus misleading evaluations of new algorithms. Additionally, they lack flexibility and scalability to accommodate resource-constrained clients. In this paper, we present FedHC, a scalable federated learning framework for heterogeneous and resource-constrained clients. FedHC realizes system heterogeneity by allocating a dedicated and constrained GPU resource budget to each client, and also simulates workload heterogeneity in terms of framework-provided runtime. Furthermore, we enhance GPU resource utilization for scalable clients by introducing a dynamic client scheduler, process manager, and resource-sharing mechanism. Our experiments demonstrate that FedHC has the capability to capture the influence of various factors on client execution time. Moreover, despite resource constraints for each client, FedHC achieves state-of-the-art efficiency compared to existing frameworks without limits. When subjecting existing frameworks to the same resource constraints, FedHC achieves a 2.75x speedup. Code has been released on https://github.com/if-lab-repository/FedHC.
10.48550/arxiv.2305.15668
[ "https://export.arxiv.org/pdf/2305.15668v1.pdf" ]
258,887,977
2305.15668
e1c74412089d0d79f02ac1dd305781d1c62d118d
INTRODUCTION

Federated learning (FL) has emerged as a new distributed collaborative learning paradigm and has drawn much attention from academia and industry [13,23,27,37]. Instead of centralizing data in a single server, federated learning allows data to remain decentralized on individual devices such as smartphones. The model training process occurs on edge devices, with the model updates aggregated and sent to a central server for integration. To evaluate the performance of new FL algorithms before real-world deployment, FL simulation frameworks are designed on general CPU/GPUs to host a series of virtual clients, without the need for large-scale real edge devices. FL simulation frameworks not only provide researchers with algorithm benchmarks but also an efficient experimental environment, which helps researchers in the FL community get started quickly and develop new FL algorithms.

Many existing FL frameworks [3,8,14,16] focus on providing a platform to simulate different FL algorithms that tackle privacy, security, aggregation, and Non-IID data. In addition, FedML [14] supports different computing paradigms, including hierarchical FL. Flower [3] supports different programming languages and machine learning frameworks. FedScale [22] supports communication heterogeneity and client availability simulation. However, current FL frameworks neglect the gap between algorithm simulation and real-world deployment, thus misleading evaluations of new algorithms. Additionally, they lack the flexibility and scalability to effectively cater to resource-constrained clients. In the following, we describe the challenges of existing works and our proposed solution.

Firstly, the computation time of a client can be affected by many factors in realistic scenarios, such as hardware capability, data volume, model size, input sequence length, and batch size, but existing FL frameworks often use an oversimplified approach to estimate the client computation time.
Slow clients caused by such factors train fewer local steps, resulting in low training quality. Aggregating models of varied training quality slows down global model convergence. Due to the lack of an accurate measurement mechanism for client computation time in existing FL frameworks, heterogeneous computing capabilities and diverse workloads will cause the global model convergence observed in the framework to deviate from that in the real world when algorithms are deployed on real edge devices. Therefore, it is important to reflect the impact of heterogeneous factors on a client's computation time.

Figure 1: The high-level view of the FedHC framework. (a) represents the FL workflow, which incorporates scalable clients with system and workload heterogeneity. (b) shows the implementation of FedHC, where varying proportions of computing units (SMs) on a single GPU are allocated to several parallel clients (three in this case), thus achieving system heterogeneity. Workload heterogeneity is reflected by framework-provided runtime. FedHC also enhances efficiency through optimizations at the service level, runtime level, and resource level.

There are three possible ways to represent the computation time of different clients, but all of them are limited. (1) Estimation by modeling the execution latency. This approach combines the various factors that affect execution time into a single formula. FedScale [22] roughly estimates the execution time from the system speed and data volume, regardless of other factors such as model size, training configuration, input length, and resource contention inside the client. It cannot support the evaluation of straggler-acceleration algorithms that reduce a straggler's workload, such as resource-aware model personalization [5,10,31,38]. (2) Profiling models on real edge devices. Running the model directly on the real device yields realistic execution times.
But this approach is time-consuming and labor-intensive due to the variety of models and the need for large-scale edge devices. No FL framework currently uses profiling methods to achieve computational heterogeneity. (3) Framework-provided runtime. This approach executes models on general CPU/GPUs and uses the wall-clock time on these backends as the client's execution time. While it effectively handles workload heterogeneity, it lacks support for system heterogeneity within the same backend. For instance, clients sharing the same GPU possess identical computation capabilities, thus losing the ability to simulate diverse computational capacities. To fully support the computation heterogeneity caused by system heterogeneity and workload heterogeneity, we propose a new method that assigns specific resource budgets to different clients to realize heterogeneous computing capabilities. On top of the resource budget, we further use framework-provided runtime to represent execution time.

Secondly, existing FL frameworks lack a resource management mechanism, so they fail to support scalable clients when considering resource constraints. They assume that clients have sufficient resources and do not impose any resource constraints on them. Based on this assumption, LEAF [8] and TFF [16] run different clients sequentially in a single process, which is very time-consuming. Syft [32] and FederatedScope [36] leverage distributed computing but require hardware nodes equal to the number of clients, which is extremely hardware-costly when scaling to massive clients. Flower [3] and FedScale [22] support multiple clients on each hardware node but lack resource management according to heterogeneous resource needs.
To build an efficient FL framework featuring resource-constrained system heterogeneity, we face the following challenges. (1) As the parallelism supported by a single GPU is limited, the prevailing approach for simulating large-scale clients is to launch multiple parallel processes and run further clients sequentially within each process. However, the CUDA context created within a process, which carries the resource budget allocation, cannot be altered afterwards, conflicting with our technique for addressing system heterogeneity. (2) Existing frameworks set a fixed number of processes, which prevents client parallelism from adjusting to resource usage. (3) The total resource budget of parallel clients may not reach 100%, as remaining resources insufficient for the next client are left idle; a reasonable scheduling mechanism is therefore needed to reduce resource waste between clients. (4) Clients with substantial resource budgets may not maximize resource usage, leaving resources underutilized.

To overcome the heterogeneity limitations and tackle the challenges of building an efficient framework, we propose FedHC, a scalable federated learning framework for heterogeneous and resource-constrained clients. As shown in Fig. 1, our focus lies in enhancing computational flexibility and scalability. We summarize FedHC's five unique features as follows: (i) Supporting system heterogeneity with constrained resource budgets. To simulate heterogeneous computational capabilities, we design a resource budget manager module that sets a maximum available percentage of GPU resources, representing various resource-constrained clients. We empower users to effortlessly configure the system heterogeneity by controlling the resource partition. (ii) Enabling workload heterogeneity in real runtime.
Besides hardware capability, we recognize that the running time of each client is also influenced by diverse workloads (caused by data volume, model size, input sequence length, and training configuration). As formulating an equation that accurately models or estimates these factors' impact on execution time proves challenging, we advocate utilizing platform-provided runtime as a representative of execution time. In doing so, diverse workloads are reflected in the runtime. (iii) Enhancing scalability via dynamic process management. To resolve the conflict between process re-use and framework-provided system heterogeneity, as well as the problem of fixed parallelism, we propose dynamic process management. We terminate the existing process after a client completes training and initiate a new process to allocate a new resource budget. Moreover, we devise a dynamic process management scheme, permitting additional processes when GPU resources are idle and fewer processes when GPU schedules are congested. (iv) Optimizing resource utilization among clients via a scheduler. To reduce resource idleness among clients, we develop a client scheduler that orchestrates client execution order and parallelism based on resource budgets, utilizing a double-pointer selection module to identify the next client and a condition check module to determine deployment feasibility. (v) Improving resource utilization within a client by resource-sharing parallelism. To reduce the under-utilization of substantial resource budgets, we propose a resource-sharing method. Unlike conventional parallelism, our resource-sharing approach allows the cumulative SM-percentage budget to surpass 100%, with clients competing for shared resources without breaching their individual maximum thresholds. This not only improves platform resource efficiency but also preserves system heterogeneity among clients.
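Feature (iv) can be pictured as a small packing loop. The sketch below is hypothetical (FedHC's actual scheduler uses a double-pointer selector; the function name and greedy order are ours) and only illustrates the core condition check: launch a pending client whenever its SM budget still fits the remaining GPU capacity.

```python
def schedule_round(pending, free_capacity):
    """Greedy sketch of client scheduling: `pending` maps client id to its
    SM-percentage budget; returns the clients launched in parallel and the
    capacity left over (idle unless resource sharing is enabled)."""
    launched = []
    # try larger budgets first so big clients are not starved
    for cid, budget in sorted(pending.items(), key=lambda kv: -kv[1]):
        if budget <= free_capacity:  # condition check: does this client fit?
            launched.append(cid)
            free_capacity -= budget
    return launched, free_capacity

clients = {0: 50, 1: 40, 2: 30, 3: 20}
launched, idle = schedule_round(clients, free_capacity=100)
```

With these budgets, clients 0 and 1 run in parallel and 10% of the SMs stay idle, which is exactly the gap that the resource-sharing parallelism of feature (v) targets.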
We summarize the contributions of our work as follows:

• FedHC bridges the gap between simulation and real-world deployment with full consideration of system heterogeneity and workload heterogeneity. FedHC realizes system heterogeneity by allocating a dedicated and constrained GPU resource budget to each client, and also simulates workload heterogeneity in terms of framework-provided runtime.

• FedHC improves GPU resource utilization through dynamic process management, client scheduling, and resource-sharing parallelism. Flexible parallelism, with optimization both among and within resource budgets, reduces resource idling. It enables executing large-scale FL experiments under various heterogeneity configurations, even on a single GPU.

• FedHC offers flexible APIs to extend its compatibility for both hardware and software design. As the first FL framework to support explicit fine-grained resource management, FedHC enables the evaluation of heterogeneous model designs, resource optimization, and software-hardware co-design. We believe that FedHC will empower FL researchers and practitioners to explore a myriad of design opportunities concerning algorithms and resource optimizations.

RELATED WORK

Federated learning consists of various heterogeneous clients which collaboratively train a deep neural network. Federated learning faces four core challenges: Non-IID data distribution, privacy concerns, expensive communication, and computation heterogeneity. Several works have been proposed [15,18,25,26,39] to obtain convergence guarantees for Non-IID and unbalanced data. Methods like meta-learning and multi-task learning have been extended to FL for modeling heterogeneous data [9,11,19,33]. Privacy-preserving approaches typically build upon classical cryptographic protocols like differential privacy [4,28] and SMC [7].
Model compression [1], split learning [35], and data compression techniques such as quantization and sketching [2,17,20] have been proposed to reduce communication overheads. To tackle computation heterogeneity, asynchronous communication and active sampling of clients have been developed [6,29]. Model personalization has also attracted much attention recently as a way to reduce stragglers' workloads [5,10,31,38].

FL simulation frameworks are designed to expedite FL algorithm prototyping and evaluation before real-world deployment. They mainly focus on FL features and platform efficiency. From the aspect of FL features, most FL frameworks aim to establish benchmarks which integrate different datasets and algorithms [8,14,16,32], but they neglect the heterogeneity of FL clients. Flower [3] provides a fully language-agnostic interface through protocol-level integration, supporting heterogeneous programming languages and ML frameworks (PyTorch and TensorFlow). FederatedScope [36] aims to support personalized FL configurations, flexible expression of behavior, and different loss functions on local models. However, neither of them can reflect system heterogeneity or workload heterogeneity. FedScale [22] supports system heterogeneity by providing a dataset of different computing speeds, but it cannot support the workload heterogeneity caused by model size, input sequence length, data compression, and batch size. Overall, existing FL frameworks neglect the gap between algorithm simulation and real-world settings, thus misleading the evaluation of new algorithms.

From the aspect of framework efficiency, LEAF [8], TFF [16], and FedML [14] run different clients sequentially on a single hardware node, which is very time-consuming. Syft [32], FederatedScope [36], and FedML [14] support distributed computing but require hardware nodes equal to the number of clients, which is extremely hardware-costly when scaling to massive clients.
Flower [3] and FedScale [22] support multiple clients on each hardware node but lack resource management according to heterogeneous resource needs. None of them can support large-scale clients when applying framework-provided system heterogeneity.

HETEROGENEOUS FL IMPLEMENTATION

In this section, we briefly introduce the architecture of FedHC. Then we introduce how FedHC implements system heterogeneity and workload heterogeneity.

Architecture Overview

We design an FL framework, FedHC, to simulate the server and resource-constrained clients on GPU. Fig. 2 illustrates the overview of the FedHC architecture. The client scheduler determines the parallel clients to run, and then the dynamic process manager launches corresponding processes to execute these clients. The communicator facilitates the transmission of instructions and models between server and clients, while the resource-sharing parallelism module aims to boost GPU utilization. For each client, the corresponding process conducts heterogeneity initialization to simulate system heterogeneity.

FL Heterogeneity Simulation

Definition and impact of heterogeneous devices in FL: From a computational perspective, client execution times result from heterogeneity in numerous aspects. Diverse training times lead to varying local model qualities, subsequently impacting the global model. Therefore, simulating workload and system heterogeneity unveils disparities across clients in training times, further influencing aggregation and aiding researchers in developing and evaluating novel algorithms.

Implementation of system heterogeneity: Simulating system heterogeneity poses a challenge, as most simulation experiments lack numerous distinct hardware platforms. Running different clients on the same GPU results in homogeneous computation capability. To address this issue, we propose splitting GPU resources and allocating GPU resource shares to various clients. The differences in the underlying resources available to each client yield varied computing speeds, thus simulating system heterogeneity.
The differences in underlying resources available to each client yield varied computing speed, thus simulating the system heterogeneity. Fig. 3 illustrates how we implement the simulation of heterogeneous computing capabilities. When a model is deployed to execute on the GPU, it is composed of a series of CUDA kernels. The CUDA kernel is a function that get executed on GPU. From a software aspect, a kernel is executed as a grid of thread blocks. When mapped to hardware execution, a CUDA block is executed by one streaming multiprocessor (SM) which consists of some CUDA cores. Depending on CUDA blocks' required resources, one SM can run multiple concurrent CUDA blocks. The CUDA grid comprised of several CUDA blocks is executed on many SMs. The number of SMs occupied depends on the size of the CUDA kernel. When the number of SMs is insufficient, the execution speed of the kernel will slow down. Based on this, our resource partitioning module simulates different computing speeds by limiting the number of SMs available. Instead of arbitrarily using any SMs on a full GPU, we set a resource budget -a subset of SMs, to simulate a resource-constrained client. Different computing capabilites can be obtained by setting different resource budgets. We implement the computing resource partitioning by os.environ["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"]. The system heterogeneity we designed is very user-friendly, and only requires users to specify different resource budget parameters. Implementation of workload heterogeneity: FedHC supports the simulation and evaluation of various workload heterogeneity. Users can flexibly configure imbalanced data volume or insert data compression method (i.e., data heterogeneity), design heterogeneous models via pruning and additional multi-task model (i.e., model heterogeneity), and customize hyper-parameters such as input sequence length and batch size in training (configuration heterogeneity). 
As these factors change, the corresponding alterations in training times and the impact on the global model are reflected. FedHC's ability to capture these changes stems from deployment in a real runtime. Instead of rough estimation, we record each client's wall-clock time as its training time. In synchronous aggregation, one global round's duration is the longest among all clients' wall-clock times. In asynchronous aggregation, clients' participation in the current communication round is determined by the order of their wall-clock times. Overall, FedHC users only need to set the percentage of computing units and the workload-related configuration for each client. FedHC constrains the resources allocated to the client according to the resource budget. By assigning different resource fractions, variations in running time can be induced, thus realizing the simulation of system heterogeneity. At the same time, the real runtime on the GPU has a natural advantage in reflecting workload heterogeneity. RESOURCE OPTIMIZATION IMPLEMENTATION Heterogeneous clients have different computing resource needs in realistic scenarios. However, existing FL simulation frameworks ignore this feature, so they lack resource management based on resource occupation. We find that existing FL frameworks cannot support efficient large-scale client execution when applying our proposed heterogeneous FL simulation method. So in this section we design resource optimizations to achieve scalability. Dynamic Process Manager The degree of parallelism m on a single GPU is limited, but the number of participants n (n»m) in each global round in FL is massive. To achieve scalability, we design a dynamic process manager, which contains a process status monitor, process switching, a process record table, and a determination module. Challenge: The current approach when running large-scale clients is to launch multiple parallel processes and run clients serially within each process.
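As a quick sketch of the round-duration bookkeeping described above (client names and times are illustrative):

```python
def sync_round_duration(wall_clock: dict) -> float:
    # Synchronous aggregation: the round lasts as long as the slowest client.
    return max(wall_clock.values())

def async_completion_order(wall_clock: dict) -> list:
    # Asynchronous aggregation: clients join the round in completion order.
    return sorted(wall_clock, key=wall_clock.get)

times = {"client_a": 42.0, "client_b": 17.5, "client_c": 63.2}  # seconds
assert sync_round_duration(times) == 63.2
assert async_completion_order(times) == ["client_b", "client_a", "client_c"]
```

Because the times are measured wall-clock values rather than estimates, any system or workload change that slows a client automatically shows up in the round duration.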
But this approach breaks down when combined with system heterogeneity, and it also risks resource allocation waste or "explosion". As mentioned before, we implement the system heterogeneity by assigning a specific resource budget to a specific client at the start of the process. The resource limitation is maintained in the CUDA context. However, the CUDA context within a process can only be created at the beginning of the process. Once the process has started, the CUDA context cannot be changed, which means the resource budget within the process is constant. So the existing approach of sequentially running clients within each process cannot support our proposed system heterogeneity. Moreover, the existing approach can only keep a fixed number of parallel processes. When the resource requirements of parallel clients are relatively small, the degree of parallelism should be increased. Conversely, parallelism should be reduced.
Figure 4: Dynamic process manager
Method: To tackle this challenge, we need to launch a separate process for each client for system heterogeneity, and make the number of parallel processes adjustable according to resource needs. We propose a process manager as shown in Fig. 4. It has two main functions: process switching to achieve scalability, and dynamic parallelism according to resource requirements. The server is implemented in a long-lasting process, which is alive until the experiment ends. The server can dynamically launch processes for different clients. We use the Google Remote Procedure Call (gRPC) [12] to communicate between the server and clients. The communicator transmits the client's request to the server, and transmits the command signal generated by the server to the client. The status monitor processes the client's request and generates the next instruction.
For example, when the status monitor receives the training completion signal from a client, it may issue an instruction signal for the client to upload its local model, according to the aggregation strategy. We also open APIs of the testing handler, training handler, and aggregation strategy for users to customize their settings. The status monitor stores the generated instructions in the record table. The record table has as many rows as the maximum number of parallel processes. Each row of the record table is a First-In-First-Out queue containing the events to issue to the process. Process switching to achieve scalability involves two aspects: terminating the old process and launching a new process for the next client. For terminating the old process, once the status monitor of the server receives the signal of client training completion, the process determination module produces the terminate signal and saves it in the corresponding row of the record table. It is then transmitted to the client. The client jumps out of the loop of continuously requesting the server when it receives the terminate signal, and the process executing the client is terminated. The other aspect of process switching is launching new processes for subsequent clients. Once the next clients to be executed are determined, the launching module in the server initiates a new process. The corresponding resource budget for system heterogeneity is allocated at the beginning of the process, so that the resources available to the process cannot exceed the limit of the resource budget. In this way, we solve the problem that clients running in the same process cannot satisfy system heterogeneity. The process switching mechanism also breaks through the limitation of a fixed number of parallel processes, allowing the degree of parallelism to change dynamically.
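A minimal sketch of the record table and the upload-then-terminate event flow (the event names and the helper function are our own placeholders, assuming one FIFO queue per process slot as described):

```python
from collections import deque

class RecordTable:
    # One First-In-First-Out event queue per parallel process slot.
    def __init__(self, max_parallelism: int):
        self.rows = [deque() for _ in range(max_parallelism)]

    def push(self, row: int, event: str) -> None:
        self.rows[row].append(event)

    def pop(self, row: int):
        return self.rows[row].popleft() if self.rows[row] else None

def on_training_complete(table: RecordTable, row: int) -> None:
    # Status-monitor reaction: ask for the model, then end the process so a
    # fresh process (with a possibly different resource budget) can host the
    # next client.
    table.push(row, "UPLOAD_MODEL")
    table.push(row, "TERMINATE")

table = RecordTable(max_parallelism=4)
on_training_complete(table, row=0)
assert table.pop(0) == "UPLOAD_MODEL"
assert table.pop(0) == "TERMINATE"
```

In FedHC the popped events would be shipped to the client over gRPC; here the queue discipline is the point: instructions reach each process strictly in the order the server generated them.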
Since the existing framework reuses the same process for different clients, the number of clients running in parallel is always fixed. Unlike this fixed parallelism, our process switching flexibly supports a dynamic degree of parallelism. The process launching module can initiate any number of new processes under the premise that no process blockage occurs. The degree of parallelism is determined by the scheduler, which is introduced in the next section. We use the total resource budget limit in the scheduler and a maximum-parallelism parameter to avoid process blocking. The newly initiated process and the events to be issued for it are recorded in a new queue in the record table. Client Scheduler As described in the previous section, when running large-scale clients, a combination of serial and parallel execution is required. Thus a scheduler handling the temporal scheduling of client execution order and the spatial scheduling of the degree of parallelism is necessary, so as to shorten the execution time of one global round and improve resource utilization. Challenge: The current frameworks [3, 22] adopt greedy scheduling. The selected participants in each global round are randomly arranged in a queue and scheduled in queue order. Greedy scheduling causes two problems. First, low GPU utilization: when the current remaining GPU resources are less than the resource budget required by the next client, that client cannot be deployed, and the remaining GPU resources are wasted. So the scheduler should select appropriate parallel clients with the resource budgets in mind. Second, long execution time per global round: if a slow client is executed last and alone, the total execution time of the global round will be longer. Therefore, slow clients should be given higher priority when scheduling.
However, if all the slow clients are executed at the beginning, the high parallelism may cause process blocking and risks fragmentation problems. Moreover, gathering clients with large resource budgets leads to a decrease in parallelism, which cannot make good use of resources. So the scheduler should coordinate the order of clients for temporal consideration. Overall, the scheduler faces the challenges of spatial optimization to improve GPU utilization and temporal optimization to reduce the execution time of the global round. Method: To tackle these challenges, we propose a resource-aware scheduler with double pointers. When a client finishes executing, the server calls the scheduler to get the pending list of clients to run. As Algorithm 1 shows, we first sort the participants according to their resource budgets, and then cyclically use the left pointer and the right pointer to alternately fetch clients until the end condition is met.
Figure 5: Design of resource sharing. We give a case of two parallel clients on a single GPU with two methods: hard margin resource partition (no resource sharing) and soft margin resource partition (resource sharing).
We use the left pointer to fetch the client with the minimum resource budget, and the right pointer to fetch the client with the maximum resource budget. Once a client is selected to be executed, the condition checking module checks whether there are enough resources and idle processes to deploy the client. If the current client passes the condition checks, it is added to the pending list. On the contrary, when the remaining GPU resources are insufficient to sustain the client pointed to by the right pointer, the right pointer stops. But the left pointer still continues, because the resource budget of the client on the left is less than that on the right, so it can fill the remaining GPU resources.
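A simplified sketch of this double-pointer selection (Algorithm 1's exact alternation, tie-breaking, and end conditions may differ; the budgets in the example echo the eight-participant case study used later in the evaluation):

```python
def schedule_round(participants, free_gpu=100, free_procs=8):
    """participants: (client_id, resource_budget) pairs. Returns the pending
    list of clients that fit the remaining GPU resource and process slots."""
    clients = sorted(participants, key=lambda c: c[1])  # sort by budget
    left, right = 0, len(clients) - 1
    right_stopped = False
    take_left = True
    pending = []
    while left <= right and free_procs > 0:
        idx = left if (take_left or right_stopped) else right
        cid, budget = clients[idx]
        if budget <= free_gpu:            # condition checking: resources + slot
            pending.append(cid)
            free_gpu -= budget
            free_procs -= 1
            if idx == left:
                left += 1
            else:
                right -= 1
        elif idx == right and not right_stopped:
            right_stopped = True          # big clients no longer fit; left goes on
        else:
            break                         # even the smallest remaining client fails
        take_left = not take_left
    return pending

# Budgets from the eight-participant case study (clients A-H):
parts = list(zip("ABCDEFGH", [10, 15, 30, 80, 65, 40, 50, 10]))
assert schedule_round(parts) == ["A", "D", "H"]
```

Note how the small-budget stragglers (A, H) and the largest client (D) are co-scheduled up front, packing the GPU to 100%, exactly the behavior the two pointers are designed to produce.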
Once the client pointed to by the left pointer cannot meet the condition checking, the algorithm ends and the pending list is output. The pending list of clients, with their resource budgets and corresponding processes, is used by the process launching module in the dynamic process manager as mentioned before. In this way, clients with large and small resource budgets execute in parallel at the beginning, and clients with moderate resource budgets execute in parallel later. This prevents clients with small resource budgets from slowing down the execution time of the global round, and also improves resource utilization. Resource-sharing Parallelism In order to host multiple clients running concurrently on a single GPU, the server launches multiple processes at the same time. Each process is configured with a specific resource partition as mentioned in the previous section. We design two strategies of resource partition when clients are running in parallel. Fig. 5 gives an example under these two strategies. Under the hard margin strategy, all clients execute within their own resource budgets and do not affect each other. The soft margin strategy allows partial resource sharing. Although different clients compete for some shared resources, the amount of resources available to each client will not exceed its own resource budget limit. Fig. 5(a) shows part of the computation graph of Client 1 and Client 2, containing operators such as convolution, batch normalization, and ReLU activation. When executing these two clients on a single GPU, we assign 60% and 30% resource budgets, respectively, so as to implement the hardware heterogeneity. Fig. 5(b) shows the execution of these two clients under the hard margin resource partition strategy. The resources occupied by the two clients are independent and do not affect each other. Different operators have different requirements for computing resources.
Compute-intensive operators like convolution need more computing resources; otherwise they take more time. When executing operators that do not require many resources, there will be many idle resources within a large resource budget. These idle resources cause low GPU utilization. To solve this problem, we propose the soft margin resource partition strategy. As shown in Fig. 5(c), we allow 15% of the computing resources to be shared. In the resource-sharing area, two situations exist: resource contention occurs when two big operators meet; if there are idle resources on one client, another client fills them, under the premise of not exceeding its own resource budget constraint. Compared with the hard margin resource partition strategy, the soft margin strategy has two advantages. First, idle resources are reduced thanks to resource sharing, thus improving GPU utilization. Second, because of the resource overlap, the total resource budget is reduced from 90% to 75%, which leaves more resource space to increase the number of parallel clients. We use the Multi-Process Service (MPS) [30] to implement these two strategies. FedHC provides a parameter to set the upper-bound resource constraint of all clients. Users only need to set the parameter at the beginning of the experiment. The hard margin resource partition strategy requires the parameter to be no more than 100%. Otherwise, if the upper-bound resource constraint is set higher than 100% (soft margin resource partition strategy), FedHC automatically uses the excess as a shared resource. Table 1 summarizes the key differences between FedHC and existing FL frameworks. Heterogeneous Data means the data distribution among clients is Non-IID. This basic feature is supported by all frameworks. × means no support; √ means full support; † means partial support.
Heterogeneous Workload refers to the computation workload, which is affected by several factors, such as data volume, data compression, model size, input sequence length, and batch size. Existing frameworks (FedML, Flower, FedScale) only consider unbalanced data volume but neglect the other factors. FedHC takes all of these factors into consideration. Heterogeneous Hardware relates to hardware computing capabilities. FedML and Flower support varying real edge devices, but acquiring large numbers of devices for an experiment is difficult. FedScale provides a dataset with different computing speeds, but limits its use to a factor in an estimation formula. FedHC achieves hardware heterogeneity by assigning different resource shares to different clients, which is more flexible and user-friendly. Resource Optimization aims to improve the resource utilization and efficiency of the FL framework. Existing frameworks directly use the mechanisms of the underlying machine learning platform, ignoring the combination with resource-constrained clients in federated learning. FedHC conducts resource optimization at the service level, runtime level, and resource level. Scalability refers to executing large-scale clients in an efficient way. TFF and LEAF are limited to single-machine simulation. FedML supports distributed computing but requires hardware nodes equal to the number of clients. Flower and FedScale can simulate large-scale clients on a handful of GPUs, but they fall short when considering heterogeneous resource occupation. FedHC achieves scalability for clients with different resource consumption. Flexible APIs allow the deployment and extension of diverse FL efforts. Apart from APIs such as data selection and model selection supported by existing frameworks, FedHC also provides heterogeneous resource initialization, explicit resource management, model variants, and personalized training configuration.
FL FRAMEWORK COMPARISON EXPERIMENTS In this section, we evaluate FedHC's capabilities in FL simulation experiments. Our evaluation focuses on two main aspects: (1) Heterogeneous FL. We show that FedHC can simulate system heterogeneity and workload heterogeneity. (2) Efficiency of the framework. We show that FedHC can effectively conduct large-scale FL experiments under various heterogeneity settings. We compare its efficiency with existing state-of-the-art FL frameworks. Ablation experiments also show that the resource optimization components of FedHC (dynamic process manager, client scheduler, and resource-sharing parallelism) are effective in improving GPU utilization and efficiency. Hardware environment: All experiments are conducted using a single NVIDIA Titan V GPU. We assign different resource budgets to each client in order to simulate hardware heterogeneity. We execute each client with its respective resource budget and record the wall-clock time as the client's execution time. Support for Heterogeneous FL We first show that different factors change a client's training time. Then we show how FedHC can use these factors to accelerate stragglers in FL, whereas the state-of-the-art FL framework FedScale fails to. Finally, we show the impact of hardware heterogeneity and workload heterogeneity on global convergence. Experimental setup: To show that FedHC can reflect the impact of different factors that change client training time, we vary the resource budget, input sequence length, number of model layers, and batch size and record the client training time. We experiment with the task of sentiment classification on the SST-2 dataset [34], movie reviews with binary classes. We partition the data into a Non-IID distribution. The base model we use is a multi-layer LSTM.
We also progressively adjust these factors on FedScale (the state-of-the-art FL framework) and FedHC to compare the heterogeneity support of the two frameworks. Lastly, we design experiments to show the impact of hardware heterogeneity and workload heterogeneity on global convergence. To show the effect of workload heterogeneity on global convergence, we add an extra local model to increase the workload. Some works [24] propose training a local model for client personalization, which increases the client workload. We train an image classification FL task on the Cifar10 dataset [21]. 20 clients are generated in a Non-IID setting, and 80% of clients participate in local training in each global round. The accuracy of the global model over time is recorded separately with and without the extra local model. To show the effect of hardware heterogeneity on global convergence, we conduct experiments with and without the hardware heterogeneity setting on the FEMNIST dataset. In the setting without hardware heterogeneity, all clients are executed on the whole GPU. In the hardware heterogeneity setting, each client is assigned a specific resource budget to constrain its computing capability. Results: Fig. 6 illustrates how the client's training time varies under the impact of diverse factors. The smaller the GPU percentage, the longer the running time of the client, reflecting hardware heterogeneity. The training time is reduced when decreasing the input sequence length and the number of model layers or increasing the batch size, which shows the workload heterogeneity. Fig. 7 shows that FedHC enables adjusting different factors to accelerate stragglers. The base model (S0) executed on the GPU does not have any constraints on hardware capability and keeps the original workload. When adding the hardware constraint setting (S1), the training time of the client on both FedScale and FedHC increases.
However, when progressively changing other factors, i.e., increasing the batch size (S2), decreasing the number of model layers (S3), and decreasing the input sequence length (S4), FedHC can reflect the reduced training time but FedScale fails. Fig. 8 shows the impact of workload heterogeneity and hardware heterogeneity on convergence. When adding the extra model, the heavier workload occupies part of the client's resources, thus slowing down the speed of convergence. The convergence speed can also be slowed down by the hardware heterogeneity setting, because clients with small resource budgets have weaker computing capabilities, resulting in longer training times. With its ability to respond to different factors, FedHC avoids misleading evaluations of client training time and convergence speed, thus narrowing the gap between simulation experiments and real-world deployments. Support for Scalability FedHC supports a large scale of clients participating in the training process in each global round. Unlike existing FL frameworks, FedHC applies resource constraints on each client. To show the efficiency of FedHC, we compare the round duration with existing FL frameworks. Experimental setup: As FedHC assigns resource budgets to clients to account for hardware heterogeneity, we convert the computing-speed dataset released with FedScale into resource budgets. We generate varying resource budgets for 2800 clients and illustrate the distribution in Fig. 9 (a). The y-axis represents the number of clients with a specific resource budget, while the x-axis indicates the percentage of computing units on the GPU (SMs). For framework comparison, we choose several advanced FL frameworks: FedML, Flower, and FedScale. We use the FEMNIST dataset with a Non-IID partition and apply the same data to all frameworks for a fair comparison. We train the ResNet18 model on local data and aggregate models using FedAvg.
We set the same hyperparameters on all frameworks to keep the same workload. 10 clients are selected in one global round. For each client, we train on 500 batches of data, and the batch size is 64. We record the duration of each round, including training, testing, and so on, to compare framework efficiency. We first use the original setting for each framework. Next, we compare framework efficiency in a more practical scenario where clients have limited resources, applying the same resource-constrained setting as FedHC to FedScale. We scale the number of participants in each round from 100 to 2000. Finally, we evaluate model convergence across different numbers of participants on the FedHC framework. Results: Fig. 9 (b) illustrates the efficiency comparison with several advanced FL frameworks, including FedML, Flower, and FedScale. Despite FedHC imposing resource constraints on each client, unlike the other frameworks which have no limits, it still achieves state-of-the-art efficiency when compared to existing frameworks; FedHC exhibits slightly superior performance to FedScale. However, when we apply the same resource-constrained client settings to FedScale, its efficiency significantly lags behind FedHC. As depicted in Fig. 9 (c), FedHC achieves a 2.75x speedup compared to FedScale when the number of participants reaches 2000. This discrepancy arises from FedScale's lack of resource management for resource-constrained clients. Consequently, it cannot adaptively adjust client parallelism based on resource usage. Conversely, FedHC enhances GPU utilization by incorporating dynamic process management, resource-aware scheduling, and resource-sharing mechanisms. Fig. 9 (d) shows the test accuracy across different numbers of participants. The global model achieves faster convergence and reaches higher accuracy when increasing the number of participants per round. Large-scale participation contributes larger data volumes and more diverse data distributions, resulting in higher model quality.
Effectiveness of FedHC Components To demonstrate the effectiveness of each module, we design ablation experiments. Based on the FedScale framework structure, we add process switching to support the configuration of resource heterogeneity, which we use as the baseline for experimental comparison. We progressively add the dynamic process management module, the resource-aware scheduling module, and the resource sharing module to realize FedHC. We use the same setting as in the last section. We select 3, 10, and 100 participants respectively, and report the execution time per global round. Fig. 10 shows the execution time per global round with different numbers of participants, showing that each of the above modules in FedHC reduces the execution time. Below we analyze the effectiveness of each module separately. Dynamic Process Management Unlike the fixed number of parallel processes for client execution in FedScale, the dynamic process management in FedHC automatically determines the appropriate number of parallel processes based on GPU resource usage. Fig. 11(a) illustrates the variation in the number of parallel processes during a single global round involving a total of 20 participants. It is evident that the dynamic process management approach results in a higher and dynamic number of parallel clients compared to the fixed process number setting. As depicted in Fig. 11(b), this also translates into a clear advantage in terms of the total resource budget, ultimately leading to a reduction in execution time. This improvement is attributed to the resource management module's ability to adjust the level of parallelism based on the resource budget constraints of the parallel clients. In each global round, the resource management module proactively analyzes available resources and initiates additional processes when it predicts that there will be sufficient free resources. Fig.
11(c) demonstrates that the throughput achieved with the dynamic process manager setting surpasses that of the fixed process number setting. Moving on to Fig. 12, we visualize the parallelism of the kernels using the Nsight Systems tool. It is evident that under the fixed process number setting, the parallelism remains constant. However, in the dynamic process manager setting, the kernels execute with higher and varying degrees of parallelism. Scheduling The scheduler module in FedHC is responsible for determining the temporal execution order and spatial parallelism of participating clients, thereby further reducing the execution time of each global round. We present a case study involving 8 participants (A-H) randomly selected from a pool of 2800 clients. These participants have resource budgets of 10, 15, 30, 80, 65, 40, 50, and 10, respectively.
Figure 13: Performance of resource-aware scheduling
In this study, we compare the outcomes of the existing greedy scheduling method with our resource-aware scheduling method. In Figure 13 (a), the client order and execution durations are depicted. With resource-aware scheduling, the order of execution for clients is altered, prioritizing the straggling client H to be executed earlier, thereby mitigating its impact on the overall duration. Furthermore, under the greedy scheduling setting, clients A, B, and C utilize 55% of the computing resources, leaving insufficient resources on the GPU to accommodate client D, which requires an 80% resource budget. Our approach enhances resource utilization and parallelism by coordinating the execution order of clients, effectively balancing resource-intensive and resource-light clients. Consequently, the total execution time for a global round is reduced from 213 seconds to 128 seconds. Fig. 13 (b) shows the total resource budgets under the different scheduling methods.
The shaded region between the total resource budget curve and the line y=100 represents vacant GPU resources that have not been assigned to any clients. Clearly, the resource-aware scheduling in FedHC greatly reduces the resource vacancy compared with the existing method. Resource Sharing Clients with substantial resource budgets may not maximize resource usage, leaving resources underutilized. FedHC uses the soft margin resource partition method, which allows clients to compete for the shared resource portion while no client exceeds its own resource limit. In this section, we show that resource sharing in FedHC improves resource utilization, achieving higher parallelism and throughput. Experimental setup: In the hard margin resource partition setting, we set the total resource threshold to 100%, which means no resource sharing. In the soft margin resource partition setting, we set the total resource threshold to 150%, which means 50% of GPU resources can be shared among parallel clients. We select 10 participants in each global round. Results: Fig. 14(a) shows the total resource budget of all participants in one global round. The resource-sharing method raises the total resource budget, so resource utilization is improved. As a result, the total execution time of one global round is reduced. The resource-sharing method improves resource utilization by increasing the number of parallel clients, as shown in Fig. 14(b). That is because the resource-sharing method makes full use of idle resources between resource budgets. Fig. 14(c) shows that the resource-sharing method improves throughput. As Fig. 14(d) shows, resource sharing also brings resource competition, resulting in variations in training time for each client. But we found the change to be small, especially for clients with small resource budgets. Therefore, the total time of each global round is only slightly affected, because the straggler's time dominates.
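As a toy model of the admission check under the two strategies (the function is ours; the 150% threshold mirrors the setting above, where the excess over 100% acts as the shared pool):

```python
def can_admit(new_budget: int, running_budgets: list, total_threshold: int) -> bool:
    """Hard margin: total_threshold <= 100, so budgets never overlap.
    Soft margin: total_threshold > 100, and the excess over 100% is a shared
    pool that parallel clients may compete for (each still capped at its own
    budget)."""
    return sum(running_budgets) + new_budget <= total_threshold

# Hard margin (no sharing): a 30% client does not fit next to 60% + 30%.
assert not can_admit(30, [60, 30], total_threshold=100)
# Soft margin with 50% shared resources: the same client is admitted.
assert can_admit(30, [60, 30], total_threshold=150)
```

This captures why the soft margin setting admits more parallel clients: overlapping budgets shrink the effective total, while MPS still enforces each client's individual cap.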
CONCLUSION To enable simulation of large-scale heterogeneous devices as in real FL, we introduce FedHC, a scalable federated learning framework for heterogeneous and resource-constrained clients. Existing FL research platforms do not support scalable FL clients, especially when considering hardware heterogeneity and diverse workloads. Unlike rough estimation methods, we assign a resource budget to each client, resulting in varying runtimes due to heterogeneous resource constraints, and any workload heterogeneity is also reflected within the runtime. Furthermore, we enhance GPU resource utilization for scalable clients through a dynamic process manager to control parallelism, a client scheduler for temporal and spatial coordination, and a resource-sharing method to reduce idling. Experiments show FedHC can perform large-scale FL experiments with heterogeneous FL scenarios, enabling researchers to explore more FL design opportunities.
Figure 2: Architecture of the FedHC framework
Figure 3: The implementation of heterogeneous computing capabilities. The correspondence of software (left) and hardware (right) for kernel execution on GPU shows how to implement the resource-constrained client with a subset of the GPU.
Figure 6: FedHC shows varied training time caused by diverse factors with the framework-provided runtime. It can reflect the impact of different factors on client execution time that other frameworks cannot.
Figure 7: FedHC
Figure 8: Impact of client heterogeneity on convergence
Figure 9: Result
Figure 10: Ablation study
Figure 11: Performance under fixed process number and dynamic process number
FedHC exhibits slightly superior performance to FedScale.

Figure 12: Kernel.
Figure 14: Performance of resource sharing.

Table 1: Comparison with existing FL frameworks.

Features               LEAF   TFF   FedML   Flower   FedScale   FedHC
Heter. data             †      †     √       √        √          √
Heter. workload         ×      ×     †       †        †          √
Heter. hardware         ×      ×     ×       †        †          √
Resource optimization   ×      ×     ×       †        †          √
Scalability             ×      †     †       †        †          √
Flexible APIs           ×      †     √       √        √          √
Code: https://github.com/if-lab-repository/FedHC
DOI: 10.1140/epjb/e2010-00267-2
arXiv: 1003.1822
PDF: https://arxiv.org/pdf/1003.1822v1.pdf
Discrete-time quantum walks on one-dimensional lattices

Xin-Ping Xu
School of Physical Science and Technology, Suzhou University, 215006 Suzhou, Jiangsu, P.R. China
(Dated: March 9, 2010)

PACS numbers: 03.67.Lx, 05.40.Fb, 03.65.Db, 03.67.Ca

In this paper, we study discrete-time quantum walks on one-dimensional lattices. We find that the coherent dynamics depends on the initial states and coin parameters. For an infinite lattice, we derive an explicit expression for the return probability, which shows the scaling behavior P(0, t) ∼ t^{-1} and does not depend on the initial state of the walk. In the long-time limit, the probability distribution shows various patterns, depending on the initial states, coin parameters and the lattice size. The average mixing time M_ǫ approaches the limiting probability in a time linear in N (the size of the lattice) for large values of the threshold ǫ. Finally, we introduce another kind of quantum walk on lattices of infinite or even size, and show that this walk is equivalent to the traditional quantum walk with a symmetric initial state and coin parameter.

I. INTRODUCTION

The random walk is closely related to diffusion models and is a fundamental topic in discussions of Markov processes. Several properties of (classical) random walks, including dispersal distributions, first-passage times and encounter rates, have been studied extensively. The theory of random walks has been applied to computer science, physics, ecology, economics, and a number of other fields as a fundamental model for random processes in time [1][2][3]. The quantum random walk, a natural extension of the classical random walk, has attracted a great deal of attention in the scientific community in recent years. The continuing interest in quantum random walks can be partly attributed to their broad applications in the field of quantum information and computation [4,5].
Quantum random walks can be used to design highly efficient algorithms for quantum computers [6,7]. For example, Grover's algorithm can be combined with quantum walks in a quantum algorithm for glued trees, which provides even an exponential speed-up over classical methods [7][8][9]. Besides their important applications in quantum computation, quantum walks are also used to model coherent exciton transport in solid-state physics [10]. This can be done in the framework of the tight-binding model, which is equivalent to the so-called continuous-time quantum walk on discrete structures [11,12]. It has been shown that the dramatic non-classical behavior of quantum walks can be attributed to quantum coherence, which does not exist in classical random walks. In the literature, there are two types of quantum random walks: discrete-time and continuous-time quantum walks [13]. The main difference between them is that the discrete-time quantum walk requires a "coin" (any unitary matrix) plus an extra Hilbert space on which the coin acts, while the continuous-time quantum walk does not need this extra Hilbert space. Aside from this, the two versions are similar to their classical counterparts. Discrete-time quantum walks evolve by the application of a unitary evolution operator at discrete time intervals, whereas continuous-time quantum walks evolve under a (usually time-independent) Hamiltonian. Unlike the classical case, the extra Hilbert space of the discrete-time quantum walk means that one cannot obtain the continuous-time walk from the discrete walk by taking the limit of the time step going to zero: the discrete-time walk needs the extra "coin" Hilbert space, and letting the time step go to zero does not eliminate this space. Although there is no natural limit from discrete to continuous quantum walks for general graphs, Ref.
[14] offers a treatment of this limit for the quantum walk on the line, where it is possible to meaningfully extract the continuous-time walk as a limit of the discrete-time walk. The dynamics of both types of quantum walks has been studied in detail for walks on an infinite line, in Refs. [11,15,16] for the continuous-time case and in Refs. [17][18][19][20] for the discrete-time case; the properties of discrete-time and continuous-time quantum walks have been shown to differ. Here, we focus on discrete-time quantum walks (DTQWs). Previous work has studied DTQWs on the line and on cycles. The behavior of DTQWs on the line is strikingly different from that of classical random walks because of quantum interference. The variance σ² of the quantum walk is known to grow quadratically with the number of steps t, σ² ∝ t², compared to the linear growth, σ² ∝ t, of the classical random walk [13,19]. Since the cycle (or one-dimensional lattice) is a line segment with periodic boundary conditions, the solution of quantum walks on cycles can be simplified greatly by working in the Fourier space of the particle [21]. For a classical random walk on a 1D lattice of size N, the walk converges to the uniform distribution within the mixing time M_ǫ = O(−N² ln ǫ) [21]. Quantum mechanically, the probability oscillates forever and in general does not mix, even instantaneously. However, by defining a time-averaged probability distribution, the quantum walk can be made to mix to a uniform or non-uniform distribution. In Ref. [22], Bednarska et al. studied the long-time limiting probability distribution of the Hadamard walk on 1D lattices (cycles). They showed that the Hadamard walk converges to the uniform distribution on cycles of odd size but to a nontrivial distribution on 1D lattices of even size. Previous studies of DTQWs on 1D lattices (cycles) focused on a particular choice of the initial state and coin parameter [23][24][25].
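The quadratic spreading can be checked with a few lines of simulation. The sketch below is our illustration, not part of the original text; the symmetric coin state and the lattice size N = 256 are arbitrary choices, with N kept larger than 2t so the cycle behaves like an infinite line over the simulated times. It verifies that doubling the number of steps roughly quadruples the variance:

```python
import numpy as np

def walk_probs(N, t, rho=0.5):
    """Run t steps of a DTQW on an N-cycle and return P(x, t).
    Starts at x0 = N // 2 with the symmetric coin state (|e1> + i|e2>)/sqrt(2)."""
    psi = np.zeros((2, N), dtype=complex)
    psi[0, N // 2] = 1 / np.sqrt(2)
    psi[1, N // 2] = 1j / np.sqrt(2)
    sr, sq = np.sqrt(rho), np.sqrt(1 - rho)
    coin = np.array([[sr, sq], [sq, -sr]])
    for _ in range(t):
        psi = coin @ psi                                  # coin flip
        psi = np.stack([np.roll(psi[0], -1),              # |e1>: x -> x - 1
                        np.roll(psi[1], 1)])              # |e2>: x -> x + 1
    return (np.abs(psi) ** 2).sum(axis=0)

def sigma2(N, t):
    """Position variance of the walk after t steps."""
    p = walk_probs(N, t)
    x = np.arange(N)
    mean = (x * p).sum()
    return ((x - mean) ** 2 * p).sum()

# sigma^2 grows ~ t^2: doubling t roughly quadruples the variance
r = sigma2(256, 80) / sigma2(256, 40)
```

For a classical random walk the same ratio would be close to 2, since there σ² ∝ t.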
Here, we consider DTQWs on 1D lattices for various initial states and coin parameters, and discuss the effect of these parameters on the properties of the quantum dynamics. In this paper, we give a systematic study of DTQWs on a 1D lattice with various initial states and coin parameters. We explore the time evolution of the walk, the return probability, the long-time limiting probability and the mixing time, and compare these properties for various initial states and coin parameters. The paper is organized as follows. In Sec. II we briefly review discrete-time quantum walks on regular graphs. In Sec. III we derive analytical results for DTQWs on the 1D lattice and find an explicit formula for the return probability; we also carry out computer simulations of DTQWs on 1D lattices for various parameters, and find that the numerical results agree accurately with our analytical results. In Sec. IV we study the properties of the mixing time and discuss the influence of the parameters of the walk. In Sec. V we define another kind of quantum walk on lattices of infinite or even size, and prove that this walk is equivalent to the traditional quantum walk with a symmetric initial state and coin parameter. Conclusions are given in the last part, Sec. VI.

II. DISCRETE-TIME QUANTUM WALKS

The discrete-time quantum walk was first introduced by Meyer and by Aharonov et al. in Refs. [26,27]. A discrete-time quantum walk takes place in a discrete space of positions, with a unitary evolution of coin toss and position shift in discrete time steps. Here, we define discrete-time quantum walks (DTQWs) on a d-regular graph, i.e., a graph in which every vertex has exactly d edges. A DTQW on a d-regular graph lives on the coin Hilbert space H_c and the position Hilbert space H_p; the total Hilbert space is H = H_c ⊗ H_p [13]. If the d-regular graph has N vertices, we have H_p = {|i⟩ : i = 1, 2, ..., N} and H_c = {|e_i⟩ : i = 1, 2, ..., d}.
The coin flip operator Ĉ and the position shift operator Ŝ are applied to the total state in H at each time step [13]. The coin flip operation Ĉ (acting only on H_c) is the quantum equivalent of randomly choosing which way the particle will move; the position shift operation Ŝ then moves the particle according to the coin state, transferring the particle into a new superposition in position space. For every vertex, all outgoing edges are labeled 1, 2, ..., d. The conditional shift operation Ŝ moves the particle from v to w if the edge (v, w) is labeled by j on v's side [13]:

$$
\hat{S}\,|e_j\rangle \otimes |v\rangle =
\begin{cases}
|e_j\rangle \otimes |w\rangle, & \text{if } e_j^{\,v} = (v, w),\\
0, & \text{otherwise.}
\end{cases}
\tag{1}
$$

The evolution of the system at each step of the walk is governed by the total operator

$$
\hat{U} = \hat{S}\,(\hat{I} \otimes \hat{C}),
\tag{2}
$$

where Î is the identity operator on H_p. Thus the total state after t steps is

$$
|\psi(t)\rangle = \hat{U}^{t}\,|\psi(0)\rangle .
\tag{3}
$$

Finally, we obtain the probability distribution

$$
P(x,t) = \sum_{i=1}^{d} |\langle e_i, x|\psi(t)\rangle|^{2}
        = \sum_{i=1}^{d} \big|\big(\langle x| \otimes \langle e_i|\big)\,|\psi(t)\rangle\big|^{2} .
\tag{4}
$$

For d-regular graphs, the coin flip matrix Ĉ can take various forms. The most common coins analyzed in the field are the Grover coin and the discrete Fourier transform (DFT) coin [13,28]. It has been shown that the choice of the coin flip Ĉ and of the initial coin state strongly influences the behavior of discrete-time quantum walks. In the next section, we concentrate on DTQWs on the 1D lattice. We choose a general form of the initial coin state and coin flip matrix, and derive analytical results for the walk.

III. DISCRETE-TIME QUANTUM WALKS ON ONE-DIMENSIONAL LATTICE

DTQWs on the line have already been analyzed in detail, and the equivalence of all unbiased coin operators has been noted by several authors [17][18][19][20][29][30][31]. Here, we continue to use this framework and extend the calculations to DTQWs on 1D lattices.

A. Analytical solutions

In the following, we restrict our attention to DTQWs on the 1D lattice.
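As an illustration of the evolution rule Û = Ŝ(Î ⊗ Ĉ), one step of the walk can be coded directly by storing the state as a (coin, position) amplitude array. This sketch is our addition, not the paper's, and specializes to the 2-regular cycle with a Hadamard coin:

```python
import numpy as np

def dtqw_step(psi, coin):
    """One step of U = S (I ⊗ C) on an N-site cycle.
    psi has shape (2, N): psi[0] holds the |e1> (move-left) amplitude and
    psi[1] the |e2> (move-right) amplitude at each position."""
    psi = coin @ psi                       # coin flip on the internal space
    left = np.roll(psi[0], -1)             # |e1>: |x> -> |x - 1>
    right = np.roll(psi[1], +1)            # |e2>: |x> -> |x + 1>
    return np.stack([left, right])

# Hadamard coin (rho = 1/2 in the one-parameter family used below)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

N, x0 = 16, 8
psi = np.zeros((2, N), dtype=complex)
psi[0, x0] = 1.0                           # initial state |e1> ⊗ |x0>
psi = dtqw_step(psi, H)
P = (np.abs(psi) ** 2).sum(axis=0)         # after one step: 1/2 at x0-1 and x0+1
```

After a single step from |e1⟩ ⊗ |x0⟩, the walker is found at x0 − 1 or x0 + 1 with probability 1/2 each, exactly as the coin-then-shift rule dictates.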
Without loss of generality, we consider the one-parameter family of coins

$$
C = \begin{pmatrix} \sqrt{\rho} & \sqrt{1-\rho} \\ \sqrt{1-\rho} & -\sqrt{\rho} \end{pmatrix},
\qquad 0 \le \rho \le 1 .
\tag{5}
$$

The value ρ = 1/2 corresponds to the Hadamard coin, which is balanced and sends the coin into each direction of H_c with equal probability. Suppose the particle is initially (t = 0) localized at vertex x_0 with the initial coin state distributed in the coin subspace,

$$
|\psi(0)\rangle = \big(\sqrt{a}\,|e_1\rangle + \sqrt{1-a}\,e^{i\phi}|e_2\rangle\big) \otimes |x_0\rangle ,
\tag{6}
$$

where the two parameters satisfy a ∈ [0, 1] and φ ∈ [0, 2π). The position shift operator Ŝ has the form [13]

$$
\hat{S} = \Big(\sum_i |i-1\rangle\langle i|\Big) \otimes |e_1\rangle\langle e_1|
        + \Big(\sum_i |i+1\rangle\langle i|\Big) \otimes |e_2\rangle\langle e_2| .
\tag{7}
$$

The total state |ψ(t)⟩ and the probability distribution P(x, t) after t steps are determined by Eqs. (3) and (4). The solution of the problem is greatly simplified in Fourier space. The Fourier transform of the state in the position Hilbert space is

$$
|\tilde{\psi}_N(k,t)\rangle = \frac{1}{\sqrt{N}} \sum_{x=1}^{N} e^{2\pi i k x/N}\,|\psi_N(x,t)\rangle ,
\qquad x \in \{1, 2, \dots, N\} .
\tag{8}
$$

The time evolution of the states in the Fourier picture then reduces to a single difference equation,

$$
|\tilde{\psi}_N(k,t)\rangle = \tilde{U}(k)\,|\tilde{\psi}_N(k,t-1)\rangle ,
\tag{9}
$$

where Ũ(k) is the time evolution operator in Fourier space,

$$
\tilde{U}(k) = \begin{pmatrix}
\sqrt{\rho}\,e^{-2\pi k i/N} & \sqrt{1-\rho}\,e^{-2\pi k i/N} \\
\sqrt{1-\rho}\,e^{2\pi k i/N} & -\sqrt{\rho}\,e^{2\pi k i/N}
\end{pmatrix} .
\tag{10}
$$

The solution of (9) is

$$
|\tilde{\psi}_N(k,t)\rangle = \tilde{U}(k)^{t}\,|\tilde{\psi}_N(k,0)\rangle ,
\tag{11}
$$

where |ψ̃_N(k, 0)⟩ is the Fourier transform of the initial state. To evaluate powers of the propagator Ũ(k), it is convenient to diagonalize Ũ(k) using its eigenvalues and eigenstates. Denoting by E_n(k) and |q_n(k)⟩ the nth eigenvalue and orthonormalized eigenvector of the propagator (Ũ(k)|q_n(k)⟩ = E_n(k)|q_n(k)⟩, n = 1, 2), Eq. (11) can be written as

$$
|\tilde{\psi}_N(k,t)\rangle = \sum_{i=1}^{2} E_i^{t}(k)\,\langle q_i(k)|\tilde{\psi}_N(k,0)\rangle\,|q_i(k)\rangle .
\tag{12}
$$

In the above equation, the Fourier transform of the initial state is |ψ̃_N(k, 0)⟩ = (1/√N) e^{2πikx_0/N}|C_0⟩, where the initial coin state is |C_0⟩ ≡ √a |e_1⟩ + √(1−a) e^{iφ}|e_2⟩.
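The diagonalization used in Eq. (12) is easy to verify numerically: the propagator Ũ(k) of Eq. (10) is unitary, and its two eigenvalues can be written as e^{−iω(θ)} and −e^{iω(θ)} with ω(θ) = arcsin(√ρ sin θ) and θ = 2πk/N, as used later in Eq. (15). The check below is our addition; the values of k, N and ρ are arbitrary samples:

```python
import numpy as np

def U_fourier(k, N, rho):
    """Fourier-space one-step propagator of Eq. (10), with theta = 2*pi*k/N."""
    th = 2 * np.pi * k / N
    sr, sq = np.sqrt(rho), np.sqrt(1 - rho)
    return np.array([[sr * np.exp(-1j * th), sq * np.exp(-1j * th)],
                     [sq * np.exp(1j * th), -sr * np.exp(1j * th)]])

k, N, rho = 3, 20, 0.25
U = U_fourier(k, N, rho)

# eigenvalues should be exp(-i*omega) and -exp(i*omega),
# with omega = arcsin(sqrt(rho) * sin(theta))
omega = np.arcsin(np.sqrt(rho) * np.sin(2 * np.pi * k / N))
expected = sorted([np.exp(-1j * omega), -np.exp(1j * omega)], key=lambda z: z.real)
found = sorted(np.linalg.eigvals(U), key=lambda z: z.real)
```

Both eigenvalues have modulus one (unitarity), share the imaginary part −sin ω, and differ in the sign of the real part ±cos ω, which is why sorting by real part pairs them up unambiguously.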
By performing the inverse Fourier transform, we obtain the particle state in the position representation,

$$
|\psi_N(x,t)\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} e^{-2\pi i k x/N}\,|\tilde{\psi}_N(k,t)\rangle
= \frac{1}{N} \sum_{k=1}^{N} e^{-2\pi i k(x-x_0)/N} \sum_{j=1}^{2} E_j^{t}(k)\,\langle q_j(k)|C_0\rangle\,|q_j(k)\rangle .
\tag{13}
$$

Finally, we get the probability distribution

$$
P(x,t) = |\langle e_1|\psi_N(x,t)\rangle|^{2} + |\langle e_2|\psi_N(x,t)\rangle|^{2}
= \frac{1}{N^{2}} \sum_{m=1}^{2} \Big| \sum_{k=1}^{N} e^{-2\pi i k(x-x_0)/N}
\sum_{j=1}^{2} E_j^{t}(k)\,\langle e_m|q_j(k)\rangle\,\langle q_j(k)|C_0\rangle \Big|^{2} .
\tag{14}
$$

Substituting the eigenvalues E_j(k) and eigenvectors |q_j(k)⟩ (see Eq. (B1) in Appendix B) into the above equation, we obtain the probability distribution for DTQWs on the 1D lattice. We have performed numerical implementations which confirm the prediction of Eq. (14). It is evident that the probability distribution depends on the initial coin state |C_0⟩ and on the eigenvalues and eigenstates of Ũ(k). In the following, we use the above equation to report the time evolution of the probability distribution for different initial states and coin parameters.

B. Time evolution

In this section, we explore the probability distribution according to Eq. (14). Specifically, we consider the following initial coin states: (a) |C_0⟩ = |e_1⟩, (b) |C_0⟩ = (|e_1⟩ ± |e_2⟩)/√2 and (c) |C_0⟩ = (|e_1⟩ ± i|e_2⟩)/√2, with ρ = 3/4, ρ = 1/2 and ρ = 1/4. Figure 1 shows the probability distribution P(x, t) at t = 20 on a 1D lattice of size N = 40; the three rows are for ρ = 3/4 (row 1), ρ = 1/2 (row 2) and ρ = 1/4 (row 3), while the three columns correspond to the initial states (a) (column (a)), (b) (column (b)) and (c) (column (c)). The initial node is at x_0 = N/2 = 20. We note that the initial coin state |C_0⟩ strongly influences the evolution of P(x, t). For
We also note from the figure that, the coin parameter ρ does not bias the walk; whether P (x, t) symmetric or asymmetric is totally determined by the initial coin state |C 0 . Another interesting observation is that the velocities of the two counterpropagating peaks is different for different values of ρ. We find that the peaks spread faster for large values of ρ (Compare the row (1)-(3) in Fig. (1)). This result is consistent with the result in Ref. [32] where the positions of the peaks vary linearly with the time steps t. In Ref. [32], the authors show that the peak velocity v is a constant value (v ∝ √ ρ) and differs in sign for the two directions. It is instructive to consider the extreme values of parameter ρ. If ρ = 0, the coin flip operationĈ becomes the Pauli X operation, the two states cross each other going back and forth, thereby remaining close to initial excited node. If ρ = 1, the coin flip operationĈ becomes the Pauli Z operation, the two superposition states |e 1 and |e 2 move away from each other without any diffusion and interference. These two extreme cases are not of much importance, but they define the limits of the behavior. Intermediate values of ρ between these extremes show intermediate drifts and quantum interference. We also studied the evolution of probability distribution P (x, t) on different lattice size, initial states and coin parameters. The results are analogous to the case we have shown. The probability distribution generated by DTQWs consists of two counterpropagating peaks. Between the two dominant peaks the probability decays like t −1 while outside the peaks the decay is exponential. The probability distribution P (x, t) exhibits symmetric or asymmetric characteristics depending on the initial coin states. C. Return probabilities Now, we consider a particular case of the probability distribution, return probability P (x = x 0 , t), which means the probability of finding the walker at the initial node. 
In order to study the scaling behavior of P(x = x_0, t), we consider the return probability on an infinite lattice. In the continuum limit N → ∞, θ(k) = 2πk/N in the eigenvalues and eigenstates (see Eq. (B1) in Appendix B) becomes quasicontinuous, and the return probability in Eq. (14) can be written in integral form,

$$
\begin{aligned}
P(x = x_0, t)\big|_{N\to\infty}
&= \frac{1}{2\pi} \sum_{m=1}^{2} \Big| \int_{-\pi}^{\pi}
\Big( \sum_{j=1}^{2} \langle e_m|q_j(\theta)\rangle \langle q_j(\theta)|C_0\rangle\,E_j^{t}(\theta) \Big) d\theta \Big|^{2} \\
&= \frac{1}{2\pi} \sum_{m=1}^{2} \Big|
\int_{-\pi}^{\pi} \langle e_m|q_1(\theta)\rangle \langle q_1(\theta)|C_0\rangle\,e^{-it\omega(\theta)}\,d\theta
+ \int_{-\pi}^{\pi} \langle e_m|q_2(\theta)\rangle \langle q_2(\theta)|C_0\rangle\,(-1)^{t} e^{it\omega(\theta)}\,d\theta \Big|^{2},
\end{aligned}
\tag{15}
$$

where E_1 = e^{−iω(θ)}, E_2 = −e^{iω(θ)} and ω(θ) = arcsin(√ρ sin θ) have been used. In Appendix B, we use the stationary phase approximation (SPA) (see Appendix A) to calculate this integral. The integral finally simplifies to

$$
P(x = x_0, t)\big|_{N\to\infty} =
\begin{cases}
\dfrac{2\sqrt{1/\rho - 1}}{\pi t}, & t \text{ even},\\[6pt]
0, & t \text{ odd}.
\end{cases}
\tag{16}
$$

Equation (16) shows that the return probability obeys the scaling P(x = x_0, t)|_{N→∞} ∼ t^{-1}. Note that the return probability does not depend on the parameters of the initial state (a and φ in |C_0⟩, see Eq. (6)). In particular, for ρ = 1/2 we obtain the return probability of the Hadamard walk, P(x_0, t)|_{N→∞} = 2/(πt). The parameter ρ only affects the coefficient of the t^{-1} scaling. This scaling behavior is analogous to the return probability of continuous-time quantum walks: for continuous-time quantum walks on the 1D lattice the return probability is π(t) = |J_0(2t)|² ≈ sin²(2t + π/4)/(πt), where J_n(x) is the Bessel function of the first kind [33,34]. Thus the return probabilities of both the discrete-time and the continuous-time quantum walk show the same t^{-1} scaling. To test the prediction of Eq. (16), Fig. 2 shows the return probability P(x = x_0, t) on a 1D lattice of size N = 200 with ρ = 1/4, ρ = 1/2 and ρ = 3/4 for the first 100 steps.
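Equation (16) can also be probed by direct simulation on a cycle with N ≫ 2t, which is indistinguishable from the infinite lattice at these times. The parity statement (P = 0 at odd t) is exact, while the t^{-1} prefactor is an asymptotic stationary-phase result, so the sketch below (our addition, using the initial coin |e_1⟩, which Eq. (16) says is immaterial) tests the prefactor only on average over a window of even times:

```python
import numpy as np

def return_prob(t, rho, N=512, x0=256):
    """P(x0, t) for the DTQW of Eqs. (5)-(7), initial state |e1> ⊗ |x0>."""
    psi = np.zeros((2, N), dtype=complex)
    psi[0, x0] = 1.0
    sr, sq = np.sqrt(rho), np.sqrt(1 - rho)
    coin = np.array([[sr, sq], [sq, -sr]])
    for _ in range(t):
        psi = coin @ psi
        psi = np.stack([np.roll(psi[0], -1), np.roll(psi[1], 1)])
    return abs(psi[0, x0]) ** 2 + abs(psi[1, x0]) ** 2

rho = 0.5
# exact parity: the walker cannot return to x0 after an odd number of steps
odd = [return_prob(t, rho) for t in range(1, 40, 2)]
# averaged over even t, t * P(x0, t) should sit near 2*sqrt(1/rho - 1)/pi
vals = [t * return_prob(t, rho) for t in range(20, 101, 2)]
avg = sum(vals) / len(vals)
```

For ρ = 1/2 the predicted coefficient is 2/π ≈ 0.64; the finite-time average fluctuates around this value.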
In our calculations we fixed the value of ρ and varied the initial state of the walk, and found that the return probabilities are exactly the same. This confirms our conclusion that the return probability is independent of the initial coin state. We also show the theoretical prediction of Eq. (16) in Fig. 2, which is in good agreement with the numerical results.

D. Long-time limiting probabilities

In this section we consider long-time averages of the probability distribution. Generally, the time-averaged distribution \bar{P}(x, T) ≡ \frac{1}{T}\sum_{t=1}^{T} P(x, t) converges to a constant value as T → ∞. This value is defined as the long-time limiting probability,

\chi(x) = \lim_{T→∞} \bar{P}(x, T).   (17)

Substituting the eigenvalues E_1 = e^{-i\omega(\theta_k)} and E_2 = -e^{i\omega(\theta_k)} into Eq. (14), we obtain

P(x, t) = \frac{1}{N^2} \sum_{m=1}^{2} \Big| \sum_{k=1}^{N} e^{-2\pi i k (x-x_0)/N} \big( e^{-i\omega(k)t} \langle e_m|q_1(k)\rangle \langle q_1(k)|C_0\rangle + (-1)^t e^{i\omega(k)t} \langle e_m|q_2(k)\rangle \langle q_2(k)|C_0\rangle \big) \Big|^2
= \frac{1}{N^2} \sum_{m=1}^{2} \sum_{k=1}^{N} \sum_{k'=1}^{N} e^{-2\pi i (k-k')(x-x_0)/N} \big[ e^{-i\omega(k)t} \langle e_m|q_1(k)\rangle \langle q_1(k)|C_0\rangle + (-1)^t e^{i\omega(k)t} \langle e_m|q_2(k)\rangle \langle q_2(k)|C_0\rangle \big] \times \big[ e^{i\omega(k')t} \langle C_0|q_1(k')\rangle \langle q_1(k')|e_m\rangle + (-1)^t e^{-i\omega(k')t} \langle C_0|q_2(k')\rangle \langle q_2(k')|e_m\rangle \big].   (18)

Our goal is to calculate the long-time average of this probability distribution. Only two of the four terms in the product of Eq. (18) survive the time average, and only when ω(k) = ω(k'), since \lim_{T→∞} \frac{1}{T}\sum_{t=1}^{T} e^{\pm i(\omega(k)-\omega(k'))t} = \delta_{\omega(k),\omega(k')} and \lim_{T→∞} \frac{1}{T}\sum_{t=1}^{T} (-1)^t e^{\pm i(\omega(k)+\omega(k'))t} = 0. Thus the long-time limiting probability simplifies to Eq. (19); χ(x) depends on the initial coin state |C_0⟩ and on the coin parameter ρ (note that the eigenstates |q(k)⟩ depend on ρ). Below we report the long-time limiting probabilities for various initial states |C_0⟩ and coin parameters ρ.
\chi(x) = \frac{1}{N^2} \sum_{m=1}^{2} \sum_{k,k'=1}^{N} \delta_{\omega(k),\omega(k')} \, e^{-2\pi i (k-k')(x-x_0)/N} \big[ \langle e_m|q_1(k)\rangle \langle q_1(k)|C_0\rangle \langle C_0|q_1(k')\rangle \langle q_1(k')|e_m\rangle + \langle e_m|q_2(k)\rangle \langle q_2(k)|C_0\rangle \langle C_0|q_2(k')\rangle \langle q_2(k')|e_m\rangle \big].   (19)

We consider the following initial coin states: (a) |C_0⟩ = |e_1⟩, (b) |C_0⟩ = \frac{1}{\sqrt{2}}(|e_1⟩ ± |e_2⟩) and (c) |C_0⟩ = \frac{1}{\sqrt{2}}(|e_1⟩ ± i|e_2⟩), with ρ = 3/4, ρ = 1/2 and ρ = 1/4. Fig. 3 shows the long-time limiting probabilities χ(x) on a 1D lattice of size N = 40. For initial state (c), χ(x) is a symmetric distribution, i.e., χ(x) = χ(x') if x − x_0 = x_0 − x'. χ(x) also displays localization on the nodes near the initial node x_0 and near the opposite node \bar{x}_0 ≡ x_0 + N/2 (mod N). For the same initial state, small values of ρ appear to lead to stronger localization than large values of ρ (compare different rows in the same column). For initial state (a), χ(x) has large values at the initial node x_0 (or the opposite node \bar{x}_0) and at its next node x_0 + 1 (or \bar{x}_0 + 1). These high probabilities are exactly equal, i.e., χ(x_0) = χ(x_0 + 1) = χ(\bar{x}_0) = χ(\bar{x}_0 + 1). A similar phenomenon is observed for initial state (b) (see column (b) in Fig. 3): χ(x) has large values at the previous or the next node of x_0 (or of \bar{x}_0), depending on the sign in the initial state. Here χ(x_0) = χ(x_0 + 1) = χ(\bar{x}_0) = χ(\bar{x}_0 + 1) holds only when ρ = 1/2 (see Fig. 3(2b)). The probabilities are equal at nodes x and x + N/2 (mod N) for all initial states and values of ρ; this is a natural consequence of the periodic boundary conditions of the 1D lattice. The limiting probability for initial state (c) is symmetric about the origin node x_0, i.e., χ(x − x_0) = χ(x_0 − x); this is not true for initial states (a) and (b). This feature is related to the initial coin state: generally, a symmetric initial state leads to an unbiased limiting probability distribution, while an asymmetric coin state produces a biased quantum walk.
In the above analysis we studied the limiting probabilities on a lattice of size N = 40. For odd values of N, we find that χ(x) is a uniform distribution for all initial states and all values of ρ > 0. For even values of N, the distribution pattern depends on the parity of N/2: if N/2 is even, χ(x) has peaks at the nodes near the origin node and near the opposite node; if N/2 is odd, χ(x) has a peak near the origin node but a minimum near the opposite node. The minimum for odd N/2 is virtually the mirror image of the peak near the origin node x_0 (see Fig. 1 in Ref. [22]). The situation is different in the extreme cases ρ = 1 and ρ = 0. If ρ = 1, the two superposition states move away from each other without any diffusion or interference, and the limiting probability is uniform for all initial states and all values of N. If ρ = 0, the limiting probability χ(x) is determined entirely by the initial state |C_0⟩ = \sqrt{a}|e_1⟩ + \sqrt{1-a}\,e^{i\phi}|e_2⟩. More specifically, the limiting probability for ρ = 0 is

\chi(x) = \begin{cases} a/2, & x = x_0 + 1, \\ 1/2, & x = x_0, \\ (1-a)/2, & x = x_0 - 1, \\ 0, & \text{otherwise}. \end{cases}   (20)

It is worth mentioning that the limiting probability distributions of continuous-time and discrete-time quantum walks differ. For continuous-time quantum walks (CTQWs), the limiting probability depends only on the parity of N (see Eqs. (21) and (22) in Ref. [34]). By contrast, the limiting probability of discrete-time quantum walks depends on more ingredients, because the coin degrees of freedom of DTQWs offer a wider range of control over the evolution of the walk than continuous-time quantum walks.

IV. MIXING TIME

As mentioned in the previous section, the time-averaged probability distribution \bar{P}(x, T) converges to the limiting probability χ(x) as T → ∞. Here we study this convergence using the concept of mixing time.
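The ρ = 0 limiting distribution of Eq. (20) can be verified exactly by direct simulation, since for ρ = 0 the dynamics is periodic with period 2; the odd-N uniformity stated above can be checked approximately by averaging over a long but finite run. The sketch below is our own illustration (using the shift convention that |e_1⟩ moves left and |e_2⟩ moves right), not code from the paper.

```python
import numpy as np

def dtqw_probabilities(N, steps, rho, coin0, x0):
    # Coined DTQW on a cycle: coin component |e1> moves left, |e2> moves right.
    C = np.array([[np.sqrt(rho), np.sqrt(1.0 - rho)],
                  [np.sqrt(1.0 - rho), -np.sqrt(rho)]])
    psi = np.zeros((N, 2), dtype=complex)
    psi[x0] = coin0
    psi[x0] /= np.linalg.norm(psi[x0])
    P = np.empty((steps, N))
    for t in range(steps):
        psi = psi @ C.T
        psi = np.stack([np.roll(psi[:, 0], -1), np.roll(psi[:, 1], +1)], axis=1)
        P[t] = (np.abs(psi) ** 2).sum(axis=1)
    return P

# rho = 0: the coin is Pauli X and the walker shuttles between x0 and x0 +/- 1,
# so averaging over an even number of steps reproduces Eq. (20) exactly.
a, phi, N, x0 = 0.3, 0.7, 21, 10
P0 = dtqw_probabilities(N, 1000, rho=0.0,
                        coin0=(np.sqrt(a), np.sqrt(1.0 - a) * np.exp(1j * phi)), x0=x0)
chi0 = P0.mean(axis=0)
print(chi0[x0], chi0[x0 + 1], chi0[x0 - 1])   # 0.5, a/2, (1 - a)/2

# Odd N, rho > 0: the time average approaches the uniform distribution 1/N.
chi_u = dtqw_probabilities(11, 20000, rho=0.5, coin0=(1.0, 0.0), x0=5).mean(axis=0)
```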
The mixing time characterizes the rate at which the average probability distribution \bar{P}(x, T) approaches its asymptotic distribution χ(x). The average mixing time is defined as

M_\epsilon = \min\{\, \tau \mid \forall\, T > \tau, \; \sum_x |\bar{P}(x, T) - \chi(x)| < \epsilon \,\}.   (21)

Fig. 4(a) shows the time dependence of the variation distance V(T) ≡ \sum_x |\bar{P}(x, T) − χ(x)| on 1D lattices of size N = 10, N = 20 and N = 100. For long times, the variation distance V(T) oscillates frequently and decays approximately as 1/T. For odd-numbered sizes N, where the probability mixes to the uniform distribution, we find a similar behavior of V(T) (see V(T) vs. T for N = 11, N = 21 and N = 101 in Fig. 4(b)). We also compared the average mixing time M_ε for different values of ρ and different initial states |C_0⟩, considering quantum walks with the initial states (a), (b), (c) and ρ = 3/4, ρ = 1/2 and ρ = 1/4. We find that the quantum walk with initial state (c) and ρ = 3/4 has a smaller mixing time M_ε than the other cases considered here. This may suggest that quantum walks with symmetric initial states and large values of ρ mix quickly to the limiting probability distribution. We hope this observation can be used in constructing efficient quantum algorithms.

V. NEW QUANTUM WALK ON 1D LATTICE

In this section we introduce another kind of quantum walk on the 1D lattice, defined on a lattice of infinite or even-numbered size. The walk starts at node x_0 with initial coin state |C_0⟩ = a_0|e_1⟩ + b_0|e_2⟩, and we assign a direction to each edge of the graph. We label each edge with a direction (|e_1⟩ or |e_2⟩) so that the edges incident on every node carry two different directions and every edge has the same label at either end. Only 1D lattices of even-numbered size satisfy this condition. We illustrate this kind of labeling in Fig. 6.
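The 1/T decay of the variation distance can be reproduced with the same kind of simulation (again our own sketch, not the authors' code). For odd N the limiting distribution is uniform, so χ(x) = 1/N can be used directly; the values of N and T below are arbitrary choices.

```python
import numpy as np

def dtqw_probabilities(N, steps, rho, coin0, x0):
    # Coined DTQW on a cycle: |e1> moves left, |e2> moves right.
    C = np.array([[np.sqrt(rho), np.sqrt(1.0 - rho)],
                  [np.sqrt(1.0 - rho), -np.sqrt(rho)]])
    psi = np.zeros((N, 2), dtype=complex)
    psi[x0] = coin0
    psi[x0] /= np.linalg.norm(psi[x0])
    P = np.empty((steps, N))
    for t in range(steps):
        psi = psi @ C.T
        psi = np.stack([np.roll(psi[:, 0], -1), np.roll(psi[:, 1], +1)], axis=1)
        P[t] = (np.abs(psi) ** 2).sum(axis=1)
    return P

N, Tmax, x0 = 11, 8000, 5
P = dtqw_probabilities(N, Tmax, rho=0.5, coin0=(1.0, 1j), x0=x0)
Pbar = np.cumsum(P, axis=0) / np.arange(1, Tmax + 1)[:, None]  # running time averages
chi = np.full(N, 1.0 / N)              # limiting distribution for odd N
V = np.abs(Pbar - chi).sum(axis=1)     # variation distance V(T), T = 1..Tmax
print(V[99], V[-1])                    # V decays roughly like 1/T
```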
The walk is evolved into a superposition over the coin space by applying the coin flip operation; then the shift operation Ŝ' moves the walker according to

\hat{S}' = \sum_{x \in G_1} \big( |x-1\rangle\langle x| \otimes |e_1\rangle\langle e_1| + |x+1\rangle\langle x| \otimes |e_2\rangle\langle e_2| \big) + \sum_{x \in G_2} \big( |x+1\rangle\langle x| \otimes |e_1\rangle\langle e_1| + |x-1\rangle\langle x| \otimes |e_2\rangle\langle e_2| \big) \equiv \hat{S}_1 + \hat{S}_2,   (22)

where Ŝ_1 and Ŝ_2 denote the two terms in the above equation. We separate the total shift operation into two different shift operators, Ŝ_1 and Ŝ_2, which act on the two node groups G_1 = {..., x_0 − 4, x_0 − 2, x_0, x_0 + 2, x_0 + 4, ...} and G_2 = {..., x_0 − 3, x_0 − 1, x_0 + 1, x_0 + 3, ...}, respectively. We apply this process iteratively to realize a large number of steps of the quantum walk. What distinguishes this walk from the traditional quantum walk is the conditional shift operation: in the traditional quantum walk the shift operation moves the walker to the same side of the node regardless of the walker's position, whereas in our walk the shift operation moves the walker toward different sides depending on its position. A natural question is: what is the relationship between the two kinds of quantum walks? To answer this question, we consider the two quantum walks on the same 1D lattice. For simplicity, we compare the wave functions of the two quantum walks with different initial states and coin flip matrices. Concretely, we consider the traditional quantum walk with initial coin state |C_0⟩ = a_0|e_1⟩ + b_0|e_2⟩ and coin matrix Ĉ in Eq. (5), and our quantum walk with initial coin state |C'_0⟩ = b_0|e_1⟩ + a_0|e_2⟩ and coin matrix

C' = \begin{pmatrix} \sqrt{1-\rho} & \sqrt{\rho} \\ \sqrt{\rho} & -\sqrt{1-\rho} \end{pmatrix}.
After t steps, suppose the state of the traditional quantum walk at node x is |ψ(x, t)⟩ = (A_{x,t}|e_1⟩ + B_{x,t}|e_2⟩)|x⟩. Then the state of our quantum walk |ψ'(x, t)⟩ is given by

|\psi'(x, t)\rangle = \begin{cases} (B_{x,t}|e_1\rangle + A_{x,t}|e_2\rangle)|x\rangle, & x \in G_1, \; t \text{ even}, \\ (A_{x,t}|e_1\rangle - B_{x,t}|e_2\rangle)|x\rangle, & x \in G_2, \; t \text{ odd}. \end{cases}   (23)

This can be proved by mathematical induction. For t = 0 and t = 1 it is easy to check that the wave functions satisfy the above equation. Now suppose the claim holds for t = T_0 (T_0 > 1). From the iterative relations, the wave function of the traditional walk at t = T_0 + 1 is

|\psi(x, T_0+1)\rangle = (\sqrt{\rho}\,A_{x+1,T_0} + \sqrt{1-\rho}\,B_{x+1,T_0})|e_1\rangle|x\rangle + (\sqrt{1-\rho}\,A_{x-1,T_0} - \sqrt{\rho}\,B_{x-1,T_0})|e_2\rangle|x\rangle \equiv A_{x,T_0+1}|e_1\rangle|x\rangle + B_{x,T_0+1}|e_2\rangle|x\rangle,   (24)

where A_{x,T_0+1} and B_{x,T_0+1} denote the two coefficients. Applying Ŝ'(Î ⊗ Ĉ') to |ψ'(x, T_0)⟩ and using the iterative relations, we obtain the wave function at t = T_0 + 1:

|\psi'(x, T_0+1)\rangle = \begin{cases} (\sqrt{\rho}\,A_{x+1,T_0} + \sqrt{1-\rho}\,B_{x+1,T_0})|e_1\rangle|x\rangle + (-\sqrt{1-\rho}\,A_{x-1,T_0} + \sqrt{\rho}\,B_{x-1,T_0})|e_2\rangle|x\rangle, & x \in G_2, \; T_0 \text{ even}, \\ (\sqrt{1-\rho}\,A_{x-1,T_0} - \sqrt{\rho}\,B_{x-1,T_0})|e_1\rangle|x\rangle + (\sqrt{\rho}\,A_{x+1,T_0} + \sqrt{1-\rho}\,B_{x+1,T_0})|e_2\rangle|x\rangle, & x \in G_1, \; T_0 \text{ odd}. \end{cases}   (25)

Comparing Eqs. (24) and (25), we have

|\psi'(x, T_0+1)\rangle = \begin{cases} (A_{x,T_0+1}|e_1\rangle - B_{x,T_0+1}|e_2\rangle)|x\rangle, & x \in G_2, \; T_0 \text{ even}, \\ (B_{x,T_0+1}|e_1\rangle + A_{x,T_0+1}|e_2\rangle)|x\rangle, & x \in G_1, \; T_0 \text{ odd}. \end{cases}

Therefore the claim is also true for t = T_0 + 1, and by mathematical induction the relation holds for all time steps. This completes the proof. It follows from this correspondence that the two types of quantum walks have the same probability distribution: our quantum walk is equivalent to the traditional quantum walk with a swapped initial coin state and a correspondingly modified coin matrix. We have performed numerical implementations of the quantum walks defined here, and the results support our findings. VI.
CONCLUSIONS

In summary, we have studied discrete-time quantum walks on one-dimensional lattices. We show that the evolution of the quantum dynamics depends on the initial states and coin parameters. For a lattice of infinite size, we derive an explicit expression for the return probability, which shows the scaling behavior P(x = x_0, t) ∼ t^{-1} and does not depend on the initial state of the walk. In the long-time limit, the probability distribution shows various patterns depending on the initial state, coin parameter, and lattice size. The average mixing time M_ε grows linearly with N for large values of the threshold ε. Finally, we define another kind of quantum walk on lattices of infinite or even-numbered size, and find that this walk is equivalent to the traditional quantum walk with a correspondingly transformed initial state and coin parameter.

Here

f_1 = iz_1, \quad f_2 = -iz_2, \quad f_3 = (-1)^t f_2, \quad f_4 = (-1)^t f_1.   (B7)

If t is odd, z_1 + z_4 = z_2 + z_3 = 0 and f_1 + f_4 = f_2 + f_3 = 0, so the integral equals 0. If t is even, z_1 = z_4, z_2 = z_3, f_1 = f_4, f_2 = f_3, and the integral simplifies to

I = I_1 + I_2 = |2(z_1 + z_2)|^2 + |2(f_1 + f_2)|^2
= \frac{\sqrt{1/\rho - 1}}{2\pi t} \big[ |(\sqrt{a} + i\sqrt{1-a}\,e^{i\phi})e^{-it\omega_0} + (-i\sqrt{a} - \sqrt{1-a}\,e^{i\phi})e^{it\omega_0}|^2 + |(i\sqrt{a} - \sqrt{1-a}\,e^{i\phi})e^{-it\omega_0} + (-\sqrt{a} + i\sqrt{1-a}\,e^{i\phi})e^{it\omega_0}|^2 \big]
= \frac{\sqrt{1/\rho - 1}}{2\pi t} \times 4 = \frac{2\sqrt{1/\rho - 1}}{\pi t}.   (B8)

Therefore we obtain the return probability

P(x = x_0, t)|_{N→∞} = \begin{cases} \dfrac{2\sqrt{1/\rho - 1}}{\pi t}, & t \text{ even}, \\ 0, & t \text{ odd}. \end{cases}   (B9)

FIG. 1: (Color online) Probability distribution of DTQWs on a 1D lattice of size N = 40 at t = 20.

FIG. 2: (Color online) Return probability P(x = x_0, t) on a 1D lattice of size N = 200 with ρ = 1/4 (triangles), ρ = 1/2 (dots) and ρ = 3/4 (squares) for the first 100 steps. The lines show the predictions of Eq. (16) for ρ = 1/4 (dashed line), ρ = 1/2 (solid line) and ρ = 3/4 (dotted line), respectively.
Since P(x = x_0, t) equals 0 at odd-numbered steps t, we only plot P(x = x_0, t) at even-numbered steps t in the figure.

FIG. 3: (Color online) Long-time limiting probability χ(x) on a 1D lattice of size N = 40 for ρ = 3/4 (row 1), ρ = 1/2 (row 2) and ρ = 1/4 (row 3), with initial states |C_0⟩ = |e_1⟩ (column (a)), |C_0⟩ = (1/√2)(|e_1⟩ ± |e_2⟩) (column (b)) and |C_0⟩ = (1/√2)(|e_1⟩ ± i|e_2⟩) (column (c)). The walk starts at the initial node x_0 = N/2 = 20.

Fig. 5 shows the dependence of the average mixing time M_ε on the lattice size N for different values of the threshold ε, for the initial state |C_0⟩ = (1/√2)(|e_1⟩ ± i|e_2⟩). For sufficiently large ε, the average mixing time is a linear function of N. However, for small values of ε, M_ε shows wild fluctuations around the linear behavior M_ε ∝ N.

FIG. 4: (Color online) Variation distance V(T) as a function of time T for 1D lattices of size N = 10 (dotted curve), N = 20 (solid curve) and N = 100 (dashed curve). The results are for Hadamard quantum walks (ρ = 1/2) with symmetric initial state |C_0⟩ = (1/√2)(|e_1⟩ ± i|e_2⟩).

FIG. 5: (Color online) Dependence of the mixing time M_ε on the lattice size N for ε = 0.05, ε = 0.1, ε = 0.2 and ε = 0.4. The results are obtained using ρ = 1/2 and the symmetric initial state |C_0⟩ = (1/√2)(|e_1⟩ ± i|e_2⟩).

FIG. 6: (Color online) Illustration of the edge labeling on lattices of size N = 4 (a) and N = 3 (b). In (a), the edges 1 ↔ 2 and 3 ↔ 4 are labeled with direction |e_1⟩, and the edges 2 ↔ 3 and 1 ↔ 4 with direction |e_2⟩. In (b), the edges 1 ↔ 2 and 2 ↔ 3 are labeled with directions |e_1⟩ and |e_2⟩, but the edge 1 ↔ 3 cannot be labeled with a single direction. (c) The labeling of coin directions for the traditional quantum walk, where the left side of each node is labeled |e_1⟩ and the right side |e_2⟩. (d) The labeling of our quantum walk: |e_1⟩ and |e_2⟩ are assigned to each edge, and the two endpoints of an edge share the same label.
Acknowledgments

This work is supported by the National Natural Science Foundation of China under project 10975057 and the New Teacher Foundation of Suzhou University.

APPENDIX A: THE STATIONARY PHASE APPROXIMATION (SPA)

The stationary phase approximation (SPA) is an approach for evaluating integrals analytically by concentrating on the regions of the integrand that contribute the most [35-37]. The method is directed specifically at oscillatory integrands, where the phase function of the integrand is multiplied by a relatively large parameter. Suppose we want to evaluate the behavior of a function I(λ) for large λ. The SPA asserts that the main contribution to the integral comes from the stationary points of the phase, i.e., the points where the first derivative of the phase function vanishes; near such points the integral is approximated asymptotically by a Gaussian integral, and if more than one stationary point exists, the individual contributions are summed.

APPENDIX B

The eigenvalues and eigenstates of U(k) can be written as in Eq. (B1), where θ(k) = 2πk/N and sin ω(k) = √ρ sin θ(k). In the continuum limit N → ∞, the values of θ_k = 2πk/N are quasicontinuous, and the return probability P(x = x_0, t) can be written in the integral form of Eq. (15). We now apply the SPA to evaluate this integral. In Eq. (15), P(x = x_0, t)|_{N→∞} can be written as the sum of two integrals I_1 and I_2 (Eqs. (B2)-(B3)). The stationary points of these integrals satisfy ω'(θ) = d arcsin(√ρ sin θ)/dθ = √ρ cos θ / √(1 − ρ sin²θ) = 0, so the contribution of each integral comes from the two stationary points θ = π/2 and θ = −π/2. The second-order derivatives at these points are ω''(±π/2) = ∓√ρ/√(1 − ρ). According to the SPA, substituting θ = ±π/2 into Eqs. (B1)-(B3) yields I_1 and I_2 as sums of the stationary-point contributions z_1, ..., z_4 and f_1, ..., f_4, where ω_0 = arcsin √ρ.

References

[1] N. Guillotin-Plantard and R. Schott, Dynamic Random Walks: Theory and Application (Elsevier, Amsterdam, 2006).
[2] W. Woess, Random Walks on Infinite Graphs and Groups (Cambridge University Press, Cambridge, 2000).
[3] B. H. Kaye, A Random Walk Through Fractal Dimensions (VCH Publishers, New York, 1989).
[4] S. Datta, Quantum Transport: Atom to Transistor (Cambridge University Press, London, 2005).
[5] P. A. Mello and N. Kumar, Quantum Transport in Mesoscopic Systems: Complexity and Statistical Fluctuations (Oxford University Press, USA, 2004).
[6] R. P. Feynman, Found. Phys. 16, 507 (1986).
[7] A. M. Childs and W. van Dam, arXiv:0812.0380.
[8] L. K. Grover, in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (ACM, New York, 1996), pp. 212-219.
[9] L. K. Grover, Am. J. Phys. 69, 769 (2001).
[10] C. Kittel, Introduction to Solid State Physics (Wiley, New York, 1986).
[11] E. Farhi and S. Gutmann, Phys. Rev. A 58, 915 (1998).
[12] A. M. Childs and J. Goldstone, Phys. Rev. A 70, 022314 (2004); Phys. Rev. A 70, 042312 (2004).
[13] J. Kempe, Contemp. Phys. 44, 307 (2003).
[14] F. W. Strauch, Phys. Rev. A 74, 030301(R) (2006).
[15] A. M. Childs, E. Farhi and S. Gutmann, Quantum Inf. Process. 1, 35 (2002).
[16] N. Konno, Quantum Inf. Process. 1, 345 (2002).
[17] T. A. Brun, H. A. Carteret and A. Ambainis, Phys. Rev. Lett. 91, 130602 (2003).
[18] T. A. Brun, H. A. Carteret and A. Ambainis, Phys. Rev. A 67, 032304 (2003).
[19] A. Nayak and A. Vishwanath, quant-ph/0010117.
[20] A. Ambainis, E. Bach, A. Nayak, A. Vishwanath and J. Watrous, in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (ACM, New York, 2001), pp. 37-49.
[21] D. Aharonov, A. Ambainis, J. Kempe and U. Vazirani, in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (ACM, New York, 2001), pp. 50-59.
[22] M. Bednarska, A. Grudka, P. Kurzyński, T. Łuczak and A. Wójcik, Phys. Lett. A 317, 21 (2003).
[23] N. Konno, T. Namiki and T. Soshi, J. Phys. A 36, 241 (2003).
[24] M. Bednarska, A. Grudka, P. Kurzyński, et al., Int. J. Quantum Inf. 2, 453 (2004).
[25] W. Adamczak, K. Andrew, L. Bergen, et al., Int. J. Quantum Inf. 5, 781 (2007).
[26] D. A. Meyer, J. Stat. Phys. 85, 551 (1996); Phys. Lett. A 223, 337 (1996).
[27] Y. Aharonov, L. Davidovich and N. Zagury, Phys. Rev. A 48, 1687 (1993).
[28] V. Kendon, Math. Struct. Comput. Sci. 17, 1169 (2006).
[29] C. M. Chandrashekar, R. Srikanth and R. Laflamme, Phys. Rev. A 77, 032326 (2008).
[30] E. Bach, S. Coppersmith, M. P. Goldschen, R. Joynt and J. Watrous, J. Comput. Syst. Sci. 69, 562 (2004).
[31] N. Konno, in Quantum Potential Theory, Lecture Notes in Mathematics Vol. 1954 (Springer, Berlin, 2008), p. 309.
[32] B. Tregenna, W. Flanagan, R. Maile and V. Kendon, New J. Phys. 5, 83 (2003); M. Štefaňák, T. Kiss and I. Jex, Phys. Rev. A 78, 032306 (2008).
[33] O. Mülken and A. Blumen, Phys. Rev. E 71, 036128 (2005).
[34] X. P. Xu, Phys. Rev. E 77, 061127 (2008).
[35] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers (McGraw-Hill, New York, 1978).
[36] O. Mülken, V. Pernice and A. Blumen, Phys. Rev. E 77, 021117 (2008).
[37] J. Scales, Theory of Seismic Imaging (Springer, Berlin, Heidelberg, 1995), pp. 121-125.
Exploring Timbre Disentanglement in Non-Autoregressive Cross-Lingual Text-to-Speech

Haoyue Zhan, Xinyuan Yu, Haitong Zhang, Yang Zhang, Yue Lin
NetEase Games AI Lab, Guangzhou, China

Interspeech 2022. DOI: 10.21437/interspeech.2022-205. arXiv: 2110.07192.

Abstract: In this paper, we study the disentanglement of speaker and language representations in non-autoregressive cross-lingual TTS models from various aspects. We propose a phoneme length regulator that solves the length mismatch problem between the IPA input sequence and monolingual alignment results. Using the phoneme length regulator, we present a FastPitch-based cross-lingual model with IPA symbols as input representations. Our experiments show that language-independent input representations (e.g. IPA symbols), an increasing number of training speakers, and explicit modeling of speech variance information all encourage a non-autoregressive cross-lingual TTS model to disentangle speaker and language representations. The subjective evaluation shows that our proposed model can achieve decent naturalness and speaker similarity in cross-language voice cloning.
Index Terms: text-to-speech, cross-lingual, monolingual corpus, non-autoregressive

1. Introduction

In the past years, end-to-end Text-to-Speech (TTS) synthesis systems have achieved great success in generating natural monolingual speech [1,2,3,4]. However, for a deployed TTS system it is very common to have to synthesize mixed-language utterances. A recent review [5] further shows that cross-lingual TTS systems can help boost the quality of synthesized speech for low-resource languages. Nevertheless, state-of-the-art TTS models still fall short of generating natural cross-lingual utterances, especially when only monolingual training data are available.
One of the difficulties in building a cross-lingual TTS model with monolingual data is that the speaker representations and language representations are often entangled with each other. On the one hand, various adversarial methods have been adopted to ease this problem. [6] employs domain adversarial training to disentangle the text and speaker representations. In [7], the authors use domain adaptation and perceptual similarity regression to find similar cross-lingual speaker pairs for building cross-lingual TTS models. [8] introduces domain adversarial training into a non-autoregressive acoustic model and builds a multi-speaker, multi-style, multilingual TTS system. On the other hand, some researchers have studied the implications of input representations for cross-lingual TTS systems. [9] builds a shared phoneme set for three different languages. [10] proposes a phonetic transformation network to learn the target symbol distribution with the help of Automatic Speech Recognition (ASR) systems. In [11,12], language-independent Phonetic PosteriorGram (PPG) features from ASR models are used as input for cross-lingual TTS models. [13] further proposes a mixed-lingual grapheme-to-phoneme (G2P) frontend to improve the pronunciation of mixed-lingual sentences in cross-lingual TTS systems. Besides language-dependent representations, some research focuses on language-independent input representations. [14] uses the International Phonetic Alphabet (IPA) as the text input representation and adopts a ResCNN-based speaker encoder to encode speaker representations. [15] combines IPA with a dynamic soft windowing mechanism and a language-dependent style token to improve the intelligibility, naturalness, and speaker similarity of code-switching speech. In [16,17], the authors propose to transform IPA into phonological features to build cross-lingual TTS models; Staib et al. [16] even extend the model to an unseen language.
These studies show that language-independent representations can simplify the training procedure of cross-lingual TTS models and help disentangle speaker and language representations. Recently, non-autoregressive TTS models [4,3] have achieved state-of-the-art performance for monolingual TTS in terms of speech intelligibility and naturalness. Nonetheless, most of the aforementioned works adopt autoregressive TTS models as their backbone framework, and few works exploit non-autoregressive architectures in cross-lingual TTS models. One important reason is that typical non-autoregressive TTS models contain separate duration modules which depend on external aligners [3] to provide ground-truth labels for training. This impedes the use of language-independent representations (e.g. IPA symbols) as model input. To fill this research gap, in this paper we adopt a non-autoregressive architecture to build cross-lingual TTS models. We propose a phoneme length regulator to integrate IPA input representations into non-autoregressive TTS models, and study how different input representations contribute to disentangling speaker from language. We scale up the number of training speakers to investigate its impact on timbre disentanglement for non-autoregressive cross-lingual TTS models. Through extensive experiments, we find that a FastPitch-based [4] cross-lingual model with IPA symbols as input representations achieves the best speech naturalness and speaker similarity. We further verify the effectiveness of each component of the model through ablation studies.

[Figure 1: The process of converting a language-dependent phoneme sequence to an IPA sequence, illustrated on the mixed-lingual phrase "Steins Gate 的选择" (ARPABET/Pinyin symbols, their IPA decompositions, and the resulting phoneme lengths, e.g. "S T AY1 N Z" → "s t ˈaɪ n z" with lengths 1 1 3 1 1).]

Our contributions are as follows. (1) We propose a phoneme length regulator to build non-autoregressive cross-lingual TTS models with IPA input.
(2) We evaluate the effect of different numbers of training speakers on timbre disentanglement for non-autoregressive cross-lingual TTS models. (3) We study the impact of adversarial training and variance adaptors on naturalness and speaker similarity for non-autoregressive cross-lingual TTS models.

2. Proposed Approach

2.1. Mapping language-dependent phonemes to IPA symbols

For any given text sequence, we first convert each word or syllable to its language-dependent phoneme. Then, we use a custom dictionary to map each language-dependent phoneme to IPA symbols. Since IPA symbols are fine-grained phonetic notations, one language-dependent phoneme (LDP) can usually be decomposed into one or more IPA symbols. We refer to the number of IPA symbols of the corresponding LDP as the phoneme length. Figure 1 illustrates the above process with a sentence containing both Mandarin and English. In the first two rows of Figure 1, each word is converted to either ARPABET for English or Pinyin for Mandarin. In the last two rows, we map each ARPABET or Pinyin symbol to its corresponding IPA symbols and phoneme length.

2.2. Phoneme length regulator

Different from autoregressive TTS models, most non-autoregressive TTS models [4,3] rely on external forced aligners to provide phoneme duration information during training. However, IPA sequences have different lengths from the LDP sequences and cannot use the monolingual alignment results directly. To solve this length mismatch problem, we propose a phoneme length regulator to convert IPA embeddings back to LDP embeddings, which effectively bridges the IPA input sequences and the monolingual alignment results.
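As a concrete sketch of the front-end step of Section 2.1, the snippet below maps a toy ARPABET sequence to IPA symbols with a hand-written dictionary and records the phoneme length of each LDP. The dictionary entries are illustrative only; they are not the custom dictionary used in the paper.

```python
# Toy LDP -> IPA dictionary (illustrative entries, not the paper's dictionary).
LDP_TO_IPA = {
    "S": ["s"], "T": ["t"], "AY1": ["ˈ", "a", "ɪ"], "N": ["n"], "Z": ["z"],
    "G": ["g"], "EY1": ["ˈ", "e", "ɪ"],
}

def to_ipa(ldp_seq):
    """Return the flat IPA sequence and the per-LDP phoneme lengths."""
    ipa, lengths = [], []
    for p in ldp_seq:
        symbols = LDP_TO_IPA[p]
        ipa.extend(symbols)
        lengths.append(len(symbols))
    return ipa, lengths

ipa, lengths = to_ipa(["S", "T", "AY1", "N", "Z"])   # the word "Steins" in Figure 1
print(lengths)   # [1, 1, 3, 1, 1]
```

The phoneme length sequence produced here is exactly the L used by the phoneme length regulator below.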
Following the process described in Section 2.1, given the IPA sequence X = \{X_j\}_{j=1}^{T_X} and the corresponding phoneme length sequence L = \{L_i\}_{i=1}^{T_L}, the phoneme length regulator outputs the aggregated embedding sequence Y by adding the IPA embeddings corresponding to the same language-dependent phoneme, based on the phoneme length sequence:

c_0 = 0, \qquad c_\tau = \sum_{k=1}^{\tau} L_k, \quad 0 < \tau \le T_L,   (1)

Y_i = \sum_{k = c_{i-1}+1}^{c_i} X_k, \quad 1 \le i \le T_L,   (2)

where c_τ is the cumulative sum of the phoneme length sequence and Y = \{Y_i\}_{i=1}^{T_L} is the aggregated embedding sequence. One may view an aggregated embedding as a language-dependent embedding, because it is aggregated from the IPA embeddings corresponding to the same language-dependent phoneme. Now that the aggregated sequence has a one-to-one correspondence with the original LDP sequence, one can use the duration information provided by a monolingual aligner to train non-autoregressive TTS models.

2.3. Core acoustic model

The proposed phoneme length regulator in Section 2.2 can easily be applied to any existing non-autoregressive model to enable IPA input. In this section, we propose a FastPitch-based [4] cross-lingual model with IPA symbols as input. We show in Section 3 that, when trained on a multi-speaker cross-lingual dataset, the proposed model achieves the best MOS score among other strong baselines. As shown in Figure 2, the backbone of the proposed model is inherited from FastPitch [4]. The main architecture consists of an encoder and a decoder, each with 4 layers of Feed-Forward Transformer (FFT) blocks. However, unlike [4], which adds the speaker embedding to the encoder input, we add a trainable speaker embedding to the encoder output. In addition, the original FastPitch model only uses a pitch predictor to improve the quality of the generated speech; to increase the stability of our proposed model, we add an extra energy predictor as in [3].
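Eqs. (1)-(2) amount to a segment sum over the IPA axis, driven by the cumulative phoneme lengths. A minimal NumPy sketch of the regulator (our own illustration, not the authors' implementation):

```python
import numpy as np

def phoneme_length_regulator(X, L):
    """Aggregate IPA embeddings back to language-dependent phonemes (Eqs. (1)-(2)).

    X: (T_X, d) IPA embedding sequence; L: phoneme lengths with sum(L) == T_X.
    Returns Y of shape (T_L, d), where Y[i] sums the IPA embeddings of LDP i.
    """
    assert sum(L) == len(X)
    c = np.concatenate([[0], np.cumsum(L)])        # c_tau of Eq. (1)
    return np.stack([X[c[i]:c[i + 1]].sum(axis=0)  # Y_i of Eq. (2)
                     for i in range(len(L))])

X = np.arange(14, dtype=float).reshape(7, 2)   # 7 IPA symbols, 2-dim embeddings
L = [1, 1, 3, 1, 1]                            # phoneme lengths for 5 LDPs
Y = phoneme_length_regulator(X, L)
print(Y.shape)   # (5, 2): one aggregated embedding per LDP
```

The resulting Y has one row per language-dependent phoneme, so monolingual duration labels can be applied to it directly.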
To ensure that training these variance adaptors does not negatively affect the training of the encoder [18, 19, 20], we add a stop-gradient operation to the input of all variance adaptors, as shown in the middle part of Figure 2. Frame-aligned energy is calculated in the same way as in [3]. We then average the energy frames that correspond to the same language-dependent phoneme based on the extracted duration, and quantize the energy in the same way as [3]. Meanwhile, we obtain the ground-truth pitch values with WORLD [21] and average them using the same method as for energy. Instead of quantizing the pitch value, we follow [4] and use a 1-D convolution layer to convert the pitch value into a pitch embedding. The variance adaptors and the main model are all optimized with mean-squared error (MSE) loss, similar to [4, 3].

3. Experiment and Results

3.1. Dataset setup

We use the LJSpeech [22] dataset (EN), the Databaker [23] dataset (CN), the CMU arctic [24] dataset (EN), and our proprietary dataset (CN). We take subsets from the aforementioned datasets and combine them into three cross-lingual datasets, namely d1, d2, and d3. All models are trained on one or more of the three datasets. Details of the three datasets are shown in Table 1. We carefully choose the gender and language of the speakers so that the language and gender of the datasets are as balanced as possible [5]. Note that the speakers in d1 have the most training data. Thus, we consider these two speakers as our target voices and denote them as d1-CN-M and d1-EN-F, respectively.

3.2. Experimental setup

All speech data are resampled at 16 kHz. The audio features are represented as a sequence of 80-dim log-mel spectrogram frames, computed from 40 ms windows shifted by 10 ms. The hidden size of the FFT blocks of our proposed model is set to 256. Each feed-forward layer of the FFT blocks consists of two 1-D convolution layers with kernel size 9 and 1024 intermediate channels. The variance adaptors, including the duration predictor, follow those of [3].
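The phoneme-level averaging of frame-aligned energy (and pitch) described above can be sketched as follows; the frame values and durations below are invented for illustration.

```python
import numpy as np

def average_by_duration(frame_values, durations):
    """Average frame-level values (e.g., energy or pitch) over the frames
    that belong to the same language-dependent phoneme."""
    # Cumulative durations give the frame boundaries of each phoneme.
    bounds = np.concatenate([[0], np.cumsum(durations)])
    return np.array([frame_values[bounds[i]:bounds[i + 1]].mean()
                     for i in range(len(durations))])

energy = np.array([1.0, 3.0, 2.0, 2.0, 8.0])          # 5 frames
phoneme_energy = average_by_duration(energy, [2, 3])  # durations in frames
```

The resulting per-phoneme values are what the energy predictor is trained to regress (after quantization, in the paper's setup).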
In all experiments, the proposed model is trained with a batch size of 32 using the Adam optimizer for 200k steps. The initial learning rate is set to 0.001, and we apply the same learning-rate schedule as [25]. We compare the Proposed model with the following baseline models: (1) a Tacotron-based cross-lingual model reimplemented from [25]; (2) FastSpeech-LDP, a FastSpeech [2] based cross-lingual model with language-dependent phonemes as input and adversarial training, implemented following the M3 w/o FSE model described in [8]; (3) FastSpeech-IPA, a cross-lingual model that adopts the same phoneme length regulator and IPA input as the proposed model but has no variance adaptors. Notably, for a fair comparison with FastSpeech-LDP, we also apply the adversarial training method to FastSpeech-IPA. All baseline models are trained with the same hyper-parameter settings as the Proposed model. All generated mel-spectrograms are converted to speech using a universal, fine-tuned HiFi-GAN vocoder [26].

3.3. Evaluation

To evaluate the trained models, we conduct Mean Opinion Score (MOS) tests on speech naturalness and speaker similarity. We select 50 Mandarin (CN) utterances, 50 English (EN) utterances, and 50 mixed-lingual (CN-EN) utterances, and generate speech samples for the selected utterances with both speaker d1-CN-M and speaker d1-EN-F from Section 3.1. All generated speech utterances are rated by 15 human raters on a scale from 1 to 5 with 0.5-point increments. All raters are native Mandarin speakers with basic English skills. For speaker similarity, a reference utterance of the same speaker is provided, and raters are instructed to judge whether the given synthesized utterance and the reference utterance are spoken by the same person or not.
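The MOS results in the tables below are reported as mean ± interval. Assuming the interval is a normal-approximation 95% confidence interval of the mean (an assumption; the paper does not spell out the convention), the aggregation looks like:

```python
import math

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score with a normal-approximation confidence interval
    (z = 1.96 for ~95%); the +/- convention is assumed, not stated."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # half-width of CI
    return mean, half

# Ratings on a 1-5 scale with 0.5-point increments, as in the listening test
mean, half = mos_with_ci([4.5, 4.0, 5.0, 4.5, 3.5])  # mean = 4.3
```

In practice each reported cell would pool all rater-by-utterance scores for one condition.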
A score of 5 indicates that the voice is definitely the same as the reference utterance, whereas 1 indicates that it is definitely not the same. Note that it is difficult to ask raters to ignore the content of generated speech, especially when they are listening to a non-native language; poor intelligibility and naturalness might therefore result in a lower MOS score. Our synthesized speech samples can be found on this website 2.

3.4. Effect of the number of training speakers

We first investigate the effect of different numbers of training speakers on speaker and language disentanglement. We train the proposed model of Section 2.3 on datasets with different numbers of speakers and evaluate its performance on speaker d1-CN-M. The results are presented in Table 2. As shown in Table 2, the model trained on d1 (i.e., containing only the target voices) has the lowest MOS score for the foreign language (EN), whereas the MOS score for the native language (CN) is already acceptable. This implies that datasets with only one speaker per language do little to improve the target voice's ability to speak foreign languages. We infer that although IPA symbols are shared by all languages, some IPA symbols are unique to certain languages, which entangles the speaker and language representations of the model when the dataset contains only one speaker per language. At the same time, increasing the number of training speakers consistently improves the model's cross-language voice cloning. In addition, the standard deviation of the naturalness scores becomes more stable as the number of training speakers increases. These results show that the diversity of speakers not only helps the model learn to disentangle language and speaker representations, but also stabilizes the generated results of non-autoregressive cross-lingual TTS models.
Given the abovementioned findings, we train all models on dataset d1+d2+d3 for the experiments in the rest of the paper. It can be observed from Table 3 that FastSpeech-IPA has an overall better speaker similarity score than FastSpeech-LDP. This suggests that using language-independent input representations can better disentangle language and speaker information for non-autoregressive TTS models. In addition, the speech naturalness score of FastSpeech-IPA is no worse, and sometimes better, than that of FastSpeech-LDP. This indicates that the proposed phoneme length regulator of Section 2.2 effectively maps IPA embeddings back to LDP embeddings and that the model learns the pronunciation of different languages. Nonetheless, FastSpeech-IPA and FastSpeech-LDP achieve decent MOS scores only when the target speakers speak their native language. In contrast, the proposed model achieves the best MOS scores on all utterance types except for the similarity score on CN utterances of speaker d1-CN-M. This suggests that the variance adaptors further help the model disentangle speaker and language representations. It should be noted that these variance adaptors were originally proposed to ease the one-to-many mapping problem in non-autoregressive TTS models [3]. Yet, our findings suggest that explicitly factorizing speech variations helps disentangle language representations from other representations in cross-lingual TTS, even though we use no supervision or adversarial method on these hidden representations. Finally, one can still observe a gap between the scores of target speakers speaking native and foreign languages. We speculate that, since the reference utterances are always in the speaker's native language, the raters might be affected by the accent of the speaker or other bias factors when listening to the samples.
3.6. Ablation studies

We perform ablation studies to (1) evaluate how the adversarial training method affects the proposed model, and (2) verify the effectiveness of the two variance adaptors of the proposed model. For adversarial training, we add a gradient reversal layer (GRL) and a speaker classifier after the encoder output [6, 8]. We adopt the same speaker classifier and GRL scale factor λ as [27]. We report the MOS scores of speech naturalness and speaker similarity over all utterance types. The results are presented in Table 4. It can be seen that adversarial training has little effect on the proposed model. We hypothesize that the IPA input and the diversity of speakers in the training dataset have already disentangled most language and speaker information, so the model can obtain only little additional information from the GRL. Furthermore, we empirically find that the GRL is sensitive to the scale factor λ; the hyper-parameters we use may not be optimal for cross-lingual TTS, and more careful tuning might lead to better results. Even so, the proposed model achieves decent cross-lingual performance without any complex training method or hyper-parameter tuning. On the other hand, removing the energy and pitch predictors leads to a significant drop in both naturalness and speaker similarity scores, which demonstrates the effectiveness of these variance adaptors in cross-lingual TTS.

4. Conclusions

In this paper, we study the disentanglement of speaker and language representations in non-autoregressive cross-lingual TTS from various aspects. We propose the phoneme length regulator, which facilitates the implementation of non-autoregressive cross-lingual TTS models with IPA input representations using monolingual force aligners. We build a FastPitch-based model with IPA input that achieves decent speech naturalness and speaker similarity without any complex adversarial training method.
Our experimental results show that an increasing number of training speakers, the IPA input representations, and the variance adaptors of [3] can all help non-autoregressive cross-lingual TTS models disentangle speaker and language representations. In future work, we will investigate better methods to model the accent of target speakers in cross-lingual TTS, closing the gap between the native and foreign languages of target speakers. We will also investigate better methods to model the prosody and emotions of speech in cross-lingual TTS.

Figure 2: Overview of the proposed model.

1 https://github.com/open-dsl-dict/ipa-dict-dsl

Table 1: Details of training datasets.

Dataset  Speaker    Source          Duration (Used)
d1       CN male    Internal        5 hours
         EN female  LJSpeech [22]   5 hours
d2       CN female  Databaker [23]  1 hour
         EN male    cmu bdl [24]    1 hour
d3       CN male    Internal        1 hour
         CN female  Internal        1 hour
         EN male    cmu rms [24]    1 hour
         EN female  cmu clb [24]    1 hour

We conduct all the experiments in two languages, Mandarin (CN) and English (EN).
Table 2: Naturalness and speaker similarity of speaker d1-CN-M in different training datasets.

MOS          Dataset   CN         CN-EN      EN
Naturalness  d1        4.46±0.15  3.61±0.30  3.37±0.34
             d1+d2     4.49±0.15  3.88±0.21  3.64±0.27
             d1+d2+d3  4.51±0.15  4.00±0.19  3.84±0.21
Similarity   d1        4.07±0.36  3.20±0.42  1.76±0.35
             d1+d2     4.05±0.37  3.52±0.36  2.97±0.36
             d1+d2+d3  4.05±0.36  3.61±0.35  3.21±0.36

Table 3: Naturalness and speaker similarity of different kinds of utterances for all models.

Target Speaker  Model                CN Nat.    CN Sim.    CN-EN Nat.  CN-EN Sim.  EN Nat.    EN Sim.
d1-CN-M         Tacotron-based [25]  2.74±0.33  3.10±0.42  2.57±0.36   2.90±0.40   2.51±0.35  2.53±0.37
                FastSpeech-LDP [8]   4.41±0.19  4.16±0.33  3.63±0.29   3.34±0.39   3.51±0.34  2.23±0.42
                FastSpeech-IPA       4.44±0.18  4.17±0.34  3.75±0.28   3.57±0.38   3.51±0.32  2.20±0.41
                Proposed             4.46±0.18  4.15±0.34  4.02±0.24   3.69±0.36   3.97±0.23  3.40±0.37
d1-EN-F         Tacotron-based [25]  2.77±0.35  2.95±0.43  2.85±0.37   3.06±0.44   2.93±0.39  3.11±0.46
                FastSpeech-LDP [8]   3.11±0.33  2.13±0.42  3.53±0.30   3.22±0.45   3.88±0.30  3.66±0.43
                FastSpeech-IPA       3.28±0.33  2.72±0.44  3.67±0.29   3.38±0.42   3.92±0.28  3.72±0.42
                Proposed             3.92±0.22  3.26±0.42  3.95±0.20   3.54±0.41   4.15±0.23  3.80±0.41

3.5. Comparing with baseline models

Table 3 presents the results of the speech naturalness and speaker similarity MOS tests for the proposed model and all the baseline models of Section 3.2. Generally speaking, the non-autoregressive models all outperform the Tacotron-based baseline model. This result shows the effectiveness of non-autoregressive architectures in cross-lingual TTS.

Table 4: Overall naturalness and speaker similarity of the ablation studies. Energy and pitch denote the energy predictor and pitch predictor, respectively.
Model     d1-CN-M Nat.  d1-CN-M Sim.  d1-EN-F Nat.  d1-EN-F Sim.
Proposed  4.19±0.24     3.55±0.38     4.03±0.23     3.32±0.43
+ GRL     4.19±0.24     3.54±0.38     4.04±0.24     3.33±0.44
- energy  4.10±0.28     3.40±0.42     4.02±0.23     3.29±0.44
- pitch   3.83±0.38     3.28±0.46     3.54±0.33     3.06±0.44

2 https://hyzhan.github.io/NAC-TTS/

References
[1] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan et al., "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions," in Proc. ICASSP, 2018, pp. 4779-4783.
[2] Y. Ren, Y. Ruan, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, "FastSpeech: Fast, robust and controllable text to speech," in Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 3171-3180.
[3] Y. Ren, C. Hu, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, "FastSpeech 2: Fast and high-quality end-to-end text to speech," in International Conference on Learning Representations (ICLR), 2021.
[4] A. Łańcucki, "FastPitch: Parallel text-to-speech with pitch prediction," in Proc. ICASSP, 2021, pp. 6588-6592.
[5] P. Do, M. Coler, J. Dijkstra, and E. Klabbers, "A systematic review and analysis of multilingual data strategies in text-to-speech for low-resource languages," in Proc. Interspeech, 2021, pp. 16-20.
[6] Y. Zhang, R. J. Weiss, H. Zen, Y. Wu, Z. Chen, R. Skerry-Ryan, Y. Jia, A. Rosenberg, and B. Ramabhadran, "Learning to speak fluently in a foreign language: Multilingual speech synthesis and cross-language voice cloning," in Proc. Interspeech, 2019, pp. 2080-2084.
[7] D. Xin, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, "Cross-lingual text-to-speech synthesis via domain adaptation and perceptual similarity regression in speaker space," in Proc. Interspeech, 2020, pp. 2947-2951.
[8] Z. Shang, Z. Huang, H. Zhang, P. Zhang, and Y. Yan, "Incorporating cross-speaker style transfer for multi-language text-to-speech," in Proc. Interspeech, 2021, pp. 1619-1623.
[9] Z. Liu and B. Mak, "Multi-lingual multi-speaker text-to-speech synthesis for voice cloning with online speaker enrollment," in Proc. Interspeech, 2020, pp. 2932-2936.
[10] Y.-J. Chen, T. Tu, C.-c. Yeh, and H.-Y. Lee, "End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning," in Proc. Interspeech, 2019, pp. 2075-2079.
[11] Y. Cao, S. Liu, X. Wu, S. Kang, P. Liu, Z. Wu, X. Liu, D. Su, D. Yu, and H. Meng, "Code-switched speech synthesis using bilingual phonetic posteriorgram with only monolingual corpora," in Proc. ICASSP, 2020, pp. 7619-7623.
[12] S. Zhao, T. H. Nguyen, H. Wang, and B. Ma, "Towards natural bilingual and code-switched speech synthesis based on mix of monolingual recordings and cross-lingual voice conversion," in Proc. Interspeech, 2020, pp. 2927-2931.
[13] S. Bansal, A. Mukherjee, S. Satpal, and R. Mehta, "On improving code mixed speech synthesis with mixlingual grapheme-to-phoneme model," in Proc. Interspeech, 2020, pp. 2957-2961.
[14] M. Chen, M. Chen, S. Liang, J. Ma, L. Chen, S. Wang, and J. Xiao, "Cross-lingual, multi-speaker text-to-speech synthesis using neural speaker embedding," in Proc. Interspeech, 2019, pp. 2105-2109.
[15] R. Fu, J. Tao, Z. Wen, J. Yi, C. Qiang, and T. Wang, "Dynamic soft windowing and language dependent style token for code-switching end-to-end speech synthesis," in Proc. Interspeech, 2020, pp. 2937-2941.
[16] M. Staib, T. H. Teh, A. Torresquintero, D. S. R. Mohan, L. Foglianti, R. Lenain, and J. Gao, "Phonological features for 0-shot multilingual speech synthesis," in Proc. Interspeech, 2020, pp. 2942-2946.
[17] G. Maniati, N. Ellinas, K. Markopoulos, G. Vamvoukakis, J. S. Sung, H. Park, A. Chalamandaris, and P. Tsiakoulis, "Cross-lingual low resource speaker adaptation using phonological features," in Proc. Interspeech, 2021, pp. 1594-1598.
[18] J. Kim, S. Kim, J. Kong, and S. Yoon, "Glow-TTS: A generative flow for text-to-speech via monotonic alignment search," in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 8067-8077.
[19] T. Raitio, R. Rasipuram, and D. Castellani, "Controllable neural text-to-speech synthesis using intuitive prosodic features," in Proc. Interspeech, 2020, pp. 4432-4436.
[20] C. Gong, L. Wang, Z. Ling, S. Guo, J. Zhang, and J. Dang, "Improving naturalness and controllability of sequence-to-sequence speech synthesis by learning local prosody representations," in Proc. ICASSP, 2021, pp. 5724-5728.
[21] M. Morise, F. Yokomori, and K. Ozawa, "WORLD: A vocoder-based high-quality speech synthesis system for real-time applications," IEICE Transactions on Information and Systems, vol. 99, no. 7, pp. 1877-1884, 2016.
[22] K. Ito, "The LJ Speech dataset," https://keithito.com/LJ-Speech-Dataset/, 2017.
[23] D. T. Co., "The Biaobei dataset," https://www.data-baker.com/open source.html.
[24] J. Kominek and A. W. Black, "The CMU Arctic speech databases," in Fifth ISCA Workshop on Speech Synthesis, 2004.
[25] H. Zhan, H. Zhang, W. Ou, and Y. Lin, "Improve cross-lingual text-to-speech synthesis on monolingual corpora with pitch contour information," in Proc. Interspeech, 2021, pp. 1599-1603.
[26] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," in Advances in Neural Information Processing Systems, vol. 33, Curran Associates, Inc., 2020, pp. 17022-17033.
[27] J. Zaïdi, H. Seuté, B. van Niekerk, and M.-A. Carbonneau, "Daft-Exprt: Robust prosody transfer across speakers for expressive speech synthesis," arXiv preprint arXiv:2108.02271, 2021.
[ "https://github.com/open-dsl-dict/ipa-dict-dsl" ]
[ "Low Redshift Intergalactic Absorption Lines in the Spectrum of HE 0226-4110 1" ]
[ "N Lehner \nDepartment of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI\n\nDepartment of Physics\nUniversity of Notre Dame\n225 Nieuwland Science Hall\n\nNotre Dame\n46556IN\n", "B D Savage \nDepartment of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI\n", "B P Wakker \nDepartment of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI\n", "K R Sembach \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMD\n", "T M Tripp \nDepartment of Astronomy\nUniversity of Massachusetts\n01003AmherstMA\n" ]
[ "Department of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI", "Department of Physics\nUniversity of Notre Dame\n225 Nieuwland Science Hall", "Notre Dame\n46556IN", "Department of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI", "Department of Astronomy\nUniversity of Wisconsin\n475 North Charter Street53706MadisonWI", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMD", "Department of Astronomy\nUniversity of Massachusetts\n01003AmherstMA" ]
[]
We present an analysis of the Far Ultraviolet Spectroscopic Explorer (FUSE) and the Space Telescope Imaging Spectrograph (STIS E140M) spectra of HE 0226-4110 (z_em = 0.495) that have a nearly continuous wavelength coverage from 910 to 1730 Å. We detect 56 Lyman absorbers and 5 O VI absorbers. The number of intervening O VI systems per unit redshift with W_λ ≳ 50 mÅ is dN(O VI)/dz ≈ 11. For 4 of the 5 O VI systems other ions (such as C III, C IV, O III, O IV) are detected. The O VI systems unambiguously trace hot gas only in one case. For the 4 other O VI systems, photoionization and collisional ionization models are viable options to explain the observed column densities of the O VI and the other ions. If photoionization applies for those systems, the broadening of the metal lines must be mostly non-thermal or several components may be hidden in the noise, but the H I broadening appears to be mostly thermal. If the O VI systems are mostly photoionized, only a fraction of the observed O VI will contribute to the baryonic density of the warm-hot ionized medium (WHIM) along this line of sight. Combining our results with previous ones, we show that there is a general increase of N(O VI) with increasing b(O VI). Cooling flow models can reproduce the N-b distribution but fail to reproduce the observed ionic ratios. A comparison of the number of O I, O II, O III, O IV, and O VI systems per unit redshift shows that the low-z IGM is more highly ionized than weakly ionized. We confirm that photoionized O VI systems show a decreasing ionization parameter with increasing H I column density. O VI absorbers with collisional ionization/photoionization degeneracy follow this relation, possibly suggesting that they are principally photoionized. We find that the photoionized O VI systems in the low redshift IGM have a median abundance of 0.3 solar.
We do not find additional Ne VIII systems other than the one found by Savage et al., although our sensitivity should have allowed the detection of Ne VIII in O VI systems at T ∼ (0.6-1.3) × 10^6 K (if collisional ionization equilibrium applies). Since the bulk of the WHIM is believed to be at temperatures T > 10^6 K, the hot part of the WHIM remains to be discovered with FUV-EUV metal-line transitions.
10.1086/500932
[ "https://arxiv.org/pdf/astro-ph/0602085v1.pdf" ]
16,880,371
astro-ph/0602085
96c3f7a67120bf526a9693f4da28bc26bb06a72a
Low Redshift Intergalactic Absorption Lines in the Spectrum of HE 0226-4110 1

3 Feb 2006

N. Lehner (Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706; Department of Physics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556)
B. D. Savage (Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706)
B. P. Wakker (Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706)
K. R. Sembach (Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218)
T. M. Tripp (Department of Astronomy, University of Massachusetts, Amherst, MA 01003)

Accepted for publication in the ApJS

Subject headings: cosmology: observations - intergalactic medium - quasars: absorption lines - quasars: individual (HE 0226-4110)

We present an analysis of the Far Ultraviolet Spectroscopic Explorer (FUSE) and the Space Telescope Imaging Spectrograph (STIS E140M) spectra of HE 0226-4110 (z_em = 0.495) that have a nearly continuous wavelength coverage from 910 to 1730 Å. We detect 56 Lyman absorbers and 5 O VI absorbers. The number of intervening O VI systems per unit redshift with W_λ ≳ 50 mÅ is dN(O VI)/dz ≈ 11. For 4 of the 5 O VI systems other ions (such as C III, C IV, O III, O IV) are detected. The O VI systems unambiguously trace hot gas only in one case. For the 4 other O VI systems, photoionization and collisional ionization models are viable options to explain the observed column densities of the O VI and the other ions. If photoionization applies for those systems, the broadening of the metal lines must be mostly non-thermal or several components may be hidden in the noise, but the H I broadening appears to be mostly thermal. If the O VI systems are mostly photoionized, only a fraction of the observed O VI will contribute to the baryonic density of the warm-hot ionized medium (WHIM) along this line of sight.
Combining our results with previous ones, we show that there is a general increase of N(O VI) with increasing b(O VI). Cooling flow models can reproduce the N-b distribution but fail to reproduce the observed ionic ratios. A comparison of the number of O I, O II, O III, O IV, and O VI systems per unit redshift shows that the low-z IGM is more highly ionized than weakly ionized. We confirm that photoionized O VI systems show a decreasing ionization parameter with increasing H I column density. O VI absorbers with collisional ionization/photoionization degeneracy follow this relation, possibly suggesting that they are principally photoionized. We find that the photoionized O VI systems in the low redshift IGM have a median abundance of 0.3 solar. We do not find additional Ne VIII systems other than the one found by Savage et al., although our sensitivity should have allowed the detection of Ne VIII in O VI systems at T ∼ (0.6-1.3) × 10^6 K (if collisional ionization equilibrium applies). Since the bulk of the WHIM is believed to be at temperatures T > 10^6 K, the hot part of the WHIM remains to be discovered with FUV-EUV metal-line transitions.

1. Introduction

From the earliest time to the present-day epoch, most of the baryons are found in the intergalactic medium (IGM). The Lyα forest is a signature of the IGM that is imprinted on the spectra of QSOs and allows measurements of the evolution of the universe over a wide range of redshift. At z > 1.5, H I in the IGM can be observed with ground-based 8-10 m telescopes. In the low redshift universe (z ≤ 1.5), the Lyα transition and most of the metal resonance lines fall in the ultraviolet (UV) wavelength range, requiring challenging space-based observations of QSOs.
Most Lyα absorbers at z < 1.5 do not show detectable metal absorption lines in currently available spectra, but those that do show associated metals provide further information about the abundances, kinematics, and ionization corrections in the low-redshift IGM and halos of nearby galaxies. Metal-line systems also give indications about the metallicity evolution of the IGM and can contain large reservoirs of baryons at low and high redshift (e.g., Tripp, Savage, & Jenkins 2000;Carswell et al. 2002;Bergeron & Herbert-Fort 2005). Detecting metal-line systems in the IGM requires not only high signal-to-noise but also high spectral resolution FUV spectra and simple lines of sight to avoid blending between the different galactic and intergalactic absorbers. In the last few years, a few high quality spectra of QSOs covering the wavelength range from the Lyman limit to about 1730Å have been obtained with the Space Telescope Imaging Spectrograph (STIS) onboard the Hubble Space Telescope (HST) and Far Ultraviolet Spectroscopic Explorer (FUSE), thereby allowing sensitive measurements of the metal-line systems in the tenuous IGM (e.g., Tripp et al. 2001;Savage et al. 2002;Prochaska et al. 2004;Richter et al. 2004;Sembach et al. 2004;Williger et al. 2005). The cold dark matter cosmological simulations provide a self-consistent explanation of the Lyα absorbers seen in the QSO spectra (e.g., Davé et al. 1999;Davé & Tripp 2001). At high redshift, they predict that most of the Lyα absorbers consist of cool (less than a few 10 4 K) photoionized gas and virtually all baryons are observed in this gas-phase at z > 2 (e.g., Weinberg et al. 1997;Rauch et al. 1997). As the universe expands, the initial density perturbations collapse, producing shock-heated gas at temperatures of 10 5 -10 7 K (Cen & Ostriker 1999;Davé et al. 1999;Fang et al. 2002). 
At the present epoch (z ≲ 1), the hydrodynamical simulations predict that ∼30-50% of the normal baryonic matter of the universe lies in a tenuous warm-hot intergalactic medium (WHIM), and another ∼30% of the baryons lies in a cooler, photoionized, tenuous intergalactic gas; observations appear to support the prediction regarding low-z photoionized gas (e.g., Penton et al. 2004). At temperatures of 10^5-10^7 K, the most abundant elements C, N, O, and Ne are highly ionized. Detecting the hot component of the WHIM at T > 10^6 K is possible through measurements of the X-ray O VII Kα and O VIII absorption lines at 21.870 and 18.973+18.605 Å, respectively. These have been detected at low redshift with Chandra along one line of sight (Nicastro et al. 2005). However, apart from detections at z = 0, most of the current X-ray observations of O VII and O VIII reported in the IGM remain marginal, or the claims are contradicted by higher signal-to-noise spectra (Rasmussen et al. 2003). The EUV lines of Ne VIII λλ770, 780, tracing gas at 7 × 10^5 K, are redshifted into the ultraviolet (UV) wavelength region for z > 0.18, and Savage et al. (2005) reported the first detection of the Ne VIII doublet in the spectrum of the bright QSO HE 0226-4110. The cooler part (T ≲ 5 × 10^5 K) of the WHIM is currently better observed via the O VI doublet at 1031.926 and 1037.617 Å, which can be efficiently observed by combining STIS and FUSE observations (e.g., Tripp & Savage 2000; Tripp, Savage, & Jenkins 2000; Danforth & Shull 2005). However, O VI absorbers can arise in warm collisionally ionized gas (T ∼ 3 × 10^5 K) as well as in cooler, photoionized, low-density gas (e.g., Tripp et al. 2001; Savage et al. 2002; Prochaska et al. 2004). A study of the origin(s) of ionization of the O VI is therefore important for estimates of the density of baryons in the WHIM and the IGM in general.
In this paper, we report the complete FUSE and STIS observations and analyses of the IGM absorption lines in the spectrum of the z = 0.495 quasar HE 0226-4110. From these observations we derive accurate cloud parameters (redshift, column density, Doppler width) for clouds in the Lyman forest and any associated metals. While all the IGM measurements for HE 0226-4110 are reported here, the metal-free Lyα forest observations will be discussed in more detail in a future paper where we will combine the present results with results from other sight lines in order to derive the physical and statistical properties of the Lyα forest at low redshift. We concentrate in this work on defining the origin(s) of the observed O VI systems and the implications for the IGM by combining our observations with recent analyses of other QSO spectra at low redshift (in particular, Tripp et al. 2001;Prochaska et al. 2004;Richter et al. 2004;Sembach et al. 2004;Williger et al. 2005). The physical properties and ionization conditions in metal-line absorbers can be most effectively studied by obtaining observations of several species in different ionization stages. Using the same species in several ionizing stages provides a direct way to constrain simultaneously metallicity, ionization, and physical conditions. For example, for oxygen, EUV measurements of O II, O III, O IV, O V can be combined with FUV measurements of O I and O VI. The HE 0226-4110 line of sight is particularly favorable for a search for EUV and FUV absorptions because the H 2 absorption in our Galaxy is very weak, producing a relatively clean FUV spectrum for IGM studies. In particular, we systematically search for and report the measurements of C III, O III, O IV, O VI, and Ne VIII associated with each Lyα absorber in order to better characterize the physical states of the Lyα forest at low redshift. The organization of this paper is as follows. 
After describing the observations and data reduction of the HE 0226-4110 spectrum in §2, we present the line identification and analysis to estimate the redshift, column density (N), and Doppler parameter (b) for the IGM clouds in §3. In §4, we determine the physical properties and abundances of the metal-line absorbers observed toward HE 0226-4110. The implications of our results combined with recent observations of the IGM are discussed in §5. A summary of the main results is presented in §6.

Observations and Data Reduction

We have obtained a high-quality FUV spectrum of HE 0226-4110 [(l, b) = (253.94°, −65.78°); z_em = 0.495] covering nearly continuously the wavelength range from 916 to 1730 Å. To show the broad ultraviolet continuum shape of HE 0226-4110 as well as the locations and shapes of the QSO emission lines, we display in Figure 1 the FUSE and STIS data with a spectral bin size of 0.1 Å. This figure shows the overall quality of the spectrum and the spectral regions covered by the various FUSE channels and the STIS spectrum. Several emission features can be discerned, which are associated with gas near the QSO (see Ganguly et al. 2005, in preparation). The feature near 1537 Å is mostly Lyβ emission, with possibly some O VI λ1031 and O VI λ1037 emission mixed in. Around 1454 Å, weak Lyγ emission is visible. Intrinsic Ne VIII emission may be present in the 1150-1160 Å region. The feature centered on 1049 Å is most likely O III λ702.332 emission. A corresponding, slightly weaker O III λ832.927 emission feature can be seen near 1240 Å. Figure 1 also shows that strong lines are sparse in this sight line; the weakness of the Galactic H 2 lines in particular makes this a valuable sight line for the study of extragalactic EUV/FUV absorption lines. We note that occasional hot pixels in the STIS spectrum are clearly evident in this figure, and the possible contamination of absorption profiles by warm/hot pixels must be borne in mind.
In addition, residual geocoronal Lyα, Lyβ, and Lyγ emission is visible. All wavelengths and velocities are given in the heliocentric reference frame in this paper. Toward HE 0226-4110, the Local Standard of Rest (LSR) and heliocentric reference frames are related by v_LSR = v_helio − 14.3 km s^-1. Savage et al. (2005) described in detail the data used in this paper and its processing, and we give below only a brief summary.

HST/STIS Observations

The STIS observations of HE 0226-4110 (GO program 9184, PI: Tripp) were obtained with the E140M intermediate-resolution echelle grating between 2002 December 26 and 2003 January 1, with a total integration time of 43.5 ksec (see Table 1). The entrance slit was set to 0.2″ × 0.06″. The spectral resolution is 7 km s^-1 with a detector pixel size of 3.5 km s^-1. The S/N per 7 km s^-1 resolution element of the spectrum of HE 0226-4110 is 11, 11, and 8 at 1250, 1500, and 1600 Å, respectively. The S/N is substantially lower for λ ≲ 1180 Å and λ ≳ 1650 Å. The STIS data reductions provide an excellent wavelength calibration; the velocity uncertainty is ∼1 km s^-1, with occasional errors as large as 3 km s^-1. In order to align the FUSE spectra, which have an uncertain absolute wavelength calibration (see §2.2), with the STIS spectrum, we systematically measured the velocities of the interstellar lines that are free of blends. Those lines are: S II λλ1250, 1253, 1259, N I λλ1199, 1200, 1201, O I λ1302, Si II λλ1190, 1193, 1304, 1526, Fe II λ1608, Ni II λ1370, and Al II λ1670. Using these species, we find v_helio(ISM) = 10.4 ± 1.4 km s^-1 toward HE 0226-4110. We use this velocity to establish the zero-point wavelength calibration of the FUSE observations.

FUSE Observations

The FUSE observations of HE 0226-4110 were obtained between 2000 and 2001 from the science team O VI project (Wakker et al. 2003; Savage et al. 2003; Sembach et al.
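The zero-point alignment described above amounts to measuring unblended ISM line centroids in each exposure and shifting them onto the STIS-defined velocity. A minimal sketch of that bookkeeping is below; the centroid values are invented for illustration, not measured values from this paper:

```python
import numpy as np

# Hypothetical ISM line centroids (km/s, heliocentric) measured in one FUSE exposure
v_ism = np.array([16.2, 14.8, 17.5, 15.1, 16.9])

v_ref = 10.4                  # ISM velocity fixed by the STIS spectrum (km/s)
shift = v_ref - v_ism.mean()  # velocity offset to apply to this exposure
rms = v_ism.std(ddof=1)       # scatter of the centroids -> zero-point uncertainty
```

Each exposure gets its own shift before coaddition, and the rms scatter of the (shifted) centroids across exposures is what sets the quoted zero-point uncertainty.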
2003), and between 2002 and 2003 as part of the FUSE GO program D027 (PI: Savage) (see Table 2). The total exposure time of these programs is 194 ks in segments 1A and 1B, and 191 ks in segments 2A and 2B. The night data typically account for 65% of the total exposure time. The measurements were obtained in time-tagged mode and cover the wavelengths between 916 and 1188 Å with a spectral resolution of ∼20 km s^-1. In the wavelength range 916 to 987 Å only SiC 2A data are available because of channel alignment problems. Over the wavelength region from 987 to 1182 Å we used LiF 1A, and from 1087 to 1182 Å LiF 2A was used. The lower S/N observations in LiF 2B and LiF 1B were used to check for fixed-pattern noise in the LiF 1A and LiF 2A observations, respectively. To reduce the effects of detector fixed-pattern noise, some of the exposures were acquired using focal plane split motions (see Table 2), wherein subsequent exposures are placed at different locations on the detector. The spectra were processed with CALFUSE v2.1.6 or v2.4.0 (see Table 2). The most difficult task was to bring the different extracted exposures into a common heliocentric reference frame before coadding them. To do so, we fitted the ISM lines in each segment of each of the 8 exposures and shifted them to the heliocentric frame measured with the STIS spectrum. Typically, in LiF 1A we use Si II λ1020, Ar I λλ1048, 1066, and Fe II λ1063; in LiF 1B/2A, Fe II λλ1096, 1112, 1121, 1125, 1142, 1143, 1144; in SiC 2B, Ar I λλ1048, 1066 and Fe II λλ1063, 1096; and in SiC 2A, O I λλ921, 924, 925, 929, 930, 936, 948, 950, 971, 976. We forced the ISM lines in each exposure of the FUSE band to have v_helio = 10.4 km s^-1. The rms of the measured velocities of the ISM lines is typically 4-6 km s^-1 for the short exposures and 3-4 km s^-1 for the two longer ones. We therefore estimate that the velocity zero-point uncertainty for the FUSE data is ∼5 km s^-1 (1σ).
The relative velocity uncertainty is also ∼5 km s^-1, although it may be larger near the edge of the detector. The oversampled FUSE spectra were binned to a bin size of 4 pixels (0.027 Å), providing about three samples per 20 km s^-1 resolution element. Data taken during orbital day and orbital night were combined, except in cases where an airglow line contaminated a spectral region of interest; then only night data were employed. The S/N per 20 km s^-1 resolution element is typically 11 in SiC 2A (λ < 987 Å), and 18 in LiF 1A and LiF 2A (λ > 987 Å).

Analysis

Line Identification, Continuum, and Equivalent Width

We show in Figs. 2 and 3 the FUSE and STIS spectra of HE 0226-4110, respectively, where about 250 absorption features are identified. All the ISM and IGM absorption lines are labeled in Figs. 2 and 3. We first identified all the absorption features associated with interstellar resonance and excited UV absorption lines using the compilation of atomic parameters by Morton (2003) and the H 2 molecular line list of Abgrall et al. (1993a,b). The EUV atomic parameters were obtained from Verner, Verner, & Ferland (1996). Because HE 0226-4110 is at high latitude (b = −65.78°) and in a favorable Galactic direction, the molecular absorption from H 2 remains very weak, which greatly reduces the problem of blending with IGM absorptions. We identified every H 2 line in the HE 0226-4110 FUSE spectrum and modeled the H 2 lines by measuring the equivalent width in each J = 0-4 rotational level. We found that the total H 2 column density is log N(H 2) = 14.54, corresponding to a very small molecular fraction of f(H 2) = 3.7 × 10^-6. The atomic-ionic ISM gas component consists principally of two main clouds, a low-velocity component at v_helio = 10 km s^-1 and a high-velocity component (HVC) at 190 km s^-1 (see Fox et al. 2005; B. D. Savage et al. 2006, in prep.). There is also a weaker ISM absorber at about −20 km s^-1 (B. D. Savage et al. 2006, in prep.).
The HVC is detected in the high ions (O VI, Si IV, and C IV) and only in the strongest transitions of the low-ions (C II, C III, and Si II λ1260) (see Fox et al. 2005, and see Figs. 2 and 3). Each of these velocity components has to be considered carefully for possible blending with IGM absorption. An example of such blending occurs between Lyα at z = 0.08735 and Ni II λ1317.217. In the footnote of Table 3, we highlight any blending problems between ISM and IGM lines, and between IGM lines at different redshifts. After identifying all the intervening Galactic absorption lines, we searched for Lyα absorption at z > 0. For each Lyα absorption line, we checked for additional Lyman-series and associated metal lines. Since this line of sight has so little H 2 , it provides an unique opportunity to search for weak IGM metal lines. We systematically searched for the FUV lines of C III λ977 and the O VI λλ1031, 1037 doublet, and when the redshift allows, for the EUV lines O III λ832, O IV λ787, and the Ne VIII λλ770,780 doublet. If one of these lines was lost in a terrestrial airglow emission line, we considered the night data only. Since the wavelength coverage is not complete to the redshift of the QSO and because shock heated gas does not have to be associated with a narrow H I system, we always made sure that none of the Lyα systems was an O VI or Ne VIII system by using the atomic properties of these doublets. We found one possible O VI system at z = 0.42663 not associated with H I Lyβ (Lyα being beyond detection at this redshift). Note that we identify in Figs. 2 and 3 the associated system to the QSO at z = 0.49253, but we do not report any measurements (see Ganguly et al. 2005 for an analysis of this system, which is associated with gas very close to the AGN). We find a total of 59 systems (excluding the associated system at z = 0.49253) toward HE 0226-4110. 
Two of the systems (z = 0.20701, 0.27155) clearly have multi-component H I absorption (see §3.2), and one possible system is detected only in O VI at z = 0.42663 (see also §3.3). All our detection limits reported in this work (except where otherwise stated) are 3σ. The 3σ limit varies with the wavelength position in the spectrum because the S/N varies with wavelength. In Table 3, we report our measurements of the line strength (equivalent width), line width (Doppler parameter), and column density for all the detected IGM species, or the 3σ limits on the equivalent width and column density for the non-detected species. All our measurements are in the rest-frame. The equivalent widths and uncertainties were measured following the procedures of Sembach & Savage (1992). The adopted uncertainties for the derived equivalent widths, column densities, and Doppler parameters (see §3.2) are ±1σ. These errors include the effects of statistical noise, fixed-pattern noise for the FUSE data when two or more channels were present, the systematic uncertainties of the continuum placement, and the velocity range over which the absorption lines were integrated. The continuum levels were obtained by fitting low-order (< 4) Legendre polynomials within 500 to 1000 km s^-1 of each absorption line. For weak lines, several continuum placements were tested to be certain that the continuum error was robust. For the FUSE data we considered data from multiple channels whenever possible to assess the fixed-pattern noise. An obvious strong fixed-pattern detector feature is present in the LiF 1A channel at 1043.45 Å (see Fig. 2).

Redshift, Column Density, and Doppler Parameter

To measure the centroids of the absorption lines, the column densities, and the Doppler parameters, we systematically used two methods: the apparent optical depth method (AOD; see Savage & Sembach 1991) and a profile fitting method.
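The continuum-fit-plus-equivalent-width measurement described above can be sketched on a synthetic line. The wavelengths, line depth, and fit windows below are invented for illustration; only the ingredients (a low-order Legendre continuum and W = ∫(1 − I_obs/I_c) dλ) come from the text:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic spectrum: gently sloping continuum with one weak Gaussian absorption line
wave = np.linspace(1500.0, 1510.0, 1000)                        # Angstrom
cont_true = 1.0 + 0.02 * (wave - 1505.0)
profile = 1.0 - 0.35 * np.exp(-0.5 * ((wave - 1505.0) / 0.08) ** 2)
flux = cont_true * profile + rng.normal(0.0, 0.01, wave.size)

# Fit a low-order (< 4) Legendre polynomial to the line-free regions on both sides
line_free = np.abs(wave - 1505.0) > 0.5
series = np.polynomial.Legendre.fit(wave[line_free], flux[line_free], 3)
continuum = series(wave)

# Observed-frame equivalent width: W = sum of (1 - I_obs/I_c) dlambda over the line
in_line = np.abs(wave - 1505.0) <= 0.5
dlam = wave[1] - wave[0]
W_obs = np.sum(1.0 - flux[in_line] / continuum[in_line]) * dlam
W_rest = W_obs / (1.0 + 0.2)    # rest-frame value for a line at, e.g., z = 0.2
```

For this input the true equivalent width is 0.35 × 0.08 × √(2π) ≈ 70 mÅ, and the recovered W_obs agrees to within the noise.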
To derive the column density and measure the redshift, we used the atomic parameters for the FUV and EUV lines listed in Morton (2003) and Verner, Verner, & Ferland (1996), respectively. Note that the system redshifts in this paper refer to the H I centroids, except for the O VI system at z = 0.42660. In the AOD method, the absorption profiles are converted into apparent optical depth (AOD) per unit velocity, τ_a(v) = ln[I_c/I_obs(v)], where I_obs and I_c are the intensity with and without the absorption, respectively. The AOD, τ_a(v), is related to the apparent column density per unit velocity, N_a(v), through the relation N_a(v) = 3.768 × 10^14 τ_a(v)/(f λ(Å)) cm^-2 (km s^-1)^-1. The total column density is obtained by integrating the profile, N = ∫_{−v}^{+v} N_a(v) dv. We also computed the average line centroids and the velocity dispersions through the first and second moments of the AOD, v̄ = ∫_{−v}^{+v} v τ_a(v) dv / ∫_{−v}^{+v} τ_a(v) dv km s^-1, and b = [2 ∫_{−v}^{+v} (v − v̄)^2 τ_a(v) dv / ∫_{−v}^{+v} τ_a(v) dv]^{1/2} km s^-1, respectively. Note that the equivalent widths were measured over the same velocity range [−v, +v] indicated in column 8 of Table 3. We also fitted the absorption lines with the Voigt component software of Fitzpatrick & Spitzer (1997). In the FUSE band, we assume a Gaussian instrumental spread function with FWHM = 20 km s^-1, while in the STIS band the STIS instrumental spread function was adopted (Proffitt et al. 2002). Note that to constrain further the fit to the H I absorption, we systematically (except as otherwise stated in Table 3) use non-detected low-order Lyman-series lines (all the lines used in the fit are listed in Table 3 in the line corresponding to the fit result). For this reason, we favored the b-values and column densities derived from profile fitting when available. We note, however, that the results obtained from the moments of the optical depth and from profile fitting are in agreement to within 1σ in most cases.
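The AOD relations above translate directly into code. Below is a minimal sketch applied to a synthetic, unsaturated Gaussian O VI λ1031 line; the profile parameters (b = 30 km/s, log N = 13.5) are invented for the demonstration, while the numerical constant and f-value are the standard ones:

```python
import numpy as np

def aod_measure(v, flux_norm, f_osc, wl_A):
    """AOD integration (Savage & Sembach 1991): returns (N [cm^-2],
    centroid [km/s], b [km/s]) from a continuum-normalized profile."""
    tau = np.log(1.0 / flux_norm)             # tau_a(v) = ln[I_c / I_obs(v)]
    Na = 3.768e14 * tau / (f_osc * wl_A)      # N_a(v) in cm^-2 (km/s)^-1
    dv = v[1] - v[0]                          # uniform velocity grid assumed
    N = Na.sum() * dv                         # N = integral of N_a(v) dv
    vbar = (v * tau).sum() / tau.sum()        # first moment of tau_a
    b = np.sqrt(2.0 * ((v - vbar) ** 2 * tau).sum() / tau.sum())  # second moment
    return N, vbar, b

# Synthetic unsaturated Gaussian O VI 1031.926 line (f = 0.1325)
v = np.linspace(-150.0, 150.0, 601)
b_in, N_in = 30.0, 10 ** 13.5
tau0 = N_in * 0.1325 * 1031.926 / (3.768e14 * b_in * np.sqrt(np.pi))
flux = np.exp(-tau0 * np.exp(-((v / b_in) ** 2)))

N, vbar, b = aod_measure(v, flux, 0.1325, 1031.926)
# For this optically thin line the input N, centroid, and b are recovered
```

On saturated profiles the same integration only yields a lower limit to N, which is why the text favors profile fitting when several Lyman-series lines are available.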
We show the normalized spectra (with the fit to the absorption lines when available) against the rest-frame velocity of the H I absorbers in Figs. 4 (systems detected only in Lyα), 5 (systems detected in Lyα and Lyβ), 6 (systems detected in Lyα, Lyβ, and Lyγ), and 7 (system detected in Lyα, Lyβ, Lyγ, and Lyδ), and the metal systems in Figs. 8, 9, 10, 11, and 12. See also Figs. 2 and 4 in Savage et al. (2005) for the metal system at z = 0.20701. We find several broad H I absorption profiles with b > 50 km s^-1. For most of these systems a two-component fit does not significantly improve the reduced χ². The system at z = 0.38420 appears to have an asymmetric profile in both the Lyα and Lyβ lines, and for this system a two-component fit looks better by eye but not statistically (see dotted lines in Fig. 5). In the note of Table 3 we give the results of the fit for this system: a combination of broad (b = 75 km s^-1) and narrow (26 km s^-1) lines seems to adequately fit the H I profiles. Hence, in any case, a broad component is present. However, only the systems at z = 0.20701 and 0.27155 are clearly multiple components blended together. We note that several systems are separated by less than a few hundred km s^-1. We will discuss these systems further in our future paper on H I in the low-redshift IGM (N. Lehner et al. 2006, in prep.). For all the non-detections we list in Table 3 the 3σ upper limits to the rest-frame equivalent width and to the column density, assuming the absorption lines lie on the linear part of the curve of growth. We adopted the velocity range ∆v = [v_1, v_2] either from other observed metals or from the Lyman lines if no metals were detected (see Table 3 for more details); except that if ∆v(H I) > 100 km s^-1, we set ∆v = [−50, 50] km s^-1. The (z, N, b) values and the O VI column densities are from profile fitting.
For the other ions, the column density is from the AOD (except for the columns of the system at z = 0.20701, which resulted from profile fitting; see Savage et al. 2005).

Possible Misidentification

While we have done our best to identify properly all the absorption features in the spectrum of HE 0226-4110, misidentifications are possible because we do not have access to the full redshift path to HE 0226-4110. The highest redshift at which Lyα is detectable is z = 0.423 (see §3.4). Thus, Lyα between 0.423 and 0.495 is not detectable and could produce Lyβ between 1458.6 and 1533.4 Å, i.e., between redshift z = 0.199 and 0.261 (Lyα cannot be Lyγ because that would imply z(Lyα) ∼ 0.54). The following systems are thus potentially affected: z = 0.19860, 0.20701, 0.22005, 0.22099, 0.23009, 0.23964, 0.24514. However, the 0.19860, 0.20055, 0.20701, and 0.22005 systems are confirmed by other detected Lyman series lines. We note that the z = 0.42660 O VI system could be Lyα at z = 0.21089 and 0.21756, or Lyβ at z = 0.43523 and 0.44314. However, the derived physical parameters appear to match very well the atomic properties of the O VI doublet (see Table 3 and Fig. 12). We also note that several cases where O VI is observed without Lyα or Lyβ are reported in T. M. Tripp et al. (2006, in prep.): (i) O VI without H I is detected in four associated systems; (ii) the intervening system at z = 0.49510 toward PKS 0405-125 has strong O VI with no Lyβ; (iii) for PKS 1302-102 at z = 0.22744 there is an excellent O VI doublet detection, a very low (< 2σ) significance Lyα, and no Lyβ detection; (iv) there are several cases in the more complex H I systems where clearly detected O VI is well displaced in velocity from the H I absorption. Therefore, the z = 0.42660 system is likely to be an O VI system, but we would need a FUV spectrum that covers wavelengths up to 1850 Å to have a definitive answer. We will therefore treat this system in the paper as a tentative O VI system, and this system is also marked by "!"
in Tables 3 and 4. We finally note that there may be O VI at z = 0.22005 and 0.29134. For the system at z = 0.22005, O VI λ1037 is identified as Lyα at z = 0.04121, but the O VI λ1032 absorption appears weaker than expected and is not significant at 3σ. Therefore, we do not report these features as O VI. For the system at z = 0.29134, O VI would be shifted by −30 km s^-1 with respect to H I. Neither of the O VI lines is significant at 3σ, and therefore we elected not to include them as reliable detections.

Unblocked Redshift Path

We will need later (see §5.2) the unblocked redshift path for several species under study. The maximum redshift path available for Lyα is set by the maximum wavelength available with STIS E140M, which is 1729.5 Å, corresponding to z_max = 0.423. Note that the redshift of HE 0226-4110 is larger, z_QSO = 0.495. The blocked redshift interval arising from interstellar lines, other intervening intergalactic absorption lines, and the gaps existing in the wavelength coverage is ∆z_B = 0.022. The unblocked redshift path for H I is z_U = z_max − ∆z_B = 0.401. For O VI, we follow a similar method, but we note that O VI can arise without a detection of H I (see §3.3) since we are not covering the whole wavelength range for Lyα. We therefore do not restrict the O VI redshift path to H I. We also restrict the search to the parts of the spectrum where the 3σ limit integrated over [−50, 50] km s^-1 is ≤ 50 mÅ. Therefore the STIS spectrum between 1182 Å and 1225 Å and above 1565 Å was not used (at λ < 1182 Å, the FUSE spectrum was used to search for O VI). The unblocked redshift path for O VI is z_U = 0.450. In principle the unblocked redshift path is larger for C III and the EUV lines, but because they are much more difficult to detect (single line or weaker doublet), we adopt for C III, O II, O III, O IV, and Ne VIII the redshift path of O VI corrected for any blocked redshift interval arising from interstellar lines and other intervening intergalactic absorption lines at λ < 1032 Å.
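The bookkeeping for the unblocked path is simply the maximum path minus the union of the blocked intervals. A small sketch is below; the interval list is illustrative and is not the actual set of blocked windows along this sight line:

```python
def unblocked_path(z_max, blocked):
    """Unblocked redshift path: z_max minus the union of blocked [z1, z2] intervals."""
    clipped = sorted((max(0.0, z1), min(z_max, z2)) for z1, z2 in blocked)
    merged = []
    for z1, z2 in clipped:
        if z2 <= z1:
            continue                                  # falls outside [0, z_max]
        if merged and z1 <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], z2)    # overlaps previous: extend it
        else:
            merged.append([z1, z2])
    return z_max - sum(z2 - z1 for z1, z2 in merged)

# Toy blocked intervals totaling dz_B = 0.022 (two of them overlap), which
# reproduces the quoted z_U = 0.423 - 0.022 = 0.401 for H I
z_U = unblocked_path(0.423, [(0.100, 0.110), (0.105, 0.115), (0.200, 0.207)])
```

Merging the intervals first matters: blocked windows from ISM lines and coverage gaps can overlap, and double-counting them would underestimate the path.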
We also consider only wavelengths λ ≥ 924 Å for those lines because at smaller wavelengths the spectrum is too confused with the ISM Lyman series absorption lines. This gives an unblocked redshift path for O II and O III of about 0.350 and for O IV and Ne VIII of 0.283.

Physical Conditions in the Metal-Line Absorbers

One of the main issues with the detection of O VI absorbers is whether, given the measurements of the different species, we can distinguish between photoionization and collisional ionization, between warm and hot gas. This is a fundamental question because, to be able to estimate Ω_b(O VI) in the WHIM, we need to know how much of the observed O VI is actually in shock-heated hot gas rather than in cooler photoionized gas. For each of the observed absorbers we will first investigate whether a photoionization equilibrium model is a viable option for the source of ionization. We will then consider other sources, in particular collisional ionization equilibrium (CIE) models from Sutherland & Dopita (1993).

Photoionization

In the IGM, the EUV ionizing radiation field can be energetic enough to produce high ions in a very low density gas with a long path length. To evaluate whether or not photoionization can explain the observed properties of the O VI absorbers, we have run the photoionization equilibrium code CLOUDY (Ferland et al. 1998) with the standard assumptions, in particular that there has been enough time for thermal and ionization equilibrium to prevail. We model the column densities of the different ions through a slab illuminated by the Haardt & Madau (1996) UV background ionizing radiation field from QSOs and AGNs appropriate for the redshift of a given system. The models assume solar relative heavy element abundances from Grevesse & Sauval (1998), but with the updates from Holweger (2001) for N and Si, from Allende Prieto et al. (2002) for C, and from Asplund et al. (2004) for O and Ne (see Table 5 in Savage et al. 2005, and see also §5.3 for Ne).
With these assumptions, we varied the metallicity and the ionization parameter (U = n_γ/n_H = H-ionizing photon density / total hydrogen number density [neutral + ionized]) to search for models that are consistent with the constraints set by the column densities of the various species and the temperature given by the broadening of an absorption line:

T = A (b/0.129)^2,  (1)

where A is the atomic weight of a given chemical element. The temperature is only an upper limit because mechanisms other than thermal Doppler broadening could play a role in the broadening of the line.

The System at z = 0.01746: The absorber system at z = 0.01746 (see Fig. 8) has the lowest redshift of all absorbers in our HE 0226-4110 data. It is detected in Lyα, C IV (∼3σ), and O VI λ1031 (8.9σ; O VI λ1037 is blended with O IV at z = 0.34034). The profile fit to H I implies T < 2.4 × 10^4 K. The kinematics appear to be simple, with all the different species detected at the same velocity within 1σ. These species may therefore arise in the same ionized gas. To investigate this possibility we ran CLOUDY models with log N(H I) = 13.22. We estimated that a CLOUDY model with a solar abundance could reproduce the observed measurements of H I, C IV, and O VI and the limits on C III and N V (see Fig. 13). The physical parameters are tightly constrained by the O VI column density, with the ionization parameter log U ≃ −1.1 and the total H density n_H ≃ 5.6 × 10^-6 cm^-3. The corresponding cloud thickness is about 17 kpc and the total hydrogen column density 3.2 × 10^17 cm^-2. The gas temperature is 1.9 × 10^4 K, similar to the temperature implied by the broadening of the narrow H I line. These properties are reasonable for a nearby IGM absorber: photoionization is a likely source of ionization for the system at z = 0.01746.

The System at z = 0.20701: This system has the highest total H I column density.
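Equation (1) and the derived cloud sizes are easy to check numerically. The sketch below uses the b-value and CLOUDY outputs quoted in the text; the only outside ingredient is the standard conversion 1 kpc = 3.0857 × 10^21 cm:

```python
KPC_CM = 3.0857e21  # cm per kpc

def T_from_b(b_kms, A):
    """Eq. (1): upper-limit temperature (K) from a thermal Doppler width b (km/s)."""
    return A * (b_kms / 0.129) ** 2

def b_thermal(T, A):
    """Inverse of Eq. (1): thermal b-value (km/s) at temperature T (K)."""
    return 0.129 * (T / A) ** 0.5

# O VI (A = 16) with b = 40.4 km/s, as measured for the z = 0.42660 system,
# gives the quoted upper limit T < ~1.6e6 K
T_ovi = T_from_b(40.4, 16.0)

# z = 0.01746 absorber: cloud thickness L = N_H / n_H from the quoted CLOUDY values
N_H, n_H = 3.2e17, 5.6e-6       # total H column (cm^-2) and density (cm^-3)
L_kpc = N_H / n_H / KPC_CM      # ~18 kpc, within rounding of the quoted ~17 kpc
```

The same inverse relation reproduces the later statement that purely thermal H I at T ∼ 2.6 × 10^5 K would have b(H I) ≈ 66 km s^-1.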
It is detected in several H I Lyman series lines and in many lines of heavier elements in a variety of ionization stages, including, in particular, Ne VIII. This is the most complex metal-line system in the spectrum, with at least two velocity components in H I; one should refer to Savage et al. (2005) for a detailed analysis and interpretation of this system. This system has a metallicity of −0.5 dex and has multiple phases, including photoionized and shock-heated gas phases.

The System at z = 0.34034: This system has absorption seen in Lyα, Lyβ, C III, O IV, and O VI (see Fig. 10). The velocity centroids of these species agree well within the 1σ errors, and therefore a single-phase photoionized model may explain the observed column densities of the various observed species. We ran the CLOUDY models with log N(H I) = 13.68. We estimated that a simple photoionized model with a 1/2 solar abundance could reproduce the observed measurements of H I, C III, and O VI and the limits (see Fig. 14; we did not plot the limits for clarity). The physical parameters are tightly constrained by the O VI and C III column densities, with the ionization parameter log U ≃ −1.0 and the total H density n_H ≃ 1.1 × 10^-5 cm^-3. At higher or lower metallicity, the models cannot reproduce uniquely all the column densities of the various observed ions. The corresponding cloud thickness is about 40 kpc and the total hydrogen column density 1.4 × 10^18 cm^-2. The profile fit to H I implies T < (6.7 +2.1/−1.8) × 10^4 K, which is compatible at 2σ with the gas temperature of 2.5 × 10^4 K derived by CLOUDY with no additional non-thermal broadening. In conclusion, the observed simple kinematics and the column densities can be explained by a photoionization model with a 1/2 solar metallicity and log U ≃ −1.0. Therefore, photoionization can be a dominant process for this system too.
The System at z = 0.35523: This system is the third most complex metal-line system toward HE 0226-4110, with absorption seen in Lyα, Lyβ, O IV, and O VI. The LiF 2A channel suggests a nearly 3σ feature for O III, but a comparison of LiF 2A and LiF 1B shows that it is only a noise feature (see Fig. 11). The O IV feature is, however, real, since it is present in both channels, LiF 1A and LiF 2B. The profiles of Lyα and Lyβ are noisy and do not suggest a multicomponent structure. H I and O IV align very well, but the O VI profile appears to be more complex. The deeper part of the O VI λ1031 trough aligns well with O IV, but there is a positive-velocity wing, a ∼3σ feature (W = 17.0 ± 5.7 mÅ) at ∼+50 km s^-1. This feature could be an extra O VI component or a weak intervening Lyα absorption. The S/N is not high enough to really understand the full complexity of the O VI profile. We therefore treated O VI as being co-existent with O IV, and in particular we used the O IV profile to define the velocity range for integration of the O VI absorption lines. We ran a CLOUDY simulation with log N(H I) = 13.60. The total column densities of O IV and O VI and the limit on C III can be satisfied simultaneously in the photoionization model only for a very narrow range of metallicity near log Z/Z_⊙ = −0.55 at the given log N(H I) = 13.60. At a higher metallicity (log Z/Z_⊙ ≥ −0.52) the O VI and C III column density models diverge, and at a lower metallicity the models do not predict enough O IV. In Fig. 15, we show the CLOUDY model with 0.28 solar metallicity. The small error on O VI allows only a very narrow range of ionization parameters at log U ≃ −1.0 (n_H ≃ 1.1 × 10^-5 cm^-3). The corresponding cloud thickness is about 40 kpc and the total hydrogen column density 1.5 × 10^18 cm^-2. The gas temperature is 2.9 × 10^4 K. The Doppler parameter of H I implies T < (4.4 +2.5/−1.9) × 10^4 K, which is compatible with the temperature derived in the CLOUDY simulation.
For the system at z = 0.35523, a photoionization model with 0.28 solar metallicity and log U ≃ −1.0 can explain the measured O IV and O VI column densities and the limit on N(C III).

The System at z = 0.42660: This system is detected only in the two O VI lines (see Fig. 12) and could be misidentified (see §3.3). But although the O VI λ1031 line is confused with a spike due to hot pixels, both the strength and the separation of the absorption lines match the atomic parameters of the O VI doublet. No H I is found associated with this system, but the wavelength range only allows us to access Lyβ. Because O VI λ1031 is blended with an emission artifact, it is not clear if the O VI profile has one or more components. The negative-velocity part of the O VI λ1031 profile, where the line is not contaminated by the instrumental spike, suggests a rather smooth profile. The O VI λ1037 absorption is noisy and too weak to indicate if more than one component is needed. We therefore fitted the O VI lines with one component and removed the apparent emission feature from the fit. We note that if the fit is made using O VI λ1037 alone, the parameters are consistent with those of the fit to the doublet. The errors on the b-value may be larger than the formal errors presented in Table 3. The fit to the O VI absorption lines yields b = 40.4 ± 5.0 km s^-1, implying T < 1.6 × 10^6 K. We explored the possibility that this system may be principally photoionized by the UV background. Since we do not know the amount of H I for this absorber and have only O VI, the results remain uncertain. For a wide variety of inputs (log N(H I) = 13.55, 13.30, 13.05, and log Z = [−0.6, 0]), the observed O VI column density can be reproduced with a reasonable ionization parameter of log U ∼ −0.7 or smaller and a cloud thickness less than 100 kpc. The broadening of the O VI profiles is non-thermal since the photoionization models give a gas temperature of 3 × 10^4 K.
The Hubble-flow broadening also appears negligible in most cases. We note that none of the other column density limits constrain the model further. Collisional Ionization: We showed above that the O VI systems, along with the ancillary ions, can be modeled by photoionization alone, except for the system at z = 0.20701 described by Savage et al. (2005), for which O VI clearly traces hot gas. If photoionization is the dominant source of ionization of these systems, the broadening of O VI and the other metal lines is dominated by non-thermal broadening, or substructure may be present, since the temperature of the photoionized gas is typically a few 10^4 K. For H I, however, there is little room for other broadening mechanisms since b_thermal ≈ b_total. We note that O VI is detected at z = 0.01746 in the FUSE spectrum, which has an instrumental broadening of about b_inst ≃ 12.5 km s^-1. For this system, it is not possible to determine whether b is smaller than the instrumental width. Within the errors, the broadening of the line can be reconciled with broadening from nearly purely thermal motions. The other O VI systems lie in the STIS spectrum, but the S/N of those data is not good enough to distinguish between single and multiple absorption components. In particular, we note the complexity of the O VI profile at z = 0.35523. The broadening of the O VI lines (and C IV and O IV when detected) implies temperatures of a few ×10^5 K if the broadening is purely thermal. At these temperatures, collisional ionization can be an important source of ionization (Sutherland & Dopita 1993). If CIE applies, these systems must be multi-phase since the observed narrow H I absorptions cannot arise in hot gas.
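The temperature limits quoted from line widths throughout this section follow from the thermal Doppler relation b = (2kT/Am_H)^(1/2). A minimal sketch (function name is ours) reproducing two values used in the text, b(O VI) = 40.4 km s^-1 giving T < 1.6 × 10^6 K and b(H I) = 46.3 km s^-1 giving T = 1.29 × 10^5 K:

```python
K_B = 1.380649e-16  # Boltzmann constant, erg/K
M_H = 1.6726e-24    # hydrogen (proton) mass, g; adequate for this estimate

def b_to_T(b_kms: float, A: float) -> float:
    """Maximum gas temperature if the observed line width is purely thermal:
    b = sqrt(2 k T / (A m_H))  =>  T = A m_H b^2 / (2 k),
    where A is the atomic mass number of the absorbing ion."""
    b = b_kms * 1e5  # km/s -> cm/s
    return A * M_H * b**2 / (2.0 * K_B)

print(f"{b_to_T(40.4, 16.0):.2e}")  # O VI (A=16): ~1.6e6 K
print(f"{b_to_T(46.3, 1.0):.2e}")   # H I  (A=1):  ~1.3e5 K
```

Because T scales with the ion mass A at fixed b, a metal line as broad as the H I line would require a far hotter gas, which is why the H I widths leave little room for non-thermal broadening while the metal-line widths do.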
Therefore, there is a large degree of uncertainty in any attempt to derive parameters from CIE models, since the fraction of O VI or other ions arising in photoionized or collisionally ionized gas is unknown, and the kinematics do not allow a clean separation of gas phases with different ionization origins. With this caveat, and making the strong assumption that the metal ions are not produced in photoionized gas, we now review whether CIE could match the observed column densities of the metal-line systems: The System at z = 0.01746: The broadening of O VI implies T ∼ 2.2 × 10^5 K (with a large uncertainty) if it is purely thermal. The limit on T(O VI) is very close to the peak temperature for O VI in collisional ionization equilibrium (T = 2.8 × 10^5 K; Sutherland & Dopita 1993). The ratio log[N(C IV)/N(O VI)] = −0.5 is compatible with the highly ionized gas being in collisional ionization equilibrium at T ≈ 2.0 × 10^5 K, close to the thermal broadening for O VI. Yet, the non-detection of N V implies an N/O ratio less than 0.4 times solar. A low N/O ratio has recently been observed in a solar metallicity environment in the low-z IGM toward PHL 1811 (Jenkins et al. 2005). Therefore, this system could be collisionally ionized. The System at z = 0.34034: We find N(O VI) ≈ N(O IV). In CIE, O IV and O VI have the same ionic fraction at T ∼ 2.6 × 10^5 K, which corresponds to the thermal broadening of these lines within 1σ. The fact that log[N(N V)/N(O VI)] < −0.5 is consistent with this temperature. A temperature of ∼2.6 × 10^5 K implies b(H I) = 66 km s^-1 if the broadening is purely thermal. Such a broad component could be hidden in the noise of the spectrum. The blue part of the Lyα profile may indicate the presence of a broad wing (see Fig. 10), but the total recovery of the flux on the red part of the Lyα profile indicates that the features within the blue wing are not part of the main H I absorption.
While we can force a broad Lyα component into the observed profile in the same way that we did for the system at z = 0.01746, the data do not warrant it. CIE may be viable for this system, but higher quality data are needed to check this. The photoionization model with the adopted parameters predicts log N(N V) = 13.21, while CIE predicts log N(N V) = 12.75 (assuming a solar abundance). An increase in the S/N by about a factor of 3 would provide a good 3σ limit on N V and several other species, which would allow a discrimination between these models. The System at z = 0.35523: The same discussion applies for this system as for the z = 0.34034 system, since the ionic ratios are similar. Higher S/N data would provide better constraints on N V here as well. However, the broadening of the O IV and O VI lines is more consistent with T ∼ 5−6 × 10^5 K. Therefore, either there is substructure in the profile, as the complexity of the O VI λ1031 profile may suggest, or non-thermal broadening is present if CIE applies. The System at z = 0.42660: The fit to the O VI absorption lines yields b = 40.4 ± 5.0 km s^-1, implying T < 1.6 × 10^6 K. In CIE at T ∼ 1.6 × 10^6 K, we should find log N(Ne VIII) = 13.83 if the relative abundance of Ne and O is solar. Our 3σ limit for Ne VIII implies a much smaller column density, less than 13.57 dex. If the temperature is 5.4 × 10^5 K (corresponding to a thermal b-value for O VI of about 25 km s^-1), the ratio of Ne VIII to O VI would be consistent with the observed ratio (this would also fit the ratio limit of O IV to O VI). If the gas is hot, either the profiles may be more complicated, the broadening not solely thermal, or the abundance of Ne relative to O not solar. We note that the AOD b-value of O VI λ1037 is less than 2σ from 25 km s^-1, where N(Ne VIII)/N(O VI) < 1. The System at z = 0.16339: This absorber has not been discussed yet because no O VI is detected and there is only a tentative measurement of C III.
It also has a broad H I component, potentially tracing hot gas. It is seen in Lyα and Lyγ, with Lyβ hidden by interstellar Si II λ1193 (see Table 3 and Fig. 9). O III is confused with H_2 (see Table 3). There is a 2.9σ detection of C III in LiF 1A. The data from LiF 1B have lower S/N than LiF 2A and imply a 2.9σ upper limit of 12.51 dex for C III, although one pixel is aligned with C III in LiF 2A (see Fig. 9). Within the 1σ errors, H I and C III have compatible redshifts. The profile of H I is very well fitted with a single Gaussian with b = 46.3 ± 1.9 km s^-1. If the broadening is purely thermal, this would imply T(H I) = 1.29 × 10^5 K. This is the only broad H I system for which a metal ion is (tentatively) detected along this line of sight. Generally, O VI is a more likely ion to associate with broad H I (Sembach et al. 2004), but ions in lower ionization stages can constrain the metallicity of broad Lyα absorbers as well. We find [C/H] < −2, if CIE and pure thermal broadening apply. Summary of the Origin of the O VI Systems in the Spectrum of HE 0226-4110: In summary, the O VI systems at z = 0.01746, 0.34034, 0.35523, and 0.42660 can be explained by photoionization models. In Table 5, we summarize the basic properties of the observed O VI systems assuming photoionization and the possible origins of these systems. The broadening of the metal lines appears to be mostly non-thermal if photoionization alone applies, or there may be unresolved substructure buried in the noise of the spectrum. Only the system at z = 0.20701 described by Savage et al. (2005) appears to be clearly multiphase, with photoionized and collisionally ionized gas. The ionic ratios observed in the system at z = 0.01746 can be reconciled with a single CIE model if the relative abundances are non-solar. For the other systems, CIE is possible with current constraints. In particular, only a little non-thermal broadening would be needed to explain the broadening of the metal lines.
Higher S/N FUV spectra would provide a better understanding of the shape and kinematics of the observed profiles and access to other key elements such as N V. Implications for the Low Redshift IGM: Discriminating between Photoionization and Collisional Ionization: The analysis in §4 shows that it is difficult to clearly differentiate between photoionization and collisional ionization for several of the metal-line systems. Savage et al. (2005) showed that the system at z = 0.20701 consists of a mixture of photoionized and collisionally ionized gas. This is the only system along this line of sight for which O VI cannot be explained by photoionization. We showed that the system at z = 0.01746 can be collisionally ionized if N/O is sub-solar, but the observed column densities can also be explained by a simple photoionization model with relative solar abundances. For the O VI systems at z = 0.34034 and 0.35523 and the tentative O VI system at z = 0.42660, photoionization is also the likely source of ionization, although collisional ionization cannot be ruled out. If photoionization dominates the ionization in a large fraction of the O VI systems, estimates of the baryonic density of the WHIM as traced by O VI would need to be reduced by a similarly large factor (a factor of 6 for the HE 0226-4110 line of sight). Recent observations in the FUV of the lines of sight toward H 1821+643 (Tripp et al. 2001), PG 1116+215, PG 1259+593, and PKS 0405-123 (Prochaska et al. 2004) have similarly shown that the O VI systems are complex, with a mix of photoionized and collisionally ionized systems. Prochaska et al. (2004) noted a general decline of the photoionization parameter U estimated in the CLOUDY models with increasing observed H I column density. Davé et al. (1999) found that n_H ∝ N(H I)^0.7 10^-0.4z for the low-z Lyα absorbers. Assuming that the H I ionizing intensity is constant gives U ∝ N(H I)^-0.7 (Prochaska et al. 2004).
To check if this trend holds with more data, and in particular to check whether the O VI systems that could be explained by both photoionization and CIE deviate from this trend, in Fig. 17 we combine our results summarized in Table 5 for the photoionized O VI systems with results from Prochaska et al. (2004), Richter et al. (2004), Savage et al. (2002), and Sembach et al. (2004). We show in Table 6 the redshift, log U, and H I column density. A least-squares linear fit to the data gives log U = −0.58 log(N(H I)/10^14) − 1.23 (we did not include the peculiar O VI system at z = 0.36332 toward PKS 0405-123, where the very weak Lyα is offset from the metal-line transitions, see Prochaska et al. 2004; including this system would change the slope to −0.55). For the fit we treated the lower limit of log U toward PKS 0405-123 as an absolute measure, but excluding this limit from the fit would not change the result. The solid curve in Fig. 17 shows the fit. While the slope of −0.58 is close to the predicted slope of −0.7 from the numerical simulations (Davé et al. 1999; Davé & Tripp 2001), the simulations appear to provide an upper limit to this relation (see dotted line in Fig. 17). Note that metal-line systems with no O VI do not seem to follow this correlation. Although the correlation could be somewhat fortuitous for the O VI systems that can be explained by both photoionization and CIE origins, it would have to occur for all these systems (e.g., 2 systems presented in this paper, and the system at z = 0.14232 toward PG 0953+415 in Savage et al. 2002). This may favor photoionization for the O VI systems that can be fitted by both photoionization and CIE models. So far, we have mainly considered photoionization and CIE models to explain the observed O VI systems.
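The empirical relation above can be evaluated directly. A small sketch (the function name is ours) checking it against the z = 0.35523 absorber, whose CLOUDY model gave log U ≃ −1.0 at log N(H I) = 13.60:

```python
def log_U(log_NHI: float, slope: float = -0.58, intercept: float = -1.23) -> float:
    """Empirical ionization-parameter trend quoted in the text:
    log U = slope * log(N(HI) / 1e14) + intercept."""
    return slope * (log_NHI - 14.0) + intercept

# z = 0.35523: log N(HI) = 13.60 -> log U ~ -1.0,
# consistent with the CLOUDY value derived for that system.
print(round(log_U(13.60), 2))
```

The agreement for this system is one instance of the broader point made above: absorbers that could be either photoionized or collisionally ionized still land on the photoionization U-N(H I) trend.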
However, if the gas is hot (between 10^5 and 10^6 K) and dense enough, it should cool rapidly, since this is the temperature range over which the cooling of an ionized plasma is maximal (Sutherland & Dopita 1993). O VI is the dominant coolant in this temperature range. Heckman et al. (2002) investigated the non-equilibrium cooling of O VI and computed the relation between N(O VI) and b_obs(O VI) expected for radiatively cooling gas (see also Savage et al. 2003). There is no obvious separation in Fig. 18 between systems that could be photoionized or collisionally ionized. In this figure, we also reproduce Heckman et al.'s radiative cooling models for 10^5 and 10^6 K and the N/b(O VI) linear regime (∆v = 0 km s^-1, see Heckman et al.). Their models can reproduce the observed b−N distribution. Heckman et al. (2002) also computed the column densities of several ancillary species. We reproduce the observed and predicted ratios of O IV/O VI, Ne VIII/O VI, and S VI/O VI in Table 7. Generally, their models produce too much Ne VIII. For O IV and S VI, the comparison may be more difficult because photoionization may play a role, but photoionization can also produce O VI if the IGM has a very low density. So while these cooling models can reproduce the distribution of N(O VI) and b_obs(O VI), they generally fail to predict the observed ionic ratios, especially Ne VIII/O VI. The pure isobaric model is always ruled out, as it predicts an O IV/O VI ratio that is too low. Note that these models assume solar relative elemental abundances. In summary, our results show that for 1 out of 5 O VI systems toward HE 0226-4110, collisional ionization is the likely origin of the O VI. For the 4 other systems, comparison of the ionic ratios and the kinematics of the ionic profiles does not yield a single solution for the origin of the O VI. We note that U and N(H I) of these systems correlate and follow the same distribution as purely photoionized O VI systems, suggesting that photoionization could be the origin of these O VI systems.
To better understand the basic properties of the IGM in the low redshift universe, observations with high S/N are needed. At high redshift, data with S/N of 100 per 7 km s^-1 resolution element have answered many fundamental questions that remained unanswered with lower quality data. While in the near future FUV spectra will not have the quality of current optical data, S/N of ∼50 or higher could be achieved on bright QSOs with the Cosmic Origins Spectrograph (COS) when/if it is installed on the HST. Frequency of Occurrence of Oxygen in Different Ionization Stages: Oxygen is certainly the best metal element for the study of the physical properties of the IGM because it is the most abundant and because it has a full range of ionization stages, from O I to O VIII, accessible to existing spectrographs. While we are far from being able to combine X-ray (O VII, O VIII) lines with FUV lines because of sensitivity and spectral resolution issues, searching for O I to O VI is possible by combining EUV and FUV lines available in the FUSE and STIS wavelength ranges. In Table 8, we summarize the transitions of oxygen observable in the EUV-FUV and the redshifts at which these transitions can be observed. Toward HE 0226-4110, we unfortunately miss O V λ629 because the redshift path of HE 0226-4110 is not large enough (O V is detected in the associated system, see Fig. 2). The close match in the oscillator strengths among these ions also allows direct comparisons of lines with similar column density sensitivities (see Table 8). For the Lyα systems detected toward HE 0226-4110 it is almost always possible to search for the associated metal lines, as blending is rarely a problem. The only system where we cannot study O VI is the one at z = 0.09059. For all the other systems, we are able to measure the O VI column density or estimate a 3σ limit.
For 5 absorbers (z = 0.06083, 0.10667, 0.16237, 0.16971, 0.19861), O VI λ1031 is blended with other IGM or ISM absorption, so the limits were estimated with O VI λ1037. For O III and O IV there are only 8 and 2 systems, respectively, for which we cannot make a measurement because of blending (see Table 4). We do not report estimates for O I and O II because these lines are rarely detected in the Lyα forest (they are sometimes detected in the high H I column systems with log N(H I) > 16.1, see Prochaska et al. 2004). Since the values of λf of the oxygen lines are similar, the column density limits for these different ions are similar. The ionization potentials of O III and C III are similar, and since there are more systems for which we can search for C III, we also report the measurement of the strong C III line (log λf = 2.85) in Table 4. Only for the systems at z = 0.06083, 0.08901, 0.24514, and 0.42660 were we not able to estimate the amount of C III. We find dN(C III)/dz = 4−6. Richter et al. (2004) found [dN(O III)/dz ≈ 5] < [dN(O IV)/dz ≈ 13] ≈ [dN(O VI)/dz ≈ 13]. The general pattern that emerges is that there is a slightly larger number of O IV and O VI systems per unit redshift compared to O III, about the same number of O IV and O VI systems per unit redshift, but a much larger number of O III systems per unit redshift than of O I and O II. The low redshift IGM is thus predominantly highly ionized rather than weakly ionized. This is consistent with the picture of the Lyα forest consisting mainly of very low density photoionized gas and hot gas. We also note that when O III, O IV, and O VI are detected simultaneously, the column densities of these ions generally do not differ by more than a factor of 2-3. Observations of Ne VIII: Li-like Ne VIII provides a powerful diagnostic of hot gas because in CIE its ionic fraction peaks at 7 × 10^5 K, Ne has a relatively high cosmic abundance ([Ne/H]⊙ = −4.16), and the Ne VIII lines have relatively high f-values (log fλ = 1.90, 1.60).
In CIE at T ∼ (0.6−1.3) × 10^6 K, N(Ne VIII)/N(O VI) > 1, and for T ∼ (0.6−1.0) × 10^6 K, N(Ne VIII) ∼ (2−3) × N(O VI). Savage et al. (2005) discussed the system at z = 0.20701 in the spectrum of HE 0226-4110 and reported the first detection of an intervening Ne VIII system in the IGM. The z = 0.20701 O VI/Ne VIII system arises in hot gas at T ∼ 5.4 × 10^5 K. With little contamination from H_2 lines and its long redshift path, HE 0226-4110 currently provides the best line of sight to search for Ne VIII. We found, however, only one detection of Ne VIII among the 32 H I systems (excluding the associated system) at redshifts where Ne VIII could be measured. We also searched for pairs of absorption features with the appropriate separation of the Ne VIII doublet, not associated with a Lyα feature, and found none. Note that none of the H I systems observed are broad enough to produce a significant fraction of Ne VIII, since the broadest H I implies T < 2.8 × 10^5 K. In Fig. 19a we show the column density limits and measurement for Ne VIII against those of O VI for the HE 0226-4110 line of sight. The upper limits on N(Ne VIII) were typically integrated over a velocity range of 80-100 km s^-1, which corresponds to T ∼ (0.6−1.1) × 10^6 K if the profile is purely thermally broadened. Fig. 19b shows the relationship between the ratio of the Ne VIII and O VI columns and the temperature for the CIE model of Sutherland & Dopita (1993), assuming a solar relative abundance. The circles overplotted on this curve are obtained from the measurements of O VI and the measurements or upper limits on Ne VIII along the HE 0226-4110, PKS 0405-123 (Prochaska et al. 2004; Williger et al. 2005), and PG 1259+594 lines of sight. These data cannot fit the curve at higher temperatures because the observed O VI broadening always implies T < 10^6 K, except for the system at z = 0.42660; for this system the limit on Ne VIII is too low.
If CIE applies along with solar Ne/O, all these data imply T < 6 × 10^5 K for the O VI systems. Fig. 18 shows that most of the detected O VI systems have column densities > 13.5 dex. Hence, the present observations should be sufficiently sensitive to detect Ne VIII systems with T ∼ 10^6 K, since most of our 3σ limits are less than 13.8 dex (see Fig. 19a). We note that even though at higher temperatures N(Ne VIII) ≫ N(O VI), the Ne VIII profile would be broadened beyond detection at the current S/N of the observations. For the O VI system at z = 0.42660, the broadening of the O VI absorption favors a high temperature, but no Ne VIII is detected; since the O VI system can be photoionized and non-thermally broadened, a cooler temperature is possible. Solar reference abundances have changed quite significantly over recent years for C, O, and Ne. Since Savage et al. (2005) produced a summary of recent solar abundance revisions (see their Table 5), recent helioseismological observations have been used to argue that the Ne abundance given by Asplund et al. (2004) is too low to be consistent with the solar interior (Bahcall et al. 2005). Recently, Drake & Testa (2005) derived Ne/O abundances from Chandra X-ray spectra of solar-type stars and found Ne/O abundances that are 2.7 times larger than the values reported by Asplund et al. (2004), which alleviates the helioseismology problem. But if the Ne abundance is −3.73 instead of −4.16, it would further complicate the explanation for the low frequency of occurrence of Ne VIII systems. It is still unclear what the definitive value of the solar Ne abundance is, since recent re-analyses of solar spectra still yield a low value (e.g., Young 2005). The WHIM at T ≳ 10^6 K (where the bulk of the baryons should be found, according to simulations) still remains to be discovered with EUV-FUV metal-line observations. So far, only the X-ray detections of O VII in nearby systems at z = 0.011, 0.027 discussed by Nicastro et al.
(2005) imply temperatures of a few ×10^6 K. The non-detection of hot systems with T > 7 × 10^5 K via Ne VIII may have several explanations and implications. If the O VI systems are mainly photoionized, it is not surprising to find very few containing Ne VIII. Assuming that the numerical simulations of the WHIM are correct, the bulk of the WHIM may have a lower metallicity than the gas traced by hot O VI absorbers. Broad Lyα absorbers, which trace the WHIM, appear in some cases to have a 0.01 solar abundance (see §4.2). The relative abundances may not always be solar; low metallicity environments are known to have relative abundances different from those in the Sun. CIE models may not be a valid representation of the distribution of ions in hot gas in the WHIM, but we note that the non-equilibrium cooling-flow models of Heckman et al. (2002) would generally also predict too much Ne VIII (see Table 7). Additional and more sensitive searches for Ne VIII would be valuable for better statistics and a better understanding of the Ne VIII systems and their frequency of occurrence in the IGM. Metallicity of the Low Redshift Metal-Systems: Metallicity is a fundamental quantity for obtaining estimates of Ω_b from metal-line absorbers and for following the chemical enrichment of the IGM. A metallicity of 0.1 solar is generally adopted for estimating Ω_b(O VI) (e.g., Tripp, Savage, & Jenkins 2000; Danforth & Shull 2005), but since Ω_b ∝ Z^-1, a change in the adopted metallicity can significantly alter the estimated Ω_b. Table 6 provides recent metallicity estimates via photoionization models of O VI systems in the low redshift IGM. If these systems are primarily photoionized, they are not tracers of the WHIM. But abundance measurements in collisionally ionized gas are generally not reliable. Combining the results in Table 6, we find a median abundance for the photoionized O VI systems of −0.5 dex solar, with a large scatter of ±0.5 dex around this value.
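Since Ω_b ∝ Z^-1, the effect of replacing the commonly assumed 0.1 solar metallicity with the −0.5 dex median found here can be sketched as follows (a minimal illustration; the function name is ours):

```python
def omega_b_decrease_factor(assumed_Z_dex: float, measured_Z_dex: float) -> float:
    """Since Omega_b derived from an ion tracer scales as 1/Z, raising the
    adopted metallicity from the assumed to the measured value lowers the
    inferred Omega_b by this factor (dex values are log10 Z/Z_sun)."""
    return 10 ** (measured_Z_dex - assumed_Z_dex)

# Assumed Z = 0.1 solar (-1.0 dex) vs. the median -0.5 dex found here:
print(round(omega_b_decrease_factor(-1.0, -0.5), 1))  # factor ~3 decrease
```

This reproduces the "decrease by a factor ∼3" quoted in the discussion of the Ω_b implications below; the ±0.5 dex scatter in the metallicities translates directly into a comparable multiplicative uncertainty.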
Only 4 of the 13 systems appear to be either at solar abundance or at less than 1/10 solar. If those systems are not taken into account, the metallicity is −0.45 ± 0.10 dex. If the metallicity of the photoionized O VI systems reflects the metallicity of the collisionally ionized O VI systems, Ω_b derived via O VI would decrease by a factor of ∼3 since Ω_b ∝ Z^-1. Summary: We present the analysis of the intervening absorption and the interpretation of the metal-line systems in the FUSE and STIS FUV spectra of the QSO HE 0226-4110 (z_em = 0.495). Due to the low fraction of Galactic ISM molecular hydrogen along this line of sight, HE 0226-4110 provides an excellent opportunity to search for metals associated with the Lyα forest. For each Lyα absorber, we systematically search for C III, O III, O IV, O VI, and Ne VIII. We also search for O VI and Ne VIII doublets not associated with H I. For each metal-line system detected, we also search for any other metals that may be present. The richest intervening metal-line system along the HE 0226-4110 sight line is at z = 0.20701. This system was fully discussed by Savage et al. (2005). We examine the ionic ratios to constrain the ionization mechanisms, the metallicity, and the physical conditions of the absorbers using single-phase photoionization and collisional ionization models. We discuss our results in conjunction with analyses of the metal-line systems observed in several other low redshift QSO spectra of similar quality. The main results are as follows: 1. We detect 4 O VI absorbers with rest-frame equivalent widths W_λ ≳ 50 mÅ. A tentative detection at z = 0.42660 is also reported, but this system could be misidentified because Lyα cannot be accessed. 2. One O VI system, at z = 0.20701, cannot be explained by photoionization by the UV background (see Savage et al. 2005). For the other 4 O VI systems, photoionization can reproduce the observed column densities.
However, for these systems, collisional ionization may also be the origin of the O VI. We note that if photoionization applies, the broadening of the metal lines must be mostly non-thermal, but the H I broadening is mostly thermal. 3. The ionization mechanism in several cases is indeterminate. High S/N FUV spectra obtained with COS or other future FUV instruments will be needed to resolve several of the current observational uncertainties. Oxygen is particularly useful to study the ionization and physical conditions in the low redshift IGM. We thank Marilyn Meade for calibrating the many FUSE datasets. The STIS observations of HE 0226-4110 were obtained as part of HST program 9184 with financial support from NASA grant HST GO-9184.08-A. Support for this research was provided by NASA through grant HST-AR-10682.01-A. NL was also supported by NASA grants NNG04GD885G and NAG5-13687. BPW was supported by NASA grants LTSA NAG5-9179 and ADP NNG04GD885G. TMT was supported in part by NASA LTSA grant NNG04GG73G. This research has made use of the NASA Astrophysics Data System Abstract Service and the SIMBAD database, operated at CDS, Strasbourg, France. [Table 3 row fragments: C III λ977 measurements with STIS.] Note to Table 3. "...": species cannot be observed because of blending; "n.a.": species is not observable at this redshift; "!": possible misidentification, see §3.3 for more details. a) The total column densities are presented, but the structure is more complex (see Savage et al. 2005). b) These three systems correspond to the 3 absorption lines observed in the system denoted z ≃ 0.27155 in Table 3. c) The limit is based on the Lyβ line (no detection); the corresponding Lyα line is redward of the available STIS wavelength range.
Note to Table 5. a) This column lists the ionization mechanism for O VI: photoionization (photo) or collisional ionization (coll); "Photo/Coll" indicates that both ionization mechanisms could explain the observed O VI column densities. b) The metallicity, ionization parameter, H column density, and path length of the absorber are from the CLOUDY photoionization models, except for the O VI system at z = 0.20701, which are from CIE (see Savage et al. 2005 for more details). c) Note this is the total H I column density measured in the photoionized system that does not contain O VI; the estimated H I parameters associated with the O VI system are log N(H I) = 13.70 and b = 97 km s^-1 (see Savage et al. 2005). "!": possible misidentification, see §3.3 for more details.
[Table 6 fragment (PKS 0405-123 systems): z = 0.16701: −1.90; z = 0.18292: < −0.57; z = 0.36335: +0.21, < −0.03; z = 0.49512: > −0.15, (<) −0.74, < −1.87.]
[Table 3 column headings: Species, λ_obs (Å), W_λ (mÅ), v (km s^-1), b (km s^-1), log N (dex), Method, [−v, +v] (km s^-1), Instrument.]
Note to Table 7. The calculations of the cooling columns at constant pressure (isobaric) and density (isochoric) are from Heckman et al. (2002). The measurements are from: HE 0226-4110, this paper; PKS 0405-123, Prochaska et al. (2004) and Williger et al. (2005); PG 1259+593, Richter et al. (2004).
[Spectral atlas caption fragment: at λ < 986 Å the data are from the SiC 2A channel, between 986 Å and 1083 Å from the LiF 1A channel, and at λ > 1086 Å from the LiF 2A channel. Line detections are denoted by tick marks above the spectra; identifications for these detections are listed on the right-hand side of each panel, with redshifts (in parentheses) indicated for intergalactic lines. In cases where the +190 km s^-1 high-velocity interstellar feature is present, an offset tick mark is attached to the primary tick mark at the rest wavelength of the line. See Table 3 for more details; other lines present are identified.]
[CLOUDY model figure caption fragment: pressure, temperature, density, ionization parameter, total hydrogen column density, and cloud thickness are plotted along the x-axes. The thick solid lines along the thin model curves show the ionization parameter ranges consistent with the observed column densities (within their 1σ uncertainties); the ions corresponding to each curve are indicated by the key at the right.]
[Fig. 17 caption fragment: the ionization parameter derived from CLOUDY models is plotted against the measured H I column density for the systems that could be mostly photoionized, combining data from the present work and from the analysis of several other low redshift QSOs (see §5.1 for more detail).]
The solid line is a fit to the data with a slope of −0.58, while the cosmological simulation predicts a slope of −0.7 (dotted line). [Fig. 18 caption fragment: data include PG 1116+215 (Sembach et al. 2004) and PG 1259+593 (squares, Richter et al. 2004). The relation predicted by Heckman et al. (2002) for radiatively cooling gas is shown as dotted lines for assumed O VI temperatures of 10^5 K and 10^6 K; the solid line corresponds to the linear regime where identical components have the same central velocity (∆v = 0 km s^-1 in Heckman et al. 2002).] [Fig. 19 caption fragment: the solid curve represents the relationship between the ratio of Ne VIII to O VI and the temperature in a CIE model assuming a solar relative Ne/O abundance from Asplund et al. (2004). The circles are observed column densities for which N(O VI) is measured and N(Ne VIII) is a measurement (filled circle) or a 3σ upper limit (open circles), obtained along the lines of sight to HE 0226-4110, PKS 0405-123, and PG 1259+593 and placed on the CIE curve. This figure shows that, if CIE applies and the relative abundances are solar, the WHIM at log T ∼ 5.7-5.8 has been detected in only one of ten O VI systems; most O VI systems trace gas with T < 5 × 10^5 K based on the broadening of the lines.] [Footnote fragment: ... and 0.24514 systems have Lyβ and Lyγ. The system at z = 0.20055 would correspond to Lyβ at z = 0.42289; at z = 0.42289, a feature at 1383.822 Å (marked as Lyα at z = 0.13832 in Fig. 3) could possibly be identified with Lyγ, but the measured H I column densities for Lyβ and Lyγ are discrepant by 0.25 dex. Hence, the system at z = 0.20055 is most likely Lyα as well. The remaining Lyα systems at z = 0.22099, 0.23009, 0.23964 could actually be Lyβ at z = 0.44721, 0.45799, 0.46931, respectively; those are marked by "!" in Tables 3 and 4.] But log[N(N V)/N(O VI)] < −0.1 implies that N must be subsolar, because CIE predicts a fraction of 0.2 at log T = 5.30.
At this temperature, b(H I)_broad = 58 km s⁻¹ for pure thermal broadening; such a broad component could be superposed on the narrow H I absorption and hidden in the noise of the spectrum. To constrain the H I column density of the broad component, we fit the Lyα profile simultaneously with both narrow and broad lines. For the broad component we fix b(H I)_broad = 58 km s⁻¹ and v(H I)_broad ≡ v(O VI); the parameters v, b, and N are free to vary for the narrow component. We find a fit with a very similar reduced χ² to that of the one-component fit, giving log N(H I)_broad = 12.71 ± 0.37. In Fig. 16 we show the fit to the Lyα line for log N(H I)_broad = 12.71. In CIE at log T ≈ 5.30, the logarithmic ratio of the O VI fraction to the H I fraction is log[f(O VI)/f(H I)] = 3.82 (Sutherland & Dopita 1993), and the solar oxygen abundance is log(O/H)⊙ = −3.34 (Asplund et al. 2004), implying [O/H] ∼ 0.4 if log N(H I)_broad = 12.71 and [O/H] ∼ 0.0 if log N(H I)_broad = 13.

For example, the system at z = 0.00530 toward 3C 273 gives log N(H I) = 15.85 and log U = −3.4 (Tripp et al. 2002), and the system at z = 0.16610 gives log N(H I) = 14.62 and log U = −2.6 (Sembach et al. 2004): both systems lie well below the distribution of the data plotted here. Heckman et al. (2002) investigated the non-equilibrium cooling of O VI and computed the relation between N(O VI) and b_obs(O VI) expected for radiatively cooling gas at temperatures of 10⁵ and 10⁶ K. Considering the four IGM sight lines that have now been fully analyzed (HE 0226-4110, this paper; PKS 0405-123, Williger et al. 2005; PG 1116+215, Sembach et al. 2004; PG 1259+593, Richter et al. 2004), we plot in Fig. 18 the column density against the observed width for the O VI absorption systems seen along these lines of sight. Most of the data lie in the b-range [10, 35] km s⁻¹ and the log N range [13.5, 14.1], with a large scatter.
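As a consistency check on the numbers quoted above (this arithmetic is ours, not part of the original analysis), the pure thermal width of hydrogen and the implied oxygen abundance follow from:

```latex
b_{\rm th}({\rm H}) = \sqrt{\frac{2kT}{m_{\rm H}}}
  \simeq 0.129\,\sqrt{T\,[{\rm K}]}\ {\rm km\,s^{-1}}
  \approx 58\ {\rm km\,s^{-1}} \quad {\rm at}\ \log T = 5.30,

[{\rm O/H}] = \log N({\rm O\,VI}) - \log N({\rm H\,I})_{\rm broad}
  - \log\!\frac{f({\rm O\,VI})}{f({\rm H\,I})} - \log({\rm O/H})_\odot
  = 13.60 - 12.71 - 3.82 + 3.34 \approx 0.4 .
```

Here log N(O VI) = 13.60 is the value measured for this system; the same expression with log N(H I)_broad = 13 gives [O/H] ≈ 0.1, i.e. roughly solar, consistent with the [O/H] ∼ 0.0 quoted above.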
Considering data outside these ranges, there is a general trend of the O VI column density increasing with increasing O VI width, as observed in the Galaxy and the Small and Large Magellanic Clouds.

A useful statistical quantity for constraining the ionization of the IGM is the number of intervening systems per unit redshift. For O VI, five intervening systems toward HE 0226-4110 are detected in either one (z = 0.01748) or both O VI lines (z = 0.20701, 0.34034, 0.35523, 0.42660). These five systems have W(1031) > 48 mÅ. The number of intervening O VI systems with W_λ ≳ 50 mÅ per unit redshift is dN(O VI)/dz = 11 for an unblocked redshift path of 0.450 (see §3.4). This number would be 9 if the tentative z = 0.42660 system is not included. There is one definitive detection of O III and there are three detections of O IV in the spectrum of HE 0226-4110. Using the redshift path defined in §3.4, dN(O III)/dz = 3 and dN(O IV)/dz = 11. If we adopt the unblocked redshift path of H I instead of O VI, these numbers do not change significantly. While this sample still suffers from small-number statistics, it suggests dN(O III)/dz < dN(O IV)/dz ≈ dN(O VI)/dz in the IGM along the HE 0226-4110 sight line.

Tripp, T. M., Bowen, D. V., Sembach, K. R., Jenkins, E. B., Savage, B. D., & Richter, P. 2004, "Astrophysics in the Far Ultraviolet: Five Years of Discovery with FUSE" [astro-ph/0411151]
Tripp, T. M., Giroux, M. L., Stocke, J. T., Tumlinson, J., & Oegerle, W. R. 2001, ApJ, 563, 724
Tripp, T. M., Jenkins, E. B., Bowen, D. V., Prochaska, J. X., Aracil, B., & Ganguly, R. 2005, ApJ, 619, 714
Tripp, T. M., Lu, L., & Savage, B. D. 1998, ApJ, 508, 200
Tripp, T. M., & Savage, B. D. 2000, ApJ, 542, 42
Tripp, T. M., Savage, B. D., & Jenkins, E. B. 2000, ApJ, 534, L1
Tripp, T. M., et al. 2002, ApJ, 575, 697
Verner, D. A., Verner, E. M., & Ferland, G. J. 1996, Atomic Data Nucl. Data Tables, 64, 1
Wakker, B. P. 2006, ApJ, in press [astro-ph/0512444]
Wakker, B. P., et al. 2003, ApJS, 146, 1
Weinberg, D. H., Miralda-Escude, J., Hernquist, L., & Katz, N. 1997, ApJ, 490, 564
Williger, G. M., Heap, S. R., Weymann, R. J., Dave, R., Ellingson, E., Carswell, R. F., Tripp, T. M., & Jenkins, E. B. 2005, ApJ, in press [astro-ph/0505586]
Young, P. R. 2005, A&A, 444, L45

Fig. 1.— FUSE and STIS FUV spectra of HE 0226-4110. The data have been binned into 0.1 Å samples; for clarity only the FUSE night data are presented in this illustration. The spike in the middle of the damped Galactic Lyα profile is the geocoronal Lyα emission from the Earth's atmosphere.

[Figure caption fragment] FUSE spectra of HE 0226-4110 as a function of the heliocentric wavelength between 917 and 1183 Å. The data are binned by 4 pixels, providing about three samples per 20 km s⁻¹ resolution element.

Fig. 4.— H I intervening absorbers detected only in Lyα. The continuum-normalized fluxes are plotted against the rest-frame velocity. The redshifts of each absorber are indicated. Profile fits with one component to each intervening Lyα are overplotted as solid lines (see text for details). Other lines present are identified. Note that the systems at z = 0.08938 and z = 0.08950 are shown in the panel for z = 0.08901, and that the fit for the system at z = 0.08950 was realized by first removing the blend H I λ1025 at z = 0.29134.

Fig. 5.— H I intervening absorbers detected in Lyα and β. The continuum-normalized fluxes are plotted against the rest-frame velocity. The redshifts of each absorber are indicated. Profile fits with one component to each intervening absorber are overplotted as solid lines. For the system at z = 0.38420, a two-component fit is shown as the dashed line (see note in Table 3).

Fig. 6.— H I intervening absorbers detected in Lyα, β, and γ. The continuum-normalized fluxes are plotted against the rest-frame velocity. The redshifts of each absorber are indicated.
Profile fits with one component to each intervening absorber are overplotted as solid lines. Other lines present are identified.

Fig. 7.— H I intervening absorber at z = 0.06083 detected in Lyα, β, γ, and possibly δ. The continuum-normalized fluxes are plotted against the rest-frame velocity.

Fig. 9.— Metal-line absorber at z = 0.16339. Profile fits with one component are overplotted as solid lines. Other lines present are identified. C III λ977 is possibly present. If the H I line is thermally broadened and CIE applies, Z/Z⊙ ≲ 0.01 in this system.

Fig. 10.— Metal-line absorber at z = 0.34034. Profile fits with one component are overplotted as solid lines. Other lines present are identified.

Fig. 11.— Metal-line absorber at z = 0.35523. Profile fits with one component are overplotted as solid lines. Other lines present are identified. The bold spectra in the panels of O III and O IV are LiF 1A and LiF 2A, respectively; the overplotted spectra in those panels are LiF 2B and LiF 1B, respectively.

Fig. 13.— Predicted column densities for the photoionization model of the z = 0.01746 absorber for a solar metallicity and log N(H I) = 13.26 dex.

Fig. 14.— Predicted column densities for the photoionization model of the z = 0.34034 system for 1/2 solar metallicity. Axes are defined in Fig. 13. Observed column densities with 1σ uncertainties are shown as thick solid lines along the thin model curves for each ion.

Fig. 15.— Predicted column densities for the photoionization model of the z = 0.35523 system. Axes are defined in Fig. 13. Observed column densities with 1σ uncertainties are shown as thick solid lines along the thin model curves for each ion.

Fig. 16.— Broad-component fit to the Lyα line at z = 0.01746. The solid line is a two-component fit with b = 58 km s⁻¹ and log N(H I) = 12.71 for the broad component. The dotted line shows the broad-component profile only.

Fig. 17.— [caption not recovered]

Fig. 18.— Column density vs. the Doppler parameter for the intervening O VI absorption systems observed toward HE 0226-4110 (circles, this paper), PKS 0405-123 (stars, Williger et al. 2005), and PG 1116+215 (triangles, Sembach et al. 2004).

Fig. 19.— (a) Logarithmic column densities of Ne VIII against O VI observed toward HE 0226-4110. Upside-down triangles represent data that are 3σ upper limits for N(O VI) and N(Ne VIII). The solid straight line corresponds to a 1:1 relationship. (b)

Table 4 is a summary table that presents the redshift, the H I column density and Doppler parameter, and the column densities of C III, O III, O IV, O VI, and Ne VIII. The derived H I parameters […]. Since no unidentified features with W ≥ 3σ lie at the wavelengths where O I and O II associated with H I are expected, we can confidently say that there are no intervening systems with O I and O II lines in the spectrum of HE 0226-4110.

Richter et al. (2004) reported four definitive O VI systems and detections of two O III and O IV systems (one of them not associated with O VI) toward PG 1259+593. Adopting the same procedure as above, toward PG 1259+593 we find [dN(O III)/dz = 8] < [dN(O IV)/dz = 10] ≈ [dN(O VI)/dz = 11]. There are five reported C III systems against six O VI systems along the sight line toward PKS 0405-123 (Prochaska et al. 2004), implying dN(C III)/dz ≲ dN(O VI)/dz. Along the path to PKS 0405-123, there are one O III system and four detected O IV systems, implying [dN(O III)/dz = 4] < [dN(O IV)/dz = 17] ≈ [dN(O VI)/dz = 16]. Combining these sight lines yields the same ordering.

The number of intervening O VI systems with W_λ ≳ 50 mÅ per unit redshift is dN(O VI)/dz ≈ 11 along the HE 0226-4110 line of sight. For four of the five O VI systems, other ions (such as C III, C IV, O III, O IV) are detected. Oxygen is particularly useful for probing the IGM because a wide range of ion states is accessible at FUV and EUV rest-frame wavelengths, including O I, O II, O III, O IV, O V, and O VI. Our results imply that dN(O II)/dz ≪ dN(O III)/dz ≤ dN(O IV)/dz ≈ dN(O VI)/dz.
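The redshift densities quoted here are simple counting statistics; written out explicitly (this arithmetic is ours), for the five O VI systems over the unblocked redshift path Δz = 0.450:

```latex
\frac{dN({\rm O\,VI})}{dz} = \frac{N_{\rm sys}}{\Delta z} = \frac{5}{0.450} \approx 11,
\qquad
\frac{4}{0.450} \approx 9 \ \ ({\rm excluding\ the\ tentative\ } z = 0.42660 {\rm\ system}),
```

which reproduces the dN(O VI)/dz values of 11 and 9 quoted in the text.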
The low-redshift IGM is therefore more highly ionized than weakly ionized; this comparison is meaningful since the transitions of the different oxygen ions have similar values of λf.

4. Following Prochaska et al. (2004), we confirm with a larger sample that the photoionized metal-line systems have a decreasing ionization parameter with increasing H I column density. The O VI systems that can be explained by both photoionization and collisional ionization follow this relationship, which implies that the systems with photoionization/CIE degeneracy may likely be photoionized.

5. Combining our results with those toward three other sight lines, we show that there is a general increase of N(O VI) with increasing b(O VI), but with a large scatter. The observed distribution of N(O VI) and b(O VI) can be reproduced by the cooling-flow models computed by Heckman et al. (2002), but these models fail to reproduce the observed ionic ratios.

6. Combining results for several QSOs, we find that the photoionized O VI systems in the low-redshift IGM have a median abundance of 0.3 solar, about 3 times the metallicity typically used to derive Ω_b in the warm-hot IGM gas.

7. Along the path to HE 0226-4110, there is one detection of Ne VIII, at z = 0.20701, which implies a gas temperature of 5.4 × 10⁵ K. We did not detect other Ne VIII systems, although our sensitivity should have allowed the detection of Ne VIII in O VI systems at T ∼ (0.6−1.3) × 10⁶ K if CIE applies and Ne/O is solar. Since the bulk of the warm-hot ionized medium (WHIM) is believed to be at temperatures T > 10⁶ K, the hot part of the WHIM remains to be discovered with FUV-EUV metal-line transitions.

8. This work shows that the origins of the O VI absorption (and of the metal-line systems in general) are complex and cause several uncertainties in attempts to estimate Ω_b from O VI. In particular, the O VI […]

Table 1. STIS Observations of HE 0226-4110
Table 2. FUSE Observations of HE 0226-4110
Note. — (a) The first number is the total exposure time.
The night-only exposure time is in parentheses.

ID         Date        Exp. Time (ks)
O6E107010  2002-12-25  2.1
O6E107020  2002-12-26  3.0
O6E107030  2002-12-26  3.0
O6E108010  2002-12-26  2.1
O6E108020  2002-12-26  3.0
O6E108030  2002-12-26  3.0
O6E109010  2002-12-26  2.1
O6E109020  2002-12-27  3.0
O6E109030  2002-12-27  3.0
O6E110010  2002-12-29  2.1
O6E110020  2002-12-29  3.0
O6E110030  2002-12-29  3.0
O6E111010  2002-12-31  2.1
O6E111020  2002-12-31  3.0
O6E111030  2003-01-01  3.0
O6E111040  2003-01-01  3.0
Total = 43.5

Table 3. Properties of the Intervening Systems toward HE 0226-4110 (surviving continuation rows; columns: Species, λ_obs, W_λ, v, b, log N, Method, [−v, +v], Instrument)

[z label truncated; continuation of the preceding system]
H I λ1215       1700.455  153.0 ± 45.0  −18.3 ± 17.5  84.9 ± 27.6  13.58 ± 0.11  AOD  [−125, 125]  STIS
O III λ832      1154.740  < 21.6   ···   ···   < 13.52  3σ  [−41, 41]  LiF 2A
O IV λ787       1092.051  < 18.9   ···   ···   < 13.49  3σ  [−41, 41]  LiF 2A
O VI λ1031      1430.621  < 29.3   ···   ···   < 13.37  3σ  [−41, 41]  STIS
Ne VIII λ770    1068.064  < 17.5   ···   ···   < 13.51  3σ  [−41, 41]  LiF 1A

z = 0.39641
H I λ1215       1697.574  179.1 ± 40.1  +0.2 ± 10.3  57.2 ± 9.3   13.69 ± 0.10  AOD  [−90, 90]  STIS
H I λλ1215,1025   ···       ···         +0.0 ± 11.8  62.8 ± 22.7  13.59 ± 0.10  FIT  ···        STIS
C III λ977      1364.321  < 36.3   ···   ···   < 12.75  3σ  [−50, 50]  STIS
O III λ832      1163.110  < 22.0   ···   ···   < 13.53  3σ  [−50, 50]  LiF 2A
O IV λ787       1099.968  < 25.9   ···   ···   < 13.63  3σ  [−50, 50]  LiF 2A
O VI λ1031      1440.992  < 33.7   ···   ···   < 13.43  3σ  [−50, 50]  STIS
Ne VIII λ770    1075.807  < 26.2   ···   ···   < 13.69  3σ  [−50, 50]  LiF 1A

z = 0.39890 (a)
H I (fit)         ···       ···         +0.0 ± 46.6  151.7 :      13.50 ± 0.16  FIT  ···        STIS
C III λ977      1366.636  < 28.4   ···   ···   < 12.65  3σ  [−50, 50]  STIS
O III λ832      1165.085  < 28.0   ···   ···   < 13.63  3σ  [−50, 50]  LiF 2A
O IV λ787       1101.834  < 20.3   ···   ···   < 13.52  3σ  [−50, 50]  LiF 2A
O VI λ1031      1443.438  < 35.9   ···   ···   < 13.46  3σ  [−50, 50]  STIS
Ne VIII λ770    1077.633  < 22.3   ···   ···   < 13.62  3σ  [−50, 50]  LiF 2A

z = 0.40034 (a)
H I λ1215       1702.351  138.7 ± 32.5  −1.0 ± 10.1  59.9 ± 8.4   13.53 ± 0.11  AOD  [−86, 86]  STIS
H I λλ1215,1025   ···       ···         +0.0 ± 12.4  60.7 ± 26.1  13.39 ± 0.11  FIT  ···        STIS
C III λ977      1368.160  < 30.1   ···   ···   < 12.67  3σ  [−50, 50]  STIS
O III λ832      1166.384  < 22.7   ···   ···   < 13.54  3σ  [−50, 50]  LiF 2A
O IV λ787       1103.063  < 20.2   ···   ···   < 13.52  3σ  [−50, 50]  LiF 2A
O VI λ1031      1445.047  < 35.1   ···   ···   < 13.45  3σ  [−50, 50]  STIS
Ne VIII λ770    1078.835  < 25.8   ···   ···   < 13.68  3σ  [−50, 50]  LiF 2A

z = 0.40274 (a)
H I λ1215       1705.245  384.7 ± 35.2   ···   ···   ···          AOD  [−82, 82]  STIS
H I λ1025       1438.801  93.1 ± 14.6   +2.3 ± 5.9   44.1 ± 6.7   14.19 ± 0.06  AOD  [−82, 82]  STIS
H I λλ1215,1025,972,949  ···  ···       +0.0 ± 5.4   45.7 ± 4.2   14.13 ± 0.04  FIT  ···        STIS
C III λ977      1370.486  < 29.3   ···   ···   < 12.66  3σ  [−50, 50]  STIS
O III λ832      1168.366  < 33.7   ···   ···   < 13.71  3σ  [−50, 50]  LiF 2A
O IV λ787       1104.938  < 22.2   ···   ···   < 13.56  3σ  [−50, 50]  LiF 2A
O VI λ1031      1447.503  < 25.1   ···   ···   < 13.30  3σ  [−50, 50]  STIS
Ne VIII λ770    1080.668  < 26.3   ···   ···   < 13.69  3σ  [−50, 50]  LiF 1A

Table 4. IGM Absorbers toward HE 0226-4110

z           log N(H I)     b(H I)        log N(C III)  log N(O III)  log N(O IV)   log N(O VI)    log N(Ne VIII)
0.01746     13.22 ± 0.06   17.9 ± 4.3    < 13.01       n.a.          n.a.          13.60 ± 0.10   n.a.
0.02679     13.22 ± 0.08   41.6 ± 11.0   < 12.78       n.a.          n.a.          < 13.34        n.a.
0.04121     12.82 ± 0.14   23.6 ± 18.6   < 12.43       n.a.          n.a.          < 13.50        n.a.
0.04535     12.71 ± 0.13   16.2 ± 12.8   < 12.44       n.a.          n.a.          < 13.26        n.a.
0.04609     13.66 ± 0.03   25.0 ± 2.1    < 12.64       n.a.          n.a.          < 13.37        n.a.
0.06015     13.19 ± 0.06   35.5 ± 7.6    < 12.61       n.a.          n.a.          < 13.30        n.a.
0.06083     14.65 ± 0.02   44.5 ± 1.0    ···           n.a.          n.a.          < 13.33        n.a.
0.07023     13.81 ± 0.11   26.0 ± 12.4   < 12.41       n.a.          n.a.          < 13.24        n.a.
0.08375     13.67 ± 0.05 : 29.6 ± 4.5 :  < 12.50       n.a.          n.a.          < 13.31        n.a.
0.08901     13.33 ± 0.05   23.8 ± 3.7    ···           n.a.          n.a.          < 13.33        n.a.
0.08938     12.59 ± 0.22   8.2 :         < 12.30       n.a.          n.a.          < 13.09        n.a.
0.08950     12.72 ± 0.20 : 22.0 :        < 12.14       n.a.          n.a.          < 13.22        n.a.
0.09059     13.71 ± 0.03   28.3 ± 2.0    < 12.60       n.a.          n.a.          ···            n.a.
0.09220     12.94 ± 0.11   40.2 ± 18.0   < 12.70       n.a.          n.a.          < 13.29        n.a.
0.10668     13.09 ± 0.08   32.7 ± 9.2    < 12.52       ···           n.a.          < 13.53        n.a.
0.11514     12.90 ± 0.09   10.4 ± 4.1    < 12.44       ···           n.a.          < 13.15        n.a.
0.11680     13.27 ± 0.05   23.7 ± 3.9    < 12.57       ···           n.a.          < 13.37        n.a.
0.11733     12.64 ± 0.15   15.0 :        < 12.50       ···           n.a.          < 13.22        n.a.
0.12589     13.01 ± 0.09   29.2 ± 10.1   < 12.47       ···           n.a.          < 13.25        n.a.
0.13832     13.19 ± 0.06   25.9 ± 5.3    ···           < 13.72       n.a.          < 13.32        n.a.
0.15175     13.42 ± 0.05   48.6 ± 6.7    < 12.63       < 13.66       n.a.          < 13.66        n.a.
0.15549     13.13 ± 0.08   34.7 ± 9.8    < 12.51       < 13.68       n.a.          < 13.60        n.a.
0.16237     13.04 ± 0.08   29.7 ± 8.6    < 12.55       < 13.58       n.a.          < 13.67        n.a.
0.16339     14.36 ± 0.04   46.3 ± 1.9    12.51 ± 0.14  ···           n.a.          < 13.59        n.a.
0.16971     13.35 ± 0.05   25.3 ± 3.9    < 12.52       < 13.44       n.a.          < 14.16        n.a.
0.18619     13.26 ± 0.08   53.9 ± 16.2   < 12.60       < 13.82       < 13.77       < 13.47        < 14.08
0.18811     13.47 ± 0.05   22.4 ± 3.3    < 12.58       < 13.90       < 13.80       < 13.32        < 14.03
0.18891     13.34 ± 0.07   22.2 ± 4.0    < 12.46       < 13.77       < 13.64       < 13.25        < 13.94
0.19374     13.20 ± 0.06 : 28.7 ± 6.0 :  < 12.57       < 13.67       < 14.06       < 13.40        < 14.50
0.19453     12.89 ± 0.12   26.1 ± 14.0   < 12.55       < 13.86       < 13.60       < 13.28        < 13.84
0.19860     14.18 ± 0.04   37.0 ± 2.0    < 12.71       < 13.70       < 13.75       < 13.70        < 14.02
0.20055     13.38 ± 0.05   38.9 ± 6.4    < 12.73       < 13.77       < 13.75       < 13.38        < 14.19
0.20701 (a) 15.28 ± 0.04   39.9 ± 1.8    14.10 ± 0.23  14.33 ± 0.06  > 14.50 :     14.37 ± 0.03   13.89 ± 0.11
0.22005     14.40 ± 0.04   27.7 ± 1.1    < 12.91       < 13.43       < 13.77       < 13.26        < 14.12
0.22099 !   12.99 ± 0.12   34.1 ± 18.1   < 12.76       < 13.29       < 13.65       < 13.33        < 13.74
0.23009 !   13.69 ± 0.04   67.9 ± 7.5    < 12.84       < 13.58       < 13.81       < 13.33        < 13.81
0.23964 !   13.13 ± 0.08   28.8 ± 8.8    < 12.87       ···           ···           < 13.34        < 13.70
0.24514     14.20 ± 0.03   34.5 ± 1.6    ···           ···           < 13.75       < 13.39        < 13.71
0.25099     13.17 ± 0.08   37.9 ± 11.5   < 12.70       < 13.57       < 13.72       < 13.22        < 14.17
0.27147 (b) 13.85 ± 0.07   25.7 ± 4.2    < 12.52       < 13.40       < 13.65       < 13.22        < 13.73
0.27164 (b) 13.33 ± 0.28   26.2 :        < 12.52       < 13.40       < 13.65       < 13.22        < 13.73
0.27175 (b) 12.88 ± 0.38   11.2 :        < 12.69       < 13.53       < 13.81       < 13.32        < 13.89
0.27956     13.22 ± 0.14   36.3 ± 24.6   < 12.71       < 13.43       < 13.60       < 13.24        < 13.73
0.28041     13.03 ± 0.11   13.9 ± 6.7    < 12.53       < 13.49       ···           < 13.21        < 13.79
0.29134     13.53 ± 0.07   27.0 ± 6.2    < 12.69       < 13.59       < 13.39       < 13.26        < 13.87
0.29213     13.19 ± 0.12   33.4 ± 17.8   < 12.68       < 13.67       < 13.42       < 13.37        < 13.90
0.30930     14.26 ± 0.03   43.8 ± 2.3    < 12.57       < 13.56       < 13.60       < 13.30        < 13.68
0.34034     13.68 ± 0.06   33.4 ± 4.9    12.66 ± 0.16  < 13.42       13.86 ± 0.12  13.89 ± 0.04   < 13.34
0.35523     13.60 ± 0.07   27.1 ± 6.8    < 12.52       < 13.43       13.64 ± 0.07  13.68 ± 0.05   < 13.55
0.37281     13.16 ± 0.12   25.9 ± 13.3   < 12.59       < 13.40       < 13.50       < 13.35        < 13.54
0.38420     13.91 ± 0.04   62.0 ± 7.1    < 12.58       < 13.52       < 13.55       < 13.34        < 13.59
0.38636     13.36 ± 0.09   38.1 ± 12.7   < 12.57       < 13.52       < 13.49       < 13.37        < 13.51
0.39641     13.59 ± 0.10   62.8 ± 22.7   < 12.75       < 13.53       < 13.63       < 13.43        < 13.69
0.39890     13.50 ± 0.16   151.7 :       < 12.65       < 13.63       < 13.52       < 13.46        < 13.62
0.40034     13.39 ± 0.11   60.7 ± 26.1   < 12.67       < 13.54       < 13.52       < 13.45        < 13.68
0.40274     14.13 ± 0.04   45.7 ± 4.2    < 12.66       < 13.71       < 13.56       < 13.30        < 13.69
0.42660 (c),! < 13.55      ···           ···           < 13.79       < 13.54       14.03 ± 0.04 : < 13.57

Table 5. Summary of Physical Properties of the O VI Systems toward HE 0226-4110

z        log N(H I)  b(H I)     log N(O VI)  Ionization (a)  [O/H] (b)  log N(H) (b)  log U (b)  L (b)
         (dex)       (km s⁻¹)   (dex)                        (dex)      (dex)         (dex)      (kpc)
0.01746  13.22       19.8       13.80        Photo           0.0        17.50         −1.10      17
0.20701  15.28 (c)   39.9 (c)   14.37        Coll            −0.5       19.92         ···        < 10³
0.34034  13.47       31.2       13.86        Photo/Coll      −0.3       18.22         −1.00      40
0.35523  13.60       27.1       13.68        Photo/Coll      −0.55      18.16         −0.99      40
0.42660 !  < 13.55   [remaining entries truncated in source]

Table 6. Summary of Photoionization Parameters and Metallicity in the Low-z IGM for the O VI Systems

Sight line      z        log N(H I)  log Z/Z⊙  log U
HE 0226-4110    0.01746  13.22       +0.0      −1.1
                0.34034  13.47       −0.3      −1.0
                0.35523  13.60       −0.6      −1.0
PG 0953+415     0.06807  14.39       −0.4      −1.4
PG 1116+215     0.13847  16.20       −0.5      −2.5
PG 1259+593     0.21949  15.25       −1.3      −1.5
                0.29236  14.50       −0.5      −1.7
PKS 0405-123    0.09180  14.52       > −1.4    > −1.5
                0.09658  14.65       −1.5      −1.2
                0.16710  16.45       −0.3      −2.9
                0.36080  15.12       > −0.7    −2.0
                0.36332  13.43       0.0       −1.4
                0.49510  14.39       > −0.3    −1.3

Table 7. Comparison of Observed Logarithmic Column Density Ratios with those from Cooling Flow Models

                 z        O IV/O VI  Ne VIII/O VI  S VI/O VI
Isobaric model   ···      −0.6       +0.1          −2.8
Isochoric model  ···      0.0        −0.3          −2.4
HE 0226-4110     0.20701  > 0.11     −0.48         −1.59
                 0.34034  −0.03      < −0.55       < −1.13
                 0.35523  −0.04      < −0.13       < −0.88
                 0.42660  < −0.49    < −0.46       < −1.18
PG 1259+593      0.21949  −0.20      < −0.10       ···

Table 8. EUV-FUV Oxygen Transitions

Ion    λ (Å)     log λf  z
(1)    (2)       (3)     (4)
O I    971.738   1.10    > 0
O I    988.773   1.65    > 0
O I    1039.230  0.97    > 0
O I    1302.169  1.82    > 0
O II   832.757   1.57    > 0.105
O II   833.329   1.87    > 0.104
O II   834.466   2.04    > 0.103
O III  702.332   1.98    > 0.310
O III  832.927   1.95    > 0.104
O IV   553.330   1.79    > 0.663
O IV   554.075   2.09    > 0.660
O IV   608.398   1.61    > 0.339
O IV   787.711   1.94    > 0.169
O V    629.730   2.51    > 0.463
O VI   1031.926  2.14    > 0
O VI   1037.617  1.83    > 0

Note. — Column 4 shows the lowest redshift at which a given transition can be observed in the FUV. We only list the strongest FUV O I transitions since they are rarely detected in the IGM. Atomic parameters can be found in Morton (2003) (FUV transitions) and Verner et al. (1996) (EUV transitions).

Fig. 4.— Continued.

Notes on individual systems (Table 3):
[z label truncated]: The equivalent width of the contaminating H₂ …(3) 5-0 line is estimated to be 22.4 mÅ using our H₂ model constructed from the observed H₂ absorption lines, and it was removed from the absorption line. O VI λ1037 is blended with O IV λ787 (z = 0.34034); O VI λ1037 was not used for the profile fitting.
z = 0.02679: H I λ1025 is contaminated by H₂ LP(2) 4-0 at 1053.284 Å and was not used for the profile fitting.
z = 0.06083: H I λ972 is blended with ISM O VI λ1031, while C III λ977 is blended with ISM C II λ1036.
z = 0.08375: H I λ1215 is partially blended with ISM Ni II λ1317, producing uncertain measurements. H I λ1025 is blended with IGM H I λ920 (z = 0.20701) on the blue side and with S IV λ744 from the associated system at z = 0.49253. The red side of H I λ972 is blended with H₂ LR(3) 4-0 λ1053.976.
z = 0.08901: H I λ1025 is blended with S IV λ748 from the associated system at z = 0.49253, while C III λ977 is blended with ISM Fe II λ1063.
z = 0.08950: H I λ1215 is blended with IGM H I λ1025 (z = 0.29134). We first estimated the strength of H I λ1025 at z = 0.29134 using our fit to H I λ1215 at that redshift, and then removed H I λ1025 from the profile. This process can introduce errors not accounted for in our measurements, which is why we add a colon to acknowledge that the result is uncertain.
[z label truncated]: Using our H₂ model, we estimated the equivalent width of this H₂ feature to be 5.7 mÅ, which was removed from the absorption feature to measure O IV λ787.
z = 0.38420: The H I profiles appear asymmetric and may have two components. The reduced χ² does not improve significantly between a one-component fit (reduced χ² = 0.89) and a two-component fit (reduced χ² = 0.87).
The results for the two-component fit are: v₁ = −17.8 ± 15.9 km s⁻¹, b₁ = 75.2 ± 18.5 km s⁻¹, log N₁ = 13.74 ± 0.13; v₂ = +18.0 ± 6.0 km s⁻¹, b₂ = 25.7 ± 19.3 km s⁻¹, log N₂ = 13.48 ± 0.22. The fit is shown in Fig. 5. H I λ1025 is affected by an emission peak for v > +50 km s⁻¹. O III λ832 is not detected, and the 3σ limit is determined in the range [−10, 85] km s⁻¹ because ISM P II λ1152 lies next to it.
z = 0.38636: H I λ1025 is lost in IGM H I λ1215 (z = 0.16971) (see text for more details).
z = 0.39890: An emission feature is present near C III λ977 at −46 km s⁻¹.
z = 0.40034: At −55 km s⁻¹ from O III λ832, there is H I λ972 (z = 0.19860). At −23 km s⁻¹ from O VI λ1031, there is H I λ1215 (z = 0.18811).
z = 0.40274: We use the night-only data to remove the airglow contamination in order to estimate the limit on O III λ832.
z = 0.42660: C III λ977 is lost in Galactic Si IV λ1393. Because the O VI λ1031 absorption line is blended with an emission feature, the error estimates are uncertain.

Abgrall, H., Roueff, E., Launay, F., Roncin, J. Y., & Subtil, J. L. 1993a, A&AS, 101, 273
Abgrall, H., Roueff, E., Launay, F., Roncin, J. Y., & Subtil, J. L. 1993b, A&AS, 101, 323
Allende Prieto, C., Lambert, D. L., & Asplund, M. 2002, ApJ, 573, L137
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Kiselman, D. 2004, A&A, 417, 751
Bahcall, J. N., Basu, S., & Serenelli, A. M. 2005, ApJ, in press
Bergeron, J., & Herbert-Fort, S. 2005, IAU 199 Conference Proceedings, "Probing Galaxies Through Quasar Absorption Lines" [astro-ph/0506700]
Bryan, G. L., Machacek, M., Anninos, P., & Norman, M. L. 1999, ApJ, 517, 13
Burles, S., Nollett, K. M., & Turner, M. S. 2001, ApJ, 552, L1
Carswell, B., Schaye, J., & Kim, T. 2002, ApJ, 578, 43
Cen, R., & Ostriker, J. P. 1999, ApJ, 514, 1
Danforth, C. W., & Shull, J. M. 2005, ApJ, 624, 555
Davé, R., Hernquist, L., Katz, N., & Weinberg, D. H. 1999, ApJ, 511, 521
Davé, R., & Tripp, T. M. 2001, ApJ, 553, 528
Drake, J. J., & Testa, P. 2005, Nature, 436, 525
Fang, T., Bryan, G. L., & Canizares, C. R. 2002, ApJ, 564, 604
Ferland, G. J., Korista, K. T., Verner, D. A., Ferguson, J. W., Kingdon, J. B., & Verner, E. M. 1998, PASP, 110, 761
Fitzpatrick, E. L., & Spitzer, L. J. 1997, ApJ, 475, 623
Fox, A. J., Wakker, B. P., Savage, B. D., Tripp, T. M., Sembach, K. R., & Bland-Hawthorn, J. 2005, ApJ, 630, 332
Fukugita, M., Hogan, C. J., & Peebles, P. J. E. 1998, ApJ, 503, 518
Ganguly, R., Sembach, K. R., Tripp, T. M., Savage, B. D., & Wakker, B. P. 2005, ApJ, submitted
Grevesse, N., & Sauval, A. J. 1998, Space Science Reviews, 85, 161
Haardt, F., & Madau, P. 1996, ApJ, 461, 20
Heckman, T. M., Norman, C. A., Strickland, D. K., & Sembach, K. R. 2002, ApJ, 577, 691
Hernquist, L., Katz, N., Weinberg, D. H., & Miralda-Escudé, J. 1996, ApJ, 457, L51
Holweger, H. 2001, AIP Conf. Proc. 598: Joint SOHO/ACE Workshop "Solar and Galactic Composition", 598, 23
Jenkins, E. B., Bowen, D. V., Tripp, T. M., & Sembach, K. R. 2005, ApJ, 623, 767
Moos, H. W., et al. 2000, ApJ, 538, L1
Morton, D. C. 2003, ApJS, 149, 205
Nicastro, F., et al. 2005, Nature, 433, 495
Penton, S. V., Stocke, J. T., & Shull, J. M. 2004, ApJS, 152, 29
Prochaska, J. X., Chen, H., Howk, J. C., Weiner, B. J., & Mulchaey, J. 2004, ApJ, 617, 718
Proffitt, C., et al. 2000, STIS Instrument Handbook, v6.0 (Baltimore: STScI)
Rasmussen, A., Kahn, S. M., & Paerels, F. 2003, ASSL Vol. 281: The IGM/Galaxy Connection. The Distribution of Baryons at z=0, 109
Rauch, M., et al. 1997, ApJ, 489, 7
Richter, P., Savage, B. D., Sembach, K. R., & Tripp, T. M. 2005, A&A, in press [astro-ph/0509539]
Richter, P., Savage, B. D., Tripp, T. M., & Sembach, K. R. 2004, ApJS, 153, 165
Sahnow, D. J., et al. 2000, ApJ, 538, L7
Savage, B. D., Lehner, N., Wakker, B. P., Sembach, K. R., & Tripp, T. M. 2005, ApJ, 626, 776
Savage, B. D., & Sembach, K. R. 1991, ApJ, 379, 245
Savage, B. D., Sembach, K. R., Tripp, T. M., & Richter, P. 2002, ApJ, 564, 631
Savage, B. D., et al. 2003, ApJS, 146, 125
Sembach, K. R., & Savage, B. D. 1992, ApJS, 83, 147
Sembach, K. R., Tripp, T. M., Savage, B. D., & Richter, P. 2004, ApJS, 155, 351
Sembach, K. R., et al. 2003, ApJS, 146, 165
Spergel, D. N., et al. 2003, ApJS, 148, 175
Sutherland, R. S., & Dopita, M. A. 1993, ApJS, 88, 253

References. — HE 0226-4110: this work; PG 0953+415: Savage et al. 2002; PG 1116+215: Sembach et al. 2004; PG 1259+593: Richter et al. 2004; PKS 0405-123: Prochaska et al. 2004.

Fig. 8.— Metal-line absorber at z = 0.01746. Profile fits with one component are overplotted as solid lines. Other lines present are identified. O VI λ1031 is contaminated by H₂ LR(1) 4-0 λ1049.960; in the profile that is shown, this contaminating H₂ line was removed (see note in Table 3).
DOI: 10.1109/embc44109.2020.9175266
arXiv: 1906.03381
PDF: https://arxiv.org/pdf/1906.03381v1.pdf
S-ConvNet: A Shallow Convolutional Neural Network Architecture for Neuromuscular Activity Recognition Using Instantaneous High-Density Surface EMG Images

Md Rabiul Islam, Daniel Massicotte, Francois Nougarou, Philippe Massicotte, Wei-Ping Zhu
Dept. of Electrical and Computer Engineering, Université du Québec à Trois-Rivières, QC, Canada
Dept. of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada

Index Terms: Neuromuscular activity recognition, shallow convolutional neural networks, feature learning, HD-sEMG, gesture recognition, muscle-computer interface, deep neural networks

Abstract: The concept of neuromuscular activity recognition using instantaneous high-density surface electromyography (HD-sEMG) images opens up new avenues for the development of more fluid and natural muscle-computer interfaces. However, the existing approaches employ a very large deep convolutional neural network (ConvNet) architecture and complex training schemes for HD-sEMG image recognition, which requires the network to be pre-trained on very large-scale labeled training datasets and is therefore computationally very expensive. To overcome this problem, we propose the S-ConvNet and All-ConvNet models, a simple yet efficient framework for learning instantaneous HD-sEMG images from scratch for neuromuscular activity recognition.
Without using any pre-trained models, our proposed S-ConvNet and All-ConvNet achieve recognition accuracy that is very competitive with the more complex state of the art for neuromuscular activity recognition based on instantaneous HD-sEMG images, while using ≈ 12× less training data and far fewer learning parameters. The experimental results show that S-ConvNet and All-ConvNet are highly effective at learning discriminative features for instantaneous HD-sEMG image recognition, especially in data- and resource-constrained scenarios.

I. INTRODUCTION

Neuromuscular activity recognition has been a driving motivation for research because of its novel applications in real life. The major application domains are non-invasive control of active prostheses [1], wheelchairs [2] and exoskeletons [3], interaction methods for video games [4], and neuromuscular diagnosis [5]. Conventional approaches to neuromuscular activity recognition rely heavily on sparse multi-channel surface electromyography (sEMG) sensors and windowed descriptive and discriminatory sEMG features [6][7][8][9][10]. However, sparse multi-channel sEMG based methods are not well suited to real-world applications because of their sensitivity to electrode shift and positioning; a malfunction in any one channel requires retraining the entire system [11], [12]. To overcome this problem, high-density sEMG (HD-sEMG) based methods have been proposed in recent years [11][12][13]. HD-sEMG records myoelectric signals using two-dimensional (2D) electrode arrays that characterize the spatial distribution of myoelectric activity over the muscles within the electrode pick-up area [14], [15]. The collected HD-sEMG data are spatially correlated, which captures both temporal and spatial changes and makes the approach robust against channel malfunction compared with its sparse multi-channel counterparts [12].
However, existing HD-sEMG based neuromuscular activity recognition methods still depend on windowed sEMG, which requires finding an optimal window length; otherwise, classification accuracy and controller delay suffer, especially in assistive technology, physical rehabilitation, and human-computer interface applications [13]. To overcome this problem and develop more fluid and natural muscle-computer interfaces (MCIs), Geng et al. [13] and Islam et al. [16] recently explored the patterns inside instantaneous sEMG images spatially composed from HD-sEMG, enabling neuromuscular gesture recognition solely from the sEMG signals recorded at a single instant. In their approach, the instantaneous values of the HD-sEMG signals at each sampling instant were arranged in a 2D grid in accordance with the electrode positioning, and this 2D grid was then converted to a grayscale sEMG image. Using Histogram of Oriented Gradients (HOG) descriptors and a pairwise SVM classifier, [16] achieved recognition accuracy on an 8-gesture hand task competitive with the state-of-the-art method in an intra-subject test.

However, the state-of-the-art methods [13], [17] employ a DeepFace-like [18] very large deep convolutional neural network (ConvNet) architecture for sEMG image classification, which must be pre-trained on a very large-scale training dataset and is therefore computationally too expensive to be practical for real-world MCI applications. Their large deep ConvNet includes two locally connected layers (LCN) and three fully connected layers, among other convolutional (CNN) layers and a G-way fully connected layer. LCN layers differ from CNN layers in that they assign an independent filter weight to each local receptive field, whereas CNN layers adopt a filter-weight sharing strategy [19].
More explicitly, under the CNN weight-sharing strategy, the feature map of a local receptive field for a given sEMG image is obtained by convolving each patch with a shared kernel, i.e. $f_i = \theta \cdot x_i$, where $i$ stands for the position of the patch $x_i$ in the sEMG image. CNN layers are thus able to learn translation/location-invariant features by adopting local connectivity and weight sharing. In contrast, the feature map of a local receptive field in an LCN layer is obtained by convolving each patch with an independent kernel, i.e. $f_i = \theta_i \cdot x_i$. Because of this unshared-weight strategy, the LCN layer fails to model the relations between parameters at different locations. Moreover, the number of learning parameters increases considerably from $n$ to $m \times n$, where $m$ is the number of patches and $n$ is the number of kernels, and $m \times n \gg n$. As a result, a very large-scale labeled training dataset is required to train the LCN layer [18], [19]. The LCN layer may still be useful in applications where the precise location of a feature depends on the class label, but at the huge cost of collecting a large-scale labeled dataset (≈ 4.4 million images, as suggested in [18]) and the computational burden of introducing a more sophisticated alignment algorithm into the network structure; the latter part of this trade-off has not been discussed in the existing methods for instantaneous HD-sEMG image recognition.

Considering the above, we must investigate a very basic research question while designing a new CNN-based architecture for instantaneous HD-sEMG image classification: (i) do we expect the devised model to produce a location/translation-invariant feature representation, or (ii) do we need a location-dependent feature representation?
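The parameter-count argument above can be made concrete with a short sketch (the layer sizes are our own illustrative choices, not the paper's; biases are ignored):

```python
# Sketch: parameter counts for a shared-weight convolutional layer vs. a
# locally connected (LCN) layer on a 16x8 input.
def conv_params(kernel_h, kernel_w, in_ch, out_ch):
    """Shared weights: one kernel set reused at every spatial position."""
    return kernel_h * kernel_w * in_ch * out_ch

def lcn_params(out_h, out_w, kernel_h, kernel_w, in_ch, out_ch):
    """Unshared weights: an independent kernel set per output position."""
    return out_h * out_w * conv_params(kernel_h, kernel_w, in_ch, out_ch)

# 3x3 filters, 1 input channel, 64 output channels on a 16x8 feature map
# ('same' padding, stride 1 assumed), so there are 16*8 = 128 patches.
shared = conv_params(3, 3, 1, 64)          # 576 parameters
unshared = lcn_params(16, 8, 3, 3, 1, 64)  # 128x more: 73728 parameters
print(shared, unshared, unshared // shared)
```

The m = 128 patches multiply the parameter count exactly as the text describes.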
If the answer is YES to the first question, then we can omit the LCN layer when designing a CNN-based architecture for instantaneous HD-sEMG image classification, and hence significantly reduce the burden of gathering a large-scale labeled training dataset as well as the requirement for a robust alignment algorithm. Apart from that, the network architectures employed by the existing approaches rely heavily on pre-training with large-scale HD-sEMG training datasets (≈ 0.72 million images). The conventional paradigm in the literature uses pre-trained models when the source task A is different from the target task B and when there are not enough target data available to make training on the target task B feasible on its own [20]. However, in the existing approaches for instantaneous HD-sEMG image recognition (e.g., [13], [17]), the source task A and the target task B are the same, and pre-training on large-scale HD-sEMG training datasets has been performed under the assumption that it prevents overfitting while re-training on the data available for the target task B, i.e., the intra-subject test. It is therefore not surprising that high target-task accuracy is achieved with this highly resource-intensive and fine-tuned network architecture. Nevertheless, this long-standing conventional wisdom about pre-training has recently been challenged by He et al. [20], who showed that pre-training does not necessarily reduce overfitting or improve final target-task accuracy.

Moreover, there are other critical limitations of using pre-trained networks for instantaneous HD-sEMG image classification: (i) Constrained structure design space: pre-trained networks are very deep and large and are trained on large-scale HD-sEMG datasets, and therefore contain a massive number of parameters. Hence, there is little flexibility to control or adjust the network structure (even for small changes) when directly adopting a pre-trained network for the target task.
The requirements for computing resources and large-scale pre-training datasets are also bound up with the large network structures. (ii) Domain mismatch: sEMG signals are highly subject-specific, and their distributions vary considerably even between recording sessions of the same subject within the same experimental setup [17]. This problem becomes more challenging when the learned model is used to recognize muscular activities in a new recording session. Fine-tuning the pre-trained model can reduce the gap caused by deformations in a new recording session of the target task, but it remains a serious problem when there is a large mismatch between the source task and the target task [21]. (iii) Learning bias: the distributions and the loss functions of the source task and the target task may differ significantly, leading to different optimization spaces. Learning may therefore be biased towards a local minimum that is not optimal for the target task [22].

To overcome these problems, our work is motivated by the following research question: is it possible to train the neuromuscular activity recognition model from scratch? To achieve this goal, we propose shallow convolutional neural network (S-ConvNet) architectures, a simple yet effective framework that can learn neuromuscular activity from scratch using only the makeshift HD-sEMG database available for the target task. S-ConvNet is reasonably flexible and can be tailored to various network structures for different computing platforms such as desktops, servers, mobile devices, and even embedded electronics. Despite being simple and flexible, S-ConvNet maintains competitive final target-task accuracy on par with very complex state-of-the-art methods while reducing the learning parameters to a large extent.
For instantaneous sEMG image based neuromuscular activity recognition, the challenge remains open because very limited research has been done on it. We present S-ConvNet; to the best of our knowledge, this is the first framework that can train an instantaneous HD-sEMG based neuromuscular activity recognition model from scratch with performance competitive with very complex state-of-the-art methods. Sections II and III present the proposed framework and the model description and design principles for S-ConvNet, respectively. Section IV presents the All-ConvNet. Section V provides experimental results and discusses the performance of the proposed S-ConvNet and All-ConvNet models for instantaneous HD-sEMG based neuromuscular activity recognition.

II. THE PROPOSED FRAMEWORK

The proposed framework for neuromuscular activity recognition using instantaneous HD-sEMG images includes three major computational components: (i) pre-processing and sEMG image generation, (ii) architectural design of the S-ConvNet model, and (iii) classification. A schematic diagram of the proposed framework is shown in Fig. 1. First, power-line interference was removed from the acquired HD-sEMG signals with a band-stop filter between 45 and 55 Hz (2nd-order Butterworth). Then, the HD-sEMG signals at each sampling instant were arranged in a 2D grid according to the electrode positioning. This grid was further transformed into an instantaneous sEMG image by linearly mapping the sEMG signal values to color intensity, from [−2.5 mV, 2.5 mV] to [0, 255]. Thus, an instantaneous grayscale sEMG image of size 16 × 8 was formed. Secondly, we devised the different S-ConvNet models described in Section III.
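The image-generation step above can be sketched as follows (function name and array layout are our assumptions; the 16 × 8 grid of 128 channel values and the [−2.5 mV, 2.5 mV] range follow the text):

```python
import numpy as np

# Sketch: turn one sampling instant of 128-channel HD-sEMG into a 16x8
# grayscale image, linearly mapping [-2.5 mV, 2.5 mV] to [0, 255].
def to_semg_image(sample_mv, rows=16, cols=8, vmin=-2.5, vmax=2.5):
    grid = np.asarray(sample_mv, dtype=float).reshape(rows, cols)
    scaled = (grid - vmin) / (vmax - vmin) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

img = to_semg_image(np.zeros(128))  # 0 mV maps to mid-gray
print(img.shape, img[0, 0])         # (16, 8) 127
```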
Finally, given instantaneous HD-sEMG images and their corresponding labels, our devised S-ConvNet model is trained offline to predict which muscular activity an instantaneous HD-sEMG image belongs to. This trained S-ConvNet model is then used online to recognize different neuromuscular activities from instantaneous HD-sEMG images.

The proposed S-ConvNet network architectures differ from existing approaches for HD-sEMG image recognition in several key aspects. Firstly, our S-ConvNet models are trained from random initialization, i.e., from scratch without any pre-training. Pre-training in the existing approaches (e.g., [13], [17]) involves over 0.72 million images acquired from 18 different subjects. However, considering the targeted application domains of sEMG based neuromuscular activity recognition (e.g., assistive technology, physical rehabilitation, etc.), it is always difficult to gather the large dataset required to pre-train a very deep neural network model: we cannot expect an amputee or a patient to provide a large set of training examples over many trials and sessions. Moreover, there is no evidence that these specialized, very large and deep neural network architectures need to be pre-trained for instantaneous HD-sEMG image recognition. Our work demonstrates that it is often possible to match the accuracy of highly resource-intensive and fine-tuned network architectures when training from scratch, even with simple S-ConvNet network architectures. Training from random initialization, our S-ConvNet models require ≈ 12× less training data than their pre-trained counterparts for HD-sEMG image recognition. Fig. 2 shows the total number of images used during training for pre-training + fine-tuning vs. random initialization.
Secondly, since the network architectures of the existing methods for HD-sEMG image recognition require pre-training on a large-scale HD-sEMG dataset, the question arises: which components of CNNs are necessary for achieving performance competitive with these existing methods from random initialization? Motivated by the work in [23], we take a first step towards answering this question by studying the simplest architecture we could conceive: a network consisting of convolutional layers, at most one fully connected layer with a small number of neurons, and occasional dimensionality reduction using max/average pooling or strided convolution. The use of a small number of convolutional and fully connected layers in S-ConvNet greatly reduces the number of parameters and thus also serves as a form of regularization.

The S-ConvNet architectural design is based on the following principles and observations: (i) It is hypothesized that different muscular activities produce different intensity distributions, which are reproducible across trials of the same muscular activity and discriminative among different activities. However, we observed that the spatial intensity distributions within the same muscular activity are not locally invariant, and the precise locations of the features are independent of the class labels. Fig. 3 shows a sequence of HD-sEMG images derived from the same class. The CNN alone has a great ability to exploit locally translation-invariant features through its local connectivity and weight-sharing strategies [19]. Hence, the LCN is omitted in our S-ConvNet models, as the locations of the features do not depend on the class labels. (ii) In designing S-ConvNet, we also make use of the fact that if the image area covered by the units in the topmost convolutional layer is large enough, the network can recognize the image's content [23].
Thirdly, an HD-sEMG image classifier requires normalization to help optimization. In addition to deploying successful forms of normalized parameter initialization [23], [24], employing an effective activation normalization method is equally important when an instantaneous sEMG image recognition model such as S-ConvNet is to be trained from scratch. Batch Normalization (BN) [25] is a widely used activation normalization technique in deep learning. BN normalizes features by computing the mean and variance over mini-batches of instantaneous HD-sEMG image samples, and has proven effective in many other applications at easing optimization and enabling deep networks to converge faster. Moreover, the stochastic uncertainty of the batch statistics provides some form of regularization, which may yield better generalization [26]. In addition to BN, Dropout [27] is another popular regularization technique and a simple way to prevent deep neural networks from overfitting. However, Dropout and BN often lead to worse performance when combined. This is because Dropout shifts the variance of a specific neural unit when the network transfers from training to test, whereas BN maintains in the test phase the statistical variance accumulated over the entire training process. This inconsistency in variance causes unstable numerical behavior as the signal goes deeper through the network, which may even lead to incorrect predictions [28]. Unlike the existing approaches, Dropout and BN are applied separately in all our S-ConvNet models and their respective performance is evaluated.

Fig. 3. HD-sEMG images derived from the same muscular activity class, demonstrating that the distributions are independent of the class labels.
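The batch statistics described above can be sketched as follows (a minimal version without the learned scale/shift parameters; at inference BN would instead use running statistics):

```python
import numpy as np

# Sketch: normalize each feature with the mean and variance computed
# over a mini-batch of samples, as Batch Normalization does in training.
def batch_norm(x, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

batch = np.array([[1.0, 10.0], [3.0, 30.0]])  # 2 samples, 2 features
out = batch_norm(batch)
print(out)  # each feature column now has zero mean and ~unit variance
```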
A. S-ConvNet Architecture and Training

We train our S-ConvNet on a multi-class neuromuscular activity recognition task, namely, recognizing an activity class from an instantaneous HD-sEMG image. The overall architecture of the S-ConvNet models is described in Table I. Starting from the simplest model A, the depth and number of parameters in the network gradually increase up to model C. The instantaneous HD-sEMG image is passed through convolutional (conv.) layers with a small 3×3 receptive field. The 3×3 filter is the minimum size that allows overlapping convolutions and spatial pooling with a stride of 2, while still capturing the notions of left, right, and center. Model B in Table I is a variant of the Network in Network architecture [29], where a 1×1 convolution is performed after each normal 3×3 convolutional layer. The 1×1 convolution acts as a linear transformation of the input channels followed by a nonlinearity [30]. We also note that model C is a variant of the simple ConvNet models introduced by Springenberg et al. [23] for object recognition, in which spatial pooling is performed by strided convolution. The operations of a convolution map and a subsequent spatial pooling are illustrated in Fig. 4. The output of the convolution map produced by a convolutional layer is computed as:

$$f_{i,j}^{(o)}(I) = \phi\left(\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{c=1}^{C} \theta_{h,w,c}^{(o)} \cdot I\big(g(h, w, c, i, j)\big)\right) \tag{1}$$

where $\theta$ are the convolutional weights (filters) and $g(h, w, c, i, j) = (s \cdot i + h,\ s \cdot j + w,\ c)$ is the function mapping a position $(i, j)$ in $f$ to a position in $I$, respecting the stride $s$. $W$ and $H$ are the width and height of the filters, $C$ is the number of channels (if $I$ is the output of a convolutional layer, $C$ is its number of filters), and $o \in [1, N]$ indexes the output features (channels) of the convolutional layer.
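A minimal sketch of Eq. (1), with the ELU of Eq. (2) as the nonlinearity φ (names, sizes, and the valid-padding convention are our own illustrative choices, not the paper's code):

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU of Eq. (2): alpha*(exp(x)-1) for x < 0, x otherwise."""
    return np.where(x >= 0, x, alpha * np.expm1(x))

# Naive strided convolution for a single output channel o:
# f[i, j] = phi( sum_{h,w,c} theta[h, w, c] * I[s*i + h, s*j + w, c] )
def conv_output(I, theta, s=1, phi=elu):
    H, W, C = theta.shape
    out_h = (I.shape[0] - H) // s + 1
    out_w = (I.shape[1] - W) // s + 1
    f = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = I[s * i:s * i + H, s * j:s * j + W, :]
            f[i, j] = phi((theta * patch).sum())
    return f

I = np.ones((5, 5, 1))
theta = np.ones((3, 3, 1))          # each output unit sums a 3x3 patch -> 9
print(conv_output(I, theta).shape)  # (3, 3)
print(conv_output(I, theta)[0, 0])  # 9.0
```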
$\phi(\cdot)$ is the activation function, an exponential linear unit (ELU):

$$\phi(x) = \begin{cases} \alpha(\exp(x) - 1), & x < 0 \\ x, & x \ge 0 \end{cases} \tag{2}$$

Spatial pooling operations are then applied to the convolution map. Spatial pooling on a convolution map makes the network more robust to local translations and may cope with the electrode-shift problem encountered across different HD-sEMG recording trials and sessions. However, pooling may also cause the network to lose information about the detailed textures and micro-textures of an instantaneous sEMG image. Therefore, pooling is introduced into our models only after the first convolutional layer, in order to investigate its effect on the network models. Afterwards, the convolution maps produced by the final convolutional layer of each network in Table I are flattened and concatenated into a multi-dimensional feature vector. This flattened feature vector is input to a fully connected layer, in which every feature element is connected to all input neurons. The fully connected layer can capture correlations between features extracted in distant parts of the instantaneous sEMG image. The output of the fully connected layer is used as the discriminative feature representation of the instantaneous sEMG image. In terms of representation, this contrasts with our HOG-based sEMG image representation [16], which extracts very local descriptors by computing histograms of oriented gradients and uses them as input to a classifier. Finally, the output of the fully connected layer is fed to a G-way softmax layer (where G is the number of neuromuscular activity classes), which produces a distribution over the class labels. If we denote by $\hat{y}^{(g)}$ the $g$-th element of the $G$-dimensional output vector of the layer preceding the softmax layer, the class probabilities are estimated with the softmax function
$\sigma(\cdot)$, defined as below:

$$\sigma\big(\hat{y}^{(g)}\big) = \frac{\exp\big(\hat{y}^{(g)}\big)}{\sum_{k}\exp\big(\hat{y}^{(k)}\big)} \tag{5}$$

The goal of training is to maximize the probability of the correct neuromuscular activity class. We achieve this by minimizing the cross-entropy loss [31] for each training sample. If $y$ is the true label for a given input, the loss is

$$L = -\sum_{g} y^{(g)} \ln \sigma\big(\hat{y}^{(g)}\big) \tag{6}$$

The loss is minimized over the parameters by computing the gradient of $L$ with respect to the parameters and updating them using the state-of-the-art Adam (adaptive moment estimation) gradient-descent-based optimization algorithm [32]. Having trained the network, an instantaneous HD-sEMG image is assigned to the neuromuscular activity class $\hat{g}$ by simply propagating the input image forward and computing $\hat{g} = \arg\max_{g} \sigma\big(\hat{y}^{(g)}\big)$.

B. Normalization

As discussed above, the acquired HD-sEMG signals at each sampling instant are arranged in a 2D grid according to the electrode positioning and converted into an instantaneous sEMG image by linearly mapping the sEMG signal values to color intensity, from [−2.5 mV, 2.5 mV] to [0, 255]. The intensity distribution of the transformed sEMG images is then normalized to lie between zero and one in order to reduce sensitivity to contrast and illumination changes. Given an input sEMG image $I$, this is accomplished by max-min normalization:

$$I' = (I - I_{\min}) \cdot \frac{I'_{\max} - I'_{\min}}{I_{\max} - I_{\min}} + I'_{\min} \tag{7}$$

where $I_{\max}$ and $I_{\min}$ are respectively the maximum and minimum pixel intensities of the input image $I$, and $I'_{\max}$ and $I'_{\min}$ are the desired maximum and minimum pixel intensities of the normalized image $I'$. It is worth mentioning that our training data were not pre-normalized when BN was applied.

IV. THE ALL CONVOLUTIONAL NEURAL NETWORK (ALL-CONVNET)

We find that even the S-ConvNet models described in the previous section can learn all the invariances necessary for recognizing instantaneous HD-sEMG images using only the makeshift database available for the target task.
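Eqs. (5), (6), and (7) can be checked numerically with a minimal sketch (function names are ours, not from the paper's code):

```python
import numpy as np

# Softmax over G class scores (Eq. 5), cross-entropy loss for a one-hot
# label (Eq. 6), and max-min image normalization to [0, 1] (Eq. 7).
def softmax(scores):
    e = np.exp(scores - scores.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy(probs, true_class):
    return -np.log(probs[true_class])

def max_min_normalize(img, new_min=0.0, new_max=1.0):
    lo, hi = img.min(), img.max()
    return (img - lo) * (new_max - new_min) / (hi - lo) + new_min

p = softmax(np.array([2.0, 1.0, 0.1]))
loss = cross_entropy(p, true_class=0)
norm = max_min_normalize(np.array([0.0, 127.0, 255.0]))
print(p.sum(), round(loss, 3), norm)
```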
Following this finding, we further propose a new architecture consisting solely of convolutional layers. To design the proposed All-ConvNet model, we make use of the fact that if the image area covered by units in the topmost convolutional layer covers a portion of the image large enough to recognize its content, then the fully connected layers can be replaced by simple 1×1 convolutions. This leads to predictions of HD-sEMG image classes at different positions, which can then simply be averaged over the whole image. This scheme was first described by Lin et al. [29]; it further regularizes the network, as a 1×1 convolution has far fewer parameters than a fully connected layer. Overall, our architecture is thus reduced to convolutional layers with ELU (Eq. 2) nonlinearities and an averaging + softmax layer that produces predictions over the whole instantaneous HD-sEMG image; it is summarized in Table II.

We evaluate the proposed S-ConvNet and All-ConvNet models online by learning the instantaneous sEMG image representation on the CapgMyo database for neuromuscular gesture recognition. The next section presents the experimental results and analysis used to evaluate the performance of the proposed S-ConvNet and All-ConvNet, as well as some insights and findings about learning and recognizing instantaneous sEMG images.

V. PERFORMANCE EVALUATION OF THE PROPOSED S-CONVNET AND ALL-CONVNET NETWORK MODELS

From the viewpoint of muscle-computer interface (MCI) application scenarios, neuromuscular activity recognition can be categorized into two types: I) intra-session, in which a classifier is trained on part of the data recorded from the subjects during one session and evaluated on another part of the data recorded in the same session, and II) inter-session, in which a classifier is trained on data recorded from the subjects in one session and tested on data recorded in another session.
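The 1×1-convolution-plus-averaging classification head can be sketched as follows (shapes and values are illustrative assumptions, not the paper's Table II):

```python
import numpy as np

# Sketch: a 1x1 convolution maps C feature channels to G class scores at
# every spatial position; the scores are then averaged over the whole map.
rng = np.random.default_rng(0)
C, G, H, W = 32, 8, 4, 2             # channels, classes, spatial size
features = rng.standard_normal((C, H, W))
w_1x1 = rng.standard_normal((G, C))  # a 1x1 conv is a per-pixel linear map

scores = np.einsum('gc,chw->ghw', w_1x1, features)  # class scores per position
class_scores = scores.mean(axis=(1, 2))             # average over positions
print(class_scores.shape)  # (8,)
```

Averaging the per-position scores before the softmax is what lets the network drop the fully connected layers entirely.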
However, sEMG-based gesture recognition in the literature has usually been investigated in the intra-session scenario [17]. In these preliminary experiments, the performance of our proposed S-ConvNet and All-ConvNet models is evaluated in the intra-session scenario; the implications of our proposed methods in inter-session scenarios will be reported in future work. We evaluated our approach using the CapgMyo dataset [17] (available at http://zju-capg.org/myo/data/index.html). This dataset was developed to provide a standard benchmark database (DB) for exploring new possibilities in next-generation muscle-computer interfaces (MCIs). Table III illustrates the gestures in DB-a and DB-b. The CapgMyo database comprises three sub-databases (referred to as DB-a, DB-b, and DB-c). DB-a is used in our preliminary experiments to evaluate the performance of the proposed methods for intra-session neuromuscular activity recognition, because the largest number of subjects (18) participated in DB-a. In DB-a, 8 isotonic and isometric hand gestures were obtained from 18 of the 23 subjects, and each gesture was recorded 10 times. For each subject, the recorded HD-sEMG data are filtered and sampled, and instantaneous sEMG images are generated using the method described in Section II. More explicitly, each subject performed 8 different hand gestures, and each gesture was recorded 10 times at a 1000 Hz sampling rate, which in total generates 8 × 10 × 1000 = 80000 instantaneous sEMG images per subject.

A. Data Selection for Training, Validation and Testing

Existing approaches for instantaneous HD-sEMG image recognition use a pre-trained model. In [17], a total of 18 × 40000 = 720000, or 0.72 million, training images were used for pre-training. During re-training (fine-tuning), 40000 training samples per subject were used separately.
Therefore, the existing approaches involve a total of 720000 + 40000 = 760000, or 0.76 million, images in the training process. In contrast, our model is learned from scratch through random initialization. We performed training, validation, and testing using only the makeshift dataset available, i.e., the 80000 images produced individually by each of the 18 subjects. In the CapgMyo dataset, 8 different hand gestures (Table III) were performed by 18 different subjects, and each gesture was repeated 10 times (10 trials) per subject. Every trial yields 1000 images (because the HD-sEMG signals were sampled at 1000 Hz), giving a total of 1000 × 10 × 8 = 80000 images per subject. In order to use the maximum number of images during training, we introduced and performed a leave-one-trial-out cross-validation. In leave-one-trial-out cross-validation, we hold out one trial from each of the 8 hand gestures, i.e., 8000 images, for testing. The remaining 9 trials of the 8 gestures, i.e., 72k images, are used for training; out of these 72k training images, 9k images are held out at random for validation to check whether the devised model overfits during training. Our training process therefore involves only 63k images, in contrast to the existing approaches, where 760k images are involved. Finally, the leave-one-trial-out cross-validation accuracy over the 10 trials is averaged and used as the performance indicator for an intra-subject test. For further illustration, the confusion matrix generated from the predicted classification results of the leave-one-trial-out cross-validation for subject 2 in the CapgMyo DB-a database is presented in Appendix A.
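The leave-one-trial-out split described above can be sketched as follows (the (gesture, trial, frame) indexing is our own assumption about the data layout):

```python
import random

# Per subject: 8 gestures x 10 trials x 1000 images. One trial of every
# gesture is held out for testing; 9k of the remaining 72k images are held
# out at random for validation, leaving 63k for training.
GESTURES, TRIALS, FRAMES = 8, 10, 1000

def leave_one_trial_out(test_trial, seed=0):
    test, rest = [], []
    for g in range(GESTURES):
        for t in range(TRIALS):
            ids = [(g, t, f) for f in range(FRAMES)]
            (test if t == test_trial else rest).extend(ids)
    random.Random(seed).shuffle(rest)
    return rest[9000:], rest[:9000], test  # train, validation, test

train, val, test = leave_one_trial_out(test_trial=0)
print(len(train), len(val), len(test))  # 63000 9000 8000
```

Repeating this for each of the 10 trials and averaging the test accuracy gives the cross-validation score used as the intra-subject performance indicator.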
B. Experimental Results

In our experiments, we compared all the proposed S-ConvNet models described in Section III on the CapgMyo DB-a dataset without any pre-training or data augmentation. Effective and fast training of a CNN with a high-dimensional parameter space requires a good initialization strategy for the connection weights, a good activation function, Batch Normalization, and a good regularization technique. The weight initialization scheme, the activation function, and the effectiveness of BN and the Dropout regularizer were determined in a preliminary experiment using S-ConvNet model A and subject 2 from DB-a, and then fixed for all other experiments. We experimented with two different initialization schemes for the connection weights, Xavier and He initialization [24], [33]; however, we found that models with the He initialization scheme perform on average 1-1.5% worse than their Xavier-initialized counterparts, so we do not report the He results here to avoid cluttering the experiments. To identify the most suitable activation function for our proposed S-ConvNet models, we also performed experiments with different activation functions [34], [35]; the results are reported in Table IV. In addition, since BN and Dropout often lead to worse performance when combined, as discussed in Section III, we performed experiments with the two methods both combined and separate; these results are also reported in Table IV. Training a CNN with a high-dimensional parameter space also requires an efficient optimization algorithm. Objective functions are often noisy due to internal data sub-sampling, dropout regularization, and other sources of noise. Hence, we propose to use a computationally efficient stochastic optimization algorithm, Adam [32], which requires only first-order gradients with little memory requirement, is invariant to diagonal rescaling of the gradients, and is suitable for high-dimensional problems.
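The two initialization schemes compared above differ only in the variance of the sampling distribution. The sketch below is our own illustration (the layer shape is hypothetical, not taken from Table I):

```python
import math, random

def xavier_std(fan_in, fan_out):
    # Glorot and Bengio [24]: Var(w) = 2 / (fan_in + fan_out)
    return math.sqrt(2.0 / (fan_in + fan_out))

def he_std(fan_in):
    # He et al. [33]: Var(w) = 2 / fan_in, derived for ReLU-like units
    return math.sqrt(2.0 / fan_in)

# hypothetical 3x3 convolution with 1 input channel and 32 output maps
fan_in, fan_out = 3 * 3 * 1, 3 * 3 * 32
weights = [random.gauss(0.0, xavier_std(fan_in, fan_out))
           for _ in range(fan_in * 32)]
print(round(xavier_std(fan_in, fan_out), 4), round(he_std(fan_in), 4))
```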
It also provides fast and reliable learning convergence that can be considerably faster than the stochastic gradient descent (SGD) optimization used in the existing approaches for instantaneous HD-sEMG image recognition. All our proposed S-ConvNet and All-ConvNet models were trained using the Adam optimization algorithm with the momentum decay and scaling decay initialized to 0.9 and 0.999, respectively. In contrast to SGD, Adam is an adaptive learning rate algorithm and therefore requires less tuning of the learning rate hyperparameter; the learning rate was initialized to 0.001 in all our experiments. Mini-batches of 256 randomly chosen samples from the training dataset were fed to the network during consecutive learning iterations in all our experiments. We set a maximum of 100 epochs for training our S-ConvNet and All-ConvNet network models; however, to avoid overfitting we also applied early stopping, interrupting the training process if no improvement in validation loss was observed for 5 consecutive epochs. BN is applied after the input and before each non-linearity. Dropout was used to regularize all networks; it was applied on all layers with probabilities of 35% and 25% for the S-ConvNet and All-ConvNet models, respectively. The results for S-ConvNet model A on subject 2 of the CapgMyo DB-a database are presented in Table IV. Several trends can be observed from these results. First, confirming previous results from the literature, the simplest model A (S-ConvNet A) performs remarkably well, achieving a 98.29% correct neuromuscular activity recognition rate in an intra-subject test. Second, simply applying max-min normalization to the training dataset fed to S-ConvNet model A achieves an average correct recognition rate of 94.89% over 6 different experiments, with only 2.55 min overall runtime for training, validation, and testing on an Nvidia Tesla K20c GPU.
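The Adam update used above can be written out in a few lines. The sketch below is the standard rule from [32] with the hyperparameters quoted in the text (β1 = 0.9, β2 = 0.999, learning rate 0.001); it is our own illustration, not the authors' training code:

```python
import math

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update [32]: biased first/second moment
    estimates, bias correction, then a rescaled gradient step."""
    state["t"] += 1
    t = state["t"]
    state["m"] = [b1 * m + (1 - b1) * g for m, g in zip(state["m"], grad)]
    state["v"] = [b2 * v + (1 - b2) * g * g for v, g in zip(state["v"], grad)]
    m_hat = [m / (1 - b1 ** t) for m in state["m"]]   # bias-corrected mean
    v_hat = [v / (1 - b2 ** t) for v in state["v"]]   # bias-corrected variance
    return [p - lr * mh / (math.sqrt(vh) + eps)
            for p, mh, vh in zip(theta, m_hat, v_hat)]

state = {"t": 0, "m": [0.0], "v": [0.0]}
theta = adam_step([1.0], [0.5], state)
print(theta)  # the first step moves by ~lr regardless of gradient scale
```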
Third, simply replacing the max-min normalization with BN improves the average correct recognition rate to 97.13%, at an overall runtime of 7.74 min for training, validation, and testing. Fourth, the correct recognition rate slightly decreases to 96.23% when BN is replaced by the Dropout regularizer with max-min normalization, while the overall runtime increases to 11.27 min for the entire training, validation, and testing process. Fifth, when BN and Dropout with a tiny probability are respectively applied to all layers of the network, the average recognition rate increases up to 97.56%; introducing both BN and Dropout also increases the overall runtime to about 14 min. In all cases, the performance of the model slightly decreases with spatial max-pooling and average-pooling. However, spatial pooling can help to regularize CNNs and might be more effective once we conduct experiments in inter-session scenarios. It is worth noting that average-pooling performs quite well when BN, or BN and Dropout, are introduced to the network model (Table IV). All these preliminary experimental results for an intra-subject test confirm that our proposed S-ConvNet models can learn all the invariances required to build a distinctive representation for instantaneous HD-sEMG image recognition. Table IV shows that the maximum correct recognition rate of 98.29% is achieved when the exponential linear unit (ELU) activation function, BN, and a tiny Dropout probability are introduced in every layer of the network. This configuration was therefore fixed for the other S-ConvNet and All-ConvNet models, and experiments were performed on all 18 subjects in CapgMyo DB-a. The results for all S-ConvNet and All-ConvNet models are shown in Table V.
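The spatial pooling variants compared above are special cases of the generalized Lp pooling discussed in Section III: p = 1 gives sum-pooling (average pooling up to a constant factor), while letting p grow approaches max-pooling. The sketch below is our own illustration of that relationship:

```python
def lp_pool(values, p):
    """Generalized spatial pooling over one window:
    (sum |a|^p) ** (1/p). p = 1 reduces to summation
    (average pooling up to a constant factor); as p grows,
    the result approaches the maximum of the window."""
    return sum(abs(a) ** p for a in values) ** (1.0 / p)

window = [0.2, 0.9, 0.4, 0.1]
print(round(lp_pool(window, 1), 3))   # sum of the window
print(round(lp_pool(window, 64), 3))  # close to max(window)
```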
In Table V, the state of the art [13] achieves 89.3% accuracy with ≈ 5.58 M learning parameters plus pre-training. As can be seen in Table V, the simple S-ConvNet models (on the order of 2 M learning parameters), trained from random initialization with 3 × 3 convolutions and a dense layer with only a small number of neurons, perform comparably to the state of the art on the CapgMyo DB-a dataset, even though the state-of-the-art methods use more complicated network architectures and training schemes that require learning over ≈ 5.58 M parameters during fine-tuning alone and are also pre-trained with over 0.72 M instantaneous HD-sEMG images. Perhaps even more interestingly, the proposed All-ConvNet (on the order of ≈ 0.008 M learning parameters) achieves 85.73% average recognition accuracy over the 8 hand gestures for 18 different subjects. This result is achieved when dimensionality reduction is performed in the All-ConvNet network. As can also be seen in Table V, the average recognition accuracy decreases when dimensionality reduction is performed in the network (e.g., row 3). This is due to the fact that the resolution of our instantaneous HD-sEMG images is already very low; the recognition accuracy is therefore expected to improve if the proposed All-ConvNet model is trained without dimensionality reduction. We are currently re-training the All-ConvNet network on DB-a without dimensionality reduction, and will include the results in Table V once training is finished. Fig. 5 presents the recognition accuracy obtained by our proposed S-ConvNet and All-ConvNet models for the 18 different subjects. We achieve 87.69%, 86.94%, 87.02%, and 85.72% average recognition accuracy for the proposed S-ConvNet-A, B, C, and All-ConvNet models, respectively, which is very competitive with the more complex, resource-intensive, fine-tuned pre-trained models proposed by the existing approaches, while also reducing the learning parameters to a large extent.
These high recognition accuracies for neuromuscular activity recognition based on instantaneous HD-sEMG image recognition indicate the stability and potential of the proposed S-ConvNet and All-ConvNet models. For a fair comparison with the state of the art, the following points should be highlighted. We introduce a leave-one-trial-out cross-validation in which our proposed S-ConvNet and All-ConvNet models are tested with 80000 different samples for every subject, whereas the existing instantaneous HD-sEMG image recognition approaches are tested with 40000 samples per subject; despite using twice as many testing samples, we achieve performance on par with the state of the art. It is also noteworthy that the recognition results of all S-ConvNet and All-ConvNet models are obtained without any hyperparameter tuning. We therefore stress that the results of all models evaluated in this report could potentially be improved, or even surpass the state of the art, with thorough hyperparameter tuning. Finally, the experimental results demonstrate that: (i) The proposed S-ConvNet models trained from random initialization can learn all the invariances required to build a discriminative representation using only the available target dataset for neuromuscular activity recognition based on instantaneous HD-sEMG images. Our findings should therefore encourage the community to devise shallow ConvNet architectures and train models from scratch (instead of pre-training) to improve neuromuscular activity recognition performance, especially in data-constrained scenarios. (ii) We agree that, given limitless training data and unlimited computational power, deep neural networks should perform extremely well.
However, our proposed approach and experimental results imply an alternative view of this problem: a better S-ConvNet model structure might enable similar or even better performance compared with the more complex existing models trained on large datasets through an exhaustive hyperparameter search. In particular, our S-ConvNet and All-ConvNet models are trained with only 63k instantaneous HD-sEMG images, yet achieve performance on par with the more complex existing models trained with 720k + 40k instantaneous HD-sEMG images. Moreover, as datasets grow larger, training complex deep neural networks becomes more expensive; hence, a simple yet efficient approach becomes increasingly significant. Despite its conceptual simplicity, our proposed method shows great potential in this setting. (iii) We argue that, as briefly mentioned earlier, training from scratch is of critical importance for at least the following reasons. First, domain mismatch: the distributions of sEMG signals vary considerably even between recording sessions of the same subject within the same experimental setup. This problem becomes more challenging when the learned model is used to recognize muscular activities in a new recording session. Although fine-tuning a pre-trained model can reduce the gap caused by deformations in a new recording session, a technique that can learn HD-sEMG images from scratch for recognizing neuromuscular activities would be far more desirable. Second, a fine-tuned pre-trained model restricts the structure design space for neuromuscular activity recognition, which is critical for the deployment of deep neural network models in resource-limited scenarios. (iv) The existing CNN-based neuromuscular activity recognition methods require a huge memory space to store their massive numbers of parameters; such models are therefore usually unsuitable for low-end hand-held devices and embedded electronics.
Thanks to the proposed parameter-efficient S-ConvNet and All-ConvNet, our model is much smaller than the most competitive methods for instantaneous HD-sEMG image recognition. For instance, our S-ConvNet-A and All-ConvNet models achieve 87.69% and 85.73% average recognition accuracy with only ≈ 2.09 M and ≈ 0.008 M parameters, respectively, which shows great potential for applications on low-end devices.

VI. SUMMARY

We presented the S-ConvNet and All-ConvNet models, a simple yet efficient framework for learning instantaneous HD-sEMG images from scratch for neuromuscular activity recognition. Without using any pre-trained models, our proposed S-ConvNet and All-ConvNet demonstrate very competitive accuracy with respect to the state of the art for neuromuscular activity recognition based on instantaneous HD-sEMG images, while using a ≈ 12× smaller dataset and reducing the learning parameters to a large extent. The proposed S-ConvNet and All-ConvNet have great potential for learning and recognizing neuromuscular activities on resource-bounded devices. Our future work will consider improving inter-session neuromuscular activity recognition performance, as well as learning S-ConvNet and All-ConvNet models that support resource-bounded devices.

ACKNOWLEDGMENT

This work was supported in part by the Regroupement Stratégique en Microsystèmes du Québec (ReSMiQ) and the Natural Sciences and Engineering Research Council (NSERC) of Canada.

APPENDIX A. CROSS-VALIDATION RESULTS FOR THE PROPOSED MODEL S-CONVNET A

The confusion matrices generated from the predicted classification results of leave-one-trial-out cross-validation for subject 2 in the CapgMyo DB-a database are presented below as an example. The number of correctly classified neuromuscular activities (hand gestures) is listed along the diagonal of each confusion matrix. The mean accuracy (mA) for leave-one-trial-out cross-validation is reported below every confusion matrix.
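The mean accuracy (mA) reported under each confusion matrix can be computed directly from the matrix: the diagonal holds the correctly classified gestures. The sketch below is our own illustration (the 3-class matrix is a toy example, not data from the appendix):

```python
def mean_accuracy(confusion):
    """Mean accuracy as described above: correctly classified
    gestures (the diagonal) divided by the number of test samples,
    expressed as a percentage."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return 100.0 * correct / total

# toy 3-gesture confusion matrix (rows: true class, columns: predicted)
cm = [[950, 30, 20],
      [40, 940, 20],
      [10, 10, 980]]
print(round(mean_accuracy(cm), 2))  # 95.67
```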
The overall recognition rate for subject 2 in the CapgMyo dataset based on our S-ConvNet A is 96.0362%.

Fig. 1. Schematic diagram of the proposed framework of muscular activity recognition by instantaneous sEMG images.

III. MODEL DESCRIPTION — THE SHALLOW CONVOLUTIONAL NEURAL NETWORK (S-CONVNET)

Fig. 2. Total number of HD-sEMG images seen during training, for pre-training + fine-tuning vs. random initialization.

(From Section III: equation (3) can be interpreted as max-pooling in the limit p → ∞; setting p = 1 in equation (2) translates it into average-pooling. The mean accuracy mA is computed for each class i as the number of correctly recognized hand gestures, c_i, divided by the total number of samples, with i ∈ {1, …, G}, where G is the number of gesture classes.)

Fig. 4. A schematic illustration of convolutions and pooling operations: a) convolution maps and b) convolution maps after spatial pooling.

Fig. 5. Recognition accuracy of 8 hand gestures for 18 different subjects with our proposed S-ConvNet and All-ConvNet recognition approaches.

Fig. 7. T2 trial of 8 different hand gestures used as test set (mA = 96.78).
Fig. 8. T3 trial of 8 different hand gestures used as test set (mA = 97.91).
Fig. 9. T4 trial of 8 different hand gestures used as test set (mA = 95.65).
Fig. 10. T5 trial of 8 different hand gestures used as test set (mA = 98.61).
Fig. 11. T6 trial of 8 different hand gestures used as test set.
Fig. 15. T10 trial of 8 different hand gestures used as test set (mA = 88.66).

TABLE I. THE THREE S-CONVNET NETWORK MODELS FOR NEUROMUSCULAR ACTIVITY RECOGNITION

Input: 16×8 gray-level image
A                  | B                  | C
3×3 Conv. 32 ELU   | 3×3 Conv. 32 ELU   | 3×3 Conv. 32 ELU
3×3 Conv. 64 ELU   | 1×1 Conv. 32 ELU   | 3×3 Conv. 32 ELU
3×3 Conv. 64 ELU   | 3×3 Conv. 64 ELU   | 3×3 Conv. 32 ELU, stride 2
FC1 256 ELU        | 1×1 Conv. 64 ELU   | 3×3 Conv. 64 ELU
FC2 G-way softmax  | FC1 256 ELU        | 3×3 Conv. 64 ELU
-                  | FC2 G-way softmax  | 3×3 Conv. 64 ELU, stride 2
-                  | -                  | FC1 256 ELU
-                  | -                  | FC2 G-way softmax

TABLE II. THE ALL-CONVNET NETWORK MODEL FOR NEUROMUSCULAR ACTIVITY RECOGNITION

Input: 16×16 gray-level image
3×3 Conv. 64 ELU
3×3 Conv. 64 ELU
3×3 Conv. 64 ELU, stride 2
3×3 Conv. 128 ELU
3×3 Conv. 128 ELU
3×3 Conv. 128 ELU, stride 2
1×1 Conv. 128 ELU
8×8 Conv. 8 ELU
Global averaging over 8×8 spatial dimensions
G-way softmax

TABLE III. GESTURES IN DB-A AND DB-B (8 ISOTONIC AND ISOMETRIC HAND CONFIGURATIONS)

TABLE IV. NEUROMUSCULAR ACTIVITY RECOGNITION ACCURACY (%) FOR DIFFERENT ACTIVATION FUNCTIONS AND SPATIAL POOLING

Network                       | ReLU  | Leaky-ReLU | ELU   | Sigmoid | Max-pool | Avg-pool | Avg. run time (min)
A                             | 95.18 | 95.56      | 93.98 | 95.76   | 94.55    | 94.31    | 2.55
A with BN                     | 96.16 | 97.34      | 97.50 | 98.00   | 96.66    | 97.13    | 7.74
A with Dropout regularization | 96.99 | 96.68      | 96.58 | 96.30   | 95.19    | 95.61    | 11.27
A with BN and Dropout         | 97.18 | 97.54      | 98.29 | 97.80   | 96.98    | 97.54    | 14

TABLE V. THE AVERAGE RECOGNITION ACCURACY (%) OF 8 HAND GESTURES WITH INSTANTANEOUS HD-SEMG IMAGES FOR 18 DIFFERENT SUBJECTS AND RECOGNITION APPROACHES

Model                       | Average Recognition Accuracy (%) | #Learning Parameters (millions)
S-ConvNet-A                 | 87.69                            | ≈ 2.09 M
S-ConvNet-B                 | 86.94                            | ≈ 2.12 M
S-ConvNet-C with stride = 2 | 83.92                            | ≈ 0.14 M
S-ConvNet-C with stride = 1 | 87.02                            | ≈ 2.10 M
All-ConvNet                 | 85.73                            | ≈ 0.008 M
Geng et al. [13]            | 89.3                             | ≈ 5.58 M + pre-training

REFERENCES

[1] D. Farina et al., "The extraction of neural information from the surface EMG for the control of upper-limb prostheses: merging avenues and challenges," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 4, pp. 797-809, Jul. 2014.
[2] G. Jang, J. Kim, J. S. Lee, and Y. Choi, "EMG-based continuous control scheme with simple classifier for electric-powered wheelchair," IEEE Transactions on Industrial Electronics, vol. 63, no. 6, pp. 3695-3705, 2016.
[3] R. Jimenez-Fabian and O. Verlinden, "Review of control algorithms for robotic ankle systems in lower-limb orthoses, prostheses, and exoskeletons," Medical Engineering & Physics, vol. 34, no. 4, pp. 397-408, May 2012.
[4] D.-H. Kim et al., "Epidermal electronics," Science, vol. 333, pp. 838-843, 2011.
[5] Y. Hu, J. N. Mak, and K. Luk, "Application of surface EMG topography in low back pain rehabilitation assessment," in Proc. International IEEE/EMBS Conference on Neural Engineering, pp. 557-560, May 2007.
[6] E. Costanza, S. A. Inverso, R. Allen, and P. Maes, "Intimate interfaces in action: assessing the usability and subtlety of EMG-based motionless gestures," in Proc. SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 819-828, May 2007.
[7] T. S. Saponas, D. S. Tan, D. Morris, and R. Balakrishnan, "Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces," in Proc. SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 515-524, Apr. 2008.
[8] T. S. Saponas, D. S. Tan, D. Morris, J. Turner, and J. A. Landay, "Making muscle-computer interfaces more practical," in Proc. SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 851-854, Apr. 2010.
[9] M. Atzori et al., "Electromyography data for non-invasive naturally-controlled robotic hand prostheses," Scientific Data, vol. 1, Dec. 2014.
[10] N. Patricia, T. Tommasi, and B. Caputo, "Multi-source adaptive learning for fast control of prosthetics hand," in Proc. International Conference on Pattern Recognition, pp. 2769-2774, Aug. 2014.
[11] C. Amma, T. Krings, J. Böer, and T. Schultz, "Advancing muscle-computer interfaces with high-density electromyography," in Proc. 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 929-938, Apr. 2015.
[12] A. Stango, F. Negro, and D. Farina, "Spatial correlation of high density EMG signals provides features robust to electrode number and shift in pattern recognition for myocontrol," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 2, pp. 189-198, Mar. 2015.
[13] W. Geng, Y. Du, W. Jin, W. Wei, Y. Hu, and J. Li, "Gesture recognition by instantaneous surface EMG images," Scientific Reports, vol. 6, 36571, 2016.
[14] R. Casale and A. Rainoldi, "Fatigue and fibromyalgia syndrome: clinical and neurophysiologic pattern," Best Practice & Research Clinical Rheumatology, vol. 25, no. 2, pp. 241-247, Apr. 2011.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[16] M. R. Islam, D. Massicotte, F. Nougarou, and W. Zhu, "HOG and pairwise SVMs for neuromuscular activity recognition using instantaneous HD-sEMG images," in Proc. 16th IEEE International New Circuits and Systems Conference (NEWCAS), Montreal, QC, Canada, pp. 335-339, Jun. 2018.
[17] Y. Du, W. Jin, W. Wei, Y. Hu, and W. Geng, "Surface EMG-based inter-session gesture recognition enhanced by deep domain adaptation," Sensors, vol. 17, no. 3, 458, Feb. 2017.
[18] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, pp. 1701-1708, 2014.
[19] L. Pang, Y. Lan, J. Xu, J. Guo, and X. Cheng, "Locally smoothed neural networks," CoRR, abs/1711.08132, 2017.
[20] K. He, R. Girshick, and P. Dollár, "Rethinking ImageNet pre-training," arXiv preprint arXiv:1811.08883, 2018.
[21] S. Gupta, J. Hoffman, and J. Malik, "Cross modal distillation for supervision transfer," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016.
[22] Z. Shen, Z. Liu, J. Li, Y.-G. Jiang, Y. Chen, and X. Xue, "DSOD: learning deeply supervised object detectors from scratch," in Proc. ICCV, 2017.
[23] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. A. Riedmiller, "Striving for simplicity: the all convolutional net," CoRR, abs/1412.6806, 2014.
[24] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proc. AISTATS, 2010.
[25] S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in Proc. ICML, vol. 37, pp. 448-456, 2015.
[26] Y. Wu and K. He, "Group normalization," in Proc. ECCV, pp. 3-19, 2018.
[27] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[28] X. Li, S. Chen, X. Hu, and J. Yang, "Understanding the disharmony between dropout and batch normalization by variance shift," arXiv:1801.05134, 2018.
[29] M. Lin, Q. Chen, and S. Yan, "Network in network," in Proc. ICLR, 2014.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556, 2014.
[31] K. Janocha and W. M. Czarnecki, "On loss functions for deep neural networks in classification," CoRR, 2017.
[32] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[33] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: surpassing human-level performance on ImageNet classification," in Proc. ICCV, Dec. 2015.
[34] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (ELUs)," CoRR, abs/1511.07289, 2015.
[35] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," arXiv preprint arXiv:1505.00853, 2015.
QoS-Aware Sum Capacity Maximization for Mobile Internet of Things Devices Served by UAVs

Mohammadsaleh Nikooroo and Zdenek Becvar (Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic); Omid Esrafilian and David Gesbert (Communication Systems Department, EURECOM, Sophia Antipolis, France)

Abstract—The use of unmanned aerial vehicles (UAVs) acting as flying base stations (FlyBSs) is considered an effective tool to improve the performance of mobile networks. Nevertheless, such potential improvement requires an efficient positioning of the FlyBS. In this paper, we maximize the sum downlink capacity of the mobile Internet of Things devices (IoTDs) served by the FlyBSs while a minimum required capacity is guaranteed to every device. To this end, we propose a geometrical approach allowing us to derive the 3D positions of the FlyBS over time as the IoTDs move, and we determine the transmission power allocation for the IoTDs. The problem is formulated and solved under practical constraints on the FlyBS's transmission and propulsion power consumption as well as on the flying speed. The proposed solution is of a low complexity and increases the sum capacity by 15%-46% compared to state-of-the-art works.

DOI: 10.1109/pimrc54779.2022.9977852
arXiv: 2210.11880 (https://export.arxiv.org/pdf/2210.11880v1.pdf)
Index Terms—Flying base station, UAV, transmission power, propulsion power, sum capacity, mobile IoT device, 6G

I. INTRODUCTION

Deployment of unmanned aerial vehicles (UAVs) acting as flying base stations (FlyBSs) is a promising way to improve performance in 6G mobile networks, since the FlyBSs offer high mobility and adaptability to the environment via flexible movement in 3D.
The potential benefits offered by the FlyBSs, however, come along with challenges related to radio resource management and positioning of the FlyBSs [1], [2], [3], [4], [5], [6]. The problem of the FlyBS's positioning is investigated in many recent works. The objectives targeted in those works include a maximization of the downlink sum capacity [7], a maximization of the minimum capacity [8], a maximization of the uplink capacity [9], a maximization of the sum of uplink and downlink capacities [10], [11], a maximization of the minimum average capacity for device-to-device communication [12], a maximization of the minimum capacity in networks of sensors or Internet of Things devices (IoTDs) [13], a minimization of the FlyBS's power consumption [14], and a minimization of the number of FlyBSs to guarantee users' QoS requirements [15]. However, the users considered in [7]-[15] are static (i.e., they do not change their location over time). This is a required assumption in the solutions provided by the works in [7]-[15], where the FlyBS's entire trajectory is derived before the beginning of the mission, relying on the fact that the users do not move during the mission. An extension of those solutions to the scenario with moving users is not straightforward. Furthermore, a guarantee of the minimum capacity to the users is not considered in [7], [10], [11]; hence, those solutions cannot be adopted in applications where the quality of service is a concern. A solution potentially applicable to scenarios with moving users is outlined in [16], where the FlyBSs' altitude is optimized to maximize the average system throughput. Then, in [17], the authors optimize the number of FlyBSs and their positions to maximize the sum capacity. However, neither [16] nor [17] provides any guarantee of the minimum capacity to the users. In [18], the sum capacity is maximized via a positioning of the FlyBSs using reinforcement learning.
Furthermore, the problem of the transmission power allocation is investigated in [19] to maximize the energy efficiency, i.e., the ratio of the sum capacity to the total transmission power consumption. The minimum required capacity in [18] and [19] is assumed to be equal for all users. Besides, the FlyBS's positioning is not addressed in [19] at all, and the transmission power allocation is not considered in [18]. Then, the minimum capacity of the users is maximized via the FlyBS's positioning and the transmission power allocation in [20]. Nevertheless, no constraint on the FlyBS's speed is considered. Surprisingly, there is no work targeting the sum capacity maximization in a practical scenario with moving sensors/IoTDs and with a minimum capacity guaranteed to the individual sensors/IoTDs. All related works either focus on the scenario where data is collected from static users with a priori known coordinates, or guarantee no minimum capacity to the users. We target the scenario with mobile devices and a minimum capacity guarantee, and we propose a low-complexity solution based on an alternating optimization of the FlyBS's positioning and the transmission power allocation to the devices. The proposed optimization is done with respect to a feasibility region derived via a proposed geometrical approach. In contrast to a majority of the related works, we also consider practical aspects and constraints of the FlyBSs, including limits on the flying speed, transmission power, and propulsion power. The rest of this paper is organized as follows. In Section II, we provide the system model for the FlyBS-enabled sensor network and we formulate the problem of sum capacity maximization. Next, we propose a method to check the feasibility of a solution to the FlyBS's positioning and transmission power allocation in Section III.
Then, we propose a solution based on an alternating optimization of the transmission power allocation and the FlyBS's positioning in Section IV, where a geometrical approach is proposed for the FlyBS's positioning. In Section V, the adopted simulation scenario and parameters are specified, and the performance of our proposed solution is shown and compared with state-of-the-art schemes. Last, we conclude the paper and outline potential future extensions in Section VI.

II. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, we first define the system model. Then, we formulate the constrained problem of the sum capacity maximization. In our system model, one FlyBS serves $N$ sensors/IoTDs $\{u_1, \ldots, u_N\}$ in an area as shown in Fig. 1. Let $q[k] = [X[k], Y[k], H[k]]^T$ denote the location of the FlyBS at time step $k$. We refer to the IoTDs/sensors as nodes in the rest of this paper. Let $v_i[k] = [x_i[k], y_i[k], z_i[k]]^T$ denote the coordinates of node $i$ at time step $k$. Then, $d_i[k]$ denotes the Euclidean distance of node $i$ to the FlyBS at time step $k$. We adopt an orthogonal downlink channel allocation for all nodes. Thus, the channel capacity of node $i$ is:

$C_i[k] = B_i \log_2\left(1 + \frac{p_i^R[k]}{N_i + I}\right)$, (1)

where $B_i$ denotes the bandwidth of the $i$-th node's channel (note that $B_i$ can differ among nodes), $N_i$ is the noise power at the $i$-th node's channel, $I$ denotes the background interference from neighboring base stations (both flying and static), and $p_i^R[k]$ is the power received by the $i$-th node at time step $k$. Let $p^T = [p_1^T, \ldots, p_N^T]$ denote the FlyBS's transmission power allocated to all $N$ nodes. According to the Friis transmission equation, the power of the signal received at node $i$ ($i \in [1, N]$) from the FlyBS is calculated as:

$p_i^R[k] = Q_i\left(\frac{\gamma}{\gamma+1} h_i + \frac{1}{\gamma+1} \tilde{h}_i\right) p_i^T[k]\, d_i^{-\alpha_i}[k]$, (2)

where the coefficient $Q_i$ is a parameter depending on the communication frequency and the gain of the antennas.
Furthermore, $\gamma$ is the Rician fading factor, $h_i$ is the line-of-sight (LoS) component satisfying $|h_i| = 1$, $\tilde{h}_i$ denotes the non-line-of-sight (NLoS) component satisfying $\tilde{h}_i \sim \mathcal{CN}(0, 1)$, and $\alpha_i$ is the pathloss exponent of the channel for node $i$. For the propulsion power consumption, we refer to the model provided in [21] for rotary-wing UAVs. More specifically, the propulsion power is expressed as:

$P_{pr}[k] = L_0\left(1 + \frac{3V_F^2[k]}{U_{tip}^2}\right) + \frac{\eta_0 \rho s_r A V_F^3[k]}{2} + L_i\left(\sqrt{1 + \frac{V_F^4[k]}{4v_{0,h}^4}} - \frac{V_F^2[k]}{2v_{0,h}^2}\right)^{\frac{1}{2}}$, (3)

where $V_F[k]$ is the FlyBS's speed at time step $k$. Furthermore, $L_0$ and $L_i$ are the blade profile and induced powers in hovering status, respectively, $U_{tip}$ is the tip speed of the rotor blade, $v_{0,h}$ is the mean rotor induced velocity during hovering, $\eta_0$ is the fuselage drag ratio, $\rho$ is the air density, $s_r$ is the rotor solidity, and $A$ is the rotor disc area. Our goal is to find the position of the FlyBS that maximizes the sum capacity at every time step $k$ while each node's minimum required capacity is always guaranteed under practical constraints implied by FlyBSs. Hence, we formulate the problem of the sum capacity maximization as follows:

$\max_{p^T[k], q[k]} C_{tot}[k], \quad \forall k$, (4)

s.t.
$C_i[k] \geq C_i^{min}[k], \quad i \in [1, N]$, (4a)
$H_{min}[k] \leq H[k] \leq H_{max}[k]$, (4b)
$\|q[k] - q[k-1]\| \leq V_F^{max}\delta_k$, (4c)
$P_{pr}[k] \leq P_{pr,th}[k]$, (4d)
$\sum_{i=1}^{N} p_i^T[k] \leq p_{max}^T, \quad p_i^T[k] \geq 0$, (4e)

where $C_{tot}[k] = \sum_{i=1}^{N} C_i[k]$ is the sum capacity of the nodes at time step $k$, $C_i^{min}[k]$ denotes the minimum capacity required by node $i$ at time step $k$, and $H_{min}$ and $H_{max}$ are the minimum and maximum allowed flying altitudes of the FlyBS at time step $k$, respectively, set according to the environment as well as the flying regulations. Furthermore, $V_F^{max}$ is the FlyBS's maximum supported speed, $\delta_k$ is the duration between the time steps $k-1$ and $k$, $\|\cdot\|$ is the $L_2$ norm, and $p_{max}^T$ is the FlyBS's maximum transmission power limit.
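As a minimal illustration of the models in (1)-(3), the following Python sketch evaluates the per-node capacity, the Friis-style received power, and the rotary-wing propulsion power. The function and parameter names are my own (not from the paper), and the `fading` argument stands in for the Rician term in (2), whose magnitude is 1 for a pure line-of-sight link.

```python
import math

def channel_capacity(B, p_rx, N0, I):
    """Shannon capacity of an orthogonal downlink channel, as in eq. (1)."""
    return B * math.log2(1.0 + p_rx / (N0 + I))

def received_power(Q, p_tx, d, alpha, fading=1.0):
    """Friis-style received power, as in eq. (2); `fading` replaces the
    Rician combination gamma/(gamma+1)*h + 1/(gamma+1)*h_tilde."""
    return Q * fading * p_tx * d ** (-alpha)

def propulsion_power(V, L0, Li, U_tip, v0h, eta0, rho, s_r, A):
    """Rotary-wing propulsion power model of eq. (3)."""
    blade = L0 * (1.0 + 3.0 * V ** 2 / U_tip ** 2)          # blade profile power
    parasite = 0.5 * eta0 * rho * s_r * A * V ** 3          # fuselage drag power
    induced = Li * math.sqrt(
        math.sqrt(1.0 + V ** 4 / (4.0 * v0h ** 4)) - V ** 2 / (2.0 * v0h ** 2))
    return blade + parasite + induced
```

At hover ($V_F = 0$), the model reduces to $L_0 + L_i$, which matches the U-shaped power-speed curve the paper refers to in Fig. 2.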
The constraint (4a) ensures that every node always receives the required capacity. Furthermore, (4b) and (4c) restrict the FlyBS's altitude to the range $[H_{min}[k], H_{max}[k]]$ and its speed to the range $[0, V_F^{max}]$, respectively. In addition, the constraints (4d) and (4e) assure that the FlyBS's propulsion power and total transmission power do not exceed $P_{pr,th}$ and $p_{max}^T$, respectively. In practice, the value of $P_{pr,th}$ can be set/adjusted at every time step according to the remaining energy in the FlyBS's battery to prolong the FlyBS's operation. Challenging aspects of solving (4) include: i) before the positioning of the FlyBS, the feasibility of a solution to (4) should be verified due to the constraints (4a)-(4e), and ii) the objective function $C_{tot}$ and the constraint (4e) are non-convex with respect to $q$. To tackle aspect i), we propose a geometrical approach of a low complexity to check the feasibility of any solution to (4). If there is a feasible solution, the proposed approach further determines the feasibility domain used for a derivation of the FlyBS's positions. To tackle aspect ii), we propose a suboptimal solution using an alternating optimization of the transmission power allocation and the FlyBS's positioning based on a local approximation of the objective function. In particular, we propose an iterative approach based on two steps: 1) an optimization of the transmission power allocation $p^T$ at the given position of the FlyBS, and 2) an update (optimization) of the FlyBS's position for the vector $p^T$ derived in step 1, with respect to the feasibility domain defined by the constraints in (4). We elaborate the derivation of the feasibility domain in Section III. Then, we explain our proposed alternating optimization of the transmission power and the FlyBS's positioning in Section IV.

III. FEASIBILITY OF A SOLUTION
In this section, we present a geometrical approach to check the feasibility of an arbitrary solution to (4) via a consideration of the constraints in (4). Let us first rewrite the constraint (4a) for an arbitrary setting of the transmission power allocation $p^T$ to individual nodes by means of (1) and (2) as follows:

$C_i = B_i \log_2\left(1 + \frac{Q_i p_i^T}{d_i^{\alpha_i}(N_i + I)}\right) \geq C_i^{min}$, (5)

which yields

$d_i \leq \left(\frac{Q_i p_i^T}{(2^{C_i^{min}/B_i} - 1)(N_i + I)}\right)^{\frac{1}{\alpha_i}} = \rho_i, \quad 1 \leq i \leq N$. (6)

Each of the $N$ inequalities in (6) demarcates a sphere in 3D space. In particular, for every $i \in [1, N]$, the inequality in (6) implies that the FlyBS lies inside or on the sphere with a center at the location of node $i$ and with a radius of $\rho_i$. Next, the constraint (4b) confines the next position of the FlyBS to lie on or between the planes $z = H_{min}[k]$ and $z = H_{max}[k]$. In addition, according to Fig. 2, the constraint (4d) translates to $V_F \in [V_F^{th,1}, V_F^{th,2}]$. By combining this inequality with (4c), we get

$\|q[k] - q[k-1]\| \leq \left(\min\{V_F^{max}, V_F^{th,2}\}\right)\delta_k$, (7)

and

$\|q[k] - q[k-1]\| \geq V_F^{th,1}\delta_k$. (8)

The equations (7) and (8) define the FlyBS's next possible position as the border or inside of an enclosed region. Furthermore, interpreting the constraint (4e) in terms of the FlyBS's position yields the necessary condition

$\|q[k] - \theta_0(p^T, k)\| \leq \Upsilon(p^T, k)$, (9)

where $\theta_0(p^T, k) = \left[\frac{\sum_{i=1}^{N}\iota_i x_i}{\sum_{i=1}^{N}\iota_i}, \frac{\sum_{i=1}^{N}\iota_i y_i}{\sum_{i=1}^{N}\iota_i}, H\right]$, $\iota_i$ is a substitution derived in the proof, and $\Upsilon(p^T, k) = \left(\frac{p_{max}^T - \chi}{\sum_{i=1}^{N}\iota_i}\right)^{\frac{1}{2}}$; see Appendix A for the proof. Note that the existence of a feasible solution is contingent upon all the constraints in (4) and not only the condition (9). Thus, we now analyze the feasibility of any solution to (4) by incorporating the constraints derived for the FlyBS's next position. In order to check whether these inequalities hold at the same time, we propose the following low-complexity approach. Let $sp_j$ ($j \in [1, N+2]$) denote the $N+2$ spheres defined by the inequalities (6), (7), and (9). Note that we deal with (7) later in this section.
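A minimal sketch of the QoS sphere in (6): the radius $\rho_i$ follows from inverting the capacity constraint, and the FlyBS satisfies node $i$'s QoS whenever it lies inside or on that sphere. The helper names below are illustrative, not from the paper.

```python
import math

def qos_radius(Q, p_tx, C_min, B, N0, I, alpha):
    """Maximum FlyBS-node distance rho_i that still meets C_i >= C_min, eq. (6)."""
    return (Q * p_tx / ((2.0 ** (C_min / B) - 1.0) * (N0 + I))) ** (1.0 / alpha)

def satisfies_qos(q, node, rho):
    """True if the FlyBS position q lies inside or on the sphere of radius
    rho centered at the node's location."""
    return math.dist(q, node) <= rho
```

By construction, placing the FlyBS exactly at distance $\rho_i$ from node $i$ yields a capacity of exactly $C_i^{min}$ in (5).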
The $N$ spheres represented by (6) have centers at the same positions as their corresponding nodes. Furthermore, the sphere indicated by (7) is centered at the FlyBS's position at time step $k-1$. Then, for each pair of spheres $sp_j$ and $sp_k$, we consider their intersection. There are three different cases regarding the intersection: i) $sp_j$ and $sp_k$ have no intersection point and lie completely outside each other, ii) $sp_j$ and $sp_k$ have no intersection points and one of these spheres lies inside the other one, iii) $sp_j$ and $sp_k$ intersect on their borders, the intersection being a circle (assuming that a single point is also a circle with a radius of zero). Note that no two spheres from the set of spheres indicated by (4a) completely overlap, as each sphere has a distinct center. Furthermore, if any sphere represented by (7) or (9) is identical to another sphere, we simply ignore one of those spheres. For case i, we conclude that at least two of the constraints in (4) do not hold at the same time and, thus, there is no feasible solution to (4). For case ii, one of the constraints in (4), the one corresponding to the outer sphere, is automatically fulfilled if the other constraint (the one corresponding to the inner sphere) holds. In such a case, we ignore the constraint corresponding to the outer sphere, and the rest of the constraints are dealt with according to case i or iii. For case iii, we propose the following low-complexity method to verify the non-emptiness of the intersection of the spheres (in other words, the feasibility of a solution to (4)): given the fact that the intersection of a plane and a sphere is a circle (if not empty), we search for the intersection of the spheres only on certain planes. In particular, corresponding to each of the $N+2$ spheres $sp_j$, consider two horizontal planes $pl_{j,1}$ and $pl_{j,2}$ that are tangent to $sp_j$ (one at the topmost point on $sp_j$ and one at the lowermost point).
Then, we remove from the set of derived planes those that do not fulfill the altitude constraint (4b). Hence, at most $2N+4$ horizontal planes are derived. Next, for each of the remaining planes, we find the intersection of the plane and all the spheres. Let $cl_{j,k,1}$ and $cl_{j,k,2}$ be the intersection (circle) of $sp_k$ with $pl_{j,1}$ and with $pl_{j,2}$, respectively. On each plane, we derive and collect the intersection points of every two such circles. Then, we verify whether any points in the set of collected points lie inside or on the border of all the circles on the same plane. In case there are no such points on any of the planes, there is no feasible solution to (4), as all the constraints in (4) cannot be met at the same time. Otherwise, there is a solution if the remaining condition (8) is also met for at least one of those eligible candidate points. From the described process, the computational complexity of the proposed feasibility check scales as $(2N+4) \times \binom{N+2}{2} \times (N+2)$, i.e., it is $O(N^4)$. In the next section, we target the problem of power allocation and the FlyBS's positioning in (4), and we show how the FlyBS's position is determined with respect to the constraint spheres $sp_j$ ($j \in [1, N+2]$) derived in this section.

IV. FLYBS POSITIONING AND POWER ALLOCATION
In this section, we outline our proposed FlyBS positioning and transmission power allocation maximizing the sum capacity under the feasibility condition derived in Section III. Our proposed solution is based on an alternating optimization updating the transmission power $p^T$ and the FlyBS's position $q$ at every time step. First, note that for a given $q$, the problem of the $p^T$ optimization is solved via CVX, as the sum capacity in (4) is concave and the constraints in (4) are convex with respect to $p^T$. Once $p^T$ is optimized at the given position $q$, we optimize $q$ to maximize the sum capacity while considering the constraints in (4).
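The three mutual-position cases of a sphere pair used in the feasibility check (disjoint, one contained in the other, border intersection forming a circle) can be sketched as follows. This is an illustrative helper, not code from the paper.

```python
import math

def classify_spheres(c1, r1, c2, r2):
    """Classify the mutual position of two constraint spheres (cases i-iii)."""
    d = math.dist(c1, c2)
    if d > r1 + r2:
        return "disjoint"      # case i: the two constraints cannot hold together
    if d < abs(r1 - r2):
        return "contained"     # case ii: the inner sphere's constraint implies the outer one
    return "intersecting"      # case iii: border intersection is a circle
                               # (a tangency point counts as a circle of radius zero)
```

In the feasibility check, a single "disjoint" pair is enough to declare the problem infeasible at the current time step, while "contained" pairs let the outer sphere's constraint be dropped.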
To this end, we first consider the problem of the sum capacity maximization regardless of the constraints in (4). As the sum capacity is non-convex with respect to the FlyBS's position, we provide a solution based on a local approximation of the sum capacity in the form of a radial function with respect to the FlyBS's position as elaborated in the following Lemma 2. Lemma 2. The sum capacity C tot is approximated as a radial function with respect to q[k] as: C tot [k] ≈ W (p T , k) − ζ(p T , k) q[k] − S 0 (p T , k) 2 , (10) where the substitutions W (p T , k), ζ(p T , k), and S 0 (p T , k) are constants with respect to q[k] as presented in the proof. proof. Please see Appendix B According to (10), the FlyBS achieves the maximum capacity at the location S 0 . In addition, the sum capacity increases when the FlyBS's distance to S 0 decreases. This helps to derive the FlyBS's position for the constrained problem in (4) in following way. The FlyBS's position is updated to S 0 (as in (19)) if all constraints in (4) are fulfilled, i.e., if S 0 lies inside the feasibility region denoted by R f . Otherwise, S 0 lies outside of R f and the optimal position of the FlyBS (optimal with respect to (10)) is, then, the closest point from R f to S 0 . If S 0 lies outside of R f , we refer to the derived spheres representing the constraints in (4) (i.e., sp j for j ∈ [1, N + 2], see Section III) to find the closest point from R f to S 0 and we provide a geometrical solution to determine the FlyBS's position as follows (also demonstrated in Algorithm 1). Due to the compactness of R f , the closest point of R f to S 0 lies on the boundary of R f belonging also to the border of at least one of the (N + 2) spheres sp j . The closest point from any sphere sp j to S 0 is determined by finding the intersection of sp j and the straight line connecting S 0 to the center of sp j . Hence, we first find the closest point of each sp j to S 0 (corresponding to line 1 in Algorithm 1). 
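Line 1 of Algorithm 1 needs, for each sphere $sp_j$, the point of the sphere closest to $S_0$, obtained by intersecting the sphere with the straight line through its center and $S_0$. A minimal sketch (names are illustrative, not from the paper):

```python
import math

def closest_point_on_sphere(center, radius, s0):
    """Closest point of the sphere {x : ||x - center|| = radius} to the point s0,
    i.e., the intersection of the sphere with the ray from the center through s0."""
    v = [s - c for s, c in zip(s0, center)]
    n = math.hypot(*v)
    if n == 0.0:
        # s0 coincides with the center: every surface point is equally close
        return (center[0] + radius, center[1], center[2])
    return tuple(c + radius * vi / n for c, vi in zip(center, v))
```

Note that when $S_0$ lies inside the sphere, this still returns the nearest boundary point; Algorithm 1 only needs boundary candidates because $S_0$ outside the feasibility region implies the optimum sits on the region's border.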
Next, we derive all mutual intersections (circles) of each pair of spheres sp j and sp k and we find the closest point from each of the intersection circles to S 0 (line 2 in Algorithm 1-derivation steps not shown here to avoid cluttering, more details can be found in [24]). Similarly, we find the intersections of each sphere sp j (j ∈ [1, N + 2]) with each of the planes z = H min and z = H max and then we find the closest points on those intersection circles to S 0 (lines 3 and 4 in Algorithm 1). After collecting all those closest points to S 0 , we discard those collected points that do not fulfill all the conditions in (4) (line 5 in Algorithm 1). Last, in the remaining set of candidate points, the point with smallest distance to S 0 is the optimal position of the FlyBS (line 6 in Algorithm 1). Once the FlyBS's position q is updated, the power allocation p T is again optimized at the new q. The updated p T would change the spheres sp j (j ∈ [1, N + 2]) and also S 0 . Thus, the alternating optimization of p T and q continues until the FlyBS's displacement at some iteration falls below a given threshold or until the maximum number of iterations is reached. The complexity of finding the FlyBS's position at each time step is O(N 4 ). V. SIMULATIONS AND RESULTS In this section, we present models and simulations adopted for a performance evaluation of the proposed solution, and we show gains of the proposal over state-of-the-art schemes. A total bandwidth of 100 MHz is selected [22]. Spectral density of noise is set to -174 dBm/Hz. The background interference is set to -100 dBm. We set α i = 2.4 for all nodes. The allowed range for altitude of the FlyBS is [100, 300] m, and the maximum transmission power limit P max T X is 1 W [23]. A maximum speed of 25 m/s is assumed for the FlyBS. The maximum allowed propulsion power consumption (according to (4a)) is set to P pr,th = 250 W. Each simulation is of 1200 seconds duration. 
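Assuming the simulation constants stated above (100 MHz total bandwidth, -174 dBm/Hz noise density, -100 dBm interference), the per-channel noise power $N_i$ entering eq. (1) can be derived as follows; the helper names are mine, not from the paper.

```python
import math

# Simulation constants from Section V of the paper
B_TOTAL_HZ = 100e6           # total system bandwidth
NOISE_DBM_PER_HZ = -174.0    # noise spectral density
INTERFERENCE_DBM = -100.0    # background interference

def dbm_to_watt(dbm):
    """Convert a power level in dBm to watts."""
    return 10.0 ** (dbm / 10.0) / 1000.0

def noise_power_watt(bandwidth_hz):
    """Thermal noise power over a channel of the given bandwidth."""
    return dbm_to_watt(NOISE_DBM_PER_HZ + 10.0 * math.log10(bandwidth_hz))
```

For example, a node allocated a 1 MHz channel sees a noise floor of $-174 + 60 = -114$ dBm, several orders of magnitude below the stated $-100$ dBm interference, so the interference term dominates the denominator of (1) in this setup.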
The results are averaged out over 100 simulation drops. A. Simulation scenario and models In addition to our proposal, we show the performance of the following state-of-the-art solutions: i) maximization of the minimum capacity of nodes (referred to as MMC) via the FlyBS's positioning and the transmission power allocation, published in [20], ii) allocation of the transmission power to maximize an energy efficiency introduced in [19] (referred to as EEM), iii) allocation of the transmission power proposed in [19] extended with K-means-based positioning of the FlyBS, as the solution in [19] does not address the positioning; this approach is denoted as the extended EEM (EEEM). B. Simulation results In this subsection, we present and discuss simulation results. Fig. 3 demonstrates the sum capacity versus number of nodes for C min i = 1 Mbps for all nodes. The sum capacity decreases for larger numbers of nodes for all schemes, because the available bandwidth and the total transmission power is split among more nodes. However, our proposed solution enhances the sum capacity compared to state-of-the-art solutions MMC, EEM, and EEEM by up to 26%, 43%, and 22%, respectively. in the proposed solution, EEM, and EEEM. This is because the increasing C min i further limits the FlyBS's allowed movement according to (6) and, thus, the FlyBS can explore only a smaller feasibility region to optimize the sum capacity. The proposed solution enhances the sum capacity with respect to MMC, EEM, and EEEM by up to 25%, 46%, and 22%, respectively, for N = 100, and by up to 24%, 26%, and 15%, respectively, for N = 180. As our algorithm is iterative, we demonstrate its fast convergence in Figs. 6 and 7 by showing an evolution of the sum capacity over iterations of the FlyBS's positioning and the transmission power allocation. The state-of-the-art schemes are not iterative, thus, their sum capacity is constant. Still, the proposed solution converges very fast, in about three iterations. 
Moreover, even the first iteration leads to a notably higher sum capacity comparing to all state-of-the-art solutions. This confirms that iterative approach does not limit feasibility and practical application of the proposed solution. VI. CONCLUSIONS In this paper, we have provided a geometrical solution maximizing the sum capacity via a positioning of the FlyBS and an allocation of the transmission power to the nodes, while the minimum required capacity to each node is guaranteed. We have shown that the proposed solution enhances the sum capacity by tens of percent compared to state-of-the-art works. In the future work, a scenario with multiple FlyBSs should be studied along with related aspects, such as a management of interference among FlyBSs and an association of the nodes to the FlyBSs, should be addressed. APPENDIX A PROOF TO LEMMA 1 Proof. Using (5), the constraint (4a) is rewritten as: Q −1 i d i αi [k](N i + I)(2 C min i B i − 1) ≤ p T i .(11) Then, the necessary condition to fulfill (4e) is that: N i=1 Q −1 i d i αi (N i + I)(2 C min i B i − 1) ≤ p T max .(12) To derive an explicit form of (12) in terms of the FlyBS's position, we adopt the following inequality derived from linear Taylor approximation with respect to arbitrary x and for η ≥ 1: (a + x) η ≥ (a + τ eω) η + η(e + τ eω) η−1 (x − τ aω),(13) where τ = x aσ , and σ is the approximation parameter such that choosing a smaller σ incurs a smaller error. Hence, the approximation error and, thus, the gap to the optimal solution can be set arbitrarily close to zero by adopting a small enough σ. 
Using (13), for the left-hand side in (12) we write: N i=1 Q −1 i d i αi (N i + I)(2 C min i B i − 1) = N i=1 Q −1 i (N i + I)× (2 C min i B i − 1) × ((X − x i ) 2 + (Y − y i ) 2 + (H − z i ) 2 − H 2 min +H 2 min ) α i 2 ≥ N i=1 Q −1 i (N i + I)(2 C min i B i − 1) × (µ α i 2 i − κ i σα i 2 × µ α i 2 −1 i H 2 min + (X 2 + x 2 i − 2Xx i + Y 2 + y 2 i − 2Y y i + H 2 + z 2 i − 2Hz i − H 2 min ) α i 2µ − α i 2 +1 i ) = ( N i=1 ι i ) q[k] − θ 0 (p T , k) 2 + χ,(14) where κ i = q[k] − v i 2 − H 2 min H 2 min σ , µ i = H 2 min (1 + κ i σ), ι i = 1 2 Q −1 i (N i + I)(2 C min i B i − 1)α i µ α i 2 −1 i ,(15) and χ = − N i=1 ι i (x 2 i + y 2 i + z 2 i )+ ( N i=1 ι i x i ) 2 + ( N i=1 ι i y i ) 2 + ( N i=1 ι i z i ) 2 N i=1 ι i + N i=1 Q −1 i (N i + I)(2 C min i B i − 1)(µ α i 2 i − κ i σα i 2 µ α i 2 −1 i H 2 min ).(16) Then, by incorporating (12) and the right-hand side in (14), Lemma 1 is proved. APPENDIX B PROOF TO LEMMA 2 Proof. We use the following linear approximation (with respect to Γ) for arbitrary values of ∆ and Γ: log 2 (∆ + Γ) ≈ 1 ln (2) (ln(∆ + ∆sξ) + Γ − s∆ξ ∆(1 + sξ) ), (17) where s = Γ ∆ξ . Note that the approximation error can be set arbitrarily close to zero by choosing small enough ξ. 
By taking p T i Qid i α i (Ni+I) as Γ in (17), the sum capacity is rewritten as: Ctot[k] = N i=1 Bi log 2 1 + Qip T i di α i (Ni + I) ≈ N i=1 B i p T i ((X − x i ) 2 + (Y − y i ) 2 + (H − z i ) 2 − H 2 min + H 2 min ) −α i 2 Q −1 i (1 + s i ξ)(N i + I)ln(2) + N i=1 B i ln(2) (ln(1 + s i ξ) − s i ξ 1 + s i ξ ) ≈ N i=1 B i Q i p T i (1 + s i ξ)(N i + I)ln(2) × (µ α i 2 i + κ i σα i 2 µ α i 2 −1 i H 2 min − α i µ α i 2 −1 i 2 ×(X 2 + x 2 i − 2Xx i + Y 2 + y 2 i − 2Y y i + H 2 + z 2 i − 2Hz i − H 2 min )) + N i=1 B i ln(2) (ln(1 + s i ξ) − s i ξ 1 + s i ξ ) = W (p T , k) − ( N i=1 ϕ i ) × (( X − ( N i=1 ϕ i x i N i=1 ϕ i )) 2 + (Y − ( N i=1 ϕ i y i N i=1 ϕ i )) 2 + (H − ( N i=1 ϕ i z i N i=1 ϕ i )) 2 ) = W (p T , k) − ζ(p T , k) q[k] − S 0 (p T , k) 2 ,(18) where S 0 (p T , k) = [ N i=1 ϕ i x i N i=1 ϕ i , N i=1 ϕ i y i N i=1 ϕ i , N i=1 ϕ i z i N i=1 ϕ i ],(19) and W (p T , t k ) = ( N i=1 ϕ i x i ) 2 + ( N i=1 ϕ i y i ) 2 + ( N i=1 ϕ i z i ) 2 N i=1 ϕ i − N i=1 ϕ i (x 2 i + y 2 i + z 2 i ) + N i=1 B i Q i p T i (1 + s i ξ)(N i + I)ln(2) × (µ − This work was supported by the project No. LTT 20004 funded by Ministry of Education, Youth and Sports, Czech Republic and by the grant of Czech Technical University in Prague No. SGS20/169/OHK3/3T/13, and partially by the HUAWEI France supported Chair on Future Wireless Networks at EURECOM. Fig. 1 : 1System model with mobile IoTDs placed within the coverage area of the FlyBS. Fig. 2 : 2Propulsion power model vs. speed for rotary-wing FlyBS. by two spheres centered at q[k − 1] (i.e., the FlyBS's position at the previous time step) and with radii of V k . Furthermore, to interpret the constraint (4e) in terms of the FlyBS's position, in following Lemma 1, we derive a necessary condition for the FlyBS's next position so that there exists a feasible position of the FlyBS for an arbitrary setting of p T . Lemma 1. 
For the power allocation vector $p^T$ at time step $k-1$, a necessary condition for the feasibility of any solution to the positioning of the FlyBS at time step $k$ is given by (9).

Algorithm 1: Determination of the FlyBS positioning
Input: $sp_j$ ($j \in [1, N+2]$), and the planes $z = H_{min}$, $z = H_{max}$
$\Lambda = []$: set of closest points to $S_0$ from the border of $R_f$
1: $\Lambda \leftarrow \Lambda \cup \arg\min_{A \in sp_j} \|S_0 - A\|, \forall j$
2: $\Lambda \leftarrow \Lambda \cup \arg\min_{D \in sp_j \cap sp_k} \|S_0 - D\|, \forall j, k$
3: $\Lambda \leftarrow \Lambda \cup \arg\min_{B \in sp_j \cap \{z = H_{min}\}} \|S_0 - B\|, \forall j$
4: $\Lambda \leftarrow \Lambda \cup \arg\min_{C \in sp_j \cap \{z = H_{max}\}} \|S_0 - C\|, \forall j$
5: $\Lambda \leftarrow \Lambda - \{q \in \Lambda \mid \neg(4b) \vee q \notin \cap_{j=1}^{N+2} sp_j\}$
6: $q \leftarrow \arg\min_{q \in \Lambda} \|S_0 - q\|$
Output: FlyBS's position ($q$)

We assume an area with a size of 600 x 600 m. Within this area, 60 to 180 nodes are dropped. Half of the nodes move based on a random-walk mobility model with a speed of 1 m/s. The other half of the nodes are randomly distributed into six clusters of crowds. The centers of three of the clusters move at a speed of 1 m/s, where each node in those clusters moves with a uniformly distributed speed of [0.6, 1.4] m/s with respect to the center of its cluster. The centers of the other three clusters move at a speed of 1.6 m/s, with the speed of the nodes uniformly distributed over [1.2, 2] m/s with respect to the center of each cluster.

Fig. 3: Sum capacity vs. number of nodes for $C^{min}$ = 1 Mbps.
Fig. 4: Sum capacity vs. $C_i^{min}$ for N = 100.
Fig. 5: Sum capacity vs. $C_i^{min}$ for N = 180.

Figs. 4 and 5 show the impact of $C^{min}$ on the sum capacity for N = 100 and N = 180, respectively. The maximum depicted $C_i^{min}$ represents the largest $C_i^{min}$ for which a feasible solution is found. Note that the value of $C_i^{min}$ in MMC is not set manually, but is derived directly by the scheme itself. For N = 100 and N = 180, the EEM does not find a feasible solution for $C_i^{min}$ larger than 2.2 Mbps and 0.8 Mbps, respectively, due to the lack of positioning of the FlyBS. It is observed that the sum capacity decreases with increasing $C_i^{min}$.

Fig. 6: Convergence of the proposed scheme for N = 100.
Fig. 7: Convergence of the proposed scheme for N = 180.

REFERENCES
[1] B. Li et al., "UAV Communications for 5G and Beyond: Recent Advances and Future Trends," IEEE Internet of Things Journal, vol. 6, no. 2, April 2019.
[2] P. Mach et al., "Power Allocation, Channel Reuse, and Positioning of Flying Base Stations With Realistic Backhaul," IEEE Internet of Things Journal, vol. 9, no. 3, pp. 1790-1805, Feb. 2022.
[3] M. Nikooroo and Z. Becvar, "Optimization of Total Power Consumed by Flying Base Station Serving Mobile Users," IEEE Transactions on Network Science and Engineering, Early Access, 2022.
[4] M. Mozaffari et al., "Mobile Internet of Things: Can UAVs Provide an Energy-Efficient Mobile Architecture?," IEEE GLOBECOM, 2016.
[5] Y. Spyridis et al., "Towards 6G IoT: Tracing Mobile Sensor Nodes with Deep Learning Clustering in UAV Networks," Sensors, vol. 21, 2021.
[6] O. Esrafilian, R. Gangula, and D. Gesbert, "Learning to Communicate in UAV-Aided Wireless Networks: Map-Based Approaches," IEEE Internet of Things Journal, vol. 6, no. 2, 2019.
[7] S. Ahmed et al., "Energy-Efficient UAV-to-User Scheduling to Maximize Throughput in Wireless Networks," IEEE Access, vol. 8, 2020.
[8] J. Ji et al., "Joint Cache Placement, Flight Trajectory, and Transmission Power Optimization for Multi-UAV Assisted Wireless Networks," IEEE Transactions on Wireless Communications, vol. 19, no. 8, 2020.
[9] Z. Wei et al., "Capacity of Unmanned Aerial Vehicle Assisted Data Collection in Wireless Sensor Networks," IEEE Access, vol. 8, 2020.
[10] M. Hua et al., "3D UAV Trajectory and Communication Design for Simultaneous Uplink and Downlink Transmission," IEEE Transactions on Communications, vol. 68, no. 9, 2020.
[11] M. Hua et al., "Throughput Maximization for Full-Duplex UAV Aided Small Cell Wireless Systems," IEEE Wireless Communications Letters, vol. 9, no. 4, 2020.
[12] B. Li et al., "Full-Duplex UAV Relaying for Multiple User Pairs," IEEE Internet of Things Journal, vol. 8, no. 6, 2021.
[13] L. Xie, J. Xu, and Y. Zeng, "Common Throughput Maximization for UAV-Enabled Interference Channel With Wireless Powered Communications," IEEE Transactions on Communications, vol. 68, no. 5, pp. 3197-3212, May 2020.
[14] Y. K. Tun et al., "Energy-Efficient Resource Management in UAV-Assisted Mobile Edge Computing," IEEE Communications Letters, vol. 25, no. 1, pp. 249-253, Jan. 2021.
[15] L. Shi and S. Xu, "UAV Path Planning With QoS Constraint in Device-to-Device 5G Networks Using Particle Swarm Optimization," IEEE Access, vol. 8, pp. 137884-137896, 2020.
[16] M. Ishigami and T. Sugiyama, "A Novel Drone's Height Control Algorithm for Throughput Optimization in Disaster Resilient Network," IEEE Transactions on Vehicular Technology, vol. 69, no. 12, 2020.
[17] R. Chen et al., "Multi-UAV Coverage Scheme for Average Capacity Maximization," IEEE Communications Letters, vol. 24, no. 3, 2020.
[18] W. Zhang et al., "Three-Dimension Trajectory Design for Multi-UAV Wireless Network With Deep Reinforcement Learning," IEEE Transactions on Vehicular Technology, vol. 70, no. 1, 2021.
[19] S. T. Muntaha et al., "Energy Efficiency and Hover Time Optimization in UAV-Based HetNets," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 8, 2021.
[20] I. Valiulahi and C. Masouros, "Multi-UAV Deployment for Throughput Maximization in the Presence of Co-Channel Interference," IEEE Internet of Things Journal, vol. 8, no. 5, 2020.
[21] Y. Zeng, J. Xu, and R. Zhang, "Energy Minimization for Wireless Communication With Rotary-Wing UAV," IEEE Transactions on Wireless Communications, vol. 18, no. 4, April 2019.
[22] M. Nikooroo and Z. Becvar, "Optimal Positioning of Flying Base Stations and Transmission Power Allocation in NOMA Networks," IEEE Transactions on Wireless Communications, vol. 21, no. 2, pp. 1319-1334, Feb. 2022.
[23] M. Alzenad et al., "3-D Placement of an Unmanned Aerial Vehicle Base Station (UAV-BS) for Energy-Efficient Maximal Coverage," IEEE Wireless Communications Letters, vol. 6, no. 4, pp. 434-437, Aug. 2017.
[24] D. Eberly, "Distance to Circles in 3D," Geometric Tools, https://www.geometrictools.com/Documentation//DistanceToCircle3.pdf.
[]
[ "ConvXAI : Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing" ]
[ "Hua Shen [email protected] ", "Chieh-Yang Huang [email protected] ", "Tongshuang Wu ", "Ting-Hao Kenneth Huang ", "\nPennsylvania State University\nUSA\n", "\nPennsylvania State University\nUSA\n", "\nCarnegie Mellon University\nUSA\n", "\nPennsylvania State University\nUSA\n" ]
[ "Pennsylvania State University\nUSA", "Pennsylvania State University\nUSA", "Carnegie Mellon University\nUSA", "Pennsylvania State University\nUSA" ]
[ "Conference '23" ]
Figure 1: A walkthrough example demonstrating the steps of using ConvXAI in the use scenario. Users first choose the target conference (A) and edit the abstract text as input (B). Afterwards, they can check the writing model predictions (C1) and the integrated writing review (C2), then ask the XAI agent explanation questions about understanding the writing model predictions and review (D1) as well as how to improve their writing accordingly (D2). Writers can iteratively refine and resubmit their abstracts (C).

Abstract: While various AI explanation (XAI) methods have been proposed to interpret AI systems, whether the state-of-the-art XAI methods are practically useful for humans remains an open question with inconsistent findings. To improve the usefulness of XAI methods, a line of studies identifies the gaps between diverse and dynamic real-world user needs and the status quo of XAI methods. Although prior studies envision mitigating these gaps by integrating multiple XAI methods into universal XAI interfaces (e.g., conversational or GUI-based XAI systems), there is a lack of work investigating how these systems should be designed to meet practical user needs. In this study, we present ConvXAI, a conversational XAI system that incorporates multiple XAI types and empowers users to request a variety of XAI questions via a universal XAI dialogue interface. In particular, we innovatively embed practical user needs (i.e., four principles grounded in the formative study) into the ConvXAI design to improve practical usefulness. Further, we design a domain-specific language (DSL) to implement the essential conversational XAI modules and release the core conversational universal XAI API for generalization. The findings from two within-subjects studies with 21 users show that ConvXAI is more useful for humans in perceiving understanding and writing improvement, and in improving the writing process in terms of productivity and sentence quality. Finally, our work contributes insight into the design space of useful XAI, reveals humans' XAI usage patterns with empirical evidence in practice, and identifies opportunities for future useful XAI work. We release the open-sourced ConvXAI code for future study.
10.48550/arxiv.2305.09770
[ "https://export.arxiv.org/pdf/2305.09770v2.pdf" ]
258,741,213
2305.09770
8b3943eeb8d9517cbce3212b5b85bebabd8441c0
1 Introduction

The advancement of deep learning has led to breakthroughs in a number of artificial intelligence (AI) systems. Yet, the superior performance of AI systems is often achieved at the expense of the interpretability of deep learning models [47]. A surging collection of eXplainable AI (XAI) methods has been developed to help humans understand AI from different facets. Each of these XAI methods commonly focuses on answering one or several XAI questions that users are interested in [44].
For example, saliency maps and feature attributions [45,61] aim to help humans answer "why" questions by highlighting key rationales for AI predictions; counterfactual explanations answer "why X not Y" by perturbing the input in ways that impact model behaviors [47,81], etc. Despite their potential, there have been inconsistent findings on whether XAI methods are reliably helpful for humans in real-world applications [3,56,67]. For instance, there is evidence that different types of explanations can support their intended use cases, such as debugging models [41], human-AI teaming [24], etc. In contrast, a number of studies uncover that applying state-of-the-art XAI methods to real-world human tasks does not always help users better simulate model predictions [57], understand AI model mistakes [67], etc. To address this gap in usefulness, a line of work dives deep into the mismatch between real-world user demands and the status quo of XAI methods. In particular, Liao et al. [44] present the XAI Question Bank, which demonstrates that users ask diverse XAI questions covering the whole AI lifecycle (e.g., building datasets, training and evaluating AI models, deploying AI systems, etc.). However, Shen and Huang [68] compare these practical user questions with over 200 XAI studies, and find that cutting-edge XAI methods are highly skewed toward answering particular types of XAI questions (e.g., "why", "how") but overlook the others [4]. Furthermore, users also tend to have multiple, dynamic, and sometimes interdependent questions about AI explanations [38,76]. Taking Figure 1 as an example, a researcher interacts with an AI assistant (built based on CODA-19 [30]) to write scientific papers. As shown in (2), the AI assistant suggests rewriting sentence S3 in (1) from describing the finding to providing the method context for the work. To understand this suggestion, the writer may first ask, "how confident does the model make this prediction?"
(confidence explanation, (1)), then "how can I edit them to describe the method?" (counterfactual explanation, (2)). They might also want to further familiarize themselves with the general writing styles of finding or method sentences by asking for relevant training examples (example explanation). Answering this series of XAI questions requires universal XAI interfaces, which integrate multiple AI explanations into a unified interface, to satisfy diverse user needs in practice. Prior studies commonly address XAI integration by displaying multiple XAI methods in one Graphical User Interface [77,80], so that users can proactively select one or multiple XAI methods for their customized needs. Nevertheless, we cannot neglect the drawbacks of GUI-based universal XAI interfaces: from users' perspective, displaying too much XAI information can potentially result in additional cognitive overload [57]. From XAI developers' viewpoint, GUI-based universal XAI interfaces require continual UI updates to add the latest XAI methods. Besides, they are also difficult to design into some non-visual real-world AI systems such as virtual speech assistants (e.g., Amazon Alexa, Google Home). Inspired by the flexibility of dialog systems [19,35], prior work has envisioned the potential of "Explainability as a Dialogue" to balance cognitive load with diverse user needs [38,46,72,75,76]. For example, through interviews with healthcare professionals and policymakers, Lakkaraju et al. [38] found that decision makers strongly prefer interactive explanations in natural language dialogue form, and thereby advocated for interactive explanations. Nevertheless, there has been little exploration of how a conversational XAI system should be designed for practical user needs and how users might react to it.
In this study, we propose a conversational XAI system, which incorporates multiple types of AI explanations into a universal XAI interface and empowers users to ask a variety of XAI questions via a concise dialogue interface. More importantly, we identify a set of practical XAI user demands through formative human studies with 7 users of diverse backgrounds, and represent them as four design principles of useful XAI. Specifically, these conversational XAI systems should be able to address various user questions ("multifaceted"), actively provide XAI tutorials and suggestions ("mix-initiative"), empower users to dig into AI explanations ("context-aware drill-down"), and allow flexible customization with details on demand ("controllability"). Following these practical XAI user demands, we develop a conversational XAI prototype system, ConvXAI, and incorporate the four user-oriented XAI principles into its design. At the core of the system, we develop a domain-specific language (DSL) for user-oriented conversational XAI systems, which i) unifies a set of multimodal XAI methods into a single AI-Explainer module; ii) leverages a rule-based user intent classifier and a template-based response generator for conversational user interactions; iii) deploys a global conversational tracker to capture turn-based user intent transitions and customization variables. We implement this DSL into a universal conversational XAI API and release it publicly on GitHub for future research. Furthermore, we examine the potential of ConvXAI in the context of scientific writing, where writers use ConvXAI to improve their paper abstracts for submission to top-tier research conferences.
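The three DSL modules can be illustrated with a minimal, hypothetical sketch; the module names, keyword rules, and templates below are illustrative assumptions, not the released ConvXAI API:

```python
# Minimal sketch of the three conversational XAI modules described above:
# (i) a unified AI-Explainer registry, (ii) a rule-based intent classifier
# with template-based responses, (iii) a global conversational tracker.
# All names, rules, and templates are illustrative, not the ConvXAI API.

# (i) AI-Explainer module: heterogeneous XAI methods behind one registry.
EXPLAINERS = {
    "data": lambda ctx: f"The model learned from {ctx['n_train']} abstracts.",
    "confidence": lambda ctx: f"Prediction confidence: {ctx['confidence']:.0%}.",
    "example": lambda ctx: f"A similar training sentence: {ctx['nearest_example']!r}",
}

# (ii) Rule-based user intent classifier (keyword matching).
INTENT_RULES = {
    "data": ("what data", "learn from"),
    "confidence": ("confident", "confidence"),
    "example": ("example", "similar sentence"),
}

def classify_intent(utterance):
    text = utterance.lower()
    for intent, keywords in INTENT_RULES.items():
        if any(k in text for k in keywords):
            return intent
    return None

# (iii) Global conversational tracker: records turn-based intent transitions
# and user customization variables (e.g., the sentence under discussion).
class ConversationTracker:
    def __init__(self):
        self.history = []      # intent transitions across turns
        self.variables = {}    # user customizations carried across turns

    def respond(self, utterance, context):
        intent = classify_intent(utterance)
        self.history.append(intent)
        if intent is None:
            return "Sorry, I can explain the data, confidence, or examples."
        return EXPLAINERS[intent]({**context, **self.variables})
```

In this sketch, a turn like "How confident is the model?" is classified as a confidence intent and answered from a template; the tracker's history is what would let a fuller system condition responses on earlier turns (e.g., drill-down follow-ups).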
In the use case of scientific writing, ConvXAI supports users to interact with two AI writing assistants which assess (on the sentence level): (1) whether the abstract follows the typical structures (e.g., "background -> purpose -> method -> findings") of the target conferences, and (2) whether the sentences' writing style matches the conference norm. We particularly define a "structure score" and a "quality score" to give users quantitative feedback accompanied by detailed text-based suggestions. Users can then chat with ConvXAI to understand the writing feedback and improve their papers with the help of AI explanations. We evaluate the ConvXAI system by conducting two within-subjects user studies with the same user group. We compare ConvXAI with SelectXAI, a traditional GUI-based universal XAI system that statically displays all the XAIs on the interface in a collapsible manner (Figure 4). Through the user study on the open-ended writing Task 1 with 13 participants, we found that most users perceived ConvXAI to be more useful in understanding AI writing feedback and improving human writing. The results also validated the lower cognitive load and the effectiveness of the four user-oriented design principles. Furthermore, in the user study on the well-defined writing Task 2 with 8 participants, we collected all the users' writing artifacts produced with the ConvXAI and SelectXAI systems and evaluated them with both a human evaluator and auto-metrics. We observed that both ConvXAI and SelectXAI can help users write better artifacts based on the built-in auto-metrics, with ConvXAI being more useful for improving writing quality. However, the human evaluator's measurement did not align well with the auto-metrics; we posit this indicates the importance of designing AI model predictions that align with human expectations. Based on these studies and findings, we further contribute insights into how humans select XAIs for practical usage via the conversational universal XAI interface.
We summarize five findings on humans' practical usage patterns in ConvXAI, and four core ingredients of useful XAI systems for future XAI work. In short, we observed the diversity of user needs across humans and their temporal change over time. The quantitative evidence also shows a temporal decrease in reliance on the XAI tutorial but an increased demand for XAI customization, etc. We conclude this work with limitations and future directions.

• We propose ConvXAI: a conversational XAI system that innovatively designs practical user needs (summarized as four XAI rationales) into a universal XAI interface, which includes an extensible variety of XAI methods.
• We release the computational pipeline of the conversational universal XAI interface, which integrates four essential modules (i.e., user intent classifier, multiple AI explainers, natural language generation, and global conversational tracker) to process and respond to diverse user XAI questions.
• We summarize findings from the two within-subjects studies showing that ConvXAI is useful for humans to improve their perceived and auto-metric writing performance in practice.
• We provide insights into humans' usage patterns in choosing from diverse XAI methods to finish practical tasks with ConvXAI, and the core ingredients of useful XAI for humans.

2 Related Work

Human-Centered AI Explanations

Earlier studies in the field of Explainable Artificial Intelligence (XAI) primarily focus on developing different XAI techniques, which aim to explain why the model arrives at its predictions. This line of studies can be broadly categorized into generating post-hoc interpretations for well-trained deep learning models [25] and designing self-explaining models [40,69,70].
Specifically, the majority of XAI methods aim to provide post-hoc interpretations either for each input instance (i.e., "local explanations") [16,37,65] or for providing a global view of how the AI model works (i.e., "global explanations") [62]; our study covers both. Additionally, XAI approaches are also divided into different formats [68], including example-based [20], feature-based [61], free-text-based [9,58], and rule-based explanations [62], etc.; our study covers a range of these XAI formats. Despite the increasing number of XAI approaches proposed, evaluating AI explanations with humans is still a challenging problem. Doshi-Velez and Kim [17] propose a taxonomy of interpretability evaluation including "application-grounded", "human-grounded" and "functionally-grounded" evaluation metrics based on different levels of human involvement and application tasks. The majority of proposed XAI approaches are commonly validated using "functionally-grounded" evaluation methods [28,33,79], which rely on automatic metrics (e.g., "plausibility") on proxy tasks without real humans participating [5,50,86]. Furthermore, we can see burgeoning efforts to involve real humans in evaluating AI explanations under the theme of "human-centered explainable AI". State-of-the-art XAI methods are applied to real human tasks, such as assessing human understanding [67], human simulatability [62,69], human trust in and satisfaction with AI predictions [15,73], and human-AI teamwork performance [12], etc. [20,23,26]. However, many of the human studies show that AI explanations are not always helpful for human understanding in tasks such as simulating model predictions [69], analyzing model failures [67], and human-AI team collaboration [4]. For instance, Bansal et al.
[4] conducted human studies to investigate whether XAI helps achieve complementary team performance, and showed that none of the explanation conditions produced an accuracy significantly higher than the simple baseline of showing the confidence. In response, a line of work dives deep into the gaps between real-world user demands and the status quo of XAI methods. Their findings reveal that users tend to ask multiple, dynamic, and sometimes interdependent questions about AI explanations, which state-of-the-art XAI methods are mostly unable to satisfy. Although GUI-based XAI systems, which integrate multiple XAIs into one interface, can potentially mitigate this issue, they inevitably suffer from drawbacks such as cognitive overload, frequent UI updates, etc. Therefore, prior studies envision the potential of "Explainability as a Dialogue" to balance cognitive load with diverse user needs [38,46,72,75,76]. For example, through interviews with healthcare professionals and policymakers, Lakkaraju et al. [38] found that decision makers strongly prefer interactive explanations in natural language dialogue form, and thereby advocated for interactive explanations. Nevertheless, there has been little exploration of how a conversational XAI system should be designed in practice and how users might react to it. Our studies aim to resolve this problem by incorporating practical user needs into the conversational XAI design, proposing a user-oriented conversational universal XAI interface, and investigating human behaviors while using these systems.

Conversational AI Systems

Our work is situated within the rich body of conversational AI or chatbot studies, which entail a long research history in the NLP [43,59] and HCI fields [19,66]. Jurafsky [35] proposes that conversation between humans is an intricate and complex joint activity, which entails a set of imperative properties: multiple turns, common grounding, dialogue structure, and mixed initiative.
By incorporating these properties, conversational interactions have also been shown to significantly contribute to establishing long-term rapport and trust between humans and systems [7]. User interaction experience can be improved by a set of factors in conversational AI systems [66]. For example, Chaves and Gerosa [11] describe how human-like social characteristics, such as conversational intelligence and manners, may benefit the user experience. These principles and theories inform our design of a conversational AI explanation system that fulfills diverse user needs in practice. Our study is deeply rooted in conversational explanations in XAI, where users request their demanded explanations through chatbot-based AI assistants [74,76]. Some previous studies have explored the effectiveness of interactive dialogues in explaining online symptom checkers (OSCs) [75,76]. For example, Tsai et al. [76] accompanied the diagnostic and triage recommendations of the OSCs with three types of explanations (i.e., rationale-based, feature-based, and example-based explanations) during the conversational flows. The findings yield four implications for future OSC designs, which include empowering users with more control, generating multifaceted and context-aware explanations, and being cautious of the potential downsides. However, these existing conversational AI explanation systems are still at a preliminary stage: they only provide one type of explanation and do not let users select different explanation types. Also, they are far from being able to incorporate users' feedback into producing AI explanations (e.g., enabling users to choose the counterfactual prediction foil) or to produce personalized explanations for users' individual needs. In addition, these conversational AI explanation systems are primarily applied to improve system transparency and comprehensibility, thus helping users understand and build trust in the systems.
Little attention has been paid to examining whether and how conversational AI explanations can indeed be useful for users to improve their performance in human-AI collaborative tasks. Our work further improves conversational AI explanation systems from two perspectives: i) we focus on AI tasks where humans' goal is to improve their task performance (i.e., scientific writing) rather than merely gain understanding of the AI predictions; ii) we identify four design principles and incorporate them into the empirical system design for further evaluation with human tasks. Our work aims to further unleash the capability of conversational AI explanations and make them more useful for human tasks.

Writing Support Tools

The improvements in large language models (LMs) like GPT-3 [8] and Meena [2] have provided unprecedented language generation power. This has led to increasing interest in how these new technologies may support writers with AI-assisted writing support tools [39]. In these human-AI collaborative writing tasks, writers interact with AI writing support tools not only to understand their assessments, but also to leverage their feedback to improve the human writing output [29]. A range of technologies have been developed to support human writing. Many of them focus on lower-level linguistic improvement, such as proofreading, text generation, grammar correction, auto-completion, etc. For instance, Roemmele and Gordon [63] proposed the Creative Help system, which uses a recurrent neural network model to generate suggestions for the next sentence. Furthermore, a few studies propose AI assistants that leverage the generation capability of language models to generate inspirations that assist writers' ideation process [13,22,78]. For instance, Wordcraft [13] is an AI-assisted editor proposed for story writing, in which a writer and a dialogue system collaborate to write a story.
The system further provides natural language generation support for planning, writing, and editing the story. In addition, a number of studies design AI assistants to provide assessment and feedback that help improve human writing iteratively [18]. For example, Huang et al. [31] argue that writing, as a complex creative task, demands rich feedback in the revision process. They present Feedback Orchestration to guide writers in integrating feedback into revisions via a rhetorical structure. More studies address AI-assisted peer review [10]. For example, Yuan et al. [83] automate the scientific review process by using LMs to generate reviews for scientific papers. In this work, we apply conversational AI explanations to the human scientific writing task, in which humans submit their writing to the system and iteratively make a sequence of small decisions based on the AI feedback and explanations. As writing is a goal-directed thinking process [22], the goal of the conversational XAI system is to support writers in understanding the feedback and further improving their writing outputs. Therefore, we aim to evaluate the effects of conversational AI explanations in terms of not only helping users understand the AI predictions but also improving their writing performance.

3 Understanding Practical User Demands in Conversational XAI: Formative Study

As discussed above, AI explanations should be useful for humans in real-world human tasks. Prior work has envisioned the potential of the conversational XAI approach to cater to the diverse, dynamic, and interdependent user needs in practice. However, there is a lack of in-depth analysis of how a conversational XAI system should be designed for practical use demands and how users might react to it.
To further investigate these issues from users' perspective, we conducted a formative study with 7 users of diverse backgrounds to gather information on the necessity of a conversational XAI system and the practical user demands for using XAI to improve human performance. We next introduce the practical AI task, the demanded XAI methods, the formative study process, and the key findings.

AI Tasks in Practice: Scientific Writing Tasks

Existing XAI methods commonly build upon other AI tasks, explaining the model behavior that results in the AI predictions. However, existing XAI studies mainly evaluate human perceptions (e.g., understanding, trust, etc.) of the AI systems [27,69] instead of their usefulness in influencing human performance. With the objective of improving XAI usefulness, we select scientific writing as the practical AI task. The reasons derive from multiple facets: (1) the goal of using XAI in the scientific writing task is to help humans better "understand" the AI feedback, and further "improve" human performance in the writing process and outputs. This XAI goal of "understand AI prediction" + "improve human performance" differs from a number of traditional XAI human studies, which merely assess "human perception of XAI effects" (e.g., rating human trust and satisfaction). Therefore, by assessing the change in human performance, we can evaluate the usefulness of conversational XAI in a more objective and practical manner. (2) The writing task involves iterative and frequent human-AI interactions, in which humans use AI explanations to make successive decisions at each step instead of a one-shot prediction. The frequent usage of XAI methods can potentially lead to a more profound influence on human performance, better measuring XAI usefulness. (3) The scientific writing task involves a more complex cognitive process for humans, which can potentially elicit more of XAI's useful effects on the writing process.
In this study, we apply conversational AI explanations to scientific writing tasks, in which humans aim to leverage the conversational AI explanations to understand the writing models' feedback (Figure 1D1) (i.e., including both model writing predictions and integrated writing reviews), and further improve their writing (Figure 1D2). This writing task reflects more diverse user goals and therefore well represents practical scenarios where users need multiple forms of XAIs. To form the writer-AI interaction scenario, we develop two AI writing models to generate writing structure and style predictions, respectively (Figure 1C1). The writing structure model gives each sentence a research aspect label, indicating which aspect the sentence describes among five categories (i.e., background, purpose, method, contribution/finding, and others). The writing style model provides each sentence a style quality score assessing how well the writing style of the sentence matches the published sentences of the target conference. Based on the predictions of all sentences, we further use algorithms to integrate all sentences' predictions into the writing reviews (Figure 1C2).

Demand Analysis of XAI Questions

We now discuss the types of XAI questions the conversational XAI system should be prepared to answer for the users. We see several different kinds of questions occur in the use case, with some more easily covered by conventional explanation methods (which heavily focus on explaining model predictions), while others are not typically considered "XAI scenarios" (e.g., the sentence length question can arguably just be thought of as data statistics). However, we argue that a conversational XAI system should also be prepared to answer these atypical questions, as they also count as part of the user understanding procedure.
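To make the two writing models' per-sentence predictions and the review integration described earlier concrete, here is a minimal sketch; the five aspect labels follow the paper's description, while the expected structure, thresholds, and messages are simplified assumptions rather than the paper's exact algorithm:

```python
# Sketch: integrating per-sentence aspect labels and style quality scores
# into an abstract-level writing review. The aspect labels follow the
# paper; the expected order and the 1-5 score threshold are assumptions.

ASPECTS = ["background", "purpose", "method", "finding", "others"]
EXPECTED_ORDER = ["background", "purpose", "method", "finding"]

def integrate_review(sentences):
    """sentences: list of {'text': str, 'aspect': str, 'quality': int 1-5}."""
    reviews = []
    seen = [s["aspect"] for s in sentences]
    # Flag typical-structure aspects that no sentence covers.
    for aspect in EXPECTED_ORDER:
        if aspect not in seen:
            reviews.append(f"No sentence describes the {aspect} aspect.")
    # Flag sentences whose style score falls below the conference norm.
    for i, s in enumerate(sentences, start=1):
        if s["quality"] < 3:
            reviews.append(
                f"S{i}: quality score {s['quality']} is below the "
                f"conference score range; consider rewriting."
            )
    return reviews
```

The review messages mirror the kind of sentence-level feedback shown in the interface (e.g., "S7: better to describe the purpose aspect here").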
In fact, we saw a similar trend in the XAI Question Bank [44]: in practical XAI use cases, users are broadly interested in XAI questions beyond model behaviors, such as "how can I utilize the system output?". Unfortunately, when Shen and Huang [68] dig into the discrepancy between practical user needs and state-of-the-art XAI techniques, they find that existing XAI approaches largely ignore many XAI questions that are useful for humans in practice. Given these observations, we deem that a conversational XAI system should be prepared to bridge any knowledge gaps between the users and the AI models [48]. That is, the conversational XAI system should be able to answer a variety of XAI questions that cover different perspectives of the system, including AI models, datasets, training and inference stages, and even system limitations [44]. Below, we categorize our explanation methods based on human goals. We hope this could inspire future work to further expand beyond the conventional, prediction-focused explanations. More concretely, the scientific writing task design inspires us to design XAI questions around four XAI goals, as illustrated in Table 1 (the last row, "Understand Suggestion", is introduced after the formative study and is revisited in Section 4): (a) understanding data, which uses data to help contextualize users' understanding of where their abstract sits in the larger distribution; (b) understanding model, which provides information on the underlying model structure so users can assess the model's reliability; (c) understanding instance, which allows users to ask questions that dive into each individual prediction unit (i.e., sentence) and therefore inspect each prediction from the model; and (d) improving instance, which goes one step further than understanding and specifically targets helping people improve their writing by proactively suggesting potential changes.
Overall, by conjecturing the use scenario of a conversational XAI system and analyzing the XAI questions the users potentially need, we embody these XAI requests into a preliminary conversational XAI system to conduct the formative study (Section 3) at the next stage. Equipped with the above XAI questions derived from user needs, our next sub-goal is to investigate how to design the conversational XAI system to effectively respond to the diverse user needs. To this end, we implemented a preliminary system that covered the 9 formats of AI explanations mentioned above, and conducted a formative study with 7 users to collect their comments. Consequently, we summarized four imperative principles for designing conversational XAI systems and further incorporated them into our formal system design illustrated in the next section (Section 4).

[Table 1 (columns: XAI Goal, User Question Samples, XAI Formats, Algorithm) [64]. Note that the italic Understand Suggestion is introduced in Section 4.1 after the formative study (Section 3), whereas all the other explanations are implemented in the formative study.]

[Figure 2: an example review summary from the preliminary system, e.g., "Your paper received an overall quality score as 2 (1 to 5 levels)", with per-sentence suggestions such as "S2: Too short, please rewrite it into a longer sentence."]

Preliminary System, Participants and Study Procedure

A Preliminary Conversational XAI System. We build a preliminary system of conversational AI explanations for scientific writing support. The interface looks not too different from Figure 1 (which we introduce in more detail in Section 4), and is vertically
divided into two panels: a writing panel on the left where users can inspect and edit their abstracts, and an explanation panel on the right where users interact with the XAI agent. In the writing panel, the most important component is the abstract input box. Users can iteratively edit their abstracts and submit them to receive AI assessments of the writing structure and style. 3 As our AI writing assistants make predictions at the sentence level, we also visually split the abstract into sentences, and use the background color to encode model predictions (either the writing structure or style, depending on which assistant AI the user is focusing on). Users can hover over these sentences to explicitly inspect the predicted labels, or click on a sentence to start an explanation session on it in the explanation panel. As for the explanation panel, we primarily design it to enable the support of multiple XAI methods. At the initial entry, the panel provides a summary of the recommended edits ( Figure 2A, similar to the final system). Then, as participants dive into each individual sentence, we allow them to select XAI methods they might find suitable by clicking on the corresponding buttons ( Figure 2B, different from the final system; more explanations in Section 4). The button-based design is inspired by the standard interface of service chatbots [82], but participants were still allowed to simply type their own questions. Specifically, we enable ConvXAI to include nine XAI questions that target five XAI goals (i.e., "understand data", "understand model", "understand criteria", "understand instance", "improve instance"), as shown in Table 1.
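The button-based selection essentially maps each goal button to a fixed set of canned XAI questions. The sketch below is hypothetical: the goal names follow Table 1, but the question phrasings and the function name are illustrative, and the set shown is a subset of the nine questions.

```python
# Hypothetical button -> question mapping for the preliminary interface.
# Goal names follow Table 1; question texts are paraphrased examples.
XAI_BUTTONS = {
    "understand data": ["What data did the system learn from?"],
    "understand model": ["What kind of model is making the predictions?"],
    "understand criteria": ["What writing criteria do the reviews use?"],
    "understand instance": [
        "How confident is the model for this prediction?",
        "What are some published sentences that look similar to mine?",
        "Which words are important for this prediction?",
    ],
    "improve instance": ["How can I revise the input to get a different label?"],
}

def questions_for(goal):
    """Return the canned questions behind one goal button (empty if unknown)."""
    return XAI_BUTTONS.get(goal, [])
```

Clicking a button simply injects the corresponding question into the chat, which is why participants could equally type free-form questions instead.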
4 This setting is also similar to existing XAI interactive dialogue systems [75,76], where they provide different formats of AI explanation for the same prediction and evaluate human assessments of the different explanations.

Participants and Study Procedure. We recruited seven participants with diverse backgrounds and occupations for the formative study. The participants include 3 males and 4 females, ranging from undergraduate and Ph.D. students to industry employees and a university professor. Except for one master's student, all others have substantial AI knowledge and writing experience, and have published over 5 papers. The demographic statistics of the seven participants are summarized in Table 3 in Appendix A.1. The formative studies were conducted remotely via video conference calls on Zoom. During the study, participants were asked to either bring one of their own abstract drafts or use an example provided by us. We conducted a semi Wizard-of-Oz (WoZ) process where we encouraged users to think aloud while asking the XAI agent for AI explanations, keeping in mind the goal of improving their abstract writing. One researcher, who had several years of experience in HCI and algorithmic AI explanation, acted as the XAI agent in this WoZ setting. We collected users' reflections on the system and summarized them into the design rationales below.

Design Principles for Conversational XAI Systems

While formative study participants all appreciated the access to multiple XAI methods, they were frequently overwhelmed by the large number of options available. We combine their feedback with theoretical linguistic properties of human conversation [32,35], and propose the following four design requirements for conversational XAI systems: R.1 Multifaceted: a conversational XAI system should provide diverse types and formats of AI explanations for users to choose from, and use multi-modal visualization techniques to display the explanations efficiently.
As we have argued in Section 3, to satisfy diverse user needs [44,68], it is imperative to provide multiple XAI types and formats. Nevertheless, some formative study participants noticed that having all the explanations displayed at once is overwhelming, and preferred an "overview first, details on demand" structure [71]. I-6 discussed that "I can tell the system knows a variety of AI explanations. However, it can be too much for me to understand all these explanations at once. I would prefer to know the 'big picture' first, and then drill down with 'some options' as I need to dive deeper." R.2 Mixed-initiative: a conversational XAI system should enable both the user and the XAI agent to initiate the conversation. Especially, it should proactively speculate on the XAI user need and prompt with next-step suggestions. One unique characteristic of conversations is mixed initiative, i.e., who drives the conversation [35]. Like many existing conversational systems, we aim to mimic human-human conversations where initiative shifts back and forth between the human and the conversational XAI agent. This way, not only can the system answer users' questions, it can also occasionally steer the conversation in different directions. In our study, we also found this to be essential, especially when users do not have a clear goal in mind (e.g., "Which sentence in the abstract should I look into first?"). R.3 Context-aware drill-down: a conversational XAI system should allow users to drill down into AI explanations through multi-turn conversations with awareness of the context. Linguistic theories model human conversation as a sequence of turns, and conversational analysis theory [32] describes complex dialogues as joining basic units, named adjacency pairs. This was also empirically validated in our pilot study.
For instance, I-2 discussed potentially switching between explanations based on current observations: "I might directly ask the system how to rewrite the sentence to change this sentence into the background aspect (i.e., "counterfactual explanation"). But if its rewritten sentences are not good enough, I would check the most similar examples of the background aspect to learn their style and write on my own then (i.e., "similar examples")". Carrying context over throughout the conversation, without users repeating themselves too much, is useful for making the conversation natural and continuous. R.4 Controllability: a conversational XAI system should be able to generate customized AI explanations that satisfy the user's needs and context. This includes both displaying only the explanations that are relevant to their questions (e.g., answering "why this prediction" with feature attribution), and adjusting the explanation settings (e.g., the number of important words to highlight). As I-7 said, "I spent too much time on figuring out what each XAI means, then I forget what I want to write in the abstract. It would be great to give me the AI explanations targeting my question and enable me to input some variables to generate the XAIs I want." At the same time, users still preferred to have a default explanation first, with options to control the variables or dive deeper into details afterwards, so they only need to pay attention to the parts that are worth personalizing. Note that Lakkaraju et al. [38] outlined five principles of interactive explanation from interviewing 26 real-world decision makers in healthcare and policy-making.
They summarized five considerations for interactive explanation methods: the explanation should interact appropriately (understand continuous requests for explanations), respond appropriately (respond with informative, properly contextualized, and satisfying explanations), give properly calibrated responses (provide reliable notions of confidence), reduce explainability overhead (eliminate the need to write code), and consider context (condition the understanding of inputs on the previous

[Figure 3: An overview of the ConvXAI system. ConvXAI includes two writing models (A) to generate writing structure predictions (A1) and writing style predictions (A2). The example dialogue panels illustrate prediction confidence, counterfactual prediction, similar published sentences, important words, and label distribution.]
interactions).

[Figure 3 caption, continued: the XAI agent in ConvXAI provides an integrated writing review (B), followed by conversations with users to explain the writing predictions and reviews. The dialogue flows are designed to follow the four principles of "multifaceted" (C 1 ), "mixed-initiative" (C 2 ), "context-aware drill-down" (C 3 ) and "controllability" (C 4 ).]

Our designs indeed share overlaps with these considerations, e.g., as a dialogue system people do not need to write code, and "consider context" maps to "context-aware drill-down". However, while Lakkaraju et al. mostly focus on properties of the explanation method, we extend this to also consider the design of the interactive interface wrapping the explanations. Therefore, ours also includes principles of interaction, especially context-awareness and controllability. We next incorporate all principles into our formal system design in Section 4.

ConvXAI

Based on the use scenario and design principles, we present ConvXAI, a system that applies conversational AI explanations to scientific writing support tasks. The system aims to leverage conversational AI explanations of the AI writing models to improve human scientific writing. We extend the system developed in the formative study (Section 3), which consists of a writing panel and an explanation panel. The writing panel is quite similar to the formative study's: it enables users to iteratively submit their paper abstract and check the writing model predictions for each sentence. We introduce the workflow of the scientific writing task, and how the two writing models generate predictions and reviews, in more detail in Section 3. On the other hand, we significantly improve the conversational AI explanation panel by incorporating the four design principles described above.
Below, we elaborate on the ten formats of AI explanations included in our ConvXAI system, how we design the conversational XAI with the four principles, and the implementation of the system pipeline in detail.

ConvXAI explanation panel: Interface and dialogue flow

Visually, the final ConvXAI explanation panel is not too different from the one in our formative study (see Figure 1D vs. Figure 3B). However, we significantly revise the underlying dialogue mechanism according to the four requirements, so users can interact more smoothly with the XAI agent. We use Figure 3C to demonstrate the design. To make ConvXAI mixed-initiative (R.2 in Section 3.4), we start the explanation dialogue with a review summary of the outputs of the writing structure model and the style model ( Figure 3B). The user can select any one sentence (in this case, the third sentence, with sentence id S3) in this suggestion list to dive in, and start a conversation session on the sentence. Uniquely, to maintain multifaceted explanations (R.1) without overwhelming users, we add an additional explanation type, understand suggestion (answering questions like "Can you explain this review?"), which provides general contextualization of a given suggestion ( Figure 3C 1 ). To make it serve as proactive guidance towards more sophisticated XAI methods, the agent also initiates a prompt message ("to improve... ") with a subset of relevant XAIs, based on the "guess" that users would want to improve their writing at this point. To enable context-aware drill-down (R.3), the user questions as well as the agent answers are considered in sequence. For example, in Figure 3C 3 , the user receives a review suggesting she describe the background aspect instead of the purpose aspect for the selected S3. The user first wants to know how confident the model is in this prediction. Given that the model confidence is quite high (around 0.95), she wants to know how much she has to change in order to receive a different label.
The agent directly contextualizes this question based on the suggested change in Figure 3C 2 ("suggested to describe background"), and responds with a rewrite for the label background without having to double-check with the user first. Still, the default may not reflect users' judgement in some cases. To mitigate potential wrong contextualization, we make the agent always proactively offer hints for controllability (R.4), e.g., "would you like to..." at the bottom of Figure 3C 3 . Figure 3C 4 provides a more concrete example: when the user asks for similar sentences published in the targeted conference, the XAI agent responds with the top-3 similar examples conditioned on the predicted aspect (i.e., purpose) by default. However, as the user is advised to rewrite this sentence into background, she requests the top-2 similar sentences that have background labels by specifying "2 + background", so as to use those examples as gold ground truths for improving her own writing.

Writing Assistant Models

We aim to provide two sets of writing support: (1) whether the abstract follows the typical semantic structure of the intended submission conferences, and (2) whether the abstract's writing style matches the conference norm. To do so, we leverage two large language models to generate predictions for each abstract sentence. First, we use a writing structure model to assess the semantic structure, by checking whether the abstract sufficiently covers all the required research aspects (e.g., providing background context, describing the proposed method, etc.) [30] ( Figure 3A 1 ). We create the model by finetuning SciBERT-base [6], a pre-trained model that specifically captures scientific document contexts, on the CODA-19 dataset [30], which annotates each sentence in 10,000+ abstracts of the COVID-19 Open Research Dataset with its intended aspect, including Background, Purpose, Method, Finding/Contribution, and Other.
The model achieves an F1 score over 0.62 for each aspect and an overall accuracy of 0.7453. The model performance is detailed in Appendix A.2A. While this model provides per-sentence predictions, the quality of an abstract depends more on the sequence of sentence structures. For example, "background" sentences should not be too many and should come primarily before "purpose" and "method". To support abstract improvement, we further implement a pattern-explanation wrapper on top of the model, which suggests that writers change some sentences' aspects to reach a better aspect pattern. Specifically, for each conference (e.g., ACL), we clustered all abstracts in the conference into five groups and extracted the cluster centers' structural patterns as the benchmark (e.g., "background" (33.3%) -> "purpose" (16.7%) -> "method" (16.7%) -> "finding" (33.3%)). Afterwards, we compare the submitted abstract's structural pattern with the closest benchmark pattern using the Dynamic Time Warping [52] algorithm to generate the structure suggestion for writers. See the extracted structural patterns for all conferences in Appendix A.2B. Second, we use a writing style model to predict a style quality score for each sentence, and check whether the writing style matches well with the target conference. As we intend to first support abstract improvement in the CS domain, we collected 9935 abstracts published during 2018-2022 from three conferences with relatively diverse writing styles, namely ACL (3221 abstracts), CHI (3235 abstracts) and ICLR (3479 abstracts), which are representative top-tier conferences in Natural Language Processing, Human-Computer Interaction, and Machine Learning, respectively.
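The structure-pattern comparison above can be illustrated with a plain dynamic-time-warping distance over aspect-label sequences. This is a sketch under stated assumptions: the 0/1 label cost and the benchmark pattern below are made up for illustration, while the real system warps against cluster-center patterns extracted per conference.

```python
def dtw_distance(seq_a, seq_b, cost=lambda a, b: 0 if a == b else 1):
    """Classic O(len_a * len_b) dynamic time warping over label sequences."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    # d[i][j] = minimal warped cost of aligning seq_a[:i] with seq_b[:j].
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = c + min(d[i - 1][j],      # stretch seq_a
                              d[i][j - 1],      # stretch seq_b
                              d[i - 1][j - 1])  # step both
    return d[n][m]

# Hypothetical benchmark pattern for one conference cluster:
benchmark = ["background", "background", "purpose", "method", "finding", "finding"]
draft = ["background", "purpose", "method", "finding"]
closest = dtw_distance(draft, benchmark)  # 0: the draft warps onto the benchmark
```

Because DTW allows stretching, a draft whose aspects appear in the benchmark order (just with different counts) gets distance 0; out-of-order aspects accumulate cost, and those mismatched positions are what the wrapper turns into "change this sentence's aspect" suggestions.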
More data statistics of the three conferences are in Appendix A.2C. To measure raw writing-style match, we use the style model to assign a perplexity score [34] to each sentence, a measurement that approximates the sentence's likelihood based on the training data. Further, since the perplexity score is quite opaque, we add a normalization layer for better readability. Specifically, we categorize the quality scores into five levels (i.e., score = 1 (lowest) to 5 (highest)), similar to the conference review categories that writers are familiar with. To derive these five levels, for each conference, we computed the distribution of all sentences' perplexity scores, computed the [20-th, 40-th, 60-th, 80-th] percentiles of all the scores, and divided all scores based on these percentiles. See the quality score distribution in Appendix A.2D. To provide a better overview, we further offer an overall, abstract-level assessment by averaging the "overall style score" and the "overall structure score". The "overall style score" is computed by averaging all sentences' quality scores, whereas the "overall structure score" is computed as overall structure score = 5 − 0.5 × #structure comments, where #structure comments is the number of structure reviews.

Universal XAI Interface via Conversations

Explanation methods underlying the XAI agent. Here, we provide technical details on all the explanation methods enumerated in Table 1. First, understanding data and model requires more global explanations that summarize the training data distribution as well as the model context. For the data, we include datasheets [21] for the datasets used. We further compute important attribute distributions, including the quality scale mentioned above, the structure label distribution, and the sentence length. Such information also helps users contextualize where their abstract sits in the distribution.
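The five-level normalization and the overall scores can be sketched with the standard library alone. The perplexity values in the example are fabricated for illustration; the sign convention (lower perplexity = better style = higher level) is an assumption consistent with perplexity's meaning.

```python
import bisect
import statistics

def quality_levels(perplexities):
    """Map raw perplexity scores to 1..5 quality levels using the
    [20th, 40th, 60th, 80th] percentiles of the score distribution.
    Lower perplexity = more conference-like style = higher level."""
    # statistics.quantiles with n=5 returns the four 20/40/60/80-th cut points.
    cuts = statistics.quantiles(perplexities, n=5)
    # bisect_left counts how many cut points a score exceeds (0..4);
    # invert so the lowest-perplexity bucket maps to level 5.
    return [5 - bisect.bisect_left(cuts, p) for p in perplexities]

def overall_scores(levels, n_structure_comments):
    """Abstract-level score = mean of overall style and structure scores."""
    style = sum(levels) / len(levels)            # "overall style score"
    structure = 5 - 0.5 * n_structure_comments   # "overall structure score"
    return (style + structure) / 2
```

For instance, an abstract whose sentences all sit at level 4 and which received two structure comments scores (4 + 4) / 2 = 4 overall.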
Similarly, to provide sufficient model information, we incorporate model cards [49] for SciBERT and GPT-2, and adjust them based on our finetuning data. Second, for understanding and improving instances, we leverage state-of-the-art XAI algorithms to generate local AI explanations. These include: • Prediction confidence, which is the probability score after the softmax layer of the SciBERT model, reflecting the model's prediction certainty. This explanation is only provided for the writing structure model. • Similar examples, which retrieves semantically similar sentences published in the target conference as references. We assess similarity with the dot product of the sentence embeddings [55] (derived from the corresponding writing assistant models). This is provided for both the writing structure and style models. 5 • Important words, which highlights the top-K words that attribute the writing model's prediction for the sentence. We leverage the Integrated Gradients approach [51] to generate the word importance scores (i.e., attributions). • Counterfactual predictions, which re-write the input sentence with a desired aspect while keeping the same meaning. We design an in-context learning approach using GPT-3 [8] to re-write sentences. Given an input sentence, we first retrieve the top-5 semantically similar sentences for each of the five aspects from the collected CS-domain abstracts (the semantic similarity between sentences is measured by the cosine similarity over sentence embeddings [60]); GPT-3 then follows the instruction to generate a modified sentence with the desired aspect label. Finally, as described in Section 4.1, we further add understanding suggestions to answer the general question of "how did the system generate the suggestions?", and provide pointers to other finer-grained explanation methods. We create "suggestion explanations" for each piece of writing feedback.
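The similar-example retrieval reduces to a top-k dot-product search over sentence embeddings, optionally filtered by aspect label (as in the "2 + background" request). This is a minimal sketch: the toy 3-dimensional vectors stand in for real model embeddings, and the function name is hypothetical.

```python
def top_k_similar(query_vec, corpus, k=3, aspect=None):
    """Return the k corpus sentences whose embeddings have the highest
    dot product with the query embedding, optionally restricted to one
    predicted aspect label."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Optional aspect filter, e.g., only 'background' sentences.
    candidates = [(text, vec, label) for text, vec, label in corpus
                  if aspect is None or label == aspect]
    # Rank by similarity to the query, highest first.
    ranked = sorted(candidates, key=lambda t: dot(query_vec, t[1]), reverse=True)
    return [text for text, _, _ in ranked[:k]]

# Toy corpus of (sentence, embedding, aspect) triples:
corpus = [
    ("We present a system ...",   [1.0, 0.0, 0.0], "background"),
    ("Our goal is to study ...",  [0.0, 1.0, 0.0], "purpose"),
    ("Prior work has shown ...",  [0.9, 0.1, 0.0], "background"),
]
```

With unit-normalized embeddings the dot product equals cosine similarity, which is why the counterfactual retrieval step can use cosine similarity over the same representations.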
Particularly, we create one template each for the writing structure review, the writing style review, and the sentence length review. In each template, we describe how we compare all predictions in the abstract with the target conference's data statistics to generate the corresponding review. Then we initiate an "improving message" aiming to guide users in how to use XAI to improve their writing; this message includes buttons for the potential XAI methods that we deem users might use to resolve this review (one example is shown in Figure 3).

Implementing the ConvXAI dialog pipeline. We develop the ConvXAI system to include a web server hosting the User Interface (UI), and a deep learning server with GPUs hosting both the writing language models and the AI explanation models. We mainly describe our implementation of the conversational XAI agent module below. Specifically, we develop the conversational XAI pipeline from scratch based on the Dialogue-State Architecture [1] from task-oriented dialogue systems. The pipeline consists of three modules: a Natural Language Understanding module that classifies each XAI user question into a pre-defined user intent, which is mapped to one type of XAI algorithm; an AI Explainers module that generates the ten types of AI explanations; and a Natural Language Generation module that converts the output into natural language responses friendly to users. We introduce more implementation details below. • Natural Language Understanding (NLU). This module parses the XAI user question and classifies the user's intent into the type of AI explanation they may need. We currently design the intent classifier as a combination of rule-based and BERT-embedding-similarity-based methods. We trained a BERT-based classifier to do the intent classification, classifying each user question into one of the ten pre-defined XAI user intents. • AI Explainers (XAIers).
Based on the triggered XAI user intent, this module selects the corresponding AI explainer algorithm to generate the AI explanations. Currently, we implemented the AI Explainers to include ten XAI algorithms (Section 1) to answer the ten XAI user questions listed in Table 1. Users can further specify the variables (e.g., which label to explain) they need, and the AI Explainers will feed the "user-defined" variable into the AI algorithm to generate "user-customized" AI explanations. • Natural Language Generation (NLG). Given the outputs from the AI Explainers, we currently leverage a template-based NLG module to convert the generated AI explanations into natural language responses. Note that we especially design the NLG templates to be multi-modal, enabling both free-text responses and visually assisted responses (e.g., a heatmap to explain feature attributions) to meet users' needs. • Conversational XAI State Tracker (XAI-ST). As ConvXAI empowers users to choose from multiple types of XAI methods, drill down into AI explanations, and make XAI customizations, we specifically design a global Conversational XAI State Tracker to record users' turn-based conversational interactions. Particularly, we record the turn-based user intent transitions and the users' customizations of AI explanations. Overall, we design the conversational XAI pipeline to be model-agnostic and XAI-algorithm-agnostic. This enables the ConvXAI system to generalize naturally to various AI task models and AI explanation methods. We will open-source the system code along with the paper's publication.

User Studies

We conducted two within-subjects human evaluation studies, comparing the proposed ConvXAI against SelectXAI, a GUI-based universal XAI system. The user studies aimed to investigate how users leverage the XAI systems to better understand the AI writing feedback and improve their scientific writing.
We particularly designed the study to consist of (1) an open-ended writing task to evaluate the effectiveness of the user-oriented design in the system, and (2) a well-defined writing task to investigate how the systems can help users improve their scientific writing process and output in practice. Specifically, we pose the following research questions: • RQ1: Can the user-oriented design in ConvXAI help humans better understand the AI feedback and perceive improvement in writing performance? • RQ2: Can ConvXAI be useful for humans to achieve a better writing process and output? • RQ3: How do humans leverage the different AI explanations in ConvXAI to finish their practical tasks?

Task 1: Open-Ended Tasks for System Evaluation

Can ConvXAI help users better understand the writing feedback and improve their scientific writing? What designs support this purpose? Keeping these questions in mind, we conduct a within-subjects user study comparing ConvXAI with a SelectXAI baseline interface. Following the study, we ask participants to comment on the systems, and we examine how they use ConvXAI to improve their writing by observing their interaction process.

Study Design and Procedure

Participants and SelectXAI System. We recruited 13 participants from university mailing lists. All the participants had research writing experience, resided in the U.S., and were fluent in English. The group had no overlap with the formative study participants, and none of them had used ConvXAI prior to the study. Each study lasted one and a half hours. Each participant was compensated with $40 in cash for their participation time. We asked each participant to compare ConvXAI with a baseline system, named SelectXAI, shown in Figure 4. The SelectXAI system also consists of all the AI explanation formats included in ConvXAI. However, it statically displays all the XAI formats in the right-hand view panel instead of using dynamic conversations to convey the XAIs.
To display all the XAIs for each sentence, users can select a sentence to be explained from the left writing editor panel, then generate all XAI formats by clicking a trigger button in the right panel. As a result, users can view all XAI formats, each with a button to hide or show the AI explanation results. In other words, SelectXAI remains multifaceted (R.1) and somewhat controllable (R.4), but does not have the drill-down (R.3) or mixed-initiative (R.2) properties.

Study Procedure. We conducted a within-subjects study where the same users interacted with both the proposed conversational XAI system and the SelectXAI baseline system. Each user study consists of three steps: i) We first instruct each user in how to use the ConvXAI and SelectXAI systems by showing them a live demo or recorded videos. They can stop the instruction anytime and ask any questions about the tutorials. ii) After the system tutorials, we invite the users to explore both the ConvXAI and SelectXAI systems in a pre-defined order; we randomized the orders across all 13 studies, so that 7 participants started with the ConvXAI condition and 6 participants started with the SelectXAI condition. iii) Finally, we ask the users to fill in a post-hoc survey including two demographic questions and 14 questions rating their user experience on a 5-point Likert scale. We further ask them three open-form questions after the survey to interview them about their opinions of the ConvXAI and SelectXAI systems. During steps ii) and iii), we recorded a video of the process and encouraged them to think aloud. Besides, we had users evaluate the two systems either both with their own papers or both with the examples we provide. We encouraged users to use their own paper drafts, where they had more incentive to improve the writing. As a consequence, 12 out of 13 users submitted their own drafts or published papers.
Study Results

We first look into the overall usefulness of ConvXAI, and answer the question: is ConvXAI useful for users' ultimate goal of understanding and improving their abstract quality (RQ1)? We summarize participants' ratings of the two systems, ConvXAI and SelectXAI, in Figure 5. We performed the non-parametric Wilcoxon signed-rank test to compare users' ordinal Likert-scale ratings, and found that participants self-perceived ConvXAI to help them better understand why their writing was given the corresponding reviews (ConvXAI 4.07 ± 1.18 vs. SelectXAI 3.69 ± 1.37, p = 0.036, Figure 5A). They also felt that ConvXAI helped them more in improving their writing (4 ± 0.91 vs. 3.53 ± 0.77, p = 0.019, Figure 5B). The helpfulness is likely because participants could more effectively find answers to their diverse questions, which we detail in Section 5.1.2. Beyond their promising self-reflection, 3 out of 13 participants actually edited and iterated on their abstracts in ConvXAI. They all successfully addressed the AI-raised issue (i.e., the corresponding suggestion disappeared when they re-evaluated the edited version). However, the other 10 participants showed low incentive to revise the published abstracts. Through interviews, we summarize some challenges they faced in interacting with the current ConvXAI in Section 6.3. Through the study observations and free-form question interviews, we found that 9 out of 13 participants preferred ConvXAI over the SelectXAI system for improving their scientific writing. We conjecture that this primarily results from ConvXAI's ability to answer user questions more sufficiently, efficiently, and diversely. More specifically, the benefit comes from three dimensions: First, ConvXAI reduces users' cognitive load in digesting the available information.
9 participants were overwhelmed by SelectXAI and complained that they had to manually click through all the available buttons before realizing that all of them contained explanations of the exact same sentence. In contrast, ConvXAI releases the same information more gradually through back-and-forth conversations. Participants especially appreciated the initial suggestions from ConvXAI (mixed-initiative, R2), as they enable interaction with the system without having to understand its full XAI capability (unlike in SelectXAI). For example, P12 pointed out, "it is very helpful that the XAI agent can give me some hints on using the AI explanations. Especially when I'm a novice of scientific writing and AI explanation knowledge, this helps me get involved in the system more quickly. " Indeed, this is also reflected in participants' ratings: in Figure 5E, participants found ConvXAI helped them figure out how to inquire about a sentence (ConvXAI 4.23 ± 0.83 vs. SelectXAI 3.77 ± 1.09, p = 0.001). Additionally, it is important that ConvXAI is robust in detecting user intents, such as being tolerant of typos in user input. As P1 and P2 mentioned, "I really like the ConvXAI that allows my typos by only capturing the keywords, so that I don't need to memorize much knowledge for using the system. " Second, ConvXAI enables users to pinpoint their XAI questions efficiently. We quantified the types of questions participants frequently asked, and found that 9 out of 13 participants had explicit preferences for specific AI explanation formats. Among these 9 users, 66.67%, 55.56%, and 33.33% of participants primarily used counterfactual explanations, similar examples, and feature attribution explanations, respectively. This suggests that, indeed, people have different kinds of questions and XAI needs.
Participants liked that they could take the initiative, prioritize their own needs, and simply query the associated XAI through the dialog, whereas in SelectXAI, "I just go over all the explanations and read everything, for some of the explanations I just don't care, this is somehow a bit overwhelming to me." (P3) This also means they were much less likely to be distracted by duplicate details (e.g., P1: "I only need to understand the general information about the model and data at the very begining, after that, I don't need to check it repeatedly every time for each sentence." ), or by explanations irrelevant to their questions. As a result, they rated ConvXAI as providing explanations more easily (Figure 5C). Interestingly, having users self-initiate questions brought an unexpected benefit: it helps users think through the writing and what they actually want to understand. As P6 said, "Compared with SelectXAI, ConvXAI slows down the interaction and gives me the time and incentive to think about what I want the robot to explain." P4 also pointed out, "The follow-up hints inspires me to think more about how to use the XAI for my writing." This somewhat echoes prior work showing that pairing humans with slower AIs (that wait or take more time to make a recommendation) may give humans a better chance to reflect on their own decisions [54]. Third, ConvXAI provides sufficient AI explanations crafted for user needs. Interestingly, although ConvXAI and SelectXAI implemented the same number of explanation types and participants were overwhelmed by SelectXAI, they still rated ConvXAI as having a more sufficient amount of explanations (multi-faceted, ConvXAI 4.23 ± 1.09 vs. SelectXAI 3.31 ± 1.03, p = 0.007, Figure 5D). ConvXAI's controllability (ConvXAI 4.08 ± 0.95 vs. SelectXAI 3.46 ± 1.45, p = 0.014, Figure 5G) played an important role here.
Participants mentioned that it was essential for them to customize how their questions were answered, and they were satisfied that they could customize the level of detail within one XAI type (e.g., the number of similar words in feature attribution, the targeted label in counterfactual prediction, etc.), whereas SelectXAI did not provide the same level of control (as per the status quo). We observed that all 13 participants performed personalized control when generating AI explanations during the user study. The ability to drill down was equally important. We saw users performing different kinds of follow-ups based on their current explorations. For instance, as P5 mentioned, "I would firstly check the model confidence explanation, if the confidence score is low, I would directly ignore this sentence prediction which makes my writing much easier. However, if the confidence score is high, I will use the counterfactual explanation to check how to revise this sentence. " Participants also mentioned that "the function of enabling users to generate these personalized explanations are the most important features", explaining why they preferred ConvXAI over the SelectXAI system. As P8 pointed out, "I think SelectXAI has the advantage on easier to use because the learning curve is short. However, I would still prefer ConvXAI because it can provide me with much more explanations that I need. " To better understand users' preferences for explanations, we summarize some usage patterns in ConvXAI in the next section.

Task 2: Well-defined Tasks for Writing Evaluation

To answer RQ2, we further evaluate participants' productivity and writing output quality to assess the usefulness of ConvXAI and SelectXAI for human writing performance in Task 2.

Study Design and Procedure

Participants and Grouping. We recalled 8 users who had joined Task 1 and were familiar with the system to participate again in Task 2.
There are two reasons to recruit the same group of users again: i) the experience in Task 1 helps users reduce their learning curve and the cognitive load of familiarizing themselves with the XAIs and systems, so they can focus more on the writing process; ii) this design can potentially reveal temporal changes in how users leverage the systems. To conduct a rigorous human study, we divided the 8 users into 4 pairs, with the groups' research domains in "NLP", "HCI", "AI", and "AI", respectively.

Study design and paper selection. Similar to Task 1, we also conducted a within-subjects study, but with the objective of evaluating users' scientific writing outputs with the help of the ConvXAI and SelectXAI systems. For each group of two users, we asked them to rewrite the same two papers asynchronously, with a reversed order of system assistants. For instance, within the same group, user1 rewrites under the 'paper1-ConvXAI' and then 'paper2-SelectXAI' settings, whereas user2 rewrites under the 'paper1-SelectXAI' and then 'paper2-ConvXAI' settings. Hence, this design eliminates correlations between papers and system types and orders. Afterwards, we evaluated the users' writing outputs and experience with a set of metrics, including a real human-editor evaluation, a set of auto-metrics, and a post survey. For fair comparison, we pre-selected eight papers (i.e., 2 papers * 4 domain groups) for users to rewrite, all recently submitted to arXiv (i.e., around Nov/29/2022) within the domains of Artificial Intelligence 6 , Computation and Language 7 , and Human-Computer Interaction 8 . We also followed a set of rules during paper selection: i) the papers are not among the top-5 best papers ranked by the editor, nor accepted by journals or conferences; ii) users do not need specialized domain knowledge to improve the writing
(e.g., there is no need to read the whole paper contents to improve the writing); iii) the AI aspect labels and quality-score predictions are correct (checked by the authors). During the study, we also recorded a video of the process and encouraged the participants to think aloud.

Study Results. We evaluate participants' scientific writing performance quantitatively in terms of productivity and writing performance (i.e., how many changes were made and whether the improved writing outputs score better). Akin to Task 1, we also qualitatively assess participants' perceived usefulness on a 5-point Likert scale from the post survey.

Productivity. We evaluate productivity with respect to the "Edit-Distance" and the "Normalized-Edit-Distance" ("Normalized-ED") between the original paper abstract and the modified version from participants. We leverage the Damerau-Levenshtein edit distance [14, 42] and its normalized version [84] to compute these two metrics. From Table 6 (A), we observe that participants' edit distance using ConvXAI is 43.09% (i.e., M=56.88 vs. M=39.75) higher than that using SelectXAI on average, and the normalized edit distance is likewise 35.29% (M=0.276 vs. M=0.204) higher for ConvXAI than for SelectXAI. This demonstrates that ConvXAI is potentially useful in helping users make more modifications to their writing than the SelectXAI system. Besides, we also record the "Submission" count, representing how many times the users modified their draft and re-submitted it to the systems. Table 6 (A) shows that participants submitted 99.81% more often with ConvXAI than with SelectXAI during writing, a statistically significant difference (p=0.0045). This result also indicates that users tend to interact and submit more with ConvXAI than with SelectXAI when rewriting the abstracts. These findings are consistent with the users' think-aloud notes, in which most of them preferred ConvXAI over SelectXAI for improving their writing.
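The productivity metrics above can be sketched in a few lines. The sketch below implements the optimal-string-alignment variant of Damerau-Levenshtein distance (substitutions, insertions, deletions, and adjacent transpositions); the simple max-length normalization is an assumption for illustration, as the paper's normalized variant follows [84], whose exact definition is not reproduced here.

```python
def osa_distance(a, b):
    """Optimal string alignment distance: Damerau-Levenshtein restricted to
    non-overlapping adjacent transpositions, via dynamic programming."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            # Adjacent transposition (e.g., "cb" -> "bc")
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]

def normalized_ed(a, b):
    """Length-normalized edit distance in [0, 1] (illustrative normalization)."""
    if not a and not b:
        return 0.0
    return osa_distance(a, b) / max(len(a), len(b))
```

Applied to an original and an edited abstract, `osa_distance` yields the raw "Edit-Distance" and `normalized_ed` the "Normalized-ED" used to compare ConvXAI and SelectXAI.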
As P5 (who used SelectXAI first, followed by ConvXAI) mentioned, "I somehow struggled with using the SelectXAI system because it provides very limited help. But I kind of start enjoying the writing process with the help of ConvXAI. "

Writing Performance. To understand whether ConvXAI can actually help users improve their writing outputs, we compare the abstracts before (i.e., Original) and after (i.e., Improved) editing with ConvXAI and SelectXAI, as shown in Table 6. We evaluated abstracts using three different measurements: (i) Grammarly, (ii) ConvXAI's built-in models, and (iii) human evaluation. To measure abstract quality with Grammarly, we set Grammarly's suggestion goal to audience = expert and formality = formal, manually copied all the abstracts into Grammarly, and recorded the scores. Besides, we also adopt ConvXAI's two built-in models, the writing style model and the writing structure model, to measure the abstracts' language quality and structure, respectively. These scores are also the AI scoring feedback shown to users during their writing tasks. For human evaluation, we hired one professional editor to rate the abstracts' quality in terms of language quality and abstract structure. Note that it is difficult to find an expert who is experienced in reviewing abstracts across all of the "NLP", "HCI", and "AI" domains; we are therefore aware of the limitations of this human evaluation. All scores are shown in Table 6 (C). We observe that, compared with the Original scores, both ConvXAI and SelectXAI are useful in helping humans improve their auto-metric writing performance, including the "Grammarly", "Model Quality", and "Model Structure" scores. Furthermore, ConvXAI specifically outperforms SelectXAI on the Grammarly and writing quality metrics, indicating that ConvXAI can potentially help users write better grammar- and style-based sentences in scientific abstracts than SelectXAI.
On the other hand, the human editor's evaluation shows inconsistent results: ConvXAI and SelectXAI both improve the writing Structure evaluations, but not the Quality metric. To probe the inconsistency between the human and auto-metric evaluations, we further compute the Pearson correlation between the model scores and the human ratings and find that both quality and structure are negatively correlated or uncorrelated (quality: -0.0311 and structure: -0.1150), showing a misalignment between humans and models. Therefore, we posit that both universal XAI systems, ConvXAI and SelectXAI, are useful for improving human writing performance under the auto-metric evaluations. In particular, ConvXAI outperforms SelectXAI in terms of grammar- and style-based writing quality. However, since human judgment is not aligned with the model evaluations (based on the Pearson correlations), the improvement did not carry over to the human quality metric. This negative finding provides valuable insight into the importance of aligning human judgment with the model objective in AI tasks, so that users can use such systems to effectively reach both improvement goals.

Perceived Usefulness. In the post survey, we also asked users to rate their perception of system usefulness in assisting their abstract writing. We specifically measured users' perceived usefulness for "Overall Writing", "Writing Structure" improvement, and "Writing Quality" improvement; these three metrics are designed to be consistent with the feedback from the AI writing models. As shown in Table 6 (B), participants perceived ConvXAI to be 1 (out of 5) point higher than SelectXAI in usefulness across all writing aspects. At the end of the survey, we further asked which AI explanations or system functions they perceived to be most useful; we elaborate on this finding in Sec 5.3 below.
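The human-model misalignment check above amounts to computing a Pearson correlation between paired score lists. A minimal pure-Python sketch is below; the model and human scores are hypothetical placeholders, chosen only to illustrate the kind of negative correlation reported, and in practice a library routine such as `scipy.stats.pearsonr` would also give a p-value.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-abstract scores: model quality vs. human editor quality
model_quality = [4.2, 3.8, 4.5, 3.9, 4.1]
human_quality = [3.0, 4.0, 2.5, 3.5, 3.0]
r = pearson(model_quality, human_quality)  # negative r signals misalignment
```

A near-zero or negative r, as in the reported quality (-0.0311) and structure (-0.1150) correlations, indicates the auto-metric improvements need not transfer to human judgment.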
Table 6: (A) We measure Productivity with three auto-metrics: "Edit Distance", "Normalized-Edit-Distance", and "Submission Count". (B) We ask users to rate their perceived system usefulness for improving "Overall Writing", "Writing Structure", and "Writing Quality". (C) We evaluate writing outputs using both auto-metrics (i.e., "Grammarly", "Model Quality", and "Model Structure") and human evaluation (i.e., "Human Quality" and "Human Structure").

Figure 7: User demand analysis from using ConvXAI to improve scientific writing in Task 1 and Task 2. Specifically, (1) we rank the top-2 most frequently requested XAI methods for each user ID in Task 1 (A1) and Task 2 (B1); (2) we compute all users' question counts for each of the 10 XAI methods in Task 1 (A2) and Task 2 (B2). The blue bars indicate traditional XAI questions and the red ones XAI customization requests.

Usage Patterns with ConvXAI

We propose ConvXAI based on the premise that universal XAI interfaces are important for satisfying user demands in real-world practice. In this section, we provide practical evidence that the universal XAI interface is indeed a necessary design for useful XAI that meets real-world user needs. By reviewing all 11 (from Task 1) and 8 (from Task 2) recorded study videos, we collected all the users' XAI question requests made while leveraging ConvXAI to improve their writing. In total, there are 95 and 92 XAI user requests in Task 1 and Task 2, respectively. Based on our analysis of these XAI user requests, Figure 7 provides detailed insights into practical user demands. More specifically, in Figure 7 (1), we visualize each individual user's top-2 priorities among the different XAI methods. In Figure 7 (2), we accumulate all users' requests for each XAI method to visualize the usage distribution among the ten XAI methods. We also visualize Task 1 and Task 2 separately in order to observe the temporal usage patterns of the XAI methods.
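The per-user top-2 rankings and overall XAI-type distributions described above reduce to simple frequency counting over the logged requests. A minimal sketch, assuming a hypothetical log of (user_id, xai_type) pairs rather than the actual study transcripts:

```python
from collections import Counter

# Hypothetical log of XAI requests extracted from study videos
requests = [
    (1, "example"), (1, "example"), (1, "tutorial"),
    (2, "counterfactual"), (2, "example"), (2, "counterfactual"),
    (3, "attribution"), (3, "attribution"), (3, "confidence"),
]

# Per-user top-2 XAI types (as in Figure 7 A1/B1)
by_user = {}
for uid, xai in requests:
    by_user.setdefault(uid, Counter())[xai] += 1
top2 = {uid: [x for x, _ in c.most_common(2)] for uid, c in by_user.items()}

# Accumulated distribution across all XAI types (as in Figure 7 A2/B2)
overall = Counter(x for _, x in requests)
```

Running the same tally separately on the Task 1 and Task 2 logs yields the temporal comparison between the two panels of Figure 7.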
We summarize our findings in detail below.

5.3.1 Different users prioritize different AI explanations and orders for their needs. First, focusing on the same task but different users, we observe that different users often prioritize different types of AI explanations even within the same task. Specifically, for Task 1, shown in Figure 7 (A1), although 9 users (i.e., 1,2,3,4,6,7,8,9) prioritize "Examples" explanations, the other 2 users (i.e., 5,11) leverage "Attribution" and "Confidence" explanations most in Task 1. Besides, the 2nd-most-popular AI explanations of the 11 users are scattered among all 10 XAI types without a unified pattern. Additionally, in Task 2, shown in Figure 7 (B1), we can see users' top-2 explanations converging on instance-wise explanations (i.e., "Attribution", "Counterfactual", etc.). Specifically, 7 out of 8 users prioritize "Counterfactual", and the remaining user leverages the "Example" explanation the most. This is also consistent with the users' think-aloud observations. For instance, P5 lacked an AI background and didn't understand what "Prediction Confidence" meant in this situation, whereas P11 mentioned "model confidence is the first explanation I'll ask to decide whether I'll ignore the prediction or continue the explanations. " Furthermore, we accumulate the users' XAI request counts for each XAI type and show the results for Task 1 and Task 2 in Figure 7 (A2) and (B2), respectively. We can observe that although user needs are often dominated by one XAI type ("Example" and "Counterfactual" in Task 1 and 2, respectively), users also leverage ConvXAI to probe a wide range of other XAI types, such as "XAI tutorial", "Confidence", "Attribution", etc. In short, these findings validate the importance of a universal XAI interface like ConvXAI, which can accommodate different users' backgrounds and practical demands.

5.3.2 User demands change over time.
In addition, we focus on the changes in user demands over time, specifically comparing the same user group's XAI needs across the two tasks. Comparing Figure 7 (A1) vs. (B1), we can see that users' top XAI demands gradually converge on instance-wise explanations, including "Counterfactual", "Example", "Confidence", "Tutorial", and "Attribution" explanations. This can be further verified by comparing Figure 7 (A2) vs. (B2): i) user demands in Task 2 are highly skewed toward "Counterfactual" explanations, which are requested twice as often as the "Example" explanation that ranked top in Task 1; ii) users leverage far fewer, and in some cases no, global-information explanations (e.g., "Data", "Model", "Length", etc.) in Task 2. This is also consistent with the user think-aloud notes, where P4 pointed out, "After I know these data and model information, I might not need them again a lot, unless I need this information to analyze each sentence's prediction later. " This again shows that it is important to design XAI systems as a universal yet flexible XAI interface, like ConvXAI, to capture the dynamic changes in user needs over time.

5.3.3 Proactive XAI tutorials are imperative to improve XAI usefulness. Both our pilot study and the two tasks illustrate that providing users with instructions on how to use XAI is crucial. In particular, echoing the "Mixed-initiative" design principle, we proactively give hints about XAI usage patterns (i.e., how to use AI explanations) for improving writing during the conversations. In Table 2, we exemplify a set of usage patterns for resolving different AI writing feedback. From Figure 7 (A1) and (B1), we can observe that 72.73% (8 out of 11) of users and 37.5% (3 out of 8) of users prioritize "Tutorial" explanations in their top-2 during Tasks 1 and 2, respectively.
Similarly, in Figure 7 (A2) and (B2), the accumulated counts of "Tutorial" explanations also rank within the top-3 in both Tasks 1 and 2, indicating a high user demand for checking tutorials/hints on XAI usage patterns. Furthermore, we also observe a decreasing trend in the need for "Tutorial" explanations over time when comparing Task 1 and Task 2. This potentially indicates that users gradually become more proficient at using AI explanations for their own needs.

5.3.4 XAI customization is crucial. From the think-aloud interviews in the two tasks, we deem one fundamental reason that ConvXAI outperforms SelectXAI to be that it provides much more flexible customization of user requests. This also corresponds to the "Controllability" design principle derived from the pilot study. Note that we designed only 3 of the 10 AI explanations to enable XAI customization. In particular, we allow users to specify one variable (i.e., "target-label") when generating "Counterfactual" and "Attribution" explanations, and four variables (i.e., "target-label", "example-count", "rank-method", "keyword") when generating "Example" explanations. Importantly, from Figure 7 (A2) and (B2), we observe that 22.11% and 40.22% of practical user requests were for XAI customization in Tasks 1 and 2, respectively. Moreover, all users in both Tasks 1 and 2 requested XAI customization during their studies. These findings indicate that enabling users to customize their personal XAI needs is crucial in practice.

5.3.5 The same feedback can be resolved with different AI explanations. Additionally, we observe that the same writing feedback can be resolved with different AI explanations. As shown in Table 2, we demonstrate two usage-pattern examples for resolving each type of AI prediction feedback, serving as the "hints" on how to use XAI within the ConvXAI system. Correspondingly, we also find that different users choose different AI explanations to resolve similar problems.
For instance, when users receive a suggestion to rewrite a sentence into another aspect label, some participants directly ask for counterfactual explanations to change the label (e.g., P1, P7, P8), whereas others may first refer to similar examples to understand the conference-published sentences, and then revise their own writing (e.g., P2, P6, P9, P11). Further, even the same person may use different XAIs in different scenarios. As P1 mentioned, "If time is urgent, I'll use counterfactual explanation because they are straight-forward. However, when I have more time, I'll use similar example explanation because I can potentially learn more writing skills from them. "

Discussion and Limitations

In this work, we propose ConvXAI as a universal XAI interface in the form of conversations. We especially incorporate practical user demands, represented as the four design principles collected from the formative study, into the ConvXAI design. As a result, users are able to better leverage the multi-faceted, mixed-initiative, context-aware, and customized AI explanations in ConvXAI to achieve their tasks (e.g., scientific abstract writing). The ConvXAI design and findings can potentially shed light on developing more useful XAI systems. Additionally, we have released the core code of the universal XAI APIs and the complete code base of ConvXAI. ConvXAI can be generalized to a variety of applications since the universal XAI methods and interface are model-agnostic. In this section, we further elaborate on the core ingredients of useful XAIs based on our user-study observations with ConvXAI, the system's generalizability, and its empirical limitations. As a novel model of a universal XAI interface using conversations, we believe it provides valuable grounding for how future conversational XAI systems should be developed to better meet real-world user demands.
Crucial Ingredients of Useful XAI

We designed the ConvXAI system as a prototypical solution for useful AI explanation systems in real-world tasks. The rationale is to bridge the gap between practical, diverse, and dynamic user demands and existing AI explanations via a universal XAI interface in the form of conversations. In particular, we aimed to probe "what are the crucial ingredients of useful XAI systems?" across the formative study and the two human evaluation tasks. In summary, our preliminary findings suggest that useful XAI systems should incorporate four factors: "integrated XAI interface + proactive XAI tutorial + customized XAIs + lightweight XAI display". We elaborate on each ingredient with supporting evidence from our studies.

Integrated XAI interface accessible to multi-faceted XAIs. In Sec 5.3 and Figure 7, we demonstrate diverse XAI user needs and usage patterns from empirical observations. This indicates that XAI user demands generally change dynamically across different users and over time. Therefore, it is essential to empower users to choose the appropriate XAIs according to their own preferences. Users can thus leverage an integrated XAI interface with access to multi-faceted XAIs for their needs.

Proactive XAI usage tutorial. From the formative study, we learned that it is difficult for users to figure out "how to leverage and combine the power of different XAI types to finish their practical goals". This finding motivated the "Mixed-initiative" design principle and resulted in designing XAI "tutorial" explanations to instruct users. Moreover, the two user studies provide evidence (i.e., in Sec 5.3.3 and Figure 7) that users indeed request many XAI tutorial explanations during the writing tasks, but the number of requests gradually decreases as users become more proficient in using the ConvXAI system.

Customized XAI interactions.
Users commonly demand more controllability over generating AI explanations. We observed these demands in both the formative study (leading to the "Controllability" design principle) and the two user studies. More quantitatively, we provide evidence (in Sec 5.3.4 and Figure 7) that although only 3 of the 10 XAI types allow customization, all users leveraged XAI customization to generate XAIs. Further, the demand for XAI customization increased over time.

Lightweight XAI display with details-on-demand. By conducting user studies with both ConvXAI and SelectXAI, we observe that users prefer the XAI interface to be versatile yet simple. In this regard, a details-on-demand approach using conversations (e.g., ConvXAI) is more appropriate, as users can directly pinpoint the XAI type they need. We provide supporting evidence by comparing ConvXAI (details-on-demand) and SelectXAI (full initial disclosure) in Sec 5.1 and Sec 5.2.

Generalizability of ConvXAI

Although we contextualize the ConvXAI system in scientific writing tasks, the core design and findings can be naturally generalized to a variety of applications. We elaborate below.

Design generalizability. We propose ConvXAI as a universal XAI interface in the form of conversations, which incorporates ten types of AI explanation at the current stage. However, the design of ConvXAI can naturally be generalized to i) a combination of a conversation-based and a GUI-based universal XAI interface to leverage their combined benefits; and ii) continually adding new XAI methods without modifying the XAI interface.

Technical generalizability. The core design and technique of the ConvXAI system lie in the conversational XAI approach, which contains three modules that i) classify a variety of user intents, ii) generate faithful AI explanations according to the user request, and iii) respond to users in free-text language.
To be better generalized and leveraged by future research, we extracted these core conversational XAI modules into a set of universal XAI APIs, which is publicly released on GitHub 2 . Furthermore, we also release the complete code base of the ConvXAI system, including the universal XAI interface, the AI prediction models, and all the XAI algorithm implementations, on GitHub as well 1 . Note that our XAI methods (e.g., NN-DOT [55] for example explanations, Integrated Gradient [51] for attribution explanations, etc.) are model-agnostic, so they can be applied to a wide range of classification and generative machine learning models. In addition, due to the lack of datasets for conversational AI explanation, as an initial step, we currently use a rule-based intent classifier and template-based response generation. We encourage future studies to build more powerful intent classifiers and response generation models, as well as to incorporate more XAI methods.

Application generalizability. Although the ConvXAI system is contextualized in scientific writing tasks, the core conversational XAI part can be isolated and adapted, e.g., as an IDE plugin, to explain a vast range of AI systems, including non-visual systems that are unable to use GUI-based XAI techniques.

Limitations

Although ConvXAI mostly performs better at assisting users in understanding the writing feedback and improving their scientific writing, there are still factors and limitations to note when deploying ConvXAI in practice. Here, we discuss potential obstacles users faced and potential fixes to improve ConvXAI.

Users have a steeper learning curve with ConvXAI. In interviewing the users about the advantages and disadvantages of the two systems, we found that participants, especially those with less AI knowledge, experienced a steeper learning curve with ConvXAI. That is, participants needed more effort to learn what answers they could expect from the XAI agent.
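To make the rule-based pipeline concrete, the sketch below shows one way a keyword-matching intent classifier and template-based response generation could fit together. The keyword rules, intent names, and templates here are hypothetical illustrations, not ConvXAI's actual rule set; matching on keywords rather than full sentences also illustrates the typo tolerance participants appreciated.

```python
# Hypothetical keyword rules mapping utterance keywords to XAI intents
INTENT_KEYWORDS = {
    "counterfactual": ["rewrite", "how to change", "counterfactual", "revise"],
    "example":        ["example", "similar", "sentences like"],
    "attribution":    ["important words", "why", "attribution", "feature"],
    "confidence":     ["confidence", "how sure", "certain"],
}

# Hypothetical response templates; {output} is filled by the XAI backend
TEMPLATES = {
    "counterfactual": "To flip the label to '{label}', you could rewrite it as: {output}",
    "example":        "Here are {k} similar published sentences: {output}",
    "attribution":    "The most important words for this prediction are: {output}",
    "confidence":     "The model predicts this label with {output} confidence.",
}

def classify_intent(utterance):
    """Return the first intent whose keywords appear in the utterance.
    Typos elsewhere in the utterance are tolerated, since only the
    keyword substring itself must match. Falls back to usage hints."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "tutorial"
```

For instance, `classify_intent("how to change this into a method sentence?")` routes to the counterfactual module, whose result is then rendered through the corresponding template.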
In comparison, they found SelectXAI much simpler to interact with because all the answers they can get are displayed in the interface. However, some participants also mentioned that they would be willing to spend the effort to learn ConvXAI since it provides more potential explanations to use. From this observation, we believe the ConvXAI system can be improved by providing instruction on the range of system capabilities at the initial interaction stage, and that this learning effort will diminish as users interact with ConvXAI over the long run.

The performance of writing models and XAI algorithms influences the user experience of ConvXAI. Another phenomenon we observed is that under-performing models and XAI algorithms can degrade the user experience, such as trust and satisfaction. Note that in real-world AI tasks, humans are commonly motivated to use XAI methods to analyze AI predictions, such as improving writing performance according to the AI writing feedback with the help of ConvXAI's XAI methods. However, there are situations where the AI writing feedback is misaligned with human judgment. In these situations, users commonly ignore the misaligned feedback, which can reduce their satisfaction with and trust in the AI prediction models. To mitigate this issue, we posit two remedies: i) it is important to align the AI models' predictions and feedback with human judgment before asking users to leverage analysis methods (e.g., the XAIs in ConvXAI) to explain or interact with the AI predictions; ii) if the AI task is difficult and misalignment is therefore inevitable (e.g., the scientific writing task in this study), enabling human intervention in the models' prediction outputs can alleviate the harm to the user experience.
For example, when P4 encountered a misalignment between the model output and his own judgment, he mentioned, "it would be great if I can manually make the model ignore this review, so that the score can reflect my performance more fairly. "

Future Directions

Contextualize for the right user group. During the studies, we found that users with different backgrounds requested different levels of AI explanation detail for the same XAI question. For instance, when asking for model description explanations, AI experts mostly looked for model details such as the model architecture, how it was trained, etc. In contrast, participants less familiar with AI only wanted to see high-level model information, such as who released the model and whether it is reliable. This observation echoes the motivating example used by Nobani et al. [53], and indicates that users with different backgrounds need different levels of granularity in AI explanations. While most XAI methods tend to provide user-agnostic information, it might be promising to wrap them based on the intended user group, e.g., with non-experts getting simplified versions with all the jargon removed or explained. Prior work has also noted that users' perceptions of automated systems can be shaped by conceptual metaphors [36], which is also an interesting presentation method to explore.

Characterize the paths and connections between XAI methods. We observe two interesting usage patterns of XAI methods in ConvXAI. First, different XAI methods can serve different roles in a conversation.
For example, explanations of the training data information and model accuracy are static enough that it is sufficient to describe them once in the ConvXAI tutorial; feature attributions and model performance confidence tend to be treated as the basic explanations and initial exploration points, whereas counterfactual explanations are most suitable for follow-ups. Second, some explanation methods can lead to natural drill-downs. For example, we may naturally consider editing the most important words to get counterfactual explanations after we identify those words in feature attributions. If we more rigorously inspect the best roles of, and links between, explanation methods, we may be able to create a graph connecting them; tracing the graph should help us understand and implement what context should be kept for which potential follow-ups. Meanwhile, while we encourage continuous conversations, we also observe that as a conversation becomes longer, the earlier information is usually flushed out, and it becomes hard to stay on top of the entire session. Some users suggested promising directions; one participant recommended "slicing the dialogue into sessions, where each session only discusses one specific sentence." Alternatively, advanced visual signals that reflect conversation structure [35] (e.g., the hierarchical dropdown in Wikum reflecting information flow [85]) could help people trace back to earlier snippets.

Incorporate multi-modality. While our current controls and user queries tend to be explicit, prior work has envisioned much more implicit control signals. For example, Lakkaraju et al. [38] envisioned that the Natural Language Understanding unit should be able to parse sentences like "Wow, it's surprising that...", decipher the user's intent to query an outlier's feature importance, and provide appropriate responses.
Identifying users' emotional responses to certain explanations (e.g., surprised, frustrated, affirmed) could be an interesting way to point to potential control responses. Though natural language interaction is intuitive, not all information needs to be conveyed through dialogs. Inspired by SelectXAI's flat learning curve, a combination of natural language inquiry and traditional WIMP interaction could make the system easier to grasp. Future work can survey how people might react to buttons or sliders that allow them to control the number of words or the number of similar examples to inspect.

Conclusion

In this study, we present ConvXAI, a system to support scientific writing via conversational AI explanations. Informed by linguistic properties of human conversation and empirical formative studies, we identify four design principles of conversational XAI: these systems should address various user questions ("multifaceted"), provide details on demand ("controllability"), and actively suggest and accept follow-up questions ("mix-initiative" and "context-aware drill-down"). We further build an interactive prototype to instantiate these rationales, in which paper writers can interact with various state-of-the-art explanations through a typical chatbot interface. Through 14 user studies, we show that conversational XAI is promising for prompting users to think through what questions they want to ask, and for addressing diverse questions. We conclude by discussing the use patterns of ConvXAI, as well as implications for future conversational XAI systems.

Figure 2: An overview of the User Interface (UI) for the pilot study. (A) shows the recommended edits from the writing models, and (B) displays a range of XAI buttons for users to choose from for viewing AI explanations.
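ConvXAI's style feedback (shown in the figure excerpt below) compares a draft's sequence of predicted aspect labels against a conference's dominant aspect pattern via Dynamic Time Warping. The following is only a minimal sketch of that comparison: the unit substitution cost and the label sequences are illustrative assumptions, not ConvXAI's actual cost function or data.

```python
# Sketch: compare a draft's predicted aspect sequence against a target
# aspect pattern with Dynamic Time Warping (DTW). The aspect labels and
# sequences below are illustrative, not taken from ConvXAI's dataset.

def dtw_distance(seq_a, seq_b, cost=lambda a, b: 0 if a == b else 1):
    """Classic O(len(a) * len(b)) DTW with a unit substitution cost."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match/substitution.
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A hypothetical dominant pattern and a draft's predicted aspects.
pattern = ["background", "purpose", "method", "finding"]
draft = ["background", "background", "method", "finding", "finding"]

print(dtw_distance(draft, pattern))  # -> 1: only the missing 'purpose' is penalized
```

A lower distance means the draft's structure is closer to the target style; backtracking through the same table would reveal which sentences deviate from the pattern, which is the kind of sentence-level suggestion shown in the example response.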
Figure 3 (excerpted system response): "We summarized all the collected CHI abstracts into five structural patterns, and found your submission is closest to the pattern 'background' (33.3%) -> 'purpose' (16.7%) -> 'method' (16.7%) -> 'finding' (33.3%). Using the Dynamic Time Warping algorithm to analyze how to revise your submission to fit this style pattern, the result suggests describing the background aspect, but not purpose, in this sentence. To improve, you can check the most important words resulting in the prediction, and further check how to revise the input into another label. See XAI questions below:"

Figure 4: An overview of the SelectXAI system. Similarly, it includes (A) two writing models to generate writing structure predictions, and (B) an integrated writing review, followed by (C) static XAI buttons to show and hide the explanations.

Figure 5: Analyses of users' self-ratings on their experiences with ConvXAI and SelectXAI. They self-rated ConvXAI better on all dimensions, most significantly on the usefulness of the mix-initiative and multifaceted functionality.

...and more naturally (ConvXAI 4.0 ± 0.91 vs. SelectXAI 3.3 ± 1.25, p = 0.008).

Figure 6: Evaluation of Productivity (A), Perceived Usefulness (B), and Writing Performance (C) measurements to assess users' writing performance in Task 2.

Table 1: ConvXAI covers ten types of user questions (i.e., Data Statistic, Model Description, Feature Attribution, etc.) serving four different XAI goals (e.g., Understand Model, Understand Data, Improve Instance, etc.). The XAI questions are derived from user needs for improving their scientific abstract writing in the conversational XAI systems. Some algorithms refer to NN-DOT [55], Integrated Gradient [51], and MICE.

The top-3 similar examples from the CHI dataset are (conditioned on label=purpose): sample-1137 - Our findings highlight trends that can drive critically needed digital health innovations for vulnerable populations.
sample-2239 - While urban design affects the public, most people do not have the time or expertise to participate in the process. sample-2655 - The EL display is connected to planning software and illuminates the correct hole.

The top-2 similar examples from the CHI dataset are (conditioned on label=background): sample-2307 - In response, we present Media of Things (MoT), a tool for on-location media productions. sample-14733 - To address this issue, we developed the Technology-Supported Reflection Inventory (TSRI), which is a scale that evaluates how effectively a system supports reflection.

Table 2: Examples of use patterns shown in the "Tutorial" explanations suggested by the ConvXAI system.

1 Please see the open-source code of ConvXAI at: https://github.com/huashen218/convxai.git
2 See the universal conversational XAI API at: https://github.com/huashen218/convxai/blob/main/notebooks/convxai_universal_xai_api.ipynb
3 As the writing models of the preliminary and formal conversational XAI systems are identical, we encourage readers to refer to Section 4.2 for more details of all the writing models and reviews.
4 Note that we added "understand suggestion" explanations after the formative study, as explained in Section 4.1.
5 Note that we deem similar examples useful mostly because users also tend to learn academic writing styles by mimicking published papers, but whether such reference counts as (or encourages) plagiarism is an open question that needs investigation.
6 https://arxiv.org/list/cs.AI/recent
7 https://arxiv.org/list/cs.CL/recent
8 https://arxiv.org/list/cs.HC/recent

Conference'23, 2023

Acknowledgments

We thank Ruchi Panchanadikar for her amazing help in optimizing the UI visualization and functions, Yuxin Deng for her thoughtful comments on improving the user studies, and Reuben Lee for his helpful work on improving the UI details.
We also thank all the users for participating in the formative and formal studies and providing insightful feedback. We thank the reviewers for their constructive feedback.

A Appendix

A.1 Formative Study

A.1.1 Participant Details. In order to capture the user demands of conversational XAI systems from more comprehensive and representative views, we recruited seven participants with diverse backgrounds and occupations for the formative study. The demographic statistics of the seven participants are summarized in Table 3(A). Specifically, we invited 7 participants, including 3 females and 4 males, and collected and recorded their information according to criteria including writing experience.

A.2 Writing Model Performance

We summarize the writing model performance in Table 4. The writing structure model performance of the fine-tuned SciBERT language model is shown in Table 4(A); the model accuracy is 0.7453. Table 4(B) shows the five aspect patterns extracted for each conference. Table 4(C) gives the data statistics of the three conferences in terms of abstract number, sentence number, and average sentence length, and Table 4(D) shows the quality score distribution.

Table 4: A summary of the writing models' performance. The writing structure model performance (with the fine-tuned SciBERT language model) is shown in (A); (B) shows the five aspect patterns extracted for each conference; (C) gives the data statistics of the three conferences in terms of abstract number, sentence number, and average sentence length; and (D) shows the quality score distribution.

References

Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017.
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. In 5th International Conference on Learning Representations (ICLR 2017), Toulon, France. OpenReview.net. https://openreview.net/forum?id=BJh6Ztuxl

Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.

Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel S. Weld. 2021. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1-16.

Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Online, 149-155. https://doi.org/10.18653/v1/2020.blackboxnlp-1.14

Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.

Timothy Bickmore and Justine Cassell. 2001. Relational agents: A model and implementation of building user trust. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 396-403.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877-1901.

Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Montréal, Canada, 9560-9572. https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html

Alessandro Checco, Lorenzo Bracciale, Pierpaolo Loreti, Stephen Pinfield, and Giuseppe Bianchi. 2021. AI-assisted peer review. Humanities and Social Sciences Communications 8, 1, 1-11.

Arjun Choudhry, Mandar Sharma, Pramod Chundury, Thomas Kapler, Derek W. S. Gray, Naren Ramakrishnan, and Niklas Elmqvist. 2020. Once upon a time in visualization: Understanding the use of textual narratives for causality. IEEE Transactions on Visualization and Computer Graphics 27, 2, 1332-1342.

Eric Chu, Deb Roy, and Jacob Andreas. 2020. Are visual explanations useful? A case study in model-in-the-loop prediction. arXiv preprint abs/2007.12248. https://arxiv.org/abs/2007.12248

Andy Coenen, Luke Davis, Daphne Ippolito, Emily Reif, and Ann Yuan. 2021. Wordcraft: A human-AI collaborative editor for story writing. arXiv preprint arXiv:2107.07430.

Fred J. Damerau. 1964. A technique for computer detection and correction of spelling errors. Commun. ACM 7, 3, 171-176.

Rajarshi Das, Ameya Godbole, Manzil Zaheer, Shehzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Explainable Multi-hop Inference. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13). Association for Computational Linguistics, Hong Kong, 101-117. https://doi.org/10.18653/v1/D19-5313

Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online, 3243-3255. https://doi.org/10.18653/v1/2020.emnlp-main.262

Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Wanyu Du, Zae Myung Kim, Vipul Raheja, Dhruv Kumar, and Dongyeop Kang. 2022. Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision. arXiv preprint arXiv:2204.03685.

Ethan Fast, Binbin Chen, Julia Mendelsohn, Jonathan Bassen, and Michael S. Bernstein. 2018. Iris: A conversational agent for complex tasks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1-12.

Shi Feng and Jordan Boyd-Graber. 2019. What can AI do for me? Evaluating machine learning interpretations in cooperative play. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 229-239.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12, 86-92.

Katy Gero, Alex Calderwood, Charlotte Li, and Lydia Chilton. 2022. A Design Space for Writing Support Tools Using a Cognitive Process Model of Writing. In Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022). 11-24.

Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2020. Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience. arXiv preprint abs/2001.09219. https://arxiv.org/abs/2001.09219

Ana Valeria Gonzalez, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, and Srinivasan Iyer. 2020. Human evaluation of spoken vs. visual explanations for open-domain QA. arXiv preprint arXiv:2012.15075.

Filip Graliński, Anna Wróblewska, Tomasz Stanisławek, Kamil Grabowski, and Tomasz Górecki. 2019. GEval: Tool for Debugging NLP Datasets and Models. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Florence, Italy, 254-262. https://doi.org/10.18653/v1/W19-4826

Peter Hase and Mohit Bansal. 2020. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5540-5552. https://doi.org/10.18653/v1/2020.acl-main.491

Bernease Herman. 2017. The Promise and Peril of Human Evaluation for Model Interpretability. arXiv preprint abs/1711.07414. https://arxiv.org/abs/1711.07414

Chieh-Yang Huang, Shih-Hong Huang, and Ting-Hao Kenneth Huang. 2020. Heteroglossia: In-situ story ideation with the crowd. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1-12.

Ting-Hao 'Kenneth' Huang, Chieh-Yang Huang, Chien-Kuang Cornelia Ding, Yen-Chia Hsu, and C. Lee Giles. 2020. CODA-19: Using a non-expert crowd to annotate research aspects on 10,000+ abstracts in the COVID-19 open research dataset. arXiv preprint arXiv:2005.02367.

Yi-Ching Huang, Hao-Chuan Wang, and Jane Yung-jen Hsu. 2018. Feedback Orchestration: Structuring Feedback for Facilitating Reflection and Revision in Writing. In Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing. 257-260.
Ian Hutchby and Robin Wooffitt. 2008. Conversation Analysis. Polity.

Alon Jacovi and Yoav Goldberg. 2020. Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 4198-4205. https://doi.org/10.18653/v1/2020.acl-main.386

Fred Jelinek, Robert L. Mercer, Lalit R. Bahl, and James K. Baker. 1977. Perplexity-a measure of the difficulty of speech recognition tasks. The Journal of the Acoustical Society of America 62, S1, S63-S63.

Dan Jurafsky. 2000. Speech & Language Processing. Pearson Education India.

Pranav Khadpe, Ranjay Krishna, Li Fei-Fei, Jeffrey T. Hancock, and Michael S. Bernstein. 2020. Conceptual metaphors impact perceptions of human-AI collaboration. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2, 1-26.

Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. 2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online, 3154-3167. https://doi.org/10.18653/v1/2020.emnlp-main.255

Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. 2022. Rethinking Explainability as a Dialogue: A Practitioner's Perspective. arXiv preprint arXiv:2202.01875.

Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a human-AI collaborative writing dataset for exploring language model capabilities. In CHI Conference on Human Factors in Computing Systems. 1-19.

Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 107-117. https://doi.org/10.18653/v1/D16-1011

Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-based human debugging of NLP models: A survey. Transactions of the Association for Computational Linguistics 9, 1508-1528.

Vladimir I. Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, Vol. 10. 707-710.

Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.

Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1-15.

Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30.

Wassim Marrakchi. 2021. Explaining by Conversing: The Argument for Conversational XAI Systems. Ph.D. Dissertation. Harvard University.

Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1-38.

Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. arXiv (2017).

Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 220-229.

Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2020. Towards Transparent and Explainable Attention Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 4206-4216. https://doi.org/10.18653/v1/2020.acl-main.387

Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? arXiv preprint arXiv:1805.05492.

Meinard Müller. 2007. Dynamic time warping. In Information Retrieval for Music and Motion. 69-84.

Navid Nobani, Fabio Mercorio, and Mario Mezzanzanica. 2021. Towards an Explainer-agnostic Conversational XAI. In IJCAI. 4909-4910.

Joon Sung Park, Rick Barber, Alex Kirlik, and Karrie Karahalios. 2019. A slow algorithm improves users' assessments of the algorithm's accuracy. Proceedings of the ACM on Human-Computer Interaction 3, CSCW, 1-15.

Pouya Pezeshkpour, Sarthak Jain, Byron C. Wallace, and Sameer Singh. 2021. An empirical comparison of instance attribution methods for NLP. arXiv preprint arXiv:2104.04128.

Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2021. Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1-52.

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 4932-4942. https://doi.org/10.18653/v1/P19-1487

Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A Conversational Question Answering Challenge. Transactions of the Association for Computational Linguistics 7, 249-266. https://doi.org/10.1162/tacl_a_00266

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 3982-3992. https://doi.org/10.18653/v1/D19-1410 Why Should I Trust You?": Explaining the Predictions of Any Classifier. Sameer Marco Túlio Ribeiro, Carlos Singh, Guestrin, 10.1145/2939672.2939778Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogithe 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningSan Francisco, CA, USAACMMarco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi (Eds.). ACM, 1135-1144. https://doi.org/10.1145/2939672.2939778 Anchors: High-Precision Model-Agnostic Explanations. 
Sameer Marco Túlio Ribeiro, Carlos Singh, Guestrin, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). Sheila A. McIlraith and Kilian Q. Weinbergerthe Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)New Orleans, Louisiana, USAAAAI PressMarco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High- Precision Model-Agnostic Explanations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Ad- vances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, Sheila A. McIlraith and Kilian Q. Weinberger (Eds.). AAAI Press, 1527- 1535. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16982 Creative help: A story writing assistant. Melissa Roemmele, Andrew S Gordon, International Conference on Interactive Digital Storytelling. SpringerMelissa Roemmele and Andrew S Gordon. 2015. Creative help: A story writing assistant. In International Conference on Interactive Digital Storytelling. Springer, 81-92. Explaining nlp models via minimal contrastive editing (mice). Alexis Ross, Ana Marasović, Matthew E Peters, arXiv:2012.13985arXiv preprintAlexis Ross, Ana Marasović, and Matthew E Peters. 2020. Explaining nlp models via minimal contrastive editing (mice). arXiv preprint arXiv:2012.13985 (2020). Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. 
Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio, 10.18653/v1/P19-1004Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsChinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 32-37. https://doi.org/10.18653/v1/P19-1004 How do you Converse with an Analytical Chatbot? Revisiting Gricean Maxims for Designing Analytical Conversational Behavior. Vidya Setlur, Melanie Tory, CHI Conference on Human Factors in Computing Systems. Vidya Setlur and Melanie Tory. 2022. How do you Converse with an Analytical Chatbot? Revisiting Gricean Maxims for Designing Analytical Conversational Behavior. In CHI Conference on Human Factors in Computing Systems. 1-17. How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels. Hua Shen, Ting-Hao Huang, Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. the AAAI Conference on Human Computation and Crowdsourcing8Hua Shen and Ting-Hao Huang. 2020. How Useful Are the Machine-Generated In- terpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8. 168-172. Explaining the Road Not Taken. Hua Shen, Ting-Hao&apos;kenneth&apos; Huang, ACM CHI 2022 Workshop on Human-Centered Explainable AI. Hua Shen and Ting-Hao'Kenneth' Huang. 2021. Explaining the Road Not Taken. ACM CHI 2022 Workshop on Human-Centered Explainable AI (2021). 
Are Shortest Rationales the Best Explanations for Human Understanding. Hua Shen, Tongshuang Wu, Wenbo Guo, Ting-Hao Huang, 10.18653/v1/2022.acl-short.2Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics2Short Papers)Hua Shen, Tongshuang Wu, Wenbo Guo, and Ting-Hao Huang. 2022. Are Shortest Rationales the Best Explanations for Human Understanding?. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Dublin, Ireland, 10-19. https://doi.org/10.18653/v1/2022.acl-short.2 Does String-Based Neural MT Learn Source Syntax. Xing Shi, Inkit Padhi, Kevin Knight, 10.18653/v1/D16-1159Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsXing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax?. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 1526-1534. https://doi.org/10.18653/v1/D16-1159 The eyes have it: A task by data type taxonomy for information visualizations. Ben Shneiderman, The craft of information visualization. ElsevierBen Shneiderman. 2003. The eyes have it: A task by data type taxonomy for information visualizations. In The craft of information visualization. Elsevier, 364-371. Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. 2022. TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations. Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. 2022. 
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations. (2022). No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML. Alison Smith-Renner, Ron Fan, Melissa Birchfield, Tongshuang Wu, Jordan L Boyd-Graber, Daniel S Weld, Leah Findlater, 10.1145/3313831.3376624CHI '20: CHI Conference on Human Factors in Computing Systems. Regina Bernhaupt, Florian 'Floyd' Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, Alix Goguey, Pernille Bjøn, Shengdong Zhao, Briane Paul Samson, and Rafal KocielnikHonolulu, HI, USAACMAlison Smith-Renner, Ron Fan, Melissa Birchfield, Tongshuang Wu, Jordan L. Boyd-Graber, Daniel S. Weld, and Leah Findlater. 2020. No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML. In CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, Regina Bernhaupt, Florian 'Floyd' Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, Alix Goguey, Pernille Bjøn, Shengdong Zhao, Briane Paul Samson, and Rafal Kocielnik (Eds.). ACM, 1-13. https://doi.org/10.1145/3313831.3376624 Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant. Kacper Sokol, A Peter, Flach, IJCAI. Kacper Sokol and Peter A Flach. 2018. Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant.. In IJCAI. 5868-5870. Exploring the Effects of Interactive Dialogue in Improving User Control for Explainable Online Symptom Checkers. Yuan Sun, Sundar, CHI Conference on Human Factors in Computing Systems Extended Abstracts. Yuan Sun and S Shyam Sundar. 2022. Exploring the Effects of Interactive Dialogue in Improving User Control for Explainable Online Symptom Checkers. 
In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1-7. Exploring and promoting diagnostic transparency and explainability in online symptom checkers. Chun-Hua Tsai, Yue You, Xinning Gui, Yubo Kou, John M Carroll, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. the 2021 CHI Conference on Human Factors in Computing SystemsChun-Hua Tsai, Yue You, Xinning Gui, Yubo Kou, and John M Carroll. 2021. Exploring and promoting diagnostic transparency and explainability in online symptom checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1-17. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models. Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh, 10.18653/v1/D19-3002Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System DemonstrationsHong Kong, ChinaAssociation for Computational LinguisticsEric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations. Association for Computational Linguistics, Hong Kong, China, 7-12. https://doi.org/10.18653/ v1/D19-3002 Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. Yunlong Wang, Priyadarshini Venkatesh, Brian Y Lim, CHI Conference on Human Factors in Computing Systems. Yunlong Wang, Priyadarshini Venkatesh, and Brian Y Lim. 2022. 
Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. In CHI Conference on Human Factors in Computing Systems. 1-28. Attention is not not Explanation. Sarah Wiegreffe, Yuval Pinter, 10.18653/v1/D19-1002Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsSarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 11-20. https://doi.org/10.18653/v1/D19-1002 RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization. Austin P Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, 10.1145/3449280Proc. ACM Hum.-Comput. Interact. 5ArticleAustin P. Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng (Polo) Chau, and Diyi Yang. 2021. RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 181 (apr 2021), 26 pages. https://doi.org/10.1145/3449280 Polyjuice: Automated, General-purpose Counterfactual Generation. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S Weld, arXiv:2101.00288arXiv preprintTongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel S Weld. 2021. Polyjuice: Automated, General-purpose Counterfactual Generation. arXiv preprint arXiv:2101.00288 (2021). 
How to Guide Task-oriented Chatbot Users, and When: A Mixed-methods Study of Combinations of Chatbot Guidance Types and Timings. Su-Fang Yeh, Meng-Hsin Wu, Tze-Yu Chen, Yen-Chun Lin, Xijing Chang, You-Hsuan Chiang, Yung-Ju Chang, CHI Conference on Human Factors in Computing Systems. Su-Fang Yeh, Meng-Hsin Wu, Tze-Yu Chen, Yen-Chun Lin, XiJing Chang, You- Hsuan Chiang, and Yung-Ju Chang. 2022. How to Guide Task-oriented Chatbot Users, and When: A Mixed-methods Study of Combinations of Chatbot Guidance Types and Timings. In CHI Conference on Human Factors in Computing Systems. 1-16. Can we automate scientific reviewing?. Weizhe Yuan, Pengfei Liu, Graham Neubig, arXiv:2102.00176arXiv preprintWeizhe Yuan, Pengfei Liu, and Graham Neubig. 2021. Can we automate scientific reviewing? arXiv preprint arXiv:2102.00176 (2021). A normalized Levenshtein distance metric. Li Yujian, Liu Bo, 29Li Yujian and Liu Bo. 2007. A normalized Levenshtein distance metric. IEEE transactions on pattern analysis and machine intelligence 29, 6 (2007), 1091-1095. Wikum: Bridging discussion forums and wikis using recursive summarization. X Amy, Lea Zhang, David Verou, Karger, Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. the 2017 ACM Conference on Computer Supported Cooperative Work and Social ComputingAmy X Zhang, Lea Verou, and David Karger. 2017. Wikum: Bridging discussion forums and wikis using recursive summarization. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2082-2096. Dissonance Between Human and Machine Understanding. 3, CSCW, Article. Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, Avishek Anand, 10.1145/335915856Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, and Avishek Anand. 2019. Disso- nance Between Human and Machine Understanding. 3, CSCW, Article 56 (2019), 23 pages. https://doi.org/10.1145/3359158 background' (25%) -> 'purpose' (12.5%) -> 'method' (37.5%) -> 'finding. 
25%)1. 'background' (25%) -> 'purpose' (12.5%) -> 'method' (37.5%) -> 'finding' (25%); background' (33.3%) -> 'purpose' (16.7%) -> 'method' (16.7%) -> 'finding. 3332. 'background' (33.3%) -> 'purpose' (16.7%) -> 'method' (16.7%) -> 'finding' (33.3%); background' (42.9%) -> 'method' (28.6%) -> 'finding. 5283. 'background' (42.9%) -> 'method' (28.6%) -> 'finding' (28.5%); background' (50%) -> 'purpose' (16.7%) -> 'finding. 3334. 'background' (50%) -> 'purpose' (16.7%) -> 'finding' (33.3%); background' (25%) -> 'finding' (12.5%) -> 'method' (12.5%) -> 'finding. 50%)5. 'background' (25%) -> 'finding' (12.5%) -> 'method' (12.5%) -> 'finding' (50%); background' (22.2%) -> 'purpose' (11.2%) -> 'method' (33.3%) -> 'finding. 3332. 'background' (22.2%) -> 'purpose' (11.2%) -> 'method' (33.3%) -> 'finding' (33.3%); background' (33.3%) -> 'purpose' (16.7%) -> 'method' (16.7%) -> 'finding. 3333. 'background' (33.3%) -> 'purpose' (16.7%) -> 'method' (16.7%) -> 'finding' (33.3%); background' (33.3%) -> 'method' (16.7%) -> 'finding' (50%). 4. 'background' (33.3%) -> 'method' (16.7%) -> 'finding' (50%); background' (20%) -> 'finding' (6.7%) -> 'background' (13.3%) -> 'purpose' (6.7%) -> 'background' (13.3%) -> 'finding' (6.7%) -> 'method. 6.7%) -> 'finding' (26.7%)5. 'background' (20%) -> 'finding' (6.7%) -> 'background' (13.3%) -> 'purpose' (6.7%) -> 'background' (13.3%) -> 'finding' (6.7%) -> 'method' (6.7%) -> 'finding' (26.7%);
[ "https://github.com/huashen218/", "https://github.com/huashen218/convxai/" ]
[ "Combinatorial Neural Bandits", "Combinatorial Neural Bandits" ]
[ "Taehyun Hwang ", "Kyuwook Chai ", "Min-Hwan Oh " ]
[]
[]
We consider a contextual combinatorial bandit problem where in each round a learning agent selects a subset of arms and receives feedback on the selected arms according to their scores. The score of an arm is an unknown function of the arm's feature. Approximating this unknown score function with deep neural networks, we propose algorithms: Combinatorial Neural UCB (CN-UCB) and Combinatorial Neural Thompson Sampling (CN-TS). We prove thatwhere d is the effective dimension of a neural tangent kernel matrix, K is the size of a subset of arms, and T is the time horizon. For CN-TS, we adapt an optimistic sampling technique to ensure the optimism of the sampled combinatorial action, achieving a worst-case (frequentist) regret of O( d √ T K). To the best of our knowledge, these are the first combinatorial neural bandit algorithms with regret performance guarantees. In particular, CN-TS is the first Thompson sampling algorithm with the worst-case regret guarantees for the general contextual combinatorial bandit problem. The numerical experiments demonstrate the superior performances of our proposed algorithms.
null
[ "https://export.arxiv.org/pdf/2306.00242v1.pdf" ]
258,999,766
2306.00242
900dfda15cb92f37bfb18aa508ad9252a223818d
Introduction

We consider a general class of contextual semi-bandits with combinatorial actions, where in each round the learning agent is given a set of arms, chooses a subset of arms, and receives feedback on each of the chosen arms along with a reward based on the combinatorial action. The goal of the agent is to maximize cumulative rewards through these repeated interactions. The feedback is given as a function of the feature vectors (contexts) of the chosen arms; however, the functional form of the feedback model is unknown to the agent. Therefore, the agent needs to carefully balance exploration and exploitation in order to simultaneously learn the feedback model and optimize cumulative rewards.
Many real-world applications are naturally combinatorial action selection problems. For example, in most online recommender systems, such as streaming services and online retail, recommended items are typically presented as a set or a list. Real-time vehicle routing can be formulated as the shortest-path problem under uncertainty, a classic combinatorial problem, and network routing is another example of a combinatorial optimization problem. Often, in these applications, the response model is not fully known a priori (e.g., user preferences in recommender systems, arrival times in vehicle routing) but can only be queried through sequential interactions. Therefore, these applications can be formulated as combinatorial bandit problems.

Despite the generality and wide applicability of the combinatorial bandit problem in practice, the combinatorial action space poses a greater challenge in balancing exploration and exploitation. To overcome this challenge, parametric models such as the (generalized) linear model are often assumed for the feedback model (Qin et al., 2014; Wen et al., 2015; Kveton et al., 2015; Zong et al., 2016; Li et al., 2016; 2019; Oh & Iyengar, 2019). These works typically extend techniques from (generalized) linear contextual bandits (Abe & Long, 1999; Auer, 2002; Filippi et al., 2010; Rusmevichientong & Tsitsiklis, 2010; Abbasi-Yadkori et al., 2011; Chu et al., 2011; Li et al., 2017) to utilize contextual information and the structure of the feedback/reward model, thereby avoiding naive exploration of the combinatorial action space. However, the representation power of the (generalized) linear model can be limited in many real-world applications, and when the model assumptions are violated, the performances of algorithms that exploit the structure of a model can deteriorate severely.
Beyond the parametric assumption on the feedback model, discretization-based techniques (Chen et al., 2018; Nika et al., 2020) have been proposed to capture the non-linearity of the base-arm feedback under a Lipschitz condition on the feedback model. These techniques split the context space and compute an upper confidence bound of rewards for each context partition. The performances of these algorithms strongly depend on the policy of how to partition the context space. However, splitting the context space is computationally expensive: as the reward function becomes more complex, so does the splitting procedure. Thus, it is challenging to apply these methods to high-dimensional contextual bandits. In addition, the Lipschitz assumption on the feedback model (not on the reward function) does not hold when contexts close in the context space yield significantly different outcomes, i.e., when the context space cannot be partitioned with respect to the outcome.

Table 1. Comparison with the related work. For the neural bandit algorithms with single arm selection (Zhou et al., 2020; Zhang et al., 2021), the reward function is not defined for a super arm (or the reward function can be viewed the same as the feedback for a single arm). All of the feedback models assume the boundedness of feedback. O is a big-O notation up to logarithmic factors.

Algorithm | Combinatorial action | Feedback model | Reward function | Regret
UCB (Zhou et al., 2020) | No | General | - | O(d√T)
TS (Zhang et al., 2021) | No | General | - | O(d√T)
CN-UCB (this work) | Yes | General | Lipschitz | O(d√T) or O(√(dTK))
CN-TS (this work) | Yes | General | Lipschitz | O(d√(TK))

† Bayesian regret, which is a weaker notion of regret than the worst-case regret.
‡ d represents the approximate optimality dimension related to the context space.
Incorporating the superior representation power and recent advances in the generalization theory of deep neural networks (Jacot et al., 2018; Cao & Gu, 2019) into contextual bandits, an upper confidence bound (UCB) algorithm extending the linear contextual bandit has been proposed (Zhou et al., 2020). Extending the UCB approach, Zhang et al. (2021) proposed a neural network-based Thompson Sampling (TS) algorithm (Thompson, 1933). However, these algorithms are proposed only for single-action selection, and how they generalize to combinatorial action selection has remained open.

In this paper, we study provably efficient contextual combinatorial bandit algorithms without any modeling assumptions on the feedback model (with mild assumptions on the reward function, which takes the feedback as an input). The extension to combinatorial actions with provable performance guarantees requires more involved analysis and novel algorithmic modifications, particularly for the TS algorithm. To briefly illustrate this challenge, even under the simple linear feedback model, a worst-case regret bound has not been known for a TS algorithm with various classes of combinatorial actions. This is due to the difficulty of ensuring the optimism of randomly sampled combinatorial actions (see Section 4.1). Addressing such challenges, we adapt an optimistic sampling technique to our proposed TS algorithm, which allows us to achieve a sublinear regret.

Our main contributions are as follows:

• We propose algorithms for a general class of contextual combinatorial bandits: Combinatorial Neural UCB (CN-UCB) and Combinatorial Neural Thompson Sampling (CN-TS). To the best of our knowledge, these are the first neural network-based combinatorial bandit algorithms with regret guarantees.
• We establish that CN-UCB is statistically efficient, achieving O(d√T) or O(√(dTK)) regret, where d is the effective dimension of a neural tangent kernel matrix, K is the size of a subset of arms, and T is the time horizon. This result matches the corresponding regret bounds of linear contextual bandits.

• The highlight of our contributions is that CN-TS is the first TS algorithm with a worst-case regret guarantee of O(d√(TK)) for a general class of contextual combinatorial bandits. To the best of our knowledge, even under the simpler linear feedback model, existing TS algorithms for various combinatorial actions (including semi-bandits) do not have worst-case regret guarantees, due to the difficulty of ensuring the optimism of sampled combinatorial actions. We overcome this challenge by adapting optimistic sampling of the estimated reward while sampling directly in the reward space.

• Numerical evaluations demonstrate the superior performances of our proposed algorithms. We observe that the performances of the benchmark methods deteriorate significantly when their modeling assumptions are violated. In contrast, our proposed methods exhibit consistently competitive performances.

Problem setting

Notations. For a vector x ∈ R^d, we denote its ℓ2-norm by ∥x∥_2 and its transpose by x^⊤. The weighted ℓ2-norm associated with a positive definite matrix A is defined by ∥x∥_A := √(x^⊤ A x). The trace of a matrix A is tr(A). For a positive integer N, we define [N] := {1, 2, . . . , N}.

Contextual Combinatorial Bandit. In this work, we consider a contextual combinatorial bandit, where T is the total number of rounds and N is the number of arms. At round t ∈ [T], a learning agent observes the set of context vectors for all arms {x_{t,i} ∈ R^d | i ∈ [N]} and chooses a set of arms S_t ⊂ [N] with size constraint |S_t| = K. S_t is called a super arm.
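As a quick illustration of the action space just described, a super arm is simply a K-subset of the arm indices, so the number of candidate actions grows combinatorially in N. The sketch below (with illustrative sizes of our choosing, and 0-based indices rather than the paper's [N] = {1, ..., N}) enumerates them:

```python
import math
from itertools import combinations

N, K = 5, 2  # illustrative sizes, not the paper's experimental settings

# All candidate super arms: K-subsets of {0, ..., N-1}.
super_arms = list(combinations(range(N), K))

print(len(super_arms))  # C(N, K) = 10 candidate super arms
assert len(super_arms) == math.comb(N, K)
```

Even for moderate N and K this enumeration is infeasible, which is why the algorithms below rely on an optimization oracle rather than exhaustive search.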
We introduce the notion of the candidate super arm set S ⊂ 2^[N], defined as the set of all possible subsets of arms with size K, i.e., S := {S ⊂ [N] | |S| = K}.

SCORE FUNCTION FOR FEEDBACK

Once a super arm S_t ∈ S is chosen, the agent observes the scores of the chosen arms {v_{t,i}}_{i∈S_t} and receives a reward R(S_t, v_t) as a function of the score vector v_t := [v_{t,i}]_{i=1}^N (which we discuss in the next section). This type of feedback is also known as semi-bandit feedback (Audibert et al., 2014). Note that in combinatorial bandits, the feedback and the reward are not necessarily the same, unlike in non-combinatorial bandits. For each t ∈ [T] and i ∈ [N], the score v_{t,i} is assumed to be generated as

v_{t,i} = h(x_{t,i}) + ξ_{t,i},    (1)

where h is an unknown function satisfying 0 ≤ h(x) ≤ 1 for any x, and ξ_{t,i} is ρ-sub-Gaussian noise satisfying E[ξ_{t,i} | F_t] = 0, where F_t is the history up to round t.

To learn the score function h in Eq. (1), we use a fully connected neural network (Zhou et al., 2020; Zhang et al., 2021) with depth L ≥ 2, defined recursively:

f_1 = W_1 x,
f_ℓ = W_ℓ φ(f_{ℓ−1}),  2 ≤ ℓ ≤ L,
f(x; θ) = √m · f_L,    (2)

where θ := [vec(W_1)^⊤, . . . , vec(W_L)^⊤]^⊤ ∈ R^p is the parameter of the neural network with p = dm + m^2(L−2) + m, φ(x) := max{x, 0} is the ReLU activation function, and m is the width of each hidden layer. We denote the gradient of the neural network by g(x; θ) := ∇_θ f(x; θ) ∈ R^p.

REWARD FUNCTION & REGRET

R(S, v) is a deterministic reward function that measures the quality of the super arm S based on the scores v. For example, the reward of a super arm S_t can be the sum of the scores of the arms in S_t, i.e., R(S_t, v_t) = Σ_{i∈S_t} v_{t,i}. For our analysis, the reward function can be any function (linear or non-linear) satisfying the following mild assumptions, which are standard in the combinatorial bandit literature (Qin et al., 2014; Li et al., 2016).

Assumption 1 (Monotonicity).
R(S, v) is monotone non- decreasing with respect to the score vector v = [v i ] N i=1 , which means, for any S, if v i ≤ v ′ i for all i ∈ [N ], we have R(S, v) ≤ R(S, v ′ ) . Assumption 2 (Lipschitz continuity). R(S, v) is Lipschitz continuous with respect to the score vector v restricted on the arms in S, which means, there exists a constant C 0 > 0 such that for any v and v ′ , we have |R (S, v) − R(S, v ′ )| ≤ C 0 i∈S (v i − v ′ i ) 2 . Remark 1. Reward function satisfying Assumptions 1 and 2 encompasses a wide range of combinatorial feedback models including semi-bandit, document-based or position based ranking models, and cascading models with little change to the learning algorithm. See Appendix G for more detailed discussions. Note that we do not require the agent to have direct knowledge on the explicit form of the reward function R(S, v). For the sake of clear exposition, we assume that the agent has access to an exact optimization oracle O S (v) which takes a score vector v as an input and returns the solution of the maximization problem argmax S∈S R(S, v). Remark 2. One can trivially extend the exact optimization oracle to an α-approximation oracle without altering the learning algorithm or regret analysis. For problems such as semi-bandit algorithms choosing top-K arms, exact optimization can be done by simply sorting base scores. Even for more challenging assortment optimization, there are many polynomial-time (approximate) optimization methods available (Rusmevichientong et al., 2010;Davis et al., 2014). For this reason, we present the regret analysis without α-approximation assumption. Extension of our regret analysis to an α-approximation oracle is given in Appendix E. 
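For instance, with the sum-of-scores reward used later in the experiments, the exact oracle reduces to sorting; a sketch (our own, assuming that reward):

```python
import numpy as np

# With the sum-of-scores reward R(S, v) = sum_{i in S} v_i, the exact oracle
# O_S(v) = argmax_{|S|=K} R(S, v) reduces to taking the K highest scores.
def oracle_topk(v, K):
    return np.argsort(v)[-K:]

R = lambda S, v: v[S].sum()

v = np.array([0.2, 0.9, 0.1, 0.7, 0.4])
S = oracle_topk(v, K=2)                       # picks arms 1 and 3

# Assumption 1 (monotonicity): raising any score cannot decrease the reward.
assert R(S, v + 0.05) >= R(S, v)
# Assumption 2 (Lipschitz continuity) holds for this reward with
# C0 = sqrt(K) by the Cauchy-Schwarz inequality.
```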
The goal of the agent is to minimize the following (worst-case) cumulative expected regret:

$$\mathcal{R}(T) = \sum_{t=1}^{T} \mathbb{E}\big[ R(S_t^*, v_t^*) - R(S_t, v_t^*) \big] \qquad (3)$$

where $v_t^* := [h(x_{t,i})]_{i=1}^{N}$ is the expected score vector, which is unknown, and $S_t^* := \operatorname{argmax}_{S \in \mathcal{S}} R(S, v_t^*)$ is the offline optimal super arm at round $t$ under the expected score.

Combinatorial Neural UCB (CN-UCB)

CN-UCB Algorithm

In this section, we present our first algorithm, Combinatorial Neural UCB (CN-UCB). CN-UCB is a neural network-based UCB algorithm that operates using the optimism in the face of uncertainty (OFU) principle (Lai & Robbins, 1985) for combinatorial actions. In our proposed method, the neural network used for feedback model approximation is initialized by randomly generating each entry of $\theta_0 = [\mathrm{vec}(W_1)^\top, \ldots, \mathrm{vec}(W_L)^\top]^\top$, where for each $\ell \in [L-1]$, $W_\ell = (W, 0; 0, W)$ with each entry of $W$ generated independently from $N(0, 4/m)$, and $W_L = (w^\top, -w^\top)$ with each entry of $w$ generated independently from $N(0, 2/m)$.

At each round $t \in [T]$, the algorithm observes the contexts for all arms, $\{x_{t,i}\}_{i \in [N]}$, and computes an upper confidence bound $u_{t,i}$ of the expected score for each arm $i$, based on $x_{t,i}$, $\theta_{t-1}$, and the exploration parameter $\gamma_{t-1}$. Then the sum of the upper confidence bound score vector $u_t := [u_{t,i}]_{i=1}^{N}$ and the offset term vector $e_t := [e_t, \cdots, e_t]$ (specified in Lemma 1) is passed to the optimization oracle $\mathcal{O}_{\mathcal{S}}$ as input. The agent then plays $S_t = \mathcal{O}_{\mathcal{S}}(u_t + e_t)$ and receives the corresponding scores $\{v_{t,i}\}_{i \in S_t}$ as feedback, along with the reward associated with super arm $S_t$. Finally, the algorithm updates $\theta_t$ by minimizing the following loss function in Eq. (4) using gradient descent with step size $\eta$ for $J$ steps:

$$\mathcal{L}(\theta) = \frac{1}{2} \sum_{k=1}^{n} \big( f(x_k; \theta) - v_k \big)^2 + \frac{m\lambda}{2} \|\theta - \theta_0\|_2^2 \qquad (4)$$

Here, the loss is minimized using $\ell_2$-regularization. The hyperparameter $\lambda$ controls the level of regularization, where the regularization centers at the randomly initialized neural network parameter $\theta_0$.
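A minimal sketch of this training step, shown for a linear model so that the loss and gradient of Eq. (4) are explicit (names and constants are our own, not the paper's code):

```python
import numpy as np

# J gradient-descent steps with step size eta on the regularized loss of
# Eq. (4), with a linear stand-in f(x; theta) = x @ theta in place of the
# neural network. Regularization is centered at the initial parameter theta0.

def train(X, y, theta0, lam, m, eta, J):
    theta = theta0.copy()
    for _ in range(J):
        resid = X @ theta - y
        grad = X.T @ resid + m * lam * (theta - theta0)   # gradient of Eq. (4)
        theta -= eta * grad
    return theta

rng = np.random.default_rng(2)
n, d, m, lam = 50, 5, 1.0, 0.1
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
theta0 = np.zeros(d)
theta = train(X, y, theta0, lam, m, eta=0.01, J=200)

def loss(th):
    return 0.5 * np.sum((X @ th - y) ** 2) + 0.5 * m * lam * np.sum((th - theta0) ** 2)

assert loss(theta) < loss(theta0)   # training reduced the regularized loss
```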
The CN-UCB algorithm is summarized in Algorithm 1.

Regret of CN-UCB

For brevity, let $\{x_k\}_{k=1}^{TN}$ denote the collection of all contexts $\{x_{1,1}, \ldots, x_{T,N}\}$.

Definition 1. Define recursively, for $i, j \in [TN]$,

$$\tilde{H}^{(1)}_{i,j} = \Sigma^{(1)}_{i,j} = \langle x_i, x_j \rangle, \qquad A^{(\ell)}_{i,j} = \begin{pmatrix} \Sigma^{(\ell)}_{i,i} & \Sigma^{(\ell)}_{i,j} \\ \Sigma^{(\ell)}_{j,i} & \Sigma^{(\ell)}_{j,j} \end{pmatrix},$$
$$\Sigma^{(\ell+1)}_{i,j} = 2\, \mathbb{E}_{(y,z) \sim N(0, A^{(\ell)}_{i,j})} \big[\phi(y)\phi(z)\big], \qquad \tilde{H}^{(\ell+1)}_{i,j} = 2\, \tilde{H}^{(\ell)}_{i,j}\, \mathbb{E}_{(y,z) \sim N(0, A^{(\ell)}_{i,j})} \big[\phi'(y)\phi'(z)\big] + \Sigma^{(\ell+1)}_{i,j}.$$

Then $H = (\tilde{H}^{(L)} + \Sigma^{(L)})/2$ is called the neural tangent kernel (NTK) matrix on the context set $\{x_k\}_{k=1}^{TN}$. The NTK matrix $H$ on the contexts $\{x_k\}_{k=1}^{TN}$ is defined recursively from the input layer to the output layer of the network (Zhou et al., 2020; Zhang et al., 2021). We then define the effective dimension of the NTK matrix $H$.

Definition 2. The effective dimension $\tilde{d}$ of the NTK matrix $H$ with regularization parameter $\lambda$ is defined as

$$\tilde{d} = \frac{\log \det(I + H/\lambda)}{\log(1 + TN/\lambda)}. \qquad (5)$$

The effective dimension can be thought of as the actual dimension of the contexts in the reproducing kernel Hilbert space spanned by the NTK. For further details, we refer the reader to Jacot et al. (2018). We proceed under the following assumption regarding the contexts.

Assumption 3. For any $k \in [TN]$, $\|x_k\|_2 = 1$ and $[x_k]_j = [x_k]_{j + d/2}$ for $1 \le j \le d/2$. Furthermore, for some $\lambda_0 > 0$, $H \succeq \lambda_0 I$.

This is a mild assumption commonly used in neural contextual bandits (Zhou et al., 2020; Zhang et al., 2021). $\|x\|_2 = 1$ is imposed only for simplicity of exposition. For the condition on the entries of $x$, we can always construct a new context $x' = [x^\top, x^\top]^\top / \sqrt{2}$. A positive definite NTK matrix is a standard assumption in the NTK literature (Du et al., 2019; Arora et al., 2019), also used in the aforementioned neural contextual bandit literature. The following theorem provides the regret bound of Algorithm 1.

Theorem 1. Suppose Assumptions 1-3 hold. Let $h = [h(x_k)]_{k=1}^{TN} \in \mathbb{R}^{TN}$.
If we run CN-UCB with $m \ge \mathrm{poly}(T, L, N, \lambda^{-1}, \lambda_0^{-1}, \log T)$, $\eta = \bar{C}_1 (TKmL + m\lambda)^{-1}$, $\lambda \ge \bar{C}_2 LK$, and $J = 2 \log\big(\sqrt{\lambda/(TK)} / (\lambda + \bar{C}_3 TKL)\big)\, TKL / (\bar{C}_1 \lambda)$ for some positive constants $\bar{C}_1, \bar{C}_2, \bar{C}_3$ with $\bar{C}_2 \ge \max_{t,i} \|g(x_{t,i}; \theta_{t-1})/\sqrt{m}\|_2^2 / L$, and $B \ge \sqrt{2 h^\top H^{-1} h}$, then the cumulative expected regret of CN-UCB over horizon $T$ is upper-bounded by

$$\mathcal{R}(T) = \tilde{O}\Big( \sqrt{\tilde{d}\, T \max\{\tilde{d}, K\}} \Big) .$$

Discussion of Theorem 1. Theorem 1 establishes that the cumulative regret of CN-UCB is $\tilde{O}(\tilde{d}\sqrt{T})$ or $\tilde{O}(\sqrt{\tilde{d}TK})$, whichever is higher. This result matches the state-of-the-art regret bounds for contextual combinatorial bandits with the linear feedback model (Li et al., 2016; Zong et al., 2016; Li & Zhang, 2018). Note that the existence of $\bar{C}_2$ in Theorem 1 follows from Lemma B.6 in Zhou et al. (2020) and Lemma B.3 in Cao & Gu (2019). While the regret analysis for Theorem 1 has its own merit, the technical lemmas for Theorem 1 also provide the building blocks for the more challenging analysis of the TS algorithm, which is presented in Section 4.

Algorithm 1 Combinatorial Neural UCB (CN-UCB)
Input: Number of rounds $T$, regularization parameter $\lambda$, norm parameter $B$, step size $\eta$, network width $m$, number of gradient descent steps $J$, network depth $L$.
Initialization: Randomly initialize $\theta_0$ as described in Section 3.1 and set $Z_0 = \lambda I$.
for $t = 1, \ldots, T$ do
  Observe $\{x_{t,i}\}_{i \in [N]}$
  Compute $\hat{v}_{t,i} = f(x_{t,i}; \theta_{t-1})$ and $u_{t,i} = \hat{v}_{t,i} + \gamma_{t-1} \|g(x_{t,i}; \theta_{t-1})/\sqrt{m}\|_{Z_{t-1}^{-1}}$ for $i \in [N]$
  Let $S_t = \mathcal{O}_{\mathcal{S}}(u_t + e_t)$
  Play super arm $S_t$ and observe $\{v_{t,i}\}_{i \in S_t}$
  Update $Z_t = Z_{t-1} + \sum_{i \in S_t} g(x_{t,i}; \theta_{t-1}) g(x_{t,i}; \theta_{t-1})^\top / m$
  Update $\theta_t$ to minimize the loss in Eq. (4) using gradient descent with step size $\eta$ for $J$ steps
  Compute $\gamma_t$ and $e_{t+1}$ as described in Lemma 1
end for

Proof Sketch of Theorem 1

In this section, we provide a proof sketch of the regret upper bound in Theorem 1 and the key lemmas, whose proofs are deferred to Appendix A.
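As a side note before the sketch: the effective dimension of Definition 2, which drives these bounds, is directly computable for any Gram matrix. A toy check (our own illustration, with a random low-rank matrix standing in for the NTK matrix):

```python
import numpy as np

# Effective dimension (Definition 2) of a kernel matrix H with
# regularization lambda: d_eff = log det(I + H/lam) / log(1 + T*N/lam).
def effective_dimension(H, lam, TN):
    sign, logdet = np.linalg.slogdet(np.eye(len(H)) + H / lam)
    return logdet / np.log(1 + TN / lam)

rng = np.random.default_rng(3)
TN = 30
Phi = rng.normal(size=(TN, 4))        # rank-4 features => low effective dimension
H = Phi @ Phi.T                       # PSD Gram matrix, ambient size TN x TN
d_eff = effective_dimension(H, lam=1.0, TN=TN)

assert 0 < d_eff < TN                 # far below TN when the rank is low
```

For this rank-4 example, `d_eff` lands near 4 rather than near the ambient size 30, which is the intuition behind replacing the parameter count with the effective dimension in the regret bounds.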
Recall that we do not make any parametric assumption on the score function, but a neural network is used to approximate the unknown score function. Hence, we need to carefully control the approximation error. To achieve this, we use an over-parametrized neural network, for which the following condition on the neural network width is required. Condition 1. The network width m satisfies m ≥ C max{L − 3 2 K − 1 2 λ 1 2 log(T N L 2 /δ) 3 2 , T 6 N 6 L 6 log(T 2 N 2 L/δ) max{λ −4 0 , 1}} , m (log m) −3 ≥ CT 4 K 4 L 21 λ −4 (1 + T /λ) 6 + CT KL 12 λ −1 + CT 4 K 4 L 18 λ −10 (λ + T L) 6 , where C is a positive absolute constant. Unlike the analysis of the (generalized) linear UCB algorithms (Abbasi-Yadkori et al., 2011;Li et al., 2017), we do not have guarantees on the upper confidence bound u t,i being higher than the expected score v * t,i = h(x t,i ) due to the approximation error. Therefore, we consider adding the offset term to the the upper confidence bound to ensure optimism. The following lemma shows that the upper confidence bounds u t,i do not deviate far from the expected score h(x t,i ) and specifies the value of the offset term. Lemma 1. For any δ ∈ (0, 1), suppose the width of the neural network m satisfies Condition 1. Let γ t be a positive scaling factor defined as γ t = Γ 1,t ρ log det Z t det λI + Γ 2,t − 2 log δ + √ λB + (λ + C 1 tKL) (1 − ηmλ) J 2 tK/λ + Γ 3,t , where Γ 1,t = 1 + C Γ,1 t 7 6 K 7 6 L 4 λ − 7 6 m − 1 6 log m , Γ 2,t = C Γ,2 t 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m , Γ 3,t = C Γ,3 t 7 6 K 7 6 L 7 2 λ − 7 6 m − 1 6 log m(1 + tK/λ) , for some constants C 1 , C Γ,1 , C Γ,2 , C Γ,3 > 0. If η ≤ C 2 (T KmL+mλ) −1 for some C 2 > 0, then for any t ∈ [T ] and i ∈ [N ], with probability at least 1 − δ we have |u t,i − h(x t,i )| ≤ 2γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 + e t , where e t is defined for some absolute constants C 3 , C 4 > 0 as follows. e t := C 3 γ t−1 t 1 6 K 1 6 L 7 2 λ − 2 3 m − 1 6 log m + C 4 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m . 
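The index construction controlled by Lemma 1 can be sketched with a linearized stand-in for the network (all names are our own; the feature phi_i plays the role of the scaled gradient g(x_{t,i}; theta_{t-1})/sqrt(m)):

```python
import numpy as np

# UCB index sketch: the bonus is the weighted norm ||phi_i||_{Z^{-1}} scaled
# by gamma, Z accumulates outer products of the chosen arms' features, and
# the oracle receives the index vector. Estimates f_hat are placeholders
# for the network outputs f(x_{t,i}; theta_{t-1}).

rng = np.random.default_rng(6)
N, K, p, lam, gamma = 20, 4, 12, 1.0, 1.0

Z = lam * np.eye(p)                            # Z_0 = lambda * I
phi = rng.normal(size=(N, p)) / np.sqrt(p)     # stand-ins for g(x_i)/sqrt(m)
f_hat = rng.uniform(size=N)                    # estimated scores

Zinv = np.linalg.inv(Z)
bonus = gamma * np.sqrt(np.einsum('ip,pq,iq->i', phi, Zinv, phi))
u = f_hat + bonus                              # UCB indices (offset e_t omitted)
S_t = np.argsort(u)[-K:]                       # pass indices to the top-K oracle

for i in S_t:                                  # rank-one updates for chosen arms
    Z += np.outer(phi[i], phi[i])

assert np.all(u >= f_hat)                      # the bonus is non-negative
```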
The next corollary shows that the surrogate upper confidence bound u t,i + e t is higher than true mean score h(x t,i ) with high probability. Lemma 2. For any δ ∈ (0, 1) suppose the width of the neural network m satisfies Condition 1. Corollary 1. With probability at least 1 − δ u t,i + e t ≥ h(x t,i ) .If η ≤ C 1 (T KmL + mλ) −1 , and λ ≥ C 2 LK, for some positive constant C 1 , C 2 with C 2 ≥ max t,i ∥g(x t,i ; θ t−1 )/ √ m∥ 2 2 /L, then with probability at least 1 − δ, for some C 3 > 0, T t=1 i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ 2 d log(1+T N/λ) + 2 + C 3 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m . Combining these results, we can derive the regret bound in Theorem 1. First, using the Lipschitz continuity of the reward function, we bound the instantaneous regret with the sum of scores for each individual arm within the super arm. By Lemma 1, the upper confidence bound of the overparametrized neural network concentrates well around the true score function. By adding an arm-independent offset term, we can ensure the optimism of the surrogate upper confidence bound. Then, we apply Lemma 2 to derive the desired cumulative regret bound. Combinatorial Neural TS (CN-TS) Challenges in Worst-Case Regret Analysis for Combinatorial Actions The challenges in the worst-case (non-Bayesian) regret analysis for TS algorithms with combinatorial actions lie in the difficulty of ensuring optimism of a sampled combinatorial action. The key analytical element to drive a sublinear regret for any TS algorithm, either combinatorial or noncombinatorial, is to show that a sampled action is optimistic with sufficient frequency (Agrawal & Goyal, 2013;Abeille & Lazaric, 2017). With combinatorial actions, however, ensuring optimism becomes more challenging than singleaction selection. 
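A back-of-the-envelope calculation (ours, not the paper's proof) makes the difficulty concrete, and previews why the max-of-M sampling used by CN-TS and quantified in Lemma 3 restores constant-order optimism:

```python
import numpy as np

# If one posterior sample is optimistic for a single arm with probability p,
# all K arms of a super arm are simultaneously optimistic with probability
# about p**K (exponentially small in K). Taking the max of M independent
# samples per arm lifts the per-arm probability to 1 - (1-p)**M.

p, K = 0.25, 10
naive = p ** K                                    # one sample per arm
boosted = lambda M: (1 - (1 - p) ** M) ** K       # max of M samples per arm

assert naive < 1e-5
assert boosted(8) > 0.3                           # constant-order optimism

# Sample size suggested in Lemma 3: M = ceil(1 - log K / log(1 - p_bar))
# with p_bar = 1/(4e*sqrt(pi)); it grows only logarithmically in K.
p_bar = 1 / (4 * np.e * np.sqrt(np.pi))
M_of = lambda K_: int(np.ceil(1 - np.log(K_) / np.log(1 - p_bar)))
assert M_of(1) == 1 and M_of(4) < M_of(64) <= M_of(1024)
```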
In particular, if the structure of the reward and feedback model is not known, one can only resort to hoping that all of the sampled base arms in the chosen super arm S t are optimistic, i.e., the scores of all sampled base arms are higher than their expected scores. The probability of such an event can be exponentially small in the size of the super arm K. For example, let the probability that the sampled score of the i-th arm is higher than the corresponding expected score be at least p, i.e., P( v i > h(x i )) ≥ p. If the sampled score of every arm is optimistic, by the monotonicity property of the reward function, the reward induced by the sampled scores would be larger than the reward induced by the expected score, i.e., R(S, v) ≥ R(S, v * ). However, the probability of the event that all the K sampled scores are higher than their corresponding expected scores would be in the order of p K . Hence, the probability of such an event can be exponentially small in the size of the super arm K. Note that in the UCB exploration, one can ensure highprobability optimism even with combinatorial actions in a straightforward manner since action selection is deterministic. However, in TS with combinatorial actions, suitable random exploration with provable efficiency is much more challenging to guarantee. This challenge is further exacerbated by the complex analysis based on neural networks that we consider in this work. CN-TS Algorithm To address the challenge of TS exploration with combinatorial actions described above, we present CN-TS, a neural network-based TS algorithm. We make two modifications from conventional TS for parametric bandits. First, instead of maintaining an actual Bayesian posterior as in the canonical TS algorithms, CN-TS is a generic randomized algorithm that samples rewards from a Gaussian distribution. 
The algorithm directly samples estimated rewards from a Gaussian distribution, rather than sampling network parameters -this modification is adapted from Zhang et al. (2021). Second, in order to ensure sufficient optimistic sampling in combinatorial action space, we draw multiple M independent score samples for each arm instead of drawing a single sample. Leveraging these multiple samples, we compute the most optimistic (the highest estimated) score for each arm. We demonstrate that implementing this modification effectively ensures the required optimism of samples, formalized in Lemma 3. The algorithm is summarized in Algorithm 2. Regret of CN-TS Under the same assumptions introduced in the analysis of CN-UCB, we present the worst-case regret bound for CN-TS in Theorem 2. Theorem 2. Suppose Assumptions 1-3 hold and m satisfies Condition 1. If we run CN-TS with η =C 1 (T KmL + mλ) , λ = max{1 + 1/T,C 2 LK}, J = 2 log λ/T KL/(4C 3 T ) T KL/(C 1 λ) ν = B + ρ d log(1 + T N/λ) + 2 + 2 log T , B = max{1/(22e √ π), √ 2h ⊤ Hh} , M = ⌈1 − log K/ log(1 − p)⌉ for some positive constantsC 1 > 0,C 3 > 0, andC 2 ≥ Algorithm 2 Combinatorial Neural Thompson Sampling (CN-TS) Input: Number of rounds T , regularization parameter λ, exploration variance ν, step size η, network width m, number of gradient descent steps J, network depth L, sample size M . 
Initialization: Randomly initialize θ 0 as described in Section 3.1 and Z 0 = λI for t = 1, ..., T do Observe {x t,i } i∈[N ] Compute σ 2 t,i = λg(x t,i ; θ t−1 ) ⊤ Z −1 t−1 g(x t,i ; θ t−1 )/m for each i ∈ [N ] Sample { v (j) t,i } M j=1 independently from N (f (x t,i ; θ t−1 ), ν 2 σ 2 t,i ) for each i ∈ [N ] Compute v t,i = max j v (j) t,i for each i ∈ [N ] Let S t = O S ( v t + ϵ) Play super arm S t and observe {v t,i } i∈St Update Z t = Z t−1 + i∈St g(x t,i ; θ t−1 )g(x t,i ; θ t−1 ) ⊤ /m Update θ t to minimize the loss (4) using gradient descent with η for J times end for max t,i ∥g(x t,i ; θ t−1 )/ √ m∥ 2 2 /L, then the cumulative ex- pected regret of CN-TS over horizon T is upper-bounded by R(T ) = O( d √ T K) . Discussion of Theorem 2. Theorem 2 establishes that the cumulative regret of CN-TS is O( d √ T K). To the best of our knowledge, this is the first TS algorithm with the worst-case regret guarantees for general combinatorial action settings. This is crucial since various combinatorial bandit problems were prohibitive for TS methods due to the difficulty of ensuring the optimism of randomly selected super-action as discussed in Section 4.1. Our result also encompasses the linear feedback model setting, for which, to our best knowledge, a worst-case regret bound has not been proven for TS with combinatorial actions in general. Remark 3. Both CN-UCB and CN-TS depend on the condition of network size m. However, our experiments show superior performances of the proposed algorithms even when they are implemented with much smaller m (see Section 5). The large value of m is sufficient for regret analysis, due to the current state of the NTK theory. The same phenomenon is also present in the single action selection version of the neural bandits (Zhang et al., 2021;Zhou et al., 2020). Remark 4. For a clear exposition of main ideas, the knowledge of T is assumed for both CN-UCB and CN-TS. 
This knowledge was also assumed in the previous neural bandit literature (Zhang et al., 2021;Zhou et al., 2020). We can replace this requirement of knowledge on T by using a doubling technique. We provide modified algorithms that do not depend on such knowledge of T in Appendix F. Remark 5. The proposed optimistic sampling technique can be applied to the regret analysis for TS algorithms with combinatorial actions other than neural bandit settings. Regarding the cost of the optimistic sampling, this salient feature of the algorithm is controlled by the number of mul-tiple samples M . A notable feature is that while this technique provides provably sufficient optimism, the proposed optimistic sampling technique comes at a minimal cost of log M . That is, even if we over-sample by the factor of 2, the additional cost in the regret bound only increases by the additive logarithmic factor, i.e., log 2M = log M + log 2. Also, given that a theoretically suggested value of M as shown in Theorem 2 is only Ω(log K), the regret caused by the optimistic sampling is of O(log log K). Proof Sketch of Theorem 2 For any t ∈ [T ], we define events E σ t and E µ t similar to the prior literature on TS (Agrawal & Goyal, 2013;Zhang et al., 2021) defined as follows. E σ t := {ω ∈ F t+1 | ∀i, | v t,i − f (x t,i ; θ t−1 )| ≤ β t νσ t,i } E µ t := {ω ∈ F t+1 | ∀i, |f (x t,i ; θ t−1 ) − h(x t,i )| ≤ νσ t,i + ϵ} where for some constants {C ϵ,k } 4 k=1 , ϵ is defined as ϵ := C ϵ,1 T 2 3 K 2 3 L 3 λ − 2 3 m − 1 6 log m + C ϵ,2 (1 − ηmλ) J/2 T KL/λ + C ϵ,3 T 7 6 K 7 6 L 4 λ − 7 6 m − 1 6 log m(1 + T K/λ) + C ϵ,4 T 7 6 K 7 6 λ − 2 3 L 9 2 m − 1 6 log m · B + ρ d log(1 + T N/λ) + 2 − 2 log δ . Under event E σ t , the difference between the optimistic sampled score and the estimated score can be controlled by the score's approximate posterior variance. 
Under the event E µ t , the estimated score based on the neural network does not deviate far from the expected score up to the approximate error term. Note that both events E µ t , E σ t happen with high probability. The remaining part is a guarantee on the probability of optimism for randomly sampled actions. Lemma 3 shows that the proposed optimistic sampling ensures a constant probability of optimism. Lemma 3. Suppose we take optimistic samples of size M = ⌈1 − log K log(1− p) ⌉ where p := 1/(4e √ π). Then we have P R(S t , v t + ϵ) > R(S * t , v * t )|F t , E µ t ≥ p where ϵ = [ϵ, . . . , ϵ] ∈ R N . Lemma 3 implies that even in the worst case, our randomized action selection still provides optimistic rewards at least with constant frequency. Hence, the regret pertaining to random sampling can be upper-bounded based on this frequent-enough optimism. The complete proof is deferred to Appendix B. Numerical Experiments In this section, we perform numerical evaluations on CN-UCB and CN-TS. For each round in CN-TS, we draw M = 10 samples for each arm. We also present the performances of CN-TS(M=1), which is a special case of CN-TS drawing only one sample per arm. We perform synthetic experiments and measure the cumulative regret of each algorithm. In Experiment 1, we compare our algorithms with contextual combinatorial bandits based on a linear assumption: CombLinUCB and CombLinTS (Wen et al., 2015). In Experiment 2, we demonstrate the empirical performances of our algorithms as the context dimension d increases. The contexts given to the agent in each round are randomly generated from a unit ball. The dimension of each context is d = 80 for Experiment 1, and d = 40, 80, 120 for Experiment 2. For each round, the agent of each algorithm chooses K = 4 arms among N = 20. Similar to the experiments in Zhou et al. 
(2020), we assume three unknown score functions h 1 (x t,i ) = x ⊤ t,i a , h 2 (x t,i ) = (x ⊤ t,i a) 2 , h 3 (x t,i ) = cos(πx ⊤ t,i a) , where a has the same dimension of the context and is randomly generated from a unit ball and remains fixed during the horizon. We suppose a top-K problem and use the sum of scores R(S t , v t ) = i∈St v t,i as the reward function. However, as mentioned in Remark 1, the reward function can be any function that satisfies Assumptions 1 and 2. We use regularization parameter λ = 1 for all methods, confidence bound coefficient α = 1 for CombLinUCB and γ = 1 for CN-UCB, and exploration variance ν = 1 for CN-TS, CN-TS(M=1) and CombLinTS. To estimate the score of each arm, we design a neural network with depth L = 2 and hidden layer width m = 100. The number of parameters is p = md + m = 8100 for Experiment 1, and p = 4100, 8100, 12100 for Experiment 2. The activation function is the rectified linear unit (ReLU). We use the loss function in Eq.(4) and use stochastic gradient descent with a batch of 100 super arms. We train the neural network every 10 rounds. The training epoch is 100, and the learning rate is 0.01. Experiment 1. We evaluate the cumulative regret of the algorithms for each score function h. For score functions h 1 (x) and h 2 (x), we set the number of rounds, T , to 2000, while T is set to 4000 for h 3 (x). We then present the average results, derived from 20 independent runs for each score function instance. The results are depicted in Figure 1. Our proposed algorithms show significant improvements over those based on linear models. In contrast to linear baselines, the cumulative regrets for CN-UCB and CN-TS demonstrate a sub-linear trend, even when the score function is quadratic or non-linear. These findings suggest that our algorithms can be readily applied to a diverse range of complex reward functions. Experiment 2. We present the results of our proposed algorithms for context dimensions d = 40, 80, 120. 
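For reference, the three synthetic score functions used in these experiments can be written down directly (our own sketch of the setup just described):

```python
import numpy as np

# The three unknown score functions of the synthetic experiments:
# h1(x) = x.a, h2(x) = (x.a)^2, h3(x) = cos(pi * x.a), where both the
# contexts x and the hidden parameter a are drawn from the unit ball.

rng = np.random.default_rng(5)
d = 8
a = rng.normal(size=d)
a /= np.linalg.norm(a)                 # fixed hidden parameter

h1 = lambda x: x @ a
h2 = lambda x: (x @ a) ** 2
h3 = lambda x: np.cos(np.pi * (x @ a))

x = rng.normal(size=d)
x /= np.linalg.norm(x)                 # one sample context
for h in (h1, h2, h3):
    assert -1.0 <= h(x) <= 1.0         # bounded since |x.a| <= 1
```

With unit-norm `x` and `a`, the Cauchy-Schwarz inequality bounds the inner product, so all three score functions stay within [-1, 1].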
To highlight the advantage of optimistic sampling, we show a comparison between CN-TS and CN-TS(M=1). For these experiments, we utilize the quadratic score function $h_2(x)$. The number of rounds, $T$, is set to 2000 for $d = 40, 80$ and 4000 for $d = 120$. Similar to Experiment 1, the results represent averages derived from 20 independent runs. Figure 2 demonstrates the proficient performance of our algorithms, even as the feature dimension increases. The empirical results suggest a scalability of our algorithms in $d$ that is no greater than linear. Furthermore, when $d$ is large, CN-TS exhibits a marginally lower cumulative regret compared to CN-TS(M=1). This observation substantiates our assertion that CN-TS ensures a constant probability of optimism by drawing multiple $M$ samples.

Conclusion

In this paper, we study a general class of contextual combinatorial bandit problems, where the model of the score function is unknown. Approximating the score function with deep neural networks, we propose two algorithms: CN-UCB and CN-TS. We prove that CN-UCB achieves a regret of $\tilde{O}(\tilde{d}\sqrt{T})$ or $\tilde{O}(\sqrt{\tilde{d}TK})$, and that CN-TS achieves a worst-case regret of $\tilde{O}(\tilde{d}\sqrt{TK})$. To our knowledge, these are the first combinatorial neural bandit algorithms with sub-linear regret guarantees. In particular, CN-TS is the first general contextual combinatorial Thompson sampling algorithm with worst-case regret guarantees. Compared to the benchmark methods, our proposed methods exhibit consistently competitive performances, hence achieving both provable efficiency and practicality.

Appendix A. Regret Bound for CN-UCB

In this section, we present all the necessary technical lemmas and their proofs, followed by the proof of Theorem 1.

A.1. Proof of Lemma 1

We introduce the following technical lemmas, which are necessary for the proof of Lemma 1.

Lemma 4 (Lemma 5.1 in Zhou et al. (2020)). For any $\delta \in (0, 1)$, suppose that there exists a positive constant $\bar{C}$ such that $m \ge \bar{C} T^4 N^4 L^6 \lambda_0^{-1} \log(T^2 N^2 L / \delta)$.
Then, with probability at least 1 − δ, there exists a θ * ∈ R p such that for all i ∈ [T N ], h(x k ) = g(x k ; θ 0 ) ⊤ (θ * − θ 0 ) , √ m∥θ * − θ 0 ∥ 2 ≤ √ h ⊤ H −1 h , where H is the NTK matrix defined in Definition 1 and h = [h(x k )] T N k=1 . Lemma 5 (Lemma 4.1 in Cao & Gu (2019)). Suppose that there existC 1 ,C 2 > 0 such that for any δ ∈ (0, 1), τ satisfies C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤ τ ≤C 2 L −6 (log m) − 3 2 . Then, with probability at least 1 − δ, for all θ,θ satisfying ∥ θ − θ 0 ∥ 2 ≤ τ , ∥θ − θ 0 ∥ 2 ≤ τ and k ∈ [T N ], we have f (x k ; θ) − f (x k ;θ) − g(x k ;θ) ⊤ ( θ −θ) ≤C 3 τ 4 3 L 3 m log m, whereC 3 ≥ 0 is an absolute constant. Lemma 6 (Lemma 5 in Allen-Zhu et al. (2019b)). For any δ ∈ (0, 1), suppose that there existC 1 ,C 2 > 0 such that if τ satisfiesC 1 m − 3 2 L − 3 2 max (log m) − 3 2 , (log(T N/δ)) 3 2 ≤ τ ≤C 2 L − 9 2 (log m) −3 . Then, with probability at least 1 − δ, for all θ satisfying ∥θ − θ 0 ∥ 2 ≤ τ and k ∈ [T N ] we have ∥g(x k ; θ) − g(x k ; θ 0 )∥ 2 ≤C 3 log mτ 1 3 L 3 ∥g(x k ; θ 0 )∥ 2 , whereC 3 > 0 is an absolute constant. Lemma 7 (Lemma B.3 in Cao & Gu (2019)). Suppose that there existC 1 ,C 2 > 0 such that for any δ ∈ (0, 1), τ satisfies C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤ τ ≤C 2 L −6 (log m) − 3 2 . Then with probability at least 1 − δ, for any θ satisfying ∥θ − θ 0 ∥ 2 ≤ τ and k ∈ [T N ] we have ∥g(x k ; θ)∥ F ≤C 3 √ mL whereC 3 > 0 is an absolute constant. Proof of Lemma 1. First of all, note that because m satisfies Condition 1, the required conditions in Lemma 4-7 are satisfied. 
For any t ∈ [T ], i ∈ [N ], by definition of u t,i and v * t,i , we have u t,i − v * t,i = f (x t,i ; θ t−1 ) + γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t − h(x t,i ) = f (x t,i ; θ t−1 ) + γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 − g(x t,i ; θ 0 ) ⊤ (θ * − θ 0 ) ≤ f (x t,i ; θ t−1 ) − g(x t,i ; θ 0 ) ⊤ (θ * − θ 0 ) I0 + γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 I1 where the second equality holds due to Lemma 4, and the inequality follows from the triangle inequality. For I 0 , we have I 0 = f (x t,i ; θ t−1 ) − g(x t,i ; θ 0 ) ⊤ (θ * − θ 0 + θ t−1 − θ t−1 ) = f (x t,i ; θ t−1 ) − f (x t,i ; θ 0 ) − g(x t,i ; θ 0 ) ⊤ (θ t−1 − θ 0 ) − g(x t,i ; θ 0 ) ⊤ (θ * − θ t−1 ) ≤ f (x t,i ; θ t−1 ) − f (x t,i ; θ 0 ) − g(x t,i ; θ 0 ) ⊤ (θ t−1 − θ 0 ) I2 + g(x t,i ; θ 0 ) ⊤ (θ * − θ t−1 ) I3(6) where the second equality holds due to the initial condition of f , i.e., f (x; θ 0 ) = 0 for all x, and the inequality comes from the triangle inequality. To bound I 2 , we have I 2 = f (x t,i ; θ t−1 ) − f (x t,i ; θ 0 ) − g(x t,i ; θ 0 ) ⊤ (θ t−1 − θ 0 ) ≤ C ′ 3 τ 4 3 L 3 m log m = C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m where the first inequality follows from Lemma 5 for some constant C ′ 3 > 0, and the second equality is due to setting τ of Lemma 5 as 2 tK/(mλ) of Lemma 11, i.e., τ = 2 tK/(mλ). To bound I 3 , we have I 3 = g(x t,i ; θ 0 ) ⊤ (θ * − θ t−1 ) ≤ ∥g(x t,i ; θ 0 )∥ Z −1 t−1 ∥θ * − θ t−1 ∥ Zt−1 ≤ γ t−1 √ m ∥g(x t,i ; θ 0 )∥ Z −1 t−1 where the first inequality holds due to the Cauchy-Schwarz inequality, and the second inequality follows from Lemma 11. Combining the results, we have u t,i − v * t,i ≤ I 2 + I 3 + I 1 ≤ C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m + γ t−1 √ m ∥g(x t,i ; θ 0 )∥ Z −1 t−1 + γ t−1 √ m ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 = C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m + γ t−1 √ m ∥g(x t,i ; θ 0 )∥ Z −1 t−1 + ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 I4 . 
Now I 4 can be bounded as I 4 = ∥g(x t,i ; θ 0 ) + g(x t,i ; θ t−1 ) − g(x t,i ; θ t−1 )∥ Z −1 t−1 + ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 ≤ ∥g(x t,i ; θ 0 ) − g(x t,i ; θ t−1 )∥ Z −1 t−1 + 2 ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 ≤ 1 √ λ ∥g(x t,i ; θ 0 ) − g(x t,i ; θ t−1 )∥ 2 + 2 ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 ≤ 1 √ λ C ′ 2 log m 2 tK/(mλ) 1 3 L 3 ∥g(x t,i ; θ 0 )∥ 2 + 2 ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 ≤ C 2 t 1 6 K 1 6 λ − 2 3 L 7 2 m 1 3 log m + 2 ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 where the first inequality follows from the triangle inequality, the second inequality holds due to the property ∥x∥ Z −1 t−1 ≤ 1 √ λ ∥x∥ 2 , the third inequality follows from Lemma 6 with τ = 2 tK/(mλ) in Lemma 11, and the last inequality holds due to Lemma 7. Finally, by taking a union bound about δ, with probability at least 1 − 5δ, we have u t,i − v * t,i ≤ C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m + γ t−1 √ m I 4 ≤ 2γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 + C 2 γ t−1 t 1 6 K 1 6 λ − 2 3 L 7 2 m − 1 6 log m + C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m . In particular, if we define e t = C 2 γ t−1 t 1 6 K 1 6 λ − 2 3 L 7 2 m − 1 6 log m + C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m and we replace δ with δ/5, then we have the desired result. A.2. Proof of Corollary 1 Proof of Corollary 1. Suppose that Lemma 1 holds. Let us denotē u t,i = u t,i + C 2 γ t−1 t 1 6 K 1 6 λ − 2 3 L 7 2 m − 1 6 log m + C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m et . 
Then, we havē u t,i − v * t,i = f (x t,i ; θ t−1 ) + γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 + e t − g(x t,i ; θ 0 ) ⊤ (θ * − θ 0 ) ≥ − f (x t,i ; θ t−1 ) − g(x t,i ; θ 0 ) ⊤ (θ * − θ 0 ) I0 +γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 + e t ≥ − C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m I2 − γ t−1 √ m ∥g(x t,i ; θ 0 )∥ Z −1 t−1 I3 + γ t−1 √ m ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 + e t + γ t−1 √ m ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 − γ t−1 √ m ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 = −C 3 t 2 3 K 2 3 λ − 2 3 m − 1 6 log m + 2 γ t−1 √ m ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 + e t − γ t−1 √ m ∥g(x t,i ; θ 0 )∥ Z −1 t−1 + ∥g(x t,i ; θ t−1 )∥ Z −1 t−1 I4 ≥ 0 , where the first equation comes from Lemma 4 and the second inequality follows from Eq.(6). A.3. Proof of Lemma 2 The following lemma is necessary for our proof. Lemma 8 (Lemma B.7 in Zhang et al. (2021)). For any t ∈ [T ], suppose that there existsC > 0 such that the network width m satisfies m ≥CT 6 N 6 L 6 log(T LN/δ). Then with probability at least 1 − δ, log det(I + λ −1 K t ) ≤ log det(I + λ −1 H) + 1 , where K t =J ⊤ tJt /m,J t = [g(x 1,a11 ; θ 0 ), · · · , g(x t,a tK ; θ 0 )] ∈ R p×tK , and a tk means k-th action in the super arm S t at time t, i.e., S t := {a t1 , . . . , a tK }. Proof. Note that we have det(Z T ) = det Z T −1 + i∈S T g(x T,i ; θ T −1 )g(x T,i ; θ T −1 ) ⊤ /m = det Z 1 2 T −1 I + i∈S T Z − 1 2 T −1 (g(x T,i ; θ T −1 )/ √ m)(g(x T,i ; θ T −1 )/ √ m) ⊤ Z − 1 2 T −1 Z 1 2 T −1 = det(Z T −1 ) · det I + i∈S T Z − 1 2 T −1 g(x T,i ; θ T −1 )/ √ m Z − 1 2 T −1 g(x T,i ; θ T −1 )/ √ m ⊤ = det(Z T −1 ) · 1 + i∈S T g(x T,i ; θ T −1 )/ √ m 2 Z −1 T −1 = det(Z 0 ) T t=1 1 + i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 . Then, we have log det(Z T ) det(Z 0 ) = T t=1 log 1 + i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 . 
On the other hand, for any t ∈ [T ], we have i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ i∈St 1 λ ∥g(x t,i ; θ t−1 )∥ 2 2 /m ≤ i∈St 1 λm C 2 √ mL 2 ≤ 1 , where the first inequality comes from the property ∥x∥ 2 A −1 ≤ ∥x∥ 2 2 /λ min (A) for any positive definite matrix A, the constant C 2 of the second inequality can be derived by Lemma 7, and the last inequality holds due to the assumption of λ. Then using the inequality, x ≤ 2 log(1 + x) for any x ∈ [0, 1], we have T t=1 i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ 2 T t=1 log 1 + i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ 2 log det Z T det λI ≤ 2 log detZ T det λI + C 3 T 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m , where the last inequality holds due to Lemma 13 for some C 3 > 0. Furthermore, since we have log detZ T det λI = log det Z T (λI) −1 = log det I + T t=1 i∈St g(x t,i ; θ 0 )g(x t,i ; θ 0 ) ⊤ /(mλ) = log det I + λ −1J TJ ⊤ T /m = log det I + λ −1J⊤ TJT /m = log det I + λ −1 K T ≤ log det I + λ −1 H + 1 = d log(1 + T N/λ) + 1 ,(7) where the first, second equation and the first inequality holds naively, the third equality uses the definition ofJ t , the fourth equality holds since for any matrix A ∈ M n (R) the nonzero eigenvalues of I + AA ⊤ and I + A ⊤ A are same, which means det(I + AA ⊤ ) = det(I + A ⊤ A), the first inequality follows from Lemma 8, and the last equality uses the definition of effective dimension in Definition 2. Finally, by taking a union bound about δ, with probability at least 1 − 2δ, we have T t=1 i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ 2 d log(1 + T N/λ) + 2 + C 3 T 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m . By replacing δ with δ/2, we have the desired result. A.4. Proof of Theorem 1 Proof of Theorem 1. We define the following event: E 1 := |u t,i − h(x t,i )| ≤ 2γ t−1 ∥g(x t,i ; θ t−1 )/ √ m∥ Z −1 t−1 + e t , ∀i ∈ [N ], 1 ≤ t ≤ T , E 2 := T t=1 i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t−1 ≤ 2 d log(1 + T N/λ) + 2 + C 3 T 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m , E := E 1 ∩ E 2 . 
Then, we decompose the cumulative expected regret into two components: when E occurs and when E does not happen. R(T ) = E T t=1 (R(S * t , v * t ) − R(S t , v * t )) 1I(E) + E T t=1 (R(S * t , v * t ) − R(S t , v * t )) 1I(E c ) ≤ E   T t=1 (R(S * t , v * t ) − R(S t , v * t )) 1I(E) It   + O(1) , where the inequality holds since we have E holds with probability at least 1 − T −1 by Lemma 1 and Lemma 2. To bound I t , we have I t ≤ R(S * t , v * t ) − R(S t , v * t ) ≤ R(S * t , u t + e t ) − R(S t , v * t ) (8) ≤ R(S t , u t + e t ) − R(S t , v * t ) ≤ C ℓ i∈St u t,i + e t − v * t,i 2 ≤ C ℓ i∈St 2γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 + 2e t 2 ≤ 4C ℓ i∈St max γ t−1 g(x t,i ; θ t−1 )/ √ m Z −1 t−1 , e t 2 ,(9) where C ℓ is a Lipschitz constant, the first inequality holds due to the monotonicity of the reward function, the second inequality comes from the choice of the oracle, i.e., S t = O S (u t + e t ), the third inequality follows from the Lipschitz continuity of the reward function, the fourth inequality comes from Lemma 1 and the last inequality holds due to the property, a + b ≤ 2 max{a, b}. 
On the other hand, if we denote $A_i := \gamma_{t-1}\|g(x_{t,i};\theta_{t-1})/\sqrt m\|_{Z_{t-1}^{-1}}$, then we have
$$\sqrt{\sum_{i\in S_t}\max\{A_i, e_t\}^2} = \sqrt{\sum_{i:\,A_i\ge e_t}A_i^2 + \sum_{i:\,A_i<e_t}e_t^2} \le \sqrt{\sum_{i\in S_t}A_i^2 + \sum_{i\in S_t}e_t^2} \le \sqrt{\sum_{i\in S_t}A_i^2} + \sqrt{\sum_{i\in S_t}e_t^2} = \sqrt{\sum_{i\in S_t}A_i^2} + \sqrt K\, e_t\,. \tag{10}$$
By substituting Eq.(10) into Eq.(9), we have
$$R(S_t^*, v_t^*) - R(S_t, v_t^*) \le 4C_\ell\Big(\sqrt{\textstyle\sum_{i\in S_t}\gamma_{t-1}^2\big\|g(x_{t,i};\theta_{t-1})/\sqrt m\big\|^2_{Z_{t-1}^{-1}}} + \sqrt K\, e_t\Big)\,. \tag{11}$$
Therefore, by summing Eq.(11) over all $t\in[T]$, we have
$$R(T) \le 4C_\ell\sum_{t=1}^T\Big(\sqrt{\textstyle\sum_{i\in S_t}\gamma_{t-1}^2\big\|g(x_{t,i};\theta_{t-1})/\sqrt m\big\|^2_{Z_{t-1}^{-1}}} + \sqrt K\, e_t\Big) \le 4C_\ell\gamma_T\sum_{t=1}^T\sqrt{\textstyle\sum_{i\in S_t}\big\|g(x_{t,i};\theta_{t-1})/\sqrt m\big\|^2_{Z_{t-1}^{-1}}} + 4C_\ell\sqrt K\, T e_T$$
$$\le 4C_\ell\gamma_T\sqrt{T\sum_{t=1}^T\sum_{i\in S_t}\big\|g(x_{t,i};\theta_{t-1})/\sqrt m\big\|^2_{Z_{t-1}^{-1}}} + 4C_\ell\sqrt K\, T e_T \le 4C_\ell\gamma_T\sqrt{T\Big(2\tilde d\log(1+TN/\lambda)+2+\bar C_1 T^{5/3}K^{5/3}L^4\lambda^{-1/6}m^{-1/6}\sqrt{\log m}\Big)} + 4C_\ell\sqrt K\, T e_T\,, \tag{12}$$
where the second inequality holds since $\gamma_t\le\gamma_T$ and $e_t\le e_T$, the third inequality follows from the Cauchy-Schwarz inequality, and the last inequality comes from Lemma 2 with an absolute constant $\bar C_1>0$. Meanwhile, we bound $\gamma_T$ as follows:
$$\gamma_T = \Gamma_{1,T}\Big(\rho\sqrt{\log\tfrac{\det Z_T}{\det\lambda I} + C_{\Gamma,2}T^{5/3}K^{5/3}L^4\lambda^{-1/6}m^{-1/6}\sqrt{\log m} - 2\log\delta} + \sqrt\lambda B\Big) + (\lambda+\bar C_2 TKL)(1-\eta m\lambda)^{J/2}\sqrt{TK/\lambda} + \Gamma_{3,T}$$
$$\le \Gamma_{1,T}\Big(\rho\sqrt{\log\tfrac{\det\bar Z_T}{\det\lambda I} + 2C_{\Gamma,2}T^{5/3}K^{5/3}L^4\lambda^{-1/6}m^{-1/6}\sqrt{\log m} - 2\log\delta} + \sqrt\lambda B\Big) + (\lambda+\bar C_2 TKL)(1-\eta m\lambda)^{J/2}\sqrt{TK/\lambda} + \Gamma_{3,T}$$
$$\le \Gamma_{1,T}\Big(\rho\sqrt{\tilde d\log(1+TN/\lambda) + 1 + 2C_{\Gamma,2}T^{5/3}K^{5/3}L^4\lambda^{-1/6}m^{-1/6}\sqrt{\log m} - 2\log\delta} + \sqrt\lambda B\Big) + (\lambda+\bar C_2 TKL)(1-\eta m\lambda)^{J/2}\sqrt{TK/\lambda} + \Gamma_{3,T}\,, \tag{13}$$
where the first inequality holds due to Lemma 13 and the second inequality holds due to Eq.(7). Note that by setting $\eta = C_1(TKmL + m\lambda)^{-1}$ and $J = 2\log\big(\sqrt{\lambda/(TK)}\big/(\lambda+\bar C_2 TKL)\big)\,\tfrac{TKL}{C_1\lambda}$, we have
$$(\lambda+\bar C_2 TKL)(1-\eta m\lambda)^{J/2}\sqrt{TK/\lambda} \le 1\,.$$
By choosing sufficiently large m such that Γ 1,T = 1 + C Γ,1 T 7 6 K 7 6 L 4 λ − 7 6 m − 1 6 log m ≤ 2 Γ 2,T = C Γ,2 T 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m ≤ 1 , C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m ≤ 1 , (λ +C 2 T KL)Γ 3,T = (λ +C 2 T KL)C Γ,3 T 7 6 K 7 6 L 7 2 λ − 7 6 m − 1 6 log m(1 + T K/λ) ≤ 1 , T e T ≤ γ T + 1 ≤ 2ρ d log(1 + T N/λ) + 3 − 2 log δ + 2 √ λB + 3 , and combining all the results, R(T ) can be bounded by R(T ) ≤ 4C ℓ T 2 d log(1 + T N/λ) + 3 2ρ d log(1 + T N/λ) + 3 − 2 log δ + 2 √ λB + 2 + 4C ℓ √ K 2ρ d log(1 + T N/λ) + 3 − 2 log δ + 2 √ λB + 3 . B. Regret Bound for CN-TS B.1. Proof Lemma 3 proof of Lemma 3. For given F t , since v (j) t,i ∼ N (f (x t,i ; θ t−1 ), ν 2 σ 2 t,i ), we have P max j v (j) t,i + ϵ > h(x t,i ) | F t , E µ t = 1 − P v (j) t,i + ϵ ≤ h(x t,i ), ∀j ∈ [M ] | F t , E µ t = 1 − P v (j) t,i − f (x t,i ; θ t−1 ) + ϵ νσ t,i ≤ h(x t,i ) − f (x t,i ; θ t−1 ) νσ t,i , ∀j ∈ [M ] | F t , E µ t ≥ 1 − P v (j) t,i − f (x t,i ; θ t−1 ) + ϵ νσ t,i ≤ |h(x t,i ) − f (x t,i ; θ t−1 )| νσ t,i , ∀j ∈ [M ] | F t , E µ t = 1 − P v (j) t,i − f (x t,i ; θ t−1 ) νσ t,i ≤ |h(x t,i ) − f (x t,i ; θ t−1 )| − ϵ νσ t,i , ∀j ∈ [M ] | F t , E µ t = 1 − P Z j ≤ |h(x t,i ) − f (x t,i ; θ t−1 )| − ϵ νσ t,i , ∀j ∈ [M ] | F t , E µ t , where the first inequality is due to a ≤ |a|, for the last equality we denote Z j as a standard normal random variable. Note that under the event E µ t , we have |f (x t,i ; θ t−1 ) − h(x t,i )| ≤ νσ t,i + ϵ for all i ∈ [N ]. Hence, under the event E µ t , |h(x t,i ) − f (x t,i ; θ t−1 )| − ϵ νσ t,i ≤ νσ t,i + ϵ − ϵ νσ t,i = 1 . Then, it follows that P max j v (j) t,i + ϵ > h(x t,i ) | F t , E µ t ≥ 1 − [P (Z ≤ 1)] M . Using the anti-concentration inequality in Lemma 9, we have P(Z ≤ 1) ≤ 1 − p where p := 1/(4e √ π). 
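Both constants in the optimistic-sampling argument can be checked numerically. The sketch below (illustrative, not from the paper) verifies $P(Z\le 1)\le 1-p$ for $p = 1/(4e\sqrt\pi)$ and that the sample size $M = \lceil 1 - \log K/\log(1-p)\rceil$ used in the proof of Lemma 3 indeed gives $(1-p)^M \le (1-p)/K$:

```python
import math

# Optimism probability from the anti-concentration bound (Lemma 9) at z = 1.
p = 1.0 / (4.0 * math.e * math.sqrt(math.pi))

# Exact standard-normal CDF at 1 via the error function; the proof only
# needs the one-sided bound P(Z <= 1) <= 1 - p.
phi_1 = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
assert phi_1 <= 1.0 - p

def num_samples(K):
    # M = ceil(1 - log K / log(1 - p)) guarantees (1 - p)^M <= (1 - p)/K.
    return math.ceil(1.0 - math.log(K) / math.log(1.0 - p))

for K in (1, 2, 5, 50):
    M = num_samples(K)
    assert (1.0 - p) ** M <= (1.0 - p) / K + 1e-12
```

This is the Bernoulli-inequality step used below: making $(1-p)^M$ smaller than $(1-p)/K$ lets a union bound over the $K$ arms of the super arm go through.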
Then finally we have P R(S t , v t + ϵ) ≥ R(S * t , v * t ) | F t , E µ t ≥ P R(S * t , v t + ϵ) ≥ R(S * t , v * t ) | F t , E µ t ≥ P v t,i + ϵ ≥ h(x t,i ), ∀i ∈ S * t | F t , E µ t = i∈St P v t,i + ϵ ≥ h(x t,i ) | F t , E µ t ≥ 1 − [P(Z ≤ 1)] M K ≥ 1 − (1 − p) M K ≥ 1 − K (1 − p) M ≥ 1 − (1 − p) = p , where the first inequality holds due to the choice of the oracle, the second inequality comes from the monotonicity of the reward function, the third inequality uses the Bernoulli's inequality, and the last inequality comes from the choice of M = ⌈1 − log K log(1− p) ⌉, which means (1 − p) M ≤ 1 K (1 − p). B.2. Proof of Theorem 2 Proof of Theorem 2. First of all, we decompose the expected cumulative regret as follows: R(T ) = T t=1 E [R(S * t , v * t ) − R(S t , v t + ϵ)] R1(T ) + T t=1 E [R(S t , v t + ϵ) − R(S t , v * t )] R2(T ) . From now on, we derive the bounds for R 1 (T ) and R 2 (T ) respectively. Bounding R 2 (T ) First we decompose R 2 (T ): R 2 (T ) = T t=1 E   R(S t , v t + ϵ) − R(S t ,v t + ϵ) I2   + T t=1 E   R(S t ,v t + ϵ) − R(S t , v * t ) I1   . For I 1 , we have |R(S t ,v t + ϵ) − R(S t , v * t )| ≤ C (1) 0 i∈St (f (x t,i ; θ t−1 ) + ϵ − h(x t,i )) 2 ≤ C (1) 0 i∈St (νσ t,i + 2ϵ) 2 ≤ C (1) 0 i∈St (2 max{νσ t,i , 2ϵ}) 2 ≤ 2C (1) 0 i∈St (νσ t,i ) 2 + i∈St 4ϵ 2 ≤ 2C (1) 0   i∈St (νσ t,i ) 2 + i∈St 4ϵ 2   = 2C (1) 0   ν i∈St σ 2 t,i + 2ϵ √ K   , where the first inequality holds due to the Lipschitz continuity for a constant C (1) 0 > 0, the second inequality holds due to the event E µ t holds with high probability, the third inequality follows from the property that a + b ≤ 2 max{a, b}, and the last inequality uses the fact that √ a + b ≤ √ a + √ b for any a, b ≥ 0. 
On the other hand, for I 2 we have |R(S t , v t + ϵ) − R(S t ,v t + ϵ)| ≤ C (2) 0 i∈St ( v t,i − f (x t,i ; θ t−1 )) 2 ≤ C (2) 0 i∈St β 2 t ν 2 σ 2 t,i = C (2) 0 β t ν i∈St σ 2 t,i , where the first inequality holds for some Lipschitz continuity constant C (2) 0 > 0, the second inequality holds due to the event E σ t holds with high probability. By combining the bounds of I 1 and I 2 , we derive the bound for R 2 (T ) as follows: R 2 (T ) ≤ 2C 0 T t=1 E   ν i∈St σ 2 t,i + 2ϵ √ K   + C 0 νβ T T t=1 E   i∈St σ 2 t,i   = C 0 ν(β T + 2)E   T t=1 i∈St σ 2 t,i   + 2C 0 T √ Kϵ ≤ C 0 ν(β T + 2)E   T T t=1 i∈St σ 2 t,i   + 2C 0 T √ Kϵ = C 0 ν(β T + 2)E   T λ T t=1 i∈St g(x t,i ; θ t−1 )/ √ m 2 Z −1 t   + 2C 0 T √ Kϵ ≤ C 0 ν(β T + 2) T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m + 2C 0 T √ Kϵ ,(14) where C 1 > 0 is a constant, the first inequality uses β t ≤ β T and C 0 = max{C (1) 0 , C(2) 0 }, the second inequality follows from the Cauchy-Schwarz inequality, and the last inequality holds due to Lemma 2. Then, we can write E   i∈S ′ t σ 2 t,i | F t , E t   ≥ E   i∈St σ 2 t,i | F t , E t ,v 1:M t ∈ V opt t   · P v 1:M t ∈ V opt t | F t , E t ≥ E   i∈St σ 2 t,i | F t , E t ,v 1:M t ∈ V opt t   · p/2 , where S ′ t is a super arm induced by any sampled scores. By combining the results, we have E R(S * t , v * t ) − R(S t , v t + ϵ) 1I(E t ) | F t ≤ 2C 0 β t νE   i∈St σ 2 t,i | F t , E t ,v 1:M t ∈ V opt t   · P(E t ) ≤ 4C 0 β t ν p E   i∈S ′ t σ 2 t,i | F t , E t   · P(E t ) ≤ 4C 0 β t ν p E   i∈S ′ t σ 2 t,i | F t   . Summing over all t ∈ [T ] and the failure event into consideration, we have T t=1 E R(S * t , v * t ) − R(S t , v t + ϵ) 1I(E t ) | F t ≤ 4C 0 β T ν p T t=1 E   i∈S ′ t σ 2 t,i | F t   .(15) Note that the summation on the RHS contains an expectation, so we cannot directly apply Lemma 2. 
Instead, since we can write T t=1 E   i∈S ′ t σ 2 t,i | F t   = T t=1 i∈S ′′ t σ 2 t,i + T t=1   E   i∈S ′ t σ 2 t,i | F t   − i∈S ′′ t σ 2 t,i   , where S ′′ t is any super arm induced by arbitrary sampled scores. By using Lemma 2 we have T t=1 i∈S ′′ t σ 2 t,i ≤ T T t=1 i∈S ′′ t σ 2 t,i ≤ T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m ,(16) where C 1 > 0 is a constant. On the other hand, let Y t = t k=1 E i∈S ′ k σ 2 k,i | F k − i∈S ′′ k σ 2 k,i . Since we have Y t − Y t−1 = E   i∈S ′ t σ 2 t,i | F t   − i∈S ′′ t σ 2 t,i , which implies, E [Y t − Y t−1 | F t ] = E   E   i∈S ′ t σ 2 t,i | F t   | F t   − E   i∈S ′′ t σ 2 t,i | F t   = 0 , then Y t is a martingale for all 1 ≤ t ≤ T . Note that we can bound |Y t − Y t−1 | as follows: |Y t − Y t−1 | = E   i∈S ′ t σ 2 t,i | F t   − i∈S ′′ t σ 2 t,i ≤ E   i∈S ′ t (C 2 √ L) 2 | F t   + i∈S ′′ t (C 2 √ L) 2 = 2C 2 √ LK , where the inequality holds due to Lemma 7 for some positive constant C 2 . Then, applying the Azuma-Hoeffding inequality (Lemma 10), which means, T t=1   E   i∈S ′ t σ 2 t,i | F t   − i∈S ′′ t σ 2 t,i   ≤ C 2 8T LK log T ,(17) with probability 1 − T −1 . Combining Eq.(16) and Eq. (17), we have E   i∈S ′ t σ 2 t,i | F t   ≤ T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m + C 2 8T LK log T(18) By substituting Eq.(18) for Eq.(15), we have the bound for R 1 (T ) as follows: R 1 (T ) ≤ 4C 0 β t ν p T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m + C 2 8T LK log T + O(1)(19) Finally, combining Eq.(19) and Eq. (14) we have R(T ) ≤ 4C 0 β T ν p T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m + C 2 8T LK log T + 2C 0 T √ Kϵ + O(1) + C 0 ν(β T + 2) T λ 2 d log(1 + T N/λ) + 2 + C 1 T 5 3 K 3 2 L 4 λ − 1 6 m − 1 6 log m(20) Then choosing m such that which follows, T ϵ ≤ 1. 
Hence, choosing $m$ large enough that
$$C_1 T^{5/3} K^{3/2} L^4 \lambda^{-1/6} m^{-1/6} \sqrt{\log m} \le 1\,, \qquad C_{\epsilon,1} T^{5/3} K^{2/3} L^3 \lambda^{-2/3} m^{-1/6} \sqrt{\log m} \le \tfrac14\,, \qquad C_{\epsilon,2}(1-\eta m\lambda)^{J/2}\sqrt{TKL/\lambda} \le \tfrac14\,,$$
so that $T\epsilon \le 1$, $R(T)$ can be bounded by
$$R(T) \le \frac{4C_0\beta_T\nu}{p}\Big(\sqrt{\tfrac{T}{\lambda}}\sqrt{2\tilde d\log(1+TN/\lambda)+3} + C_2\sqrt{8TLK\log T}\Big) + 2C_0\sqrt K + O(1) + C_0\nu(\beta_T+2)\sqrt{\tfrac{T}{\lambda}}\sqrt{2\tilde d\log(1+TN/\lambda)+3}\,.$$

C. Auxiliary Lemmas

Lemma 9 (Abramowitz & Stegun, 1964). For a Gaussian random variable $Z$ with mean $\mu$ and variance $\sigma^2$, and any $z \ge 1$,
$$\frac{1}{2\sqrt\pi\, z}\exp(-z^2/2) \le P(|Z-\mu| > z\sigma) \le \frac{1}{\sqrt\pi\, z}\exp(-z^2/2)\,.$$

Lemma 10 (Azuma-Hoeffding inequality). If a super-martingale $(Y_t,\, t\ge 0)$ with respect to a filtration $\mathcal F_t$ satisfies $|Y_t - Y_{t-1}| < \beta_t$ for some constants $\beta_t$, for all $t = 1,\dots,T$, then for any $a \ge 0$,
$$P(Y_T - Y_0 \ge a) \le 2\exp\Big(-\frac{a^2}{2\sum_{t=1}^T\beta_t^2}\Big)\,.$$

D. Extensions from Neural Bandits for Single Action

In this section, we describe how the auxiliary lemmas used in the neural bandit works for a single action (Zhou et al., 2020; Zhang et al., 2021) extend to the combinatorial action setting. The main distinction is that in the single action setting the amount of data to be trained on at time $t$ is $t$, whereas in the combinatorial action setting it is $tK$. By properly accounting for this difference, we obtain the following results.

Definition 3. For simplicity, we restate some definitions used in this section:
$$\bar Z_t = \lambda I + \sum_{k=1}^t\sum_{i\in S_k} g(x_{k,i};\theta_0)g(x_{k,i};\theta_0)^\top/m\,,$$
$$Z_t \text{ (or } \tilde Z_t\text{)} = \lambda I + \sum_{k=1}^t\sum_{i\in S_k} g(x_{k,i};\theta_{k-1})g(x_{k,i};\theta_{k-1})^\top/m\,,$$
$$\bar\sigma^2_{t,i} = \lambda\, g(x_{t,i};\theta_0)^\top \bar Z_{t-1}^{-1} g(x_{t,i};\theta_0)/m\,, \qquad \tilde\sigma^2_{t,i} = \lambda\, g(x_{t,i};\theta_{t-1})^\top Z_{t-1}^{-1} g(x_{t,i};\theta_{t-1})/m\,,$$
$$\bar J_t = [g(x_{1,a_{11}};\theta_0),\dots,g(x_{1,a_{1K}};\theta_0),\dots,g(x_{t,a_{tK}};\theta_0)] \in \mathbb R^{p\times tK}\,,$$
$$J_t = [g(x_{1,a_{11}};\theta_{t-1}),\dots,g(x_{1,a_{1K}};\theta_{t-1}),\dots,g(x_{t,a_{tK}};\theta_{t-1})] \in \mathbb R^{p\times tK}\,,$$
$$y_t = [v_{1,a_{11}},\dots,v_{1,a_{1K}},\dots,v_{t,a_{tK}}]^\top \in \mathbb R^{tK}\,,$$
where $a_{tk}$ is the $k$-th action in the super arm $S_t$ at time $t$, i.e., $S_t := \{a_{t1},\dots,a_{tK}\}$.

Lemma 11 (Lemma 5.2 in Zhou et al. (2020)).
Suppose that there exist some positive constantsC 1 ,C 2 > 0 such that for any δ ∈ (0, 1), η ≤C 1 (T KmL + mλ) −1 and m ≥C 2 K − 1 2 L − 3 2 λ 1 2 log(T N L 2 /δ) 3 2 , m(log m) −3 ≥C 2 T KL 12 λ −1 + T 4 K 4 L 18 λ −10 (λ + T KL) 6 + T 7 K 7 L 21 λ −7 (1 + T K/λ) 6 . Then, with probability at least 1 − δ, we have ∥θ t − θ 0 ∥ 2 ≤ 2 tK/(mλ) , ∥θ * − θ t ∥ Zt ≤ γ t / √ m . Lemma 12 (Lemma B.2 in Zhou et al. (2020)). There exist some constants {C i } 4 i=1 > 0 such that for any δ ∈ (0, 1), if for any t ∈ [T ], η, m satisfy that 2 tK/(mλ) ≥C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) log m(1 + tK/λ) , then, with probability at least 1 − δ, we have ∥θ t − θ 0 ∥ ≤ 2 tK/(mλ) and ∥θ t − θ 0 −Z −1 tJt y t /m∥ 2 ≤ (1 − ηmλ) J 2 tK/(mλ) +C 5 t 7 6 K 7 6 L 7 2 λ − 7 6 m − 2 3 log m(1 + tK/λ) , whereC 5 > 0 is an absolute constant. Lemma 13 (Lemma B.3 in Zhou et al. (2020)). There exist some constants {C i } 5 i=1 > 0 such that for any δ ∈ (0, 1), if for any t ∈ [T ], m satisfies that C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤ 2 tK/(mλ) ≤C 2 L −6 (log m) − 3 2 , then, with probability at least 1 − δ, for any t ∈ [T ] we have ∥Z t ∥ 2 ≤ λ +C 3 tKL , ∥Z t − Z t ∥ F ≤C 4 t 7 6 K 7 6 L 4 λ − 1 6 m − 1 6 log m , log detZ t det λI − log det Z t det λI ≤C 5 t 5 3 K 5 3 L 4 λ − 1 6 m − 1 6 log m , whereC 3 ,C 4 ,C 5 > 0 are some absolute constants, andZ t = λI + t−1 k=1 Zhou et al. (2020)). For any δ ∈ (0, 1),C 1 ,C 2 > 0, suppose that τ satisfies i∈S k g(x k,i ; θ 0 )g(x k,i ; θ 0 ) ⊤ . Lemma 14 (Lemma C.2 inC 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤ τ ≤C 2 L −6 (log m) − 3 2 , Then, with probability at least 1 − δ, if for any j ∈ [J], ∥θ (j) − θ (0) ∥ 2 ≤ τ , we have the following results for any j, s ∈ [J], ∥J (j) ∥ F ≤C 3 √ tKmL , ∥J (j) − J (0) ∥ F ≤C 4 τ 1 3 L 7 2 tKm log m , ∥f (s) − f (j) − (J (j) ) ⊤ (θ (s) − θ (j) )∥ 2 ≤C 5 τ 4 3 L 3 tKm log m , ∥y∥ 2 ≤ √ tK , whereC 3 ,C 4 ,C 5 > 0 are some absolute constants. Lemma 15 (Lemma C.3 in Zhou et al. (2020)). 
For any δ ∈ (0, 1) and {C i } 4 i=1 > 0, suppose that τ, η satisfȳ C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤C 2 L −6 (log m) − 3 2 , η ≤C 3 (mλ + tKmL) −1 , τ 8 3 ≤C 4 mλ 2 η 2 L −6 (log m) −1 . Then, with probability at least 1−δ, if for any j ∈ [J], ∥θ (j) −θ (0) ∥ 2 ≤ τ , we have that for any j ∈ [J], ∥f (j) −y∥ 2 ≤ 2 √ tK. Lemma 16 (Lemma C.4 in Zhou et al. (2020)). For any δ ∈ (0, 1) and {C i } 3 i=1 > 0, suppose that τ, η satisfȳ C 1 m − 3 2 L − 3 2 log(T N L 2 /δ) 3 2 ≤C 2 L −6 (log m) − 3 2 , η ≤C 3 (mλ + tKmL) −1 . Then, with probability at least 1 − δ, we have for any j ∈ [J], ∥ θ (j) − θ (0) ∥ 2 ≤ tK/(mλ) , ∥ θ (j) − θ (0) −Z −1J y/m∥ 2 ≤ (1 − ηmλ) j 2 tK/(mλ) . of each round in CN-UCB is relatively slow as m is a large constant. On the other hand, CN-UCB with doubling can show faster computation speed, especially at the beginning rounds, where m is kept relatively small. The same argument can be applied to CN-TS and CN-TS with doubling. CN-UCB with doubling is summarized in Algorithm 3. CN-TS with doubling is summarized in Algorithm 4. The Update algorithm is summarized in Algorithm 5. Algorithm 3 CN-UCB with doubling Input: Epoch period τ , network depth L. Initialization: Initialize {network width m τ , regularization parameter λ τ , norm parameter B τ , step size η τ , number of gradient descent steps J τ } with respect to τ , set number of parameters of neural network p(τ ) = m τ d+m 2 τ (L−2)+m τ , Z 0 = λ τ I p(τ ) , randomly initialize θ 0 as described in Section 3. 
while $t \ne T$ do
    Observe $\{x_{t,i}\}_{i\in[N]}$
    Compute $\bar v_{t,i} = f(x_{t,i};\theta_{t-1})$ and $u_{t,i} = \bar v_{t,i} + \gamma_{t-1}\big\|g(x_{t,i};\theta_{t-1})/\sqrt{m_\tau}\big\|_{Z_{t-1}^{-1}}$ for $i\in[N]$
    Let $S_t = O_S(u_t + e_t)$
    Play super arm $S_t$ and observe $\{v_{t,i}\}_{i\in S_t}$
    Update($t$, $\tau$)
    Compute $\gamma_t$ and $e_{t+1}$ as described in Lemma 1 (replace $\{\lambda, m, I, B, \eta, J\}$ with $\{\lambda_\tau, m_\tau, I_{p(\tau)}, B_\tau, \eta_\tau, J_\tau\}$)
end while

Algorithm 4 CN-TS with doubling
Input: Epoch period $\tau$, network depth $L$, sample size $M$.
Initialization: Initialize {network width $m_\tau$, regularization parameter $\lambda_\tau$, exploration variance $\nu_\tau$, step size $\eta_\tau$, number of gradient descent steps $J_\tau$} with respect to $\tau$, set the number of parameters of the neural network $p(\tau) = m_\tau d + m_\tau^2(L-2) + m_\tau$, $Z_0 = \lambda_\tau I_{p(\tau)}$, randomly initialize $\theta_0$ as described in Section 3.
while $t \ne T$ do
    Observe $\{x_{t,i}\}_{i\in[N]}$
    Compute $\tilde\sigma^2_{t,i} = \lambda_\tau\, g(x_{t,i};\theta_{t-1})^\top Z_{t-1}^{-1} g(x_{t,i};\theta_{t-1})/m_\tau$ for each $i\in[N]$
    Sample $\{\tilde v^{(j)}_{t,i}\}_{j=1}^M$ independently from $N\big(f(x_{t,i};\theta_{t-1}), \nu_\tau^2\tilde\sigma^2_{t,i}\big)$ for each $i\in[N]$
    Compute $\tilde v_{t,i} = \max_j \tilde v^{(j)}_{t,i}$ for each $i\in[N]$
    Let $S_t = O_S(\tilde v_t + \epsilon)$
    Play super arm $S_t$ and observe $\{v_{t,i}\}_{i\in S_t}$
    Update($t$, $\tau$)
end while

F.2. Regret Analysis

The regret upper bounds of CN-UCB with doubling and CN-UCB (or of CN-TS with doubling and CN-TS) have the same rate up to logarithmic factors. We provide a sketch of the proof. By modifying Definitions 1 and 2 with respect to $\tau$, the effective dimension $\tilde d_\tau$ can be written as $\tilde d_\tau = \log\det(I + H_\tau/\lambda_\tau)/\log(1+\tau N/\lambda_\tau)$. Denote the epoch periods as $\tau_n = 2^n\tau_0$, where $n\in\mathbb Z_{\ge 0}$ and $\tau_0$ is the initial epoch period. If $T < \tau_0$, CN-UCB with doubling and CN-TS with doubling are equivalent to CN-UCB and CN-TS, respectively; in this case, there is no change in the regret upper bounds. Meanwhile, if $T \ge \tau_0$, there exists $\bar n\in\mathbb Z_+$ such that $\tau_{\bar n-1} \le T < \tau_{\bar n}$. Denote the instantaneous regret as $\mathrm{Reg}_t$, and define $\sum_{t=a}^b \mathrm{Reg}_t := 0$ if $a > b$.
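The epoch schedule $\tau_n = 2^n\tau_0$ behind the doubling trick can be sketched as follows (an illustrative helper of our own, not part of the algorithms above; reinitialization of the width, regularizer, and network is omitted):

```python
def epoch_schedule(T, tau0):
    """Epoch boundaries tau_n = 2^n * tau0 used by the doubling trick.

    Returns the boundaries tau_0, tau_1, ... up to the first tau_n that
    exceeds the horizon T (so the last epoch covers the remaining rounds).
    """
    taus = [tau0]
    while taus[-1] <= T:
        taus.append(2 * taus[-1])
    return taus

# With tau0 = 4 and horizon T = 50 the boundaries are 4, 8, 16, 32, 64,
# i.e. tau_{n-1} = 32 <= T < 64 = tau_n as in the analysis above.
assert epoch_schedule(50, 4) == [4, 8, 16, 32, 64]
```

A new epoch simply restarts the bandit with parameters tuned to the doubled period, which is why the per-epoch bounds below can be summed directly.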
Then the regret can be written as
$$R(T) = \sum_{t=1}^{\tau_0}\mathrm{Reg}_t + \sum_{t=\tau_0+1}^{\tau_1}\mathrm{Reg}_t + \cdots + \sum_{t=\tau_{\bar n-2}+1}^{\tau_{\bar n-1}}\mathrm{Reg}_t + \sum_{t=\tau_{\bar n-1}+1}^{T}\mathrm{Reg}_t\,.$$
Let $\tilde d := \max\{\tilde d_{\tau_0},\dots,\tilde d_{\tau_{\bar n}}\}$. For CN-UCB with doubling, each sum has an upper bound $O\big(\max\{\tilde d_{\tau_n}, \sqrt{\tilde d_{\tau_n} K}\}\sqrt{\tau_n}\big)$. Thus, the regret is bounded by $O\big(\max\{\tilde d, \sqrt{\tilde d K}\}\sqrt{2T}\big)$. Similarly, for CN-TS with doubling, each sum has upper bound $O\big(\tilde d_{\tau_n}\sqrt{\tau_n K}\big)$ and the regret has upper bound $O\big(\tilde d\sqrt{2TK}\big)$.

Algorithm 5 Update($t$, $\tau$)
Input: Epoch period $\tau$, round $t$.
if $t < \tau$ then
    Update $Z_t = Z_{t-1} + \sum_{i\in S_t} g(x_{t,i};\theta_{t-1})g(x_{t,i};\theta_{t-1})^\top/m_\tau$
    Update $\theta_t$ to minimize the loss in Eq.(4) using gradient descent with $\eta_\tau$ for $J_\tau$ times
else
    $\tau \leftarrow 2\tau$
    Reinitialize $\{m_\tau, \lambda_\tau, \eta_\tau, J_\tau\}$ with respect to $\tau$, set $p(\tau) = m_\tau d + m_\tau^2(L-2) + m_\tau$, randomly reinitialize $\theta_0$ as described in Section 3.1
    For CN-UCB with doubling, reinitialize $B_\tau$ with respect to $\tau$, $Z_0 = \lambda_\tau I_{p(\tau)}$
    For CN-TS with doubling, reinitialize $\nu_\tau$ with respect to $\tau$, $Z_0 = \lambda_\tau I_{p(\tau)}$
    for $t' = 1,\dots,t$ do
        For CN-UCB with doubling, $Z_{t'} = Z_{t'-1} + \sum_{i\in S_{t'}} g(x_{t',i};\theta_{t'-1})g(x_{t',i};\theta_{t'-1})^\top/m_\tau$
        For CN-TS with doubling, $Z_{t'} = Z_{t'-1} + \sum_{i\in S_{t'}} g(x_{t',i};\theta_{t'-1})g(x_{t',i};\theta_{t'-1})^\top/m_\tau$
        Update $\theta_{t'}$ to minimize the loss in Eq.(4) using gradient descent with $\eta_\tau$ for $J_\tau$ times
    end for
end if
Return: $\theta_t$, $Z_t$ or $\tilde Z_t$

G. Specific Examples of Combinatorial Feedback Models

As mentioned in Remark 1, algorithms whose reward function satisfies Assumptions 1 and 2 cover various combinatorial feedback models, suggesting that these assumptions are not restrictive. In this section, we provide specific examples.

G.1. Semi-bandit Model

In the semi-bandit setting, after choosing a super arm, the agent observes all of the scores (or feedback) associated with the super arm and receives a reward as a function of the scores. The main text of this paper describes how our algorithms cover semi-bandit feedback models. Recall that in the semi-bandit setting, if the feature vectors are independent then the score of each arm is independent. Meanwhile, in ranking models (or click models), chosen arms may have a position within the super arm, and the scores of arms may depend on their own attractiveness as well as their position.

G.2.
Document-based Model

The document-based model is a click model that assumes the score of an arm is identical to its attractiveness. The attractiveness of an arm is determined by the context of the arm. Formally, for each arm $i\in[N]$, let $\alpha(x_{t,i})\in[0,1]$ be the attractiveness of arm $i$ at time $t$. Then the document-based model assumes that the score function of $x_{t,i}$ in the $k$-th position is defined as
$$h(x_{t,i}, k) = \alpha(x_{t,i})\,\mathbb 1(k\le K)\,. \tag{21}$$
Note that $h$ in Eq.(21) is bounded in $[0,1]$. Since a neural network is a universal approximator, we can utilize neural networks to estimate the score of arm $i$ in position $k$ with $f(x_{t,i}, k;\theta_{t-1})$. Note that for any $k\in[K]$, the score of an arm depends only on the attractiveness of the arm. Hence, our algorithms are directly applicable to the document-based model without any modification.

G.3. Position-based Model

In the document-based model, the score of an arm is invariant to its position within the super arm. In the position-based model, however, the score of a chosen arm varies depending on its position. Let $\chi : [K]\to[0,1]$ be a function that measures the quality of a position within the super arm. The position-based model assumes that the score function of a chosen arm associated with $x_{t,i}$ and located in the $k$-th position is defined as
$$h(x_{t,i}, k) = \alpha(x_{t,i})\,\chi(k)\,. \tag{22}$$
Note that the score of an arm can change as its position moves within the super arm. We can slightly modify our suggested algorithms to reflect this. First, we introduce a modified neural network $\dot f(x_{t,i}, k;\theta_{t-1})$ that estimates the score of each arm at every available position. By this, the action space of each round increases from $N$ to $NK$, and the regret bound only changes as much as the action space increases. Denote the gradient of $\dot f(x_{t,i}, k;\theta_{t-1})$ as $\dot g(x_{t,i}, k;\theta_{t-1})$. Furthermore, we replace the oracle with $\dot O_S\big(\{u_{t,i}(k) + e_t\}_{i\in[N],k\in[K]}\big)$, which takes the positions of the arms into account.
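As a concrete illustration of such a position-aware oracle (one arm per position, no arm reused), here is a brute-force sketch; the function name and the tiny score matrix are hypothetical, and a linear-programming or Hungarian-method solver (e.g. `scipy.optimize.linear_sum_assignment`) would replace the enumeration at scale:

```python
from itertools import permutations

def position_oracle(scores):
    """Exact oracle for the position-based model: assign one arm per
    position (each arm used at most once) maximizing the total score.

    scores[i][k] plays the role of u_{t,i}(k) + e_t for arm i, slot k.
    Brute force over ordered arm tuples; only for small N and K.
    """
    N, K = len(scores), len(scores[0])
    best, best_assign = float("-inf"), None
    for arms in permutations(range(N), K):  # arms[k] = arm placed in slot k
        total = sum(scores[i][k] for k, i in enumerate(arms))
        if total > best:
            best, best_assign = total, arms
    return best_assign

# Three arms, two slots: arm 0 is best in slot 0, arm 1 best in slot 1.
assert position_oracle([[9, 1], [1, 9], [2, 2]]) == (0, 1)
```

The same constraints make the exact assignment solvable in polynomial time, which is what the text below relies on.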
The oracle $\dot O_S$ chooses only one arm for each position, and an arm that has been chosen for a certain position cannot be chosen for another position. Since an optimization problem with these constraints can be solved with linear programming, $\dot O_S\big(\{u_{t,i}(k)+e_t\}_{i\in[N],k\in[K]}\big)$ can compute the exact optimum in polynomial time. The modified algorithm for the position-based model is described in Algorithm 6.

Algorithm 6 Combinatorial neural bandits for the position-based model
Initialize as in Algorithm 1
for $t = 1,\dots,T$ do
    Observe $\{x_{t,i}\}_{i\in[N]}$
    if Exploration == UCB then
        Compute $u_{t,i}(k) = \dot f(x_{t,i},k;\theta_{t-1}) + \gamma_{t-1}\big\|\dot g(x_{t,i},k;\theta_{t-1})/\sqrt m\big\|_{Z_{t-1}^{-1}}$ for $i\in[N]$, $k\in[K]$
        Let $S_t = \dot O_S\big(\{u_{t,i}(k)+e_t\}_{i\in[N],k\in[K]}\big)$
    else if Exploration == TS then
        Compute $\tilde\sigma^2_{t,i}(k) = \lambda\,\dot g(x_{t,i},k;\theta_{t-1})^\top Z_{t-1}^{-1}\dot g(x_{t,i},k;\theta_{t-1})/m$ for $i\in[N]$, $k\in[K]$
        Sample $\{\tilde v^{(j)}_{t,i}(k)\}_{j=1}^M$ independently from $N\big(\dot f(x_{t,i},k;\theta_{t-1}), \nu^2\tilde\sigma^2_{t,i}(k)\big)$ for $i\in[N]$, $k\in[K]$
        Compute $\tilde v_{t,i}(k) = \max_j \tilde v^{(j)}_{t,i}(k)$ for $i\in[N]$, $k\in[K]$
        Let $S_t = \dot O_S\big(\{\tilde v_{t,i}(k)+\epsilon\}_{i\in[N],k\in[K]}\big)$
    end if
    Play super arm $S_t$ and observe $\{v_{t,i}(k_i)\}_{i\in S_t}$
    (UCB) Update $Z_t = Z_{t-1} + \sum_{i\in S_t}\dot g(x_{t,i},k_i;\theta_{t-1})\dot g(x_{t,i},k_i;\theta_{t-1})^\top/m$
    (TS) Update $Z_t = Z_{t-1} + \sum_{i\in S_t}\dot g(x_{t,i},k_i;\theta_{t-1})\dot g(x_{t,i},k_i;\theta_{t-1})^\top/m$
    Update $\theta_t$ to minimize the loss in Eq.(4) using gradient descent with $\eta$ for $J$ times
end for

G.4. Cascade Model

In the cascade model, the agent suggests arms to a user one by one, in order of the positions of the arms within the super arm. The user scans the arms one by one until she selects an arm that she likes, which ends the suggestion procedure. Note that the suggestion procedure may end before the agent shows the user all the arms in the super arm; the user may also select no arm after she scans all the arms in the super arm.
Hence, unlike the previously mentioned models, where the agent receives all of the scores of the chosen arms, in the cascade model the agent only receives the scores of the arms observed by the user. Let us assume that the score the agent receives when the user selects an arm in the 1st position is 1. In case the same arm is in the $k$-th position, the score the agent receives when the user selects that arm must be less than 1. To reflect this feature, we consider a position discount factor $\psi_k\in[0,1]$, $k\le K$, that is multiplied with the attractiveness of the arm. The observed score of an arm is thus determined by its attractiveness and the position discount factor applied to it. The mechanism for estimating the attractiveness using a neural network is the same as the one for the semi-bandits; the only difference is that the agent only receives the discounted scores of the arms observed by the user. Suppose that the user selects the $F_t$-th arm. Then the agent observes the discounted scores for the first $F_t$ arms in $S_t$, and the update is based on the discounted scores $\psi_k v_{t,k}$, $k\le F_t$. An adjusted algorithm for the cascade model is described in Algorithm 7.

Algorithm 7 Combinatorial neural bandits for the cascade feedback model
Initialize as in Algorithm 1, $\{\psi_k\in[0,1]\}_{k\in[K]}$: position discount factors
for $t = 1,\dots,T$ do
    Observe $\{x_{t,i}\}_{i\in[N]}$
    if Exploration == UCB then
        Compute $u_{t,i} = f(x_{t,i};\theta_{t-1}) + \gamma_{t-1}\big\|g(x_{t,i};\theta_{t-1})/\sqrt m\big\|_{Z_{t-1}^{-1}}$ for $i\in[N]$
        Let $S_t = O_S\big(\{u_{t,i}+e_t\}_{i\in[N]}\big)$
    else if Exploration == TS then
        Compute $\tilde\sigma^2_{t,i} = \lambda\, g(x_{t,i};\theta_{t-1})^\top Z_{t-1}^{-1} g(x_{t,i};\theta_{t-1})/m$ for $i\in[N]$
        Sample $\{\tilde v^{(j)}_{t,i}\}_{j=1}^M$ independently from $N\big(f(x_{t,i};\theta_{t-1}), \nu^2\tilde\sigma^2_{t,i}\big)$ for $i\in[N]$
        Compute $\tilde v_{t,i} = \max_j \tilde v^{(j)}_{t,i}$ for $i\in[N]$
        Let $S_t = O_S\big(\{\tilde v_{t,i}+\epsilon\}_{i\in[N]}\big)$
    end if
    Play super arm $S_t$ and observe $F_t$, $\{\psi_k v_{t,k}\}_{k\in[F_t]}$
    (UCB) Update $Z_t = Z_{t-1} + \sum_{k\in[F_t]} g(x_{t,k};\theta_{t-1})g(x_{t,k};\theta_{t-1})^\top/m$
    (TS) Update $Z_t = Z_{t-1} + \sum_{k\in[F_t]} g(x_{t,k};\theta_{t-1})g(x_{t,k};\theta_{t-1})^\top/m$
    Update $\theta_t$ to minimize the loss in Eq.(4) using gradient descent with $\eta$ for $J$ times
end for
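A minimal simulator of the cascade feedback described above (illustrative only; the function, its inputs, and the Bernoulli-click assumption are ours, not the paper's — uniform draws are passed in explicitly to keep the example deterministic):

```python
def cascade_feedback(attractiveness, psi, uniforms):
    """Simulate cascade feedback for one super arm.

    attractiveness[k]: click probability of the arm in slot k,
    psi[k]: position discount factor in [0, 1],
    uniforms[k]: a uniform(0,1) draw deciding the click in slot k.
    Returns (F_t, observed discounted scores psi_k * v_k for k <= F_t).
    """
    scores = []
    for k, (alpha, u) in enumerate(zip(attractiveness, uniforms)):
        click = 1.0 if u < alpha else 0.0
        scores.append(psi[k] * click)   # the agent only sees discounted scores
        if click:                       # user stops at the first arm she likes
            return k + 1, scores
    return len(scores), scores          # user scanned everything, no click

# Slot 0 is never clicked, slot 1 always is: the user stops after slot 1,
# so the agent observes only the first F_t = 2 discounted scores.
F, s = cascade_feedback([0.0, 1.0, 0.5], [1.0, 0.8, 0.6], [0.3, 0.2, 0.9])
assert (F, s) == (2, [0.0, 0.8])
```

Only the first $F_t$ (discounted) scores would then enter the updates of $Z_t$ and $\theta_t$, matching the partial-observation structure above.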
In addition, in case we have no information about the position discount factors, we can treat the cascade model the same way as the position-based model.

H. Additional Related Work

As mentioned in Section 1, the proposed methods are the first neural network-based combinatorial bandit algorithms with regret guarantees. As for previous combinatorial TS algorithms, Wen et al. (2015) proposed a TS algorithm for contextual combinatorial bandits with semi-bandit feedback and a linear score function. However, the regret bound for that algorithm is analyzed only in the Bayesian setting (hence establishing the Bayesian regret), which is a weaker notion of regret and much easier to control in combinatorial action settings. To our knowledge, Oh & Iyengar (2019) was the first work to establish a worst-case regret bound for a variant of contextual combinatorial bandits, multinomial logit (MNL) contextual bandits, utilizing an optimistic sampling procedure similar to CN-TS. Yet, our proposed algorithm differs from Oh & Iyengar (2019) in that we sample directly from the score space rather than the parameter space, which avoids the computational cost of sampling high-dimensional network parameters. More importantly, Oh & Iyengar (2019) exploit the structure of the MNL choice feedback model to derive their regret bound, whereas we address more general semi-bandit feedback without any assumptions on the structure of the feedback model.

I. Additional Experiments

In Experiment 1, the linear combinatorial bandit algorithms perform worse than our proposed algorithms, even for the linear score function. One possible reason is that the neural network-based algorithms use a much larger number of parameters than the linear model-based algorithms, i.e., they are overparametrized for the problem setting. Overparametrized neural networks have been shown to have superior generalization performance; see Allen-Zhu et al. (2019b;a).
Note that the regret performance is about generalization to unseen data rather than fit to the existing data. In this aspect, an overparameterized neural network can show superior performance over the linear model. This is supported by Figure 3, in which we demonstrate the empirical performances of CN-TS and CombLinTS as the network width $m$ decreases. By decreasing $m$, the results of the neural network models and the linear models become more similar, i.e., the gap between the regrets reduces.

The point of Corollary 1 is that in Zhou et al. (2020), to bound the instantaneous regret, it is enough for the agent to choose only one optimistic action (see Lemma 5.3 in Zhou et al. (2020)), while in our case the agent has to choose an optimistic super arm in order to bound the instantaneous regret (see Eq.(8) in the proof of Theorem 1). However, in order to ensure the optimism of the chosen super arm, it is necessary to guarantee the optimism of all individual arms in the chosen super arm, which is represented in Corollary 1.

The following technical lemma bounds the sum of weighted norms, and is similar to Lemma 4.2 in Qin et al. (2014) and Lemma 5.4 in Zhou et al. (2020).

For example, $R(S_t, v_t)$ can be the quality of positions of a position-based click model (Lattimore & Szepesvari, 2020) or the expected revenue given by a multinomial logit (MNL) choice model (Oh & Iyengar, 2019), although the regret bound under the MNL choice model is not provided under the current theoretical result.

Figure 1. Cumulative regret of CN-UCB and CN-TS compared with algorithms based on linear models.

Figure 2. Experiment results of CN-UCB, CN-TS, and CN-TS(M=1) as context dimension d increases.

We denote by $R(S,\bar v^{1:M}_t+\epsilon)$ the reward under the sampled scores $\bar v^{1:M}_t$ and $\epsilon$. Also, we define $\bar S_t$ as the super arm induced by $\bar v_t\in V^{\mathrm{opt}}_t$ and $\epsilon$; similarly we can define $R(S,\bar v_t+\epsilon)$. Recall that $S_t = \mathrm{argmax}_S\, R(S, \tilde v_t+\epsilon)$. Then, for any $\bar v^{1:M}_t\in\bar V_t$, we have
$$\big(R(S^*_t, v^*_t) - R(S_t, \tilde v_t+\epsilon)\big)\,\mathbb 1(E_t) \le \Big(R(S^*_t, v^*_t) - \inf_{\bar v^{1:M}_t\in\bar V_t}\max_S R(S, \bar v^{1:M}_t+\epsilon)\Big)\,\mathbb 1(E_t)\,.$$
Note that we can decompose
$$R(S^*_t, v^*_t) - R(S_t, \tilde v_t+\epsilon) = \big(R(S^*_t, v^*_t) - R(S_t, \tilde v_t+\epsilon)\big)\,\mathbb 1(E_t) + \big(R(S^*_t, v^*_t) - R(S_t, \tilde v_t+\epsilon)\big)\,\mathbb 1(E^c_t)\,.$$
Note that a sufficient condition for ensuring the success of CN-TS is to show that the probability of the sampling being optimistic is high enough: Lemma 3 gives a lower bound on the probability that the reward induced by the sampled scores is larger than the reward induced by the expected scores, up to the approximation error.

Acknowledgements

This work was supported by the New Faculty Startup Fund from Seoul National University and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1C1C1006859, No. 2022R1A4A103057912, No. 2021M3E5D2A01024795).

References

Abeille, M. and Lazaric, A. Linear Thompson sampling revisited. In Artificial Intelligence and Statistics, pp. 176-184. PMLR, 2017.
Abramowitz, M. and Stegun, I. A. (eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York, 1964.
Agrawal, S. and Goyal, N. Thompson sampling for contextual bandits with linear payoffs. In International Conference on Machine Learning, pp. 127-135. PMLR, 2013.
Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019a.
Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization, 2019b.
Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322-332. PMLR, 2019.
Audibert, J.-Y., Bubeck, S., and Lugosi, G. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31-45, 2014.
Auer, P. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002.
Besson, L. and Kaufmann, E. What doubling tricks can and can't do for multi-armed bandits. arXiv preprint arXiv:1803.06971, 2018.
Cao, Y. and Gu, Q. Generalization bounds of stochastic gradient descent for wide and deep neural networks. In Advances in Neural Information Processing Systems, volume 32, pp. 10836-10846, 2019.
Chen, L., Xu, J., and Lu, Z. Contextual combinatorial multi-armed bandits with volatile arms and submodular reward. In Advances in Neural Information Processing Systems, volume 31, 2018.
Chu, W., Li, L., Reyzin, L., and Schapire, R. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 208-214, 2011.
Davis, J. M., Gallego, G., and Topaloglu, H. Assortment optimization under variants of the nested logit model. Operations Research, 62(2):250-273, 2014.
Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=S1eK3i09YQ.
Filippi, S., Cappe, O., Garivier, A., and Szepesvári, C. Parametric bandits: The generalized linear case. In Advances in Neural Information Processing Systems, pp. 586-594, 2010.
Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. Deep Learning, volume 1. MIT Press, Cambridge, 2016.
Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, volume 31, pp. 8571-8580, 2018.
Kveton, B., Szepesvari, C., Wen, Z., and Ashkan, A. Cascading bandits: Learning to rank in the cascade model. In International Conference on Machine Learning, pp. 767-776, 2015.
Lai, T. L. and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
Lattimore, T. and Szepesvari, C. Bandit Algorithms. Cambridge University Press, Cambridge, 2020.
LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436-444, 2015.
Li, L., Lu, Y., and Zhou, D. Provably optimal algorithms for generalized linear contextual bandits. In International Conference on Machine Learning, pp. 2071-2080, 2017.
Li, S. and Zhang, S. Online clustering of contextual cascading bandits. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
Li, S., Wang, B., Zhang, S., and Chen, W. Contextual combinatorial cascading bandits. In International Conference on Machine Learning, pp. 1245-1253. PMLR, 2016.
Li, S., Lattimore, T., and Szepesvári, C. Online learning to rank with features. In International Conference on Machine Learning, pp. 3856-3865. PMLR, 2019.
Nika, A., Elahi, S., and Tekin, C. Contextual combinatorial volatile multi-armed bandit with adaptive discretization. In International Conference on Artificial Intelligence and Statistics, pp. 1486-1496. PMLR, 2020.
Oh, M.-h. and Iyengar, G. Thompson sampling for multinomial logit contextual bandits. In Advances in Neural Information Processing Systems, pp. 3151-3161, 2019.
Qin, L., Chen, S., and Zhu, X. Contextual combinatorial bandit and its application on diversified online recommendation. In Proceedings of the 2014 SIAM International Conference on Data Mining, pp. 461-469. SIAM, 2014.
Rusmevichientong, P. and Tsitsiklis, J. N. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395-411, 2010.
Rusmevichientong, P., Shen, Z.-J. M., and Shmoys, D. B. Dynamic assortment optimization with a multinomial logit choice model and capacity constraint. Operations Research, 58(6):1666-1680, 2010.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Wen, Z., Kveton, B., and Ashkan, A. Efficient learning in large-scale combinatorial semi-bandits. In International Conference on Machine Learning, pp. 1113-1122, 2015.
Zhang, W., Zhou, D., Li, L., and Gu, Q. Neural Thompson sampling. In International Conference on Learning Representations (ICLR), 2021.
Zhou, D., Li, L., and Gu, Q. Neural contextual bandits with UCB-based exploration. In International Conference on Machine Learning, pp. 11492-11502. PMLR, 2020.
Zong, S., Ni, H., Sung, K., Ke, N. R., Wen, Z., and Kveton, B. Cascading bandits for large-scale recommendation problems. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, UAI'16, pp. 835-844, 2016.
For our analysis, we first define V_t, the set of concentrated samples, for which the reward induced by the sampled scores concentrates appropriately around the reward induced by the estimated scores. We also define the set of optimistic samples, V_t^opt, which coincides with V_t. Also, note the event E_t := E_t^σ ∩ E_t^μ. Since the event E_t holds with high probability, we can bound the summation of the second term on the right-hand side. Therefore, we need to bound the summation of I_3, where C_0 > 0 is a Lipschitz constant; the remaining bound then follows from Lemma 3.

In this section, we extend our regret analysis to the case when the agent only has access to an α-approximation oracle, O_S^α. First, we replace S_t with S_t^α = O_S^α(u_t + e_t) for CN-UCB (Algorithm 1) and S_t^α = O_S^α(ṽ_t + ϵ) for CN-TS (Algorithm 2). The total regret R(T) is replaced with an α-regret, obtained by substituting the corresponding notations in Appendix A.4. We split the α-regret into R_1^α(T) and R_2^α(T). By replacing S_t with S_t^α in Appendix B.2, we can get the α-regret bound of R_2^α(T), and we know that R_1^α(T) ≤ R_1(T). By combining the results, we can conclude that the α-regret bound of CN-TS is O(d̃ √(TK)).

F. When Time Horizon T Is Unknown

For Theorems 1 and 2, we assumed that T is known for the sake of a clear exposition of our proposed algorithms and their regret analysis. However, knowledge of T is not essential for either the algorithms or their analysis. With slight modifications, our proposed algorithms can be applied to settings where T is unknown. In this section, we propose variants of CN-UCB and CN-TS, namely CN-UCB with doubling and CN-TS with doubling, and show that their regret upper bounds are of the same order as those of CN-UCB and CN-TS up to logarithmic factors.

F.1. Algorithms

CN-UCB with doubling and CN-TS with doubling utilize a doubling technique (Besson & Kaufmann, 2018) in which the network size stays fixed during each epoch but is updated after the end of each epoch, whose length τ doubles that of the previous epoch. This way, even when T is unknown, the network size can be set adaptively over epochs. The algorithms first initialize the variables related to τ, in particular the hidden-layer width m_τ and the number of parameters of the neural network p(τ). In each round, after playing super arm S_t and observing the scores {v_{t,i}}_{i∈S_t}, CN-UCB with doubling and CN-TS with doubling call the Update algorithm. Until τ, the Update algorithm updates θ_t and Z_t or Z̄_t as if τ were the time horizon. If t reaches τ, the Update algorithm doubles the value of τ. After reinitializing the variables related to the doubled τ, which includes reconstructing the neural network to have a larger hidden-layer width m_τ, the algorithm updates all of the θ_{t'} and Z_{t'} or Z̄_{t'} for t' = 0, ..., t. The Update algorithm returns θ_t and Z_t or Z̄_t to CN-UCB with doubling or CN-TS with doubling. This process continues until t reaches T. Note that the computational complexity of each round of CN-UCB and CN-UCB with doubling depends heavily on how quickly they can compute the inverse of the Gram matrix Z, since Z ∈ M_p(R) and p depends on m.

Abbasi-Yadkori, Y., Pál, D., and Szepesvári, C. Improved algorithms for linear stochastic bandits. In Advances in Neural Information Processing Systems, pp. 2312-2320, 2011.
Abe, N. and Long, P. M. Associative reinforcement learning using linear probabilistic concepts. In International Conference on Machine Learning, pp. 3-11, 1999.

Figure: regret of CN-TS and CombLinTS with respect to the network width (m).
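The doubling-epoch schedule described for CN-UCB/CN-TS with doubling can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the starting epoch length and the width rule `width_for_epoch` are hypothetical placeholders standing in for the paper's choice of m_τ.

```python
def doubling_epochs(total_rounds, first_epoch=1):
    """Yield (epoch_start, epoch_length) pairs whose lengths double.

    The generator can be consumed lazily, so the schedule works even
    when the true horizon T is unknown in advance; here total_rounds
    is passed only to truncate the final epoch for demonstration."""
    start, length = 0, first_epoch
    while start < total_rounds:
        yield start, min(length, total_rounds - start)
        start += length
        length *= 2  # tau doubles after every epoch


def width_for_epoch(epoch_length):
    # Hypothetical placeholder: in the paper the hidden-layer width
    # m_tau is reset at each epoch as a function of the current tau.
    return 64 * epoch_length


schedule = list(doubling_epochs(100, first_epoch=4))
# epoch lengths 4, 8, 16, 32, 40 (last one truncated); they sum to 100
```

Because each epoch restarts estimation with a larger network, the per-epoch regret bounds can be summed over O(log T) epochs, which is how the doubling variants recover the known-horizon rates up to logarithmic factors.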
Author affiliations:
Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
Department of Networks and Cyber Security, Birmingham City University, Birmingham B5 5JU, UK
Computer Science Department, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
Department of Computer Science, Czech Technical University, 166 36 Prague 6, Czechia
Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
Lupovis Limited, Glasgow G2 2BA, UK

Citation: Ukwandu, E.; Ben-Farah, M.A.; Hindy, H.; Bures, M.; Atkinson, R.; Tachtatzis, C.; Andonovic, I.; Bellekens, X. Cyber-Security Challenges in Aviation Industry: A Review of Current and Future Trends. Information 2022, 13, 146. https://doi.org/10.3390/info13030146 (arXiv:2107.04910)
Published: 10 March 2022; Received: 10 February 2022; Accepted: 5 March 2022
Academic Editor: Sokratis Katsikas

Introduction

The ongoing trend of increasing the level of integration of Information and Communication Technology (ICT) tools into mechanical devices in routine use within the aviation industry has raised concerns regarding the resilience of current cyber-security protection frameworks. Consideration of the needs of the sector in terms of cyber-security compliance has thus emerged as a further challenge in the evolution of the aviation industry through the adoption of smart airports and e-enabled aircraft infrastructures [1]. The aviation industry holds a strategic global position as the gateway between nations. The resilience of the infrastructures in support of its operational integrity is vital, as minor errors or oversights can result in a range of significant damages and losses, e.g., fatalities, loss or exposure of stakeholder, staff and customer personally identifiable information, and theft of credentials, intellectual property and intelligence.
There is clear evidence, as shown in Sections 2.3 and 3, that major threat actors are collaborating with state actors to acquire intellectual property and intelligence in order to advance their domestic aerospace capabilities, as well as to monitor, infiltrate and subvert other nations' capabilities. There is therefore an industry imperative to define and implement commensurate cyber-defence strategies that protect against malicious threats endangering the operational integrity of a key industry. Monteagudo [2] recommends the industry adopt micro-segmentation strategies in cyber-defence design and implementation, dividing aviation infrastructures into multiple micro-islands, each governed by separate access privileges; the approach targets the containment of any compromise or data breach to a specific segment. Bellekens et al. [3] propose a deception solution for the early detection of breaches in critical infrastructures, as current techniques are ineffective. Threats to the civil aviation industry continue to proliferate, focused on stealing information for both political and financial gains, with some malicious acts resulting in long-term business disruptions [4].

The review explores the cyber-security landscape within the civil aviation industry only; military flight operations are not considered. Private and commercial areas of the industry are reviewed with consideration of the entire ecosystem, extending to the whole system of avionics, air-traffic controls, airlines, and airports. The goal is to provide a critical assessment of the current trends and practices and, based on the results of the analyses, predict future trends as the civil aviation industry continues to increase its use of Information Technology (IT) technologies, such as Internet of Things (IoT) devices, machine learning, cloud storage and cloud computing, to optimise business operations. The remainder of the review is organised as follows.
Section 2 captures the range of reported cyber-threats, the threat actors and their motivations drawn from the published literature. Section 3 focuses on the documented cyber-attacks of the last 20 years, while Section 4 provides a mapping of the attack surfaces a malicious attacker can exploit at the airport or in aircraft systems. Section 5 contains insight on the steps to mitigate cyber-security challenges within the civil aviation industry. Section 6 describes the future evolution of the civil aviation sector as it relates to smart airports and e-enabled aircraft, laying the ground for the prediction of the concomitant changes in threat dynamics and their implications for the industry. Conclusions are drawn in Section 7, with Section 8 providing open research challenges and opportunities in civil aviation cyber-security.

A Systematic Literature Review

Section 2 presents a review of the available literature on cyber-attack incidents, the threat actors and their motivations within the civil aviation industry.

Review Methodology

The review was executed following the process of systematic analysis and methodology defined by Okoli and Schabram in [5] and Okoli in [6], guiding the selection and extraction of relevant information from the literature. The objective of the analysis is to map reported cyber-security incidents within private and commercial areas of the industry over the last 20 years (2001-2021), with consideration of the whole system of avionics, air-traffic controls, airlines, and airports.

Aim and Objectives

The aim is to identify and analyse reported cyber-security incidents across the aviation sector over the last 20 years (2001-2021) to benchmark the most common threat actors, their motivations, the classes of attacks and the aviation infrastructure subject to most attacks. Insights on the current scenario lay the foundation with which to predict future cyber-security practices.
The specific objectives are as follows:
• Survey of cyber-attack incidents in the civil aviation sector over the last 20 years;
• Analysis and review of state-of-the-art cyber-attack trends, threat actors and their motivations;
• Identification of the most common types of attacks and targeted infrastructures;
• Providing cyber-security professionals with information on the current and future trends of cyber-attack incidents in the context of the evolution of the civil aviation sector.

Classification and Research Criteria

A survey of peer-reviewed papers showed that a limited number of papers have been published with regard to cyber-attack incidents in the civil aviation sector. As an example, only 1 publication was found in the Scopus database when searched using a combination of the keywords 'cyber AND incident AND aviation AND industry', and a total of 29 publications were found when searched using 'cyber AND attack AND aviation AND industry', of which 27 were journal articles and 2 were conference proceedings articles published in 2021. The trend in the number of relevant published papers shows a steady increase over the recent past: five journal and two conference proceedings articles were published in 2020; three journal and three conference proceedings articles in 2019; one journal and two conference proceedings articles in 2018; only three conference proceedings articles in 2017; one journal and one conference proceedings article in each of 2016, 2015 and 2013; and, finally, there was one publication as part of a conference proceeding in 2012, with none in earlier years (Table 1). Worth noting is that no article focused on articulating cyber-attack incidents; rather, they propose different cyber-security approaches to securing the aviation infrastructure of both internal and external systems in the sector.
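For reproducibility, the Boolean keyword concatenation used in the database search can be expressed programmatically. The helper below is a hypothetical illustration of how such query strings are assembled, not a tool used by the authors.

```python
def boolean_query(keywords, operator="AND"):
    """Join search keywords into a single Boolean query string of the
    form used for the Scopus searches in this review."""
    if not keywords:
        raise ValueError("at least one keyword is required")
    return f" {operator} ".join(keywords)


incident_query = boolean_query(["cyber", "incident", "aviation", "industry"])
attack_query = boolean_query(["cyber", "attack", "aviation", "industry"])
# incident_query == "cyber AND incident AND aviation AND industry"
```

Swapping the `operator` argument to "OR" would broaden rather than narrow the result set, which is one way to probe how sensitive the reported publication counts are to the chosen query.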
Here, an extensive search was employed to surface relevant information from online repositories, web-based announcements, online articles and reports on websites of both primary and third-party organisations operating in aviation sectors. The review was supplemented by web-based aviation cyber-security reports, newspapers and news magazines, status reports, regulations and related information from regulatory agencies. The relevant incidents were tabulated based on the class of attack and according to the cyber-security triad of Confidentiality, Integrity and Availability. The source, year, location and type of attack, a more detailed description of the attack with the people affected, and the possible cost implications for each incident were recorded. Furthermore, the review considered only cyber-attacks over the last 20 years, namely from 2001 to 2021, as the only other documented incident within the industry between 1997-2014 was the theft of an MSc thesis.

The selected search term (the concatenation of keywords 'cyber AND incident AND aviation AND industry') will exclude relevant literature that did not use or cite any of the keywords. Moreover, as the search is English-based, it excludes any potential non-English-language papers. The authors also acknowledge the possibility of excluding literature in databases with which their respective institutions have no established subscription.

Cyber-Threats and Automation in Civil Aviation Industry

Given the importance of cyber-technologies to the operational integrity of the aviation industry, the sector has relied on the International Air Transport Association (IATA) [7] to provide guidance to improve and update cyber-security regulations, standards, and principles for the end-to-end ecosystem comprising the whole system of avionics, air-traffic controls, airlines, and airports [2,8].
The business goals range from improving on-the-ground/air-borne/in-space operations to customer services such as, but not limited to, ticket bookings, in-flight entertainment systems, flight check-in and check-out, security screening of passengers and use of aircraft cabin wireless Internet services [8,9]. It is also evident that the use of a new suite of technologies and tools has yielded significant positive impacts on aircraft control systems, enhancing the quality of aviation operations and increasing safety and performance [2,9-12]. The trend, however, has concomitant negative impacts in terms of cyber-security through an increasing number of vulnerabilities, gateways which may result in breaches with potential losses in terms of human life and business continuity [2,12,13].

In 2018, Corretjer [14] undertook an analysis of current cyber-security practices within the United States aviation industry (civil and military) and recorded the strategies of both government and private entities to protect the industry against cyber-attacks. The conclusions, although commending the effort to date of the Federal Airport Authority (FAA) and the private sector to manage the proliferation of cyber-attacks, recommend the need to intensify the implementation of proactive measures throughout the design, acquisition, operation and maintenance of aviation navigation systems. Kagalwalla and Churi [15] stressed the increased challenges in provisioning cyber-security in aviation as a consequence of the increase in the deployment of modern ICT technologies, such as IoT, machine learning and cloud storage/computing, with their concomitant inherent vulnerabilities. Moreover, Duchamp, Bayram and Korhani [1] highlight that the increase in the number of travellers, the building of new modern airports, and complexities in new aircraft have also stimulated an increase in cyber-attacks in civil aviation.
ICAO [4] believes that the increased reliance on the integrity and confidentiality of data for the optimisation of day-to-day business transactions has in turn increased the risk of cyber-incidents. Increased levels of automation, a central spine within the evolution of next-generation systems, result in the proliferation of attack surfaces, with threat actors targeting business disruptions and theft of information for both political and financial gains. Lehto [16] argues that the dynamic between advancements in cyber-attack tools and methods, coupled with increased exposure and the motivation of the attackers, has created the current trend in cyber-attacks impacting airlines, aircraft manufacturers and authorities. Cyber Risk International [17] contend that the rise in cyber-security challenges is the result of a combination of digital transformation, higher levels of inter-connectivity, segmentation, and complexity, all recent responses by the industry to service the surge in global travel.

In summary, the main conclusions reached are as follows: the heavy reliance on IT facilities to maintain quality of services has resulted in higher levels of exposure to cyber-attacks; multiple entry and exit points in the industry create new vulnerabilities; and legacy IT issues and fragmentation significantly exacerbate the challenges, as these systems were not designed to cope with cyber-crime [2]. Kagalwalla and Churi [15] cite the lack of resources, funds and skilled staff as part of the spectrum of challenges, as are insider threats and the procurement of modern-day operational technologies such as Supervisory Control and Data Acquisition (SCADA) and Industrial Control Systems (ICS). Building a strong security culture and implementing meaningful prevention and proactive measures are the solutions offered.

Threat Actors and Their Motivations

Fireeye Incorporated [18] reported their findings on the major threat actors in the aerospace industry and the motives behind their attacks.
The main finding of the assessment was that the most prevalent industry cyber-threats arise from Advanced Persistent Threat (APT) groups that work in collaboration with state actors to acquire intellectual property and intelligence in order to advance their domestic aerospace capabilities, as well as to monitor, infiltrate and subvert other sovereign nations' capabilities. APTs were executed by cyber-espionage groups that specialise in targeting information and security assets of critical economic importance to nations and large corporations [19]. These groups are highly skilled, very knowledgeable, and experienced in carrying out malicious acts, with high degrees of expertise in masking their attack paths, a mix of characteristics that render them elusive, high-profile and able to inflict significant damage. Evidence was provided that some groups, operating in partnership with particular state actors, use the stolen assets to develop cyber-security countermeasures as well as technologies and tools for sale on the dark web. According to Fireeye's threat intelligence system, at least 24 APT incidents were identified that compromised different aerospace organisations, with stolen data types ranging from budget information, business communications, equipment maintenance records and specifications. Other data include organisational charts and company directories, personally identifiable information, product designs, product blueprints, production processes and proprietary product or service information, research reports, safety procedures, system log files and testing results and reports, potentially enabling a spectrum of damaging consequences.
Kessler and Craiger [20] categorised the threat actors according to their motivations: cyber-criminals, whose activities are responsible for 450 billion dollars of annual loss to the global economy; cyber-activists/hacktivists, whose concern is the philosophy, politics and non-monetary goals of the discipline; cyber-spies, motivated by financial, industrial, political and diplomatic espionage; and cyber-terrorists, driven by political, religious, ideological or social violence. Attackers supported by a nation-state in order to advance the latter's strategic goals are classified as cyber-warriors. Abeyratne [21] notes that threat actors are motivated by the ability to cause business disruption and theft of information for political as well as financial gains.

Recent reports reinforce the likelihood of a significant rise in cyber-threats as the volume of global passengers rises, with embedded systems being deployed in response in order to sustain the quality of services. The integration of hardware and software to increase the efficiency of operations through increased levels of automation presents a more extensive attack surface, further stimulating the motivation of threat actors. It is therefore timely to accentuate the significant challenges facing the civil aviation industry in the provision of cyber-security as the number and classes of cyber-threats proliferate. The growing degree of threat should be addressed as a matter of urgency through research and innovation in proactive approaches within cyber-security-by-design tools that mitigate the risks and dissuade malicious activity.

Documented Cyber-Attacks in Aviation Industry (2001-2021)

The ever-increasing reliance on data-driven processes to increase the efficiency of business practices and the quality of life for citizens brings attendant risks and challenges in providing effective cyber-security protection [22].
It is clear that the integration of technologies has also increased the safety and efficiency of air transport systems. However, higher levels of human migration and hyper-connectivity gate a cascading impact, as a cyber-incident in one airport translates into a transnational problem with social and economic consequences [1]. It is therefore incumbent on the industry to be proactive in providing robust mitigation for any class of emergent attack. In this context, Table 2 presents a review of documented cyber-threats and attacks in the civil aviation industry over the last 20 years (2001-2021). The review by Viveros [23], which covered the period 1997-2014, is not exhaustive, as well as being outdated considering the evolutionary progression of cyber-security incidents in recent times. Although the presented mapping has been carried out diligently, the authors acknowledge the possibility that some cyber-attacks in the civil aviation industry within the period under review may have been omitted, as some incidents may not have been made public.

Analysis and Critical Reviews of Cyber-Attacks in the Civil Aviation Industry

Of all attacks studied, 71% focused on the theft of login details, such as administrative passwords, and malicious hacking to gain unauthorised access to the IT infrastructure (Figure 1). Denial-of-service attacks, such as Distributed Denial of Service (DDoS), which compromise data availability, rank second at 25%, followed by attacks that target corrupting the integrity of files, either by intercepting them while in transit or at rest, which correspond to 4% of all attacks. This evidence adds credence to the assertions presented in Section 2.3, which posit that the major motivation of threat actors is the theft of intellectual property and intelligence.
The assessment of cyber-attack by type is presented in Figure 2, the results of which support the evidence presented in Figure 1, showing that malicious hacking activities top the list of the type of cyber-attacks at 26%, the aim being to gain unauthorised access using known malicious password cracking techniques, for example, brute force or dictionary attacks. Data breach and ransomware attacks are second at 14% each, while attacks related to phishing and malware follow at 11% each. Cyber-incidents classed as human error, bot attacks, worms and DDoS are the most rare, at 4% each. Figure 3 shows that most cyber-attacks within the aviation industry occur in North America, with 11 out of 26 recorded incidents in the United States of America (USA) and 1 only in Canada. Mazareanu [47] suggests that the relatively large number of incidents may not be unconnected to the large number of airports in the USA, as in 2019, USA was home to 5080 public and 14,556 private airports. Europe is second with a 44% rate of attack incidents, with Britain topping the list of countries. Nations in Asia come third at 8%, with no known cyber-attacks recorded in airports in Africa. Table 3 captures the number of individuals impacted, the number of times airports were shut down, and the number of days aircraft were grounded owing to cyber-incidents. Incidents during 2018 remain the most numerous, representing the highest rate of cyber-attacks in the history of the aviation industry, with 94,500,000 people affected and about 5 continuous days of aircraft being grounded. The crypto-mining malware discovered by Cyberbit through its Endpoint Detection and Response (EDR) software in 2019 was, however, the most worrying incident, an installation of malicious software that infected more than 50% of the European airport workstations. 
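The percentage breakdowns reported in this analysis amount to frequency tallies over the incident table. A minimal sketch of that computation follows; the small incident list is a hypothetical stand-in for the records of Table 2, chosen so the tallies reproduce the 71/25/4 split reported for the Confidentiality, Integrity and Availability classes.

```python
from collections import Counter

# Hypothetical stand-in for the incident records of Table 2: each
# entry is the CIA property the attack primarily compromised.
incidents = (["confidentiality"] * 17
             + ["availability"] * 6
             + ["integrity"] * 1)


def percentage_breakdown(labels):
    """Return {label: share of all incidents}, rounded to whole percent."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: round(100 * n / total) for label, n in counts.items()}


breakdown = percentage_breakdown(incidents)
# {"confidentiality": 71, "availability": 25, "integrity": 4}
```

The same helper applied to attack-type labels (hacking, data breach, ransomware, and so on) would yield the Figure 2 distribution.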
A quantification of losses owing to cyber-security breaches is hindered by the lack of transparency in record keeping, documentation and publication of relevant incidents for public knowledge. The monetary value of losses paid by the industry due to cyber-crime is never publicised nor documented, especially the level of compensation to victims of these attacks, as well as that paid as ransom during ransomware attacks. Other records not disclosed are the number of shutdowns suffered by the attacked airports, as well as the number of lost flight hours as a consequence of cyber-incidents.

Cyber-Attack Surfaces and Vulnerabilities in the Civil Aviation Industry

Paganini [48] attests that only attackers with a broad understanding of how an aircraft or aviation system functions can successfully disrupt normal operations, citing that an attack on an entire aircraft or aviation system is non-trivial. Haass, Sampigethaya and Capezzuto [8] highlighted that technologies such as Wireless Fidelity (WiFi), Internet, IoT, Global Positioning System (GPS), open-source systems, virtualisation and cloud computing services have been central in the optimisation of aviation operations, reducing costs and response times through enhanced inter-operability. Integrated systems, however, can be targeted remotely due to their inherent vulnerabilities, an assertion shared by [2,49] and Lykou et al. [9]. Lykou et al. [9] also added that the practice of Bring Your Own Device (BYOD) by airport customers, travellers and employees creates a rich attack surface. As an example, the work in [50] reports an attacker's access to the in-flight entertainment system by simply attaching a Cat6 cable with a modified connector to his laptop; other aeroplane network systems could also be commandeered, as confirmed by Freiherr in [51].
Efe, Cavlan and Tuzlupinr [52] postulate that the increase in the number of operational aircraft, coupled with the innovation driving the development of smaller and more sustainable air vehicles, renders air communication protocols a high-profile target for cyber-attacks. Duchamp, Bayram and Korhani [1], Kessler et al. [20] and Abeyratne [21] are of the view that the reliance on computer-based IT systems in the day-to-day management of the industry, which has enabled improvements in the sophistication of air navigation, on-board aircraft control and communication systems, increases the cyber-attack surface. Furthermore, airport ground systems, which include flight information, security screening and day-to-day data management systems, are also identified as targets. An aggregation of the above targets comprises the spectrum of feasible attack surfaces in the Civil Aviation Industry (CAI), with associated vulnerabilities.

Santamarta [53] discovered security flaws in Inmarsat and Iridium Satellite Communication (SATCOM) terminals in 2014, infrastructure in routine use within the aviation industry. Researchers concluded that malicious attackers have the potential to exploit the vulnerabilities inherent in the design of the system through back-doors; the exploitable weaknesses were hardcoded credentials, an insecure protocol and weak encryption algorithms. Biesecker [54] reported in 2017 that a team of government, industry and academic researchers successfully hacked into a legacy Boeing 757 commercial aircraft, remotely, in a non-laboratory setting, by accessing its systems through radio frequency communications.

Aerospace and Avionic Systems

Aerospace systems have been subject to increasing degrees of software and hardware integration, implemented through embedded-computing technologies. As a result, the system is plagued with software vulnerabilities, as ensuring that embedded systems are free from security weaknesses is difficult, as explored by Dessiatnikof et al.
in [55] and Papp et al. [56]. In [55], the researchers further assert that attacks on aerospace systems can originate from the lower layers, such as the Operating System (OS) kernel, protection mechanisms and context switching, as it is difficult, even when formal verification methods are applied, to prove an absence of vulnerabilities within embedded systems. One of the principal conclusions arising from their findings is that attacks against aerospace computer systems can be categorised based on the attacker's skills and aims; the aim is either to corrupt the computing system's core functions or its fault-tolerance mechanisms, such as error detection and recovery systems.

An avionics system provides critical support to crew members and pilots for the safe operation of an aircraft, as it provides weather information, positioning data and communications [57]. Avionics is defined as the combination of aviation with electronics, consisting of embedded systems in aircraft design, development and operation [58]. Avionic systems gather data, such as speed, direction and air temperature, through external sensors and route appropriate data to other components of the aircraft using an avionic network [59]. In recent times, in a bid to leverage the lower cost of Commercial-Off-The-Shelf (COTS) components and software technologies to provision increased bandwidth and reduce cost, Ethernet networks such as Avionics Full DupleX Switched Ethernet (AFDX), as well as an IEEE 802.11 protocol-based Wireless Flight Management System (WFMS), have been used in avionic networks. Wired avionic communications provide a more secure network with a high degree of reliability and safety, as it is difficult for malicious users to access them and inject false data [60,61]. On the other hand, the Avionics Wireless Network (AWN) brings new challenges related to assurance, reliability and security [62,63].
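Integrity protection on wireless avionic links of the kind discussed above generally rests on message authentication, so that injected or altered frames are rejected by the receiver. The following is a generic, illustrative sketch only: an HMAC over a pre-shared key is a stock technique, not a mechanism specified for AWN, and the key and payload here are invented.

```python
import hmac
import hashlib

KEY = b"shared-link-key"  # illustrative pre-shared key, not a real provisioning scheme

def seal(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify integrity."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def accept(frame: bytes) -> bool:
    """Recompute the tag over the payload; reject frames whose payload was altered."""
    payload, tag = frame[:-32], frame[-32:]
    return hmac.compare_digest(tag, hmac.new(KEY, payload, hashlib.sha256).digest())

frame = seal(b"ALT=10500;HDG=270")
print(accept(frame))                      # True: genuine frame
print(accept(b"ALT=99999;" + frame[9:]))  # False: injected data is rejected
```

`hmac.compare_digest` is used rather than `==` so the tag comparison does not leak timing information.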
Aircraft avionics not only provide on-board passenger entertainment, but also enable the control of flight functions, navigation, guidance, communications, system operation and monitoring. The high level of integration creates cyber-security concerns, for instance, where Voice-over-the-Radio (VoR) communications are used both with pilots and controllers. The major disadvantages of VoR are the time delay in receiving the signal, especially in the case of multiple communications, and the corruption of the signal or ambiguity in understanding between controller and pilot due to noise. The Controller Pilot Data Link (CPDLC), however, is digital and thus more robust to impairments. The air carrier flight operations centres are synchronised with the flight deck to receive the same signal at the same time, allowing maximum risk awareness and informing the optimum decisions. In recent times, the aviation community has focused on creating a modernised National Airspace System (NAS) underpinned by a new communication system able to improve the interaction between the aircraft and the ground system. More detail on the attack surfaces across different aerospace and avionic components is provided in the following sub-subsections.

Aircraft Communications Addressing and Reporting System (ACARS)

Aeronautical Radio Incorporated (ARINC) introduced the ACARS data link protocol to reduce crew workload and improve data integrity. ACARS is an ARINC 618-based air-to-ground protocol that transfers data between on-board avionics systems and ground-based ACARS networks [64]. The ACARS system consists of a Control Display Unit (CDU) and an ACARS Management Unit (MU); the MU sends and receives digital messages from the ground using existing very high frequency (VHF) radios; on the ground, the system, a network of radio transceivers, receives and transmits data link messages and routes them to various aircraft on the network.
Smith et al. stated in [65,66] that the current use of ACARS by stakeholders extends beyond its original application, serving as flight trackers and the crew automated timekeeping system. The works in [65,66] demonstrate how current ACARS usage systematically breaches location privacy; the authors of [65] showed how sensitive information transmitted over an ACARS wireless channel can lead to a privacy breach for users, supporting the known fact that ACARS messages are susceptible to eavesdropping attacks. The article in [65] concluded by proposing a privacy framework, and in [66] the use of encryption and policy measures was recommended to arrest known eavesdropping attacks on the communication channel.

Automatic Dependent Surveillance-Broadcast (ADS-B)

Aircraft automatically transmit (ADS-B Out) and/or receive (ADS-B In) identification and positional data in a broadcast mode through a data link using Automatic Dependent Surveillance-Broadcast (ADS-B), improving the safety and capacity of airport surveillance and thus enhancing situational awareness of airborne and ground surveillance in airports [67]. Ali et al. [68] state that ADS-B Out supports a range of ground applications, including Air Traffic Control (ATC) surveillance in both radar and non-radar airspace over the airport, as well as enabling enhanced surveillance applications through links to aircraft in order to receive ADS-B Out messages from other aircraft within their coverage areas (ADS-B In). The integrity and availability of the ADS-B system are paramount as a result of its role in supporting key ground and airborne applications [69]. Furthermore, Manesh and Kaabouch in [70] stated that ADS-B employing global satellite navigation systems generates precise airspace mappings for air traffic management. Thus, the security of ADS-B has become a major concern, as the system broadcasts detailed information about aircraft, their positions, velocities and other data over unencrypted data links.
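The concern is concrete because ADS-B 1090 MHz Extended Squitter frames are broadcast in the clear: any receiver can read an aircraft's identity without any key material. As an illustrative sketch, the 5-bit downlink format and the 24-bit ICAO address can be read straight out of the raw bits of a frame (the sample frame below is a commonly published test message, not taken from the works surveyed here):

```python
def decode_header(hex_frame: str):
    """Parse the 5-bit downlink format and 24-bit ICAO address
    from a raw 112-bit ADS-B Extended Squitter frame."""
    raw = bytes.fromhex(hex_frame)
    df = raw[0] >> 3               # downlink format: 17 for ADS-B
    icao = raw[1:4].hex().upper()  # aircraft address, sent in the clear
    return df, icao

# A commonly published sample frame; no key is needed to read it.
df, icao = decode_header("8D4840D6202CC371C32CE0576098")
print(df, icao)  # 17 4840D6
```

A full decoder would extract position and velocity the same way, which is precisely why eavesdropping, spoofing and message injection are feasible on this link.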
Tabassum [71] analysed the performance of ADS-B data received from Grand Forks International Airport. The data were in raw and archived Global Data Link (GDL-90) format. GDL-90 is designed to transmit, receive and decode ADS-B messages through an on-board data link by combining GPS satellite navigation with data link communications. The aim was to detect anomalies in the data and, in turn, quantify the associated risk. In the course of the research, dropouts, low-confidence data, message loss, data jumps and altitude discrepancies were identified as anomalies, but the focus was on two of them: dropouts and altitude deviations. The conclusion drawn was that all failures relating to these anomalies have the potential of affecting ATC operation, either from an airspace perspective, such as dropouts and low-confidence data, or from an aircraft perspective, such as data jumps, partial message loss and altitude discrepancies. All are surfaces which an attacker can leverage to execute attacks such as eavesdropping, jamming, message injection, deletion and modification [70,72].

Electronic Flight Bag

The Electronic Flight Bag (EFB) displays digital documentation, such as navigational charts, operations manuals and airplane checklists, for use by the flight crew. It can also be used by crew members to perform basic flight planning calculations. Advanced EFBs now perform many complex flight-planning tasks and are integrated into flight management systems, alongside other avionic systems, to display the real-time position of an aircraft on navigational charts together with weather information [57]. Wolf, Minzlaff and Moser [73] assert that EFBs are valuable as a replacement of the traditional paper references carried on-board as part of the flight management system, yielding added benefits by reducing weight.
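The "basic flight planning calculations" an EFB performs are largely arithmetic cross-checks. A minimal, hypothetical sketch (the figures and the 500 kg alert threshold are invented for illustration and are not from any certified implementation) of the kind of loadsheet/flight-plan consistency check whose absence allowed the 1606 kg take-off mass discrepancy in the 2021 Birmingham incident listed in Table 2:

```python
def mass_discrepancy_kg(loadsheet: dict, flight_plan_tom_kg: float) -> float:
    """Difference between the loadsheet total and the planned take-off mass."""
    total = sum(loadsheet.values())
    return total - flight_plan_tom_kg

# Hypothetical figures for illustration only.
loadsheet = {"empty_mass": 42000, "fuel": 8000, "payload": 9606}
delta = mass_discrepancy_kg(loadsheet, flight_plan_tom_kg=58000)
print(delta)  # 1606: well outside tolerance, so the crew should be alerted
assert abs(delta) > 500  # assumed alert threshold of 500 kg
```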
Advanced EFBs integrated into flight management systems, in contrast with the traditional paper-based references that were stand-alone, now present a new attack surface; for example, a malware-infected EFB can gate denial-of-service attacks to other connected on-board systems [57,73-75]. Table 4 summarises the range of cyber-attack surfaces identified within the civil aviation industry and recommended ways to mitigate them, with the columns Class, Ref, Component, Mitigation and Description:

C,I | [53] | SATCOM terminals | Mitigation: consistent patching and software updates; phasing out existing legacy encryption as soon as practicable and following current recommendations on the use of cryptographic algorithms and network protocols. | Description: SATCOM terminals can be exploited through design flaws in areas such as hardcoded credentials, an insecure protocol and weak encryption algorithms.
C,I | [55,56] | Aerospace systems | Mitigation: consistent patching of the OS; phasing out existing legacy encryption as soon as practicable and following current recommendations on the use of cryptographic algorithms. | Description: attackers, depending on skill level, can exploit issues with the integration of the OS in embedded systems, such as the OS kernel, context switching and protection mechanisms.
C,I | [65,66] | ACARS | Mitigation: phasing out existing legacy encryption as soon as practicable and following current recommendations on the use of cryptographic algorithms and established policy measures. | Description: the ACARS communication channel is susceptible to eavesdropping and privacy breaches.
C,I | [71] | ADS-B | Mitigation: phasing out existing legacy encryption as soon as practicable and following current recommendations on the use of cryptographic algorithms. | Description: the ADS-B communication channel is prone to eavesdropping, jamming attacks, message injection, deletion and modification.
C,I | [62,63] | AWN | Mitigation: phasing out existing legacy encryption as soon as practicable and following current recommendations on the use of cryptographic algorithms. | Description: the Avionics Wireless Network communication channel is prone to data integrity problems such as data assurance, reliability and security.

Legend: C = Confidentiality, I = Integrity, A = Availability.

Mitigation of Cyber-Security Challenges within the Civil Aviation Industry

The mapping of the range of cyber-attacks within the civil aviation industry reveals that phishing and network attacks, such as eavesdropping, DoS, man-in-the-middle and spoofing attacks, predominate [76]. Distributed Denial-of-Service (DDoS) and DoS attacks on network assets at the airport, most notably Vulnerability Bandwidth Depletion DDoS Attacks (VBDDA), could be mitigated, according to Ugwoke et al. [77], by the proposed embedded Stateful Packet Inspection (SPI) based on the OpenFlow Application Centric Infrastructure (OACI). The focus was to mitigate attacks on Airport Information Resource Management Systems (AIRMS), an enterprise cloud-based resource management system used in some airports. Delain et al. [78] assume a different position on DDoS prevention, adopting volumetric protection through the provision of an alternative secondary Internet connection, as well as deploying high-performance hardware devices. The latter monitor logging activities and traffic continuously to improve the efficiency of the protection mechanism. Clark and Hakim [79], Martellini [80] and Singer and Friedman [81] propose the use of airport intelligence classification to protect airport assets and infrastructure from cyber-attacks, the method being classified according to good technical practice for high-level security issues. In practice, the approach is founded on a good cyber-hygiene culture, involving regular system and anti-virus updates, cyber-education for new employees, regular data backup and password management. The use of encoding was posited by Efe et al. [52] as a measure to prevent cyber-attacks on ADS-B data used for airborne and ground surveillance in airports.
The use of a random blurring technique on aircraft data from ADS-B, within permissible error bounds so as not to impair the operational integrity of Air Traffic Control (ATC), is also proposed as a means of limiting and monitoring the level of interference of Unmanned Aerial Vehicles (UAVs) with ADS-B data using aircraft information at the airport.

The Future Civil Aviation Industry and Its Cyber-Security Challenges

The concept of 'smartness' within the civil aviation industry has its root in the relatively recent deployments embodying the digitalisation of the industry, such as the integration of IoT-enabled devices and sensors into physical systems and the use of blockchain, AI, cloud and big data to sustain the quality of service delivery. The business goal is to provision optimal services, ensuring an enhanced customer experience in a reliable and sustainable manner by targeting the optimisation of growth, operational efficiency, safety and security [10]. The migration to increasing levels of automation through the integration of operational systems spawns new attack surfaces, which, in turn, mandates the revision of existing cyber-security implementations, assessment of the ramifications of new evolving threats, and updating of both the risk scenario analysis and resilience measures.

Smart Airports

In addition to the technologies cited under the integrated digital transformation evolution within the airport eco-system, Zamorano et al. in [82] have highlighted other technologies, such as Radio Frequency Identification (RFID), geolocation, immersive realities, biometric systems and robotics, as core elements within next-generation Smart Airport environments. Koroniotis et al. in [83] are of the view that advances in IoT device integration within the aviation sector infrastructures alone have given rise to the emergence of the Smart Airport.
The objective is to deliver an excellent customer experience with improved efficiency in daily operations, enhancing the robustness, efficiency and control of service delivery [83]. The acquisition of customer data from interactions with every 'thing' within the airport in real time, as well as its subsequent analysis to generate passenger profiles, is a proven route to gating ancillary revenues [84]. In essence, the Smart Airport is a data-rich environment, equipped with a range of sensors, actuators and other embedded devices that provide customers with a user interface to interact with cyber-physical devices across the environment. Lykou et al. [10] categorised the scope of threats against IoT infrastructures and applications within smart airports into the following: network and communication attacks, malicious software and tampering with airport smart devices. The scenario analysis of likely malicious attacks also included the misuse of authorisation, social engineering and phishing, with consideration of smart applications, mitigating actions and resilience measures. Furthermore, Koroniotis et al. in [83] postulate that IoT systems and devices are prone to APT-led attacks due to hardware constraints, software flaws or misconfigurations. AI-enabled techniques based on machine learning are suggested as a potential methodology to develop solutions that address the challenge of IoT-inspired cyber-attacks. A robust cyber-defence framework in smart airports is of vital importance to ensure the reliability of services and mitigate against service disruptions and cancellations, as well as loss of sensitive information.

E-Enabled Aircraft

The use of electronic data exchange and digital network connectivity are the spines of the approach adopted by the industry to increase the efficiency of on-board aircraft operations; IoT will play an important role in this respect, according to Wolf et al. [73].
A review of the role and the potential of e-enabled devices in enhancing digital network connectivity and electronic data exchange in future e-enabled aircraft, together with their attendant vulnerabilities, attack surfaces and possible mitigating factors, is thus of benefit. Mahmoud et al. in 2010 [85] reported on a design of an adaptive security architecture for future network-connected aircraft, while Neumann [86] and Sampigethaya et al. [87,88] surveyed both the current and future security provision of embedded systems in e-enabled aircraft networks. Mahmoud et al. [85] proposed a secure system topology for the embedded aircraft system network, referred to as SecMan, for application in Fiber-like aircraft satellite telecommunications. Sampigethaya et al. provided evidence that the safety, security and efficiency of e-enabled aircraft will be highly dependent on the security capabilities of the communications, network and cyber-physical systems. The consequence of the deployment of advanced sensing, extensive computerised systems, enhanced communication channels between on-ground and on-board systems, on-board system integration and smart software-enabled interfaces is a proliferation of attack surfaces. Such surfaces present opportunities to exploit on-board cyber-physical systems remotely through radio frequency jamming, node impersonation and passive eavesdropping [88]. Table 5 provides a summary of the classes of attacks in the context of the evolution of the sector. The on-board trend of increasing the degree of integration of IT services into aircraft mechanical devices will undoubtedly enhance efficiencies, at the expense, however, of an increase in attack surfaces.
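The machine-learning detection frameworks advocated in the preceding sections can, at their simplest, amount to flagging statistical outliers in traffic telemetry. A deliberately minimal sketch follows (a z-score rule on packet rates; the data and the two-standard-deviation threshold are illustrative assumptions, far simpler than the cited ML approaches):

```python
from statistics import mean, stdev

def anomaly_scores(samples):
    """Score each observation by its distance, in standard deviations,
    from the mean of the window (a z-score)."""
    mu, sigma = mean(samples), stdev(samples)
    return [abs(x - mu) / sigma for x in samples]

# Packets/second reported by a sensor: the last reading is a burst.
rates = [100, 103, 98, 101, 99, 400]
scores = anomaly_scores(rates)
flagged = [r for r, s in zip(rates, scores) if s > 2]
print(flagged)  # [400]
```

Production systems replace the z-score with learned models over many features, but the pipeline shape (featurise, score, threshold, alert) is the same.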
The relatively recent harnessing of artificial intelligence techniques by cyber-attackers to automate attack processes [89,90] is a worrying development and stimulates a response strategy also founded on the use of AI-enabled cyber-defence frameworks to safeguard e-enabled aircraft against severely damaging breaches.

Conclusions

The review presented a mapping of the cyber-attack incidents within the civil aviation industry over the last 20 years, through a search of the published literature and documented cyber-attacks, as well as capturing the motives of the threat actors. Results show that the main cyber-threat to the industry stems from APT groups, in collaboration with state actors, the goal being to acquire intellectual property and intelligence in order to advance domestic aerospace capabilities, as well as to monitor, infiltrate and subvert other nations' capabilities. As is the obligation of any industry, the aviation sector continues to strive to improve the quality of services provided and enhance customer experience. The approach followed to satisfy the business need is founded on increasing the levels of system integration, the judicious embedding of automation where appropriate and an increase in the use of data. The evolution to date has been seeded through the implementation of IoT technologies, not only to increase the level of inter-connectivity on-ground, on-board and between the two domains, but also to gather key sensor and customer behaviour data, the former necessary to optimise on-board (e-aircraft) operations and the latter to enhance on-ground (smart airport) customer experience. However, the higher levels of integration and connectivity spawn a spectrum of new cyber-attack surfaces and, given the ability of attackers to automate attack processes through AI, there is an immediate need to develop holistic cyber-defence strategies to protect the cyber-integrity of the emerging Smart Airport and e-enabled aircraft systems.
Otherwise, there exists a great likelihood that APT groups could advance beyond attacking airport facilities to breaching on-board and in-flight aircraft using sophisticated remote attack tools, with severe concomitant damage and loss of life.

Open Challenges and Research Opportunities

The combination of digital transformation, connectivity, segmentation and complexity currently being experienced in the industry due to the surge in global travel will continue to pose challenges in terms of cyber-security. The increasing levels of integration and automation to satisfy the needs of the business expose the sector by presenting new opportunities for cyber-attacks. There is no doubt that the evolution will improve the quality of services provided and the customer experience, but at the expense of exposing new attack surfaces to cyber-threat actors, which will stimulate a proliferation in the number of attacks. Furthermore, the industry is also obligated to protect legacy IT infrastructures and entrenched practices, exacerbated further by the fragmentation in the industry, which increases the complexity of the challenge, as many of the systems in use were not designed to be robust against cyber-crime. In this context, the difficulty of securing accurate information of sufficient scope on the nature and magnitude of cyber-incidents within the industry remains an open challenge that hinders innovation. News channels, blogs and company websites provide minimal information on cyber-breaches due to the sensitive nature of the industry and its dominance by government-owned agencies. This practice, whilst understandable from an industry perspective, presents researchers with challenges in developing fit-for-purpose solutions that support the evolution of the sector. Developers thus resort to performing informed quantitative analysis of potentially skewed data to reach meaningful conclusions.
Thus, clearly evident are the emerging opportunities for the development of AI-based cyber-security solutions that address the major threats to the operational integrity of the aviation industry. Innovating proactive, offence-centric measures for the protection of avionic infrastructures characterised by increasing levels of automation and, in turn, creating additional attack surfaces, presents a rich vein of opportunities.

Funding: The research is supported by the European Union Horizon 2020 Programme "FORESIGHT (Advanced cyber-security simulation platform for preparedness training in Aviation, Naval and Power-grid environments)" under Grant Agreement No. 833673. The content reflects the authors' view only and the Agency is not responsible for any use that may be made of the information within the paper.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: All data analysed and used in this paper are secondary and publicly available data.

Conflicts of Interest: The authors declare no conflict of interest.

Figure 1. Cyber-Attack Class based on Security Triad.

Figure 2. Cyber-Attacks by Type.

Figure 3. Cyber-Attacks by Location.

Author Contributions: Conceptualization, E.U., M.A.B.-F., H.H. and X.B.; investigation, E.U. and H.H.; methodology, E.U., X.B. and H.H.; administration, X.B. and I.A.; supervision, X.B. and I.A.; validation, M.B., E.U., R.A., C.T., X.B. and I.A.; writing-original draft preparation, E.U. and H.H.; writing-review and editing, M.B., E.U., H.H., M.A.B.-F., R.A., C.T., X.B. and I.A. All authors have read and agreed to the published version of the manuscript.

Table 1. Literature Search Results.

Year | Database | Journal | Conference | Total
2021 | Scopus | 1 | 2 | 3
2020 | Scopus | 5 | 2 | 7
2019 | Scopus | 3 | 3 | 6
2018 | Scopus | 1 | 2 | 3
2017 | Scopus | 0 | 3 | 3
2016 | Scopus | 1 | 1 | 2
2015 | Scopus | 1 | 1 | 2
2013 | Scopus | 1 | 1 | 2
2012 | Scopus | 0 | 1 | 1
Summary | | 13 | |

Table 2. Cyber-Attacks in the Civil Aviation Industry.
Class | Ref | Year | Incident | Source | Location | Description
C | [24] | 2003 | Slammer Worm attack | OTR | USA | One of the FAA's administrative servers was compromised through a Slammer worm attack. Internet services were shut down in some parts of Asia as a result of this attack, and this slowed down connections worldwide.
A | [25] | 2006 | Cyber-attack | OTR | Alaska, USA | Two separate attacks on US Federal Aviation Administration (FAA) Internet services that forced it to shut down some of its air traffic control systems.
C | [25] | 2008 | Malicious hacking attack | OTR | Oklahoma, USA | Hackers stole the administrative password of the FAA's interconnected networks when they took control of their system. By gaining access to the domain controller in the Western Pacific region, they were able to access more than 40,000 login credentials used to control part of the FAA's mission-support network.
C | [26] | 2009 | Malicious hacking attack | OTR | USA | A malicious hacking attack on the FAA's computers, through which hackers gained access to personal information of 48,000 current and former FAA employees.
C | [27] | 2013 | Malware attack | OTR | Istanbul, Turkey | Shutdown of the passport control system at the departure terminals of Istanbul Ataturk and Sabiha Gokcen airports due to a malware attack, leading to the delay of many flights.
C | [28] | 2013 | Hacking and phishing attacks | OTR | USA | Malicious hacking and phishing attacks that targeted about 75 airports. These major cyber-attacks were alleged to have been carried out by an undisclosed nation-state that sought to breach US commercial aviation networks.
A | [29] | 2015 | DDoS attack | OTR | Poland | Distributed Denial-of-Service (DDoS) attack by cyber-criminals that affected LOT Polish Airlines' flight-plan IT network systems at the Warsaw Chopin airport. The attack rendered LOT's system computers unable to send flight plans to the aircraft, thus grounding at least 10 flights and leaving about 1400 passengers stranded.
I | [30] | 2016 | Hacking, phishing attacks | OTR | Vietnam | The defacement of a website belonging to Vietnam Airlines and of flight information screens at Ho Chi Minh City and the capital, Hanoi, displaying messages supportive of China's maritime claims in the South China Sea, by pro-Beijing hackers.
A | [31] | 2016 | Cyber-attack | OTR | Boryspil, Ukraine | A malware attack was detected in a computer in the IT network of Kyiv's main airport, which includes the airport's air traffic control system.
A | [30] | 2017 | Human error | OTR | United Kingdom | British flag-carrier computer systems failure caused by disconnection and re-connection of the data-centre power supply by a contracted engineer. This accident left about 75,000 passengers of British Airways stranded.
C | [32] | 2018 | Data breach | OTR | Hong Kong | Cathay Pacific Airways data breach of about 9.4 million customers' personal identifiable information.
C | [33] | 2018 | Data breach | OTR | United Kingdom | British Airways data breach of about 380,000 customers' personal identifiable information.
C | [34] | 2018 | Data breach | OTR | USA | Delta Air Lines Inc. and Sears department stores reported a data breach of about 100,000 customers' payment information through a third party.
A | [35] | 2018 | Ransomware attack | OTR | Bristol Airport, UK | An attack on electronic flight information screens at Bristol Airport. This resulted in the screens being taken offline and replaced with whiteboard information. There was no known adverse effect from this attack.
C | [36] | 2018 | Mobile app data breach | OTR | Air Canada, Canada | Air Canada reported a mobile app data breach affecting the personal data of 20,000 people.
C | [37] | 2018 | Data breach | OTR | Washington DC, USA | Data breach on a NASA server that led to possible compromise of stored personally identifiable information (PII) of employees on 23 October 2018.
C | [38] | 2018 | Ransomware attack | OTR | Chicago, USA | Boeing was hit by the WannaCry computer virus, but the attack was reported to have caused minimal damage to the company's internal systems.
A | [20] | 2018 | Cyber-attack | TP | Sweden | Cyber-attack launched by Russian APT group (APT28) that blocked Sweden's air traffic control capabilities, grounding hundreds of flights over a 5-day period.
A | [39] | 2019 | Bot attacks | OTR | Ben Gurion Airport, Israel | About 3 million bot attacks were blocked in a day by Israel's airport authority as they attempted to breach airport systems.
C | [40] | 2019 | Cyber-incident | OTR | Toulouse, France | A cyber-incident that resulted in unauthorised access to Airbus "Commercial Aircraft business" information systems. There was no known impact, according to the report, on Airbus' commercial operations.
C | [41] | 2019 | Ransomware attack | OTR | Albany, USA | Albany International Airport experienced a ransomware attack on Christmas of 2019. The attackers successfully encrypted the entire database of the airport, forcing the authorities to pay a ransom to a threat actor in exchange for the decryption key.
C | [42] | 2019 | Crypto-mining malware infection | OTR | Europe | Cyberbit researchers discovered, through their security software known as EDR, a network infection of more than 50% of the European airport workstations by a cryptocurrency-mining malware.
C | [43] | 2019 | Phishing attack | OTR | New Zealand | A phishing attack targeted at Air New Zealand Airpoints customers. This attack compromised the personal information of approximately 112,000 customers, with names, details and Airpoints numbers among the data exposed.
C | [44] | 2020 | Ransomware attack | OTR | Denver, USA | A cyber-incident that involved the attacker accessing and stealing company data, which were later leaked online.
C | [45] | 2020 | Ransomware attack | OTR | San Antonio, USA | Data breach suffered by ST Engineering's aerospace subsidiary in the USA that later led to a ransomware attack by the Maze cyber-criminal group.
I | [46] | 2021 | Software error | OTR | Birmingham, United Kingdom | A software error in the IT system that could not recognise mass discrepancies between the loadsheet and the flight plan, leading to the aircraft having 1606 kg more take-off mass than required.

Legend: C = Confidentiality, I = Integrity, A = Availability, OTR = Online Technical Report and News, TP = Technical Presentation.

Table 3. Cost of Cyber-Attacks in the Aviation Industry Per Year.

Year | No. of Persons Affected | Airports Shut Down | Lost Flight Hours
2003 | Not Provided | Not Provided | Not Provided
2006 | Not Provided | 2 | Not Provided
2008 | 40,000 | Not Provided | Not Provided
2009 | 48,000 | Not Provided | Not Provided
2013 | Not Provided | 77 | Not Provided
2015 | 1400 | Not Provided | Not Provided
2016 | Not Provided | Not Provided | Not Provided
2017 | 75,000 | Not Provided | Not Provided
2018 | 94,500,000 | Not Provided | 120
2019 | 112,000 | Not Provided | Not Provided
2020 | Not Provided | Not Provided | Not Provided

Legend: Not Provided = records were not made public.

Table 4. Some Exploitable Flaws and Components in the Civil Aviation Industry. Columns: Class, Ref, Component, Mitigation, Description; the first entry is C,I | [53] | SATCOM terminals.

Table 5. A summary of the classes of attacks in next-generation aviation systems.
Domain | Ref | Experimental Tests/Scenarios | Tools

IoT:
[91] | Network mapping attack/implementation of profiling module (training and testing algorithm) | TestStad/machine learning algorithm
[92] | Discrete-time Markov chain model (DTMC): analysing the capacity of the blockchain | Block mining algorithm and Ethereum protocol
[93] | Manual test: analysis and attacks of each device; automated test: process testing of different IoT devices | Open-source

MS:
[94] | DoS massive traffic/transfer data/abnormal code/system crash | DTM by Triangle Micro Works
[95] | Real-world attack scenarios: internal and external network attacks | SDN/network function virtualisation
[96] | Anomaly intrusion/attack traffic | Machine learning algorithm/feature extraction
[97] | Command injection attack | Machine learning algorithm/PLC programming in Ladder language
[98] | SWaT/WADI datasets: normal and attack scenarios | Machine learning algorithm
[99] | Man-in-the-middle attack | SDN/Python
[100] | LAUP algorithm (authentication)/key distribution test | COOJA simulator

Smart Grid:
[101] | Offline co-simulation test-bed: DoS/FDI attacks | OMNET++
[102] | Access to communication link ([103]) attack model | OPAL-RT
[104] | Deep packet inspection | Software-Defined Networks/OpenFMB
[105] | Power supply interruption attack/physical damage attack | Real-world power system/machine learning
[106] | MMS/GOOSE/SV implementation | IEC 61850 protocol/Ethernet Raspberry Pi 3B+
[107] | HIL simulation/proof-of-concept validation | Python
[108] | DoS/man-in-the-middle attacks/TCP SYN flood attack | DeterLab/Security Experimentation EnviRonment (SEER)
[109] | Recording network traffic/poisoning attack | Real-Time Digital Simulator (RTDS)
[110] | Timing intrusion attack | Field End-to-End Calibrator/Gold PMU
[111] | Test of cyber-physical sensor: IREST | Idaho CPS SCADA Cybersecurity (ISAAC) testbed
[112] | MITM attack/DoS attack | Open-source software/Raspberry Pis, FLEP-SGS

Cloud:
[113] | Flood of malicious traffic (ICMP/HTTP/SYN) | VMware ESXi hypervisor/a vCenter server/VMs
[114] | Small messages (about 1-2 KBytes): fast filling of the buffers | MOM4Cloud architectural model
[115] | UNM database: malicious tracing logs | KVM 2.6.27 hypervisor/Python 3.4
[116] | Test of memory usage before or after instance creation | OpenStack: open-source cloud operating system
[117] | Evaluation of performance metrics of NDN/edge cloud computing | Cloud VM

Online Machine Tool Communication:
[118] | Adding defaults: broken interconnection/abnormal extruder | MTComm
[119] | Side-channel attacks/stealthy data exfiltration | DHCP server/TFTP server/HTTP server/MQTT server
[120] | SQL injection attack | OpenStack implementation/Python
[121] | Testing traffic scenarios | OpenFlow controller/OpenvSwitch/network virtualisation agent
[122] | Time-inference attacks | Software-Defined Network
[123] | DDoS attack | OpenStack environment

References

Duchamp, H.; Bayram, I.; Korhani, R. Cyber-Security, a new challenge for the aviation and automotive industries. In Seminar in Information Systems: Applied Cybersecurity Strategy for Managers; 2016; pp. 1-4. Available online: https://blogs.harvard.edu/cybersecurity/files/2017/01/Cybersecurity-aviation-strategic-report.pdf (accessed on 20 September 2020).
2. Monteagudo, J. Aviation Cybersecurity-High Level Analysis, Major Challenges and Where the Industry Is Heading. 2020. Available online: https://cyberstartupobservatory.com/aviation-cybersecurity-major-challenges/ (accessed on 26 September 2020).
3. Bellekens, X.; Jayasekara, G.; Hindy, H.; Bures, M.; Brosset, D.; Tachtatzis, C.; Atkinson, R. From cyber-security deception to manipulation and gratification through gamification. In International Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2019; pp. 99-114.
4. ICAO. Security and Facilitation Strategic Objective: Aviation Cybersecurity Strategy. 2019. Available online: https://www.icao.int/cybersecurity/Documents/AVIATIONCYBERSECURITYSTRATEGY.EN.pdf (accessed on 6 December 2021).
5. Okoli, C.; Schabram, K. A Guide to Conducting a Systematic Literature Review of Information Systems Research. 2010. Available online: https://asset-pdf.scinapse.io/prod/1539987097/1539987097.pdf (accessed on 6 December 2021).
6. Okoli, C. A Guide to Conducting a Standalone Systematic Literature Review. 2015. Available online: https://aisel.aisnet.org/cais/vol37/iss1/43/ (accessed on 6 December 2021).
7. IATA. Compilation of Cyber Security Regulations, Standards, and Guidance Applicable to Civil Aviation. 2020. Available online: https://www.iata.org/contentassets/4c51b00fb25e4b60b38376a4935e278b/compilationofcyberregulationsstandardsandguidanceapr212.0.pdf (accessed on 6 December 2021).
8. Haass, J.; Sampigethaya, R.; Capezzuto, V. Aviation and cybersecurity: Opportunities for applied research. TR News 2016, 304, 39.
9. Lykou, G.; Anagnostopoulou, A.; Gritzalis, D. Implementing cyber-security measures in airports to improve cyber-resilience. In Proceedings of the 2018 Global Internet of Things Summit (GIoTS), Bilbao, Spain, 4-7 June 2018; pp. 1-6.
10. Lykou, G.; Anagnostopoulou, A.; Gritzalis, D. Smart airport cybersecurity: Threat mitigation and cyber resilience controls. Sensors 2019, 19, 19.
11. Gopalakrishnan, K.; Govindarasu, M.; Jacobson, D.W.; Phares, B.M. Cyber security for airports. Int. J. Traffic Transp. Eng. 2013, 3, 365-376.
12. Mathew, A.R. Airport Cyber Security and Cyber Resilience Controls. arXiv 2019, arXiv:1908.09894.
13. Suciu, G.; Scheianu, A.; Vulpe, A.; Petre, I.; Suciu, V. Cyber-attacks-The impact over airports security and prevention modalities. In World Conference on Information Systems and Technologies; Springer: Berlin/Heidelberg, Germany, 2018; pp. 154-162.
14. Corretjer, P.J. A Cybersecurity Analysis of Today's Commercial Aircrafts and Aviation Industry Systems. Master's Thesis, Utica College, Utica, NY, USA, 2018; p. 22.
15. Kagalwalla, N.; Churi, P.P. Cybersecurity in Aviation: An Intrinsic Review. In Proceedings of the 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA), Pune, India, 19-21 September 2019; pp. 1-6.
16. Lehto, M. Cyber Security in Aviation, Maritime and Automotive. In Computation and Big Data for Transport; Springer: Berlin/Heidelberg, Germany, 2020; pp. 19-32.
17. CyberRisk, I. Cyber Threats to the Aviation Industry. 2020. Available online: https://cyberriskinternational.com/2020/04/06/cyber-threats-to-the-aviation-industry/ (accessed on 19 September 2020).
18. Fireeye. Cyber Threats to the Aerospace and Defense Industries. 2016. Available online: https://www.fireeye.com/content/dam/fireeye-www/current-threats/pdfs/ib-aerospace.pdf (accessed on 24 September 2020).
19. Varonis. 9 Infamous APT Groups: Fast Fact Trading Cards. 2020. Available online: https://www.varonis.com/blog/apt-groups (accessed on 6 December 2021).
20. Kessler, G.C.; Craiger, J.P. Aviation Cybersecurity: An Overview. 2018. Available online: https://commons.erau.edu/ntas/2018/presentations/37/ (accessed on 6 December 2021).
21. Abeyratne, R. Aviation and Cybersecurity in the Digital World. In Aviation in the Digital Age; Springer: Berlin/Heidelberg, Germany, 2020; pp. 173-211.
22. Arampatzis, A. The State of Civil Aviation Cybersecurity. 2020. Available online: https://www.tripwire.com/state-of-security/security-data-protection/civil-aviation-cybersecurity/ (accessed on 30 September 2020).
23. Viveros, C.A.P. Analysis of the Cyber Attacks against ADS-B Perspective of Aviation Experts. Master's Thesis, University of Tartu, Tartu, Estonia, 2016.
24. Gross, G. FAA: Slammer Didn't Hurt Us, but Other Attacks Coming. 2003. Available online: https://www.networkworld.com/article/2339600/faa--slammer-didn-t-hurt-us--but-other-attacks-coming.html (accessed on 19 September 2020).
25. Goodin, D. US Air Traffic Faces 'Serious Harm' from Cyber Attackers. 2009. Available online: https://www.theregister.com/2009/05/07/air-traffic-cyber-attack/ (accessed on 19 September 2020).
26. Ellinor, M. Report: Hackers Broke into FAA Air Traffic Control Systems. 2009. Available online: https://www.cnet.com/tech/services-and-software/report-hackers-broke-into-faa-air-traffic-control-systems/ (accessed on 19 September 2020).
27. Paganini, P. Istanbul Ataturk International Airport Targeted by a Cyber-Attack. 2013. Available online: https://securityaffairs.co/wordpress/16721/hacking/istanbul-ataturk-international-airport-targeted-by-cyber-attack.html (accessed on 19 September 2020).
28. Welsh, W. Phishing Scam Targeted 75 US Airports. 2014. Available online: https://www.informationweek.com/?1 (accessed on 19 September 2020).
29. Brewster, T. Attack On LOT Polish Airline Grounds 10 Flights. 2015. Available online: https://www.forbes.com/sites/thomasbrewster/2015/06/22/lot-airline-hacked/?sh=6e4015fe124e (accessed on 19 September 2020).
30. Kirkliauskaite, K. Main Cyber-Security Challenges in Aviation. 2020. Available online: https://www.aerotime.aero/25150-main-cyber-security-challenges-in-aviation (accessed on 19 September 2020).
31. Polityuk, P.; Prentice, A. Ukraine Says to Review Cyber Defenses after Airport Targeted from Russia. 2016. Available online: https://www.reuters.com/article/us-ukraine-cybersecurity-malware-idUSKCN0UW0R0 (accessed on 6 October 2020).
32. Park, K. Cathay Pacific Cyber Attack Is World's Biggest Airline Data Breach. 2018. Available online: https://www.insurancejournal.com/news/international/2018/10/26/505699.html (accessed on 19 September 2020).
33. Sandle, P. British Airways Says 'Sophisticated' Hacker Stole Data on 380,000 Customers. 2018. Available online: https://www.insurancejournal.com/news/international/2018/09/10/500566.htm (accessed on 19 September 2020).
34. Singh, K. Delta, Sears Report Data Breach by Service Provider. 2018. Available online: https://www.insurancejournal.com/news/national/2018/04/05/485440.htm (accessed on 19 September 2020).
35. Leyden, J. Brit Airport Pulls Flight info System Offline after Attack by 'Online Crims'. 2018. Available online: https://www.theregister.com/2018/09/17/bristol-airport-cyber-attack/ (accessed on 19 September 2020).
36. Sandle, T. Air Canada Suffers Major App Data Breach of 20,000 Customers. 2018. Available online: https://www.digitaljournal.com/business/air-canada-in-major-app-data-breach/article/530763 (accessed on 19 September 2020).
37. Gibbs, B. Potential Personally Identifiable Information (PII) Compromise of NASA Servers. 2018. Available online: http://spaceref.com/news/viewsr.html?pid=52074/ (accessed on 22 September 2020).
38. Gates, D. Boeing Hit by WannaCry Virus, but Says Attack Caused Little Damage. 2018. Available online: https://www.seattletimes.com/business/boeing-aerospace/boeing-hit-by-wannacry-virus-fears-it-could-cripple-some-jet-production/ (accessed on 22 September 2020).
39. Solomon, S. Israeli Airports Fend Off 3 Million Attempted Attacks a Day, Cyber Head Says. 2019. Available online: https://www.timesofisrael.com/israeli-airports-fend-off-3-million-attempted-attacks-a-day-cyber-head-says/ (accessed on 19 September 2020).
40. Duvelleroy, M. Airbus Statement on Cyber Incident. 2019. Available online: https://www.airbus.com/en/newsroom/press-releases/2019-01-airbus-statement-on-cyber-incident (accessed on 22 September 2020).
41. Goud, N. Ransomware Attack on Albany Airport on Christmas 2019. 2019. Available online: https://www.cybersecurity-insiders.com/ransomware-attack-on-albany-airport-on-christmas-2019/ (accessed on 25 September 2020).
42. Team, N. Cryptocurrency Miners Infected More than 50% of the European Airport Workstations. 2019. Available online: https://www.cyberdefensemagazine.com/cryptocurrency-miners-infected-more-than-50-of-the-european-airport-workstations/ (accessed on 25 September 2020).
43. Narendra, M. Privacy: Air New Zealand Experiences Data Breach. 2019. Available online: https://www.grcworldforums.com/news/2019/08/16/privacy-air-new-zealand-experiences-data-breach/ (accessed on 25 September 2020).
44. Montalbano, E. DoppelPaymer Ransomware Used to Steal Data from Supplier to SpaceX, Tesla. 2020. Available online: https://threatpost.com/doppelpaymer-ransomware-used-to-steal-data-from-supplier-to-spacex-tesla/153393/ (accessed on 22 September 2020).
45. Chua, A. Ransomware Attack hits ST Engineering's USA Aerospace Unit. 2020. Available online: https://www.flightglobal.com/aerospace/ransomware-attack-hits-st-engineerings-usa-aerospace-unit/138722.article (accessed on 23 September 2020).
46. Claburn, T. Airline Software Super-Bug: Flight Loads Miscalculated Because Women Using 'Miss' Were Treated as Children. 2021. Available online: https://www.theregister.com/2021/04/08/tuisoftwaremistake/ (accessed on 9 April 2021).
47. Mazareanu, E. Number of Public and Private Airports in the United States from 1990 to 2019*. 2020. Available online: https://www.statista.com/statistics/183496/number-of-airports-in-the-united-states-since-1990/ (accessed on 28 November 2020).
48. Paganini, P. Cyber Threats against the Aviation Industry. 2014. Available online: https://resources.infosecinstitute.com/topic/cyber-threats/ (accessed on 19 September 2020).
49. Thales. Overcoming the Cyber Threat in Aviation. 2016. Available online: https://onboard.thalesgroup.com/overcoming-cyber-threat-aviation/ (accessed on 24 September 2020).
50. Zetter, K. Feds Say that Banned Researcher Commandeered a Plane. 2015. Available online: https://www.wired.com/2015/05/ (accessed on 18 January 2022).
51. Freiherr, G. Will Your Airliner Get Hacked? 2021. Available online: https://www.smithsonianmag.com/air-space-magazine/will-your-airliner-get-hacked-180976752/ (accessed on 18 January 2022).
52. Efe, A.; Tuzlupınar, B.; Cavlan, A.C. Air Traffic Security against Cyber Threats. Bilge Int. J. Sci. Technol. Res. 2021, 3, 135-143. Available online: https://dergipark.org.tr/en/pub/bilgesci/issue/49118/405074 (accessed on 18 January 2021).
53. Santamarta, R. A Wake-Up Call for SATCOM Security. Technical White Paper. 2014. Available online: https://www.secnews.gr/wp-content/uploads/Files/Satcom_Security.pdf (accessed on 19 September 2020).
54. Biesecker, C. Boeing 757 Testing Shows Airplanes Vulnerable to Hacking, DHS Says; Avionics International: New York, NY, USA, 2017.
55. Dessiatnikoff, A.; Deswarte, Y.; Alata, E.; Nicomette, V. Potential attacks on onboard aerospace systems. IEEE Secur. Priv. 2012, 10, 71-74.
56. Papp, D.; Ma, Z.; Buttyan, L. Embedded systems security: Threats, vulnerabilities, and attack taxonomy. In Proceedings of the 2015 13th Annual Conference on Privacy, Security and Trust (PST), Izmir, Turkey, 21-23 July 2015; pp. 145-152.
57. GAO. Aviation Cybersecurity. 2020. Available online: https://www.gao.gov/assets/gao-21-86.pdf (accessed on 12 May 2020).
58. Encyclopedia of Physical Science and Technology; Academic Press: Cambridge, MA, USA, 1987.
59. Smith, B. System and Method for Data Collection in an Avionics Network. U.S. Patent App. 11/092,470, 28 September 2006.
60. Akram, R.N.; Markantonakis, K.; Holloway, R.; Kariyawasam, S.; Ayub, S.; Seeam, A.; Atkinson, R. Challenges of security and trust in avionics wireless networks. In Proceedings of the 2015 IEEE/AIAA 34th Digital Avionics Systems Conference (DASC), Prague, Czech Republic, 13-17 September 2015; pp. 777-780.
61. Akram, R.N.; Markantonakis, K.; Mayes, K.; Bonnefoi, P.F.; Sauveron, D.; Chaumette, S. An efficient, secure and trusted channel protocol for avionics wireless networks. In Proceedings of the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), Sacramento, CA, USA, 25-29 September 2016; pp. 1-10.
62. Akram, R.N.; Markantonakis, K.; Mayes, K.; Bonnefoi, P.F.; Sauveron, D.; Chaumette, S. Security and performance comparison of different secure channel protocols for Avionics Wireless Networks. In Proceedings of the 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), Sacramento, CA, USA, 25-29 September 2016; pp. 1-8.
63. Markantonakis, K.; Akram, R.N.; Holloway, R. A secure and trusted boot process for avionics wireless networks. In Proceedings of the 2016 Integrated Communications Navigation and Surveillance (ICNS), Herndon, VA, USA, 19-21 April 2016; pp. 1C3-1-1C3-9.
64. Bellamy, W., III. How ACARS Will Evolve, Not Disappear, With Transition to IPS. 2018. Available online: https://www.aviationtoday.com/2018/06/12/acars-will-evolve-not-disappear-transition-ips/ (accessed on 28 September 2020).
65. Smith, M.; Moser, D.; Strohmeier, M.; Lenders, V.; Martinovic, I. Analyzing privacy breaches in the aircraft communications addressing and reporting system (acars). arXiv 2017, arXiv:1705.07065.
66. Smith, M.; Moser, D.; Strohmeier, M.; Lenders, V.; Martinovic, I. Undermining privacy in the aircraft communications addressing and reporting system (ACARS). Proc. Priv. Enhancing Technol. 2018, 2018, 105-122.
67. Ali, B.S. A Safety Assessment Framework for Automatic Dependent Surveillance Broadcast (ADS-B) and Its Potential Impact on Aviation Safety. Ph.D. Thesis, Centre for Transport Studies, Department of Civil and Environmental, Imperial College London, London, UK, 2013.
68. Ali, B.S.; Schuster, W.; Ochieng, W.Y. Evaluation of the capability of automatic dependent surveillance broadcast to meet the requirements of future airborne surveillance applications. J. Navig. 2017, 70, 49.
69. Ali, B.S.; Ochieng, W.Y.; Schuster, W.; Majumdar, A.; Chiew, T.K. A safety assessment framework for the Automatic Dependent Surveillance Broadcast (ADS-B) system. Saf. Sci. 2015, 78, 91-100.
70. Manesh, M.R.; Kaabouch, N. Analysis of vulnerabilities, attacks, countermeasures and overall risk of the Automatic Dependent Surveillance-Broadcast (ADS-B) system. Int. J. Crit. Infrastruct. Prot. 2017, 19, 16-31.
71. Tabassum, A. Performance Analysis of Automatic Dependent Surveillance-Broadcast (ADS-B) and Breakdown of Anomalies. 2017. Available online: https://www.proquest.com/openview/8e29fdfcd2afbe8ce28f760d0a314248/1?pq-origsite=gscholar&cbl=18750 (accessed on 28 September 2020).
72. Strohmeier, M.; Lenders, V.; Martinovic, I. On the security of the automatic dependent surveillance-broadcast protocol. IEEE Commun. Surv. Tutor. 2014, 17, 1066-1087.
73. Wolf, M.; Minzlaff, M.; Moser, M. Information technology security threats to modern e-enabled aircraft: A cautionary note. J. Aerosp. Inf. Syst. 2014, 11, 447-457.
74. Howard, E. Dell and Airbus deliver Electronic Flight Bag Services to Airlines Worldwide. 2013. Available online: https://www.intelligent-aerospace.com/commercial/article/16539972/dell-and-airbus-deliver-electronic-flight-bag-services-to-airlines-worldwide (accessed on 12 February 2021).
75. Keller, J. Fokker Services Certifies iPad Electronic Flight Bag (EFB) for Bombardier Dash 8 Twin-Engine Passenger Turboprop. 2013. Available online: https://www.intelligent-aerospace.com/commercial/article/16539248/fokker-services-certifies-ipad-electronic-flight-bag-efb-for-bombardier-dash-8-twinengine-passenger-turboprop (accessed on 12 February 2021).
76. Taleqani, A.R.; Nygard, K.E.; Bridgelall, R.; Hough, J. Machine Learning Approach to Cyber Security in Aviation. In Proceedings of the 2018 IEEE International Conference on Electro/Information Technology (EIT), Rochester, MI, USA, 3-5 May 2018; pp. 0147-0152.
77. Ugwoke, F.; Okafor, K.; Chijindu, V. Security QoS profiling against cyber terrorism in airport network systems. In Proceedings of the 2015 International Conference on Cyberspace (CYBER-Abuja), Abuja, Nigeria, 4-7 November 2015; pp. 241-251.
78. Delain, O.; Ruhlmann, O.; Vautier, E.; Johnson, C.; Shreeve, M.; Sirko, P.; Prozserin, V. Cyber-Security Application for SESAR OFA 05.01.01-Final Report. 2016. Available online: https://www.sesarju.eu/sites/default/files/documents/news/AddressingairportcybersecurityFull0.pdf (accessed on 3 April 2020).
79. Clark, R.M.; Hakim, S. Cyber-Physical Security: Protecting Critical Infrastructure at the State and Local Level; Springer: Berlin/Heidelberg, Germany, 2016; Volume 3.
80. Martellini, M. Cyber Security: Deterrence and IT Protection for Critical Infrastructures; Springer: Berlin/Heidelberg, Germany, 2013.
81. Singer, P.W.; Friedman, A. Cybersecurity: What Everyone Needs to Know; OUP USA: New York, NY, USA, 2014.
82. Zamorano, M.M.; Fernández-Laso, M.C.; de Esteban Curiel, J. Smart Airports: Acceptance of Technology by Passengers. Cuad. Tur. 2020, 45, 567-570.
83. Koroniotis, N.; Moustafa, N.; Schiliro, F.; Gauravaram, P.; Janicke, H. A Holistic Review of Cybersecurity and Reliability Perspectives in Smart Airports. IEEE Access 2020, 8, 209802-209834.
84. Akar, I.N.; Yaqoobi, M.H. Smart Airport: How IOT and New Technologies Shaping the Future of Airport Industry. Available online: https://hadiyaqoobi.github.io/Graduation-project/documents/Thesis202.1.pdf (accessed on 3 April 2020).
85. Mahmoud, M.S.B.; Larrieu, N.; Pirovano, A.; Varet, A. An adaptive security architecture for future aircraft communications. In Proceedings of the 29th Digital Avionics Systems Conference, Salt Lake City, UT, USA, 3-7 October 2010; pp. 3.E.2-1-3.E.2-16.
86. Neumann, P.G. Computer security in aviation: Vulnerabilities, threats, and risks. In International Conference on Aviation Safety in the 21st Century; White House Commission on Safety and Security and George Washington University: Washington, DC, USA, 1997.
87. Sampigethaya, K.; Poovendran, R.; Bushnell, L. Secure operation, control, and maintenance of future e-enabled airplanes. Proc. IEEE 2008, 96, 1992-2007.
88. Sampigethaya, K.; Poovendran, R.; Shetty, S.; Davis, T.; Royalty, C. Future e-enabled aircraft communications and security: The next 20 years and beyond. Proc. IEEE 2011, 99, 2040-2055.
89. Kaloudi, N.; Li, J. The ai-based cyber threat landscape: A survey. ACM Comput. Surv. (CSUR) 2020, 53, 1-34.
90. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv 2018, arXiv:1802.07228.
91. Siboni, S.; Sachidananda, V.; Shabtai, A.; Elovici, Y. Security Testbed for the Internet of Things. arXiv 2016, arXiv:1610.05971.
92. Wang, X.; Yu, G.; Zha, X.; Ni, W.; Liu, R.P.; Guo, Y.J.; Zheng, K.; Niu, X. Capacity of blockchain based internet-of-things: Testbed and analysis. Internet Things 2019, 8, 100109.
93. Waraga, O.A.; Bettayeb, M.; Nasir, Q.; Talib, M.A. Design and implementation of automated IoT security testbed. Comput. Secur. 2020, 88, 101648.
94. Lee, S.; Lee, S.; Yoo, H.; Kwon, S.; Shon, T. Design and implementation of cybersecurity testbed for industrial IoT systems. J. Supercomput. 2018, 74, 4506-4520.
95. Kim, Y.; Nam, J.; Park, T.; Scott-Hayward, S.; Shin, S. SODA: A software-defined security framework for IoT environments. Comput. Netw. 2019, 163, 106889.
96. Shafiq, M.; Tian, Z.; Sun, Y.; Du, X.; Guizani, M. Selection of effective machine learning algorithm and Bot-IoT attacks traffic identification for internet of things in smart city. Future Gener. Comput. Syst. 2020, 107, 433-442.
97. Zolanvari, M.; Teixeira, M.A.; Jain, R. Effect of imbalanced datasets on security of industrial IoT using machine learning. In Proceedings of the 2018 IEEE International Conference on Intelligence and Security Informatics (ISI), Miami, FL, USA, 9-11 November 2018; pp. 112-117.
98. Elnour, M.; Meskin, N.; Khan, K.; Jain, R. A Dual-Isolation-Forests-Based Attack Detection Framework for Industrial Control Systems. IEEE Access 2020, 8, 36639-36651.
99. Molina Zarca, A.; Bernal Bernabe, J.; Farris, I.; Khettab, Y.; Taleb, T.; Skarmeta, A. Enhancing IoT security through network softwarization and virtual security appliances. Int. J. Netw. Manag. 2018, 28, e2038.
100. Arockia Baskaran, A.G.R.; Nanda, P.; Nepal, S.; He, S. Testbed evaluation of Lightweight Authentication Protocol (LAUP) for 6LoWPAN wireless sensor networks. Concurr. Comput. Pract. Exp. 2019, 31, e4868.
101. Hammad, E.; Ezeme, M.; Farraj, A. Implementation and development of an offline co-simulation testbed for studies of power systems cyber security and control verification. Int. J. Electr. Power Energy Syst. 2019, 104, 817-826.
102. Poudel, S.; Ni, Z.; Malla, N. Real-time cyber physical system testbed for power system security and control. Int. J. Electr. Power Energy Syst. 2017, 90, 124-133.
103. Hahn, A.; Ashok, A.; Sridhar, S.; Govindarasu, M. Cyber-physical security testbeds: Architecture, application, and evaluation for smart grid. IEEE Trans. Smart Grid 2013, 4, 847-855.
104. De La Torre, G.; Rad, P.; Choo, K.K.R. Implementation of deep packet inspection in smart grids and industrial Internet of Things: Challenges and opportunities. J. Netw. Comput. Appl. 2019, 135, 32-46.
105. Adepu, S.; Kandasamy, N.K.; Mathur, A. Epic: An electric power testbed for research and training in cyber physical systems security. In Computer Security; Springer: Berlin/Heidelberg, Germany, 2018; pp. 37-52.
106. Fujdiak, R.; Blazek, P.; Chmelar, P.; Dittrich, P.; Voznak, M.; Mlynek, P.; Slacik, J.; Musil, P.; Jurka, P.; Misurec, J. Communication Model of Smart Substation for Cyber-Detection Systems. In International Conference on Computer Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 256-271.
107. Cheng, Z.; Chow, M.Y. The Development and Application of a DC Microgrid Testbed for Distributed Microgrid Energy Management System. In Proceedings of the IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society.
the IECON 2018-44th Annual Conference of the IEEE Industrial Electronics SocietyWashington, DC, USACheng, Z.; Chow, M.Y. The Development and Application of a DC Microgrid Testbed for Distributed Microgrid Energy Management System. In Proceedings of the IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21-23 October 2018; pp. 300-305. Integrated simulation to analyze the impact of cyber-attacks on the power grid. R Liu, A Srivastava, Proceedings of the 2015 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES). the 2015 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES)Seattle, WA, USALiu, R.; Srivastava, A. Integrated simulation to analyze the impact of cyber-attacks on the power grid. In Proceedings of the 2015 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), Seattle, WA, USA, 13 April 2015; pp. 1-6. The idaho CPS smart grid cybersecurity testbed. I A Oyewumi, A A Jillepalli, P Richardson, M Ashrafuzzaman, B K Johnson, Y Chakhchoukh, M A Haney, F T Sheldon, D C De Leon, Isaac, Proceedings of the 2019 IEEE Texas Power and Energy Conference (TPEC). the 2019 IEEE Texas Power and Energy Conference (TPEC)College Station, TX, USA, 7-8Oyewumi, I.A.; Jillepalli, A.A.; Richardson, P.; Ashrafuzzaman, M.; Johnson, B.K.; Chakhchoukh, Y.; Haney, M.A.; Sheldon, F.T.; de Leon, D.C. ISAAC: The idaho CPS smart grid cybersecurity testbed. In Proceedings of the 2019 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 7-8 February 2019; pp. 1-6. Testbed for Timing Intrusion Evaluation and Tools for Lab and Field Testing of Synchrophasor System. M Kezunovic, C Qian, C Seidl, J Ren, Proceedings of the 2019 International Conference on Smart Grid Synchronized Measurements and Analytics (SGSMA). 
the 2019 International Conference on Smart Grid Synchronized Measurements and Analytics (SGSMA)College Station, TX, USAKezunovic, M.; Qian, C.; Seidl, C.; Ren, J. Testbed for Timing Intrusion Evaluation and Tools for Lab and Field Testing of Synchrophasor System. In Proceedings of the 2019 International Conference on Smart Grid Synchronized Measurements and Analytics (SGSMA), College Station, TX, USA, 21-23 May 2019; pp. 1-8. Cyber and Physical Anomaly Detection in Smart-Grids. D L Marino, C S Wickramasinghe, K Amarasinghe, H Challa, P Richardson, A A Jillepalli, B K Johnson, C Rieger, M Manic, 10.1109/RWS47064.2019.8972003IEEE Resil. Week (RWS). Marino, D.L.; Wickramasinghe, C.S.; Amarasinghe, K.; Challa, H.; Richardson, P.; Jillepalli, A.A.; Johnson, B.K.; Rieger, C.; Manic, M. Cyber and Physical Anomaly Detection in Smart-Grids. IEEE Resil. Week (RWS) 2019, 2019, 187-193. [CrossRef] FLEP-SGS 2: A Flexible and Low-cost Evaluation Platform for Smart Grid Systems Security. C Konstantinou, M Sazos, M Maniatakos, Proceedings of the 2019 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT). the 2019 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT)Washington, DC, USAKonstantinou, C.; Sazos, M.; Maniatakos, M. FLEP-SGS 2: A Flexible and Low-cost Evaluation Platform for Smart Grid Systems Security. In Proceedings of the 2019 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 18-21 February 2019; pp. 1-5. Designing an efficient security framework for detecting intrusions in virtual network of cloud computing. R Patil, H Dudeja, C Modi, 10.1016/j.cose.2019.05.016Comput. Secur. 85Patil, R.; Dudeja, H.; Modi, C. Designing an efficient security framework for detecting intrusions in virtual network of cloud computing. Comput. Secur. 2019, 85, 402-422. [CrossRef] An approach for the secure management of hybrid cloud-edge environments. 
A Celesti, M Fazio, A Galletta, L Carnevale, J Wan, M Villari, 10.1016/j.future.2018.06.043Future Gener. Comput. Syst. 90Celesti, A.; Fazio, M.; Galletta, A.; Carnevale, L.; Wan, J.; Villari, M. An approach for the secure management of hybrid cloud-edge environments. Future Gener. Comput. Syst. 2019, 90, 1-19. [CrossRef] KVM Based introspection approach to detect malware in cloud environment. P Mishra, I Verma, S Gupta, Kvminspector, 10.1016/j.jisa.2020.102460J. Inf. Secur. Appl. 51Mishra, P.; Verma, I.; Gupta, S. KVMInspector: KVM Based introspection approach to detect malware in cloud environment. J. Inf. Secur. Appl. 2020, 51, 102460. [CrossRef] A performance analysis of openstack open-source solution for IaaS cloud computing. V N Van, L M Chi, N Q Long, G N Nguyen, D N Le, Proceedings of the Second International Conference on Computer and Communication Technologies. the Second International Conference on Computer and Communication TechnologiesBerlin/Heidelberg, GermanySpringerVan, V.N.; Chi, L.M.; Long, N.Q.; Nguyen, G.N.; Le, D.N. A performance analysis of openstack open-source solution for IaaS cloud computing. In Proceedings of the Second International Conference on Computer and Communication Technologies; Springer: Berlin/Heidelberg, Germany, 2016; pp. 141-150. Design and Implementation of an Open Source Framework and Prototype for Named Data Networking-Based Edge Cloud Computing System. R Ullah, M A U Rehman, B S Kim, 10.1109/ACCESS.2019.2914067IEEE Access. 7Ullah, R.; Rehman, M.A.U.; Kim, B.S. Design and Implementation of an Open Source Framework and Prototype for Named Data Networking-Based Edge Cloud Computing System. IEEE Access 2019, 7, 57741-57759. [CrossRef] Remote Monitoring and Online Testing of Machine Tools for Fault Diagnosis and Maintenance Using MTComm in a Cyber-Physical Manufacturing Cloud. Al Sunny, S N Liu, X Shahriar, M R , Proceedings of the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). 
the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD)San Francisco, CA, USA, 2-7Al Sunny, S.N.; Liu, X.; Shahriar, M.R. Remote Monitoring and Online Testing of Machine Tools for Fault Diagnosis and Maintenance Using MTComm in a Cyber-Physical Manufacturing Cloud. In Proceedings of the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), San Francisco, CA, USA, 2-7 July 2018; pp. 532-539. Hyperdrive: A flexible cloud testbed for research and education. A Sanatinia, S Deshpande, A Munshi, D Kohlbrenner, M Yessaillian, S Symonds, A Chan, G Noubir, Proceedings of the 2017 IEEE International Symposium on Technologies for Homeland Security (HST). the 2017 IEEE International Symposium on Technologies for Homeland Security (HST)Waltham, MA, USASanatinia, A.; Deshpande, S.; Munshi, A.; Kohlbrenner, D.; Yessaillian, M.; Symonds, S.; Chan, A.; Noubir, G. Hyperdrive: A flexible cloud testbed for research and education. In Proceedings of the 2017 IEEE International Symposium on Technologies for Homeland Security (HST), Waltham, MA, USA, 25-26 April 2017; pp. 1-4. Design Considerations for Cyber Security Testbeds: A Case Study on a Cyber Security Testbed for Education. M Frank, M Leitner, T Pahi, Proceedings of the 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress. the 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology CongressOrlando, FL, USA, 6-10Frank, M.; Leitner, M.; Pahi, T. Design Considerations for Cyber Security Testbeds: A Case Study on a Cyber Security Testbed for Education. 
In Proceedings of the 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), Orlando, FL, USA, 6-10 November 2017; pp. 38-46. Cyber-physical systems testbed based on cloud computing and software defined network. H Gao, Y Peng, K Jia, Z Wen, H Li, Proceedings of the 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). the 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP)Adelaide, SA, AustraliaGao, H.; Peng, Y.; Jia, K.; Wen, Z.; Li, H. Cyber-physical systems testbed based on cloud computing and software defined network. In Proceedings of the 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, Australia, 23-25 September 2015; pp. 337-340. Time Inference Attacks on Software Defined Networks: Challenges and Countermeasures. S Khorsandroo, A S Tosun, Proceedings of the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD)San Francisco, CA, USA, 2-7Khorsandroo, S.; Tosun, A.S. Time Inference Attacks on Software Defined Networks: Challenges and Countermeasures. In Proceedings of the 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), San Francisco, CA, USA, 2-7 July 2018; pp. 342-349. Testbed for security orchestration in a network function virtualization environment. A Kalliola, S Lal, K Ahola, I Oliver, Y Miche, S Holtmanns, Proceedings of the 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). 
the 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)Berlin, Germany, 6-8Kalliola, A.; Lal, S.; Ahola, K.; Oliver, I.; Miche, Y.; Holtmanns, S. Testbed for security orchestration in a network function virtualization environment. In Proceedings of the 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Berlin, Germany, 6-8 November 2017; pp. 1-4.
DOI: 10.1109/iccv48922.2021.01400
arXiv: 2104.00820
LatentCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions

Oguz Kaan Yüksel; Enis Simsar (Technical University of Munich; Bogaziçi University); Ezgi Gülperi Er (Bogaziçi University); Pinar Yanardag (Bogaziçi University)

Figure 1: Interpretable directions discovered in StyleGAN2 [12] and BigGAN [2]: [StyleGAN2] Smile on FFHQ; [StyleGAN2] Car type on LSUN Cars; [StyleGAN2] Fluffiness on LSUN Cats; [StyleGAN2] Window on LSUN Bedrooms; [BigGAN] Background removal on ImageNet Bulbul; [BigGAN] Background removal transferred from Bulbul. Left and right images of each triplet are obtained by moving the latent code of the image, shown in the middle, towards negative and positive directions, respectively. Our directions are transferable, such as background removal learned on the Bulbul class.

Abstract

Recent research has shown that it is possible to find interpretable directions in the latent spaces of pre-trained Generative Adversarial Networks (GANs). These directions enable controllable image generation and support a wide range of semantic editing operations, such as zoom or rotation. The discovery of such directions is often done in a supervised or semi-supervised manner and requires manual annotations, which limits their use in practice. In comparison, unsupervised discovery allows finding subtle directions that are difficult to detect a priori. In this work, we propose a contrastive learning-based approach to discover semantic directions in the latent space of pre-trained GANs in a self-supervised manner. Our approach finds semantically meaningful dimensions comparable with state-of-the-art methods.

† Equal contribution. Author ordering determined by a coin flip.

Introduction

Generative Adversarial Networks (GANs) [7] are powerful image synthesis models that have revolutionized generative modeling in computer vision.
Due to their success in synthesizing high-quality images, they are widely used for various visual tasks, including image generation [38], image manipulation [35], de-noising [34, 15], upscaling image resolution [30], and domain translation [39]. Until recently, GAN models have generally been interpreted as black-box models, without the ability to control the generation of images.

Some degree of control can be achieved by training conditional models such as [17] and changing conditions at generation time. Another approach is to design models that generate a more disentangled latent space, such as InfoGAN [4], where each latent dimension controls a particular attribute. However, these approaches require labels and provide only limited control, depending on the granularity of the available supervised information. Although some progress has been made, what knowledge GANs learn in the latent representation, and how these representations can be used to manipulate images, remains an open research question.

Early attempts to explicitly control the underlying generation process of GANs include simple approaches such as modifying the latent code of images [22] or interpolating latent vectors [11]. Recently, several approaches have been proposed to explore the structure of the latent space of GANs in a more principled way [10, 26, 9, 33, 21]. Most of these works discover domain-agnostic interpretable directions such as zoom, rotation, or translation, while others find domain-specific directions such as changing gender, age, or expression on facial images. Typically, such methods either identify or optimize for directions and then shift the latent code along these directions to increase or decrease the target semantics in the image.

In this paper, we introduce LatentCLR, an optimization-based approach that uses a self-supervised contrastive objective to find interpretable directions in GANs.
In particular, we use the differences caused by an edit operation on the feature activations to optimize the identifiability of each direction. Our contributions are as follows:

• We propose to use contrastive learning on feature divergences to discover interpretable directions in the latent space of pre-trained GAN models such as StyleGAN2 and BigGAN.

• We show that our method can find distinct and fine-grained directions on a variety of datasets, and that the obtained directions are highly transferable between ImageNet [24] classes.

• We make our implementation publicly available to encourage further research in this area: https://github.com/catlab-team/latentclr

The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 introduces our contrastive framework. Section 4 presents our quantitative and qualitative results. Section 5 discusses the limitations of our work and Section 6 concludes the paper.

Related Work

In this section, we introduce generative adversarial networks and discuss latent space manipulation methods.

Generative Adversarial Networks

Generative Adversarial Networks (GANs) consist of a generator and a discriminator for mapping the real world to the generative space [7]. The discriminator part of the network tries to detect whether images are from the training dataset or synthetic, while the generative part tries to generate images that are similar to the dataset. StyleGAN [11] and StyleGAN2 are among the popular GAN approaches that are capable of generating high-quality images. They use a mapping network consisting of an 8-layer perceptron that aims to map the input latent code to an intermediate latent space. Another popular GAN model is BigGAN [2], a large-scale model trained on ImageNet. Similar to StyleGAN2, it also makes use of intermediate layers by using the latent vector as input, also called skip-z inputs, as well as a class vector.
Due to its conditional architecture, it can generate images in a variety of categories from ImageNet. In this paper, we work with pre-trained StyleGAN2 and BigGAN models.

Latent Space Navigation

Recently, several strategies have been proposed to manipulate the latent structure of pre-trained GANs. These methods manipulate images in different ways by editing the latent code and can be divided into two groups.

Supervised Setting. Supervised approaches typically use pre-trained classifiers to guide optimization-based learning to discover interpretable directions that specifically manipulate the properties of interest. InterfaceGAN [26] is a supervised approach that benefits from labeled data including gender, facial expression, and age. It trains binary Support Vector Machines (SVM) [19] on each label and interprets the normal vectors of the obtained hyperplanes as latent directions. GANalyze [6] finds directions for cognitive image properties for a pre-trained BigGAN model using an externally trained assessor function. Feedback from the assessor guides the optimization process, and the resulting optimal direction allows manipulation of the desired cognitive attributes. StyleFlow [1] uses attribute-conditioned continuous normalizing flows that use labels to find edit directions in the latent space of GANs.

Unsupervised Setting. One of the unsupervised works, proposed by [33], discovers meaningful directions using a classifier-based approach. Given a particular manipulation, the classifier tries to detect which particular direction is applied. At the end of the optimization process, the method learns disentangled directions. Ganspace [9] is a sampling-based unsupervised method where latent vectors are randomly selected from the intermediate layers of BigGAN and StyleGAN models. They then propose to use Principal Component Analysis (PCA) [36] to find principal components that are interpreted as semantically meaningful directions.
The principal components lead to a variety of useful manipulations, including zoom or rotation in BigGAN, or changing gender, hair color, or age in StyleGAN models. SeFa [27] follows a related approach, using a closed-form solution that specifically optimizes the intermediate weight matrix of the pre-trained GAN model. They obtain interpretable directions in the latent space by computing the eigenvectors of the first projection matrix and selecting the eigenvectors with the largest eigenvalues. A different closed-form solution is proposed by [29] that discovers directions without optimization. Another work, proposed by [10], exploits task-specific edit functions. They start by applying an editing operation to the original image, e.g., zoom, and minimize the distance between the original image and the edited image to learn a direction that leads to the desired editing operation. [32] provides directions by using a GAN inversion model and aims to change the image in a particular direction without changing the remaining properties. Instead of working on latent codes, [5] uses the space of generator parameters to discover semantically meaningful directions. [18] uses a fixed decoder and trains an encoder to decouple the processes of disentanglement and synthesis. [13] uses post-hoc disentanglement that requires little to no hyperparameters. [16] is a variant of InfoGAN that uses a contrastive regularizer and aims to make the elements of the latent code set clearly identifiable from one another. A concurrent work to ours is proposed by [23], which uses an entropy-based domination loss and a hard negatives flipping strategy to achieve disentanglement.

Methodology

In this section, we first introduce preliminaries of contrastive learning and then discuss details of our method.

Contrastive Learning

Contrastive learning has recently become popular due to leading state-of-the-art results in various unsupervised representation learning tasks.
It aims to learn representations by contrasting positive pairs against negative pairs [8] and is used in various computer vision tasks, including data augmentation [3, 20] and diverse scene generation [31]. The core idea of contrastive learning is to pull the representations of similar pairs near and push dissimilar pairs far apart. In this work, we follow an approach similar to the SimCLR framework [3] for contrastive learning.

SimCLR consists of four main components: a stochastic data augmentation method that generates positive pairs (x, x⁺), an encoding network f that extracts representation vectors out of augmented samples, a small projector head g that maps representations to the loss space, and a contrastive loss function that enforces the separation between positive and negative pairs. Given a random mini-batch of N samples, SimCLR generates N positive pairs using the specified data augmentation method. For all positive pairs, the remaining 2(N − 1) augmented samples are treated as negative examples. Let h_i = f(x_i) be the representations of all 2N samples and z_i = g(h_i) be the projections of these representations. Then, SimCLR considers the average of the NT-Xent loss [28, 20] over all positive pairs (x_i, x_j):

    \ell(x_i, x_j) = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}    (1)

where \mathrm{sim}(u, v) = u^\top v / (\|u\| \|v\|) is the cosine similarity function, \mathbb{1}_{[k \neq i]} \in \{0, 1\} is an indicator function that takes the value 1 only when k ≠ i, and τ is the temperature parameter. The two networks f and g are trained together. Intuitively, g learns a mapping to a space where cosine similarity represents semantic similarity, and the NT-Xent objective encourages identifiability of positive pairs among all other negative examples. This, in turn, forces f to learn representations that are invariant to the given data augmentations, up to a nonlinear mapping and cosine similarity.
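For concreteness, Eq. (1) can be evaluated for a whole batch in a few lines of NumPy. The sketch below is ours, not SimCLR's reference code; it assumes the 2N projections are stacked row-wise with rows 2i and 2i+1 forming a positive pair, and the helper name `nt_xent` is our own.

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss over 2N projections; rows 2i and 2i+1 form a positive pair."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit rows -> dot product = cosine sim
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # drop k == i terms (exp -> 0)
    log_den = np.log(np.exp(sim).sum(axis=1))          # log of the denominator of Eq. (1)
    idx = np.arange(len(z))
    pos = idx ^ 1                                      # index of each row's positive partner
    return float(np.mean(log_den - sim[idx, pos]))     # mean of -log softmax over all pairs
```

As a sanity check, a batch whose positive pairs are perfectly aligned yields a much lower loss than a batch whose pairs point in different directions.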
Latent Contrastive Learning (LatentCLR)

Consider a pre-trained GAN, expressed as a mapping function G : Z → X, where Z is the latent space, usually associated with a prior distribution such as the multivariate Gaussian distribution, and X is the target image domain. Given a latent code z and its generated image x = G(z), we look for edit directions Δz such that the image x' = G(z + Δz) has semantically meaningful changes with respect to x while preserving the identity of x. Similar to [9, 27, 33], we limit ourselves to the unsupervised setting, where we aim to identify such edit directions without external supervision, as in [6, 26, 10].

Our intuition is to optimize an identifiability-based heuristic, similar to [33], in an intermediate representation space of a pre-trained GAN to find a diverse set of interpretable directions. We search for edit directions Δz_1, ..., Δz_K, K > 1, that have distinguishable effects in the target representation layer. To this end, we calculate the differences in representations induced by each direction and use a contrastive-learning objective to maximize the identifiability of the directions. More specifically, we generalize directions with potentially more expressive conditional mappings called direction models. Our final approach then consists of (i) concurrent direction models that apply edits to the given latent codes, (ii) a target feature layer f of a pre-trained GAN G that will be used to evaluate the direction models, and (iii) a contrastive learning objective for measuring the identifiability of each direction model. See Figure 2 for a high-level visualization.

Direction models. The direction model is a mapping D : Z × R → Z that takes latent codes along with a desired edit magnitude and outputs edited latent codes, i.e. D : (z, α) → z + Δz, where Δz ∝ α. We consider three alternative methods for choosing the direction model, global, linear, and nonlinear, which are defined as follows:

• Global.
We learn a fixed direction θ, independent of the latent code z:

    D(z, α) = z + α · θ/‖θ‖

• Linear. We learn a matrix M that outputs a direction conditioned on the latent code z, a linear dependency:

    D(z, α) = z + α · Mz/‖Mz‖

• Nonlinear. We learn a multi-layer perceptron, represented by NN, to capture an arbitrarily complex dependency between the direction and the latent code:

    D(z, α) = z + α · NN(z)/‖NN(z)‖

Note that for all options we apply ℓ2 normalization, and thus the magnitudes given by α correspond to the direct ℓ2 distances from the latent code. The first option, Global, is in principle the most limited, since it can only find fixed directions and thus edits images without considering the latent code z. However, we note that it is still able to capture common directions such as zoom, rotation, or background removal on BigGAN. The second option, Linear, is able to generate conditional directions given the latent code z, but is still limited in capturing finer-grained directions. The third option is an extension of the Linear direction model, where we use a neural network that models the dependency between the direction and the given latent code.

Target feature differences. For each latent code z_i, 1 ≤ i ≤ N, in the mini-batch of size N, we compute K distinct edited latent codes: z_i^k = D(z_i, α). Then, we calculate the corresponding intermediate feature representations, h_i^k = G_f(z_i^k), where G_f is the feed-forward pass of the GAN up to the target layer f. Next, we compute the feature divergences w.r.t. the original latent code, f_i^k = h_i^k − G_f(z_i).

Objective function. For each edited latent code z_i^k, we define the following loss:

    \ell(z_i^k) = -\log \frac{\sum_{j=1}^{N} \mathbb{1}_{[j \neq i]} \exp(\mathrm{sim}(f_i^k, f_j^k)/\tau)}{\sum_{j=1}^{N} \sum_{l=1}^{K} \mathbb{1}_{[l \neq k]} \exp(\mathrm{sim}(f_i^k, f_j^l)/\tau)}

The intuition behind our objective function is as follows.
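Before turning to that intuition, the pieces above — direction models, target feature differences, and the loss ℓ(z_i^k) — can be condensed into a minimal NumPy sketch. This is an illustrative paraphrase, not the authors' implementation: `G_f` stands in for the truncated generator G_f and is passed as a callable, only the Global direction model is shown, and all names (`edit`, `divergences`, `loss`, `thetas`) are our own.

```python
import numpy as np

def edit(z, alpha, theta):
    # Global direction model: D(z, alpha) = z + alpha * theta / ||theta||
    return z + alpha * theta / np.linalg.norm(theta)

def divergences(G_f, Z, thetas, alpha):
    # Target feature differences: f[i, k] = G_f(D(z_i, alpha; theta_k)) - G_f(z_i)
    base = [G_f(z) for z in Z]
    return np.stack([[G_f(edit(z, alpha, th)) - b for th in thetas]
                     for z, b in zip(Z, base)])

def cos_sim(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def loss(f, i, k, tau=0.5):
    # Contrastive objective: divergences produced by the same direction k are
    # positives (numerator); divergences from any other direction l != k are
    # negatives (denominator).
    N, K = f.shape[:2]
    num = sum(np.exp(cos_sim(f[i, k], f[j, k]) / tau) for j in range(N) if j != i)
    den = sum(np.exp(cos_sim(f[i, k], f[j, l]) / tau)
              for j in range(N) for l in range(K) if l != k)
    return -np.log(num / den)
```

With an identity feature map and orthogonal directions, same-direction divergences align perfectly while cross-direction similarities vanish, so the loss is strongly negative; in practice G_f is an intermediate GAN layer and the direction parameters are trained by minimizing this loss.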
All feature divergences obtained with the same latent edit operation 1 ≤ k ≤ K, i.e., each of f_1^k, f_2^k, ..., f_N^k, are considered positive pairs and contribute to the numerator. All other pairs, obtained with a different edit operation (i.e., f_i^k and f_j^l with l ≠ k), are treated as negative pairs and contribute to the denominator.

Utilizing Layer-wise Styles. Ganspace [9] discovered that the layer-wise structure of StyleGAN2 and BigGAN models can be used for fine-grained editing. By applying their directions only to a limited set of layers at test time, they achieve less entanglement in editing and superior granularity. SeFa [27] can find more detailed directions by concatenating weight matrices and identifying eigenvectors. In contrast to Ganspace, our method can consider such layer-wise structure at training time. In contrast to SeFa, our method can additionally fuse the effects of all selected layers into the target feature layer due to its flexible optimization-based objective.

Experiments

We evaluate the proposed method for detecting semantically meaningful directions using different models and datasets. We apply the proposed model to BigGAN and StyleGAN2 on a wide range of datasets, including human faces (FFHQ) [11], the LSUN Cats, Cars, Bedrooms, Church, and Horse datasets [37], and ImageNet. We also compare our method with state-of-the-art unsupervised methods [9, 27] and conduct several qualitative and quantitative experiments to demonstrate the effectiveness of our approach. Next, we discuss our experimental setup and then present results on BigGAN and StyleGAN2 models.

Choice of K. To learn K different directions, we use K copies of the same direction model. We observe that using too many directions leads to repetitive directions, an observation similar to that made by [33]. For BigGAN, we used K = 32 directions, since the latent space is 128-dimensional and most interpretable directions such as zoom, rotation, or translation can be obtained with a relatively small number of directions.

Layers.
Layers. To avoid any bias between the competing methods Ganspace and SeFa, we use the same set of StyleGAN2 layers as Ganspace. We also note that slight differences in the re-scoring analysis or in the visuals might be caused by applying different magnitudes of change across methods, as the magnitudes are not directly convertible between methods. To minimize this effect, we use the same sigma values for Ganspace as specified for each direction in their public repository, and {−3, +3} for SeFa, as provided in the official implementation.

Results on BigGAN

We evaluate our approach using a pre-trained BigGAN model conditionally trained on 1000 ImageNet classes. We trained our model on an arbitrary class, Bulbul, and obtained K = 32 directions.

Qualitative Results. Our visual analysis shows that our model is able to detect several semantically meaningful directions, such as zoom, rotation, and contrast, as well as some finer-grained features such as background removal, sitting, or green background (see Figure 1 and Figure 3 (a)). As can be seen from the figures, our method is able to manipulate the original image (labelled α = 0) by shifting the latent code towards the interpretable direction (increasing α) or backwards (decreasing α).

Transferability of directions. After verifying that the directions obtained for the class Bulbul are capable of manipulating the latent codes along multiple semantic directions, we investigated how transferable the discovered directions are to other ImageNet classes. Our visual analysis shows that the directions learned from the Bulbul class are applicable to a variety of ImageNet classes: they are able to zoom a Trenchcoat, rotate a Goose, apply contrast to Volcano, and add greenness to Castle classes (see Figure 3 (b)), as well as remove the background from a Teapot object (see Figure 1).
An interesting transferred direction is Sitting (see Figure 3 (a)), which manipulates the latent code so that the bird in the Bulbul class sits on a tree branch when we increase α. We find that this direction, when applied to the class Bulldog, also causes the dog to stand up or sit down depending on whether we move in the positive or negative direction (see Figure 3 (b)).

Diversity of the directions. Next, we investigate whether we can discover unique directions when we train our model on different ImageNet classes. Figure 4 (a) shows some directions discovered by our model, such as adding a tongue in the Husky class, changing the time of day in the Barn class, adding flowers in the Bulbul class, adding lettuce in the Cheeseburger class, or changing the shape in the Necklace class.

Comparison with other methods. We visually compare the directions obtained with our method to Ganspace using the Husky class (see Figure 4 (b)). For each direction in Ganspace, we use the parameters given in the open-source implementation (https://github.com/harskish/ganspace). For both methods, we use the same initial image (represented by α = 0) and move the latent code towards the direction (as α increases) and away from it (as α decreases). We used the original α settings (i.e., the sigma settings) for each direction as provided in the Ganspace implementation, to avoid any bias that may be caused by tuning the parameter. We find that both methods achieve similar manipulations for the zoom, rotation, and background change directions, while our method causes less entanglement. For example, we note that Ganspace tends to increase the tongue with rotation or add background objects with increasing zoom.

User Study. To understand how well the directions found by our method match human perception, we conducted a user study on Amazon's Mechanical Turk platform, where each participant is shown 10 images randomly selected out of a set of 100 randomly generated images from BigGAN's Bulbul class.
Participants are shown the original image in the center and the −α and +α edits on the left and right sides, respectively. Following the same approach as [27], we ask n = 100 users the following questions: "Question 1: Do you think there is an obvious content change in the left and right images compared to the one in the middle?" and "Question 2: Do you think the change in the left and right images compared to the one in the middle is semantically meaningful?". Each question is associated with Yes/Maybe/No options, and the order of the questions is randomized. Our user study shows that for Question 1, participants answered "Yes" for 17.43 directions on average and "Maybe" for 7.43.

Results on StyleGAN2

We apply our method on StyleGAN2 to a wide range of datasets and compare our results with the state-of-the-art unsupervised methods Ganspace and SeFa.

Qualitative Results. First, we visually examined the directions found by our method on several datasets, including the FFHQ, LSUN Cars, Cats, Bedroom, Church, and Horse datasets (see Figure 1 and Figure 5 (b)). Our method is capable of discovering several fine-grained directions, such as changing the car type or scenery in LSUN Cars, changing the breed or adding fur in LSUN Cats, adding windows or turning on lights in LSUN Bedrooms, adding tower details to churches in LSUN Church, and adding riders in LSUN Horse.

Comparison with other methods. We compare how the directions found on FFHQ differ across methods. Figure 5 (a) shows a visual comparison of several directions found in common by all methods, including Smile, Lipstick, Elderly, Curly Hair, and Young. As can be seen from the figures, all methods perform similarly and are able to manipulate the images towards the desired attributes. Figure 6 (a) illustrates various directions, including Race, Eyeglass, and Pose. Figure 6 (b) illustrates three different smile directions discovered by our non-linear model.

Re-scoring analysis.
To understand how our method compares to the competitors in quantitative terms, we performed a re-scoring analysis [27] using attribute predictors, to determine whether the manipulations changed the images towards the desired attributes. We used the attribute predictors released with StyleGAN2 for the directions Smile and Lipstick, as these are the only two attributes for which a predictor is available and which are simultaneously found in common by all methods. For the Age direction, we used an off-the-shelf age predictor [25]. Table 1 shows the results of the re-scoring analysis for three attributes, Age, Lipstick, and Smile, in the negative (labelled ↓) and positive (labelled ↑) directions. To see how the scores change, 500 images are randomly generated for each property. The average predictor score for the Lipstick property is 0.43 ±0.5, for Age 0.27 ±0.1, and for Smile 0.74 ±0.43. Moving the latent codes in the negative direction, we find that our method, as well as the Ganspace and SeFa methods, decreases the scores in similar ranges for the Age and Lipstick properties. We find that both our method and Ganspace are able to significantly reduce the score (from 0.74 to 0.11 and 0.05, respectively) when moving in the negative direction for the Smile property, while SeFa reduces the score from 0.74 to 0.50 on average. When we move the latent codes in the positive direction, all methods achieve comparable results.

Limitations

Our method takes a pre-trained GAN model as input, so it is limited to manipulating GAN-generated images. However, it can be extended to real images by using GAN inversion methods [40] to encode the real images into the latent space. Like any image synthesis tool, our method also poses concerns and dangers in terms of misuse, as it can be applied to images of people or faces for malicious purposes, as discussed in [14].
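The re-scoring protocol above reduces to comparing mean ± std predictor scores before and after an edit; a minimal sketch, where the predictor, images, and function name are placeholders of ours:

```python
from statistics import mean, stdev

def rescore(predictor, originals, edited):
    # Run an attribute predictor over original and edited images and return
    # (mean, std) score pairs, in the spirit of the entries reported in Table 1.
    before = [predictor(img) for img in originals]
    after = [predictor(img) for img in edited]
    stats = lambda s: (mean(s), stdev(s) if len(s) > 1 else 0.0)
    return stats(before), stats(after)
```

For example, with an identity "predictor" over raw scores, `rescore(lambda x: x, [0.2, 0.4, 0.6], [0.8, 1.0, 0.6])` reports how much the edit shifted the mean score.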
Conclusion

In this study, we propose a framework that uses contrastive learning to learn directions without supervision. Instead of discovering fixed directions, as in [33, 9, 27], our method can discover non-linear directions in pre-trained StyleGAN2 and BigGAN models and leads to multiple distinct, semantically meaningful directions that are highly transferable. We demonstrate the effectiveness of our approach on a variety of models and datasets, and compare it to state-of-the-art unsupervised methods. We make our implementation available at https://github.com/catlab-team/latentclr.

Figure 2: Illustration of LatentCLR. First, each latent code is passed through the direction models (denoted D_k) and up to a target feature layer of the GAN (denoted G_f) to obtain intermediate representations of the edited codes. Then, the effect of each direction model is computed by subtracting the representation of the original latent code. Finally, pairs produced by the same model are considered positive, and all others negative, in a contrastive loss.

This can be viewed as a generalization of the NT-Xent loss (Eq. 1), where we have N-tuples of groups, one for each direction model. With this generalized contrastive loss, we enforce latent edit operations to have orthogonal effects on the features.

Figure 3: (a) Directions for general image-editing operations, such as zoom or rotation, discovered from the ImageNet Bulbul class, where we shift the latent code in a particular direction with increasing or decreasing α. (b) Directions transferred from the Bulbul class to various other ImageNet classes.

For the BigGAN experiments, we choose batch size = 16, K = 32 directions, output resolution = 512, truncation = 0.4, and feature layer = generator.layers.4, and we train the models for 3 epochs (each epoch comprising 100k iterations), which takes about 20 minutes.
For the StyleGAN2 experiments, we choose batch size = 8, K = 100 directions, truncation = 0.7, and feature layer = conv1, and we train the StyleGAN2 models for 5 epochs (each epoch corresponding to 10k iterations), which takes about 12 minutes. The direction models use 1-3 dense layers, with the number of units matching the latent-space dimensionality (128, 256, or 512), together with ReLU activations and batch normalization. For our experiments, we use the PyTorch framework and two NVIDIA Titan RTX GPUs.

Figure 4: (a) Class-specific directions discovered by our method in several ImageNet classes with the BigGAN model. (b) A comparison of the rotate, zoom, and background-change directions between our method and Ganspace.

Figure 5: (a) Comparison of manipulation results on the FFHQ dataset with the Ganspace and SeFa methods. The leftmost image is the original, while images denoted ↑ and ↓ show the edit moved in the positive or negative direction. (b) Directions discovered by our method on various LSUN [37] datasets.

Figure 6: Additional directions discovered by our method on the FFHQ dataset with StyleGAN2 (left). Different types of smile directions (denoted SMILE 1-3) discovered by the non-linear approach (right).

Participants answered "No" for 5.75 directions on average. For Question 2, we got "Yes" for 14.37 directions on average, "Maybe" for 10.03, and "No" for 6.21. These results indicate that, out of K = 32 directions, participants found 82% semantically meaningful and 80.59% to contain an obvious content change.
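As a concrete illustration of the direction-model architecture described in the setup above (dense layers with ReLU and batch normalization, with an ℓ2-normalized output so that α sets the step size), one might write, in PyTorch; the class name and the choice of two layers are ours, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearDirection(nn.Module):
    """One non-linear direction model: D(z, a) = z + a * NN(z) / ||NN(z)||."""

    def __init__(self, dim=512):
        super().__init__()
        # Two dense layers with batch norm and ReLU, as a sketch of the
        # "1-3 dense layers" setup; the exact depth is illustrative.
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.BatchNorm1d(dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, z, alpha):
        d = F.normalize(self.net(z), dim=-1)  # unit step; alpha is the l2 distance
        return z + alpha * d
```

K copies of such a module (e.g., held in an `nn.ModuleList`) would give the K learned directions.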
Table 1: Re-scoring analysis of Ganspace, SeFa, and our method, comparing the Smile, Age, and Lipstick attributes on the FFHQ dataset with StyleGAN2.

Model      | Ganspace    | SeFa        | Ours
↑ Smile    | 0.99 ±0.11  | 0.89 ±0.31  | 0.99 ±0.11
↑ Age      | 0.32 ±0.12  | 0.38 ±0.15  | 0.42 ±0.13
↑ Lipstick | 0.58 ±0.49  | 0.55 ±0.49  | 0.66 ±0.47
↓ Smile    | 0.05 ±0.21  | 0.50 ±0.50  | 0.11 ±0.32
↓ Age      | 0.23 ±0.08  | 0.23 ±0.06  | 0.23 ±0.07
↓ Lipstick | 0.35 ±0.47  | 0.35 ±0.48  | 0.36 ±0.48

Acknowledgments

This publication has been produced benefiting from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118c321). We also acknowledge the support of NVIDIA Corporation through the donation of the TITAN X GPU and GCP research credits from Google. We thank Irem Simsar for proof-reading our paper.

References

[1] Rameen Abdal, Peihao Zhu, Niloy J. Mitra, and Peter Wonka. StyleFlow: Attribute-conditioned exploration of StyleGAN-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (TOG), 40(3):1-21, 2021.
[2] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018.
[3] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
[4] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
[5] Anton Cherepkov, Andrey Voynov, and Artem Babenko. Navigating the GAN parameter space for semantic image editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3671-3680, 2021.
[6] Lore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. GANalyze: Toward visual definitions of cognitive image properties. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5744-5753, 2019.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672-2680. Curran Associates, Inc., 2014.
[8] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE, 2006.
[9] Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. GANSpace: Discovering interpretable GAN controls. arXiv preprint arXiv:2004.02546, 2020.
[10] Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. arXiv preprint arXiv:1907.07171, 2019.
[11] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. CoRR, abs/1812.04948, 2018.
[12] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.
[13] Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, and Artem Babenko. On disentangled representations extracted from pretrained GANs. 2020.
[14] Pavel Korshunov and Sébastien Marcel. DeepFakes: a new threat to face recognition? Assessment and detection. arXiv preprint arXiv:1812.08685, 2018.
[15] Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K. Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, and Xiaochun Cao. Single image deraining: A comprehensive benchmark analysis. 2019.
[16] Zinan Lin, Kiran Thekumparampil, Giulia Fanti, and Sewoong Oh. InfoGAN-CR and ModelCentrality: Self-supervised model training and selection for disentangling GANs. In International Conference on Machine Learning, pages 6127-6139. PMLR, 2020.
[17] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[18] Yotam Nitzan, Amit Bermano, Yangyan Li, and Daniel Cohen-Or. Face identity disentanglement via latent space mapping. arXiv preprint arXiv:2005.07728, 2020.
[19] William S. Noble. What is a support vector machine? Nature Biotechnology, 24(12):1565-1567, 2006.
[20] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[21] Antoine Plumerault, Hervé Le Borgne, and Céline Hudelot. Controlling generative models with continuous factors of variations. arXiv preprint arXiv:2001.10238, 2020.
[22] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[23] Xuanchi Ren, Tao Yang, Yuwang Wang, and Wenjun Zeng. Do generative models know disentanglement? Contrastive learning is all you need. arXiv preprint arXiv:2102.10543, 2021.
[24] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
[25] Sefik Ilkin Serengil and Alper Ozpinar. LightFace: A hybrid deep face recognition framework. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), pages 1-5. IEEE, 2020.
[26] Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. InterFaceGAN: Interpreting the disentangled face representation learned by GANs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[27] Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in GANs. arXiv preprint arXiv:2007.06600, 2020.
[28] Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 1857-1865, 2016.
[29] Nurit Spingarn-Eliezer, Ron Banner, and Tomer Michaeli. GAN "steerability" without optimization. arXiv preprint arXiv:2012.05328, 2020.
[30] Wanjie Sun and Zhenzhong Chen. Learned image downscaling for upscaling using content adaptive resampler. IEEE Transactions on Image Processing, 29:4027-4040, 2020.
[31] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
[32] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for StyleGAN image manipulation. ACM Transactions on Graphics (TOG), 40(4):1-14, 2021.
[33] Andrey Voynov and Artem Babenko. Unsupervised discovery of interpretable directions in the GAN latent space. In International Conference on Machine Learning, pages 9786-9796. PMLR, 2020.
[34] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson Lau. Spatial attentive single-image deraining with a high quality real rain dataset. 2019.
[35] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. 2017.
[36] Svante Wold, Kim Esbensen, and Paul Geladi. Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2(1-3):37-52, 1987.
[37] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[38] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN++: Realistic image synthesis with stacked generative adversarial networks. CoRR, abs/1710.10916, 2017.
[39] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR, abs/1703.10593, 2017.
[40] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-domain GAN inversion for real image editing. In European Conference on Computer Vision, pages 592-608. Springer, 2020.
General pseudo self-adjoint boundary conditions for a 1D KFG particle in a box

Salvatore De Vincenzo
The Institute for Fundamental Study (IF), Naresuan University, Phitsanulok 65000, Thailand

Abstract

We consider a 1D Klein-Fock-Gordon particle in a finite interval, or box. We construct for the first time the most general set of pseudo self-adjoint boundary conditions for the Hamiltonian operator that is present in the first-order-in-time 1D Klein-Fock-Gordon wave equation, or the 1D Feshbach-Villars wave equation. We show that this set depends on four real parameters and can be written in terms of the one-component wavefunction for the second-order-in-time 1D Klein-Fock-Gordon wave equation and its spatial derivative, both evaluated at the endpoints of the box. Certainly, we write the general set of pseudo self-adjoint boundary conditions also in terms of the two-component wavefunction for the 1D Feshbach-Villars wave equation and its spatial derivative, evaluated at the ends of the box; however, the set actually depends on these two column vectors each multiplied by the singular matrix that is present in the kinetic energy term of the Hamiltonian. As a consequence, we found that the two-component wavefunction for the 1D Feshbach-Villars equation and its spatial derivative do not necessarily satisfy the same boundary condition that these quantities satisfy when multiplied by the singular matrix. In any case, given a particular boundary condition for the one-component wavefunction of the standard 1D Klein-Fock-Gordon equation, and using the pair of relations that arise from the very definition of the two-component wavefunction for the 1D Feshbach-Villars equation, the respective boundary condition for the latter wavefunction and its derivative can be obtained. Our results can be extended to the problem of a 1D Klein-Fock-Gordon particle moving on a real line with a point interaction (or a hole) at one point.
doi: 10.1016/j.physo.2023.100151
arXiv: 2301.01565v3
(Dated: February 23, 2023; arXiv version of 20 Mar 2023)

PACS numbers: 03.65.-w, 03.65.Ca, 03.65.Db, 03.65.Pm
Keywords: 1D Klein-Fock-Gordon wave equation; 1D Feshbach-Villars wave equation; pseudo-Hermitian operator; pseudo self-adjoint operator; boundary conditions

I. INTRODUCTION

As is well known, the three-dimensional (3D) Klein-Fock-Gordon (KFG) wave equation in its standard form plays an important role in relativistic quantum mechanics [1-5]. As an example, when potentials fail to create particle-antiparticle pairs, the 3D KFG wave equation can be used to describe spin-zero particles, for example, the pion, a composite particle, and the Higgs boson, an apparently elementary particle. Clearly, this equation is one of the most widely used in relativistic quantum mechanics. Naturally, the search for exact solutions to this equation for specific and representative potentials has always been of interest, mainly because these solutions can be useful for modeling real physical processes. In the study of exactly solvable problems, various methods have been introduced and developed. Examples include supersymmetric quantum mechanics (SUSY QM) and/or the factorization method [6-10] and the Nikiforov-Uvarov (NU) method [8, 9, 11], among others [12-15].
It is worth mentioning that in recent years, new computational schemes or methods have been applied to obtain solutions of nonlinear partial differential equations that are related in some way to the KFG equation. See, for example, Refs. [16][17][18] and references therein. In reviewing the literature on KFG theory, it is immediately apparent that the 3D KFG wave equation in Hamiltonian form, i.e., the so-called 3D Feshbach-Villars (FV) wave equation [19], has not received the same attention as the standard 3D KFG equation. Certainly, both equations are equivalent, and connecting their corresponding solutions seems to be straightforward. However, the 3D FV partial differential equation is first order in time and second order in space, that is, it includes a second-order Hamiltonian operator in the spatial derivative (for a nice discussion of the procedure used by Feshbach and Villars to obtain a linear equation in the time derivative, see Ref. [20]; for a brief and concise historical discussion of similar work, but prior to that of Feshbach and Villars, see again Ref. [20], specifically, the commentary written in its reference number 3, page 191). Similarly, the one-dimensional (1D) FV wave equation has also not received sufficient attention when considering problems within the KFG theory in (1+1) dimensions. Certainly, the 1D KFG equation in its standard form is much more popular. In this regard, there is an issue within the 1D KFG theory that has received practically no attention and that we can raise with the following questions: What are the boundary conditions that the 1D FV equation can support? Can general families of boundary conditions be written for this equation? Specifically, what are the appropriate boundary conditions for this equation in the problem of a 1D KFG particle inside an interval? For example, some unexpected boundary conditions for the solutions of the 1D FV wave equation in simple physical situations were presented in Refs. [21][22][23].
In general, the boundary conditions for the solutions of the second-order KFG equation in 3D and 1D appear to be similar to those supported by the corresponding Schrödinger wavefunction (see, for example, Refs. [21,[23][24][25]), but we do not have at our disposal a wave equation that could have boundary conditions similar to those of the 1D FV equation (the presence of a singular matrix in the kinetic energy term of the Hamiltonian has much to do with this). In general, the physically acceptable boundary conditions for a wave equation that is written in Hamiltonian form must ensure that the respective Hamiltonian operator retains its essential attribute, namely, that of being self-adjoint (if that is the case). In the case of the 1D FV equation, it is known that its Hamiltonian is a formally pseudo-Hermitian operator (or a formally pseudo self-adjoint operator) [2,4], and, in principle, we could find families of general boundary conditions that agree with the property of being a pseudo self-adjoint operator, i.e., not just formally. In fact, here, we show that indeed a general four-parameter family of boundary conditions can be found for the solutions of the 1D FV equation and that it is consistent with the latter property. Incidentally, to do this is essentially to specify the domain of the Hamiltonian and that of its generalized adjoint (as is done in the case of Hamiltonians that are self-adjoint in the standard way), but, in addition, these two domains must be equal, i.e., they must always contain the same boundary condition (once the four parameters are fixed). The article is organized as follows. In Section II, we begin by introducing the KFG equations in their standard and Hamiltonian versions and the relations linking their solutions. In addition, we introduce the pseudo inner product for the two-component solutions of the 1D FV equation and briefly discuss its relation to other distinctive inner products of quantum mechanics. 
In particular, we note that this pseudo inner product can also be considered the scalar product for the one-component solutions of the KFG equation in its standard form. Moreover, as might be expected, this pseudo inner product does not possess the property of positive definiteness but can be independent of time. Thus, the corresponding pseudo norm can be a constant, and because this implies that the probability current density takes the same value at each end of the box, the Hamiltonian for this problem can be a pseudo-Hermitian operator. In fact, the Hamiltonian is formally pseudo-Hermitian, and we find in this section a general four-parameter set of boundary conditions that ensures that it is indeed a pseudo-Hermitian operator. We write this set in terms of the one-component wavefunction for the 1D KFG wave equation and its spatial derivative, both evaluated at the ends of the interval. Here, we also consider the nonrelativistic approximation of the general set of boundary conditions, and the results support the idea that this set is indeed the most general. In Section III, we finally write the general set of boundary conditions in terms of the two-component column vector for the 1D FV wave equation and its spatial derivative, evaluated at the ends of the interval. To be precise, the set must be written in terms of the latter two column vectors each multiplied by the singular matrix that is present in the kinetic energy term of the Hamiltonian (remember that a singular matrix does not have an inverse). In Section IV (Appendix I), we check that the time derivative of the pseudo inner product of two solutions of the 1D FV equation in a nonzero electric potential, but expressed in terms of the respective solutions of the standard KFG equation in the same potential, is proportional to a term evaluated at the ends of the box that also does not depend on the potential, i.e., it is a boundary term. 
In Section V (Appendix II), we show that the Hamiltonian operator for a 1D KFG particle in a box is in fact a pseudo self-adjoint operator; that is, the general matrix boundary condition, i.e., the general set of boundary conditions, ensures that the domains of the Hamiltonian and its generalized adjoint are equal. From the results shown in this section, it follows that the boundary term that arose in Section IV (Appendix I) always vanishes (certainly, for any boundary condition included in the general family of boundary conditions); consequently, the value of the pseudo inner product in this problem is conserved. Finally, concluding remarks are presented in Section VI.

II. BOUNDARY CONDITIONS FOR THE 1D KFG PARTICLE IN A BOX I

Let us begin by writing the 1D KFG wave equation in Hamiltonian form,

$$ i\hbar\,\frac{\partial}{\partial t}\Psi = \hat{h}\Psi, \qquad (1) $$

where

$$ \hat{h} = -\frac{\hbar^{2}}{2m}(\hat{\tau}_{3}+i\hat{\tau}_{2})\frac{\partial^{2}}{\partial x^{2}} + mc^{2}\hat{\tau}_{3} + V(x)\hat{1}_{2}, \qquad (2) $$

is, let us say, the KFG Hamiltonian differential operator. Here, $\hat{\tau}_{3}=\hat{\sigma}_{z}$ and $\hat{\tau}_{2}=\hat{\sigma}_{y}$ are Pauli matrices and $V(x)\in\mathbb{R}$ is the external electric potential ($\hat{1}_{2}$ is the $2\times 2$ identity matrix). The (matrix) operator $\hat{h}$ acts on (complex) two-component column state vectors of the form $\Psi=\Psi(x,t)=[\,\psi_{1}(x,t)\;\;\psi_{2}(x,t)\,]^{\mathrm{T}}$ (the symbol $\mathrm{T}$ represents the transpose of a matrix). Equation (1) with $\hat{h}$ given in Eq. (2) is the 1D FV wave equation [2-4,19]. The 1D KFG wave equation in its standard form, or the second order in time KFG equation in one spatial dimension [1,5], is given by

$$ \left(i\hbar\frac{\partial}{\partial t}-V(x)\right)^{2}\psi = \left(-\hbar^{2}c^{2}\frac{\partial^{2}}{\partial x^{2}}+(mc^{2})^{2}\right)\psi, \qquad (3) $$

where $\psi=\psi(x,t)$ is a (complex) one-component state vector or one-component wavefunction. The relation between $\psi$ and $\Psi$ can be defined as follows:

$$ \Psi = \begin{bmatrix}\psi_{1}\\ \psi_{2}\end{bmatrix} = \frac{1}{2}\begin{bmatrix}\psi + i\tau\left(\dfrac{\partial}{\partial t}-\dfrac{V}{i\hbar}\right)\psi\\[4pt] \psi - i\tau\left(\dfrac{\partial}{\partial t}-\dfrac{V}{i\hbar}\right)\psi\end{bmatrix}, \qquad (4) $$

where $\tau\equiv\hbar/mc^{2}$. The Compton wavelength is precisely $\lambda_{\mathrm{C}}\equiv c\tau$; thus, $\tau$ is the time taken for a ray of light to travel the distance $\lambda_{\mathrm{C}}$. The expression given in Eq. (3) is fully equivalent to Eq. (1) (with $\hat{h}$ given in Eq. (2)) [2,3].
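As a quick consistency check of Eqs. (1)-(4) (our own illustration, not part of the original text), one can verify numerically that, for a free plane wave, the two-component vector built via Eq. (4) is an eigenvector of the momentum-space FV Hamiltonian with the relativistic energy $E=\sqrt{(\hbar ck)^{2}+(mc^{2})^{2}}$. We work in units $\hbar=m=c=1$; the function names below are ours.

```python
# Sketch (assumptions: hbar = m = c = 1, V = 0): a plane wave psi = exp(i(kx - Et))
# turns the FV Hamiltonian of Eq. (2) into the 2x2 matrix
# h(k) = (k^2/2)(tau3 + i*tau2) + tau3, and Eq. (4) gives psi1,2 = (1 ± E)/2 (psi = 1).
import math

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def h_of_k(k):
    """Momentum-space FV Hamiltonian for V = 0 in natural units."""
    a = k*k/2.0
    return [[a + 1.0, a], [-a, -a - 1.0]]

def fv_components(E):
    """Eq. (4) for a free plane wave with psi = 1: psi1 = (1+E)/2, psi2 = (1-E)/2."""
    return [(1.0 + E)/2.0, (1.0 - E)/2.0]

def check_dispersion(k):
    """Return the residual of h(k) Psi = E Psi with E = sqrt(k^2 + 1)."""
    E = math.sqrt(k*k + 1.0)
    Psi = fv_components(E)
    hPsi = matvec(h_of_k(k), Psi)
    return max(abs(hPsi[i] - E*Psi[i]) for i in range(2))

for k in (0.0, 0.3, 1.7, 5.0):
    assert check_dispersion(k) < 1e-12
```

The eigenvalue condition reproduces exactly $E^{2}=k^{2}+1$, i.e., the relativistic dispersion relation, which is the content of the equivalence between Eqs. (1) and (3).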
Note that, from Eq. (4), the solution $\psi$ of Eq. (3) depends only on the components of the column vector $\Psi$, namely,

$$ \psi = \psi_{1}+\psi_{2}. \qquad (5) $$

Additionally,

$$ \left(i\hbar\,\frac{\partial}{\partial t}\psi - V\psi\right)\frac{1}{mc^{2}} = \psi_{1}-\psi_{2}. \qquad (6) $$

Certainly, all the results we have presented so far are well known. Let us now consider a 1D KFG particle moving in the interval $x\in\Omega=[a,b]$, i.e., in a box. The corresponding Hamiltonian operator given in Eq. (2) acts on two-component column state vectors of the form $\Psi=[\,\psi_{1}\;\;\psi_{2}\,]^{\mathrm{T}}$ and $\Phi=[\,\phi_{1}\;\;\phi_{2}\,]^{\mathrm{T}}$, and the scalar product for these two state vectors must be defined as

$$ \langle\Psi,\Phi\rangle \equiv \int_{\Omega}dx\,\Psi^{\dagger}\hat{\tau}_{3}\Phi \qquad (7) $$

(the symbol $\dagger$ denotes the usual Hermitian conjugate, or the usual formal adjoint, of a matrix and an operator) [2-4,19]. Additionally, the square of the corresponding norm (or rather, pseudo norm) is $\|\Psi\|^{2}\equiv\langle\Psi,\Psi\rangle=\int_{\Omega}dx\,\varrho$, where $\varrho=\varrho(x,t)=\Psi^{\dagger}\hat{\tau}_{3}\Psi=|\psi_{1}|^{2}-|\psi_{2}|^{2}$ is the 1D KFG probability density. Certainly, $\varrho$ is not positive definite and calling it probability density is not absolutely correct (although it can be interpreted as a charge density) [2-4,19]. Note that the integral in (7) can also be identified with the usual scalar product in Dirac's theory in (1+1) dimensions, namely, $\langle\Psi,\Phi\rangle_{\mathrm{D}}\equiv\int_{\Omega}dx\,\Psi^{\dagger}\Phi$, which is an inner product on the Hilbert space of two-component square-integrable wavefunctions, $L^{2}(\Omega)\oplus L^{2}(\Omega)$; therefore,

$$ \langle\Psi,\Phi\rangle \equiv \langle\Psi,\hat{\tau}_{3}\Phi\rangle_{\mathrm{D}}, \qquad (8) $$

and $\langle\Psi,\Phi\rangle_{\mathrm{D}}=\langle\Psi,\hat{\tau}_{3}\Phi\rangle$. Because $\langle\Psi,\Psi\rangle$ can be a negative quantity, the scalar product in Eq. (7) is an indefinite (or improper) inner product, or a pseudo inner product, on an infinite-dimensional complex vector space. In general, such a vector space itself is not necessarily a Hilbert space. Similarly, writing $\Psi$ and $\Phi$ in the integrand in (7) in terms of their respective components, that is, using the relations that arise from Eq. (4) and other analogous relations for $\Phi$ (which are obtained from Eq.
(4) by making the replacements $\Psi\to\Phi$, $\psi_{1}\to\phi_{1}$, $\psi_{2}\to\phi_{2}$ and $\psi\to\phi$), we obtain

$$ \langle\Psi,\Phi\rangle = \frac{i\hbar}{2mc^{2}}\int_{\Omega}dx\left(\psi^{*}\phi_{t}-\psi_{t}^{*}\phi-\frac{2V}{i\hbar}\,\psi^{*}\phi\right) \qquad (9) $$

(where the asterisk $*$ denotes the complex conjugate, and $\psi_{t}\equiv\partial\psi/\partial t$, etc.), or also,

$$ \langle\Psi,\Phi\rangle = \frac{i\hbar}{2mc^{2}}\left[\,\langle\psi,\phi_{t}\rangle_{\mathrm{S}}-\langle\psi_{t},\phi\rangle_{\mathrm{S}}-\frac{2}{i\hbar}\langle\psi,V\phi\rangle_{\mathrm{S}}\,\right] \equiv \langle\psi,\phi\rangle_{\mathrm{KFG}}, \qquad (10) $$

where $\langle\psi,\phi\rangle_{\mathrm{KFG}}$ can be considered the scalar product for the one-component solutions of the 1D KFG equation in Eq. (3) (see Appendix I). Note that $\langle\,,\,\rangle_{\mathrm{S}}$ denotes the usual scalar product in the Schrödinger theory in one spatial dimension, namely, $\langle\psi,\phi\rangle_{\mathrm{S}}\equiv\int_{\Omega}dx\,\psi^{*}\phi$, which is an inner product on the Hilbert space of one-component square-integrable wavefunctions, $L^{2}(\Omega)$. Certainly, $\psi$ and $\psi_{t}$, and $\phi$, $V\phi$, and $\phi_{t}$, must belong to $L^{2}(\Omega)$ to ensure that $\langle\psi,\phi\rangle_{\mathrm{KFG}}$ exists [26]. It can be noted that there is an isomorphism between the vectorial space of the solutions $\psi$ of the standard 1D KFG equation for the corresponding 1D particle, namely,

$$ \left[\left(\partial_{t}-\frac{V}{i\hbar}\right)^{2}+\hat{d}\,\right]\psi = 0 \qquad (11) $$

(Eq. (3)), where $\hat{d}\equiv -c^{2}\partial_{xx}+\tau^{-2}$ ($\partial_{t}\equiv\partial/\partial t$ and $\partial_{xx}\equiv\partial^{2}/\partial x^{2}$), and that of the solutions $\Psi$ of Eq. (1) with $\hat{h}$ given in Eq. (2) [27]. In effect, a possible initial state vector, for example, at $t=0$, would have the form

$$ \Psi(0) = \begin{bmatrix}\psi_{1}(0)\\ \psi_{2}(0)\end{bmatrix} = \frac{1}{2}\begin{bmatrix}\psi(0)+i\tau\left(\psi_{t}(0)-\dfrac{V}{i\hbar}\psi(0)\right)\\[4pt] \psi(0)-i\tau\left(\psi_{t}(0)-\dfrac{V}{i\hbar}\psi(0)\right)\end{bmatrix}, \qquad (12) $$

that arises immediately from the relation given in Eq. (4). Thus, giving an initial state vector as $\Psi(0)$ is equivalent to providing the initial data for the solution vector $\psi$, namely, $\psi(0)$ and $\psi_{t}(0)$. Incidentally, the operators $\hat{d}$, which can act on the one-component state vectors $\psi$, and $\hat{h}$, which can act on the two-component state vectors $\Psi$, are related as follows:

$$ \hat{h} = +\frac{\hbar\tau}{2}(\hat{\tau}_{3}+i\hat{\tau}_{2})\,\hat{d} + \frac{\hbar}{2}\,\tau^{-1}(\hat{\tau}_{3}-i\hat{\tau}_{2}) + V(x)\hat{1}_{2}. \qquad (13) $$

Although the scalar product in Eqs. (7) and (10) does not possess the property of positive definiteness (i.e., $\langle\Psi,\Psi\rangle$ can be $<0$), it is a time-independent scalar product. Indeed, using Eq.
(3) for $\psi$ and $\psi^{*}$, and for $\phi$ and $\phi^{*}$, it can be demonstrated that the following relation is verified:

$$ \frac{d}{dt}\langle\Psi,\Phi\rangle = -\frac{i\hbar}{2m}\left.\left[\,\psi_{x}^{*}\phi-\psi^{*}\phi_{x}\,\right]\right|_{a}^{b} = \frac{d}{dt}\langle\psi,\phi\rangle_{\mathrm{KFG}}, \qquad (14) $$

where $[\,g\,]|_{a}^{b}\equiv g(b,t)-g(a,t)$, and $\psi_{x}\equiv\partial\psi/\partial x$, etc. This result is also valid when the external potential $V$ is different from zero inside the box (see Appendix I). The term evaluated at the endpoints of the interval $\Omega$ must vanish due to the boundary condition satisfied by $\psi$ and $\phi$, or $\Psi$ and $\Phi$ (see Appendix II). Additionally, if we make $\psi=\phi$, or $\Psi=\Phi$, in Eq. (14), we obtain the result

$$ \frac{d}{dt}\langle\Psi,\Psi\rangle = -\left.\left[\,j\,\right]\right|_{a}^{b} = \frac{d}{dt}\langle\psi,\psi\rangle_{\mathrm{KFG}}, \qquad (15) $$

where $j=j(x,t)=(i\hbar/2m)(\psi_{x}^{*}\psi-\psi^{*}\psi_{x})$ would be the probability current density, although we know that this quantity, as well as $\varrho$, cannot be interpreted as probability quantities [2,3]. The disappearance of the boundary term in Eq. (15) implies that the pseudo norm remains constant, and because $j(a,t)=j(b,t)$, we have that $\hat{h}$ must be a pseudo-Hermitian operator. In the case that $\Omega=\mathbb{R}$, the scalar product $\langle\Psi,\Phi\rangle$ is a time-independent constant whenever $\Psi$ and $\Phi$ are two normalizable solutions, i.e., solutions that have their pseudo norm finite. The square of the pseudo norm of these functions could be negative, but their magnitude cannot be infinite if the boundary term in Eq. (14) is expected to be zero. Next, we use the pseudo inner product given in Eq. (7), which is defined over an indefinite inner product space [20]. For a collection of basic properties of this scalar product (but also of general results on Hamiltonians of the type given in Eq. (2)), see Ref. [27]. Using integration by parts twice, it can be demonstrated that the Hamiltonian differential operator $\hat{h}$ in Eq.
(2) satisfies the following relation:

$$ \langle\Psi,\hat{h}\Phi\rangle = \langle\hat{h}_{\mathrm{adj}}\Psi,\Phi\rangle + f[\Psi,\Phi], \qquad (16) $$

where the boundary term $f[\Psi,\Phi]$ is given by

$$ f[\Psi,\Phi] \equiv \frac{\hbar^{2}}{2m}\left.\left[\,\Psi_{x}^{\dagger}\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Phi - \Psi^{\dagger}\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Phi_{x}\,\right]\right|_{a}^{b}. \qquad (17) $$

This quantity can also be written in a way that will be especially important, namely,

$$ f[\Psi,\Phi] \equiv \frac{\hbar^{2}}{2m}\,\frac{1}{2}\left.\left[\,\left((\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Phi - \left((\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Phi_{x}\,\right]\right|_{a}^{b}. \qquad (18) $$

The latter somewhat unexpected expression is true because the singular matrix $\hat{\tau}_{3}+i\hat{\tau}_{2}$ obeys the following relation: $(\hat{\tau}_{3}+i\hat{\tau}_{2})^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})=2\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})$; however, $(\hat{\tau}_{3}+i\hat{\tau}_{2})^{2}=\hat{0}$. The differential operator $\hat{h}_{\mathrm{adj}}$ in Eq. (16) is the generalized Hermitian conjugate, or the formal generalized adjoint, of $\hat{h}$, namely,

$$ \hat{h}_{\mathrm{adj}} = \hat{\eta}^{-1}\hat{h}^{\dagger}\hat{\eta} = \hat{\tau}_{3}\hat{h}^{\dagger}\hat{\tau}_{3} \qquad (19) $$

($\hat{\eta}=\hat{\tau}_{3}=\hat{\eta}^{-1}$ is sometimes called the metric operator; in this case, $\hat{\eta}$ is a bounded operator and satisfies $\hat{\eta}^{3}=\hat{\eta}$), and therefore (just formally, i.e., by using only the scalar product definition given in Eq. (7)),

$$ \langle\Psi,\hat{h}\Phi\rangle = \langle\hat{h}_{\mathrm{adj}}\Psi,\Phi\rangle. \qquad (20) $$

The latter is essentially the relation that defines the generalized adjoint differential operator $\hat{h}_{\mathrm{adj}}$ on an indefinite inner product space. Clearly, the latter definition requires that $f[\Psi,\Phi]$ in Eq. (16) vanishes. The Hamiltonian operator in Eq. (2) also formally satisfies the following relation:

$$ \hat{h} = \hat{h}_{\mathrm{adj}}, \qquad (21) $$

that is, $\hat{h}$ is formally pseudo-Hermitian (or formally generalized Hermitian), or formally pseudo self-adjoint (or formally generalized self-adjoint). However, if the boundary conditions imposed on $\Psi$ and $\Phi$ at the endpoints of the interval $\Omega$ lead to the cancellation of the boundary term in Eq. (16), then the differential operator $\hat{h}$ is indeed pseudo-Hermitian (or generalized Hermitian), and as shown in Appendix II, it is also pseudo self-adjoint (or generalized self-adjoint), i.e.,

$$ \langle\Psi,\hat{h}\Phi\rangle = \langle\hat{h}\Psi,\Phi\rangle. \qquad (22) $$

Precisely, we want to obtain a general set of boundary conditions for the pseudo-Hermitian Hamiltonian differential operator. Thus, if we impose $\Psi=\Phi$ in the latter relation and in Eq.
(16) (with the result in Eq. (21)), we obtain the following condition:

$$ f[\Psi,\Psi] = -i\hbar\left.\left[\,j\,\right]\right|_{a}^{b} = 0 \quad \left(\,\Rightarrow\; j(b,t)=j(a,t)\,\right), \qquad (23) $$

where $j=j(x,t)$ is given by

$$ j = \frac{i\hbar}{2m}\,\frac{1}{2}\left[\,\left((\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi - \left((\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}\,\right] \qquad (24) $$

(see Eq. (18)). But also, because $\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})=\hat{1}_{2}+\hat{\sigma}_{x}$ (the latter if we use the expression given by Eq. (17)), and the result in Eq. (5), we obtain

$$ j = \frac{i\hbar}{2m}\left(\psi_{x}^{*}\psi-\psi^{*}\psi_{x}\right), \qquad (25) $$

as expected (see the comment made just after Eq. (15)). Certainly, all the generalized Hermitian boundary conditions must lead to the equality of $j$ at the endpoints of the interval $\Omega$. Furthermore, we also obtain the result $\langle\Psi,\hat{h}\Psi\rangle=\langle\hat{h}\Psi,\Psi\rangle=\langle\Psi,\hat{h}\Psi\rangle^{*}$ (the superscript $*$ denotes the complex conjugate); therefore, $\langle\Psi,\hat{h}\Psi\rangle\equiv\langle\hat{h}\rangle_{\Psi}\in\mathbb{R}$, i.e., the generalized mean value of the Hamiltonian operator is real valued. Other typical properties of operators that are Hermitian in the usual sense hold here as well; for example, the eigenvalues are real (see, for example, Refs. [2,4]). Substituting $j$ from Eq. (25) into Eq. (23), we obtain the result (we omit the variable $t$ in the expressions that follow)

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \left.\left[\,\psi\,\lambda\psi_{x}^{*}-\psi^{*}\lambda\psi_{x}\,\right]\right|_{a}^{b} = \left[\,\psi(b)\,\lambda\psi_{x}^{*}(b)-\psi^{*}(b)\,\lambda\psi_{x}(b)\,\right] - \left[\,\psi(a)\,\lambda\psi_{x}^{*}(a)-\psi^{*}(a)\,\lambda\psi_{x}(a)\,\right] = 0, \qquad (26) $$

where $\lambda\in\mathbb{R}$ is a parameter required for dimensional reasons. It is very convenient to rewrite the latter two terms using the following identity:

$$ z_{1}z_{2}^{*}-z_{1}^{*}z_{2} = \frac{i}{2}\left[\,(z_{1}+iz_{2})(z_{1}+iz_{2})^{*}-(z_{1}-iz_{2})(z_{1}-iz_{2})^{*}\,\right] = \frac{i}{2}\left(\,|z_{1}+iz_{2}|^{2}-|z_{1}-iz_{2}|^{2}\,\right), \qquad (27) $$

where $z_{1}$ and $z_{2}$ are complex numbers.
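The algebraic facts invoked between Eqs. (17) and (25) — that the singular matrix $\hat{\tau}_{3}+i\hat{\tau}_{2}$ squares to zero, that $(\hat{\tau}_{3}+i\hat{\tau}_{2})^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})=2\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})$, that $\hat{\tau}_{3}(\hat{\tau}_{3}+i\hat{\tau}_{2})=\hat{1}_{2}+\hat{\sigma}_{x}$, and the formal pseudo-Hermiticity of each coefficient matrix of $\hat{h}$ — are easy to confirm numerically. The following sketch (ours, not from the paper) does so with plain 2x2 complex arithmetic:

```python
# Spot-checks of the 2x2 matrix identities used in Eqs. (17)-(25); helper names are ours.
def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

TAU3 = [[1+0j, 0j], [0j, -1+0j]]          # tau_3 = sigma_z
TAU2 = [[0j, -1j], [1j, 0j]]              # tau_2 = sigma_y
SX = [[0j, 1+0j], [1+0j, 0j]]             # sigma_x
ID2 = [[1+0j, 0j], [0j, 1+0j]]
ZERO = [[0j, 0j], [0j, 0j]]
A = [[TAU3[i][j] + 1j*TAU2[i][j] for j in range(2)] for i in range(2)]  # = [[1,1],[-1,-1]]

assert close(mul(A, A), ZERO)                                  # (tau3 + i*tau2)^2 = 0
twice_tau3_A = [[2*x for x in row] for row in mul(TAU3, A)]
assert close(mul(dag(A), A), twice_tau3_A)                     # A† A = 2 tau3 A (Eqs. (17)-(18))
one_plus_sx = [[ID2[i][j] + SX[i][j] for j in range(2)] for i in range(2)]
assert close(mul(TAU3, A), one_plus_sx)                        # tau3 A = 1 + sigma_x (Eq. (25))
assert close(mul(TAU3, mul(dag(A), TAU3)), A)                  # tau3 A† tau3 = A (cf. Eq. (19))
```

The last assertion is the matrix-level counterpart of Eq. (21): conjugating each non-derivative coefficient of $\hat{h}$ by the metric $\hat{\tau}_{3}$ returns it unchanged.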
Then, the following result is obtained:

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \frac{i}{2}\left(\,|\psi(b)+i\lambda\psi_{x}(b)|^{2}-|\psi(b)-i\lambda\psi_{x}(b)|^{2}\,\right) - \frac{i}{2}\left(\,|\psi(a)+i\lambda\psi_{x}(a)|^{2}-|\psi(a)-i\lambda\psi_{x}(a)|^{2}\,\right) $$
$$ = \frac{i}{2}\left(\,|\psi(b)+i\lambda\psi_{x}(b)|^{2}+|\psi(a)-i\lambda\psi_{x}(a)|^{2}\,\right) - \frac{i}{2}\left(\,|\psi(b)-i\lambda\psi_{x}(b)|^{2}+|\psi(a)+i\lambda\psi_{x}(a)|^{2}\,\right) = 0, \qquad (28) $$

that is,

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \frac{i}{2}\begin{bmatrix}\psi(b)+i\lambda\psi_{x}(b)\\ \psi(a)-i\lambda\psi_{x}(a)\end{bmatrix}^{\dagger}\begin{bmatrix}\psi(b)+i\lambda\psi_{x}(b)\\ \psi(a)-i\lambda\psi_{x}(a)\end{bmatrix} - \frac{i}{2}\begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix}^{\dagger}\begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix} = 0. \qquad (29) $$

Let us now consider the following general matrix boundary condition:

$$ \begin{bmatrix}\psi(b)+i\lambda\psi_{x}(b)\\ \psi(a)-i\lambda\psi_{x}(a)\end{bmatrix} = \hat{M}\begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix}, \qquad (30) $$

where $\hat{M}$ is an arbitrary complex matrix. By substituting Eq. (30) into Eq. (29), we obtain

$$ \frac{i}{2}\begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix}^{\dagger}\left(\hat{M}^{\dagger}\hat{M}-\hat{1}_{2}\right)\begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix} = 0; $$

therefore, $\hat{M}$ is a unitary matrix (the justification for this result is given in the comment that follows Eq. (A14)). Thus, a general set of generalized Hermitian boundary conditions for the 1D KFG particle in a box can be written as follows:

$$ \begin{bmatrix}\psi(b)-i\lambda\psi_{x}(b)\\ \psi(a)+i\lambda\psi_{x}(a)\end{bmatrix} = \hat{U}_{(2\times 2)}\begin{bmatrix}\psi(b)+i\lambda\psi_{x}(b)\\ \psi(a)-i\lambda\psi_{x}(a)\end{bmatrix}, \qquad (31) $$

where $\hat{U}_{(2\times 2)}=\hat{M}^{-1}$ is also unitary. This family of boundary conditions is similar to the one corresponding to the problem of the 1D Schrödinger particle enclosed in a box; for example, see Eq. (28) in Ref. [28]. In relation to this, we can also take the nonrelativistic approximation of the general boundary condition given in Eq. (31). For that purpose, it is convenient to first write the KFG wavefunction $\psi=\psi(x,t)$ as follows: $\psi=\psi_{\mathrm{S}}\exp(-imc^{2}t/\hbar)$, where $\psi_{\mathrm{S}}=\psi_{\mathrm{S}}(x,t)$ is the Schrödinger wavefunction. Because in this approximation we have that $|i\hbar(\psi_{\mathrm{S}})_{t}|\ll mc^{2}|\psi_{\mathrm{S}}|$, we can write $\psi_{t}=(-imc^{2}/\hbar)\psi$, and therefore $\psi_{1}=\left(1-\frac{V}{2mc^{2}}\right)\psi$ and $\psi_{2}=\frac{V}{2mc^{2}}\psi$ (see Eq. (4)).
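Why must $\hat{M}$ in Eq. (30) be unitary? After the substitution, the boundary term of Eq. (29) takes the form $(i/2)(\|\hat{M}v\|^{2}-\|v\|^{2})$ for the column vector $v$ built from the boundary data, so it vanishes for every $v$ exactly when $\hat{M}$ preserves the norm. The sketch below (our illustration; the parameterization of the random unitary is ours) checks this, and also spot-checks the scalar identity of Eq. (27) on which Eq. (28) rests:

```python
# Spot-check of Eq. (27) and of the "M must be unitary" argument of Eqs. (29)-(31).
import cmath, math, random

def apply2(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def norm2(v):
    return sum(abs(c)**2 for c in v)

def boundary_form(M, v):
    # the quantity of Eq. (29) after substituting Eq. (30): (i/2)(|Mv|^2 - |v|^2)
    return 0.5j*(norm2(apply2(M, v)) - norm2(v))

def random_unitary(rng):
    # a general 2x2 unitary: an overall phase times an SU(2) element
    t = rng.uniform(0, math.pi/2)
    a = cmath.exp(1j*rng.uniform(0, 2*math.pi))*math.cos(t)
    b = cmath.exp(1j*rng.uniform(0, 2*math.pi))*math.sin(t)
    ph = cmath.exp(1j*rng.uniform(0, 2*math.pi))
    return [[ph*a, ph*b], [-ph*b.conjugate(), ph*a.conjugate()]]

rng = random.Random(7)
# Eq. (27): z1 z2* - z1* z2 = (i/2)(|z1 + i z2|^2 - |z1 - i z2|^2)
for _ in range(50):
    z1 = complex(rng.uniform(-3, 3), rng.uniform(-3, 3))
    z2 = complex(rng.uniform(-3, 3), rng.uniform(-3, 3))
    lhs = z1*z2.conjugate() - z1.conjugate()*z2
    rhs = 0.5j*(abs(z1 + 1j*z2)**2 - abs(z1 - 1j*z2)**2)
    assert abs(lhs - rhs) < 1e-10

v = [complex(1.0, -2.0), complex(0.5, 3.0)]
for _ in range(20):
    assert abs(boundary_form(random_unitary(rng), v)) < 1e-10   # unitary M: term vanishes
assert abs(boundary_form([[2, 0], [0, 1]], v)) > 1e-6           # non-unitary M: generically nonzero
```

Since the boundary data $v$ can be arbitrary, the vanishing of the form for all $v$ forces $\hat{M}^{\dagger}\hat{M}=\hat{1}_{2}$, which is the four-real-parameter freedom of Eq. (31).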
Thus, for weak external potentials and to the lowest order in v/c (and for positive energy solutions), ψ₁ ≈ ψ satisfies the Schrödinger equation in the potential V + mc² (the latter mc² can be eliminated by using the expression ψ₁ ≈ ψ = ψ_S exp(−imc²t/ℏ)), but also (ψ₁)ₓ ≈ ψₓ (see, for example, Refs. [2,19,23]). It is then clear that, in the problem of the particle in a box, the one-component KFG wavefunction satisfies the same boundary conditions as the one-component Schrödinger wavefunction. Incidentally, a result similar to Eq. (31) had already been obtained by taking the nonrelativistic limit of the most general family of boundary conditions for the 1D Dirac particle enclosed in a box [29]. Additionally, in the analogous problem of a 1D Schrödinger particle in the presence of a point interaction at the point x = 0 (or a hole at the origin), the most general family of boundary conditions is similar to that given in Eq. (31) [30]. Indeed, all these results substantiate that the set of boundary conditions dependent on the four real parameters given in Eq. (31) is also the most general for a 1D KFG particle in the interval Ω = [a, b].

Of all the boundary conditions included in the four-parameter family of boundary conditions, only those arising from a diagonal unitary matrix describe a particle in an impenetrable box. This is because, for these boundary conditions, the probability current density satisfies the relation j(b) = j(a) = 0 for all t. Thus, the most general family of confining boundary conditions for a 1D KFG particle in a box has only two (real) parameters. The latter result is due to the similarity between the general set of boundary conditions given in Eq. (31) and the general sets of boundary conditions for the 1D Dirac and Schrödinger particles, and because we already know that the confining boundary conditions come from a matrix Û(2×2) that is diagonal [29]. III.
BOUNDARY CONDITIONS FOR THE 1D KFG PARTICLE IN A BOX II

Here, we obtain the most general set of pseudo self-adjoint boundary conditions for the Hamiltonian operator in the 1D FV equation, that is, we write the latter set in terms of $\Psi$ and $\Psi_{x}$ evaluated at the endpoints of the box. More specifically, in terms of $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi$ and $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}$. Indeed, following a procedure similar to that used above to obtain Eq. (26), namely, substituting $j$ from Eq. (24) into Eq. (23), we obtain

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \frac{1}{2}\left.\left[\,\left((\hat{\tau}_{3}+i\hat{\tau}_{2})\lambda\Psi_{x}\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi - \left((\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})\lambda\Psi_{x}\,\right]\right|_{a}^{b} = 0, \qquad (32) $$

where again, we insert the real parameter $\lambda$ for dimensional reasons. Now, we use the following matrix identity twice:

$$ \hat{Z}_{2}^{\dagger}\hat{Z}_{1}-\hat{Z}_{1}^{\dagger}\hat{Z}_{2} = \frac{i}{2}\left[\,(\hat{Z}_{1}+i\hat{Z}_{2})^{\dagger}(\hat{Z}_{1}+i\hat{Z}_{2})-(\hat{Z}_{1}-i\hat{Z}_{2})^{\dagger}(\hat{Z}_{1}-i\hat{Z}_{2})\,\right]. \qquad (33) $$

Then, applying it at $x=b$ and at $x=a$, we obtain the following result:

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \frac{1}{2}\,\frac{i}{2}\left[\,\left((\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b)\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b) - \left((\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\,\right] $$
$$ - \frac{1}{2}\,\frac{i}{2}\left[\,\left((\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a) - \left((\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\right)^{\dagger}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\,\right] = 0, \qquad (34) $$

that is,

$$ \frac{2m\lambda}{\hbar^{2}}\,f[\Psi,\Psi] = \frac{1}{2}\,\frac{i}{2}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\end{bmatrix}^{\dagger}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\end{bmatrix} - \frac{1}{2}\,\frac{i}{2}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix}^{\dagger}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix} = 0. \qquad (35) $$

Now, we propose writing a general matrix boundary condition as follows:

$$ \begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\end{bmatrix} = \hat{A}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix}, \qquad (36) $$

where $\hat{A}$ is an arbitrary $4\times 4$ complex matrix. By substituting Eq. (36) into Eq.
(35), we obtain

$$ \frac{1}{2}\,\frac{i}{2}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix}^{\dagger}\left(\hat{A}^{\dagger}\hat{A}-\hat{1}_{4}\right)\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix} = 0, $$

then $\hat{A}$ is a unitary matrix ($\hat{1}_{4}$ is the $4\times 4$ identity matrix). Note that the components of the column vectors in Eq. (36) are themselves $2\times 1$ column matrices and are given by

$$ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi\pm i\lambda\Psi_{x})(x) = \begin{bmatrix}(\psi\pm i\lambda\psi_{x})(x)\\ -(\psi\pm i\lambda\psi_{x})(x)\end{bmatrix}, \quad x=a,b. \qquad (37) $$

Thus, the general boundary condition in Eq. (36) can be written as follows:

$$ \begin{bmatrix}(\psi+i\lambda\psi_{x})(b)\\ -(\psi+i\lambda\psi_{x})(b)\\ (\psi-i\lambda\psi_{x})(a)\\ -(\psi-i\lambda\psi_{x})(a)\end{bmatrix} = \hat{A}\begin{bmatrix}(\psi-i\lambda\psi_{x})(b)\\ -(\psi-i\lambda\psi_{x})(b)\\ (\psi+i\lambda\psi_{x})(a)\\ -(\psi+i\lambda\psi_{x})(a)\end{bmatrix}. \qquad (38) $$

On the other hand, this relation can also be written as follows:

$$ \begin{bmatrix}(\psi+i\lambda\psi_{x})(b)\\ (\psi-i\lambda\psi_{x})(a)\\ (\psi+i\lambda\psi_{x})(b)\\ (\psi-i\lambda\psi_{x})(a)\end{bmatrix} = \hat{S}\hat{A}\hat{S}^{\dagger}\begin{bmatrix}(\psi-i\lambda\psi_{x})(b)\\ (\psi+i\lambda\psi_{x})(a)\\ (\psi-i\lambda\psi_{x})(b)\\ (\psi+i\lambda\psi_{x})(a)\end{bmatrix}, \qquad (39) $$

where $\hat{S}$ is given by

$$ \hat{S} = \begin{bmatrix}1&0&0&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&-1\end{bmatrix} = \frac{1}{2}\left(\hat{\sigma}_{z}\otimes\hat{1}_{2}+i\hat{\sigma}_{y}\otimes\hat{\sigma}_{x}-i\hat{\sigma}_{x}\otimes\hat{\sigma}_{y}+\hat{1}_{2}\otimes\hat{\sigma}_{z}\right), \qquad (40) $$

where $\otimes$ denotes the Zehfuss-Kronecker product of matrices, or the matrix direct product,

$$ \hat{F}\otimes\hat{G} \equiv \begin{bmatrix}F_{11}\hat{G}&\cdots&F_{1n}\hat{G}\\ \vdots&\ddots&\vdots\\ F_{m1}\hat{G}&\cdots&F_{mn}\hat{G}\end{bmatrix}, \qquad (41) $$

which is bilinear and associative and satisfies, among other properties, the mixed-product property: $(\hat{F}\otimes\hat{G})(\hat{J}\otimes\hat{K})=(\hat{F}\hat{J})\otimes(\hat{G}\hat{K})$ (see, for example, Ref. [31]). The matrix $\hat{S}$ is unitary, and therefore, $\hat{S}\hat{A}\hat{S}^{\dagger}$ is also a unitary matrix. Now, notice that the left-hand side of the relation in Eq. (39) is given by (see Eq. (30))

$$ \begin{bmatrix}(\psi+i\lambda\psi_{x})(b)\\ (\psi-i\lambda\psi_{x})(a)\\ (\psi+i\lambda\psi_{x})(b)\\ (\psi-i\lambda\psi_{x})(a)\end{bmatrix} = \begin{bmatrix}\hat{M}&\hat{0}\\ \hat{0}&\hat{M}\end{bmatrix}\begin{bmatrix}(\psi-i\lambda\psi_{x})(b)\\ (\psi+i\lambda\psi_{x})(a)\\ (\psi-i\lambda\psi_{x})(b)\\ (\psi+i\lambda\psi_{x})(a)\end{bmatrix}, \qquad (42) $$

and substituting the latter relation into Eq.
(39), we obtain

$$ \hat{S}\hat{A}\hat{S}^{\dagger} = \begin{bmatrix}\hat{M}&\hat{0}\\ \hat{0}&\hat{M}\end{bmatrix} = \hat{1}_{2}\otimes\hat{M} \qquad (43) $$

(because $\hat{M}$ is a unitary matrix, the block diagonal matrix in Eq. (43) is also unitary). Then, from Eq. (43), we can write the matrix $\hat{A}$ as follows:

$$ \hat{A} = \hat{S}^{\dagger}\begin{bmatrix}\hat{M}&\hat{0}\\ \hat{0}&\hat{M}\end{bmatrix}\hat{S} = \hat{S}^{\dagger}(\hat{1}_{2}\otimes\hat{M})\hat{S}. \qquad (44) $$

Thus, the most general family of pseudo self-adjoint boundary conditions for the 1D KFG particle in a box, that is, for the Hamiltonian operator in the 1D FV wave equation, can be written as follows (see Eq. (36)):

$$ \begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(a)\end{bmatrix} = \hat{U}_{(4\times 4)}\begin{bmatrix}(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi+i\lambda\Psi_{x})(b)\\ (\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi-i\lambda\Psi_{x})(a)\end{bmatrix}, \qquad (45) $$

where

$$ \hat{U}_{(4\times 4)} = \hat{A}^{-1} = \hat{A}^{\dagger} = \hat{S}^{\dagger}\begin{bmatrix}\hat{M}^{\dagger}&\hat{0}\\ \hat{0}&\hat{M}^{\dagger}\end{bmatrix}\hat{S} = \hat{S}^{\dagger}\begin{bmatrix}\hat{M}^{-1}&\hat{0}\\ \hat{0}&\hat{M}^{-1}\end{bmatrix}\hat{S} = \hat{S}^{\dagger}\begin{bmatrix}\hat{U}_{(2\times 2)}&\hat{0}\\ \hat{0}&\hat{U}_{(2\times 2)}\end{bmatrix}\hat{S} = \hat{S}^{\dagger}(\hat{1}_{2}\otimes\hat{U}_{(2\times 2)})\hat{S} \qquad (46) $$

(to reach this result, we use Eq. (44) and the fact that $\hat{U}_{(2\times 2)}=\hat{M}^{-1}$; the latter two results and only some properties of the matrix direct product could also be used). Note that the general matrix boundary condition in Eq. (45) could also be written as follows:

$$ (\hat{1}_{2}\otimes(\hat{\tau}_{3}+i\hat{\tau}_{2}))\begin{bmatrix}(\Psi-i\lambda\Psi_{x})(b)\\ (\Psi+i\lambda\Psi_{x})(a)\end{bmatrix} = \hat{U}_{(4\times 4)}\,(\hat{1}_{2}\otimes(\hat{\tau}_{3}+i\hat{\tau}_{2}))\begin{bmatrix}(\Psi+i\lambda\Psi_{x})(b)\\ (\Psi-i\lambda\Psi_{x})(a)\end{bmatrix}. \qquad (47) $$

For example, the Dirichlet boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(b)=0$ ($\hat{U}_{(4\times 4)}=-\hat{1}_{4}=-\hat{1}_{2}\otimes\hat{1}_{2}$); the Neumann boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(b)=0$ ($\hat{U}_{(4\times 4)}=+\hat{1}_{4}=+\hat{1}_{2}\otimes\hat{1}_{2}$); the periodic boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(b)$ and $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(b)$ ($\hat{U}_{(4\times 4)}=\hat{\sigma}_{x}\otimes\hat{1}_{2}$); the antiperiodic boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(a)=-(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(b)$ and $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(a)=-(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(b)$ ($\hat{U}_{(4\times 4)}=-\hat{\sigma}_{x}\otimes\hat{1}_{2}$); a mixed boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(b)=0$ ($\hat{U}_{(4\times 4)}=\hat{\sigma}_{z}\otimes\hat{1}_{2}$); another mixed boundary condition is $(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi_{x}(a)=(\hat{\tau}_{3}+i\hat{\tau}_{2})\Psi(b)=0$ ($\hat{U}_{(4\times 4)}=-\hat{\sigma}_{z}\otimes\hat{1}_{2}$); a kind of Robin boundary condition (and a kind of MIT bag boundary condition for a 1D KFG particle) is $(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi(a)-\lambda\Psi_{x}(a))=0$ and $(\hat{\tau}_{3}+i\hat{\tau}_{2})(\Psi(b)+\lambda\Psi_{x}(b))=0$ ($\hat{U}_{(4\times 4)}=i\hat{1}_{4}=i\hat{1}_{2}\otimes\hat{1}_{2}$).
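The structural claims of Eqs. (39)-(46) can be spot-checked numerically. One point worth checking is the Pauli decomposition of $\hat{S}$: matching the explicit matrix requires a relative minus sign on the $\hat{\sigma}_{x}\otimes\hat{\sigma}_{y}$ term, i.e., $\hat{S}=\frac{1}{2}(\hat{\sigma}_{z}\otimes\hat{1}_{2}+i\hat{\sigma}_{y}\otimes\hat{\sigma}_{x}-i\hat{\sigma}_{x}\otimes\hat{\sigma}_{y}+\hat{1}_{2}\otimes\hat{\sigma}_{z})$. The sketch below (ours) verifies this decomposition, the unitarity of $\hat{S}$, the mixed-product property used with Eq. (41), and that $\hat{U}_{(4\times 4)}=\hat{S}^{\dagger}(\hat{1}_{2}\otimes\hat{U}_{(2\times 2)})\hat{S}$ is unitary and stays diagonal whenever $\hat{U}_{(2\times 2)}$ is diagonal (the confining case):

```python
# Checks of the S-matrix and Kronecker-product machinery of Eqs. (39)-(46); helpers are ours.
import cmath, random

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def dag(A):
    return [[complex(A[j][i]).conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def kron(F, G):
    p, q = len(G), len(G[0])
    return [[F[i//p][j//q]*G[i % p][j % q] for j in range(len(F[0])*q)]
            for i in range(len(F)*p)]

def lincomb(pairs):
    rows, cols = len(pairs[0][1]), len(pairs[0][1][0])
    return [[sum(c*A[i][j] for c, A in pairs) for j in range(cols)] for i in range(rows)]

def close(A, B, tol=1e-10):
    return all(abs(a - b) < tol for ra, rb in zip(A, B) for a, b in zip(ra, rb))

SX, SY, SZ = [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]
I4 = kron(I2, I2)

S = [[1, 0, 0, 0], [0, 0, 1, 0], [0, -1, 0, 0], [0, 0, 0, -1]]
S_decomp = lincomb([(0.5, kron(SZ, I2)), (0.5j, kron(SY, SX)),
                    (-0.5j, kron(SX, SY)), (0.5, kron(I2, SZ))])
assert close(S_decomp, S)                      # Pauli decomposition (note the minus sign)
assert close(mul(S, dag(S)), I4)               # S is unitary

random.seed(11)
def r2():
    return [[complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2)]
            for _ in range(2)]
F, G, J, K = r2(), r2(), r2(), r2()
assert close(mul(kron(F, G), kron(J, K)), kron(mul(F, J), mul(G, K)))  # mixed-product property

U2 = [[cmath.exp(0.3j), 0], [0, cmath.exp(-1.1j)]]   # arbitrary diagonal unitary U_(2x2)
U4 = mul(dag(S), mul(kron(I2, U2), S))               # Eq. (46)
assert close(mul(U4, dag(U4)), I4)                   # U_(4x4) is unitary
assert all(abs(U4[i][j]) < 1e-12 for i in range(4) for j in range(4) if i != j)  # and diagonal
```

Conjugation by the signed permutation $\hat{S}$ merely reorders the diagonal phases, which is why diagonal $\hat{U}_{(2\times 2)}$ matrices produce diagonal $\hat{U}_{(4\times 4)}$ matrices.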
Then, to write all these boundary conditions in terms of ψ(a) and ψ(b), and ψ x (a) and ψ x (b), we must use the fact that Ψ = [ ψ 1 ψ 2 ] T and ψ = ψ 1 + ψ 2 (Eq. (5)). If we wish to obtain explicit relations between the components of Ψ and Ψ x at x = a and Ψ and Ψ x at x = b, we must use the relations given in Eqs. (5) and (6). Additionally, it can be shown that when the matrixÛ (2×2) is diagonal, then the matrixÛ (4×4) is also diagonal; consequently, diagonal matricesÛ (4×4) in Eq. (45) lead to confining boundary conditions (see the last paragraph of Sect. II). In general, the boundary conditions imposed on (τ 3 + iτ 2 )Ψ and (τ 3 + iτ 2 )Ψ x at the endpoints of the box do not imply that Ψ and Ψ x must also satisfy them. For example, let us consider the problem of the 1D KFG particle in the step potential ( V (x) = V 0 Θ(x), where Θ(x) is the Heaviside step function). This problem was also considered in Refs. [21,23]. The step potential is a (soft) point interaction in the neighborhood of the origin, that is, between the points x = a → 0+ and x = b → 0−, and the boundary condition is the periodic boundary condition, which in this case becomes the continuity condition of (τ 3 + iτ 2 )Ψ and (τ 3 + iτ 2 )Ψ x at x = 0, i.e., (τ 3 + iτ 2 )Ψ(0−) = (τ 3 + iτ 2 )Ψ(0+) and (τ 3 + iτ 2 )Ψ x (0−) = (τ 3 + iτ 2 )Ψ x (0+). As we know, from this condition, it is obtained that ψ(0−) = ψ(0+) and ψ x (0−) = ψ x (0+). If the relations ψ 1 + ψ 2 = ψ (Eq. (5)) and ψ 1 − ψ 2 = (E − V )ψ/mc 2 (Eq. (6)) are used (in the latter, we also assumed that ψ is an energy eigenstate), one can find relations between {Ψ(0+), Ψ x (0+)} and {Ψ(0−), Ψ x (0−)}. We find that the relation given in Eq. (30) in Ref. [21] is none other than the boundary condition (τ 3 + iτ 2 )Ψ(0−) = (τ 3 + iτ 2 )Ψ(0+), with Eqs. (5) and (6) evaluated at x = 0±. Likewise, the relation given in Eq. 
(31) of the same reference is none other than (τ 3 + iτ 2 )Ψ x (0−) = (τ 3 + iτ 2 )Ψ x (0+), with the spatial derivatives of Eqs. (5) and (6) also evaluated at x = 0±. Finally, adding the latter two boundary conditions, we obtain Eq. (32) of Ref. [21]. Clearly, if the height of the step potential is not zero, then Ψ(0+) is different from Ψ(0−), and Ψ x (0+) is different from Ψ x (0−). Similarly, in Ref. [23], it was explicitly proven that Ψ(0+) = Ψ(0−) and Ψ x (0+) = Ψ x (0−) (see Eqs. (19) and (20) in that reference), but it was also shown that the boundary condition should be written in the form (τ 3 +iτ 2 )Ψ(0−) = (τ 3 +iτ 2 )Ψ(0+) and (τ 3 +iτ 2 )Ψ x (0−) = (τ 3 +iτ 2 )Ψ x (0+). Incidentally, in the same reference, it was shown that the latter boundary condition can be obtained by integrating the 1D FV equation from x = 0− to x = 0+. On the other hand, in the problem of the 1D KFG particle inside the box Ω = [a, b], and subjected to the potential V , with the Dirichlet boundary condition, (τ 3 + iτ 2 )Ψ(a) = (τ 3 + iτ 2 )Ψ(b) = 0, we know that ψ also satisfies this condition, namely, ψ(a) = ψ(b) = 0. The latter boundary condition together with Eqs. (5) and (6) lead us to the boundary condition Ψ(a) = Ψ(b) = 0. Indeed, in addition to ψ 1 (a) + ψ 2 (a) = ψ 1 (b) + ψ 2 (b) = 0, ψ 1 (a) − ψ 2 (a) = ψ 1 (b) − ψ 2 (b) = 0 (because ψ t (a, t) = ψ t (b, t) = 0 also holds). Finally, Ψ also satisfies the Dirichlet boundary condition at the edges of the box (the latter boundary condition was precisely the one used in Ref. [22]). In short, let us suppose that the one-component wavefunction ψ can vanish at a point on the real line, for example, at x = 0 (also V (0+) and V (0−) must be finite numbers there). The latter is the Dirichlet boundary condition, namely, ψ(0−) = ψ(0+) = 0 ≡ ψ(0). 
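Returning to the confining (impenetrable-box) conditions mentioned at the end of Sect. II: for any diagonal unitary entry $u=e^{i\theta}\neq 1$, the endpoint relation $\psi-i\lambda\psi_{x}=u(\psi+i\lambda\psi_{x})$ of Eq. (31) forces $\psi=-\lambda\cot(\theta/2)\,\psi_{x}$ there, i.e., $\psi$ is a real multiple of $\psi_{x}$, so the current $j$ of Eq. (25) vanishes at that wall. A numerical spot-check (our illustration, in units $\hbar/2m=1$; function names are ours):

```python
# Diagonal unitary boundary entries give zero current at the wall (confining conditions).
import cmath, random

def current(psi, psix):
    # j of Eq. (25) in units hbar/2m = 1: i(psix* psi - psi* psix); it is real
    return 1j*(psix.conjugate()*psi - psi.conjugate()*psix)

def psi_from_wall_bc(u, lam, psix):
    # endpoint relation psi - i*lam*psix = u*(psi + i*lam*psix), with u = e^{i*theta} != 1
    return 1j*lam*psix*(1 + u)/(1 - u)

rng = random.Random(2)
for _ in range(25):
    theta = rng.uniform(0.2, 6.0)                       # avoid u = 1 (the Neumann wall)
    u = cmath.exp(1j*theta)
    psix = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    psi = psi_from_wall_bc(u, 1.0, psix)
    assert abs(current(psi, psix)) < 1e-9               # zero current at the wall
```

The excluded case $u=1$ is the Neumann wall ($\psi_{x}=0$), for which the current vanishes trivially; $u=-1$ gives the Dirichlet wall ($\psi=0$).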
Certainly, this result is obtained from the disappearance of (τ 3 + iτ 2 )Ψ at that same point, i.e., from the fact that the Hamiltonian operator with the latter boundary condition is a pseudo self-adjoint operator; then, the latter condition implies that the entire two-component wavefunction Ψ has to disappear at that point (use Eqs. (5) and (6)). In other words, the 1D FV wave equation is a second-order equation in the spatial derivative that accepts the vanishing of the entire twocomponent wavefunction at a point. On the other hand, let us now suppose that ψ x can vanish at a point on the real line, for example, at x = 0, but ψ is nonzero there (also V x (0+) and V x (0−) must be finite numbers there). The latter is the Neumann boundary condition, namely, ψ x (0−) = ψ x (0+) = 0 ≡ ψ x (0) . Indeed, we also have that (τ 3 + iτ 2 )Ψ x vanishes at that same point. Then, it can be shown that (ψ 1 ) x and (ψ 2 ) x do not have to vanish at the point in question, and therefore, Ψ x is not zero there either (use Eqs. (5) and (6)). IV. APPENDIX I The 1D KFG wave equation given in Eq. (3) can also be written as follows: − 2 ∂ 2 ∂t 2 − i2 V (x) ∂ ∂t + (V (x)) 2 ψ = − 2 c 2 ∂ 2 ∂x 2 + (mc 2 ) 2 ψ,(A1) and therefore, ψ tt = c 2 ψ xx − mc 2 2 ψ + 2V i ψ t + V 2 2 ψ.(A2) The scalar product for the two-component column state vectors Ψ = [ ψ 1 ψ 2 ] T and Φ = [ φ 1 φ 2 ] T , where ψ 1 + ψ 2 = ψ and φ 1 + φ 2 = φ, is given by Ψ, Φ ≡ˆΩ dx Ψ †τ 3 Φ = i 2mc 2ˆΩ dx ψ * ∂ ∂t − V i φ − ∂ ∂t − V i ψ * φ = i 2mc 2ˆΩ dx ψ * φ t − ψ * t φ − 2V i ψ * φ ≡ ψ, φ KFG .(A3) The latter quantity is preserved in time; in fact, taking its time derivative and using the result in Eq. (A2), and a similar relation for φ (ψ and φ are solutions of the 1D KFG wave equation in its standard form), one obtains the same relation given in Eq. (14), namely, Let us return to the result given in Eq. 
(16), namely,

d/dt ⟨Ψ, Φ⟩ = d/dt ⟨ψ, φ⟩_KFG = −(iℏ/2m) [ψ*_x φ − ψ* φ_x]|_a^b.  (A4)

⟨Ξ, ĥΦ⟩ = ⟨ĥ_adj Ξ, Φ⟩ + f[Ξ, Φ],  (A5)

where f[Ξ, Φ] is given by (see Eq. (18))

f[Ξ, Φ] ≡ (ℏ²/2m) (1/2) [ ((τ₃ + iτ₂)Ξ_x)† (τ₃ + iτ₂)Φ − ((τ₃ + iτ₂)Ξ)† (τ₃ + iτ₂)Φ_x ]|_a^b.  (A6)

φ₁ + φ₂ = φ and ξ₁ + ξ₂ = ξ.  (A7)

The boundary term in Eq. (A6) can be written in terms of φ and ξ, namely,

f[Ξ, Φ] = (ℏ²/2m) [ξ*_x φ − ξ* φ_x]|_a^b.  (A8)

First, let us suppose that every column vector Φ ∈ D(ĥ) satisfies the boundary conditions

The latter relation is precisely the one that defines the generalized adjoint differential operator. It is clear that its verification did not require the imposition of any boundary condition on the vectors Ξ ∈ D(ĥ_adj). Thus, until now, we have that D(ĥ) ≠ D(ĥ_adj) (in fact, we have that D(ĥ) ⊂ D(ĥ_adj), i.e., D(ĥ) is a restriction of D(ĥ_adj)). If the operator ĥ is to be a pseudo self-adjoint differential operator, the relation given in Eq. (21), namely, ĥ = ĥ_adj, must be verified, and therefore, D(ĥ) = D(ĥ_adj). To achieve this, we must allow every vector Φ ∈ D(ĥ) to satisfy more general boundary conditions, that is, we must relax the domain of ĥ. Let us suppose that we have a set of boundary conditions to be imposed on a vector Φ ∈ D(ĥ); if the cancellation of the boundary term f[Ξ, Φ] by these boundary conditions only depends on imposing the same boundary conditions on the vector Ξ ∈ D(ĥ_adj), then ĥ will be a pseudo self-adjoint differential operator. First, from Eq. (A8), we write the boundary term in Eq. (A5) as follows:

λ(2m/ℏ²) f[Ξ, Φ] = [φ λξ*_x − ξ* λφ_x]|_a^b = [φ(b) λξ*_x(b) − ξ*(b) λφ_x(b)] − [φ(a) λξ*_x(a) − ξ*(a) λφ_x(a)] = 0.  (A10)

It is fairly convenient to rewrite the latter two terms using the following identity:

z₁ z₂* − z₃* z₄ = (i/2) [ (z₁ + iz₄)(z₃ + iz₂)* − (z₁ − iz₄)(z₃ − iz₂)* ],  (A11)

where z₁, z₂, z₃ and z₄ are complex numbers. The latter relation is the generalization of that given in Eq.
(27). In fact, making the replacements z₃ → z₁ and z₄ → z₂ in Eq. (A11), the relation given in Eq. (27) is obtained. Then, the following result is derived:

λ(2m/ℏ²) f[Ξ, Φ] = (i/2) [ (φ(b) + iλφ_x(b)) (ξ(b) + iλξ_x(b))* − (φ(b) − iλφ_x(b)) (ξ(b) − iλξ_x(b))* ]
− (i/2) [ (φ(a) + iλφ_x(a)) (ξ(a) + iλξ_x(a))* − (φ(a) − iλφ_x(a)) (ξ(a) − iλξ_x(a))* ]
= (i/2) [ (φ(b) + iλφ_x(b)) (ξ(b) + iλξ_x(b))* + (φ(a) − iλφ_x(a)) (ξ(a) − iλξ_x(a))* ]
− (i/2) [ (φ(b) − iλφ_x(b)) (ξ(b) − iλξ_x(b))* + (φ(a) + iλφ_x(a)) (ξ(a) + iλξ_x(a))* ] = 0;

this means that

λ(2m/ℏ²) f[Ξ, Φ] = (i/2) (ξ(b) + iλξ_x(b), ξ(a) − iλξ_x(a))† (φ(b) + iλφ_x(b), φ(a) − iλφ_x(a))^T
− (i/2) (ξ(b) − iλξ_x(b), ξ(a) + iλξ_x(a))† (φ(b) − iλφ_x(b), φ(a) + iλφ_x(a))^T = 0.  (A12)

Let us now consider a more general set of boundary conditions to be imposed on a vector Φ ∈ D(ĥ) (i.e., more general than the boundary conditions that we presented after Eq. (A8)), namely,

(φ(b) + iλφ_x(b), φ(a) − iλφ_x(a))^T = N̂ (φ(b) − iλφ_x(b), φ(a) + iλφ_x(a))^T,  (A13)

where N̂ is an arbitrary complex matrix. By substituting the latter relation in Eq. (A12), we obtain the following result:

λ(2m/ℏ²) f[Ξ, Φ] = (i/2) [ (ξ(b) + iλξ_x(b), ξ(a) − iλξ_x(a))† N̂ − (ξ(b) − iλξ_x(b), ξ(a) + iλξ_x(a))† ] (φ(b) − iλφ_x(b), φ(a) + iλφ_x(a))^T = 0,

and therefore,

(ξ(b) + iλξ_x(b), ξ(a) − iλξ_x(a))† N̂ = (ξ(b) − iλξ_x(b), ξ(a) + iλξ_x(a))†;  (A14)

therefore, N̂ is a unitary matrix. Thus, the most general family of pseudo self-adjoint, or generalized self-adjoint boundary conditions, for the 1D KFG particle in a box can be written in the form given by Eq. (31), namely,

(ξ(b) − iλξ_x(b), ξ(a) + iλξ_x(a))^T = Û (ξ(b) + iλξ_x(b), ξ(a) − iλξ_x(a))^T,  (A16)

where Û = N̂⁻¹.
The fact that the boundary condition for Φ ∈ D(ĥ) (for example, given in terms of φ) is the same boundary condition for Ξ ∈ D(ĥ adj ) (given in terms of ξ) ensures that D(ĥ) = D(ĥ adj ); therefore,ĥ, which was already a pseudo-Hermitian operator, is also a pseudo self-adjoint operator. Additionally, the boundary term given in Eq. (14), or in Eq. (A4), vanishes, and therefore, the pseudo inner product is conserved. VI. CONCLUDING REMARKS The KFG Hamiltonian operator, or the Hamiltonian that is present in the first order in time 1D KFG wave equation, i.e., the 1D FV wave equation, is formally pseudo-Hermitian. This is a well-known fact, and its verification does not require knowledge of the domain of the Hamiltonian or its adjoint. We have shown that this operator is also a pseudo-Hermitian operator, but in addition, it is a pseudo self-adjoint operator when it describes a 1D KFG particle in a finite interval. Consequently, we constructed the most general set of boundary conditions for this operator, which is characterized by four real parameters and is consistent with the last two properties. All these results can be extended to the problem of a 1D KFG particle moving on a real line with a penetrable or an impenetrable obstacle at one point, i.e., with a point interaction (or a hole) there. For instance, assuming the point is x = 0, it suffices to make the replacements x = a → 0+ and x = b → 0− in the general set of boundary conditions for the particle in the interval [a, b]. As we have shown, the general set of boundary conditions can be written in terms of the one-component wavefunction for the second order in time 1D KFG wave equation, that is, ψ, and its derivative ψ x , both evaluated at the ends of the box. Certainly, we showed that the general set can also be written in terms of the two-component column vectors for the 1D FV wave equation, that is, (τ 3 + iτ 2 )Ψ and (τ 3 + iτ 2 )Ψ x , evaluated at the ends of the box. 
We only used algebraic arguments and simple concepts that are within the general theory of linear operators on a space with indefinite inner product to build these sets of boundary conditions. From the results presented in Section III, we also found that Ψ and Ψ x do not necessarily satisfy the same boundary condition that (τ 3 +iτ 2 )Ψ and (τ 3 +iτ 2 )Ψ x satisfy. In any case, given a particular boundary condition that ψ and ψ x satisfy at the ends of the box and using the relations that arise between the components of the column vector Ψ, that is, ψ 1 and ψ 2 , and quantities ψ, ψ t , and the potential V (see Eqs. (5) and (6)), the respective boundary condition on Ψ and Ψ x can be obtained. We think that our article will be of interest to those interested in the fundamental and technical aspects of relativistic wave equations. Furthermore, to the best of our knowledge, the main results of our article, i.e., those related to general pseudo self-adjoint sets of boundary conditions in the 1D KFG theory, do not appear to have been considered before. etc) and the vectorial space of the initial state vectors of the 1D KFG equation in Hamiltonian form for this 1D particle, namely, Eq. [a, b]. Moreover, by making the replacements a → 0+ and b → 0− in Eq.(31), we obtain the respective most general set of boundary conditions for the case in which the 1D KFG particle moves along the real line with a hole at the origin. Some examples of boundary conditions for this system can be seen in Refs.[21,23] and will be briefly discussed in Section III.For all the boundary conditions that are part of the general set of boundary conditions in Eq. (31),ĥ is a pseudo-Hermitian operator, but it is also a pseudo self-adjoint operator (see Appendix II). Certainly, the result in Eq. (31) is given in terms of the wavefunction ψ, but if the relation in Eq. 
(5) is used, it can also be written in terms of the components of Ψ = [ψ₁ ψ₂]^T, i.e., in terms of ψ₁ + ψ₂, and its spatial derivative (ψ₁)_x + (ψ₂)_x, evaluated at the edges x = a and x = b. Actually, the general family of boundary conditions given in Eq. (31) must be written in terms of (τ₃ + iτ₂)Ψ and (τ₃ + iτ₂)Ψ_x evaluated at the ends of the box. We work on this in the next section. We give below some examples of boundary conditions that are contained in Eq. (31):

ψ(a) = ψ(b) = 0 (Û(2×2) = −1̂₂), i.e., ψ can satisfy the Dirichlet boundary condition;
ψ_x(a) = ψ_x(b) = 0 (Û(2×2) = +1̂₂), i.e., ψ can satisfy the Neumann boundary condition;
ψ(a) = ψ(b) and ψ_x(a) = ψ_x(b) (Û(2×2) = +σ̂_x), ψ can satisfy the periodic boundary condition;
ψ(a) = −ψ(b) and ψ_x(a) = −ψ_x(b) (Û(2×2) = −σ̂_x), ψ can satisfy the antiperiodic boundary condition;
ψ(a) = ψ_x(b) = 0 (Û(2×2) = σ̂_z), i.e., ψ can satisfy a mixed boundary condition;
ψ_x(a) = ψ(b) = 0 (Û(2×2) = −σ̂_z), i.e., ψ can satisfy another mixed boundary condition;
ψ(a) − λψ_x(a) = 0 and ψ(b) + λψ_x(b) = 0 (Û(2×2) = i1̂₂), ψ can satisfy a kind of Robin boundary condition.

In fact, the latter boundary condition would be the KFG version of the boundary condition commonly used in the so-called (one-dimensional) MIT bag model for hadronic structures (see, for example, Ref. [29]). All these boundary conditions are typical of wave equations that are of the second order in the spatial derivative.

however, the matrix 1̂₂ ⊗ (τ₃ + iτ₂) does not have an inverse and the column vector on the left side of this relation cannot be cleared. Thus, the expression given in Eq. (47) is an elegant way to write the general boundary condition, but it is not functional and could lead to errors. The boundary conditions that were presented just before the last paragraph of Sect. II can be extracted from Eq. (45) if the matrix Û(2×2) is known.
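As a quick consistency check (our own sketch, assuming Eq. (31) has the two-component form suggested by Eq. (A16)), one can verify that Û(2×2) = −1̂₂ and Û(2×2) = +1̂₂ indeed reproduce the Dirichlet and Neumann conditions:

```latex
% Assumed form of Eq. (31), as suggested by Eq. (A16):
%   (psi(b) - i l psi_x(b), psi(a) + i l psi_x(a))^T
%       = U (psi(b) + i l psi_x(b), psi(a) - i l psi_x(a))^T
\begin{aligned}
\hat{U} = -\hat{1}_2 \;\Rightarrow\;&
\psi(b) - i\lambda\psi_x(b) = -\psi(b) - i\lambda\psi_x(b)
\;\Rightarrow\; \psi(b) = 0,\\
&\psi(a) + i\lambda\psi_x(a) = -\psi(a) + i\lambda\psi_x(a)
\;\Rightarrow\; \psi(a) = 0 \quad \text{(Dirichlet)};\\
\hat{U} = +\hat{1}_2 \;\Rightarrow\;&
-i\lambda\psi_x(b) = +i\lambda\psi_x(b)
\;\Rightarrow\; \psi_x(b) = 0,\quad
\psi_x(a) = 0 \quad \text{(Neumann)}.
\end{aligned}
```

The remaining entries of the list can be checked in the same way by inserting the corresponding matrix Û(2×2).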
In effect, the Dirichlet boundary condition is (τ₃ + iτ₂)Ψ(a) = (τ₃ + iτ₂)Ψ(b) = 0 (Û(4×4) = −1̂₄ = −1̂₂ ⊗ 1̂₂); the Neumann boundary condition is (τ₃ + iτ₂)Ψ_x(a) = (τ₃ + iτ₂)Ψ_x(b) = 0. As follows from the results obtained in Appendix II, if ψ and φ both satisfy any boundary condition included in the most general set of boundary conditions, the boundary term in Eq. (A4) always vanishes.

V. APPENDIX II

The goal of this section is to show that if the functions belonging to the domain of ĥ (considered a densely defined operator) obey any of the boundary conditions included in Eq. (31), then the functions of the domain of ĥ_adj must obey the same boundary condition. This means that for the general family of boundary conditions given in Eq. (31), the operator ĥ = ĥ_adj is pseudo self-adjoint. Our results are obtained using simple arguments that are part of the general theory of linear operators in an indefinite inner product space (see, for example, Refs. [32, 33]). Here, ĥ can act on column vectors Φ = [φ₁ φ₂]^T ∈ D(ĥ), where D(ĥ) is the domain of ĥ, a set of column vectors on which we allow the differential operator ĥ to act (D(ĥ) is a linear subset of the indefinite inner product space), which fundamentally includes boundary conditions, and ĥ_adj can act on column vectors Ξ = [ξ₁ ξ₂]^T ∈ D(ĥ_adj) (in general, D(ĥ_adj) may not coincide with D(ĥ)). By virtue of the result given in Eq. (5), the respective solutions of Eq. (3) are the following: (τ₃ + iτ₂)Φ(a) = (τ₃ + iτ₂)Φ(b) = 0 and (τ₃ + iτ₂)Φ_x(a) = (τ₃ + iτ₂)Φ_x(b) = 0, or, equivalently, φ(a) = φ(b) = 0 and φ_x(a) = φ_x(b) = 0 (remember the first relation in Eq. (A7)). In this case, the boundary term in Eq. (A5) vanishes, and we have the result

⟨Ξ, ĥΦ⟩ = ⟨ĥ_adj Ξ, Φ⟩.  (A9)

(This result is because, at this point, we cannot impose any boundary conditions that would completely annul the column vectors in Eq. (A13), for example.)
Every vector Ξ ∈ D(ĥ_adj) should satisfy the same boundary conditions that Φ ∈ D(ĥ) satisfies, i.e., the boundary conditions in Eq. (A13). Taking the Hermitian conjugate of the matrix relation in Eq. (A14) and substituting this result into Eq. (A15), we obtain the corresponding condition for Ξ, namely,

(ξ(b) + iλξ_x(b), ξ(a) − iλξ_x(a))^T = N̂ (ξ(b) − iλξ_x(b), ξ(a) + iλξ_x(a))^T,

i.e., Ξ satisfies the boundary condition in Eq. (A13) with the same matrix N̂.

* URL: https://orcid.org/0000-0002-5009-053X; Electronic address: [email protected]

Acknowledgments

The author wishes to thank the referees for their comments and suggestions.

Conflicts of interest

The author declares no conflicts of interest.

References

[1] W. Gordon, "Der Comptoneffekt nach der Schrödingerschen Theorie," Zeitschrift für Physik 40, 117-33 (1926).
[2] G. Baym, Lectures on Quantum Mechanics (Westview Press, New York, 1990).
[3] W. Greiner, Relativistic Quantum Mechanics, 3rd ed. (Springer, Berlin, 2000).
[4] A. Wachter, Relativistic Quantum Mechanics (Springer, Berlin, 2011).
[5] H. Kragh, "Equation with the many fathers. The Klein-Gordon equation in 1926," Am. J. Phys. 52, 1024-33 (1984).
[6] F. Cooper, A. Khare, U. Sukhatme, Supersymmetry in Quantum Mechanics (World Scientific, Singapore, 2001).
[7] S.-H. Dong, Factorization Method in Quantum Mechanics (Springer, Dordrecht, 2007).
[8] A. I. Ahmadov, Sh. M. Nagiyev, M. V. Qocayeva, V. A. Tarverdiyeva, "Bound state solution of the Klein-Fock-Gordon equation with the Hulthén plus a ring-shaped-like potential within SUSY quantum mechanics," Int. J. Mod. Phys. A 33, 1850203 (2018).
[9] A. I. Ahmadov, M. Demirci, S. M. Aslanova, M. F. Mustamin, "Arbitrary ℓ-state solutions of the Klein-Fock-Gordon equation with the Manning-Rosen plus a class of Yukawa potentials," Phys. Lett. A 384, 126372 (2020).
[10] A. I. Ahmadov, S. M. Aslanova, M. Sh. Orujova, S. V. Badalov, "Analytical bound state solutions of the Klein-Fock-Gordon equation for the sum of Hulthén and Yukawa potential within SUSY quantum mechanics," Adv. High Energy Phys. 2021, 8830063 (2021).
[11] A. F. Nikiforov, V. B. Uvarov, Special Functions of Mathematical Physics (Birkhäuser, Boston, 2013).
[12] M. Znojil, "Exact solution of the Schrödinger and Klein-Gordon equations for generalised Hulthén potentials," J. Phys. A: Math. Gen. 14, 383-94 (1981).
[13] F. Domínguez-Adame, "Bound states of the Klein-Gordon equation with vector and scalar Hulthén-type potentials," Phys. Lett. A 136, 175-7 (1989).
[14] A. Messouber, "Path integral for Klein-Gordon particle in vector plus scalar Hulthén-type potentials," Physica A 234, 529-44 (1996).
[15] B. C. Lütfüoğlu, J. Lipovský, J. Kříž, "Scattering of Klein-Gordon particles in the background of mixed scalar-vector generalized symmetric Woods-Saxon potential," Eur. Phys. J. Plus 133, 17 (2018).
[16] M. M. A. Khater, A. A. Mousa, M. A. El-Shorbagy, R. A. M. Attia, "Analytical and semi-analytical solutions for Phi-four equation through three recent schemes," Results Phys. 22, 103954 (2021).
[17] M. M. A. Khater, M. S. Mohamed, S. K. Elagan, "Diverse accurate computational solutions of the nonlinear Klein-Fock-Gordon equation," Results Phys. 23, 104003 (2021).
[18] M. M. A. Khater, A. A. Mousa, M. A. El-Shorbagy, R. A. M. Attia, "Abundant novel wave solutions of nonlinear Klein-Gordon-Zakharov (KGZ) model," Eur. Phys. J. Plus 136, 604 (2021).
[19] H. Feshbach, F. Villars, "Elementary relativistic wave mechanics of spin 0 and spin 1/2 particles," Rev. Mod. Phys. 30, 24-45 (1958).
[20] D. S. Staudte, "An eight-component relativistic wave equation for spin-1/2 particles II," J. Phys. A: Math. Gen. 29, 169-92 (1996).
[21] M. Merad, L. Chetouani, A. Bounames, "Boundary conditions for the one-dimensional Feshbach-Villars equation," Phys. Lett. A 267, 225-31 (2000).
[22] P. Alberto, S. Das, E. C. Vagenas, "Relativistic particle in a box: Klein-Gordon versus Dirac equations," Eur. J. Phys. 39, 025401 (2018).
[23] S. De Vincenzo, "On the mean value of the force operator for 1D particles in the step potential," Rev. Bras. Ens. Fis. 43, e20200422 (2021).
[24] T. M. Gouveia, M. C. N. Fiolhais, J. L. Birman, "A relativistic spin zero particle in a spherical cavity," Eur. J. Phys. 36, 055021 (2015).
[25] M. Alkhateeb, A. Matzkin, "Relativistic spin-0 particle in a box: Bound states, wave packets, and the disappearance of the Klein paradox," Am. J. Phys. 90, 297 (2022).
[26] A. Mostafazadeh, F. Zamani, "Quantum mechanics of Klein-Gordon fields I: Hilbert space, localized states, and chiral symmetry," Ann. Phys. 321, 2183-2209 (2006).
[27] A. Mostafazadeh, "Hilbert space structures on the solution space of Klein-Gordon-type evolution equations," Class. Quantum Grav. 20, 155-71 (2003).
[28] G. Bonneau, J. Faraut, G. Valent, "Self-adjoint extensions of operators and the teaching of quantum mechanics," Am. J. Phys. 69, 322-31 (2001); preprint arXiv:quant-ph/0103153v1 (2001).
[29] V. Alonso, S. De Vincenzo, "General boundary conditions for a Dirac particle in a box and their non-relativistic limits," J. Phys. A: Math. Gen. 30, 8573-85 (1997).
[30] Z. Brzeźniak, B. Jefferies, "Characterization of one-dimensional point interactions for the Schrödinger operator by means of boundary conditions," J. Phys. A: Math. Gen. 34, 2977-83 (2001).
[31] H. V. Henderson, F. Pukelsheim, S. R. Searle, "On the history of the Kronecker product," Linear and Multilinear Algebra 14, 113-20 (1983).
[32] T. Ya. Azizov, I. S. Iokhvidov, Linear Operators in Spaces with an Indefinite Metric (John Wiley & Sons, Chichester, 1989).
[33] J. Bognár, Indefinite Inner Product Spaces (Springer, Berlin, 1974).
[]
[ "Microwalk-CI: Practical Side-Channel Analysis for JavaScript Applications", "Microwalk-CI: Practical Side-Channel Analysis for JavaScript Applications" ]
[ "Jan Wichelmann [email protected] \nUniversity of Lübeck Lübeck\nGermany\n", "Florian Sieck [email protected] \nUniversity of Lübeck\nLübeckGermany\n", "Anna Pätschke [email protected] \nUniversity of Lübeck\nLübeckGermany\n", "Thomas Eisenbarth [email protected] \nUniversity of Lübeck\nLübeckGermany\n" ]
[ "University of Lübeck Lübeck\nGermany", "University of Lübeck\nLübeckGermany", "University of Lübeck\nLübeckGermany", "University of Lübeck\nLübeckGermany" ]
[]
Secret-dependent timing behavior in cryptographic implementations has resulted in exploitable vulnerabilities, undermining their security. Over the years, numerous tools to automatically detect timing leakage or even to prove their absence have been proposed. However, a recent study at IEEE S&P 2022 showed that, while many developers are aware of one or more analysis tools, they have major difficulties integrating these into their workflow, as existing tools are tedious to use and mapping discovered leakages to their originating code segments requires expert knowledge. In addition, existing tools focus on compiled languages like C, or analyze binaries, while the industry and open-source community moved to interpreted languages, most notably JavaScript.In this work, we introduce Microwalk-CI, a novel side-channel analysis framework for easy integration into a JavaScript development workflow. First, we extend existing dynamic approaches with a new analysis algorithm, that allows efficient localization and quantification of leakages, making it suitable for use in practical development. We then present a technique for generating execution traces from JavaScript applications, which can be further analyzed with our and other algorithms originally designed for binary analysis. Finally, we discuss how Microwalk-CI can be integrated into a continuous integration (CI) pipeline for efficient and ongoing monitoring. We evaluate our analysis framework by conducting a thorough evaluation of several popular JavaScript cryptographic libraries, and uncover a number of critical leakages.
10.1145/3548606.3560654
[ "https://export.arxiv.org/pdf/2208.14942v1.pdf" ]
251,953,665
2208.14942
d8b8a03d2a74114f0e542b74c50084f1c7c86848
Microwalk-CI: Practical Side-Channel Analysis for JavaScript Applications

Jan Wichelmann ([email protected]), University of Lübeck, Lübeck, Germany
Florian Sieck ([email protected]), University of Lübeck, Lübeck, Germany
Anna Pätschke ([email protected]), University of Lübeck, Lübeck, Germany
Thomas Eisenbarth ([email protected]), University of Lübeck, Lübeck, Germany

Secret-dependent timing behavior in cryptographic implementations has resulted in exploitable vulnerabilities, undermining their security. Over the years, numerous tools to automatically detect timing leakage or even to prove their absence have been proposed. However, a recent study at IEEE S&P 2022 showed that, while many developers are aware of one or more analysis tools, they have major difficulties integrating these into their workflow, as existing tools are tedious to use and mapping discovered leakages to their originating code segments requires expert knowledge. In addition, existing tools focus on compiled languages like C, or analyze binaries, while the industry and open-source community moved to interpreted languages, most notably JavaScript. In this work, we introduce Microwalk-CI, a novel side-channel analysis framework for easy integration into a JavaScript development workflow. First, we extend existing dynamic approaches with a new analysis algorithm, that allows efficient localization and quantification of leakages, making it suitable for use in practical development. We then present a technique for generating execution traces from JavaScript applications, which can be further analyzed with our and other algorithms originally designed for binary analysis. Finally, we discuss how Microwalk-CI can be integrated into a continuous integration (CI) pipeline for efficient and ongoing monitoring.
We evaluate our analysis framework by conducting a thorough evaluation of several popular JavaScript cryptographic libraries, and uncover a number of critical leakages. INTRODUCTION Collection of sensitive data is common in today's cloud and Internet of Things (IoT) environments, and affects everyone. Protecting this private and sensitive data is of utmost importance, therefore requiring secure cryptography routines and secrets. However, especially the cloud allows attackers to observe the execution of victim code using side-channels in co-located environments [26]. These attacks range from Last Level Cache (LLC) [35] and de-duplication attacks [34] to the observation of memory access patterns [64] or main memory access contention [44]. The spatial resolution depends on the granularity of the attacked buffer and the temporal resolution on the capabilities of the attacker, meaning either the ability to achieve a sufficiently high measurement frequency [65] or to interrupt and pause the victim code [14,64]. To avoid side-channel vulnerabilities, programmers should write constant-time code, i.e., software which does not contain input or secret-dependent control flow or memory accesses. Depending on the problem at hand, this can be achieved by different means: For example, a secret-dependent data access may be replaced by accessing every element of the target array and then choosing the correct one with a mask. Conditionals can be adjusted by always executing both branches and then selecting the result. However, for complex projects like large cryptographic libraries, finding such vulnerabilities is a difficult and time-intensive task. Thus, the research community has developed a number of analysis strategies [5,13,16,17,33,46,60,62,63], that aim at automating the detection of side-channel leakages in a given code base. 
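The masking idea described above can be sketched in JavaScript (a hypothetical illustration of the general technique; the function names are ours, not from the paper):

```javascript
// Constant-time select: returns a if bit === 1, b if bit === 0,
// without a secret-dependent branch. Operates on 32-bit integers.
function ctSelect(bit, a, b) {
  const mask = -bit | 0; // bit=1 -> all ones, bit=0 -> all zeros
  return (a & mask) | (b & ~mask);
}

// Constant-time equality test for 32-bit integers: 1 if x === y, else 0.
function ctEq(x, y) {
  const diff = (x ^ y) | 0;
  // diff === 0 exactly when x === y; fold all bits into the sign bit.
  return (((diff | -diff) >>> 31) ^ 1) | 0;
}

// Leaky variant: the memory access pattern depends on secretIndex.
function lookupLeaky(table, secretIndex) {
  return table[secretIndex];
}

// Constant-time variant: touches every element, selects with a mask.
function lookupCT(table, secretIndex) {
  let result = 0;
  for (let i = 0; i < table.length; i++) {
    result = ctSelect(ctEq(i, secretIndex), table[i], result);
  }
  return result;
}
```

Note that JavaScript engines may still introduce timing variation (e.g., through JIT compilation or garbage collection), so such patterns reduce, but do not formally guarantee, secret-dependent timing behavior.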
However, a recent study [29] that conducted a survey between crypto library developers found that while most developers were aware of and welcome those tools, they had major difficulties using them due to bad usability, lack of availability and maintenance or high resource consumption. The authors worked out a number of recommendations for creators of analysis tools: The tools should be well-documented and easily usable, such that adoption requires low effort from the developer. Another focus is on compatibility: The analysis shouldn't require use of special languages or language subsets. Finally, the tools should aid efficient development, i.e., quickly yield results with less focus on completeness, making them suitable for inclusion in a continuous integration (CI) workflow. In this work, we study how these challenges can be addressed, and adapt the existing Microwalk [63] framework to fit the given objectives. Microwalk was originally designed for finding leakages in binary software, for which it generates a number of execution traces for a set of random inputs and then compares them. The dynamic analysis approach of Microwalk is quite fast, as it does only run the target program several times, and then compares the resulting execution traces with a simple linear algorithm. However, due to the simplistic leakage quantification, the resulting analysis reports contain a lot of potential vulnerabilities, with little or even misleading information about their cause. This makes it difficult to assess their severity and address them efficiently, especially for complex libraries. Finally, the initial setup can be time-consuming, as the different components need to be compiled from source. We mitigate these issues by designing Microwalk-CI , which features a new leakage analysis algorithm that combines the performance benefits of dynamic analysis with an accurate leakage localization and quantification, easing the assessment and investigation of the reported leakages. 
In addition, we add support for running Microwalk-CI in an automated environment like a CI pipeline, and create a Docker image that contains Microwalk-CI and its dependencies for easy use. Finally, we create simple templates that allow quick adoption of Microwalk-CI 's analysis capabilities into a cryptographic library's CI workflow with little effort by the developer. During the research on leakage detection tools, as part of their evaluation, many vulnerabilities in popular cryptographic libraries have been uncovered and fixed. However, the developer community is moving away from compiled languages like C or C++ and instead embraces interpreted scripting languages like JavaScript or Python. In fact, the 2021 Stack Overflow developer survey and the January 2021 Redmonk programming language ranking found that those two languages are the most popular, both for private and professional contexts [45,56]. JavaScript was originally designed as a client-side language for web browsers, but, with the arrival of Node.js [40], it has seen growing adoption for server-side software as well. Consequently, the community has come up with a number of cryptographic libraries written in pure JavaScript. However, due to the lack of appropriate tooling and attention of the research community, these libraries have never been vetted for their robustness against side-channel attacks, which is worrying given the fact that the servers using them may be hosted in IaaS cloud environments. To address this, Microwalk-CI offers a novel method for applying Microwalk's original binary analysis algorithms to JavaScript libraries by using the Jalangi2 [48,55] source code instrumentation library to generate compatible traces. The new tracing backend comes with a simple code template and supports full automation, such that the analysis can be easily added to the CI workflows of respective libraries. 
We evaluate several popular JavaScript cryptographic libraries, uncovering a number of high-severity leakages. By supporting the analysis of JavaScript, we strive to improve the security of software and raise awareness for the importance of constant-time cryptographic code in the community of web and cloud developers. The underlying concepts of our source-based trace generator can be used for building analysis support for other programming languages as well, making side-channel leakage analysis available for all common platforms and at a low barrier.

Our Contribution

In summary, we make the following contributions:

• We introduce a novel call tree-based analysis method, which allows efficient and accurate localization and quantification of leakages.
• We show the first dynamic leakage analysis tool for JavaScript libraries.
• We propose a new approach for integrating a fully automated timing leakage analysis into the crypto library development workflow, which requires low effort from the developer and immediately reports newly introduced vulnerabilities.
• We evaluate the new analysis framework with several widely-used JavaScript libraries and uncover a significant number of previously unaddressed leakages.

The source code of Microwalk-CI is available at https://github.com/microwalk-project/Microwalk.

Disclosure

We contacted the authors of the affected libraries, informed them about our findings and offered to aid in fixing the vulnerabilities. The author of elliptic acknowledged the discovered vulnerabilities, but noted that the package is no longer maintained and that fixing the vulnerabilities would require major changes, as side-channel resistance wasn't part of the underlying design considerations. There was no response from the other library authors.

BACKGROUND

2.1 Microarchitectural Timing Attacks

Implementations of cryptographic algorithms are often run on hardware resources that are shared between different processes.
If the code exhibits secret-dependent behavior, malicious processes can use the resulting information leakage to extract secrets like private keys through side-channel analysis. Cache attacks are a prominent example for exploiting resource contention with a victim process: By measuring the time it takes for repeatedly clearing and accessing a specific cache entry, the attacker can see whether the victim accessed a similar cache entry in the meantime [1,10,42,53,65]. Other attack vectors include the translation lookaside buffer (TLB) [22] and the branch prediction unit [2]. The most widely used software countermeasure against these attacks is writing constant-time code that does not contain secret-dependent memory accesses or branches, and that uses instructions that do not come with operand-dependent runtime [12]. There exist a variety of tools [5,13,16,17,33,46,60,62,63] that feature different analysis approaches. Some of these tools are open-source, with varying performance and usability [29].

2.2 Microwalk
Microwalk [63] is a framework for checking the constant-time properties of software binaries in an automated fashion. It follows a dynamic analysis approach, i.e., it executes the target program with a number of random inputs and collects execution traces, which contain branch targets, memory allocations and memory accesses. This is done through a three-stage pipeline, where traces are generated, preprocessed, and analyzed. Each stage has various modules, which are chosen by the user depending on their application. Furthermore, Microwalk has a plugin architecture that allows easy extension by loading custom modules. Currently, Microwalk only has one trace generation module, which is based on Intel Pin [27] and produces traces for binary software. Correspondingly, there is a preprocessor module that converts the raw traces generated by Pin into Microwalk's own format.
Finally, these preprocessed traces can be fed into a number of analysis modules, e.g., for computing the mutual information between memory access patterns and inputs, or for dumping the preprocessed traces in a human-readable format.

2.3 Mutual Information and Guessing Entropy
Mutual information (MI) quantifies the interdependence of two random variables, i.e., it models how much information an attacker can learn about one variable on average by observing the other one [23]. It has been widely used for quantifying side-channel leakages [9,28,63,67]. The mutual information of the random variables $X \colon \Omega \to \mathcal{X}$ and $Y \colon \Omega \to \mathcal{Y}$ is defined as
$$\mathrm{MI}(X, Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} \Pr[X = x, Y = y] \cdot \log_2 \frac{\Pr[X = x, Y = y]}{\Pr[X = x] \cdot \Pr[Y = y]}.$$
The information is measured in bits. In our setting, the random variable $X$ represents a secret and $Y$ the information that can be gathered by observing the system state through a side-channel.

The guessing entropy (GE) of a random variable $X \colon \Omega \to \mathcal{X}$ quantifies the average number of guesses that have to be made in order to guess the value of $X$ correctly [32]. If $\mathcal{X}$ is indexed such that $\Pr[X = x_i] \ge \Pr[X = x_j]$ for $x_i, x_j \in \mathcal{X}$ and $i \le j$, the guessing entropy is defined as
$$\mathrm{G}(X) = \sum_{1 \le i \le |\mathcal{X}|} i \cdot \Pr[X = x_i].$$
The conditional guessing entropy (conditional GE) $\mathrm{G}(X \mid Y)$ for random variables $X$ and $Y$ is defined as
$$\mathrm{G}(X \mid Y) = \sum_{y \in \mathcal{Y}} \Pr[Y = y] \cdot \mathrm{G}(X \mid Y = y).$$
$\mathrm{G}(X \mid Y)$ measures the expected number of guesses that are needed to determine the value of $X$ for a known value of $Y$. A variant of the conditional GE, the minimal conditional guessing entropy (minimal GE), determines the lower bound of expected guesses. It is defined as
$$\hat{\mathrm{G}}(X \mid Y) = \min_{y \in \mathcal{Y}} \mathrm{G}(X \mid Y = y),$$
i.e., it outputs the minimal number of guesses that are needed to find out one of the possible values of $X$.

2.4 JavaScript Instrumentation
JavaScript code can be instrumented in different ways, each coming with their own benefits and drawbacks. FoxHound [49] modifies Firefox's JavaScript engine.
While this allows many optimizations, it comes with the downside of being constrained to one specific JavaScript engine and requiring constant maintenance to keep up with the upstream project. OpenTelemetry [41] and Google's tracing framework [21] create program traces to monitor and profile software, but require the developer to insert instrumentation calls into their source code manually. While being very specific and thus only introducing the necessary overhead, they are not generally applicable without a lot of manual effort. Lastly, the JavaScript code can be dynamically instrumented in a source-to-source fashion. Jalangi2 [48,52,55] wraps the loading process of JavaScript files and injects instrumentation code into the source code. The user of the instrumentation framework can write and register custom callback routines, which are supplied with the current execution state. This approach comes with a certain overhead, but it is flexible and works with arbitrary JavaScript code without manual adjustments.

3 A FAST LEAKAGE ANALYSIS ALGORITHM
We propose a new leakage analysis algorithm that is optimized for quickly delivering detailed leakage information, aiding developers in efficiently locating and fixing issues. Before we dive into the algorithm, we define the leakage model and discuss the objectives a thorough leakage analysis must meet. Then, we describe how the traces are processed to build a call tree, which in a final step is broken down to compute leakage metrics for specific instructions.

3.1 Leakage Model
To ensure that we detect all leakages which may be exploited by current and future attack methods, we choose a strong leakage model: An attacker tries to extract secret inputs from an implementation through a side-channel attack, which allows them to get a trace of all executed instructions and all accessed memory addresses. They also have access to all public inputs and outputs.
Under certain conditions, a hypervisor/OS-level adversary can single-step instructions [4,38,54], or observe memory accesses at below cache-line resolution [37,66]. However, for more relaxed adversarial scenarios like cross-VM attacks, granularities of 32 or 64 bytes and hundreds of instructions may be more appropriate. Adjusting the processing of the leakage accordingly allows an analysis under such a leakage model as well, but we believe that the most conservative approach should be applied, i.e., assuming a maximum-resolution attacker. Attacks exploiting speculative execution are considered out of scope, as we focus on leakages caused by actual secret-dependent control flow or memory accesses, i.e., code paths that are reached architecturally. This leakage model and the following analysis approach are consistent with the models used by Microwalk [63] and DATA [62].

3.1.1 Analysis approach. The leakage model can be turned into a dynamic analysis approach by making the following observation: Since the attacker tries to infer a secret solely by looking at an execution trace and public inputs/outputs, they can only succeed if the trace depends on the secret. That is, if changing the secret never influences the observed trace, the implementation does not leak the secret and is constant-time. We model this by giving the attacker a number of secret inputs and corresponding execution traces, and asking them to map the inputs to the respective traces. If they perform better than guessing, we consider the implementation as leaking. If all traces are identical, the implementation is considered constant-time.

3.2 Objectives
For an efficient and useful dynamic leakage analysis, we identified three major objectives: accurate localization of leakages, a quantification of leakage severity, and performance.

Localization.
While varying address traces for a memory read instruction are a clear sign that there is leakage, which can be extracted by monitoring that particular instruction [63], they do not indicate where the leakage is actually caused. E.g., a non-constant-time function may be called two times, once with a secret-dependent parameter, and once a varying number of times in a loop, but with a public parameter.

Quantification. In addition to an accurate localization, there is a need for a rough quantification of the severity of leakages. For example, a chain of nested if statements may only leak a few bits of the secret each, but the leakage aggregates up to a point which allows an attacker to easily distinguish different secrets just by looking at the resulting sequence of branch instructions. At the same time, a lone if statement which merely handles a special case during key file parsing (e.g., whether a parameter has some additional byte) does not necessarily pose an urgent problem. The analysis should assign each leakage a score allowing the developer to prioritize between findings.

Performance. Finally, for integrating the leakage analysis into a development workflow, performance is important: When checking whether a proposed change impacts security, or whether a given patch fixes a previously discovered leakage, the developer should not need to wait tens of minutes or hours until analysis results are available. The analysis should be efficient enough to run it both on a standard developer machine and in a hosted CI environment.

3.3 Algorithm Idea
In order to find leakages, we need to compare the generated traces, and find sections where they diverge and, later, merge again. However, due to the performance requirements and the immense size of traces, especially for asymmetric primitives, we cannot afford running a traditional diff or trace alignment algorithm, which usually has quadratic complexity.
At the same time, we do not want to lose information, as we want to accurately pinpoint the detected leakages. Thus, we opt for a data structure that preserves all necessary information in an efficient way, and which allows us to conduct a thorough leakage analysis that can discover and quantify trace divergences in linear time. For that, we merge the traces into a call tree, where each function call and a few other trace entries form the nodes, and where subsequent function calls and trace divergences generate branches. Each node holds the IDs of the traces which reach that node. The tree can be built on-the-fly while the traces are processed, so it can be integrated into a leakage analysis pipeline like the one offered by Microwalk. After the traces have been processed, a final step traverses the tree and evaluates for each instruction in each call stack, whether it caused a divergence and how severe that divergence is. In the following sections, we elaborate on the respective steps.

3.4 Step 1: Building the Call Tree
We merge the traces into a tree in a greedy way, i.e., we simultaneously iterate over a trace and the current tree entries, and add the trace entries to the tree. In order to save memory and get a readable representation of the traces with little tree depth, we can exploit the fact that traces of constant-time implementations tend to have long shared sequences without any differences, and thus use a radix trie instead of a plain tree, such that each node holds a sequence of consecutive trace entries that is as long as possible. The resulting tree for the example code in Figure 1 is illustrated in Figure 8 in Appendix A.

3.4.1 Types of trace entries. In order to address the leakage model, the execution traces used by Microwalk contain information about branches, memory allocations and memory accesses. Branches cover call, return and jump instructions.
A branch trace entry has a source address, a target address and a taken bit that denotes whether the branch was taken or skipped (e.g., due to a failed comparison). The source and target addresses each consist of an image ID (i.e., the binary which contains the corresponding instruction) and an offset. Memory allocations are used to keep track of memory blocks on the heap and stack. Each time the analyzed program calls malloc or a similar function, a new allocation block is registered with a unique ID and the allocation block size. Memory accesses contain the image ID and offset of the corresponding instruction and the allocation block ID and offset of the accessed address. This relative addressing allows to compare traces even when they each operate on their own allocated memory regions, which have different absolute addresses.

3.4.2 Tree layout. As mentioned above, we chose a radix trie-like representation of the merged trace entries, as this reduces tree depth, speeds up analysis and enhances readability of tree dumps. A tree node consists of two parts: the consecutive trace entries which are present for all traces hitting this node, and a list of (edges to) split nodes (Figure 2), which represent divergences between the different traces. The trace entry list may contain call nodes (Figure 3) which open their own sub tree, but always return back into the current node and may be followed by other trace entries. Edges start from within the trace entry list (for calls) or from the split node list. If an edge leads to a split node, it is annotated with the trace IDs taking this specific edge.

Figure 3: A generic function call with a call node.

When a function call entry is encountered, a new call node is created, that subsequently receives the trace entries for the given function. Once the function ends (return statement), the trace entry list of the prior call node is continued.
Note that the return statement may also end up in the split node tree of the call node, if there are trace divergences within the function. Split nodes are only created when a function call or a jump targets a different instruction than the already existing trace entry, as the resulting sub tree may be fairly different. Other differences like varying memory access offsets are only recorded in the respective trace entry, as they don't affect control flow and the current entry is thus likely followed by other, non-conflicting entries. A function call is handled by creating a new tree node at the current position in the list of consecutive trace entries of the current tree node. Afterwards, the current node is pushed onto a stack and the new call node is set as the current node, such that subsequent trace entries are stored in the new node. When encountering a return statement, the last node is popped from the stack, and insertion of trace entries is resumed after the earlier created call node. If the target address of the current call entry does not match the target address recorded in an existing call node, a split is triggered. If a conflict between an existing and a new trace entry is detected, the algorithm generates two new split nodes: One node receives the original conflicting trace entry, the remaining consecutive trace entries and the split node list of the current node; the other node is initialized with the new conflicting trace entry and an empty split node list. The branches to both nodes are annotated with the corresponding trace IDs.

[Figure 4, parts (a)/(b): example program counting the bits of a secret, with the corresponding call stack and leaking jump instructions]

    void f1(int secret) { f2(secret); }    // f1+0
    void f2(int secret) { f3(secret); }    // f2+0
    void f3(int secret) {                  // f3+0
        int tmp = 0;
        for (int i = 0; i < 2; ++i) {      // f3+2
            if (secret & (1 << i)) {       // f3+3
                ++tmp;
            }
        }                                  // f3+6
    }
    (a) Program

    Call stack:
        main+X -> f1+0
        f1+0 -> f2+0
        f2+0 -> f3+0
    Instructions:
        jump at f3+2
        jump at f3+3
        jump at f3+6
The current node is then set to the new split node, such that the new trace entries end up in the new node. The call node stack is not updated, i.e., the next return statement ends the divergence and restores the state before the last call node. This way, we can recover from a trace divergence and discover additional leakages in other function calls. Cases where there are more than two possible targets for an instruction (e.g., an indirect jump) are handled appropriately, by generating further split nodes at the same level.

3.5 Step 2: Leakage Analysis
After trace processing has concluded, we have a call tree that encodes the similarities and differences of all traces. We now perform a final step that collects this information and computes leakage measures, such that we can assign leakage information to each instruction, meeting our localization and quantification objectives.

3.5.1 Building call stacks with trace ID trees per instruction. First, we consolidate the call tree into a number of call stacks, and store the trace split information for each instruction in the corresponding call stack. The split information consists of trace ID trees, which encode how multiple executions of the given instruction for a certain function invocation led to trace divergence. This greatly simplifies the computation of leakage measures for individual instructions, and allows us to display expressive information about the leakage behavior of a given instruction to the developer. If a function is called multiple times (i.e., the same call stack occurs repeatedly), additional trace ID trees are created (no merging). When a split is encountered, new child nodes for each edge of the split are added to the trace ID tree for the responsible instruction. Figure 4 illustrates the resulting trace ID tree for a simple program counting bits in a secret variable: When the jump instruction in question is encountered first, all traces are identical (tree level 0).
At that point, execution diverges for traces with an even versus an odd secret. After the second iteration, traces are again split depending on the second bit of the secret. In the end, there are four different possible traces for the given function call.

3.5.2 Computing leakage measures. After recording the divergence behavior of instructions per call stack, we can compute various measures to quantify the corresponding leakage. We feature three efficiently computable metrics that give the developer an indication of the severity of each detected leakage: mutual information, conditional guessing entropy and minimal conditional guessing entropy. If the function containing the analyzed instruction is invoked multiple times for the same call stack and thus produces multiple trace ID trees, the algorithm computes the metrics for each tree separately and outputs the mean, minimum, maximum, and standard deviation for each metric. All metrics depend on the sizes of the leaves $\lambda_1, \dots, \lambda_\ell$ in the trace ID tree, where $n$ denotes the total number of traces.

Mutual information measures the average amount of information an attacker learns when observing a trace. The MI of the trace ID $T$ and the observed trace $\Lambda$ is defined as
$$\mathrm{MI}(T, \Lambda) = \sum_{i=1}^{|T|} \sum_{j=1}^{|\Lambda|} \Pr[T = t_i, \Lambda = \lambda_j] \cdot \log_2 \frac{\Pr[T = t_i, \Lambda = \lambda_j]}{\Pr[T = t_i] \cdot \Pr[\Lambda = \lambda_j]}.$$
With
$$\Pr[T = t, \Lambda = \lambda] = \begin{cases} 0 & \text{if } t \notin \lambda \\ \frac{1}{n} & \text{if } t \in \lambda \end{cases}$$
we get
$$\mathrm{MI}(T, \Lambda) = \sum_{i=1}^{|\Lambda|} \frac{|\lambda_i|}{n} \cdot \log_2 \frac{\frac{1}{n}}{\frac{1}{n} \cdot \frac{|\lambda_i|}{n}} = \frac{1}{n} \sum_{i=1}^{\ell} |\lambda_i| \cdot \log_2 \frac{n}{|\lambda_i|}.$$
The value of $\mathrm{MI}(T, \Lambda)$ can be interpreted as bits: In the best case, there is only one leaf containing all trace IDs, such that the attacker learns nothing (0 bits). In the worst case, with one leaf for each trace ID, the attacker learns $\log_2(n)$ bits. The MI of the example in Figure 4 is $\frac{1}{6}\left(2 \cdot 2 \cdot \log_2(3) + 2 \cdot 1 \cdot \log_2(6)\right) \approx 1.92$ bits.

This metric has a few drawbacks: Due to its logarithmic nature, with an increasing number of traces it only grows slowly. Another shortcoming is the averaging, i.e., a high leakage in a few cases may get suppressed by the smaller leakage of all other cases.
Finally, it may be mistakenly interpreted as additive due to its "bits" unit (i.e., 10 instructions leaking 3 bits each does not mean that there is a leakage of 30 bits). However, it does perform well for small and balanced leakages, e.g., when an instruction constantly divides the traces into two groups of similar size.

Conditional guessing entropy measures the expected number of guesses an attacker needs for associating a given trace with a secret input. The conditional GE $\mathrm{G}(T \mid \Lambda)$ for determining a trace ID, modeled as random variable $T$, for a known value of an observed trace (random variable $\Lambda$) is calculated as
$$\mathrm{G}(T \mid \Lambda) = \sum_{j=1}^{|\Lambda|} \Pr[\Lambda = \lambda_j] \cdot \mathrm{G}(T \mid \Lambda = \lambda_j) = \sum_{j=1}^{|\Lambda|} \Pr[\Lambda = \lambda_j] \cdot \sum_{i=1}^{|T|} i \cdot \Pr[T = t_i \mid \Lambda = \lambda_j]. \quad (1)$$
Since
$$\Pr[T = t \mid \Lambda = \lambda] = \begin{cases} 0 & \text{if } t \notin \lambda \\ \frac{1}{|\lambda|} & \text{if } t \in \lambda, \end{cases}$$
we can simplify (1) to
$$\mathrm{G}(T \mid \Lambda) = \sum_{i=1}^{\ell} \Pr[\Lambda = \lambda_i] \cdot \frac{1}{|\lambda_i|} \sum_{j=1}^{|\lambda_i|} j = \frac{1}{2n} \sum_{i=1}^{\ell} |\lambda_i| \cdot (|\lambda_i| + 1).$$
Note that the value of $\mathrm{G}(T \mid \Lambda)$ is upper-bounded by $\frac{n+1}{2}$, which is the best case where there is only one leaf which contains all trace IDs, i.e., all traces are identical. For the example in Figure 4, we get $\mathrm{G}(T \mid \Lambda) = \frac{1}{2 \cdot 6}(6 + 2 + 6 + 2) \approx 1.33$ guesses. Small values for the conditional GE convey that an instruction sequence leads to almost unique traces, implying that there is widespread leakage affecting most to all traces. On the other side, a high value means that most traces are similar and do not leak much information. However, this being an average measure just like MI, there may well be special cases where there is a very high leakage. Those risk being obscured by this metric, thus we add an additional worst-case metric designed for catching these cases.

Minimal conditional guessing entropy measures the minimal number of guesses an attacker needs for associating a given trace with a secret input. It is calculated similarly to the conditional GE, but takes the minimum of all individual outcomes instead of weighting them:
$$\hat{\mathrm{G}}(T \mid \Lambda) = \min_{j=1,\dots,|\Lambda|} \mathrm{G}(T \mid \Lambda = \lambda_j) = \min_{i=1,\dots,\ell} \frac{|\lambda_i| + 1}{2}.$$
For the example in Figure 4, we get $\hat{\mathrm{G}}(T \mid \Lambda) = \min\{1.5, 1, 1.5, 1\} = 1$ guess, i.e., there is at least one trace that is unique. Minimal GE is the most definite leakage measure; it gives the number of guesses needed for the trace which leaks most. A high value for the minimal GE affirms that there is no outlier with high leakage. We thus recommend using this metric when evaluating the severity of a detected leakage.

3.5.3 Leakage severity and score. While the full analysis report provides detailed information about each leakage, we also seek to condense this information into a single, uniform score, such that the developer can quickly prioritize. That score should require little context: The developer should not need to be familiar with entropy, nor know analysis details like the particular number of test cases, which determines the upper bounds for the various metrics. Additionally, providing a single score allows easy integration of the leakage report into the user interface of modern development platforms like GitLab. The platform can then use that score for sorting and assigning a severity to the leakage. We chose minimal GE for computing the leakage score, as it represents the worst-case leakage. Instead of reporting the minimal GE value directly, we map it onto a linear scale of 0 to 100, where 0 corresponds to a minimal GE of $\frac{n+1}{2}$ (i.e., no leakage), and 100 corresponds to a minimal GE of 1 (i.e., maximum leakage). If there are multiple trace ID trees for a given instruction (see Section 3.5.1), we show the mean and the standard deviation over the individual minimal GE values.

3.6 Implementation
We implemented the described algorithm as a new analysis module in Microwalk-CI's source tree. It integrates directly into the leakage analysis pipeline, i.e., it receives and handles preprocessed traces from the previous pipeline stage, the trace preprocessor.
The tree is implemented as a recursive data structure, where each node holds a list of successor and split nodes. We do not store the consecutive non-diverging trace entries as a plain ITraceEntry list (as is suggested in the algorithm description), but as full-featured tree nodes as well. Apart from making the code more readable, this simplifies adding new divergences and storing temporary data for the final leakage analysis step, at the cost of additional memory overhead (we discuss this trade-off in Section 6.2). Our implementation offers functionality for generating leakage reports and other detailed analysis result files optimized for readability, including an optional full call tree dump for debugging purposes. All features can be controlled via the Microwalk-CI configuration file infrastructure, allowing easy adoption of the new analysis module. In total, the module has 1,363 lines of C# code.

4 JAVASCRIPT LEAKAGE ANALYSIS
We now show how we can apply Microwalk-CI's generic analysis methods to JavaScript libraries, despite them being originally designed for binary analysis. First, we present a simple trace generator relying on the Jalangi2 instrumentation framework. Then, we show how these traces can be preprocessed such that they use the generic trace format from Microwalk-CI.

4.1 Instrumenting JavaScript code
Microwalk-CI expects multiple execution traces with varying secret input for the analyzed target function. These execution traces are then fed into various analysis modules for finding non-constant-time behavior, i.e., control-flow or data-flow dependencies on secret input. A trace needs to contain the following information:
• Address and size of all loaded program modules (e.g., binaries or source files, called "images" internally);
• the control-flow of the analyzed program, encoded as a sequence of branch source and target addresses;
• address and size of all heap memory objects; and
• the instructions and target addresses of all memory accesses.
We translate this to JavaScript by collecting a trace of all executed code lines, and recording access offsets to any object or array. For instrumentation, we use Jalangi2 [48]. Jalangi2 instruments the code at load time by inserting callbacks before and after certain source tokens, e.g., conditionals, expressions or return statements. First, we register the provided SMemory analysis module, which assigns to each object a shadow object containing a unique ID and the object value, allowing us to map accesses to known objects. We then create our own analysis front-end, called tracer, which registers some callbacks to record the necessary information and write it to a file for further processing. The tracer has 252 lines of code, and is chained after the SMemory analysis, which supplies the means for memory access tracking. Figure 5 illustrates the structure of the trace files for a simple toy example. The example has an input-dependent branch in line 10 and a secret-dependent memory access in line 11, which should be detected by our analysis toolchain.

4.2 Trace File Structure
Each trace is structured as follows: The first element defines the type of the trace event, e.g., Call or Expr (for Expression). This is followed by the exact source location of the event, meaning the script file name with start/end line and column number. For a Call, the first location describing the source of the call is followed by a second location describing the target, which in turn is followed by the name of the called function. Expr entries log the locations of all executed expressions; this information is only needed for reconstructing control flow edges. Similarly, Ret1 records the occurrence of a return statement, which must be tracked due to not being covered by an expression.
Ret2 is generated after a function has returned, and records the entire ranges of the function call and the executed function; however, the associated callback does not know where the control flow originated from, thus the necessity of tracking expressions and return statements. The same is true for Cond entries, which mark the execution of a conditional and thus the beginning of a control-flow edge. To illustrate this, Figure 5b shows the case of a taken else-branch with the assignment ret = 0 in line 14 of the trace. If we compare this trace to Figures 5c and 5d, which both show a taken if-branch, it becomes apparent that the control-flow deviation only shows up due to the differences in lines 14 and 15; everything else is identical. Thus, only tracking all expressions and read/write operations allows us to reconstruct the entire control flow. Comparing line 14 of Figures 5c and 5d demonstrates how the traces enable us to discover secret-dependent memory accesses. The last two elements of the Get entry represent the ID of the shadow object, and the accessed property or offset. Both elements differ between the traces: The object IDs are assigned by Jalangi2 and thus vary for subsequent invocations of processTestcase, and the accessed offset depends on the input. The analysis conducted by Microwalk-CI will later match the object IDs belonging to identical objects between traces, such that it can compare the offsets. This example shows a very short excerpt of a trace for a toy program. Analyzing real-world code may result in traces with millions of events, resulting in huge files. To reduce the storage overhead, we compress the trace by shortening strings and encoding repeating lines. For most targets, these measures are sufficient to keep the trace files within a few tens of megabytes. Additional compression could be achieved, e.g.,
by using LZMA, which, due to the high rate of repetitions and hence low entropy, usually manages to bring the trace file size down to a few hundred kilobytes.

4.3 Trace Preprocessing
These raw traces are not yet suitable for use by Microwalk-CI; we need to translate the sequence of executed lines to branch entries, generate allocation information for the objects showing up in the traces, and finally produce compatible binary traces, which can be fed into analysis modules like the one described in Section 3. For this, we implemented a new preprocessor module, which has 702 lines of code and resides in a plugin. The module iterates through each entry of the raw trace, generating a preprocessed trace on-the-fly. It recognizes branches by waiting for the next code location that is outside the corresponding conditional; if an access to a previously unknown object is detected, an allocation is created. Note that our analysis module is designed for binary analysis, i.e., it works with actual memory addresses and offsets. In fact, this proves valuable for later analysis, as it simplifies encoding trace entries and gives clear identifiers for referring to certain instructions. Thus, we chose to generate a mapping of observed source locations to dummy addresses, by encoding the line and column numbers onto a base address belonging to the respective source file. This mapping is stored in a special map file, such that it can be mapped back to a human-readable source line after analysis. In summary, we now have a tool chain that instruments JavaScript programs, generates raw execution traces and converts them into the Microwalk-CI binary trace format, allowing us to analyze arbitrary JavaScript software with the existing and new generic analysis algorithms, without having to create a dedicated analysis tool.
5 INTEGRATION INTO DEVELOPMENT WORKFLOW
In this section, we show how one can simplify the usage of Microwalk-CI to a degree where it only needs a one-time effort by the developer to set it up and register the functions that need to be analyzed. From that point, the tool is part of the CI pipeline of the respective library, and runs each time a new commit is submitted. The developer is then able to easily verify whether a code change introduces new leakages, without requiring any manual intervention.

5.1 Dockerizing the Analysis Framework
In order to use the analysis framework in an automated environment, we must ensure that all its dependencies are present and the environment is configured correctly. For this task, common CI systems allow the use of Docker containers. When a job starts, a new container is started from a predefined Docker image. The CI system checks out the current source code and then executes a user-defined script within the container. This has the advantage of being independent of the host system: The analysis job may run on the developer's private server, but also on cloud infrastructure administered by external providers. We thus create a pre-configured Docker image containing the components needed for our JavaScript analysis: the Jalangi2 runtime, the analysis script and the Microwalk-CI binaries. The image is uploaded to a Docker registry.

5.2 Analysis Template
Having solved the installation and configuration problem, we now need to set up the necessary infrastructure to actually run the analysis for the specific library. Instead of requiring the developer to dive into the proper usage of the analysis toolchain, we designed a template that is simple and generic enough to work with most libraries, and which only needs minimal understanding and adjustment. The resulting file structure is depicted in Figure 6.
The template features a script file index.js, which serves as the analysis entry point and is responsible for loading test cases and executing the target implementations. A target is any independently testable code unit, e.g., a single primitive in a cryptographic library. The individual microwalk/target-*.js script files consist of a single function, which receives the current test case data buffer and is expected to call the associated library code. Each target also needs a number of test cases, which may have a custom format and thus need to be generated once by the developer. The test cases are stored in the microwalk/testcases/ subdirectory. Finally, the microwalk folder has a bash script analyze.sh, which is called by the CI. The analysis script iterates through the target files and runs the Microwalk-CI pipeline. The Microwalk-CI configuration is located in two generic YAML files, which can be adjusted by the developer if they wish to use other analysis modules or options than the preconfigured ones. The abstractions offered by our template allow the developer to focus on supplying simple wrappers for their library interface and generating a number of random test cases; everything else is taken care of by the existing scripts. We implemented a similar template for compiled software, so the approach is the same for C libraries.

Reports

After the CI job has completed, it yields a couple of analysis result files. In line with our analysis objectives in Section 3, these files are designed to be human-readable and offer as much insight into a leakage as possible. However, if there are a lot of leakage candidates, going through this list may be tedious, especially if the result files are stored separately and need to be inspected manually for each commit. We thus looked into ways of integrating these results into the usual development workflow. For GitLab, there is a Code Quality Reports [20] feature, which shows up in the merge request UI.
It allows assigning a severity, a description, and a source code file and line to each entry, which makes it suitable for displaying the results of our leakage analysis. Microwalk-CI consolidates the analysis result into a report that can be parsed by GitLab. For this, the leakages must be mapped to their originating locations in the source code. This is straightforward for JavaScript, as this information already shows up in the analysis result file; for binary programs, we resort to parsing the DWARF debug information in order to map offsets to file names and lines. The code quality report also shows a severity for a given problem, which can be one of info, minor, major, critical and blocker (a continuous scale is not supported). Assigning these levels to specific leakages is somewhat arbitrary and depends on the preferences of the individual developer; we settled for minor if the minimal GE is higher than 80% of its upper bound, critical if the minimal GE is lower than 20% of its upper bound, and major for everything in between. This ensures that instances with high leakage are displayed prominently. Figure 7 shows an example report.

Figure 7: GitLab report for the toy example from Figure 5a, listing a critical finding ("Found vulnerable memory access instruction, leakage score 100.00% +/- 0%" in target.js:11) and a major finding ("Found vulnerable jump instruction, leakage score 53.33% +/- 0%" in target.js:10). The leakage score is a relative representation of the minimal GE as explained in Section 3.5.3.

EVALUATION AND DISCUSSION

To evaluate Microwalk-CI, we applied it to several popular JavaScript crypto libraries. In the following, we describe our experimental setup and discuss the performance and discovered vulnerabilities.
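The severity assignment described above (derived from the minimal GE relative to its upper bound, with the 20%/80% thresholds taken from the text) can be sketched as a small mapping function; the function name is illustrative:

```javascript
// Map a leakage's minimal guessing entropy (GE), relative to its
// upper bound, onto GitLab's discrete severity levels. A high
// remaining GE means little information leaked, and vice versa.
function leakageSeverity(minimalGE, upperBound) {
  const ratio = minimalGE / upperBound;
  if (ratio > 0.8) return "minor";    // almost all guessing entropy remains
  if (ratio < 0.2) return "critical"; // most of the secret is leaked
  return "major";
}
```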
Experimental Setup

As targets, we pulled eight popular JavaScript libraries for cryptography and utility functions from NPM, and set up a local GitLab repository for each. Using the version from NPM instead of the version from GitHub allows us to analyze the code deployed to millions of users. We then applied our template and created target-*.js files for selected cryptographic primitives and utility functions that deal with secret data. For each target we generated 16 random test cases, which were subsequently checked into the source tree. The GitLab instance takes care of managing the CI jobs and visualizing the resulting code quality reports. The analysis jobs themselves are executed through a Docker-based GitLab Runner on a separate machine (build server), which has an AMD EPYC 7763 processor with 128 GB DDR4 RAM. We configured the Microwalk-CI trace preprocessor step to use up to 4 CPU cores. After all CI jobs had completed, we collected the performance statistics generated by GitLab and the CI jobs, and went through the leakage reports. The results are visualized in Table 1.

6.2 Performance

6.2.1 Computation time. The CPU time spent for trace generation, preprocessing and analysis mostly depends on two factors. First, it correlates with the complexity of the analyzed targets: For the investigated libraries, symmetric algorithms and utility functions performed very well, while asymmetric primitives took significantly longer, which is expected. Second, the CPU time scales linearly with the number of test cases. We discuss the corresponding trade-off between accuracy and performance in Section 6.4.

Table 1: Targets analyzed with JavaScript Microwalk-CI, performance metrics and the number of detected leakages (total and unique code lines). "Tr. CPU" shows the CPU time for generating the raw traces, "Prep. CPU" for trace preprocessing, and "An. CPU" for the analysis step.
"Duration" denotes the wall clock time spent for the entire CI job (including setup and cleanup). Finally, "Prep. RAM" and "An. RAM" show the peak memory usage for the preprocessing and analysis steps, respectively.

The computational cost of the trace generation step mainly stems from the instrumentation itself, as our tracer script is already quite minimal. Significant optimizations would thus need to target the Jalangi2 implementation. The CPU time spent for the preprocessing step correlates with the size of the raw traces. The implementation is parallelized, so each trace can be processed independently. Profiling shows a slight bottleneck in the string parsing code, so switching to a binary trace format may further improve preprocessing performance, at the cost of higher code complexity in the trace generation. The analysis step took less than one CPU minute for every investigated target; this underlines the efficiency of the presented analysis algorithm and shows that it is fast enough to be used in a productive setting. The time spent for the analysis mostly depends on the trace size, as when building the call tree, each trace entry is converted into a tree node or embedded into an existing one. Another factor is the number of leakages, as is apparent when comparing the analysis times of the various ed25519 implementations. The measured overall duration heavily depends on where most CPU time is spent: While the trace generation and the analysis are mostly sequential, the trace preprocessing is heavily parallelized. Thus, a high CPU time for preprocessing contributes less to the overall duration. Apart from one outlier, elliptic's p384, the measured times stayed well within a few minutes, which can be considered acceptable for productive use in a CI pipeline.

6.2.2 Memory usage. The inherently different pipeline steps are also reflected in different memory requirements.
The trace generation step has a negligible memory footprint, which mostly depends on the size of the array that is used for buffering trace entries before writing them to the output file. The memory consumption of the preprocessing step is mainly caused by loading chunks of the trace file into memory and decompressing them. Since the preprocessing step is parallelized, several trace files are held in memory simultaneously; its memory usage can be reduced by decreasing the number of parallel threads (4 in our experiment). In the call tree analysis step, the memory demand is driven by the size of the preprocessed traces and, most notably, their level of divergence. If the target is constant-time and thus all traces are identical, the tree does not have any split nodes, so all traces end up in the same nodes. Adding a trace ID to an existing node does not involve any significant memory cost, as the trace IDs assigned to a call tree node are stored as a bitfield. However, if the traces diverge heavily, the analysis produces many split nodes with partially redundant subtrees. This distinction becomes apparent when comparing the implementations of ed25519 in elliptic and in tweetnacl: While requiring comparable tracing and preprocessing time, the constant-time implementation in tweetnacl needs much less memory than the implementation in elliptic, which relies on the leaking bn.js and hash.js libraries. By continuously applying Microwalk-CI and mitigating non-constant-time behavior so that only small leakages pop up during analysis, the peak memory usage of the analysis step can be kept within the bounds of a typical CI environment. Overall, the peak memory usage of Microwalk-CI is on an acceptable level. The highest memory consumption was observed when analyzing elliptic's p384.
This is certainly a worst case example, as large parts of its code are non-constant-time, while Microwalk-CI is optimized for finding mid-level leakages in otherwise fairly constant-time software. However, most of p384's code is shared with the other curve implementations, which contain the same leakages but can be analyzed more efficiently. Also, a significant part of the identified leakages resides in the SHA-512 implementation of hash.js, which should be analyzed separately. As expected, more complex algorithms like asymmetric cryptography require more memory in the analysis. But even those only require an amount of memory that is commonly available today.

Vulnerabilities

Our leakage analysis identified many leakages in the given libraries. We evaluated whether those are in fact actual vulnerabilities, and discuss a few examples in the following. In general, the leakages were correctly assigned to the respective leaking code lines, and we did not encounter any false positives (i.e., code lines that don't leak by themselves). In addition to the report shown in the user interface (Figure 7), a detailed leakage report is generated, which provides the full calling context for each leakage and shows how the different test cases contributed to tree divergences.

Leakages in AES. All investigated implementations of AES use table lookups into S-boxes or precomputed T-tables, making them highly susceptible to timing attacks. The exploitability of such lookups was previously shown in other work [10]. All leakages found in aes-js by Microwalk-CI have a maximum leakage score. Additionally, Microwalk-CI discovers input-dependent behavior in the AES-GCM encryption of node-forge. Manual inspection shows that these leakages in the tableMultiply function in the file cipherModes.js occur during the computation of the GHASH, which is used for the final computation of the authentication tag.
The tableMultiply function uses a table precomputed from the hash key and performs the multiplication by accessing this table with an index that is an intermediate value computed from the current ciphertext block and the previous hash value. Learning this intermediate value potentially allows an attacker to gain information about the GHASH key, compromising the authentication property. The implementation in node-forge uses 4-bit tables. Whether this implementation and its leakage are exploitable is left to future work. We recommend not having any secret-dependent non-constant-time code.

6.3.2 Elliptic curve implementations. node-forge and tweetnacl feature custom constant-time big number arithmetic that is specifically designed for the supported curves. The elliptic library, however, relies entirely on arithmetic from the general-purpose bn.js [11] library, which features a lot of input-dependent control flow and memory accesses. Thus, we see very high leakage across all supported primitives. The leakages detected in the big number and elliptic code itself are mostly assigned scores between 80 and 100. In addition, for computing the signature, elliptic's ECDSA implementation uses the hash.js [24] library, which offers pure-JavaScript implementations of SHA-1 and SHA-2. For ECDSA and EdDSA signatures with the curves p384 and ed25519, respectively, the leakage report points to a significant amount of leakage in lib/hash/sha/512.js for a variety of call stacks. Here, the implementation works around a limitation of JavaScript, which represents all numbers in IEEE-754 double precision floating point and temporarily converts them to 32-bit signed integers for bitwise arithmetic. If the most-significant bit ends up being 1, JavaScript sign-extends it such that the result is negative. The implementation checks for this in an if statement and adds 0x100000000 to get a positive number.
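This sign-extension work-around can be illustrated as follows; this is a paraphrase of the described pattern, not the verbatim hash.js code:

```javascript
// Secret-dependent branch: after 32-bit bitwise arithmetic, JavaScript
// sign-extends values whose most-significant bit is set, and the
// work-around branches on the sign to restore a positive number.
// The branch is taken iff the MSB is 1, leaking that bit via control flow.
function toUnsignedLeaky(word) {
  let r = word | 0;       // force 32-bit signed interpretation
  if (r < 0) {
    r += 0x100000000;     // secret-dependent addition
  }
  return r;
}

// A branch-free alternative: the unsigned right shift by zero
// reinterprets the value without secret-dependent control flow.
function toUnsignedConstantTime(word) {
  return (word | 0) >>> 0;
}
```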
This leakage may pose a security issue, as ECDSA and EdDSA use the hash function for generating a nonce from the private key. Microwalk-CI assigns leakage scores between 60 and 70 to most of the leakages in lib/hash/sha/512.js. Future work could investigate whether the leakage of the most-significant bit can be used to learn parts of the private key. The libraries elliptic, bn.js and hash.js are from the same author.

Base64 encoding. We also found leakages in some of the various Base64 implementations. All of them were caused by the use of lookup tables, where 6-bit chunks are mapped to ASCII characters and vice versa. The only known attack against Base64 encoding relies on a precise controlled channel that is not available for common JavaScript deployments [54]. However, depending on the memory layout of the respective lookup tables, partial information may be accessible via a cache attack. js-base64 also features a vulnerable Base64 implementation; however, it first checks whether the Buffer class with native Base64 support is present, which is the case for our Node.js build.

Number of Test Cases

As mentioned in the performance analysis, computation time and, to a lesser degree, memory consumption scale with the number of test cases. A higher number of test cases increases the chance of triggering uncommon code paths and thus finding more leakages. In the following, we analyze this trade-off and point out approaches for striking a good balance between accuracy and performance. In our performance analysis, we ran 16 test cases for each library. This number is within the same order of magnitude as the one used for the evaluation in [62], where the authors recommend running 10 test cases. To check whether the small number of test cases had an impact on the number of detected leakages, we repeated our analysis with 48 additional test cases (64 total) for each target and compared the results with those of the first analysis.

6.4.1 Performance.
Increasing the number of test cases does not affect every pipeline step in the same way. Doubling the number of test cases roughly doubles the CPU time needed for trace generation, but this does not apply to the analysis step: There, the first test case takes much longer than subsequent ones, as it needs to build the tree from scratch, which involves spending a lot of time in the memory allocator. Later non-diverging test cases only need to iterate the existing tree, which takes considerably fewer resources. We observed that the duration increased by a factor of 3 to 3.5, although we ran 4 times as many test cases.

Leakages. Except for targets in the libraries elliptic and node-forge, Microwalk-CI found the same number of leakages with 64 test cases as with 16. For elliptic, all targets show a small single-digit increase in the number of overall and unique leakages. For all new leakages, we determined that these were initially missed due to a saturation effect (see Section 6.6) and not due to a lack of coverage, and would have been found by re-running the analysis after fixing the preceding leakages. For node-forge's RSA implementation, the difference is a bit larger. While Microwalk-CI finds 223 overall and 111 unique leakages with 16 test cases, it was able to discover 255 overall and 125 unique leakages with 64 test cases. Manual investigation shows again that most leakages were missed due to a saturation effect. However, a small number was missed due to insufficient coverage by the initial 16 test cases.

Recommendations. We recommend that the developer choose an overall duration that is acceptable during ongoing development and determine a corresponding number of test cases. In addition, the coverage of the generated test cases could be checked with a separate tool to ensure that all relevant code gets executed.
Finally, the developer could add another, larger collection of test cases that runs as a final check before releasing the next version, where a longer analysis time is acceptable.

Comparison with Microwalk's original Analysis Module

Microwalk originally features two analysis modules that implement the memory access trace (MAT) analysis method for finding leakages. The method was first presented in [63]. For each memory accessing instruction, the modules generate a hash over all accessed offsets. By comparing the hashes between traces, the amount of leakage for each memory accessing instruction is computed. Due to the focus on memory accesses, control flow leakages are only discovered indirectly or may even be missed entirely. The first module, which was originally published with [63], generates only one leakage report per instruction. The second module, which was added later (referred to by us as the CMAT module), is an extension of the first module that additionally distinguishes between call stacks to achieve higher accuracy. To compare the existing analysis method with our new approach, we ran a selection of the targets with the CMAT module, using the same 16 test cases as in the initial analysis. The results are shown in Table 2. Since the CMAT module only stores a single mapping of call stacks and instructions to hashes, it generally requires fewer resources than our new tree-based approach, both in computation time and memory consumption. However, the preceding trace generation and preprocessing, which take most of the time, are identical, so the actual difference in overall duration is limited. For aes-js' AES-ECB implementation, the CMAT module reports a number of secret-dependent table accesses with full leakage, which are identical to the leakages reported by our new analysis module.
This is the kind of leakage that the MAT analysis was designed for: By hashing the sequence of memory addresses that a given instruction accesses, secret-dependent variations are discovered. Our new analysis detects these leakages through the address lists stored in the individual memory access trace entries, which ultimately yields the same result, but takes more memory. The result from the CMAT module for elliptic's p192 is very imprecise and contains many false positives: It reports 811 leaking lines in total, which includes lines like "this.pendingTotal = 0;". As a fixed offset is accessed, this line is a clear false positive. The leakage in question was in fact caused by a control flow variation higher up in the call chain, leading to a varying number of executions of the given instruction, which in turn produced a different memory access offset hash. The other false positives follow a similar pattern. Our new tree-based approach handles control flow and memory access leakages separately, which reduces false positives and allows accurately attributing a leakage to a specific code line.

Limitations of the Analysis Algorithm

Like other dynamic analysis approaches, Microwalk-CI needs good coverage of the program in order to give an accurate leakage detection result. If a particular path is never executed, it does not appear in the traces and thus never reaches the analysis modules. However, for cryptographic code, randomly generated test cases tend to work very well [62, 63]. For other targets, it may be worth exploring other methods for generating coverage, e.g., fuzzing. Finally, in our analysis algorithm, some leakages may be obscured by other leakages at a higher tree level. If leakages on higher levels cause splits that result in a unique subtree for each trace, the lower leakages cannot cause any more divergences and thus are overlooked.
This "saturation" is an inherent property of the analysis approach, and the price paid for having a linear-time algorithm. We do not believe that this impacts practical usage: After a library has reached a certain state of "constant-time-ness", we expect only few new leakages to be reported as individual functions are touched. And even if a leakage is not reported in a first pass, it will show up after committing the fixes for the previously reported leakages. It is unlikely that a number of unfixed low-severity leakages obscure a subsequent severe leakage, as this would imply a fully split up tree, which, in itself, signals a high-severity leakage. Other work tries to find all trace leakages in a single pass, but uses significantly more resources with every CI run and thus is not suitable for integration into an everyday development workflow.

RELATED WORK

Constant-time program analysis has a long tradition, as there are different classes of vulnerabilities that can be found through various analysis techniques [36]. Some tools for checking constant-time behavior depend on the availability of source code: Irazoqui et al. [28] introduce secret-dependent cache trace analysis, ct-fuzz [25] specializes fuzzing for timing leakages, ct-verif [5] describes constant-time behavior through safety properties, and CaSym [13] uses symbolic execution to model the execution behavior of a program. Microwalk-CI does not require access to the source code for compiled languages. Unlike Microwalk-CI, which uses dynamic program analysis and compares real execution traces, static binary analysis tries to simulate the execution of every possible program path. BINSEC/REL [16] uses relational symbolic execution of two execution traces to efficiently analyze binary code, but is limited by the high performance impact of static analysis. CacheS [59], based on CacheD [60], combines taint tracking and symbolic execution to find cache-line-granular leakage and secret-dependent branches.
Moreover, CacheAudit [17] tracks relational information about memory blocks to compute upper bounds for leakages. In contrast to these works, Microwalk-CI finds any leakage with byte granularity. DATA [62] and its (EC)DSA-specific extension [61] find microarchitectural and timing side-channels in binaries via dynamic binary analysis. The trace alignment approach of DATA is based on computing pairwise differences between traces, leading to a computation time that is quadratic both in the number of traces and in the trace length. While it yields more leakage candidates after a single pass, it needs more computational resources and is thus not suited for use in a CI environment. Abacus [6] identifies secret-dependent memory access instructions using symbolic execution. Then, the authors use Monte Carlo sampling to estimate the amount of leaked information. A shortcoming of the approach is that Abacus only uses one trace and therefore suffers from low coverage. dudect [46] measures timing behavior in a statistical way without any model of the underlying hardware, which is fast, but also yields imprecise results. ctgrind [33] and TIMECOP [39] search the code for secret-dependent jumps or memory accesses like table lookups and variable-time CPU instructions, but are rather manual. Analysis of JavaScript code has recently received more attention in the research community, as JavaScript is widely used in browsers, including many security-critical workloads. Basic properties of JavaScript regarding code security have been widely analyzed [30, 31, 52, 57]. Just as in other programming languages, various attacks on secret-dependent behavior have been conducted [51, 53]. A common prerequisite for exploiting timing-dependent properties of code is having precise timers [47], though this can be bypassed [53]. Apart from countermeasures like disabling timers or blocking certain functionality [50], little work has gone into finding non-constant-time JavaScript code.
CONCLUSION

With Microwalk-CI we have shown how one can design a side-channel analysis framework that is suitable for integration into a day-to-day development workflow. We have presented a new trace processing algorithm that merges the recorded traces into a call tree, allowing us to precisely localize and quantify leakages in a short time frame. Moreover, by "dockerizing" the analysis, we have provided the means for easy and fast usage without the necessity of understanding the details of the framework. With the design and implementation of a tracer for JavaScript and its integration with Microwalk-CI, we have built the first comprehensive constant-time verifier for JavaScript code and demonstrated how analysis techniques originally developed for binary analysis can be used for interpreted or just-in-time compiled languages. Microwalk-CI is constructed in a modular fashion and allows adding tracing backends for other languages with limited effort. Overall, Microwalk-CI carries the potential to increase the side-channel security of many popular libraries written in potentially any programming language, and raises awareness for the risks of non-constant-time code in new communities.

A CALL TREE DUMP

This appendix contains a dump of the call tree for the sample program from Figure 1 and three different values of secret: 1 (trace ID 0), 2 (trace ID 1) and 3 (trace ID 2). The dump is generated by running a depth-first search on the tree and printing the individual nodes with appropriate indentation. Trace entry types are highlighted with blue color, trace IDs with red color.

Figure 1: A sample program illustrating different kinds of leakages: The lookup function is not constant-time, since it does an input-based array lookup, so the memory access to table[index] would be marked as leaking if index is secret. Another cause of leakage is in func, which calls lookup a varying number of times depending on a secret value.
A correct analysis should distinguish the two invocations of lookup and mark the table access in line 12 as leaking for the first invocation (line 2); for the second invocation (line 6), the secret-dependent branch in line 5 should be reported, as the table access in line 12 itself does not add any leakage.

Figure 2: Inserting trace entries into the tree. The handling of equal and conflicting trace entries depends on the respective type.

Figure 3: A generic trace divergence with two split nodes. While traces 1 to 4 share the entries in the left node, they differ at the jump statement at location A: Traces 1 and 3 jump to location B, while traces 2 and 4 jump to C. Here, each case gets its own split node, and processing of trace entries is resumed there.

Figure 4: Example for call stack and trace ID generation. The program in (a) counts the number of 1s in the two least-significant bits of a secret variable by repeatedly executing a secret-dependent if statement. When calling f1 from main with secret values from 0 (trace ID 0) to 5 (trace ID 5), we get the call stack as shown in (b), with three detected jump instructions. The secret-dependent jump at f3+3 leads to divergence of traces, as is visible in the resulting trace ID tree in (c).

Traces sharing a tree node at tree level $h \ge 0$ are identical for at least $h$ consecutive executions of the instruction. For $t$ traces, let $T = \{0, 1, \ldots, t-1\}$ be the set of trace IDs. The set of leaves for a given trace ID tree is then defined as $L = \{L_i \mid L_i \subseteq T \wedge L_i \neq \emptyset\}$ with $L_1 \cup L_2 \cup \ldots \cup L_\ell = T$ and $L_i \cap L_j = \emptyset$ for $L_i, L_j \in L$ and $i \neq j$. This can be read as the tree having $\ell$ leaves $L_j$ ($j = 1, \ldots, \ell$), where each $L_j$ holds the trace IDs ending up in this particular leaf. Those traces are considered identical. Let $X \colon T \to \mathbb{N}$ be a random variable for picking a trace ID. The trace IDs are uniformly distributed, hence $\Pr[X = i] = \frac{1}{t}$ for each $i = 1, \ldots, t$.
Let $Y \colon L \to \mathbb{N}$ be a random variable for observing a particular trace, with $\Pr[Y = j] = \frac{|L_j|}{|T|} = \frac{|L_j|}{t}$ for $j = 1, \ldots, \ell$.

Figure 5: Traces created by Microwalk-CI for a JavaScript toy example: (a) source, (b) trace for buffer = 1, (c) trace for buffer = 0. Indented lines are wrapped for readability and are formatted in a single line in the original trace file.

Figure 6: Generic source tree of a JavaScript project containing our analysis template.

Figure 8: Call tree dump for the example in
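The trace observation probabilities defined above could be computed from the leaf sets as follows; this is a sketch assuming the leaves are given as arrays of trace IDs:

```javascript
// Given the leaves L_1, ..., L_l of a trace ID tree (each an array of
// trace IDs considered identical), compute Pr[Y = j] = |L_j| / t.
function leafProbabilities(leaves) {
  const t = leaves.reduce((sum, leaf) => sum + leaf.length, 0);
  return leaves.map((leaf) => leaf.length / t);
}
```

A constant-time target yields a single leaf containing all trace IDs, i.e., one outcome with probability 1; full divergence yields t leaves with probability 1/t each.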
Table 2: Results of the analysis step of selected targets with the original Microwalk CMAT module, and its resource usage. Time and memory consumption of the trace generation and preprocessing steps are identical to those shown in Table 1.

Target              CPU      Duration  RAM     # Lkgs.  # Unique
aes-js AES-ECB      < 1 sec  8 sec     168 MB  16       16
elliptic p192       4 sec    253 sec   289 MB  4,003    811
tweetnacl ed25519   5 sec    126 sec   286 MB  0        0

ACKNOWLEDGMENTS

The authors thank Julia Tönnies for her help in evaluating the suitability of the leakage metrics, and the anonymous reviewers for their helpful comments and
suggestions. This work has been supported by Deutsche Forschungsgemeinschaft (DFG) under grants 427774779 and 439797619, and by Bundesministerium für Bildung und Forschung (BMBF) through projects ENCOPIA and PeT-HMR.
[ "Improving Communication Patterns in Polyhedral Process Networks" ]
[ "Christophe Alias \nCNRS\nENS de Lyon\nUCBL\nUniversité de Lyon\nInria\n" ]
[ "CNRS\nENS de Lyon\nUCBL\nUniversité de Lyon\nInria" ]
[]
Embedded system performances are bounded by power consumption. The trend is to offload greedy computations on hardware accelerators as GPU, Xeon Phi or FPGA. FPGA chips combine both flexibility of programmable chips and energy-efficiency of specialized hardware and appear as a natural solution. Hardware compilers from high-level languages (High-level synthesis, HLS) are required to exploit all the capabilities of FPGA while satisfying tight time-tomarket constraints. Compiler optimizations for parallelism and data locality restructure deeply the execution order of the processes, hence the read/write patterns in communication channels. This breaks most FIFO channels, which have to be implemented with addressable buffers. Expensive hardware is required to enforce synchronizations, which often results in dramatic performance loss. In this paper, we present an algorithm to partition the communications so that most FIFO channels can be recovered after a loop tiling, a key optimization for parallelism and data locality. Experimental results show a drastic improvement of FIFO detection for regular kernels at the cost of a few additional storage. As a bonus, the storage can even be reduced in some cases.
null
[ "https://arxiv.org/pdf/1801.04821v1.pdf" ]
36,583,167
1801.04821
f51806880e06704967d8149e87dc4a763d3f7c86
Improving Communication Patterns in Polyhedral Process Networks

Christophe Alias
CNRS, ENS de Lyon, UCBL, Université de Lyon, Inria

1. INTRODUCTION

Since the end of Dennard scaling, the performance of embedded systems is bounded by power consumption. The trend is to trade genericity (processors) for energy efficiency (hardware accelerators) by offloading critical tasks to specialized hardware. FPGA chips combine the flexibility of programmable chips with the energy efficiency of specialized hardware and appear as a natural solution. High-level synthesis (HLS) techniques are required to exploit all the capabilities of FPGAs while satisfying tight time-to-market constraints.
Parallelization techniques from high-performance compilers are progressively migrating to HLS, particularly the models and algorithms from the polyhedral model [7], a powerful framework to design compiler optimizations. Additional constraints must be fulfilled before plugging a compiler optimization into an HLS tool. Unlike software, the hardware size is bounded by the available silicon surface: the bigger a parallel unit is, the less it can be duplicated, thereby limiting the overall performance. In particular, tricky program optimizations are likely to spoil performance if the circuit is not post-optimized carefully [5]. An important consequence is that the roofline model is no longer valid in HLS [8]. Indeed, peak performance is no longer a constant: it decreases with the operational intensity. The bigger the operational intensity, the bigger the buffer size and the less space remains for the computation itself. Consequently, it is important to produce, at source level, a precise model of the circuit that allows the resource consumption to be predicted accurately.

https://www.hipeac.net/events/activities/7528/hip3es/

Process networks are a natural and convenient intermediate representation for HLS [4,13,14,19]. A sequential program is translated to a process network by partitioning computations into processes and flow dependences into channels. Then, the processes and buffers are factorized and mapped to hardware. In this paper, we focus on the translation of buffers to hardware. We propose an algorithm to restructure the buffers so they can be mapped to inexpensive FIFOs. Most often, a direct translation of a regular kernel (without optimization) produces a process network with FIFO buffers [16]. Unfortunately, data transfer optimization [3], and loop tiling in general, deeply reorganize the computations, hence the read/write order in channels (communication patterns). Consequently, most channels may no longer be implemented by a FIFO.
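As a toy illustration of why tiling breaks FIFO channels (a hypothetical Python sketch, not the paper's analysis): a channel can be implemented as a FIFO only if the consumer reads values in exactly the order the producer wrote them, and tiling the consumer's loop nest permutes that order.

```python
# Hypothetical sketch: a channel admits a FIFO implementation iff the
# consumer reads values in exactly the order the producer wrote them.

def is_fifo(write_order, read_order):
    """True iff reads occur in the same sequence as writes."""
    return write_order == read_order

N, T = 4, 2

# Producer fills a[i][j] in row-major order; each value is consumed once.
writes = [(i, j) for i in range(N) for j in range(N)]

# Untiled consumer reads row-major too: in-order, so a FIFO suffices.
assert is_fifo(writes, [(i, j) for i in range(N) for j in range(N)])

# Tiled consumer traverses T x T blocks: the global read order is a
# permutation of the write order, so the channel is no longer a FIFO.
reads_tiled = [(i, j)
               for bi in range(0, N, T) for bj in range(0, N, T)
               for i in range(bi, bi + T) for j in range(bj, bj + T)]
assert not is_fifo(writes, reads_tiled)
```

Here tiling alone already invalidates the FIFO pattern; in the process network such a channel would then require an addressable buffer plus synchronization logic, which is exactly the cost the paper seeks to avoid.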
Additional circuitry is required to enforce synchronizations [4,20,15,17], which results in larger circuits and causes performance penalties. In this paper, we make the following contributions:

• We propose an algorithm to reorganize the communications between processes so that more channels can be implemented as FIFOs after a loop tiling. As far as we know, this is the first algorithm to recover FIFO communication patterns after a compiler optimization.

• Experimental results show that we can recover most of the FIFOs disabled by communication optimization, and more generally by any loop tiling, at almost no extra storage cost.

The remainder of this paper is structured as follows. Section 2 introduces polyhedral process networks and discusses how communication patterns are impacted by loop tiling, Section 3 describes our algorithm to reorganize channels, Section 4 presents experimental results, and finally Section 5 concludes this paper and draws future research directions.

PRELIMINARIES
This section defines the notions used in the remainder of this paper. Sections 2.1 and 2.2 introduce the basics of compiler optimization in the polyhedral model and define loop tiling. Section 2.3 defines polyhedral process networks (PPN), shows how loop tiling disables FIFO communication patterns and outlines a solution.

Polyhedral Model at a Glance
Translating a program to a process network requires splitting the computation into processes and the flow dependences into channels. The polyhedral model focuses on kernels whose computation and flow dependences can be predicted, represented and explored at compile-time. The control must be predictable: for loops, and if conditionals with conditions on loop counters. Data structures are restricted to arrays; pointers are not allowed. Also, loop bounds, conditions and array accesses must be affine functions of the surrounding loop counters and structure parameters (typically the array size).
This way, the computation may be represented with Presburger sets (typically approximated with convex polyhedra, hence the name). This makes it possible to reason geometrically about the computation and to produce precise compiler analyses thanks to integer linear programming: flow dependence analysis [9], scheduling [7] or code generation [6,12], to quote a few. Most compute-intensive kernels from linear algebra and image processing fit in this category. In some cases, kernels with dynamic control can even fit in the polyhedral model after a proper abstraction [2]. Figure 1.(a) depicts a polyhedral kernel and Figure 1.(b) depicts the geometric representation of the computation for each assignment (• for assignment load, • for assignment compute and • for assignment store). The vector i = (i1, . . . , in) of loop counters surrounding an assignment S is called an iteration of S. The execution of S at iteration i is denoted by ⟨S, i⟩. The set DS of iterations of S is called the iteration domain of S. The original execution of the iterations of S follows the lexicographic order ≺ over DS. For instance, on the statement C: (t, i) ≺ (t′, i′) iff t < t′ or (t = t′ and i < i′). The lexicographic order over Z^d is naturally partitioned by depth: ≺ = ≺1 ∪ . . . ∪ ≺d, where (u1, . . . , ud) ≺k (v1, . . . , vd) iff u1 = v1 ∧ . . . ∧ uk−1 = vk−1 ∧ uk < vk.

Dataflow Analysis. In Figure 1.(b), red arrows depict several flow dependences (read after write) between execution instances. We are interested in flow dependences relating the production of a value to its consumption, not only a write followed by a read to the same location. These flow dependences are called direct dependences. Direct dependences represent the communication of values between two computations and drive communications and synchronizations in the final process network. They are crucial to build the process network. Direct dependences can be computed exactly in the polyhedral model [9].
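As a quick sketch (our own illustration, not from the paper), the depth-wise decomposition of the lexicographic order is easy to prototype and check exhaustively:

```python
def lex_less_at_depth(u, v, k):
    # u precedes v at depth k iff they agree on the first k-1 components
    # and u is strictly smaller on the k-th one
    return u[:k - 1] == v[:k - 1] and u[k - 1] < v[k - 1]

def depths(u, v):
    # depths at which u precedes v; since the lexicographic order is the
    # disjoint union of the depth-k orders, this list has at most one element
    return [k for k in range(1, len(u) + 1) if lex_less_at_depth(u, v, k)]

# Python tuples already compare lexicographically, e.g. on statement C:
assert ((1, 5) < (2, 0)) and depths((1, 5), (2, 0)) == [1]
assert ((2, 1) < (2, 3)) and depths((2, 1), (2, 3)) == [2]
```

Enumerating all small vectors confirms the partition property: u ≺ v holds exactly when u ≺k v holds for a single depth k.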
The result is a relation → relating each producer ⟨P, i⟩ to one or more consumers ⟨C, j⟩. Technically, → is a Presburger relation between vectors (P, i) and vectors (C, j), where the assignments P and C are encoded as integers. For example, dependence 5 is summed up by the Presburger relation: {(•, t − 1, i) → (•, t, i), 0 < t ≤ T ∧ 0 ≤ i ≤ N}. Presburger relations are computable, and efficient libraries allow to manipulate them [18,10]. In the remainder, direct dependences will be referred to as flow dependences, or simply dependences, to simplify the presentation.

Scheduling and Loop Tiling
Compiler optimizations change the execution order to fulfill multiple goals such as increasing the degree of parallelism or minimizing the communications. The new execution order is specified by a schedule. A schedule θS maps each execution ⟨S, i⟩ to a timestamp θS(i) = (t1, . . . , td) ∈ Z^d, the timestamps being ordered by the lexicographic order ≺. In a way, a schedule dispatches each execution instance ⟨S, i⟩ into a new loop nest, θS(i) = (t1, . . . , td) being the new iteration vector of ⟨S, i⟩. A schedule θ induces a new execution order ≺θ such that ⟨S, i⟩ ≺θ ⟨T, j⟩ iff θS(i) ≺ θT(j). Also, ⟨S, i⟩ ⪯θ ⟨T, j⟩ means that either ⟨S, i⟩ ≺θ ⟨T, j⟩ or θS(i) = θT(j). When a schedule is injective, it is said to be sequential: no two executions are scheduled at the same time. Hence everything is executed in sequence. In the polyhedral model, schedules are affine functions. They can be derived automatically from flow dependences [7]. In Figure 1, the original execution order is specified by the schedule θload(i) = (0, i), θcompute(t, i) = (1, t, i) and θstore(i) = (2, i). The lexicographic order ensures the execution of all the load instances (0), then all the compute instances (1) and finally all the store instances (2). Then, for each statement, the loops are executed in the specified order. Loop tiling is a transformation which partitions the computation into tiles, each tile being executed atomically.
Communication minimization [3] typically relies on loop tiling to tune the computation/communication ratio of the program beyond the peak-performance/communication-bandwidth ratio of the target architecture. Figure 3.(a) depicts the iteration domain of compute and the new execution order after tiling loops t and i. For presentation reasons, we depict a domain bigger than in Figure 1.(b) (with bigger N and M), and we depict only a part of the domain. In the polyhedral model, a loop tiling is specified by hyperplanes with linearly independent normal vectors τ1, . . . , τd, where d is the number of nested loops (here τ1 = (0, 1) for the vertical hyperplanes and τ2 = (1, 1) for the diagonal hyperplanes). Roughly, hyperplanes along each normal vector τi are placed at regular intervals bi (here b1 = b2 = 2) to cut the iteration domain into tiles. Then, each tile is identified by an iteration vector (φ1, . . . , φd), φk being the slice number of an iteration i along normal vector τk: φk = τk · i ÷ bk. The result is a Presburger iteration domain, here D̂ = {(φ1, φ2, t, i), 2φ1 ≤ t < 2(φ1 + 1) ∧ 2φ2 ≤ t + i < 2(φ2 + 1)}: the polyhedral model is closed under loop tiling. In particular, the tiled domain can be scheduled. For instance, θ̂S(φ1, φ2, t, i) = (φ1, φ2, t, i) specifies the execution order depicted in Figure 3.(a): tile with point (4,4) is executed, then tile with point (4,8), then tile with point (4,12), and so on. For each tile, the iterations are executed for each t, then for each i.

Polyhedral Process Networks
Given the iteration domains and the flow dependence relation →, we derive a polyhedral process network by partitioning iteration domains into processes and flow dependences into channels.
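Before moving on, the tiling arithmetic of the example can be sketched as follows (our own toy code; the constants follow the running example with b1 = b2 = 2 and hyperplanes along t and t + i):

```python
B1 = B2 = 2  # tile sizes b1, b2

def tile_coords(t, i):
    # phi_k = (tau_k . x) div b_k, with tau_1.(t,i) = t and tau_2.(t,i) = t + i
    return (t // B1, (t + i) // B2)

def tiled_key(pt):
    # tiled schedule: tiles in lexicographic order, then (t, i) inside a tile
    t, i = pt
    return (*tile_coords(t, i), t, i)

# every point satisfies the tiled-domain constraints of the example:
# 2*phi1 <= t < 2*(phi1 + 1)  and  2*phi2 <= t + i < 2*(phi2 + 1)
for t in range(8):
    for i in range(8):
        p1, p2 = tile_coords(t, i)
        assert 2 * p1 <= t < 2 * (p1 + 1)
        assert 2 * p2 <= t + i < 2 * (p2 + 1)
```

Sorting a set of iterations by `tiled_key` replays the execution order of Figure 3.(a): all the points of a tile are executed before the next tile starts.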
More formally, a polyhedral process network is a couple (P, C) such that:

• Each process P ∈ P is specified by an iteration domain DP and a sequential schedule θP inducing an execution order ≺P over DP. Each iteration i ∈ DP realizes the execution instance µP(i) in the program. The processes partition the execution instances of the program: {µP(DP) for each process P} is a partition of the program computation.

• Each channel c ∈ C is specified by a producer process Pc ∈ P, a consumer process Cc ∈ P and a dataflow relation →c relating each production of a value by Pc to its consumption by Cc: if i →c j, then execution i of Pc produces a value read by execution j of Cc. →c is a subset of the flow dependences from Pc to Cc, and the collection of the →c for each channel c between two given processes P and C, {→c, (P, C) = (Pc, Cc)}, is a partition of the flow dependences from P to C.

The goal of this paper is to find a partition of the flow dependences for each producer/consumer couple (P, C) such that most channels from P to C can be realized by a FIFO. For reference, the Jacobi-1D kernel of Figure 1.(a) is:

    for i := 0 to N + 1
      • load(a[0, i]);
    for t := 1 to T
      for i := 1 to N
        • a[t, i] := a[t − 1, i − 1] + a[t − 1, i] + a[t − 1, i + 1];
    for i := 1 to N
      • store(a[T, i]);

Figure 1.(c) depicts the PPN obtained with the canonical partition of the computation: each execution ⟨S, i⟩ is mapped to process PS and executed at process iteration i: µPS(i) = ⟨S, i⟩. For presentation reasons, the compute process is depicted as C. Dependences depicted as k on the dependence graph in (b) are solved by channel k. To read the input values in parallel, we use a different channel per producer/read-reference couple, hence this partitioning. We assume that, locally, each process executes its instructions in the same order as in the original program: θload(i) = i, θcompute(t, i) = (t, i) and θstore(i) = i.
Remark that the leading constant (0 for load, 1 for compute, 2 for store) has disappeared: the timestamps only define an order local to their process: ≺load, ≺compute and ≺store. The global execution order is driven by the dataflow semantics: the next process operation is executed as soon as its operands are available. The next step is to detect communication patterns to figure out how to implement the channels.

Communication Patterns. A channel c ∈ C may be implemented by a FIFO iff the consumer Cc reads the values from c in the same order as the producer Pc writes them to c (in-order) and each value is read exactly once (unicity) [14,16]. The in-order constraint can be written:

    in-order(→c, ≺P, ≺C) := ∀x →c x′, ∀y →c y′ : x′ ≺C y′ ⇒ x ⪯P y

The unicity constraint can be written:

    unicity(→c) := ∀x →c x′, ∀y →c y′ : x = y ⇒ x′ = y′

Notice that unicity depends only on the dataflow relation →c; it is independent of the execution order of the producer process, ≺P, and of the consumer process, ≺C. Furthermore, ¬in-order(→c, ≺P, ≺C) and ¬unicity(→c) amount to checking the emptiness of a convex polyhedron, which can be done by most LP solvers. Finally, a channel may be implemented by a FIFO iff it verifies both the in-order and the unicity constraints:

    fifo(→c, ≺P, ≺C) := in-order(→c, ≺P, ≺C) ∧ unicity(→c)

When the consumer reads the data in the same order as they are produced but a datum may be read several times, in-order(→c, ≺P, ≺C) ∧ ¬unicity(→c), the communication pattern is said to be in-order with multiplicity: the channel may be implemented with a FIFO and a register keeping the last read value for multiple reads. However, additional circuitry is required to trigger the write of a new datum into the register [14]: this implementation is more expensive than a single FIFO.
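The two predicates above are straightforward to prototype on explicit, finite dataflow relations. A small sketch of our own (names are ours), where a channel is a list of (producer iteration, consumer iteration) pairs and an execution order is given as a sort key:

```python
def in_order(dep, key_p, key_c):
    # forall x -> x', y -> y':  x' before y' at the consumer
    #                           implies x not after y at the producer
    return all(key_p(x) <= key_p(y)
               for (x, xc) in dep for (y, yc) in dep
               if key_c(xc) < key_c(yc))

def unicity(dep):
    # each produced value is read exactly once: same source => same target
    return all(xc == yc for (x, xc) in dep for (y, yc) in dep if x == y)

def fifo(dep, key_p, key_c):
    return in_order(dep, key_p, key_c) and unicity(dep)

ident = lambda x: x
a = [(k, k) for k in range(4)]                       # FIFO
b = [(k, (k, r)) for k in range(4) for r in (0, 1)]  # each value read twice
c = [(k, 3 - k) for k in range(4)]                   # reads reversed
assert fifo(a, ident, ident)
assert in_order(b, ident, ident) and not unicity(b)  # in-order, multiplicity
assert not in_order(c, ident, ident)                 # out-of-order
```

The same predicates applied to dependence 5 of the running example, with the original schedule as the key on both sides, classify it as a FIFO, matching the discussion below.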
Finally, when we have neither in-order nor unicity, ¬in-order(→c, ≺P, ≺C) ∧ ¬unicity(→c), the communication pattern is said to be out-of-order with multiplicity: significant hardware resources are required to enforce flow- and anti-dependences between the producer and the consumer, and additional latencies may limit the overall throughput of the circuit [4,20,15,17]. Consider Figure 1.(c), channel 5, implementing dependence 5 (depicted in (b)) from ⟨•, t − 1, i⟩ (which writes a[t − 1, i]) to ⟨•, t, i⟩ (which reads a[t − 1, i]). With the schedule defined above, the data are produced (⟨•, t − 1, i⟩) and read (⟨•, t, i⟩) in the same order, and only once: the channel may be implemented as a FIFO. Now, assume that process compute follows the tiled execution order depicted in Figure 3.(a). The execution order now executes the tile with point (4,4), then the tile with point (4,8), then the tile with point (4,12), and so on. In each tile, the iterations are executed for each t, then for each i. Consider the iterations depicted in red as 1, 2, 3, 4 in Figure 3.(b). With the new execution order, we execute successively 1, 2, 4, 3, whereas an in-order pattern would have required 1, 2, 3, 4. Consequently, channel 5 is no longer a FIFO. The same holds for channels 4 and 6. Now, the point is to partition dependence 5, and the others, so that the FIFO communication pattern holds. Consider Figure 3.(c). Dependence 5 is partitioned into 3 parts: red dependences crossing the tiling hyperplane t (direction t), blue dependences crossing the tiling hyperplane t + i (direction t + i) and green dependences inside a tile. Since the execution order in a tile is the same as the original execution order (actually a restriction of the original execution order), green dependences clearly verify the FIFO communication pattern. As concerns the blue and red dependences, sources and targets are executed in the same order because the execution order is the same for each tile and dependence 5 happens to be short enough. In practice, this partitioning is effective at revealing FIFO channels.
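The partitioning just described can be prototyped end-to-end. The self-contained sketch below is our own (the 2×2 tiling, the toy domain size and the function names are ours, not the paper's implementation): it checks that the unsplit dependence 5 is no longer a FIFO under the tiled schedule, while each of the three parts is.

```python
from itertools import product

B = 2  # tile size b1 = b2 = 2, hyperplanes t and t + i as in Figure 3

def key(pt):
    # tiled schedule (phi1, phi2, t, i)
    t, i = pt
    return (t // B, (t + i) // B, t, i)

def in_order(dep, k):
    return all(k(x) <= k(y) for (x, xc) in dep for (y, yc) in dep
               if k(xc) < k(yc))

def unicity(dep):
    return all(xc == yc for (x, xc) in dep for (y, yc) in dep if x == y)

def fifo(dep, k):
    return in_order(dep, k) and unicity(dep)

def split(dep, k):
    # depth d: source and target tiles first differ at tile coordinate d;
    # depth 3 keeps the dependences whose source and target share a tile
    parts = {1: [], 2: [], 3: []}
    for x, y in dep:
        kx, ky = k(x), k(y)
        if kx[0] != ky[0]:
            parts[1].append((x, y))
        elif kx[1] != ky[1]:
            parts[2].append((x, y))
        else:
            parts[3].append((x, y))
    return parts

T = N = 6  # toy domain: 1 <= t <= T, 1 <= i <= N
dep5 = [((t - 1, i), (t, i))
        for t, i in product(range(2, T + 1), range(1, N + 1))]

parts = split(dep5, key)
assert not fifo(dep5, key)                         # tiling broke the FIFO
assert all(fifo(p, key) for p in parts.values())   # the split restores it
```

The three parts correspond to the red (crossing t), blue (crossing t + i) and green (intra-tile) dependences of Figure 3.(c).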
In the next section, we propose an algorithm to find such a partitioning.

OUR ALGORITHM
Figure 2 depicts our algorithm for partitioning the channels of a given polyhedral process network (P, C) (line 5). For each channel c from a producer P = Pc to a consumer C = Cc, the channel is partitioned by depth along the lines described in the previous section (line 7). DP and DC are assumed to be tiled with the same number of hyperplanes, and P and C are assumed to share a schedule with the shape θ(φ1, . . . , φn, i) = (φ1, . . . , φn, i). This case arises frequently with tiling schemes for I/O optimization [4]. If not, the next channel →c is considered (line 6). The split is realized by the procedure split (lines 1-4). A new partition is built starting from the empty set. For each depth (hyperplane) of the tiling, the dependences crossing that hyperplane are filtered out and added to the partition (line 3): this gives the dependences →c^1, . . . , →c^n. Finally, the dependences lying in a tile (source and target in the same tile) are added to the partition (line 4): this gives →c^(n+1). θP(x) ≈n θC(y) means that the first n dimensions of θP(x) and θC(y) (the tile coordinates (φ1, . . . , φn)) are the same: x and y belong to the same tile. Consider the PPN depicted in Figure 1.(c) with the tiling and schedule discussed above: the process compute is tiled as depicted in Figure 3.(c) with the schedule θcompute(φ1, φ2, t, i) = (φ1, φ2, t, i). Since the processes load and store are not tiled, the only channels processed by our algorithm are 4, 5 and 6. split is applied to the associated dataflow relations →4, →5 and →6. Each dataflow relation is split into three parts as depicted in Figure 3.(c). For →5: →5^1 crosses hyperplane t (red), →5^2 crosses hyperplane t + i (blue) and →5^3 stays in a tile (green). This algorithm works pretty well for short uniform dependences →c: if fifo(c) holds before tiling then, after tiling, the algorithm can split c in such a way that we get FIFOs. However, when dependences are longer, e.g.
(t, i) → (t, i + 2), the target operations (t, i + 2) reproduce the tile execution pattern, which prevents finding a FIFO. The same happens when the tile hyperplanes are "too skewed", e.g. τ1 = (1, 1), τ2 = (2, 1) with the dependence (t − 1, i − 1) → (t, i). Figure 3.(d) depicts the volume of data to be stored in the FIFO produced for each depth. In particular, the dotted line labeled k indicates the iterations producing data to be kept in the FIFO at depth k. The FIFO at depth 1 (dotted line 1) must store N data at the same time. Similarly, the FIFO at depth 2 stores at most b1 data and the FIFO at depth 3 stores at most b2 data. Hence, in this example, each transformed channel requires b1 + b2 additional storage. In general, the additional storage requirement is one order of magnitude smaller than the original FIFO size and stays reasonable in practice, as shown in the next section.

EXPERIMENTAL EVALUATION
This section presents the experimental results obtained on the benchmarks of the polyhedral community. We demonstrate the capabilities of our algorithm at recovering FIFO communication patterns after a loop tiling, and we show how much additional storage is required.

Experimental Setup. We have run our algorithm on the kernels of PolyBench/C v3.2 [11]. Tables 1 and 2 depict the results obtained for each kernel. Each kernel is tiled to reduce I/O while exposing parallelism [4] and translated to a PPN using our research compiler, Dcc (DPN C Compiler). Dcc actually produces a DPN (Data-aware Process Network), a PPN optimized for a specific tiling pattern. DPNs feature additional control processes and synchronizations for I/O and parallelism which have nothing to do with our optimization; so, we only consider the PPN part of our DPN. We have applied our algorithm to each channel to expose FIFO patterns. For each kernel, we compare the PPN obtained after tiling to the PPN processed by our algorithm.

Results. Table 2 depicts the capabilities of our algorithm at finding FIFO patterns.
For each kernel, we provide the channel characteristics on the original tiled PPN (Before Partitioning) and after applying our algorithm (After Partitioning). We give the total number of channels (#channel), the number of FIFOs found among these channels (#fifo), the number of channels which were successfully turned into FIFOs thanks to our algorithm (#fifo-split), the ratios #fifo/#channel (%fifo) and #fifo-split/#channel (%fifo-split), the cumulated size of the FIFOs found (fifo-size) and the cumulated size of all the channels found, including FIFOs (total-size). On every kernel, our algorithm succeeds in exposing more FIFO patterns (%fifo vs %fifo-split). On a significant number of kernels (11 among 15), we even succeed in turning all the compute channels into FIFOs. On the remaining kernels, we succeed in recovering all the FIFO communication patterns disabled by the tiling. Even though our method is not complete, as discussed in Section 3, it happens that all the kernels fulfill the conditions expected by our algorithm (short dependences, tiling hyperplanes not too skewed). Table 1 depicts the additional storage required after splitting the channels. For each kernel, we compare the cumulative size of the channels split and successfully turned into a FIFO (size-fifo-fail) to the cumulative size of the FIFOs generated by the splitting (size-fifo-split). The size unit is a datum, e.g. 4 bytes if a datum is a 32-bit float. We also quantify the additional storage required by the split channels compared to the original channels (∆ := [size-fifo-split − size-fifo-fail] / size-fifo-fail). It turns out that the FIFOs generated by splitting use mostly the same data volume as the original channels. The additional storage resources are due to our sizing heuristic [1], which rounds channel sizes to a power of 2. Surprisingly, splitting can sometimes help the sizing heuristic to find a smaller size (kernel gemm), thereby reducing the storage requirements. Indeed, splitting decomposes a channel into channels of a smaller dimension, for which our sizing heuristic is more precise.
In a way, our algorithm makes it possible to find a nice piecewise allocation function whose footprint is smaller than a single-piece allocation. We plan to exploit this nice side effect in the future.

CONCLUSION
In this paper, we have proposed an algorithm to reorganize the channels of a polyhedral process network to reveal more FIFO communication patterns. Specifically, our algorithm operates on producer/consumer processes whose iteration domains have been partitioned by a loop tiling. Experimental results show that our algorithm allows recovering the FIFOs disabled by loop tiling with almost the same storage requirement. Our algorithm is sensitive to the dependence size and to the chosen loop tiling. In the future, we plan to design a reorganization algorithm that is provably complete, in the sense that a FIFO channel will be recovered whatever the dependence size and the tiling used. We also observe that splitting channels can reduce the storage requirements in some cases. We plan to investigate how such cases can be revealed automatically.

HIP3ES workshop, 22nd, 2018, Manchester, UK, in conjunction with HiPEAC 2018 (https://www.hipeac.net/events/activities/7528/hip3es/).

Figure 1: Motivating example: Jacobi-1D kernel.
Figure 2: Our algorithm for partitioning channels.
Figure 3: Impact of loop tiling on communication patterns.
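As a side check of the per-depth storage bounds discussed with Figure 3.(d), the FIFO occupancies can be estimated by replaying the tiled schedule on a toy instance of the running example. This is our own sketch (it assumes each iteration reads its operands before writing its result, and uses the 2×2 tiling of Figure 3):

```python
from itertools import product

B = 2          # tile size b1 = b2 = 2, hyperplanes t and t + i
T = N = 6      # toy domain: 1 <= t <= T, 1 <= i <= N

def key(pt):   # tiled schedule (phi1, phi2, t, i)
    t, i = pt
    return (t // B, (t + i) // B, t, i)

dep5 = [((t - 1, i), (t, i))
        for t, i in product(range(2, T + 1), range(1, N + 1))]

def split(dep):
    # partition by the first tile coordinate where source and target differ
    parts = {1: [], 2: [], 3: []}
    for x, y in dep:
        kx, ky = key(x), key(y)
        d = 1 if kx[0] != ky[0] else (2 if kx[1] != ky[1] else 3)
        parts[d].append((x, y))
    return parts

def max_occupancy(dep):
    # replay the tiled schedule; each iteration consumes before it produces
    produced, consumed = {}, {}
    for x, y in dep:
        produced[x] = produced.get(x, 0) + 1
        consumed[y] = consumed.get(y, 0) + 1
    occ = peak = 0
    for p in sorted(set(produced) | set(consumed), key=key):
        occ -= consumed.get(p, 0)
        occ += produced.get(p, 0)
        peak = max(peak, occ)
    return peak

sizes = {d: max_occupancy(part) for d, part in split(dep5).items()}
assert sizes[1] == N                    # the depth-1 FIFO holds a full row
assert sizes[2] <= B and sizes[3] <= B  # depths 2 and 3 stay within one tile
```

On this instance, the measured occupancies match the N, b1, b2 bounds stated for Figure 3.(d).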
Table 1: Impact on storage requirements

    kernel      size-fifo-fail  size-fifo-split  ∆
    trmm        256             257              0%
    gemm        512             288              -44%
    syrk        8192            8193             0%
    symm        800             801              0%
    gemver      32              33               3%
    gesummv     0               0
    syr2k       8192            8193             0%
    lu          528             531              1%
    cholesky    273             275              1%
    atax        1               1                0%
    doitgen     4096            4097             0%
    jacobi-2d   8320            8832             6%
    seidel-2d   49952           52065            4%
    jacobi-1d   1152            1174             2%
    heat-3d     148608          158992           7%

Table 2: Detailed results. Columns 2-8 describe the tiled PPN before partitioning (#channel, #fifo, #fifo-split, %fifo, %fifo-split, fifo-size, total-size); columns 9-12 describe the PPN after partitioning (#channel, #fifo, fifo-size, total-size).

    Kernel      #ch  #fifo  #fifo-split  %fifo  %fifo-split  fifo-size  total-size  |  #ch  #fifo  fifo-size  total-size
    trmm        2    1      2            50%    100%         256        512         |  3    3      513        513
    gemm        2    1      2            50%    100%         16         528         |  3    3      304        304
    syrk        2    1      2            50%    100%         1          8193        |  3    3      8194       8194
    symm        6    3      6            50%    100%         18         818         |  7    7      819        819
    gemver      6    3      5            50%    83%          4113       4161        |  7    6      4146       4162
    gesummv     6    6      6            100%   100%         96         96          |  6    6      96         96
    syr2k       2    1      2            50%    100%         1          8193        |  3    3      8194       8194
    lu          8    0      3            0%     37%          0          1088        |  11   6      531        1091
    cholesky    9    3      6            33%    66%          513        1074        |  11   8      788        1076
    atax        5    3      4            60%    80%          48         65          |  5    4      49         65
    doitgen     3    2      3            66%    100%         8192       12288       |  4    4      12289      12289
    jacobi-2d   10   0      10           0%     100%         0          8320        |  18   18     8832       8832
    seidel-2d   9    0      9            0%     100%         0          49952       |  16   16     52065      52065
    jacobi-1d   6    1      6            16%    100%         1          1153        |  10   10     1175       1175
    heat-3d     20   0      20           0%     100%         0          148608      |  38   38     158992     158992

REFERENCES
[1] C. Alias, F. Baray, and A. Darte. Bee+Cl@k: An implementation of lattice-based array contraction in the source-to-source translator Rose. In ACM Conf. on Languages, Compilers, and Tools for Embedded Systems (LCTES'07), 2007.
[2] C. Alias, A. Darte, P. Feautrier, and L. Gonnord. Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In International Static Analysis Symposium (SAS'10), 2010.
[3] C. Alias, A. Darte, and A. Plesco. Optimizing remote accesses for offloaded kernels: Application to high-level synthesis for FPGA. In ACM SIGDA Intl. Conference on Design, Automation and Test in Europe (DATE'13), Grenoble, France, 2013.
[4] C. Alias and A. Plesco. Data-aware Process Networks. Research Report RR-8735, Inria - Research Centre Grenoble - Rhône-Alpes, June 2015.
[5] C. Alias and A. Plesco. Optimizing affine control with semantic factorizations. ACM Transactions on Architecture and Code Optimization (TACO), 14(4):27, Dec. 2017.
[6] C. Bastoul. Efficient code generation for automatic parallelization and optimization. In 2nd International Symposium on Parallel and Distributed Computing (ISPDC 2003), 13-14 October 2003, Ljubljana, Slovenia, pages 23-30, 2003.
[7] U. Bondhugula, A. Hartono, J. Ramanujam, and P. Sadayappan. A practical automatic polyhedral parallelizer and locality optimizer. In Proceedings of the ACM SIGPLAN 2008 Conference on Programming Language Design and Implementation, Tucson, AZ, USA, June 7-13, 2008, pages 101-113, 2008.
[8] B. da Silva, A. Braeken, E. H. D'Hollander, and A. Touhafi. Performance modeling for FPGAs: extending the roofline model with high-level synthesis tools. International Journal of Reconfigurable Computing, 2013:7, 2013.
[9] P. Feautrier. Dataflow analysis of array and scalar references. International Journal of Parallel Programming, 20(1):23-53, 1991.
[10] W. Kelly, V. Maslov, W. Pugh, E. Rosser, T. Shpeisman, and D. Wonnacott. The Omega calculator and library, version 1.1.0. College Park, MD, 20742:18, 1996.
[11] L.-N. Pouchet. PolyBench: The polyhedral benchmark suite. http://www.cs.ucla.edu/~pouchet/software/polybench/, 2012.
[12] F. Quilleré, S. Rajopadhye, and D. Wilde. Generation of efficient nested loops from polyhedra. International Journal of Parallel Programming, 28(5):469-498, 2000.
[13] E. Rijpkema, E. F. Deprettere, and B. Kienhuis. Deriving process networks from nested loop algorithms. Parallel Processing Letters, 10(02n03):165-176, 2000.
[14] A. Turjan. Compiling nested loop programs to process networks. PhD thesis, Leiden Institute of Advanced Computer Science (LIACS) and Leiden Embedded Research Center, Faculty of Science, Leiden University, 2007.
[15] A. Turjan, B. Kienhuis, and E. Deprettere. Realizations of the extended linearization model. In Domain-specific processors: systems, architectures, modeling, and simulation, pages 171-191, 2002.
[16] A. Turjan, B. Kienhuis, and E. Deprettere. Classifying interprocess communication in process network representation of nested-loop programs. ACM Transactions on Embedded Computing Systems (TECS), 6(2):13, 2007.
[17] S. van Haastregt and B. Kienhuis. Enabling automatic pipeline utilization improvement in polyhedral process network implementations. In Application-Specific Systems, Architectures and Processors (ASAP), 2012 IEEE 23rd International Conference on, pages 173-176. IEEE, 2012.
[18] S. Verdoolaege. isl: An integer set library for the polyhedral model. In ICMS, volume 6327, pages 299-302. Springer, 2010.
[19] S. Verdoolaege. Polyhedral process networks. In Handbook of Signal Processing Systems, pages 931-965. 2010.
[20] C. Zissulescu, A. Turjan, B. Kienhuis, and E. Deprettere. Solving out of order communication using CAM memory: an implementation. In 13th Annual Workshop on Circuits, Systems and Signal Processing (ProRISC 2002), 2002.
arXiv:2203.07108 · doi:10.1214/23-ba1376
Consistent and scalable Bayesian joint variable and graph selection for disease diagnosis leveraging functional brain network

Xuan Cao (Department of Mathematical Sciences, University of Cincinnati)
Kyoungjae Lee (Department of Statistics, Sungkyunkwan University)

March 15, 2022

Abstract. We consider the joint inference of regression coefficients and the inverse covariance matrix for covariates in high-dimensional probit regression, where the predictors are both relevant to the binary response and functionally related to one another. A hierarchical model with spike and slab priors over the regression coefficients and the elements of the inverse covariance matrix is employed to simultaneously perform variable and graph selection. We establish joint selection consistency for both the variables and the underlying graph when the dimension of the predictors is allowed to grow much larger than the sample size, which is the first theoretical result of this kind in the Bayesian literature. A scalable Gibbs sampler is derived that performs better in high-dimensional simulation studies compared with other state-of-the-art methods. We illustrate the practical impact and utility of the proposed method via a functional MRI dataset, where both the regions of interest with altered functional activities and the underlying functional brain network are inferred and integrated together for stratifying disease risk.

Introduction

Analyzing high-dimensional data is becoming increasingly prevalent and challenging as advances in technology facilitate the collection and storage of ever more extensive, massive data. When applying a generalized linear model (GLM) to such large-scale data, a large number of variables can easily cause an overfitting problem.
In this situation, variable selection is one of the most commonly used techniques to avoid overfitting. Numerous frequentist methods for variable selection have been introduced since the appearance of the Lasso [Tibshirani, 1996], and many analogous Bayesian methods have also been proposed [Ishwaran et al., 2005, Narisetty and He, 2014, Ročková and George, 2018]. On the other hand, understanding the complex relationships between variables in high-dimensional datasets is also important, where inverse covariance matrices (or equivalently, precision matrices) are prevailingly exploited to capture the multivariate dependence. This is often called a network structure between the variables. A variety of work on algorithms and their theoretical properties has emerged to investigate such network structures [Wainwright, 2019]. One of the key developments was the introduction of the neighborhood selection method [Meinshausen and Bühlmann, 2006], which leverages the connection between the (i, j)th entry of the inverse covariance matrix Ω and the partial correlation between the ith and jth variables, estimated through a penalized regression setup. Many other frequentist methods have been developed for sparse precision matrix estimation based on neighborhood selection [Yuan and Lin, 2007, Friedman et al., 2007, Peng et al., 2009], and several Bayesian counterparts have been proposed in the literature [Dobra et al., 2011, Wang, 2012, 2015]. However, a key challenge for these Bayesian approaches is their scalability to high-dimensional settings. To address this issue, Jalali et al. [2020] recently employed the regression-based generalized likelihood function combined with spike and slab priors over the entries of Ω. They proposed a scalable Gibbs sampler that works well in high dimensions and runs comparably fast relative to the graphical lasso [Friedman et al., 2007]. It is often of interest to jointly perform variable selection and discover the network structure among predictors.
This type of problem has wide clinical applications in radiological and genomic studies. Magnetic resonance imaging (MRI) scans and genetic traits are typical examples where the mechanism of effect on an outcome, such as functional brain activities [Langer et al., 2012] or molecular phenotypes such as gene expression, proteomics, or metabolomics [Nacu et al., 2007, Souza et al., 2020], often displays a coordinated change along a pathway. In such cases, the impact of a single factor may not be apparent. Specifically for radiological studies, recent progress in imaging analysis has enabled a novel feature extraction method called radiomics, which converts large amounts of medical imaging characteristics into a high-dimensional mineable data pool for building predictive and descriptive models. The method has been applied to the diagnosis of neuropsychiatric diseases such as autism, schizophrenia, and Alzheimer's disease [Feng et al., 2019, Salvatore et al., 2021]. These findings demonstrate the validity of radiomic approaches in discovering discriminative features that can reveal pathological information. In such cases, joint selection can incorporate and highlight the underlying brain network to improve classification accuracy. Several frequentist and Bayesian methods have been proposed for joint inference on variables and graphs. Li and Li [2008] investigated a graph-constrained regularization procedure, as well as its theoretical properties, to account for the neighborhood information of variables measured on a given graph. Dobra [2009] estimated a network among relevant predictors by first performing a stochastic search to discover subsets of predictors, and then using a Bayesian model averaging approach to estimate a dependency network. Liu et al.
[2014] developed a Bayesian method for regularized regression, which provides inference on the inter-relationship between variables by explicitly modeling through a graph Laplacian matrix. Peterson et al. [2016] simultaneously inferred a sparse network among the predictors based on the block Gibbs sampler and performed variable selection using this network as guidance by incorporating it into a Markov random field (MRF) prior. Despite recent advances in Bayesian methods for joint regression and covariance estimation, theory related to joint selection consistency is not well-understood. Some early attempts [Cao and Lee, 2021a] focused solely on linear regression models, where the predictors are linked through a directed graph with a known ordering. To the best of our knowledge, joint variable and graph selection consistency in a high-dimensional GLM has not been investigated under either directed or undirected graphical models. In this paper, we consider a high-dimensional probit model with network-structured predictors via a Gaussian graphical model. Our goal is to jointly perform variable and graph selection with theoretical guarantees, and to develop a scalable algorithm for joint inference in a high-dimensional regime. We fill the gap in the literature by establishing joint selection consistency of the proposed posterior distribution, which guarantees that the posterior probability assigned to the significant variables and the true graph tends to 1 as we observe more data. To perform joint selection, spike and slab priors, imposed on the regression coefficients and the precision matrix of predictors, are linked by an MRF prior. Furthermore, for scalable inference, we adopt the regression-based generalized likelihood function for the predictors. This enables the derivation of a scalable Gibbs sampler by making available the conditional posteriors for the entries of the precision matrix in closed form. 
We illustrate the practical impact and utilities of the proposed method via a functional MRI dataset, where both the regions of interest with altered functional activities and the underlying functional brain network are inferred and integrated together for disease diagnosis. The rest of the paper is organized as follows. In Section 2, we describe the generalized likelihood function for inverse covariance estimation and the spike and slab priors for sparsity recovery under a probit regression. Posterior computation algorithms are described in Section 3. Theoretical results on the proposed posterior, including joint variable and graph selection consistency, are shown in Section 4, with proofs provided in Section A. We show the performance of the proposed method and compare it with other competitors through simulation studies in Section 5. In Section 6, a radiomic analysis is conducted for predicting Parkinson's disease based on functional MRI (fMRI) data, and a discussion is given in Section 7.

Model Specification

Consider a case-control study to identify the radiomic features that are network-structured and may contribute to the disease risk by comparing patients who have a certain disease (the "cases") with subjects who do not have that disease but are otherwise similar (the "controls"). In particular, for i = 1, 2, . . . , n, let Y_i ∈ {0, 1} be the binary response variable indicating whether the ith subject has the disease, and denote X_i = (x_{i1}, x_{i2}, . . . , x_{ip})^T ∈ R^p as the covariate vector containing all the p radiomic features for the ith subject. We consider the following probit model with covariates that obey a multivariate Gaussian distribution: for 1 ≤ i ≤ n,

$$P(Y_i = 1 \mid X_i, \beta) = \Phi(X_i^T \beta), \qquad (1)$$
$$X_i \mid \Omega \overset{\text{i.i.d.}}{\sim} N_p(0, \Omega^{-1}), \qquad (2)$$

where Φ(·) is the cumulative distribution function of the standard normal distribution, β is a p × 1 vector of regression coefficients, and Ω = (ω_{jk}) denotes the p × p inverse covariance matrix.
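As a small illustration (our own sketch, not the authors' code), the generative model (1)-(2) can be simulated directly; the dimensions, the identity precision matrix, and the sparse coefficient vector below are arbitrary choices:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n, p = 200, 10                        # illustrative sizes, not from the paper

# Model (2): covariates X_i ~ N_p(0, Omega^{-1}); identity precision here.
Omega = np.eye(p)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

# A sparse coefficient vector: only the first three predictors are active.
beta = np.zeros(p)
beta[:3] = 1.5

# Model (1): P(Y_i = 1 | X_i, beta) = Phi(X_i^T beta).
Phi = lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0)))   # standard normal CDF
probs = np.array([Phi(v) for v in X @ beta])
Y = rng.binomial(1, probs)
```

The joint selection problem is then to recover both the support of β and the sparsity pattern of Ω from (Y, X).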
Our goal is to infer the regression coefficients β and the underlying network structure Ω simultaneously, in order to better identify all the significant features.

CONCORD generalized likelihood for predictors

In the frequentist setting, one of the most popular methods to achieve a sparse estimate of Ω is the graphical lasso [Friedman et al., 2007, Yuan and Lin, 2007], where the objective function is composed of the negative Gaussian log-likelihood and an ℓ1-penalty term for the off-diagonal entries of the inverse covariance matrix over the space of positive definite matrices. This objective function is also proportional to the posterior density of Ω under Laplace priors for the off-diagonal entries, leading to a Bayesian inference and analysis framework [Wang, 2012]. Note that the requirement of positive definiteness of Ω translates into the computationally expensive need to invert (p − 1) × (p − 1) matrices in each iteration of both the graphical lasso and Bayesian Markov chain Monte Carlo (MCMC) algorithms. To mitigate this issue, the CONvex CORrelation selection methoD (CONCORD) relaxed the parameter space of Ω from positive definite matrices to symmetric matrices with positive diagonal entries. Note that this relaxation cannot be achieved under the graphical lasso framework due to the determinant of Ω in the likelihood function. Let S = n^{-1} Σ_{i=1}^{n} X_i X_i^T denote the sample covariance matrix. The CONCORD generalized likelihood function, for a given p × p symmetric matrix Ω, is

$$L(\Omega) = \exp\Big\{ n \sum_{j=1}^{p} \log \omega_{jj} - \frac{n}{2} \mathrm{tr}(\Omega^2 S) \Big\} = \exp\Big\{ n \sum_{j=1}^{p} \log \omega_{jj} - \frac{1}{2} \sum_{j=1}^{p} \sum_{i=1}^{n} \Big( \omega_{jj} x_{ij} + \sum_{k \neq j} \omega_{jk} x_{ik} \Big)^2 \Big\}, \qquad (3)$$

which is motivated by the regression-based neighborhood selection method [Meinshausen and Bühlmann, 2006]. The quadratic nature of the objective function (3) and the relaxation of the parameter space lead to an entire order of magnitude decrease in computational complexity compared to that required by graphical-lasso-based approaches.
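To make (3) concrete, the following numerical sketch (ours, not from the paper) checks that the trace form and the pseudo-residual form of the log generalized likelihood coincide for any symmetric Ω with positive diagonal, positive definite or not:

```python
import numpy as np

def concord_loglik_trace(Omega, X):
    """log L(Omega) from (3) in trace form, up to additive constants."""
    n = X.shape[0]
    S = (X.T @ X) / n                 # sample covariance matrix
    return n * np.sum(np.log(np.diag(Omega))) - 0.5 * n * np.trace(Omega @ Omega @ S)

def concord_loglik_residual(Omega, X):
    """Same quantity via pseudo-residuals omega_jj x_ij + sum_{k!=j} omega_jk x_ik."""
    n = X.shape[0]
    R = X @ Omega                     # row i holds (Omega X_i)^T, since Omega is symmetric
    return n * np.sum(np.log(np.diag(Omega))) - 0.5 * np.sum(R ** 2)
```

The equality holds because, for symmetric Ω, (n/2) tr(Ω²S) = (1/2) Σ_i ‖Ω X_i‖², which is exactly the sum of squared pseudo-residuals.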
Hereafter, we proceed with the CONCORD generalized likelihood (3) instead of the Gaussian likelihood corresponding to (2), and show that both asymptotic properties and computational efficiency can be achieved under this Bayesian framework of joint inference.

Spike and slab priors for graph selection

The main goal of this paper is to simultaneously infer the sparsity pattern in both β and Ω. To this end, we first introduce the following spike and slab priors for every off-diagonal entry of Ω:

$$\omega_{jk} \overset{\text{ind}}{\sim} (1 - q)\,\delta_0(\omega_{jk}) + q\,N(0, 1/\lambda_{jk}), \quad 1 \le j < k \le p, \qquad (4)$$

where δ_0(·) denotes the point mass at 0, λ_{jk} > 0 is the precision of the slab part, and q ∈ (0, 1) is the prior inclusion probability. For the diagonal entries of Ω, we assume

$$\omega_{jj} \overset{\text{ind}}{\sim} \mathrm{Exp}(\lambda_j), \quad 1 \le j \le p, \qquad (5)$$

where λ_j > 0. Let ξ = (ω_{jk}, 1 ≤ j < k ≤ p)^T ∈ R^{\binom{p}{2}} and δ = (ω_{11}, ω_{22}, . . . , ω_{pp})^T ∈ R^p be the collections of all the off-diagonal and diagonal entries of Ω, respectively. Let a symmetric matrix G = (G_{jk}) ∈ {0, 1}^{p×p} with zero diagonals represent the adjacency matrix corresponding to the precision matrix Ω, where G_{jk} = G_{kj} = 1 if and only if ω_{jk} ≠ 0, and G_{jk} = G_{kj} = 0 otherwise. If we further restrict our analysis to only realistic models, i.e., precision matrices with no more than R_1 > 0 nonzero entries, the spike and slab priors (4) can alternatively be represented as

$$\xi \mid G \sim N_{|G|}(0, \Lambda_u), \qquad \pi(G) \propto q^{|G|} (1 - q)^{\binom{p}{2} - |G|}\, I(|G| < R_1),$$

where |G| = Σ_{j=1}^{p−1} Σ_{k=j+1}^{p} G_{jk} is the number of nonzero entries in the upper triangular part of G, Λ is a diagonal matrix with diagonal entries {λ_{jk}, 1 ≤ j < k ≤ p}, and Λ_u is the submatrix of Λ after removing the rows and columns corresponding to the zero indices in the upper triangular part of G [Jalali et al., 2020]. In the above, I(·) stands for the indicator function.

Incorporating graph structure for variable selection

We denote a variable indicator γ = {γ_1, γ_2, . . .
, γ_p} such that γ_j = 1 if and only if β_j ≠ 0, for 1 ≤ j ≤ p. Let β_γ ∈ R^{|γ|} be the vector formed by the active components of β corresponding to model γ, where |γ| = Σ_{j=1}^{p} γ_j is the number of nonzero entries in γ. For any matrix A ∈ R^{q×p} with p columns, let A_γ ∈ R^{q×|γ|} represent the submatrix formed from the columns of A corresponding to the nonzero indices in model γ. For variable selection, we consider the following hierarchical prior over β:

$$\beta_\gamma \mid \gamma \sim N_{|\gamma|}\big(0, \tau^2 I_{|\gamma|}\big), \qquad (6)$$
$$\pi(\gamma \mid G) \propto \exp\big( -a|\gamma| + b\,\gamma^T G \gamma \big)\, I(|\gamma| < R_2), \qquad (7)$$

for some constants a > 0, b ≥ 0 and a positive integer 0 ≤ R_2 ≤ p. Prior (6) can be seen as the collection of slabs of spike and slab priors for regression coefficients [Narisetty and He, 2014, Yang et al., 2016], where τ^2 is the variance of the slab. Prior (7) is called an MRF prior on the variable indicator γ. It encourages the inclusion of variables connected to other variables through the adjacency matrix G. MRF priors have been used in the variable selection literature, including Peterson et al. [2016], Li and Zhang [2010] and Stingo and Vannucci [2010]. Note that the hyperparameter a in (7) corresponds to a penalty for large models, and b determines how strongly the adjacency matrix G affects the inclusion probabilities of variables. We can jointly infer a variable indicator γ and an adjacency matrix G by taking b > 0, whereas b = 0 leads to separate inference of γ and G.

Posterior Computation

Model (1) is equivalent to letting Y_i = I(Z_i ≥ 0), where Z_i is an underlying continuous variable that has a normal distribution with mean X_i^T β and variance 1. As we shall demonstrate subsequently, one can exploit this reparameterization to formulate a Gibbs sampler for posterior inference. Let Z = (Z_1, Z_2, . . . , Z_n)^T.
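As a sketch of this data-augmentation step (ours, not the authors' implementation), each latent Z_i can be drawn from its truncated-normal conditional by inverse-CDF sampling, using only the standard library's normal distribution:

```python
import numpy as np
from statistics import NormalDist

_std = NormalDist()  # standard normal cdf / inverse cdf

def sample_Z(Y, X, beta, rng):
    """Draw Z_i ~ N(X_i^T beta, 1), truncated to (0, inf) when Y_i = 1
    and to (-inf, 0) when Y_i = 0, via inverse-CDF sampling."""
    mu = X @ beta
    Z = np.empty(len(Y))
    for i, (y, m) in enumerate(zip(Y, mu)):
        cut = _std.cdf(-m)           # P(Z_i < 0) = Phi(-X_i^T beta)
        u = rng.random()
        if y == 1:                   # sample uniformly from the upper tail [Phi(-m), 1)
            p = cut + u * (1.0 - cut)
        else:                        # sample uniformly from the lower tail (0, Phi(-m))
            p = u * cut
        # guard against p hitting 0 or 1; safe for moderate |X_i^T beta|
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        Z[i] = m + _std.inv_cdf(p)
    return Z
```

Mapping a uniform draw through the inverse CDF restricted to the appropriate tail guarantees the sign constraint Z_i ≥ 0 (or Z_i < 0) by construction.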
Combining this with the CONCORD generalized likelihood (3) and priors (4)-(7), the full posterior of Z, β, γ, Ω and G is given by

$$\pi(Z, \beta, \gamma, \Omega, G \mid Y, X) \propto \exp\Big\{ -\frac{1}{2} (Z - X_\gamma \beta_\gamma)^T (Z - X_\gamma \beta_\gamma) \Big\} \prod_{i=1}^{n} \big\{ Y_i I(Z_i \ge 0) + (1 - Y_i) I(Z_i < 0) \big\}$$
$$\times\; \pi(\gamma \mid G) \prod_{j: \gamma_j = 1} (2\pi\tau^2)^{-1/2} \exp\big\{ -\beta_j^2/(2\tau^2) \big\} \prod_{j: \gamma_j = 0} I(\beta_j = 0)$$
$$\times\; \exp\Big\{ n \sum_{j=1}^{p} \log \omega_{jj} - \frac{1}{2} \sum_{j=1}^{p} \sum_{i=1}^{n} \Big( \omega_{jj} x_{ij} + \sum_{k \neq j} \omega_{jk} x_{ik} \Big)^2 - \sum_{j=1}^{p} \lambda_j \omega_{jj} \Big\}$$
$$\times\; \pi(G) \prod_{1 \le j < k \le p} \Big\{ (1 - G_{jk})\,\delta_0(\omega_{jk}) + G_{jk}\, \lambda_{jk}^{1/2} (2\pi)^{-1/2} \exp\big( -\lambda_{jk} \omega_{jk}^2/2 \big) \Big\}.$$

For the selection of the shrinkage parameters λ_{jk} and λ_j, following Park and Casella [2008] and Jalali et al. [2020], we assign independent gamma prior distributions to each shrinkage parameter, i.e., λ_{jk} ∼ Gamma(r, s) for 1 ≤ j < k ≤ p and λ_j ∼ Gamma(r, s) for 1 ≤ j ≤ p, where r and s are fixed positive hyperparameters.

Gibbs sampler

We suggest using standard Gibbs sampling for posterior inference. In particular, when sampling the off-diagonal entries of Ω and G, we modify the entrywise Gibbs sampler proposed by Jalali et al. [2020] to accommodate the MRF prior. For any matrix A = (a_{jk}) ∈ R^{p×p} and 1 ≤ j ≤ k ≤ p, let A_{−jk} denote all the upper triangular entries of A, including diagonals, except a_{jk}. For 1 ≤ j ≤ p, let β_{−j} ∈ R^{p−1} and X_{−j} ∈ R^{n×(p−1)} denote the vector β without the jth entry and the submatrix of X corresponding to β_{−j}, respectively. Let X̃_j ∈ R^n be the jth column of X. The above full posterior leads to the following Gibbs sampler.

• For 1 ≤ i ≤ n, generate Z_i from the conditional distribution

$$\pi(Z_i \mid Y, X, \beta) \propto \begin{cases} N(Z_i \mid X_i^T \beta, 1)\, I(Z_i > 0), & \text{if } Y_i = 1, \\ N(Z_i \mid X_i^T \beta, 1)\, I(Z_i < 0), & \text{if } Y_i = 0. \end{cases}$$

• For 1 ≤ j ≤ p, set γ_j = 0 if |γ_{−j}| = R_2 − 1. Otherwise, generate γ_j from the conditional distribution

$$\gamma_j \mid X, Z, G, \gamma_{-j}, \beta_{-j} \sim \mathrm{Bernoulli}\Big( \frac{d_j}{1 + d_j} \Big),$$

where

$$d_j = \Big(\frac{\sigma_j}{\tau^2}\Big)^{1/2} \exp\Big\{ -a + 2b \sum_{i \neq j} \gamma_i G_{ij} + \frac{\mu_j^2}{2\sigma_j} \Big\}, \qquad \sigma_j = \big(\tilde{X}_j^T \tilde{X}_j + \tau^{-2}\big)^{-1}, \qquad \mu_j = \sigma_j\, \tilde{X}_j^T (Z - X_{-j}\beta_{-j}).$$
• For 1 ≤ j ≤ p, generate β_j from the spike and slab distribution

$$\beta_j \mid X, Z, G, \gamma_j, \beta_{-j} \sim (1 - \gamma_j)\,\delta_0 + \gamma_j\, N(\mu_j, \sigma_j).$$

• For 1 ≤ j < k ≤ p, set G_{jk} = 0 if |G_{−jk}| = R_1 − 1. Otherwise, generate G_{jk} from

$$G_{jk} \mid \Omega_{-jk}, \gamma, X \sim \mathrm{Bernoulli}\Big( \frac{c_{jk}}{1 + c_{jk}} \Big),$$

where, writing S = n^{−1} Σ_{i=1}^{n} X_i X_i^T = (s_{jk}),

$$a_{jk} = s_{jj} + s_{kk} + \frac{\lambda_{jk}}{n}, \qquad b_{jk} = \sum_{k' \neq k} \omega_{jk'} s_{k'k} + \sum_{j' \neq j} \omega_{j'k} s_{j'j}, \qquad c_{jk} = \frac{q}{1 - q} \Big( \frac{\lambda_{jk}}{n a_{jk}} \Big)^{1/2} \exp\Big( \frac{n b_{jk}^2}{2 a_{jk}} + 2b\,\gamma_j \gamma_k \Big).$$

• For 1 ≤ j < k ≤ p, generate ω_{jk} from the spike and slab distribution

$$\omega_{jk} \mid G_{jk}, \Omega_{-jk}, \gamma, X \sim (1 - G_{jk})\,\delta_0(\omega_{jk}) + G_{jk}\, N\Big( -\frac{b_{jk}}{a_{jk}}, \frac{1}{n a_{jk}} \Big).$$

• For 1 ≤ j < k ≤ p, the conditional distribution of λ_{jk} is λ_{jk} | Ω ∼ Gamma(r + 1/2, ω_{jk}^2/2 + s).

• For 1 ≤ j ≤ p, the conditional distribution of λ_j is

$$\lambda_j \mid \omega_{jj} \sim \mathrm{Gamma}(r + 1, \omega_{jj} + s). \qquad (8)$$

• For 1 ≤ j ≤ p, the conditional distribution of ω_{jj} is

$$\pi(\omega_{jj} \mid \Omega_{-jj}, X) \propto \omega_{jj}^{n} \exp\big\{ -n s_{jj} \omega_{jj}^2/2 - \omega_{jj} (\lambda_j + n b_j) \big\},$$

whose normalizing constant is intractable, where b_j = Σ_{j' ≠ j} ω_{jj'} s_{j'j}. As suggested by Jalali et al. [2020], we set ω_{jj} to the unique mode of π(ω_{jj} | Ω_{−jj}, X),

$$\omega_{jj} = \frac{ -(\lambda_j + n b_j) + \sqrt{ (\lambda_j + n b_j)^2 + 4 n^2 s_{jj} } }{ 2 n s_{jj} }. \qquad (9)$$

When sampling γ_j and G_{jk}, we use the conditional posteriors after integrating out β_j and ω_{jk}, respectively, rather than the full conditional posteriors. This ensures that the Markov chain is irreducible and converges; the same technique has been commonly used, for example, in Yang and Narisetty [2020] and Xu and Ghosh [2015]. Remark 1. An extensive numerical study conducted by Jalali et al. [2020] showed that π(ω_{jj} | Ω_{−jj}, X) puts most of its mass around the mode (9). Exploiting this fact, we simply approximate this nonstandard density by the degenerate distribution at the mode for fast inference. Otherwise, one can employ a Metropolis-Hastings algorithm to obtain samples from π(ω_{jj} | Ω_{−jj}, X).
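Before turning to such an alternative, note that the closed-form mode (9) is easy to verify numerically; the sketch below (ours, with arbitrary illustrative values of n, s_jj, λ_j and b_j) checks that it is a stationary point of the log conditional density:

```python
import numpy as np

def omega_jj_mode(n, s_jj, lam_j, b_j):
    """Unique mode (9) of pi(omega_jj | Omega_-jj, X)."""
    c = lam_j + n * b_j
    return (-c + np.sqrt(c ** 2 + 4.0 * n ** 2 * s_jj)) / (2.0 * n * s_jj)

def log_cond(omega, n, s_jj, lam_j, b_j):
    """Unnormalized log conditional density of omega_jj."""
    return n * np.log(omega) - n * s_jj * omega ** 2 / 2.0 - omega * (lam_j + n * b_j)

# Arbitrary illustrative values, not taken from the paper.
n, s_jj, lam_j, b_j = 50, 1.3, 0.5, 0.2
m = omega_jj_mode(n, s_jj, lam_j, b_j)
```

Differentiating the log conditional gives n/ω − n s_jj ω − (λ_j + n b_j) = 0, a quadratic in ω whose positive root is exactly (9).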
For example, the uniform distribution Unif(ω_jj/2, 2ω_jj) can be used as a Metropolis-Hastings kernel.

Theoretical Properties

For any positive sequences a_n and b_n, we write (i) a_n ≫ b_n if a_n/b_n → ∞ as n → ∞; (ii) a_n = O(b_n) if there exists a constant C > 0 such that a_n/b_n ≤ C; (iii) a_n ∼ b_n if a_n = O(b_n) and b_n = O(a_n); and (iv) a_n = o(b_n) if a_n/b_n → 0 as n → ∞. For any a = (a_1, a_2, . . . , a_p)^T ∈ R^p, we denote the vector norms ‖a‖_1 = Σ_{j=1}^{p} |a_j|, ‖a‖_2 = (Σ_{j=1}^{p} a_j^2)^{1/2} and ‖a‖_max = max_{1 ≤ j ≤ p} |a_j|. In this section, we investigate the asymptotic theoretical properties of the proposed Bayesian joint variable and graph selection method. We are interested in whether the joint posterior for the variable indicator and the graph concentrates on the respective true values. Let β_0 = (β_{0,j}) ∈ R^p be the true coefficient vector, and γ_0 = (γ_{0,j}) ∈ {0, 1}^p be the binary vector indicating the locations of nonzero entries in β_0, i.e., γ_{0,j} = I(β_{0,j} ≠ 0) for j = 1, 2, . . . , p. Let Ω_0 = (ω_{0,jk}) ∈ R^{p×p} be the true precision matrix of X_i, and G_0 = (G_{0,jk}) ∈ {0, 1}^{p×p} be the corresponding adjacency matrix. Based on these quantities, we assume that the true data-generating mechanism is Y_i | X_i, β_0 ~ind~ Ber(Φ(X_i^T β_0)) with a random predictor vector X_i such that Cov(X_i) = Ω_0^{−1}, for i = 1, 2, . . . , n. The following assumptions are made in order to establish the theoretical properties. Below, P_0 and E_0 denote the probability measure and expectation, respectively, under the true data-generating mechanism.

Condition (A1) (Conditions on n and p) p = p_n ≥ n and log p = o(n) as n → ∞.

Condition (A2) (Conditions on the design matrix) For X_i ∈ R^p, i = 1, 2, . . . , n, we assume the following:

(i) (sub-gaussianity) There exists a constant C > 0 such that E_0 exp(α^T X_i) ≤ exp(C‖α‖_2^2) for all α ∈ R^p.
(ii) (bounded eigenvalues) There exists a constant 0 < ε_0 < 1 such that ε_0 ≤ λ_min(Ω_0) ≤ λ_max(Ω_0) ≤ ε_0^{−1}.

(iii) (boundedness) P_0(‖X_i‖_max ≤ M) = 1 for some constant M > 0.

Condition (A3) (Conditions on β_0) |γ_0| = O(1), ‖β_0‖_1 = O(1) and β_{0,min}^2 ≡ min_{j ∈ γ_0} β_{0,j}^2 ≥ C_{β_0} log p/n for some constant C_{β_0} > 0.

Condition (A4) (Conditions on Ω_0) (|G_0| + 1)^2 log p = o(n) and Ω_{0,min} ≡ min_{(j,k): G_{0,jk} = 1} ω_{0,jk}^2 ≫ {|G_0| log p + (log n)/2}/n.

Condition (A5) (Conditions on the hyperparameter q) q = p^{−C_q |G_0|}, where C_q = 16(1 ∨ c_0)^2/(1 ∧ ε_0), for some constant c_0 > 0 defined in Lemma S3 of Jalali et al. [2020].

Condition (A6) (Conditions on the other hyperparameters) For some constants 1/2 < d < 1, δ > 0 and C_a > 0, R_1 = (n/log p)^{1/2}, R_2 = (n/log p)^{(1−d)/2}, τ^2 ∼ n^{−1} p^{2+2δ}, a = C_a log p and b = o((log p/n)^{1−d}).

Condition (A1) describes the high-dimensional setting, where the number of variables p is larger than the sample size n. It allows p to grow at the rate exp{o(n)} as n → ∞. Similar conditions have been used in the literature, including Narisetty and He [2014] and Lee and Cao [2021a], to prove selection consistency for the coefficient vector. Condition (A2) collects the conditions on each row, X_i, of the random design matrix X. The first condition implies that any linear combination of X_i = (x_{i1}, x_{i2}, . . . , x_{ip})^T has a sufficiently light, sub-Gaussian tail. The second condition requires that the eigenvalues of the precision matrix Ω_0 are bounded; Liu and Martin [2019] and Cao and Lee [2021b] also used this condition for linear regression models with a random design matrix. The third condition requires that each component of X_i is bounded with probability 1, where Narisetty et al. [2019] adopted a similar condition for a deterministic design matrix.
By assuming these conditions on X, we can efficiently control the eigenvalues of n^{−1} X_γ^T X_γ and the Hessian matrix of (1) for any reasonably large model γ, with probability tending to 1. For example, condition (A2) holds if X_i = Ω_0^{−1/2} Z_i, where Z_i ~iid~ Unif([−√3, √3]^p) for i = 1, 2, . . . , n. For condition (A3), Narisetty and He [2014] assumed similar conditions. Note that we still allow, as the number of variables increases, the magnitude of the smallest coefficient to converge to zero at the rate √(log p/n). This can describe a situation in which the importance of meaningful variables decreases as the number of variables grows. Condition (A4) requires that the number of nonzero off-diagonal entries of Ω_0 is at most O(√(n/log p)). Banerjee and Ghosal [2015], Xiang et al. [2015] and Lee and Cao [2021b] used similar conditions for high-dimensional precision matrices. Furthermore, condition (A4) allows the magnitude of the smallest nonzero off-diagonal elements of Ω_0 to converge to zero at the rate √({|G_0| log p + log n}/n). We adopt these conditions from Jalali et al. [2020] so that we can use their results. Among conditions (A5) and (A6), q = p^{−C_q|G_0|} and a = C_a log p mean that the prior should impose a sufficient penalty on large |G| and |γ|, respectively; these are standard assumptions in this literature [Martin et al., 2017]. The condition τ^2 ∼ n^{−1} p^{2+2δ} implies that the variance of the slab part should be sufficiently large, as τ^2 essentially plays the role of a penalty on large |γ|. The other conditions, R_1 = (n/log p)^{1/2} and R_2 = (n/log p)^{(1−d)/2}, control the sizes of |G| and |γ|, respectively, while b = o((log p/n)^{1−d}) controls the strength of the γ^T Gγ term in π(γ | G). Similar conditions can be found in Jalali et al. [2020] and Cao and Lee [2021b]. With these conditions at hand, we are now ready to state the asymptotic properties of the posterior. Theorem 4.1 shows that the proposed prior enjoys posterior ratio consistency of γ given any G.
This implies that for any fixed G, the true variable indicator γ_0 is the mode of the conditional posterior π(γ | G, Y, X) with probability tending to 1.

Theorem 4.1. Suppose the conditions above hold. Then, for any γ ≠ γ_0 and any G,

$$\frac{\pi(\gamma, G \mid Y, X)}{\pi(\gamma_0, G \mid Y, X)} \overset{P}{\longrightarrow} 0 \quad \text{as } n \to \infty.$$

To establish posterior ratio consistency of G given γ_0, we assume the existence of accurate estimates of the diagonal entries δ = (ω_{11}, ω_{22}, . . . , ω_{pp}), say δ̂ = (ω̂_{11}, ω̂_{22}, . . . , ω̂_{pp}), satisfying ‖δ − δ̂‖_max = O(√(log p/n)) with probability at least 1 − n^{−c} for any constant c > 0. The existence of such estimates has been commonly assumed for high-dimensional precision matrix estimation [Peng et al., 2009]; for example, Proposition 1 in Peng et al. [2009] provides one way to obtain such estimates of δ. Because our main focus is the selection of γ and G, not the estimation of Ω, we work with the conditional posterior of γ and G with the estimates δ̂ plugged in. The next theorem states the posterior ratio consistency result for G given γ_0 and δ̂, which implies that the true graph G_0 is the mode of π(G | γ_0, δ̂, Y, X) with probability tending to 1.

Theorem 4.2. Suppose the conditions above hold. Then, for any G ≠ G_0,

$$\frac{\pi(\gamma_0, G \mid \hat{\delta}, Y, X)}{\pi(\gamma_0, G_0 \mid \hat{\delta}, Y, X)} \overset{P}{\longrightarrow} 0 \quad \text{as } n \to \infty.$$

For any γ and G, note that

$$\frac{\pi(\gamma, G \mid Y, X)}{\pi(\gamma_0, G \mid Y, X)} = \frac{f(Y \mid X_\gamma, \gamma)\, \pi(X \mid G)\, \pi(\gamma \mid G)\, \pi(G)}{f(Y \mid X_{\gamma_0}, \gamma_0)\, \pi(X \mid G)\, \pi(\gamma_0 \mid G)\, \pi(G)} = \frac{f(Y \mid X_\gamma, \gamma)\, \pi(X \mid \hat{\delta}, G)\, \pi(\gamma \mid G)\, \pi(G)}{f(Y \mid X_{\gamma_0}, \gamma_0)\, \pi(X \mid \hat{\delta}, G)\, \pi(\gamma_0 \mid G)\, \pi(G)} = \frac{\pi(\gamma, G \mid \hat{\delta}, Y, X)}{\pi(\gamma_0, G \mid \hat{\delta}, Y, X)},$$

where f(Y | X_γ, γ) = ∫ f(Y | X_γ, β_γ) π(β_γ | γ) dβ_γ, π(X | G) = ∫ π(X | Ω, G) π(Ω | G) dΩ and π(X | δ̂, G) = ∫ π(X | ξ, δ̂, G) π(ξ | G) dξ. By using the above equality, Theorems 4.1 and 4.2 imply joint posterior ratio consistency of γ and G, which Corollary 4.3 states.

Corollary 4.3. Suppose the conditions above hold. Then, for any (γ, G) ≠ (γ_0, G_0),

$$\frac{\pi(\gamma, G \mid \hat{\delta}, Y, X)}{\pi(\gamma_0, G_0 \mid \hat{\delta}, Y, X)} \overset{P}{\longrightarrow} 0 \quad \text{as } n \to \infty.$$

In fact, the proposed method enjoys so-called joint selection consistency: Theorem 4.4 shows that the joint posterior of γ and G given δ̂ is concentrated around the true values γ_0 and G_0.
Joint selection consistency guarantees that the posterior mass assigned to γ_0 and G_0 converges to 1 as n → ∞. This is a more powerful result than Corollary 4.3, because joint selection consistency implies joint posterior ratio consistency, but not vice versa.

Simulation Studies

In this section, we demonstrate the performance of the proposed method in various settings. For i = 1, 2, . . . , n, we simulate the data from Y_i = I(Z_i ≥ 0), where

$$Z_i = X_i^T \beta_0 + \epsilon_i, \quad \epsilon_i \sim N(0, 1), \quad X_i = (x_{i1}, x_{i2}, . . . , x_{ip})^T \overset{\text{i.i.d.}}{\sim} N_p(0, \Sigma_0),$$

with sample size n and number of predictors p. Throughout the simulation study, we fix n = 100. If an atlas segments the brain into p different anatomical sections, then, for example, we can regard p as the number of brain regions; in this case, the objective of joint inference would be to learn the abnormal functional activities among the significant brain regions that contribute to disease onset. Among the p predictors, we assume that the first ten are active, and consider the following four settings for the true coefficient vector β_0 to include different combinations of small and large signals.

• Setting 1: All the nonzero entries of β_0 are set to 3.
• Setting 2: All the nonzero entries of β_0 are generated from Unif(1.5, 3).
• Setting 3: All the nonzero entries of β_0 are set to 1.5.
• Setting 4: All the nonzero entries of β_0 are generated from Unif(0.5, 1.5).

For the true precision matrix Ω_0 = Σ_0^{−1}, we consider the following four scenarios.

• Scenario 1: For p = 150, we set all the diagonal entries to 1 and Ω_{0,i1} = Ω_{0,1i} = 0.3 for i = 2, 3, . . . , 10, and set all the remaining entries to 0.
• Scenario 2: For p = 150, we consider a banded structure for Ω_0 with unit diagonals, where Ω_{0,i,i+1} = Ω_{0,i+1,i} = 0.3 for i = 1, 2, . . . , p − 1.
• Scenario 3: For p = 150, we consider another banded structure for Ω_0 with unit diagonals, where Ω_{0,i,i+1} = Ω_{0,i+1,i} = 0.5 for i = 1, 2, . . . , p − 1 and Ω_{0,j,j+2} = Ω_{0,j+2,j} = 0.25 for j = 1, 2, . . . , p − 2.
• Scenario 4: The true precision matrix Ω_0 is the same as in Scenario 1, but with p = 300. This scenario shows the performance of the proposed method in higher dimensions.

We refer to our proposed joint selection method coupled with the Bayesian spike and slab CONCORD as J.BSSC. In terms of variable selection, we first compare the performance of J.BSSC with existing methods including the Lasso [Tibshirani, 1996], the elastic net [Zou and Hastie, 2005] and the Bayesian joint selection method based on stochastic search structure learning (SSSL) [Peterson et al., 2016, Wang, 2015], hereafter referred to as J.SSSL. The tuning parameters in the Lasso and elastic net were chosen by 10-fold cross-validation. For the Bayesian methods, as discussed by Peterson et al. [2016], we suggest using the hyperparameters a = 2.75 and b = 0.5 for the MRF prior as defaults. Furthermore, to show the benefits of joint modeling, we also implement the setting with b = 0 for J.BSSC, which corresponds to a Bayesian method modeling the variables and the precision matrix separately. The other hyperparameters were set at a_0 = 0.1, b_0 = 0.01, τ^2 = 1, q = 0.005, r = 10^{−4} and s = 10^{−8}. The initial state for γ was set to the p-dimensional zero vector, i.e., the empty model, while the initial state for the inverse covariance matrix was chosen by the graphical lasso (GLasso) [Friedman et al., 2007]. For posterior inference, 2,000 posterior samples were drawn after a burn-in period of 2,000. As the final model, we chose the indices having posterior inclusion probability larger than 0.5, which is called the median probability model.
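For concreteness, the three p = 150 precision structures above can be constructed as follows (our own sketch); all three matrices are symmetric and positive definite:

```python
import numpy as np

def scenario1(p=150):
    """Star structure: variable 1 linked to variables 2..10 with weight 0.3."""
    Omega = np.eye(p)
    Omega[0, 1:10] = Omega[1:10, 0] = 0.3
    return Omega

def scenario2(p=150, rho=0.3):
    """Banded (tridiagonal) structure with rho on the first off-diagonal."""
    Omega = np.eye(p)
    i = np.arange(p - 1)
    Omega[i, i + 1] = Omega[i + 1, i] = rho
    return Omega

def scenario3(p=150):
    """Pentadiagonal structure: 0.5 on the first band, 0.25 on the second."""
    Omega = scenario2(p, rho=0.5)
    j = np.arange(p - 2)
    Omega[j, j + 2] = Omega[j + 2, j] = 0.25
    return Omega
```

Each X_i is then drawn from N_p(0, Σ_0) with Σ_0 the inverse of the chosen matrix.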
When the posterior probability of the posterior mode is larger than 0.5, the median probability model corresponds to the posterior mode [Barbieri and Berger, 2004]. To evaluate the performance of variable selection, the sensitivity, specificity, Matthews correlation coefficient (MCC) and mean-squared prediction error (MSPE) are reported in Tables 1 to 4; these criteria evaluate the variable selection performance. We notice that, compared to the regularization methods (Lasso and elastic net), the proposed joint selection approach (J.BSSC) tends to have better specificity and MCC. The poor specificity of the regularization methods has also been discussed in the previous literature, in the sense that selecting the regularization parameter by cross-validation is optimal with respect to prediction but tends to include too many noise predictors [Meinshausen and Bühlmann, 2006]. This leads to relatively larger numbers of errors for the regularization methods compared with those for the Bayesian joint selection methods. Among all Bayesian approaches, under most settings, the proposed J.BSSC approach (with b = 0.5 or b = 0) outperforms J.SSSL on all criteria, which shows the benefit of the proposed joint method incorporating the graph structure through the CONCORD generalized likelihood. Interestingly, compared with J.SSSL, which adopts a Metropolis-Hastings algorithm for variable selection, the performance of the proposed Gibbs sampler is significantly better in terms of almost all the measures. Furthermore, J.BSSC with b = 0.5 tends to have slightly lower specificity but significantly higher sensitivity, higher MCC and lower MSPE compared with J.BSSC with b = 0. This could be caused by the proposed method frequently visiting graph-linked variables due to the MRF prior. We also found that the proposed J.BSSC overall works better than the other methods especially in the strong signal setting (i.e., Setting 1).
This is because, as the signal strength gets stronger, the consistency conditions of our method are easier to satisfy, which leads to better performance. To sum up, the above observations indicate that the proposed method can achieve good variable selection performance under a variety of configurations with different data generation mechanisms. The selection criteria are defined as

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}, \qquad \text{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$
$$\text{MSPE} = \frac{1}{n_{\text{test}}} \sum_{i=1}^{n_{\text{test}}} \big( \Phi(X_{\text{test},i}^T \hat{\beta}) - Y_{\text{test},i} \big)^2,$$

where TP, TN, FP and FN denote the numbers of true positives, true negatives, false positives and false negatives, respectively, and n_test is the size of the test set. We also briefly present the performance of graph selection and precision matrix estimation for J.BSSC. We compare the performance of J.BSSC with other existing methods including J.SSSL [Peterson et al., 2016, Wang, 2015], GLasso [Friedman et al., 2007], the constrained ℓ1-minimization for inverse matrix estimation (CLIME) [Cai et al., 2011] and the tuning-insensitive approach for optimally estimating Gaussian graphical models (TIGER) [Liu and Wang, 2017]. The tuning parameters for GLasso and TIGER were chosen by the stability approach to regularization selection (StARS) [Liu et al., 2010]. We used 10-fold cross-validation to select the penalty parameter for CLIME. For GLasso and TIGER, the final models were constructed by collecting the nonzero entries of the estimated precision matrix. In our simulation settings, CLIME could not produce exact zeros, so we chose the final graph estimate by thresholding the absolute values of the estimated precision matrix at 0.1. To evaluate the performance of graph selection and precision matrix estimation, we report the results in Tables 5 and 6, where each simulation setting is repeated 20 times. The results under the other scenarios are omitted because they gave similar conclusions, and only the results under Scenario 1 are presented in the tables. In Table 5, #Error denotes the number of errors, i.e., FP + FN. For a matrix norm ‖·‖ and an estimator Ω̂, the relative error ‖Ω_0 − Ω̂‖/‖Ω_0‖ is chosen as the criterion.
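The variable selection criteria above can be computed directly from a 0/1 selection vector and the truth; a small sketch (ours, for illustration):

```python
import numpy as np

def selection_metrics(gamma_hat, gamma_true):
    """Sensitivity, specificity and MCC for 0/1 selection vectors."""
    gh = np.asarray(gamma_hat, dtype=bool)
    gt = np.asarray(gamma_true, dtype=bool)
    tp = int(np.sum(gh & gt))         # true positives
    tn = int(np.sum(~gh & ~gt))       # true negatives
    fp = int(np.sum(gh & ~gt))        # false positives
    fn = int(np.sum(~gh & gt))        # false negatives
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return sens, spec, mcc
```

MCC balances all four cells of the confusion table, which is why it is more informative than sensitivity or specificity alone when the active set is small relative to p.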
In Table 6, E1, E2, E3 and E4 represent the relative errors based on the matrix ℓ1-norm, the matrix ℓ2-norm (spectral norm), the vector ℓ2-norm (Frobenius norm) and the vector ℓ∞-norm (entrywise maximum norm), respectively.

Based on the results in Table 5, in terms of graph selection, the joint selection approaches (J.BSSC and J.SSSL) outperform the other contenders, which estimate the graph G without incorporating information about γ. This suggests that joint selection using an MRF prior can benefit not only variable selection performance but also graph selection performance. Furthermore, Table 6 shows that J.BSSC performs significantly better than J.SSSL in terms of precision matrix estimation. In fact, J.BSSC also outperforms the other contenders for all the criteria considered. Therefore, it can be interpreted that joint selection improves estimation performance, and in particular, it is preferable to use CONCORD for precision matrix estimation. In addition, as noted in Jalali et al. [2020], BSSC is computationally much more efficient than SSSL. In Figure 1, we plot the run-time comparison between J.BSSC and J.SSSL, coded in R, under different values of p. The averaged computation times for J.BSSC are significantly smaller than those for J.SSSL, and the gap between the two widens as p grows. Even in terms of memory requirements, J.BSSC needs significantly less memory than SSSL: for example, J.SSSL requires more than 20 GB while J.BSSC achieves the goal with 0.22 GB of memory when p = 300. Furthermore, based on the asymptotic results, one can expect that our method will give accurate inference results as more observations become available, whereas the asymptotic properties of the Bayesian method proposed by Peterson et al. [2016] are still in question.

Aberrant Functional Activities in the Parkinson's Disease Cohort

Parkinson's disease (PD) was first described by Dr. James Parkinson in 1817 as "shaking palsy". It is a chronic, progressive neurodegenerative disease characterized by both motor and nonmotor features.
As one of the most common neurodegenerative disorders, the disease has a significant clinical impact on patients, families, and caregivers through its progressive degenerative effects on mobility and muscle control. Research suggests that the pathophysiological changes associated with PD may start before the onset of motor features and may include a number of nonmotor presentations, such as sleep disorders, depression, and cognitive changes. Evidence for this preclinical phase has driven the enthusiasm for research that focuses on early diagnosis and preventive therapies of PD [Schrag et al., 2015]. In recent years, neuroimaging has been increasingly employed to aid risk stratification in PD. Among a variety of neuroimaging technologies, resting-state fMRI (rs-fMRI) is regarded as a promising technique for precisely locating abnormal spontaneous activities in neuropsychological disease. Several rs-fMRI-based methods, including regional homogeneity (ReHo), the amplitude of low-frequency fluctuation, and functional connectivity, provide a task-free approach to explore spontaneous brain activity and connectivity among networks in different brain regions of PD patients. In this section, we apply the proposed joint selection method to rs-fMRI data for simultaneously identifying aberrant functional brain activities and inferring the underlying functional brain network to aid the diagnosis of PD [Wei et al., 2017, Cao et al., 2020].

Subjects and data preprocessing

This study was approved by the Medical Research Ethical Committee of Nanjing Brain Hospital (Nanjing, China) in accordance with the Declaration of Helsinki, and written informed consent was obtained from all subjects. Seventy PD patients and fifty healthy controls (HCs) were recruited. Image data were acquired using a Siemens 3.0-Tesla signal scanner (Siemens, Verio, Germany) in the department of radiology within Nanjing Brain Hospital.
Functional imaging data were collected transversely by using a gradient-recalled echo-planar imaging pulse sequence and retrieved from the archive by neuroradiologists. Image preprocessing steps, including slice-timing correction and spatial normalization, were carried out using the Data Processing software.

Image feature extraction

Zang et al. [2004] proposed the method of regional homogeneity (ReHo) to analyze characteristics of regional brain activity and to reflect the temporal homogeneity of neural activity. ReHo is defined as a voxel-based measure of brain activity which evaluates the similarity or synchronization between the time series of a given voxel and its nearest neighbors. Abnormal ReHo signals, which are associated with changes in neuronal activity in local brain regions, may be exploited to analyze abnormal brain activities and to depict the dynamic brain functional connectivities [Xu et al., 2019, Deng et al., 2016]. In particular, we focus on the mReHo maps obtained by dividing the ReHo value of each voxel in the ReHo map by the mean ReHo of the whole brain. We further segmented the mReHo maps and extracted all 112 ROI signals based on the Harvard-Oxford atlas (HOA) using the Resting-State fMRI Data Analysis Toolkit [Song et al., 2011].

Model fitting

We now consider a probit regression model with the binary disease indicator as an outcome and 112 ReHo radiomic variables as predictors. Various models, including the proposed method and other competing approaches, are then implemented to classify subjects based on these extracted features and to learn functional connectivities of the brain. The dataset is randomly divided into a training set (80%) and a testing set (20%) while maintaining the PD:HC ratio in both sets. The hyperparameters for all methods are set as in the simulation studies. For Bayesian methods, we first obtain the identified variables and then evaluate the testing set performance using standard GLM estimates based on the selected features.
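As an illustrative sketch of this prediction step (using the common 0.5 threshold applied to the final predictions), the refitted probit coefficients can be turned into class labels via Φ(xᵀβ̂). The names below are our own assumptions, not from the paper.

```python
from math import erf, sqrt

def probit_prob(x, beta_hat):
    """Predicted success probability Phi(x^T beta_hat) under a probit model."""
    eta = sum(xj * bj for xj, bj in zip(x, beta_hat))
    return 0.5 * (1.0 + erf(eta / sqrt(2.0)))

def classify(X_test, beta_hat, threshold=0.5):
    """Label a subject 1 (case) if Phi(x^T beta_hat) >= threshold, else 0 (control)."""
    return [1 if probit_prob(x, beta_hat) >= threshold else 0 for x in X_test]
```

In this sketch, beta_hat would be the GLM refit on the selected features (Bayesian methods) or the penalized estimate (frequentist methods).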
The penalty parameters in all frequentist methods are tuned via 10-fold cross-validation in the training set. The final prediction results on the testing set for both Bayesian and frequentist approaches are evaluated using a common threshold of 0.5.

Results

In terms of discriminative radiomic features, our method is able to identify abnormal functional brain activities for PD. In Figure 2, we plot the inferred functional brain network overlaid with the selected nodes that correspond to the identified brain regions. The predictive performance of the various methods on the test set is summarized in Table 7. We can tell from Table 7 that the predictive performance of the proposed joint selection approach based on BSSC is overall better than that of all the other methods. The proposed J.BSSC approach has higher sensitivity and lower MSPE compared with all the other methods, but yields a lower specificity than Lasso. Based on the most comprehensive measure, MCC, our method outperforms all the other methods. These findings suggest disease-related alterations of functional activities [Martin et al., 2009, Zhang et al., 2021, Mihaescu et al., 2019] that provide physicians with sufficient information for early diagnosis and treatment. The inferred functional brain connectivities also seem plausible and are primarily located in the typical resting-state networks (RSNs), including the default-mode network (DMN), visual network (VIN) and basal ganglia network (BGN). The identified regions in the DMN include the left middle temporal gyrus, anterior division, and the angular gyrus. We also discover abnormal VIN in the right temporal fusiform cortex, anterior division, and the right occipital fusiform gyrus, as well as unusual BGN in the left putamen. The RSN reflects the spontaneous neural activities of the blood oxygenation level-dependent signals between temporally correlated brain regions. Compared with the control group, the DMN plays a crucial role in neurodegenerative disorders and normal aging.
Several fMRI studies have indicated that the DMN was injured before the cognitive decline in PD [Sandrone and Catani, 2013, Koshimori et al., 2016]. The BGN has also been observed in pathologies with motor control and altered neurotransmitter systems of dopaminergic processes [Griffanti et al., 2018, De Micco et al., 2019]. A previous study on functional connectivity markers in advanced PD also found functional connectivity features located in the VIN and cerebellar networks that are significantly relevant to classification and provide preliminary evidence that can characterize PD patients compared with HCs [Lin et al., 2020]. In conclusion, the radiomics-based joint selection approach proposed in this paper has shown that high-order radiomic features that quantify functional brain connectivities and activities can be used for the diagnosis of PD with satisfactory prediction accuracy.

Discussion

We propose a Bayesian joint selection method for probit models. Although it should be rigorously investigated, it is possible to extend the proposed method to other GLMs with network-structured predictors and binary responses. For example, an extension to logistic regression models is, in terms of computation, straightforward by approximating the logistic distribution with a mixture of normal distributions [Albert and Chib, 1993, O'Brien and Dunson, 2004]. This approximation enables us to derive a Gibbs sampler similar to the one presented in Section 3 with some minor changes; for example, see Lee and Cao [2021a]. Furthermore, from a theoretical perspective, it is highly expected that joint selection consistency (Theorem 4.4) can be achieved in logistic regression models with the CONCORD generalized likelihood by applying the techniques in Lee and Cao [2021a], which efficiently control the score function and Hessian matrix of logistic models. Theoretical results in this paper, except Theorem 4.1, are based on the conditional posteriors given accurate estimates δ̂ of the diagonal entries.
This is because we adopt the selection consistency result in Jalali et al. [2020]. It would be interesting to investigate whether one can obtain selection consistency without conditioningδ to conduct a fully Bayesian inference. This would need a significant amount of technical modification, so we leave it as future work. Furthermore, by using CONCORD generalized likelihood, we can enjoy fast computational speed but at the cost of possibly losing the positive definiteness of the precision matrix. Although it does not harm the primary goal of this paper, the selection of the support of the precision matrix and coefficient vector, it will obviously not be satisfactory when the estimation of the precision matrix is of interest. Thus, modifying the CONCORD algorithm to ensure positive definiteness of the precision matrix while maintaining fast computation would be another possible direction of future work. A Proofs Notation. In the rest of the paper, we denote Y n ≡ Y = (Y 1 , Y 2 , . . . , Y n ) T ∈ R n and X n ≡ X = (X 1 , X 2 , . . . , X n ) T ∈ R n×p . Score function and Hessian matrix. For any γ, let η i,γ = X T i,γ β γ . Then, the log-likelihood function is L n (β γ ) = n i=1 Y i log Φ(X T i,γ β γ ) + (1 − Y i ) log 1 − Φ(X T i,γ β γ ) . The score function and Hessian matrix are given by s n (β γ ) = ∂ ∂β γ L n (β γ ) = n i=1 X i,γ Y i φ(η i,γ ) Φ(η i,γ ) − (1 − Y i ) φ(η i,γ ) 1 − Φ(η iγ ) , ≡ X T γ D(β γ )Σ(β γ ) −1/2 {Y − µ(β γ )}, H n (β γ ) = ∂ 2 ∂β γ ∂β T γ L n (β γ ) = n i=1 X i,γ X T i,γ Y i η i,γ φ(η i,γ ) Φ(η i,γ ) + φ(η i,γ ) 2 Φ(η i,γ ) 2 + (1 − Y i ) −η i,γ φ(η i,γ ) 1 − Φ(η i,γ ) + φ(η i,γ ) 2 (1 − Φ(η i,γ )) 2 ≡ n i=1 X i,γ X T i,γ · ψ i (β γ ) ≡ X T γ Ψ γ X γ where D(β γ ) = diag(d i (β γ )) ∈ R n×n with d i (β γ ) = φ(η i,γ )/ Φ(η i,γ )(1 − Φ(η i,γ )) , and µ(β γ ) = (µ i (β γ )) ∈ R n with µ i (β γ ) = Φ(η i,γ ), and Σ(β γ ) = diag(σ 2 i (β γ )) ∈ R n×n with σ 2 i (β γ ) = Φ(η i,γ )(1 − Φ(η i,γ )), and Ψ γ = diag(ψ i (β γ )) ∈ R n×n . 
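As a quick numerical sanity check of the probit score formula above, one can compare the analytic score s_n(β) with a finite-difference gradient of the log-likelihood L_n(β). This is an illustrative sketch under our own naming, not code from the paper.

```python
from math import erf, exp, log, pi, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi(z):
    """Standard normal density."""
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def log_lik(beta, X, Y):
    """Probit log-likelihood: sum_i [Y_i log Phi(eta_i) + (1 - Y_i) log(1 - Phi(eta_i))]."""
    L = 0.0
    for x, y in zip(X, Y):
        eta = sum(xj * bj for xj, bj in zip(x, beta))
        L += y * log(Phi(eta)) + (1 - y) * log(1.0 - Phi(eta))
    return L

def score(beta, X, Y):
    """Analytic score: sum_i x_i [Y_i phi/Phi - (1 - Y_i) phi/(1 - Phi)] at eta_i."""
    s = [0.0] * len(beta)
    for x, y in zip(X, Y):
        eta = sum(xj * bj for xj, bj in zip(x, beta))
        w = y * phi(eta) / Phi(eta) - (1 - y) * phi(eta) / (1.0 - Phi(eta))
        for j in range(len(beta)):
            s[j] += x[j] * w
    return s

# Central finite differences of log_lik should match the analytic score.
X = [[1.0, 0.5], [1.0, -1.0], [1.0, 2.0]]
Y = [1, 0, 1]
beta = [0.3, -0.2]
h = 1e-6
for j in range(len(beta)):
    bp, bm = list(beta), list(beta)
    bp[j] += h
    bm[j] -= h
    fd = (log_lik(bp, X, Y) - log_lik(bm, X, Y)) / (2 * h)
    assert abs(fd - score(beta, X, Y)[j]) < 1e-5
```

The same pattern (finite differences of the score) can be used to spot-check the Hessian expression H_n(β).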
For simplicity, let µ = (µ i (β 0 )) and Σ = diag(σ 2 i (β 0 )). Proof of Theorem 4.1. Note that for any G, π(γ, G | Y n , X n ) ∝ f (Y n | X γ , γ)π(X n | G)π(γ | G)π(G), where f (Y n | X γ , γ) = f (Y n | X γ , β γ )π(β γ | γ)dβ γ ≡ exp L n (β γ ) (2πτ 2 ) −|γ|/2 exp − 1 2τ 2 β γ 2 2 dβ γ . Thus, π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) = f (Y n | X γ , γ)π(γ | G) f (Y n | X γ0 , γ 0 )π(γ 0 | G) and π(X n | G) = π(X n | Ω, G)π(Ω | G)dΩ. First, we focus on overfitted models, M 1 = {γ : γ γ 0 , |γ| ≤ R 2 }. By Taylor's expansion of L n (β γ ) around the MLE of β γ under the model γ, sayβ γ , L n (β γ ) − L n (β γ ) = − 1 2 (β γ −β γ ) T H n (β γ )(β γ −β γ ) for someβ γ such that β γ −β γ 2 ≤ β γ −β γ 2 . For any β γ such that β γ − β 0,γ 2 ≤ C |γ| log p/n ≡ Cw n for some constant C > 0, by Lemma A.3, β γ − β 0,γ 2 ≤ β γ −β γ 2 + β γ − β 0,γ 2 ≤ β γ −β γ 2 + β γ − β 0,γ 2 ≤ β γ − β 0,γ 2 + 2 β γ − β 0,γ 2 ≤ 3Cw n uniformly for all γ ∈ M 1 with probability at least 1 − 2 exp(−cn) for some constant c > 0. Thus, by Lemma A.1, L n (β γ ) − L n (β γ ) ≤ − 1 − 2 (β γ −β γ ) T H n (β 0,γ )(β γ −β γ ) for some small constant > 0. For any β γ such that β γ −β γ 2 = Cw n /2, we have L n (β γ ) − L n (β γ ) ≤ − 1 − 2 β γ −β γ 2 2 λ min H n (β 0,γ ) ≤ − 1 − 8 C 2 λ|γ| log p −→ −∞ as n → ∞ with probability at least 1 − 2 exp(−cn), by Lemma A.4. Note that it also holds for any β γ such that β γ −β γ 2 > Cw n /2 due to the concavity of L n (·) and the fact thatβ γ maximizes L n (β γ ). Define B γ = {β γ : β γ −β γ 2 ≤ Cw n /2}, then B γ ⊂ {β γ : β γ − β 0,γ 2 < Cw n } with probability at least 1 − 2 exp(−cn) uniformly in γ ∈ M 1 . 
Therefore, for any γ ∈ M 1 , with probability at least 1 − 2 exp(−cn), f (Y n | X γ , γ)π(γ | G) = exp L n (β γ ) (2πτ 2 ) −|γ|/2 exp − 1 2τ 2 β γ 2 2 dβ γ π(γ | G) ≤ (2πτ 2 ) −|γ|/2 π(γ | G) exp L n (β γ ) exp − 1 − 8 C 2 λ|γ| log p B c γ exp − β γ 2 2 2τ 2 dβ γ + Bγ exp − 1 − 2 (β γ −β γ ) T H n (β 0,γ )(β γ −β γ ) − β γ 2 2 2τ 2 dβ γ , where Bγ exp − 1 − 2 (β γ −β γ ) T H n (β 0,γ )(β γ −β γ ) − β γ 2 2 2τ 2 dβ γ ≤ (2π) |γ|/2 det (1 − )H n (β 0,γ ) + τ −2 I |γ| −1/2 and exp − 1 − 8 C 2 λ|γ| log p B c γ exp − β γ 2 2 2τ 2 dβ γ ≤ exp − 1 − 8 C 2 λ|γ| log p + |γ| 2 log τ 2 (2π) |γ|/2 ≤ exp − C |γ| log p (2π) |γ|/2 det (1 − )H n (β 0,γ ) + τ −2 I |γ| 1/2−1/2 ≤ exp − C |γ| log p (2π) |γ|/2 det (1 − )H n (β 0,γ ) + τ −2 I |γ| −1/2 for some positive constants C and C . Hence, we have f (Y n | X γ , γ)π(γ | G) ≤ (τ 2 ) −|γ|/2 π(γ | G) exp L n (β γ ) det (1 − )H n (β 0,γ ) + τ −2 I |γ| −1/2 1 + o(1)(10) for any γ ∈ M 1 , with probability at least 1 − 2 exp(−cn). On the other hand, f (Y n | X γ0 , γ 0 )π(γ 0 | G) = exp L n (β γ0 ) (2πτ 2 ) −|γ0|/2 exp − 1 2τ 2 β γ0 2 2 dβ γ0 π(γ 0 | G) ≥ (2πτ 2 ) −|γ0|/2 π(γ 0 | G) exp L n (β γ0 ) Bγ 0 exp − 1 + 2 (β γ0 −β γ0 ) T H n (β 0,γ0 )(β γ0 −β γ0 ) − β γ0 2 2 2τ 2 dβ γ0 , where A = (1 + )H n (β 0,γ0 ) and Bγ 0 exp − 1 + 2 (β γ0 −β γ0 ) T H n (β 0,γ0 )(β γ0 −β γ0 ) − β γ0 2 2 2τ 2 dβ γ0 = (2π) −|γ0|/2 det A + τ −2 I |γ0| −1/2 exp − 1 2β T γ0 A − A(A + τ −2 I |γ0| ) −1 A β γ0 (2π) −|γ0|/2 det A + τ −2 I |γ0| −1/2 uniformly in γ ∈ M 1 with probability at least 1 − 2 exp(−cn), by Lemma 1 in Lee and Cao [2021a]. Hence, we have f (Y n | X γ0 , γ 0 )π(γ 0 | G) (τ 2 ) −|γ0|/2 π(γ 0 | G) exp L n (β γ0 ) det (1 + )H n (β 0,γ0 ) + τ −2 I |γ0| −1/2 (11) for any γ ∈ M 1 , with probability at least 1 − 2 exp(−cn). 
Then, (10) and (11) implies π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) π(γ | G) π(γ 0 | G) (nτ 2 ) − 1 2 (|γ|−|γ0|) det 1+ n H n (β 0,γ0 ) + (nτ 2 ) −1 I |γ0| 1/2 det 1− n H n (β 0,γ ) + (nτ 2 ) −1 I |γ| 1/2 × exp L n (β γ ) − L n (β γ0 ) π(γ | G) π(γ 0 | G) (nτ 2 ) − 1 2 (|γ|−|γ0|) 2 λ |γ|−|γ0| exp L n (β γ ) − L n (β γ0 ) for any γ ∈ M 1 , with probability at least 1 − 2 exp(−cn), by Lemma 2 in Lee and Cao [2021a]. Note that by Taylor's expansion of L n (·), L n (β γ ) − L n (β γ0 ) ≤ L n (β γ ) − L n (β 0,γ ) = (β γ − β 0,γ ) T s n (β 0,γ ) − 1 2 (β γ − β 0,γ ) T H n (β γ )(β γ − β 0,γ ) for someβ γ such that β γ −β γ 2 ≤ β 0,γ −β γ 2 . Again by Taylor's expansion of s n (·), we have 0 = s n (β γ ) = s n (β 0,γ ) − H n (β * γ )(β γ − γ 0,γ ) for someβ * γ such that β * γ − β 0,γ 2 ≤ β γ − β 0,γ 2 , which implies (β γ − β 0,γ ) T s n (β 0,γ ) = s n (β 0,γ ) T H n (β * γ ) −1 s n (β 0,γ ) and (β γ − β 0,γ ) T H n (β γ )(β γ − β 0,γ ) = s n (β 0,γ ) T H n (β * γ ) −1 s n (β 0,γ ) + (β γ − β 0,γ ) T H n (β γ ) − H n (β * γ ) (β γ − β 0,γ ). Note that H n (β γ ) = X T γ Ψ γ X γ and (β γ − β 0,γ ) T H n (β γ ) − H n (β * γ ) (β γ − β 0,γ ) ≤ sup u∈R |γ| : u 2=1 u T H n (β γ ) − H n (β * γ ) u · β γ − β 0,γ 2 2 ≤ max 1≤i≤n ψ i (β γ ) − ψ i (β * γ ) · X T γ X γ · β γ − β 0,γ 2 2 max 1≤i≤n X T i,γβγ − X T i,γβ * γ · X T γ X γ · β γ − β 0,γ 2 2 R 2 log p n · n · R 2 log p n = R 4 2 log p n 1/2 log p = o log p uniformly in γ ∈ M 1 with probability at least 1 − 2 exp(−cn), where the third inequality holds due to the Lipschitz continuity of ψ i (β γ ) (using similar arguments in the proof of Lemma A.1) and the fourth inequality holds due to condition (A2) and (12). Therefore, by Lemma A.1, L n (β γ ) − L n (β γ0 ) ≤ 1 2(1 − ) s n (β 0,γ ) T H n (β 0,γ ) −1 s n (β 0,γ ) + o(log p) uniformly in γ ∈ M 1 with probability at least 1 − 2 exp(−cn). 
Note that s n (β 0,γ ) = X T γ D(β 0,γ )Ũ , wherẽ U = Σ −1/2 (Y − µ), and H n (β 0,γ ) = n i=1 X i,γ X T i,γ ψ i (β 0,γ ) ≥ n i=1 X i,γ X T i,γ · C −1 d i (β 0,γ ) 2 ≥ C −1 X T γ D(β 0,γ ) 2 X γ for some constant C > 0, because ψ i (β 0,γ ) ≥ C −1 d i (β 0,γ ) 2 on |η 0,i,γ | ≤ X i max β 0 1 ≤ C for some positive constants C and C . Let P γ = D(β 0,γ )X γ (X T γ D(β 0,γ ) 2 X γ ) −1 X T γ D(β 0,γ ). Thus, L n (β γ ) − L n (β γ0 ) ≤ C 2(1 − )Ũ T P γŨ + o log p ≤ C (|γ| − |γ 0 |) log p for some large positive constants C and C uniformly in γ ∈ M 1 with probability at least 1 − 2 exp(−cn) − p −2|γ| by Lemma A.2 with t = 2|γ| log p due to condition (A3). Then, we have π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) π(γ | G) π(γ 0 | G) (nτ 2 ) − 1 2 (|γ|−|γ0|) 2 λ |γ|−|γ0| exp L n (β γ ) − L n (β γ0 ) exp − a(|γ| − |γ 0 |) + bγ T Gγ − bγ T 0 Gγ 0 p −(1+δ−C) 2 λ |γ|−|γ0| ≤ p −(1+δ−C+C ) 2 λ |γ|−|γ0| = o(1) uniformly in γ ∈ M 1 with probability at least 1 − 2 exp(−cn) − p −2|γ| for some positive constants C and δ due to condition (A6). Now we focus on the remaining models, M 2 = γ : γ γ 0 , |γ| ≤ R 2 . For any γ ∈ M 2 , let γ * = γ ∪ γ 0 so that γ * ∈ M * 1 = {γ : γ ⊃ γ 0 , |γ| ≤ R 2 + |γ 0 |}. Let β γ * be the |γ * |-dimensional vector including β γ for γ and zeros for γ 0 \ γ. By Taylor's expansion, for any β γ * such that β γ * − β 0,γ * 2 ≤ C |γ * | log p/n ≡ Cw n for some large constant C > 0, L n (β γ * ) = L n (β γ * ) − 1 2 (β γ * −β γ * ) * H n (β γ * )(β γ * −β γ * ) ≤ L n (β γ * ) − 1 − 2 (β γ * −β γ * ) * H n (β 0,γ * )(β γ * −β γ * ) ≤ L n (β γ * ) − n(1 − )λ 2 β γ * −β γ * 2 2 with probability at least 1 − 2 exp(−cn), where the first inequality holds due to Lemmas A.1 and A.3, and the second inequality holds due to Lemma A.4. Define B γ * = {β γ : β γ * −β γ * 2 ≤ Cw n /2}, then B γ * ⊂ {β γ : β γ * − β 0,γ * 2 < Cw n } with probability at least 1 − 2 exp(−cn) uniformly in γ ∈ M 2 . 
Then, for any γ ∈ M 2 , with probability at least 1 − 2 exp(−cn), f (Y n | X γ , γ)π(γ | G) ≤ π(γ | G) exp L n (β γ * ) (2πτ 2 ) −|γ|/2 B γ * exp − n(1 − )λ 2 β γ * −β γ * 2 2 − 1 2τ 2 β γ 2 2 dβ γ + (τ 2 ) −|γ|/2 exp − C |γ * | log p ≤ π(γ | G) exp L n (β γ * ) (τ 2 ) −|γ|/2 n(1 − )λ + τ −2 −|γ|/2 exp − n(1 − )λ 2 β γ0\γ 2 2 + exp − C |γ * | log p ≤ π(γ | G) exp L n (β γ * ) (τ 2 ) −|γ|/2 n(1 − )λ + τ −2 −|γ|/2 × exp − (1 − )λ 2 C β 2 |γ 0 \ γ| − C |γ 0 \ γ| log p (1 + o(1)) for some positive constants C and C because |γ 0 | = O(1), (2π) −|γ|/2 B γ * exp − n(1 − )λ 2 β γ * −β γ * 2 2 − 1 2τ 2 β γ 2 2 dβ γ ≤ n(1 − )λ + τ −2 −|γ|/2 exp − n(1 − )λ 2 β γ0\γ 2 2 and exp − n(1 − )λ 2 β γ0\γ 2 2 ≤ exp − n(1 − )λ 2 1 2 β 0,γ0\γ 2 2 − β γ0\γ − β 0,γ0\γ 2 2 ≤ exp − n(1 − )λ 2 1 2 β 0,γ0\γ 2 2 − C |γ 0 \ γ| log p n ≤ exp − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C |γ 0 \ γ| log p due to condition (A3). By deriving the lower bound of f (Y n | X γ0 .γ 0 ) as before, π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) π(γ | G) π(γ 0 | G) (nτ 2 ) − 1 2 (|γ|−|γ0|) det 1+ n H n (β 0,γ0 ) + (nτ 2 ) −1 I |γ0| 1/2 {(1 − )λ + (nτ 2 ) −1 } |γ|/2 × exp L n (β γ * ) − L n (β γ0 ) exp − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C |γ 0 \ γ| log p π(γ | G) π(γ 0 | G) (Cnτ 2 ) − 1 2 (|γ|−|γ0|) × exp L n (β γ * ) − L n (β γ0 ) exp − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C |γ 0 \ γ| log p for any γ ∈ M 2 and some constantC > 0 with probability at least 1 − 2 exp(−cn) − p −2|γ * | , because det 1+ n H n (β 0,γ0 ) + (nτ 2 ) −1 I |γ0| 1/2 {(1 − )λ + (nτ 2 ) −1 } |γ|/2 ≤ {(1 + )C * 2 + (nτ 2 ) −1 } |γ0|/2 {(1 − )λ + (nτ 2 ) −1 } |γ|/2 = {(1 + )C * 2 + (nτ 2 ) −1 } |γ0|/2 {(1 − )λ + (nτ 2 ) −1 } |γ0|/2 {(1 − )λ + (nτ 2 ) −1 } − 1 2 (|γ|−|γ0|) C − 1 2 (|γ|−|γ0|) . 
Therefore, by similar arguments used for γ ∈ M 1 case, π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) exp − C a + 1 + δ + logC log p (|γ| − |γ 0 |) log p + C (|γ * | − |γ|) log p − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C |γ 0 \ γ| log p = exp − C a + 1 + δ + logC log p − C (|γ| − |γ ∩ γ 0 |) log p − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C − C a − 1 − δ |γ 0 \ γ| log p = o(1) uniformly in γ ∈ M 2 with probability at least 1 − 2 exp(−cn) − p −2|γ * | for some positive constants C a , C β0 and δ, which completes the proof. Proof of Theorem 4.2. By Theorem 1 in Jalali et al. [2020], under conditions (A2), (A4) and (A5), π(G 0 |δ, X n ) P −→ 1 as n → ∞. It implies π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) = f (X n |δ, G)π(γ 0 | G)π(G) f (X n |δ, G 0 )π(γ 0 | G 0 )π(G 0 ) = π(G |δ, X n ) π(G 0 |δ, X n ) π(γ 0 | G) π(γ 0 | G 0 ) = π(G |δ, X n ) π(G 0 |δ, X n ) exp bγ T 0 Gγ 0 − bγ T 0 G 0 γ 0 ≤ π(G |δ, X n ) π(G 0 |δ, X n ) exp b|γ 0 | 2 P −→ 0 as n → ∞ for any G = G 0 , where π(X n |δ, G) = π(X n | ξ,δ, G)π(ξ | G)dξ. Proof of Corollary 4.3. Note that π(γ, G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) = π(γ, G |δ, Y n , X n ) π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) = f (Y n | X γ , γ)π(γ | G) f (Y n | X γ0 , γ 0 )π(γ 0 | G) π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) = π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) . Thus, by applying Theorems 4.1 and 4.2, we can complete the proof. Proof of Theorem 4.4. It suffices to show that γ:γ =γ0 G:G =G0 π(γ, G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) P −→ 0 as n → ∞. Note that γ:γ =γ0 G:G =G0 π(γ, G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) = G:G =G0 π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) γ:γ =γ0 π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) and {γ : γ = γ 0 , |γ| ≤ R 2 } = M 1 ∪ M 2 , where M 1 and M 2 are defined in the proof of Theorem 4.1. 
By the proof of Theorem 4.1, we have γ∈M1 π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) R2 k=|γ0|+1 γ∈M1:|γ|=k p −(1+δ−C+C ) 2 λ k−|γ0| ≤ R2 k=|γ0|+1 p − |γ 0 | k − |γ 0 | p −(1+δ−C+C ) 2 λ k−|γ0| ≤ R2 k=|γ0|+1 p −(δ−C+C ) 2 λ k−|γ0| = o(1) for some positive constants C and δ, with probability at least 1 − 2 exp(−cn) − p −|γ0|−1 , and γ∈M2 π(γ, G | Y n , X n ) π(γ 0 , G | Y n , X n ) R2 k=0 γ∈M2:|γ|=k exp − C a + 1 + δ + logC log p − C (k − |γ ∩ γ 0 |) log p − (1 − )λ 2 C β0 2 |γ 0 \ γ| − C − C a − 1 − δ |γ 0 \ γ| log p ≤ R2 k=0 (|γ0|−1)∧k ν=0 |γ 0 | ν p − |γ 0 | k − ν exp − C a + 1 + δ + logC log p − C (k − ν) log p − (1 − )λ 2 C β0 2 (|γ 0 | − ν) − C − C a − 1 − δ (|γ 0 | − ν) log p ≤ R2 k=0 (|γ0|−1)∧k ν=0 |γ 0 | |γ0|−ν exp − C a + 1 + δ + logC log p − C − 1 (k − ν) log p − (1 − )λ 2 C β0 2 (|γ 0 | − ν) − C − C a − 1 − δ (|γ 0 | − ν) log p = o(1) for some positive constants C a , C β0 and δ, with probability at least 1 − 2 exp(−cn) − p −2 . On the other hand, we have G:G =G0 π(γ 0 , G |δ, Y n , X n ) π(γ 0 , G 0 |δ, Y n , X n ) ≤ G:G =G0 π(G |δ, X n ) π(G 0 |δ, X n ) exp b|γ 0 | 2 = 1 − π(G 0 |δ, X n ) π(G 0 |δ, X n ) exp b|γ 0 | 2 P −→ 0 by Theorem 1 in Jalali et al. [2020], which completes the proof. Lemma A.1. Under conditions (A1)-(A3) and R 2 = (n/ log p) 1−d 2 with 1/2 < d < 1, for n = CR 2 log p/n, we have (1 − n )H n β 0,γ ≤ H n β γ ≤ (1 + n )H n β 0,γ for any γ ∈ M * 1 = {γ : γ ⊃ γ 0 , |γ| ≤ R 2 + |γ 0 |} and any β γ such that β γ − β 0,γ 2 ≤ C |γ| log p/n 1 2 , with probability at least 1 − 2 exp(−cn) for some positive constants c, C and C . Proof of Lemma A.1. Note that by Theorem 5.39 and Remark 5.40 in Eldar and Kutyniok [2012], there exist positive constants C * 1 and C * 2 such that C * 1 ≤ min γ:|γ|≤R2+|γ0| λ min n −1 X T γ X γ ≤ max γ:|γ|≤R2+|γ0| λ max n −1 X T γ X γ ≤ C * 2(12) with probability at least 1 − 2 exp(−cn) for some constant c > 0. 
Also note that H n (β γ ) = X T γ Ψ γ X γ , where Ψ γ = diag(ψ i (β γ )) ∈ R n×n and ψ i (β γ ) = Y i η i,γ φ(η i,γ ) Φ(η i,γ ) + φ(η i,γ ) 2 Φ(η i,γ ) 2 + (1 − Y i ) −η i,γ φ(η i,γ ) 1 − Φ(η i,γ ) + φ(η i,γ ) 2 (1 − Φ(η i,γ )) 2 .(13) Thus, it suffices to show that (1 − n )ψ i (β 0,γ ) ≤ ψ i (β γ ) ≤ (1 + n )ψ i (β 0,γ ), ∀i = 1, . . . , n. Due to conditions (A1)-(A3), uniformly for γ ∈ M * 1 , and |η i,γ | ≤ |η i,γ − η i,0,γ | + |η i,0,γ | ≤ o(1) + X i max β 0 1 ≤ C for some constant C > 0, with probability at least 1 − 2 exp(−cn). Since the support of η i,γ , say S, is compact and h(η i,γ ) ≡ log ψ i (β γ ) is continuously differentiable on S, h(η i,γ ) is Lipschitz continuous with probability at least 1 − 2 exp(−cn). Therefore, for any γ ∈ M * 1 , |η i,γ − η i,0,γ | = |X T i,γ β γ − X T i,γ β 0,γ | ≤ X i,γ 2 β γ − β 0,ψ i (β γ ) ψ i (β 0,γ ) = exp h(η i,γ ) − h(η i,0,γ ) ≤ exp K|η i,γ − η i,0,γ | ≤ exp K C R 2 log p n ≤ 1 + C R 2 log p n and φ i (β γ ) φ i (β 0,γ ) ≥ 1 − C R 2 log p n for some positive constants K, C, C and C , with probability at least 1 − 2 exp(−cn). By taking n = CR 2 log p/n, it completes the proof. Lemma A.2. LetŨ = Σ −1/2 (Y − µ) and P γ be the projection matrix onto the column space of D 0,γ X γ , where |γ| ≤ R 2 , and R 2 = (n/ log p) 1−d 2 with 1/2 < d < 1. Then, for some constant δ * > 0, P 0 Ũ T P γŨ > (1 + δ * ) tr(P γ ) + 2 tr(P γ )t + 2t ≤ e −t , ∀t > 0. Proof of Lemma A.2. Because the distribution of Y i given X i is a Bernoulli distribution, there exists a constant δ * > 0 and N (δ * ) such that, given X n , E 0 exp(u TŨ ) | X n ≤ exp 1 + δ * 2 u 2 2 for any n ≥ N (δ * ) and u ∈ R n in the space spanned by the columns of D 0,γ X γ (See condition 2(c) and P 0 Ũ T P γŨ > (1 + δ * ) tr(P γ ) + 2 tr(P γ )t + 2t | X n ≤ e −t , ∀t > 0, which implies the desired result because the right-hand-side does not depend on X n . 
with 1/2 < d < 1, we have sup γ:γ⊃γ0,|γ|=m β γ − β 0,γ 2 = O m log p n uniformly for all m ≤ R 2 + |γ 0 | with probability at least 1 − 2 exp(−cn) for some constant c > 0. Proof of Lemma A.3. Fix γ such that γ ⊃ γ 0 and |γ| = m. Note that if |γ| = |γ 0 | = 0, then the above argument trivially holds. Thus, we focus only on the case |γ 0 | ≥ 1. LetỸ = (Ỹ i ) ∈ R n andμ = (μ i ) ∈ R n , where η i,0 = X T ≤ E 0 exp φ(M 0 ) 2 8Φ(M 0 ) 2 (1 − Φ(M 0 )) 2 n i=1α 2 i ≡ E 0 exp 1 2 σ 2 α 2 2 where α = (α i ) ∈ R n and α i =α i φ(η i,0 )/{Φ(η i,0 )(1 − Φ(η i,0 ))}. The last inequality holds because |η i,0 | ≤ X i max β 0 1 ≤ M 0 for some constant M 0 . Therefore, by Theorem A.1 in Narisetty et al. [2019], it implies p −2m ≥ P 0 X T γ (Ỹ −μ) 2 2 ≥ σ 2 tr(X γ X T γ ) + 2 tr(X γ X T γ X γ X T γ ) 2m log p + 4 X γ X T γ m log p | X n ≥ P 0 X T γ (Ỹ −μ) 2 2 ≥ σ 2 tr(X T γ X γ ) + 2tr{(X T γ X γ ) 2 } 2m log p + 4 X T γ X γ m log p | X n ≥ P 0 X T γ (Ỹ −μ) 2 2 ≥ σ 2 C * 2 mn + 2 √ 2C * 2 mn log p + 4C * 2 mn log p | X n ≥ P 0 X T γ (Ỹ −μ) 2 2 ≥ 5C * 2 σ 2 mn log p | X n for all sufficiently large p, with probability at least 1 − 2 exp(−cn) for some constant c > 0. Here, C * 2 is defined at (12). Now, let β γ = β 0,γ + c n u, where u ∈ R n , u 2 = 1, c n = 20C * 2 σ 2 m log p/{nλ 2 (1 − ) 2 } for some small constant > 0 and C * 1 is defined at (12). Then, P 0 L n (β γ ) − L n (β 0,γ ) > 0 for some u | X n = P 0 c n u T s n (β 0,γ ) > 1 2 c 2 n u T H n (β γ )u for some u | X n ≤ P 0 u T s n (β 0,γ ) > 1 2 (1 − )c n λ min H n (β 0,γ ) for some u | X n ≤ P 0 u T s n (β 0,γ ) > 1 2 (1 − )λc n n for some u | X n ≤ P 0 X T γ (Ỹ −μ) 2 2 > (1 − ) 2 4 λ 2 c 2 n n 2 | X n = P 0 X T γ (Ỹ −μ) 2 2 > 5C * 2 σ 2 mn log p | X n ≤ p −2m with probability at least 1 − 2 exp(−cn) for some constant c > 0, where the first equality holds due to the Taylor's expansion, the first inequality holds due to Lemma A.1, the second inequality holds due to Lemma A.4. 
Due to the concavity of L n (·), it implies that

Note that a constant M satisfying (14) always exists due to conditions (A2) and (A3). Then, by Hoeffding's inequality, P 0 (|I| ≥ n(1 − w)/2) ≥ 1 − exp{−n(1 − w)}. Furthermore, by (12), for any γ such that |γ| ≤ R 2 + |γ 0 |,

λ min (n −1 H n (β 0,γ )) = λ min (n −1 X T γ diag(ψ i (β 0,γ )) X γ ) ≥ λ min (n −1 X T I,γ diag(ψ i (β 0,γ )) X I,γ ) ≥ (1/2) C * 1 (1 − w) min i∈I ψ i (β 0,γ )

with probability at least 1 − 2 exp(−cn) for some constant c > 0, where the definition of ψ i (β 0,γ ) is given at (13) and X I,γ ∈ R |I|×|γ| is a submatrix of X γ consisting of the columns corresponding to I. Note that for any i ∈ I,

ψ i (β 0,γ ) ≥ M φ(M )/Φ(M ) + φ(M ) 2 /Φ(M ) 2 ≡ d M .

Therefore,

min γ:|γ|≤R2+|γ0| λ min (n −1 H n (β 0,γ )) ≥ (1/2) d M C * 1 (1 − w)

with probability at least 1 − 2 exp(−cn) for some constant c > 0.

2, . . . , n and ‖Ω 0 ‖ 1 = O(1). Here, ‖·‖ 1 denotes the matrix ℓ1-norm. Condition (A3) means that the true regression coefficient β 0 has a finite number of nonzero entries and a bounded ℓ1-norm. It holds if we assume ‖β 0 ‖ max = O(1). For examples, Johnson and Rossell [2012]

Theorem 4.1 (Posterior ratio consistency of γ). Suppose conditions (A1)-(A3) and (A6) hold. Then, for any γ ≠ γ 0 and G,

Theorem 4.2 (Posterior ratio consistency of G). Suppose conditions (A2), (A4), (A5) and |γ 0 | = O(1)

Corollary 4.3 (Joint posterior ratio consistency of γ and G). Suppose conditions (A1)-(A6) hold. Then,

Theorem 4.4 (Joint selection consistency of γ and G). Suppose conditions (A1)-(A6) hold. Then, π(γ 0 , G 0 | δ̂, Y, X) converges in probability to 1 as n → ∞.

Figure 1: The comparison of average wall-clock seconds per iteration under different dimensions.
The abnormal functional brain activities identified for PD occur in the regions of interest including right superior frontal gyrus (F1.R), left middle temporal gyrus, anterior division (T2a.L), left angular gyrus (AG.L), right angular gyrus (AG.R), right temporal fusiform cortex, anterior division (TFa.R), right occipital fusiform gyrus (OF.R), left frontal operculum cortex (FO.L) and left putamen (Put.L).

Figure 2: The lateral and medial view of the functional brain network inferred by J.BSSC. Nodes selected by J.BSSC are marked in orange.

related explanations in Narisetty et al. [2019]). By Theorem A.1 in Narisetty et al. [2019],

Lemma A.3. Under conditions (A1)-(A3) and R 2 = (n/ log p)

p −2m + 2e −cn = o(1), which gives the desired result.

Lemma A.4. Under conditions (A1)-(A3) and R 2 = (n/ log p) at least 1 − 2 exp(−cn) for some constant c > 0.

Proof of Lemma A.4. Let [n] = {1, . . . , n} and I = {i ∈ [n] : |X T i β 0 | ≤ M } for some large constant M

Table 1: The summary statistics for Scenario 1 are represented for different settings, which correspond to different choices of the true coefficient β 0 .

                  Setting 1                              Setting 2
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  0.87        0.99        0.87  0.08    0.79        0.99        0.80  0.10
J.BSSC (b = 0)    0.61        1           0.76  0.12    0.46        1           0.63  0.18
J.SSSL            0.35        0.97        0.43  0.24    0.25        0.97        0.35  0.29
Lasso             0.72        0.98        0.69  0.12    0.80        0.98        0.75  0.12
Elastic           0.90        0.93        0.62  0.20    1           0.94        0.70  0.20

                  Setting 3                              Setting 4
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  0.67        0.99        0.73  0.14    0.85        0.99        0.89  0.08
J.BSSC (b = 0)    0.41        0.99        0.55  0.18    0.50        1           0.69  0.16
J.SSSL            0.20        0.98        0.32  0.34    0.32        0.98        0.40  0.27
Lasso             0.69        0.98        0.68  0.14    0.82        0.98        0.79  0.09
Elastic           0.95        0.94        0.69  0.20    0.84        0.97        0.75  0.19

Table 2: The summary statistics for Scenario 2 are represented for different settings, which correspond to different choices of the true coefficient β 0 .

                  Setting 1                              Setting 2
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  1           1           1     0.05    1           1           1     0.05
J.BSSC (b = 0)    0.90        1           0.95  0.10    0.88        1           0.90  0.14
J.SSSL            0.52        0.98        0.63  0.20    0.41        0.97        0.42  0.21
Lasso             0.74        0.89        0.44  0.19    0.66        0.90        0.41  0.18
Elastic           0.70        0.86        0.40  0.24    0.50        0.93        0.38  0.24

                  Setting 3                              Setting 4
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  1           0.99        0.95  0.08    0.90        0.99        0.89  0.11
J.BSSC (b = 0)    0.83        0.99        0.85  0.12    0.76        0.99        0.84  0.18
J.SSSL            0.36        0.97        0.39  0.21    0.34        0.96        0.29  0.23
Lasso             0.64        0.92        0.43  0.19    0.61        0.89        0.36  0.17
Elastic           0.62        0.89        0.40  0.24    0.60        0.86        0.32  0.23

The criteria are defined as Sensitivity = TP / (TP + FN), where TP, TN, FP and FN are the true positive, true negative, false positive and false negative counts, respectively, and β̂ denotes the estimated coefficient based on each method. For Bayesian methods, the usual GLM estimates based on the selected variables were used as β̂. We generated test samples and corresponding predictors Y_test,1, Y_test,2, . . . , Y_test,n_test and X_test,1, X_test,2, . . . , X_test,n_test, respectively, with n_test = 50 to calculate the MSPE. The sensitivity, specificity, MCC and MSPE under different scenarios are reported in Tables 1-4.

Table 3: The summary statistics for Scenario 3 are represented for different settings, which correspond to different choices of the true coefficient β 0 .

                  Setting 1                              Setting 2
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  0.92        1           0.96  0.09    0.77        0.98        0.71  0.13
J.BSSC (b = 0)    0.90        1           0.94  0.10    0.52        1           0.65  0.12
J.SSSL            0.49        0.98        0.57  0.20    0.43        0.98        0.49  0.20
Lasso             0.51        0.92        0.35  0.22    0.41        0.94        0.33  0.22
Elastic           0.55        0.86        0.36  0.24    0.48        0.89        0.32  0.24

                  Setting 3                              Setting 4
                  Sensitivity Specificity MCC   MSPE    Sensitivity Specificity MCC   MSPE
J.BSSC (b = 1/2)  0.83        0.96        0.69  0.18    0.55        0.99        0.67  0.17
J.BSSC (b = 0)    0.56        1           0.70  0.17    0.41        1           0.62  0.18
J.SSSL            0.30        0.97        0.32  0.24    0.25        0.97        0.29  0.26
Lasso             0.43        0.94        0.33  0.22    0.56        0.92        0.38  0.19
Elastic           0.52        0.88        0.37  0.24    0.55        0.91        0.37  0.23

Table 4
: 4The summary statistics for Scenario 4 are represented for different settings, which corresponds to different choice of the true coefficient β 0 .Setting 1 Setting 2 Sensitivity Specificity MCC MSPE Sensitivity Specificity MCC MSPE J.BSSC (b = 1 2 ) 0.78 1 0.86 0.07 0.74 1 0.85 0.08 J.BSSC (b = 0) 0.57 1 0.72 0.07 0.48 1 0.65 0.13 J.SSSL 0.40 0.99 0.43 0.16 0.31 0.99 0.33 0.19 Lasso 0.70 0.99 0.73 0.08 0.73 0.98 0.61 0.06 Elastic 0.79 0.99 0.75 0.20 0.75 0.99 0.77 0.18 Setting 3 Setting 4 Sensitivity Specificity MCC MSPE Sensitivity Specificity MCC MSPE J.BSSC (b = 1 2 ) 0.70 1 0.78 0.12 0.64 1 0.72 0.17 J.BSSC (b = 0) 0.45 1 0.61 0.15 0.38 1 0.51 0.16 J.SSSL 0.25 0.98 0.29 0.19 0.22 0.98 0.24 0.21 Lasso 0.72 0.98 0.61 0.11 0.70 0.97 0.60 0.13 Elastic 0.67 0.98 0.59 0.18 0.59 0.99 0.65 0.19 Table 5 : 5The summary statistics for graph selection under Setting 1 and Scenario 1 are represented.Sensitivity Specificity MCC #Error J.BSSC 1 1 0.90 2 J.SSSL 1 1 0.87 3 GLasso 1 0.98 0.19 239 CLIME 1 0.98 0.18 256 TIGER 1 1 0.73 8 Table 6: The summary statistics for precision matrix estimation under Setting 1 and Scenario 1 are represented. E 1 E 2 E 3 E 4 J.BSSC 0.13 0.21 0.08 0.28 J.SSSL 8.01 7.26 1.86 11.95 GLasso 0.37 0.24 0.19 0.19 CLIME 1.51 2.22 0.58 4.16 TIGER 1.47 1.91 0.31 3.48 Table 7 : 7The summary statistics for prediction performance on the testing set for all methods.Furthermore, J.BSSC identifies regions of interest that are coherent with the altered functional features in cortical and subcortical regions discovered in previous studiesSensitivity Specificity MCC MSPE J.BSSC (b = 1 2 ) 0.92 0.82 0.74 0.09 J.BSSC (b = 0) 0.67 0.73 0.39 0.18 J.SSSL 0.58 0.73 0.31 0.24 Lasso 0.67 0.91 0.59 0.16 Elastic 0.75 0.82 0.57 0.16 Bayesian analysis of binary and polychotomous response data. H James, Siddhartha Albert, Chib, Journal of the American Statistical Association. 88422James H. Albert and Siddhartha Chib. 
Sayantan Banerjee and Subhashis Ghosal. Bayesian structure learning in graphical models. Journal of Multivariate Analysis, 136:147-162, 2015.
Maria Maddalena Barbieri and James O. Berger. Optimal predictive model selection. The Annals of Statistics, 32(3):870-897, 2004.
Tony Cai, Weidong Liu, and Xi Luo. A constrained l1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594-607, 2011.
Xuan Cao and Kyoungjae Lee. Joint Bayesian variable and DAG selection consistency for high-dimensional regression models with network-structured covariates. Statistica Sinica, 31:1509-1530, 2021a.
Xuan Cao and Kyoungjae Lee. Joint Bayesian variable and DAG selection consistency for high-dimensional regression models with network-structured covariates. Statistica Sinica, 31(3):1509-1530, 2021b.
Xuan Cao, Kshitij Khare, and Malay Ghosh. Posterior graph selection and estimation consistency for high-dimensional Bayesian DAG models. The Annals of Statistics, 47(1):319-348, 2019.
Xuan Cao, Xiao Wang, Chen Xue, Shaojun Zhang, Qingling Huang, and Weiguo Liu. A radiomics approach to predicting Parkinson's disease by incorporating whole-brain functional activity and gray matter structure. Frontiers in Neuroscience, 14:751, 2020.
Rosa De Micco, Fabrizio Esposito, Federica di Nardo, Giuseppina Caiazzo, Mattia Siciliano, Antonio Russo, Mario Cirillo, Gioacchino Tedeschi, and Alessandro Tessitore. Sex-related pattern of intrinsic brain connectivity in drug-naïve Parkinson's disease patients. Movement Disorders, 34(7):997-1005, 2019.
Lifu Deng, Junfeng Sun, Lin Cheng, and Shanbao Tong. Characterizing dynamic local functional connectivity in the human brain. Scientific Reports, 6(1):26976, 2016.
Adrian Dobra. Variable selection and dependency networks for genomewide data. Biostatistics, 10(4):621-639, 2009.
Adrian Dobra, Alex Lenkoski, and Abel Rodriguez. Bayesian inference for general Gaussian graphical models with application to multivariate lattice data. Journal of the American Statistical Association, 106(496):1418-1433, 2011.
Yonina C. Eldar and Gitta Kutyniok. Compressed sensing: theory and applications. Cambridge University Press, 2012.
Qi Feng, Mei Wang, Qiaowei Song, Zhengwang Wu, Hongyang Jiang, Peipei Pang, Zhengluan Liao, Enyan Yu, and Zhongxiang Ding. Correlation between hippocampus MRI radiomic features and resting-state intrahippocampal functional connectivity in Alzheimer's disease. Frontiers in Neuroscience, 13:435, 2019.
Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2007.
Ludovica Griffanti, Philipp Stratmann, Michal Rolinski, Nicola Filippini, Enikő Zsoldos, Abda Mahmood, Giovanna Zamboni, Gwenaëlle Douaud, Johannes C. Klein, Mika Kivimäki, Archana Singh-Manoux, Michele T. Hu, Klaus P. Ebmeier, and Clare E. Mackay. Exploring variability in basal ganglia connectivity with functional MRI in healthy aging. Brain Imaging and Behavior, 12(6):1822-1827, 2018.
H. Ishwaran, U. B. Kogalur, and J. S. Rao. Spike and slab variable selection: Frequentist and Bayesian strategies. The Annals of Statistics, 33(2):730-773, 2005.
Peyman Jalali, Kshitij Khare, and George Michailidis. B-CONCORD: a scalable Bayesian high-dimensional precision matrix estimation procedure. arXiv preprint arXiv:2005.09017, 2020.
Valen E. Johnson and David Rossell. Bayesian model selection in high-dimensional settings. Journal of the American Statistical Association, 107(498):649-660, 2012.
Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(4):803-825, 2015.
Yuko Koshimori, Sang-Soo Cho, Marion Criaud, Leigh Christopher, Mark Jacobs, Christine Ghadery, Sarah Coakeley, Madeleine Harris, Romina Mizrahi, Clement Hamani, Anthony E. Lang, Sylvain Houle, and Antonio P. Strafella. Disrupted nodal and hub organization account for brain network abnormalities in Parkinson's disease. Frontiers in Aging Neuroscience, 8:259, 2016.
Nicolas Langer, Andreas Pedroni, Lorena R. R. Gianotti, Jürgen Hänggi, Daria Knoch, and Lutz Jäncke. Functional brain network efficiency predicts intelligence. Human Brain Mapping, 33(6):1393-1406, 2012.
Kyoungjae Lee and Xuan Cao. Bayesian group selection in logistic regression with application to MRI data analysis. Biometrics, 77(2):391-400, 2021a.
Kyoungjae Lee and Xuan Cao. Bayesian inference for high-dimensional decomposable graphs. Electronic Journal of Statistics, 15(1):1549-1582, 2021b.
Caiyan Li and Hongzhe Li. Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24(9):1175-1182, 2008.
Caiyan Li and Hongzhe Li. Variable selection and regression analysis for graph-structured covariates with an application to genomics. The Annals of Applied Statistics, 4(3):1498-1516, 2010.
Fan Li and Nancy R. Zhang. Bayesian variable selection in structured high-dimensional covariate spaces with applications in genomics. Journal of the American Statistical Association, 105(491):1202-1214, 2010.
Hai Lin, Xiaodong Cai, Doudou Zhang, Jiali Liu, Peng Na, and Weiping Li. Functional connectivity markers of depression in advanced Parkinson's disease. NeuroImage: Clinical, 25:102130, 2020.
Chang Liu and Ryan Martin. An empirical G-Wishart prior for sparse high-dimensional Gaussian graphical models. arXiv preprint arXiv:1912.03807, 2019.
Fei Liu, Sounak Chakraborty, Fan Li, Yan Liu, and Aurelie C. Lozano. Bayesian regularization via graph Laplacian. Bayesian Analysis, 9(2):449-474, 2014.
Han Liu and Lie Wang. TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models. Electronic Journal of Statistics, 11(1):241-294, 2017. doi: 10.1214/16-EJS1195.
Han Liu, Kathryn Roeder, and Larry Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, Volume 2, NIPS'10, pages 1432-1440, 2010.
Ryan Martin, Raymond Mess, and Stephen G. Walker. Empirical Bayes posterior concentration in sparse high-dimensional linear models. Bernoulli, 23(3):1822-1847, 2017.
W. R. Wayne Martin, Marguerite Wieler, Myrlene Gee, and Richard Camicioli. Temporal lobe changes in early, untreated Parkinson's disease. Movement Disorders, 24(13):1949-1954, 2009.
Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3):1436-1462, 2006.
Alexander S. Mihaescu, Mario Masellis, Ariel Graff-Guerrero, Jinhee Kim, Marion Criaud, Sang Soo Cho, Christine Ghadery, Mikaeel Valli, and Antonio P. Strafella. Brain degeneration in Parkinson's disease patients with cognitive decline: a coordinate-based meta-analysis. Brain Imaging and Behavior, 13(4):1021-1034, 2019.
Şerban Nacu, Rebecca Critchley-Thorne, Peter Lee, and Susan Holmes. Gene expression network analysis and applications to immunology. Bioinformatics, 23(7):850-858, 2007.
Naveen N. Narisetty, Juan Shen, and Xuming He. Skinny Gibbs: A consistent and scalable Gibbs sampler for model selection. Journal of the American Statistical Association, 114(527):1205-1217, 2019.
Naveen Naidu Narisetty and Xuming He. Bayesian variable selection with shrinking and diffusing priors. The Annals of Statistics, 42(2):789-817, 2014.
Sean M. O'Brien and David B. Dunson. Bayesian multivariate logistic regression. Biometrics, 60(3):739-746, 2004.
Trevor Park and George Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681-686, 2008.
Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735-746, 2009.
Christine B. Peterson, Francesco C. Stingo, and Marina Vannucci. Joint Bayesian variable and graph selection for regression models with network-structured predictors. Statistics in Medicine, 35(7):1017-1031, 2016.
Veronika Ročková and Edward I. George. The spike-and-slab lasso. Journal of the American Statistical Association, 113(521):431-444, 2018.
Christian Salvatore, Isabella Castiglioni, and Antonio Cerasa. Radiomics approach in the neurodegenerative brain. Aging Clinical and Experimental Research, 33(6):1709-1711, 2021.
Stefano Sandrone and Marco Catani. Journal club: Default-mode network connectivity in cognitively unimpaired patients with Parkinson disease. Neurology, 81(23):e172-e175, 2013.
Anette Schrag, Laura Horsfall, Kate Walters, Alastair Noyce, and Irene Petersen. Prediagnostic presentations of Parkinson's disease in primary care: a case-control study. The Lancet Neurology, 14(1):57-64, 2015.
Xiao-Wei Song, Zhang-Ye Dong, Xiang-Yu Long, Su-Fang Li, Xi-Nian Zuo, Chao-Zhe Zhu, Yong He, Chao-Gan Yan, and Yu-Feng Zang. REST: A toolkit for resting-state functional magnetic resonance imaging data processing. PLOS ONE, 6(9):1-12, 2011.
Leonardo Perez De Souza, Saleh Alseekh, Yariv Brotman, and Alisdair R. Fernie. Network-based strategies in metabolomics data analysis and interpretation: from molecular networking to biological interpretation. Expert Review of Proteomics, 17(4):243-255, 2020.
Francesco C. Stingo and Marina Vannucci. Variable selection for discriminant analysis with Markov random field priors for the analysis of microarray data. Bioinformatics, 27(4):495-501, 2010.
Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267-288, 1996.
Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019.
Hao Wang. Bayesian graphical lasso models and efficient posterior computation. Bayesian Analysis, 7(4):867-886, 2012.
Hao Wang. Scaling it up: Stochastic search structure learning in graphical models. Bayesian Analysis, 10(2):351-377, 2015.
Ying Wang, Kai Sun, Zhenyu Liu, Guanmao Chen, Yanbin Jia, Shuming Zhong, Jiyang Pan, Li Huang, and Jie Tian. Classification of unmedicated bipolar disorder using whole-brain functional activity and connectivity: A radiomics analysis. Cerebral Cortex, 30(3):1117-1128, 2019.
Luqing Wei, Xiao Hu, Yajing Zhu, Yonggui Yuan, Weiguo Liu, and Hong Chen. Aberrant intra- and internetwork functional connectivity in depressed Parkinson's disease. Scientific Reports, 7(1):1-12, 2017.
Ruoxuan Xiang, Kshitij Khare, and Malay Ghosh. High dimensional posterior convergence rates for decomposable graphical models. Electronic Journal of Statistics, 9(2):2828-2854, 2015.
Xiaofan Xu and Malay Ghosh. Bayesian variable selection and estimation for group lasso. Bayesian Analysis, 10(4):909-936, 2015.
Zhe Xu, Jianbo Lai, Haorong Zhang, Chee H. Ng, Peng Zhang, Dongrong Xu, and Shaohua Hu. Regional homogeneity and functional connectivity analysis of resting-state magnetic resonance in patients with bipolar II disorder. Medicine, 98(47), 2019.
Chaogan Yan and Yufeng Zang. DPARSF: a MATLAB toolbox for "pipeline" data analysis of resting-state fMRI. Frontiers in Systems Neuroscience, 4:13, 2010.
Xinming Yang and Naveen N. Narisetty. Consistent group selection with Bayesian high dimensional modeling. Bayesian Analysis, 15(3):909-935, 2020.
Yun Yang, Martin J. Wainwright, and Michael I. Jordan. On the computational complexity of high-dimensional Bayesian variable selection. The Annals of Statistics, 44(6):2497-2532, 2016.
Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
Yufeng Zang, Tianzi Jiang, Yingli Lu, Yong He, and Lixia Tian. Regional homogeneity approach to fMRI data analysis. NeuroImage, 22(1):394-400, 2004.
Xulian Zhang, Xuan Cao, Chen Xue, Jingyi Zheng, Shaojun Zhang, Qingling Huang, and Weiguo Liu. Aberrant functional connectivity and activity in Parkinson's disease and comorbidity with depression based on radiomic analysis. Brain and Behavior, 11(5):e02103, 2021.
Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
[]
[ "Robot Learning From Randomized Simulations: A Review" ]
[ "Fabio Muratore \nIntelligent Autonomous Systems Group\nTechnical University of Darmstadt\nDarmstadt, Germany\n\nHonda Research Institute Europe\nOffenbach am Main\nGermany\n", "Fabio Ramos \nSchool of Computer Science\nUniversity of Sydney\nSydneyNSWAustralia\n\nNVIDIA\nSeattleWAUnited States\n", "Greg Turk \nGeorgia Institute of Technology\nAtlantaGAUnited States\n", "Wenhao Yu \nRobotics at Google\nMountain ViewCAUnited States\n", "Michael Gienger \nHonda Research Institute Europe\nOffenbach am Main\nGermany\n", "Jan Peters \nIntelligent Autonomous Systems Group\nTechnical University of Darmstadt\nDarmstadt, Germany\n" ]
[ "Intelligent Autonomous Systems Group\nTechnical University of Darmstadt\nDarmstadt, Germany", "Honda Research Institute Europe\nOffenbach am Main\nGermany", "School of Computer Science\nUniversity of Sydney\nSydneyNSWAustralia", "NVIDIA\nSeattleWAUnited States", "Georgia Institute of Technology\nAtlantaGAUnited States", "Robotics at Google\nMountain ViewCAUnited States", "Honda Research Institute Europe\nOffenbach am Main\nGermany", "Intelligent Autonomous Systems Group\nTechnical University of Darmstadt\nDarmstadt, Germany" ]
[]
The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data. Unfortunately, it is prohibitively expensive to generate such data sets on a physical platform. Therefore, state-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive and subsequently transfer the knowledge to the real robot (sim-to-real). Despite becoming increasingly realistic, all simulators are by construction based on models, hence inevitably imperfect. This raises the question of how simulators can be modified to facilitate learning robot control policies and overcome the mismatch between simulation and reality, often called the "reality gap." We provide a comprehensive review of sim-to-real research for robotics, focusing on a technique named "domain randomization" which is a method for learning from randomized simulations.
10.3389/frobt.2022.799893
[ "https://arxiv.org/pdf/2111.00956v2.pdf" ]
240,354,417
2111.00956
b76fc6b3127060d66206f804c3f610a8fd1036c2
Robot Learning From Randomized Simulations: A Review

Fabio Muratore (Intelligent Autonomous Systems Group, Technical University of Darmstadt, Darmstadt, Germany; Honda Research Institute Europe, Offenbach am Main, Germany), Fabio Ramos (School of Computer Science, University of Sydney, Sydney, NSW, Australia; NVIDIA, Seattle, WA, United States), Greg Turk (Georgia Institute of Technology, Atlanta, GA, United States), Wenhao Yu (Robotics at Google, Mountain View, CA, United States), Michael Gienger (Honda Research Institute Europe, Offenbach am Main, Germany), and Jan Peters (Intelligent Autonomous Systems Group, Technical University of Darmstadt, Darmstadt, Germany)

DOI: 10.3389/frobt.2022.799893
Keywords: robotics, simulation, reality gap, simulation optimization bias, reinforcement learning, domain randomization, sim-to-real

The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data. Unfortunately, it is prohibitively expensive to generate such data sets on a physical platform. Therefore, state-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive and subsequently transfer the knowledge to the real robot (sim-to-real). Despite becoming increasingly realistic, all simulators are by construction based on models, hence inevitably imperfect. This raises the question of how simulators can be modified to facilitate learning robot control policies and overcome the mismatch between simulation and reality, often called the "reality gap." We provide a comprehensive review of sim-to-real research for robotics, focusing on a technique named "domain randomization" which is a method for learning from randomized simulations.

INTRODUCTION

Given that machine learning has achieved super-human performance in image classification (Ciresan et al., 2012; Krizhevsky et al., 2012) and games (Mnih et al., 2015; Silver et al., 2016), the question arises why we do not see similar results in robotics.
There are several reasons for this. First, learning to act in the physical world is orders of magnitude more difficult. While the data required by modern (deep) learning algorithms could be acquired directly on a real robot, this solution is too expensive in terms of time and resources to scale up. Alternatively, the data can be generated in simulation faster, cheaper, safer, and with unmatched diversity. In doing so, we have to cope with unavoidable approximation errors that we make when modeling reality. These errors, often referred to as the "reality gap," originate from omitting physical phenomena, inaccurate parameter estimation, or the discretized numerical integration in typical solvers. Compounding this issue, state-of-the-art (deep) learning methods are known to be brittle (Szegedy et al., 2014;Goodfellow et al., 2015;Huang et al., 2017), that is, sensitive to shifts in their input domains. Additionally, the learner is free to exploit the simulator, overfitting to features which do not occur in the real world. For example, Baker et al. (2020) noticed that the agents learned to exploit the physics engine to gain an unexpected advantage. While this exploitation is an interesting observation for studies made entirely in simulation, it is highly undesirable in sim-to-real scenarios. In the best case, the reality gap manifests itself as a performance drop, giving a lower success rate or reduced tracking accuracy. More likely, the learned policy is not transferable to the robot because of unknown physical effects. One effect that is difficult to model is friction, often leading to an underestimation thereof in simulation, which can result in motor commands that are not strong enough to get the robot moving. Another reason for failure is parameter estimation errors, which can quickly lead to unstable system dynamics. This case is particularly dangerous for the human and the robot.
For these reasons, bridging the reality gap is the essential step to endow robots with the ability to learn from simulated experience. There is a consensus that further increasing the simulator's accuracy alone will not bridge this gap (Höfer et al., 2020). Looking at breakthroughs in machine learning, we see that deep models in combination with large and diverse data sets lead to better generalization (Russakovsky et al., 2015;Radford et al., 2019). In a similar spirit, a technique called domain randomization has recently gained momentum (Figure 1). The common characteristic of such approaches is the perturbation of simulator parameters, state observations, or applied actions. Typical quantities to randomize include the bodies' inertia and geometry, the parameters of the friction and contact models, possible delays in the actuation, efficiency coefficients of motors, levels of sensor noise, as well as visual properties such as colors, illumination, position and orientation of a camera, or additional artifacts to the image (e.g., glare). Domain randomization can be seen as a regularization method that prevents the learner from overfitting to individual simulation instances. From the Bayesian perspective, we can interpret the distribution over simulators as a representation of uncertainty. In this paper, we first introduce the necessary nomenclature and mathematical fundamentals for the problem (Section 2). Next, we review early approaches for learning from randomized simulations, state the practical requirements, and describe measures for sim-to-real transferability (Section 3). Subsequently, we discuss the connections between research on sim-to-real transfer and related fields (Section 4). Moreover, we introduce a taxonomy for domain randomization and categorize the current state of the art (Section 5). Finally, we conclude and outline possible future research directions (Section 6).
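To make the randomized quantities listed above concrete, the following sketch perturbs a camera observation with per-channel color gains and additive sensor noise. This is a minimal illustration with invented function names and ranges, not taken from any of the cited works:

```python
import numpy as np

def randomize_observation(img, rng, noise_std=0.02):
    """Visual domain randomization sketch: random per-channel color jitter
    plus additive sensor noise, applied to a float image in [0, 1]."""
    gains = rng.uniform(0.8, 1.2, size=(1, 1, img.shape[2]))  # color gain per channel
    noisy = img * gains + rng.normal(0.0, noise_std, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep the image in a valid range
```

Dynamics parameters such as masses or friction coefficients would be perturbed analogously, but inside the physics engine rather than on the rendered image.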
For those who want to first become more familiar with robot policy learning as well as policy search, we recommend these surveys: Kober et al. (2013), Deisenroth et al. (2013), and Chatzilygeroudis et al. (2020).

PROBLEM FORMULATION AND NOMENCLATURE

We begin our discussion by defining critical concepts and nomenclature used throughout this article.

Markov Decision Processes (MDPs): Consider a discrete-time dynamical system

$s_{t+1} \sim \mathcal{P}_\xi(s_{t+1} \mid s_t, a_t), \quad s_0 \sim \mu_\xi(s_0), \quad a_t \sim \pi_\theta(a_t \mid s_t), \quad \xi \sim p(\xi), \qquad (1)$

with the continuous state $s_t \in \mathcal{S}_\xi \subseteq \mathbb{R}^{n_s}$ and continuous action $a_t \in \mathcal{A}_\xi \subseteq \mathbb{R}^{n_a}$ at time step $t$. The environment, also called domain, is characterized by its parameters $\xi \in \mathbb{R}^{n_\xi}$ (e.g., masses, friction coefficients, time delays, or surface appearance properties), which are in general assumed to be random variables distributed according to an unknown probability distribution $p(\xi): \mathbb{R}^{n_\xi} \to \mathbb{R}^+$. A special case of this is the common assumption that the domain parameters obey a parametric distribution $p_\phi(\xi)$ with unknown parameters $\phi$ (e.g., mean and variance). The domain parameters determine the transition probability density function $\mathcal{P}_\xi: \mathcal{S}_\xi \times \mathcal{A}_\xi \times \mathcal{S}_\xi \to \mathbb{R}^+$ that describes the system's stochastic dynamics. The initial state $s_0$ is drawn from the start state distribution $\mu_\xi: \mathcal{S}_\xi \to \mathbb{R}^+$. In general, the instantaneous reward is a random variable depending on the current state and action as well as the next state. Here we make the common simplification that the reward is a deterministic function of the current state and action, $r_\xi: \mathcal{S}_\xi \times \mathcal{A}_\xi \to \mathbb{R}$. Together with the temporal discount factor $\gamma \in [0, 1]$, the system forms an MDP described by the tuple $\mathcal{M}_\xi = \langle \mathcal{S}_\xi, \mathcal{A}_\xi, \mathcal{P}_\xi, \mu_\xi, r_\xi, \gamma \rangle$.

Reinforcement Learning (RL): The goal of an RL agent is to maximize the expected (discounted) return, a numeric scoring function which measures the policy's performance.
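As a minimal illustration of the quantity being maximized, the discounted return of a single recorded rollout can be computed as follows (the function name and reward format are our own, not from any specific library):

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Discounted return of one rollout: sum over t of gamma^t * r_t."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))  # [1, gamma, gamma^2, ...]
    return float(discounts @ rewards)
```

The RL objective is then the expectation of this quantity over start states, dynamics, and policy stochasticity.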
FIGURE 1 | Examples of sim-to-real robot learning research using domain randomization: (left) multiple simulation instances of robotic in-hand manipulation (OpenAI et al., 2020), (middle top) transformation to a canonical simulation (James et al., 2019), (middle bottom) synthetic 3D hallways generated for indoor drone flight (Sadeghi and Levine, 2017), (right top) ball-in-a-cup task solved with adaptive dynamics randomization (Muratore et al., 2021a), (right bottom) quadruped locomotion (Tan et al., 2018).

The expected discounted return of a policy $\pi_\theta(a_t \mid s_t)$ with the parameters $\theta \in \Theta \subseteq \mathbb{R}^{n_\theta}$ is defined as

$J(\theta, \xi) = \mathbb{E}_{s_0 \sim \mu_\xi(s_0)}\, \mathbb{E}_{s_{t+1} \sim \mathcal{P}_\xi(s_{t+1} \mid s_t, a_t),\, a_t \sim \pi_\theta(a_t \mid s_t)} \left[ \sum_{t=0}^{T-1} \gamma^t r_\xi(s_t, a_t) \,\middle|\, \theta, \xi, s_0 \right]. \qquad (2)$

While learning from experience, the agent adapts its policy parameters. The resulting state-action-reward tuples are collected in trajectories, a.k.a. rollouts, $\tau = \{s_t, a_t, r_t\}_{t=0}^{T-1} \in \mathcal{T}$ with $r_t = r_\xi(s_t, a_t)$. In a partially observable MDP, the policy's input would not be the state but observations thereof, $o_t \in \mathcal{O}_\xi \subseteq \mathbb{R}^{n_o}$, which are obtained through an environment-specific mapping $o_t = f_{\text{obs}}(s_t)$.

Domain randomization: When augmenting the RL setting with domain randomization, the goal becomes to maximize the expected (discounted) return for a distribution of domain parameters

$J(\theta) = \mathbb{E}_{\xi \sim p(\xi)}\left[ J(\theta, \xi) \right] = \mathbb{E}_{\xi \sim p(\xi)}\, \mathbb{E}_{\tau \sim p(\tau)} \left[ \sum_{t=0}^{T-1} \gamma^t r_\xi(s_t, a_t) \,\middle|\, \theta, \xi, s_0 \right]. \qquad (3)$

The outer expectation with respect to the domain parameter distribution $p(\xi)$ is the key difference compared to the standard MDP formulation. It enables the learning of robust policies, in the sense that these policies work for a whole set of environments instead of overfitting to a particular problem instance.

FOUNDATIONS OF SIM-TO-REAL TRANSFER

Modern research on learning from (randomized) physics simulations is based on a solid foundation of prior work (Section 3.1). Parametric simulators are the core component of every sim-to-real method (Section 3.2).
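The domain-randomized objective in Eq. 3 can be estimated by plain Monte Carlo sampling: draw a domain instance per episode, roll out, and average the discounted returns. All names below are illustrative; `sample_domain` and `rollout` stand in for a user-supplied parameter sampler and simulator:

```python
import numpy as np

def estimate_return(policy, sample_domain, rollout, n_domains=100, gamma=0.99):
    """Monte Carlo estimate of J(theta): average the discounted return over
    domain instances drawn from the domain parameter distribution p(xi)."""
    returns = []
    for _ in range(n_domains):
        xi = sample_domain()                       # xi ~ p(xi)
        rewards = np.asarray(rollout(policy, xi))  # one episode in domain xi
        discounts = gamma ** np.arange(len(rewards))
        returns.append(float(discounts @ rewards))
    return float(np.mean(returns))
```

The outer loop over `xi` is exactly the outer expectation that distinguishes Eq. 3 from the standard RL objective in Eq. 2.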
Even though the details of their randomization are crucial, they are rarely discussed (Section 3.3). Estimating the sim-to-real transferability during or after learning allows one to assess or predict the policy's performance in the target domain (Section 3.4).

Early Methods

The roots of randomized simulations trace back to the invention of the Monte Carlo method (Metropolis and Ulam, 1949), which computes its results based on repeated random sampling and subsequent statistical analysis. Later, the concept of common random numbers, also called correlated sampling, was developed as a variance reduction technique (Kahn and Marshall, 1953;Wright and Ramsay, 1979). The idea is to synchronize the random numbers for all stochastic events across the simulation runs to achieve a (desirably positive) correlation between random variables, reducing the variance of an estimator based on a combination of them. Many of the sim-to-real challenges which are currently tackled have already been identified by Brooks (1992). In particular, Brooks addresses the overfitting to effects which only occur in simulation as well as the idealized modeling of sensing and actuation. To avoid overfitting, he advocated for reactive behavior-based programming which is deeply rooted in, hence tailored to, the embodiment. Focusing on RL, Sutton (1991) introduced the Dyna architecture, which revolves around predicting from a learned world model and updating the policy from this hypothetical experience. Viewing the data generated from randomized simulators as "imaginary" emphasizes the parallels of domain randomization to Dyna. As stated by Sutton, the usage of "mental rehearsal" to predict and reason about the effect of actions dates back even further in other fields of research such as psychology (Craik, 1943;Dennett, 1975). Instead of querying a learned internal model, Jakobi et al.
(1995) added random noise to the sensors and actuators while learning, achieving the arguably first sim-to-real transfer in robotics. In follow-up work, Jakobi (1997) formulated the radical envelope of noise hypothesis which states that "it does not matter how inaccurate or incomplete [the simulations] are: controllers that have evolved to be reliably fit in simulation will still transfer into reality." Picking up on the idea of common random numbers, Ng and Jordan (2000) suggested to explicitly control the randomness of a simulator, i.e., the random number generator's state, rendering the simulator deterministic. This way, the same initial configurations can be (re-)used for Monte Carlo estimations of different policies' value functions, allowing one to conduct policy search in partially observable problems. Bongard et al. (2006) bridged the sim-to-real gap through iterating model generation and selection depending on the short-term state-action history. This process is repeated for a given number of iterations, and then yields the self-model, i.e., a simulator, which best explains the observed data. Inspired by these early approaches, the systematic analysis of randomized simulations for robot learning has become a highly active research direction. Moreover, the prior work above also falsifies the common belief that domain randomization originated recently with the rise of deep learning. Nevertheless, the current popularity of domain randomization can be explained by its widespread use in the computer vision and locomotion communities as well as its synergies with deep learning methods. The key difference between the early and the recent domain randomization methods (Section 5) is that the latter (directly) manipulate the simulators' parameters.

Constructing Stochastic Simulators

Simulators can be obtained by implementing a set of physical laws for a particular system.
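The common-random-numbers idea picked up by Ng and Jordan (2000) can be sketched as follows. Evaluating every policy on identical random streams makes the difference between their Monte Carlo estimates free of sampling noise; the helper and the `simulate` callback are hypothetical, not from any specific library:

```python
import numpy as np

def paired_evaluation(policies, simulate, seeds):
    """Common random numbers: evaluate all policies with the same seeds so
    their score estimates are positively correlated, reducing the variance
    of pairwise comparisons."""
    return {name: float(np.mean([simulate(policy, np.random.default_rng(s))
                                 for s in seeds]))
            for name, policy in policies.items()}
```

With shared seeds, the noise terms cancel exactly when comparing two policies, which is the variance-reduction effect described above.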
Given the challenges in implementing an efficient simulator for complex systems, it is common to use general purpose physics engines such as ODE, DART, Bullet, Newton, SimBody, Vortex, MuJoCo, Havok, Chrono, RaiSim, PhysX, FleX, or Brax. These simulators are parameterized generative models, which describe how multiple bodies or particles evolve over time by interacting with each other. The associated physics parameters can be estimated by system identification (Section 4.6), which generally involves executing experiments on the physical platform and recording associated measurements. Additionally, using the Gauss-Markov theorem one could also compute the parameters' covariance and hence construct a normal distribution for each domain parameter. Differentiable simulators facilitate deep learning for robotics (Degrave et al., 2019;Coumans, 2020;Heiden et al., 2021) by propagating the gradients through the dynamics. Current research extends the differentiability to soft body dynamics (Hu et al., 2019). Alternatively, the system dynamics can be captured using nonparametric methods like Gaussian Processes (GPs) (Rasmussen and Williams, 2006) as for example demonstrated by Calandra et al. (2015). It is important to keep in mind that even if the domain parameters have been identified very accurately, simulators are nevertheless just approximations of the real world and are thus always imperfect. Several comparisons between various physics engines have been made (Ivaldi et al., 2014;Erez et al., 2015;Chung and Pollard, 2016;Collins et al., 2019;Körber et al., 2021). However, note that these results become outdated quickly due to the rapid development in the field, or are often limited to very few scenarios and partially introduce custom metrics to measure their performance or accuracy.
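The Gauss-Markov construction mentioned above can be sketched for parameters that enter the dynamics linearly. The names are illustrative: `Phi` is the regressor matrix built from recorded states and actions, `y` the measured targets, and the homoscedastic noise variance is assumed known:

```python
import numpy as np

def identify_domain_params(Phi, y, noise_var):
    """Ordinary least squares for linearly appearing physics parameters,
    plus the Gauss-Markov covariance noise_var * (Phi^T Phi)^-1. Mean and
    covariance together define a normal randomization distribution."""
    A = Phi.T @ Phi
    mean = np.linalg.solve(A, Phi.T @ y)  # least-squares estimate
    cov = noise_var * np.linalg.inv(A)    # parameter covariance
    return mean, cov
```

Sampling domain parameters from this normal distribution is one principled way to obtain the per-parameter distributions used for randomization.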
Apart from the physics engines listed above, there is an orthogonal research direction investigating human-inspired learning of the physics laws from visual input (Battaglia et al., 2013;Wu et al., 2015) as well as physical reasoning given a configuration of bodies (Battaglia et al., 2016), which is out of the scope of this review.

Randomizing a Simulator

Learning from randomized simulations entails significant design decisions.

Which parameters should be randomized? Depending on the problem, some domain parameters have no influence (e.g., the mass of an idealized rolling ball) while others are pivotal (e.g., the pendulum length for a stabilization task). It is recommended to first identify the essential parameters (Xie et al., 2020). For example, most robot locomotion papers highlight the importance of varying the terrain and contact models, while applications such as drone control benefit from adding perturbations, e.g., to simulate a gust of wind. Injecting random latency and noise into the actuation is another frequent modeling choice. Starting from a small set of randomized domain parameters, identified from prior knowledge, has the additional benefit of shortening the evaluation time, which involves approximating an expectation over domains and thus scales exponentially with the number of parameters. Moreover, including at least one visually observable parameter (e.g., the extent of a body) helps to verify if the values are set as expected.

When should the parameters be randomized? Episodic dynamics randomization, without a rigorous theoretical justification, is the most common approach. Randomizing the domain parameters at every time step instead would drastically increase the variance, and pose a challenge to the implementations since this typically implies recreating the simulation at every step. Imagine a stack of cubes standing on the ground.
If we now vary the cubes' side lengths individually while keeping their absolute positions fixed, they will either lose contact or intersect with their neighboring cube(s). In order to keep the stack intact, we need to randomize the cubes with respect to their neighbors, additionally moving them in space. Executing this once at the beginning is fine, but doing it at every step creates artificial "movement" which would almost certainly be detrimental. Orthogonal to the argumentation above, alternative approaches apply random disturbance forces and torques at every time step. In these cases, the distribution over disturbance magnitudes is chosen to be constant until the randomization scheme is updated. To the best of our knowledge, event-triggered randomization has not been explored yet.

How should the parameters be randomized? Answering this question is what characterizes a domain randomization method (Section 5). There are a few aspects that need to be considered in practice when designing a domain randomization scheme, such as the numerical stability of the simulation instances. Low masses, for example, quickly lead to stiff differential equations which might require a different (implicit) integrator. Furthermore, the noise level of the introduced randomness needs to match the precision of the state estimation. If the noise is too low, the randomization is pointless. On the other hand, if the noise level is too high, the learning procedure will fail. To find the right balance between these considerations, we can start by statistically analyzing the incoming measurement signals.

What about physical plausibility? The application of pseudorandom color patterns, e.g., Perlin noise (Perlin, 2002), has become a frequent choice for computer vision applications. Although these patterns do not occur on real-world objects, this technique has improved the robustness of object detectors (James et al., 2017;Pinto et al., 2018).
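Combining the episodic scheme with a plausibility-preserving projection, one might draw a fresh parameter set per reset in log-space, so every sampled mass or friction value is strictly positive. This is a sketch with invented names, not tied to any particular simulator:

```python
import numpy as np

def sample_episode_params(nominal, log_std, rng):
    """Episodic domain randomization: one draw per reset, held fixed for the
    whole episode. Perturbing in log-space and mapping back with exp
    guarantees strictly positive, hence physically plausible, values."""
    return {name: float(np.exp(np.log(val) + log_std * rng.standard_normal()))
            for name, val in nominal.items()}
```

At each environment reset, the returned dictionary would be written into the physics engine before the first simulation step.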
Regarding the randomization of dynamics parameters, no research has so far hinted that physically implausible simulations (e.g., containing bodies with negative masses) are useful. On the other hand, it is safe to say that these can cause numerical instabilities. Thus, ensuring feasibility of the resulting simulator is highly desirable. One solution is to project the domain parameters into a different space, guaranteeing physical plausibility via the inverse projection. For example, a body's mass could be learned in the log-space such that the subsequent exp-transformation, applied before setting the new parameter value, yields strictly positive numbers. However, most of the existing domain randomization approaches cannot guarantee physical plausibility. Even in the case of rigid body dynamics there are notable differences between physics engines, as was observed by Muratore et al. (2018) when transferring a robot control policy trained using Vortex to Bullet and vice versa. Typical sources for deviations are different coordinate representations, numerical solvers, friction and contact models. Especially the latter two are decisive for robot manipulation. For vision-based tasks, Alghonaim and Johns (2020) found a strong correlation between the renderer's quality and sim-to-real transferability. Additionally, the authors emphasize the importance of randomizing both distractor objects and background textures for generalizing to unseen environments.

Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893

Measuring and Predicting the Reality Gap

Coining the term "reality gap," Koos et al. (2010) hypothesize that the fittest solutions in simulation often rely on poorly simulated phenomena. From this, they derive a multi-objective formulation for sim-to-real transfer where performance and transferability need to be balanced. In subsequent work, Koos et al.
(2013) defined a transferability function that maps controller parameters to their estimated target domain performance. A surrogate model of this function is regressed from the real-world fitness values that are obtained by executing the controllers found in simulation. The Simulation Optimization Bias (SOB) (Muratore et al., 2018) is a quantitative measure for the transferability of a control policy from a set of source domains to a different target domain originating from the same distribution. Building on the formulation of the optimality gap from convex optimization (Mak et al., 1999;Bayraksan and Morton, 2006), Muratore et al. (2018) proposed a Monte Carlo estimator of the SOB as well as an upper confidence bound, tailored to reinforcement learning settings. This bound can be used as an indicator to stop training when the predicted transferability exceeds a threshold. Moreover, the authors show that the SOB is always positive, i.e., optimistic, and in expectation monotonically decreases with an increasing number of domains. Collins et al. (2019) quantify the accuracy of ODE, (Py)Bullet, Newton, Vortex, and MuJoCo in a real-world robotic setup. The accuracy is defined as the accumulated mean-squared error between the Cartesian ground truth position, tracked by a motion capture system, and the simulators' prediction. Based on this measure, they conclude that simulators are able to model the control and kinematics accurately, but show deficits during dynamic robot-object interactions. To obtain a quantitative estimate of the transferability, Zhang et al. (2020) suggest to learn a probabilistic dynamics model which is evaluated on a static set of target domain trajectories. This dynamics model is trained jointly with the policy in the same randomized simulator. The transferability score is chosen to be the average negative log-likelihood of the model's output given temporal state differences from the real-world trajectories.
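For a Gaussian dynamics model, the negative-log-likelihood score used by Zhang et al. (2020) can be sketched as follows. The signature is our own simplification: `pred_mean` and `pred_var` are the model's per-dimension predictions for the observed state differences `real_deltas`:

```python
import numpy as np

def transferability_score(pred_mean, pred_var, real_deltas):
    """Average negative log-likelihood of observed state differences under
    a diagonal Gaussian dynamics model; lower values suggest the model (and
    hence the randomized simulator) explains the real data better."""
    nll = 0.5 * (np.log(2.0 * np.pi * pred_var)
                 + (real_deltas - pred_mean) ** 2 / pred_var)
    return float(np.mean(nll))
```

A model whose predictions match the real transitions closely yields a lower score than one whose predictions are far off.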
Thus, the proposed method requires a set of pre-recorded target domain trajectories, and makes the assumption that for a given domain the model's prediction accuracy correlates with the policy performance. With robot navigation in mind, Kadian et al. (2020) define the Sim-vs-Real Correlation Coefficient (SRCC) to be the Pearson correlation coefficient on data pairs of scalar performance metrics. The data pairs consist of the policy performance achieved in a simulator instance as well as in the real counterpart. Therefore, in contrast to the SOB (Muratore et al., 2018), the SRCC requires real-world rollouts. A high SRCC value, i.e., close to 1, predicts good transferability, while low values, i.e., close to 0, indicate that the agent exploited the simulation during learning. Kadian et al. (2020) also report tuning the domain parameters with grid search to increase the SRCC. By using the Pearson correlation, the SRCC is restricted to linear correlation, which might not be a notable restriction in practice.

RELATION OF SIM-TO-REAL TO OTHER FIELDS

There are several research areas that overlap with sim-to-real in robot learning, more specifically domain randomization (Figure 2). In the following, we describe those that either share the same goal, or employ conceptually similar methods.

Curriculum Learning

The key idea behind curriculum learning is to increase the sample efficiency by scheduling the training process such that the agent first encounters "easier" tasks and gradually progresses to "harder" ones. Hence, the agent can bootstrap from the knowledge it gained at the beginning, before learning to solve more difficult task instances. Widely known in supervised learning (Bengio et al., 2009;Kumar et al., 2010), curriculum learning has been applied to RL (Asada et al., 1996;Erez and Smart, 2008;Klink et al., 2019, 2021).
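At its core, the scheduling idea amounts to ordering task instances by difficulty. The following is a deliberately minimal sketch with invented names; practical curricula adapt the ordering online rather than fixing it up front:

```python
def curriculum(tasks, difficulty):
    """Curriculum learning in its simplest form: present task instances
    sorted from 'easier' to 'harder' according to a difficulty measure."""
    return sorted(tasks, key=difficulty)
```

The agent would then be trained on the returned sequence, carrying its policy parameters over from one task instance to the next.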
The connection between curriculum learning and domain randomization can be highlighted by viewing the task as a part of the domain, i.e., the MDP, rendering the task parameters a subspace of the domain parameters. From this point of view, the curriculum learning schedule describes how the domain parameter distribution is updated. There are several challenges to using a curriculum learning approach for sim-to-real transfer. Three such challenges are: 1) we cannot always assume to have an assessment of the difficulty level of individual domain parameter configurations, 2) curriculum learning does not aim at finding solutions robust to model uncertainty, and 3) curriculum learning methods may require a target distribution which is not defined in the domain randomization setting. However, adjustments can be made to circumvent these problems. OpenAI et al. (2019) suggested a heuristic for the domain randomization schedule that increases the boundaries of each domain parameter individually until the return drops more than a predefined threshold. Executing this approach on a computing cluster, the authors managed to train a policy and a vision system which in combination solve a Rubik's cube with a tendon-driven robotic hand. Another intersection point of curriculum learning and sim-to-real transfer is the work by Morere et al. (2019), where a hierarchical planning method for discrete domains with unknown dynamics is proposed. Learning abstract skills based on a curriculum enables the algorithm to outperform planning and RL baselines, even in domains with a very large number of possible states.

Meta Learning

Inspired by the human ability to quickly master new tasks by leveraging the knowledge extracted from solving other tasks, meta learning (Santoro et al., 2016;Finn et al., 2017) seeks to make use of prior experiences gained from conceptually similar tasks. The field of meta learning currently enjoys high popularity, leading to abundant follow-up work. Grant et al.
(2018), for example, cast meta learning as hierarchical Bayesian inference. Furthermore, the meta learning framework has been adapted to the RL setting (Wang et al., 2017;Nagabandi et al., 2019). The optimization over an ensemble of tasks can be translated to the optimization over an ensemble of domain instances, modeled by different MDPs (Section 2). Via this duality, one can view domain randomization as a special form of meta learning where the robot's task remains qualitatively unchanged but the environment varies. Thus, the tasks seen during the meta training phase are analogous to domain instances experienced earlier in the training process. However, when looking at the complete procedure, meta learning and domain randomization are fundamentally different. The goal of meta learning, e.g., in Finn et al. (2017), is to find a suitable set of initial weights which, when updated, generalizes well to a new task. Domain randomization on the other hand strives to directly solve a single task, generalizing over domain instances.

Transfer Learning

The term transfer learning covers a wide range of machine learning research, aiming at using knowledge learned in the source domain to solve a task in the target domain. Rooted in classification, transfer learning is categorized into several subfields by, for example, differentiating 1) if labeled data is available in the source or target domain, and 2) if the tasks in both domains are the same (Pan and Yang, 2010;Zhuang et al., 2021). Domain adaptation is one of the resulting subfields, specifying the case where ground truth information is only available in the target domain, which is not equal to the source domain, while the task remains the same. Thus, domain adaptation methods are in general suitable to tackle sim-to-real problems. However, the research fields evolved at different times in different communities, with different goals in mind.
The keyword "sim-to-real" specifically concerns regression and control problems where the focus lies on overcoming the mismatch between simulation and reality. In contrast, most domain adaptation techniques are not designed for a dynamical system as the target domain.

Knowledge Distillation

When executing a controller on a physical device operating at high frequencies, it is of utmost importance that the forward pass finishes within the given time frame. With deep Neural Network (NN) policies, and especially with ensembles of these, this requirement can become challenging to meet. Distilling the knowledge of a larger network into a smaller one reduces the evaluation time. Knowledge distillation (Hinton et al., 2015) has been successfully applied to several machine learning applications such as natural language processing (Cui et al., 2017) and object detection (Chen et al., 2017). In the context of RL, knowledge distillation techniques can be used to compress the learned behavior of one or more teachers into a single student (Rusu et al., 2016a). Based on samples generated by the teachers, the student is trained in a supervised manner to imitate them. This idea can be applied to sim-to-real robot learning in a straightforward manner, where the teachers can be policies optimal for specific domain instances (Brosseit et al., 2021). Complementarily, knowledge distillation has been applied to multitask learning (Parisotto et al., 2016;Teh et al., 2017), promising to improve sample efficiency when learning a new task. A technical comparison of policy distillation methods for RL is provided by Czarnecki et al. (2019).

Distributional Robustness

The term robustness is overloaded with different meanings, such as the ability to (quickly) counteract external disturbances, or the resilience against uncertainties in the underlying model's parameters. The field of robust control aims at designing controllers that explicitly deal with these uncertainties (Zhou and Doyle, 1998).
Within this field, distributionally robust optimization is a framework to find the worst-case probabilistic model from a so-called ambiguity set, and subsequently find a policy which acts optimally in this worst case. Mathematically, the problem is formulated as a bilevel optimization, which is solved iteratively in practice. By restricting the model selection to the ambiguity set, distributionally robust optimization regularizes the adversary to prevent the process from yielding overly conservative policies. Under the lens of domain randomization, the ambiguity set closely relates to the distribution over domain parameters. Abdulsamad et al. (2021), for example, define the ambiguity set as a Kullback-Leibler (KL) ball around the nominal distribution. Other approaches use a moment-based ambiguity set (Delage and Ye, 2010) or introduce chance constraints (Van Parys et al., 2016). For a review of distributionally robust optimization, see Zhen et al. (2021). Chatzilygeroudis et al. (2020) point out that performing policy search under an uncertain model is equivalent to finding a policy that can perform well under various dynamics models. Hence, they argue that "model-based policy search with probabilistic models is performing something similar to dynamics randomization."

System Identification

The goal of system identification is to find the set of model parameters which fit the observed data best, typically by minimizing a prediction-dependent loss such as the mean-squared error. Since the simulator is the pivotal element in every domain randomization method, the accessible parameters and their nominal values are of critical importance. When a manufacturer does not provide data for all model parameters, or when an engineer wants to deploy a new model, system identification is typically the first measure to obtain an estimate of the domain parameters.
In principle, a number of approaches can be applied depending on the assumptions on the internal structure of the simulator. The earliest approaches in robotics recognized the linearity of the rigid body dynamics with respect to combinations of physics parameters such as masses, moments of inertia, and link lengths, and thus proposed to use linear regression (Atkeson et al., 1986), and later Bayesian linear regression (Ting et al., 2006). However, it was quickly observed that the inferred parameters may be physically implausible, leading to the development of methods that can account for this (Ting et al., 2011). With the advent of deep learning, such structured physics-based approaches have been enhanced with NNs, yielding nonlinear system identification methods such as the ones based on the Newton-Euler forward dynamics (Sutanto et al., 2020;Lutter et al., 2021b). Alternatively, the simulator can be augmented with a NN to learn the domain parameter residuals, minimizing the one-step prediction error (Allevato et al., 2019). On another front, system identification based on the classification loss between simulated and real samples has been investigated (Du et al., 2021;Jiang et al., 2021). System identification can also be interpreted as an episodic RL problem by treating the trajectory mismatch as the cost function and iteratively updating a distribution over models (Chebotar et al., 2019). Recent simulation-based inference methods yield highly expressive posterior distributions that capture multi-modality as well as correlations between the domain parameters (Section 4.8).

Adaptive Control

The well-established field of adaptive control is concerned with the problem of adapting a controller's parameters at runtime to operate initially uncertain or varying systems (e.g., aircraft reaching supersonic speed). A prominent method is model reference adaptive control, which tracks a reference model's output specifying the desired closed-loop behavior.
Model Identification Adaptive Control (MIAC) is a different variant, which includes an online system identification component that continuously estimates the system's parameters based on the prediction error of the output signal (Åström and Wittenmark, 2008; Landau et al., 2011). Given the identified system, the controller is updated subsequently. Similarly, there exists a line of sim-to-real reinforcement learning approaches that condition the policy on the estimated domain parameters (Yu et al., 2017, 2019b; Mozifian et al., 2020) or a latent representation thereof (Yu et al., 2019a; Peng et al., 2020; Kumar et al., 2021). The main difference to MIAC lies in the adaption mechanism. Adaptive control techniques typically define the parameters' gradient proportional to the prediction error, while the approaches referenced above make the domain parameters an input to the policy. Simulation-Based Inference Simulators are predominantly used as forward models, i.e., to make predictions. However, with the increasing fidelity and expressiveness of simulators, there is a growing interest to also use them for probabilistic inference (Cranmer et al., 2020). In the case of simulation-based inference, the simulator and its parameters define the statistical model. Inference tasks differ by the quantity to be inferred. Regarding sim-to-real transfer, the most frequent task is to infer the simulation parameters from real-world time series data. Similarly to system identification (Section 4.6), the result can be a point estimate, or a posterior distribution. Likelihood-Free Inference (LFI) methods are a type of simulation-based inference approach which is particularly well-suited when we can make very little assumptions about the underlying generative model, treating it as an implicit function. These approaches only require samples from the model (e.g., a non-differentiable black-box simulator) and a measure of how likely real observations could have been generated from the simulator.
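The MIAC-style update described above, with the parameter gradient proportional to the prediction error, can be sketched for a scalar linear model. The forward model, gain, and function name are illustrative assumptions chosen so the gradient is trivial to write down.

```python
def miac_step(theta, state, action, next_state, lr=0.5):
    """One MIAC-style identification step: adjust the model parameter in
    proportion to the prediction error of the output signal.

    Assumes the illustrative forward model x_{t+1} = theta * x_t + u_t,
    so the gradient of the prediction w.r.t. theta is the current state.
    """
    prediction = theta * state + action
    error = next_state - prediction
    return theta + lr * error * state
```

Iterating this step on transitions generated by the true system drives the estimate toward the true parameter, after which the controller would be updated from the identified model.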
Approximate Bayesian computation is a well-known class of LFI methods that applies Monte Carlo sampling to infer the parameters by comparing summary statistics of synthetically generated and observed data. There exist plenty of variants of approximate Bayesian computation (Marjoram et al., 2003; Beaumont et al., 2009; Sunnåker et al., 2013) as well as studies on the design of low-dimensional summary statistics (Fearnhead and Prangle, 2012). In order to increase the efficiency and thereby scale LFI to higher-dimensional problems, researchers investigated amortized approaches, which conduct the inference over multiple sequential rounds. Sequential neural posterior estimation approaches (Papamakarios and Murray, 2016; Lueckmann et al., 2017; Greenberg et al., 2019) approximate the conditional posterior, allowing for direct sampling from the posterior. Learning the likelihood (Papamakarios et al., 2019) can be useful in the context of hypothesis testing. Alternatively, posterior samples can be generated from likelihood-ratios (Durkan et al., 2020; Hermans et al., 2020). However, simulation-based inference does not explicitly consider policy optimization or domain randomization. Recent approaches connected all three techniques, and closed the reality gap by inferring a distribution over simulators while training policies in simulation (Barcelos et al., 2020; Muratore et al., 2021c). DOMAIN RANDOMIZATION FOR SIM-TO-REAL TRANSFER We distinguish between static (Section 5.1), adaptive (Section 5.2), and adversarial (Section 5.3) domain randomization (Figure 3). Static, as well as adaptive, methods are characterized by randomly sampling a set of domain parameters ξ ~ p(ξ) at the beginning of each simulated rollout. A randomization scheme is categorized as adaptive if the domain parameter distribution is updated during learning, otherwise the scheme is called static. Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893
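The rejection variant of approximate Bayesian computation described above can be sketched in a few lines. The Gaussian toy model, the sample mean as summary statistic, and the tolerance value in the usage example are illustrative assumptions.

```python
import numpy as np

def abc_rejection(s_obs, simulate, prior_sample, n_draws, eps, rng):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic falls within eps of the observed summary statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

For example, to infer the mean of a unit-variance Gaussian one can use the sample mean of a simulated batch as the summary statistic; the accepted draws then approximate the posterior over the mean. Shrinking eps tightens the approximation at the cost of a lower acceptance rate.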
The main advantage of adaptive schemes is that they alleviate the need for hand-tuning the distributions of the domain parameters, which is currently a decisive part of the hyper-parameter search in a static scheme. Nonetheless, the prior distributions still demand design decisions. On the downside, every form of adaptation requires data from the target domain, typically the real robot, which is significantly more expensive to obtain. Another approach for learning robust policies in simulation is to apply adversarial disturbances during the training process. We classify these perturbations as a form of domain randomization, since they either depend on a highly stochastic adversary learned jointly with the policy, or directly contain a random process controlling the application of the perturbation. Adversarial approaches may yield exceptionally robust control strategies. However, without any further restrictions, it is always possible to create scenarios in which the protagonist agent can never win, i.e., the policy can not learn the task. Balancing the adversary's power is pivotal to an adversarial domain randomization method, adding a sensitive hyperparameter. Another way to distinguish domain randomization concepts is the representation of the domain parameter distribution. The vast majority of algorithms assume a specific probability distribution (e.g., normal or uniform) independently for every parameter. This modeling decision has the benefit of greatly reducing the complexity, but at the same time severely limits the expressiveness. Novel LFI methods (Section 5.2) estimate the complete posterior, hence allow the recognition of correlations between the domain parameters, multi-modality, and skewness. Static Domain Randomization Approaches that sample from a fixed domain parameter distribution typically aim at performing sim-to-real transfer without using any real-world data ( Figure 4). 
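In pseudocode-like form, a static scheme simply resamples the domain parameters from a fixed distribution before every rollout. The uniform range over a single parameter (e.g., a link mass) and the rollout callback below are illustrative assumptions; a real training loop would randomize many parameters and run a full RL update per batch.

```python
import random

def train_static_dr(n_rollouts, low, high, rollout, seed=0):
    """Static domain randomization: xi ~ p(xi) is drawn at the start of
    each simulated rollout, and p(xi) itself is never updated."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_rollouts):
        xi = rng.uniform(low, high)   # e.g., a randomized link mass
        returns.append(rollout(xi))   # train/evaluate the policy under xi
    return returns
```

An adaptive scheme would differ only in that the bounds (or distribution parameters) are themselves updated between iterations based on source- or target-domain data.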
Since running the policy on a physical device is generally the most difficult and time-consuming part, static approaches promise quick and relatively easy to obtain results. In terms of final policy performance in the target domain, these methods are usually inferior to those that adapt the domain parameter distribution. Nevertheless, static domain randomization has bridged the reality gap in several cases. Randomizing Dynamics Without Using Real-World Data at Runtime More than a decade ago, Wang et al. (2010) proposed to randomize the simulator in which the training data is generated. The authors examined the randomization of initial states, external disturbances, goals, and actuator noise, clearly showing an improved robustness of the learned locomotion controllers in simulated experiments (sim-to-sim). Mordatch et al. (2015) used finite model ensembles to run (offline) trajectory optimization on a small-scale humanoid robot, achieving one of the first sim-to-real transfers in robotics powered by domain randomization. Similarly, Lowrey et al. (2018) employed the Natural Policy Gradient (Kakade, 2001) to learn a continuous controller for a three-finger positioning task, after carefully identifying the system's parameters. Conforming with Mordatch et al. (2015), their results showed that the policy learned from the identified model was able to perform the sim-to-real transfer, but the policies learned from an ensemble of models were more robust to modeling errors. In contrast, Peng et al. (2018) combined model-free RL with recurrent NN policies that were trained using hindsight experience replay (Andrychowicz et al., 2017) in order to push an object by controlling a robotic arm. Tan et al. (2018) presented an example for learning quadruped gaits from randomized simulations, where particular efforts were made to conduct a prior system identification.
They empirically found that sampling domain parameters from a uniform distribution together with applying random forces and regularizing the observation space can be enough to cross the reality gap. For quadrotor control, Molchanov et al. (2019) trained feedforward NN policies which generalize over different physical drones. The suggested randomization includes a custom model for motor lag and noise based on an Ornstein-Uhlenbeck process. Rajeswaran et al. (2017) explored the use of a risk-averse objective function, optimizing a lower quantile of the return. The method was only evaluated on simulated MuJoCo tasks, however it was also one of the first methods that draws upon the Bayesian perspective. Moreover, this approach was employed as a baseline by Muratore et al. (2021b), who introduced a measure for the inter-domain transferability of controllers together with a risk-neutral randomization scheme. The resulting policies have the unique feature of providing a (probabilistic) guarantee on the estimated transferability and managed to directly transfer to the real platform in two different experiments. Siekmann et al. (2021) achieved the sim-to-real transfer of a recurrent NN policy for bipedal walking. The policy was trained using model-free RL in simulation with uniformly distributed dynamics parameters as well as randomized task-specific terrain. According to the authors, the recurrent architecture and the terrain randomization were pivotal. Randomizing Dynamics Using Real-World Data at Runtime The work by Cully et al. (2015) can be seen as both static and adaptive domain randomization, where a large set of hexapod locomotion policies is learned before execution on the physical robot, and subsequently evaluated in simulation. Every policy is associated with one configuration of the so-called behavioral descriptors, which can be interpreted as domain parameters. 
Instead of retraining or fine-tuning, the proposed algorithm reacts to performance drops, e.g., due to damage, by querying Bayesian Optimization (BO) to sequentially select one of the pretrained policies and measure its performance on the robot. Instead of randomizing the simulator parameters, Cutler and How (2015) explored learning a probabilistic model, chosen to be a GP, of the environment using data from both simulated and real-world dynamics. A key feature of this method is to incorporate the simulator as a prior for the probabilistic model, and subsequently use this information for the policy updates with PILCO (Deisenroth and Rasmussen, 2011). The authors demonstrated policy transfer for an inverted pendulum task. In follow-up work, Cutler and How (2016) extended the algorithm to make a remote-controlled toy car learn how to drift in circles. Antonova et al. (2019) propose a sequential Variational AutoEncoder (VAE) to embed trajectories into a compressed latent space which is used with BO to search for controllers. The VAE and the domain-specific high-level controllers are learned jointly, while the randomization scheme is left unchanged. Leveraging a custom kernel which measures the KL divergence between trajectories and the data efficiency of BO, the authors report successful sim-to-real transfers after 10 target domain trials for a hexapod locomotion task as well as 20 trials for a manipulation task. Kumar et al. (2021) learned a quadruped locomotion policy that passed joint positions to a lower-level PD controller without using any real-world data. The essential components of this approach are the encoder that projects the domain parameters to a latent space and the adaption module which is trained to regress the latent state from the recent history of measured states and actions.
The policy is conditioned on the current state, the previous actions, and the latent state which needs to be reconstructed during deployment in the physical world. Emphasizing the importance of the carefully engineered reward function, the authors demonstrate the method's ability to transfer from simulation to various outdoor terrains. Randomizing Visual Appearance and Configurations Tobin et al. (2017) learned an object detector for robot grasping using a fixed domain parameter distribution, and bridged the gap with a deep NN policy trained exclusively on simulated RGB images. Similarly, James et al. (2017) added various distracting shapes as well as structured noise (Perlin, 2002) when learning a robot manipulation task with an end-to-end controller that mapped pixels to motor velocities. The approach presented by Pinto et al. (2018) combines the concepts of static domain randomization and actor-critic training, enabling the direct sim-to-real transfer of the abilities to pick, push, or move objects. While the critic has access to the simulator's full state, the policy only receives images of the environment, creating an information asymmetry. Matas et al. (2018) used the asymmetric actor-critic idea from Pinto et al. (2018) as well as several other improvements to train a deep NN policy end-to-end, seeded with prior demonstrations. Solving three variations of a tissue folding task, this work scales sim-to-real visuomotor manipulation to deformable objects. Purely visual domain randomization has also been applied to aerial robotics, where Sadeghi and Levine (2017) achieved sim-to-real transfer for learning to fly a drone through indoor environments. The resulting deep NN policy was able to map from monocular images to normalized 3D drone velocities. Similarly, Polvara et al. (2020) demonstrated landing of a quadrotor trained in end-to-end fashion using randomized environments. Dai et al.
(2019) investigated the effect of domain randomization on visuomotor policies, and observed that this leads to more redundant and entangled representations accompanied with significant statistical changes in the weights. Yan et al. (2020) apply Model Predictive Control (MPC) to manipulate deformable objects using a forward model based on visual input. The novelty of this approach is that the predictive model is trained jointly with an embedding to minimize a contrastive loss (van den Oord et al., 2018) in the latent space. Finally, domain randomization was applied to transfer the behavior from simulation to the real robot. Randomizing Dynamics, Randomizing Visual Appearance, and Configurations Combining Generative Adversarial Networks (GANs) and domain randomization, Bousmalis et al. (2018) greatly reduced the number of necessary real-world samples for learning a robotic grasping task. The essence of their method is to transform simulated monocular RGB images in a way that is closely matched to the real counterpart. Extensive evaluation on the physical robot showed that domain randomization as well as the suggested pixel-level domain adaptation technique were important to successfully transfer. Despite the pixel-level domain adaptation technique being learned, the policy optimization in simulation is done with a fixed randomization scheme. In related work, James et al. (2019) train a GAN to transform randomized images to so-called canonical images, such that a corresponding real image would be transformed to the same one. This approach allowed them to train purely from simulated images, and optionally fine-tune the policy on target domain data. Notably, the robotic in-hand manipulation conducted by OpenAI et al. (2020) demonstrated that domain randomization in combination with careful model engineering and the usage of recurrent NNs enables sim-to-real transfer at an unprecedented difficulty level.
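A minimal sketch of the appearance randomization used throughout the works above: jitter per-channel color gains and add pixel noise so that a vision policy cannot latch onto exact textures. The gain range and noise scale are illustrative assumptions; published pipelines additionally randomize textures, lighting, camera pose, and distractor objects.

```python
import numpy as np

def randomize_appearance(image, rng):
    """Randomize the visual appearance of an RGB image with values in [0, 1]."""
    gains = rng.uniform(0.8, 1.2, size=(1, 1, 3))    # per-channel color jitter
    noise = rng.normal(0.0, 0.02, size=image.shape)  # stand-in for structured noise
    return np.clip(image * gains + noise, 0.0, 1.0)
```

Applying a fresh randomization to every rendered frame forces the perception stack to rely on shape and geometry rather than color statistics.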
Adaptive Domain Randomization Static domain randomization (Section 5.1) is inherently limited and implicitly assumes knowledge of the true mean of the domain parameters or accepts biased samples (Figure 5). Adapting the randomization scheme allows the training to narrow or widen the search distribution in order to fulfill one or multiple criteria which can be chosen freely. The mechanism devised for updating the domain parameter distribution as well as the procedure to collect meaningful target domain data are typically the centerpiece of adaptive randomization algorithms. In this process the execution of intermediate policies on the physical device is the most likely point of failure. However, approaches that update the distribution solely based on data from the source domain are less flexible and generally less effective. Conditioning Policies on the Estimated Domain Parameters Yu et al. (2017) suggested the use of a NN policy that is conditioned on the state and the domain parameters. Since these parameters are not assumed to be known, they have to be estimated, e.g., with online system identification. For this purpose, a second NN is trained to regress the domain parameters from the observed rollouts. By applying this approach to simulated continuous control tasks, the authors showed that adding the online system identification module can enable an adaption to sudden changes in the environment. In subsequent research, Yu et al. (2019a) intertwined policy optimization, system identification, and domain randomization. The proposed method first identifies bounds on the domain parameters which are later used for learning from the randomized simulator. In a departure from their previous approach, the policy is conditioned on a latent space projection of the domain parameters.
After training in simulation, a second system identification step runs BO for a fixed number of iterations to find the most promising projected domain parameters. The algorithm was evaluated on sim-to-real bipedal robot walking. Mozifian et al. (2020) also introduce a dependence of the policy on the domain parameters. These are updated by gradient ascent on the average return over domains, regularized by a penalty proportional to the KL divergence. Similar to Ruiz et al. (2019), the authors update the domain parameter distribution using the score function gradient estimator. Mozifian et al. (2020) tested their method on sim-to-sim robot locomotion tasks. It remains unclear whether this approach scales to sim-to-real scenarios since the adaptation is done based on the return obtained in simulation, and is thus not physically grounded. Bootstrapping from pre-recorded motion capture data of animals, Peng et al. (2020) learned quadruped locomotion skills with a synthesis of imitation learning, domain randomization, and domain adaptation (Section 4.3). The introduced method is conceptually related to the approach of Yu et al. (2019b), but adds an information bottleneck. According to the authors, this bottleneck is necessary because without it, the policy has access to the underlying dynamics parameters and becomes overly dependent on them, which leads to brittle behavior. To avoid this overfitting, Peng et al. (2020) limit the mutual information between the domain parameters and their encoding, realized as a penalty on the KL divergence from a zero-mean Gaussian prior on the latent variable. The Bilevel Optimization Perspective Muratore et al. (2021a) formulated adaptive domain randomization as a bilevel optimization that consists of an upper and a lower level problem. In this framework, the upper level is concerned with finding the domain parameter distribution, which when used for training in simulation leads to a policy with maximal real-world return.
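The KL-regularized score-function update used by the adaptive methods above can be sketched for the mean of a Gaussian domain parameter distribution. The baseline choice and the quadratic penalty standing in for the KL term are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def update_domain_mean(mu, sigma, xis, returns, lr, kl_weight, mu_nominal):
    """REINFORCE-style step on the mean of N(mu, sigma^2), regularized
    toward a nominal mean as a stand-in for the KL penalty."""
    xis, returns = np.asarray(xis), np.asarray(returns)
    advantage = returns - returns.mean()     # baseline-subtracted return
    grad_log = (xis - mu) / sigma**2         # score function of the Gaussian
    grad = np.mean(advantage * grad_log) - kl_weight * (mu - mu_nominal)
    return mu + lr * grad
```

Repeating this step shifts probability mass toward domain parameters that yield higher return, while the penalty keeps the distribution from collapsing onto parameters that merely make the simulated task easy.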
The lower level problem seeks to find a policy in the current randomized source domain. Using BO for the upper level and model-free RL for the lower level, Muratore et al. (2021a) compare their method in two underactuated sim-to-real robotic tasks against two baselines. Picturing the real-world return as analogous to the probability of optimality, this approach reveals parallels to control as inference (Rawlik et al., 2012; Levine and Koltun, 2013; Watson et al., 2021), where the control variates are the parameters of the domain distribution. BO has also been employed by Paul et al. (2019) to adapt the distribution of domain parameters such that using these for the subsequent training maximizes the policy's return. Their method models the relation between the current domain parameters, the current policy, and the return of the updated policy with a GP. Choosing the domain parameters that maximize the return in simulation is critical, since this creates the possibility to adapt the environment such that it is easier for the agent to solve. This design decision requires the policy parameters to be fed into the GP which is prohibitively expensive if the full set of parameters is used. Therefore, abstractions of the policy, so-called fingerprints, are created. These handcrafted features, e.g., a Gaussian approximation of the stationary state distribution, replace the policy to reduce the input dimension. Paul et al. (2019) tested the suggested algorithm on three sim-to-sim tasks, focusing on the handling of so-called significant rare events. Embedding the domain parameters into the mean function of a GP which models the system dynamics, Chatzilygeroudis and Mouret (2018) extended a black-box policy search algorithm (Chatzilygeroudis et al., 2017) with a simulator as prior. The approach explicitly searches for parameters of the simulator that fit the real-world data in an upper level loop, while optimizing the GP's hyper-parameters in a lower level loop.
This method allowed a damaged hexapod robot to walk in less than 30 s. Ruiz et al. (2019) proposed a meta-algorithm which is based on a bilevel optimization problem and updates the domain parameter distribution using REINFORCE (Williams, 1992). The approach has been evaluated in simulation on synthetic data, except for a semantic segmentation task. Thus, there was no dynamics-dependent interaction of the learned policy with the real world. Mehta et al. (2019) also formulated the adaption of the domain parameter distribution as an RL problem where different simulation instances are sampled and compared against a reference environment based on the resulting trajectories. This comparison is done by a discriminator which yields rewards proportional to the difficulty of distinguishing the simulated and real environments, hence providing an incentive to generate distinct domains. Using this reward signal, the domain parameters of the simulation instances are updated via Stein Variational Policy Gradient. Mehta et al. (2019) evaluated their method in a sim-to-real experiment where a robotic arm had to reach a desired point. In contrast, Chebotar et al. (2019) presented a trajectory-based framework for closing the reality gap, and validated it on two sim-to-real robotic manipulation tasks. The proposed procedure adapts the domain parameter distribution's parameters by minimizing the discrepancy between observations from the real-world system and the simulation. To measure the discrepancy, Chebotar et al. (2019) use a linear combination of the L1 and L2 norms between simulated and real trajectories. These values are then plugged in as costs for Relative Entropy Policy Search (REPS) (Peters et al., 2010) to update the simulator's parameters, hence turning the simulator identification into an episodic RL problem.
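The trajectory discrepancy used as the identification cost above can be written compactly. The equal default weighting of the two norms is an illustrative assumption.

```python
import numpy as np

def trajectory_discrepancy(real_traj, sim_traj, w1=1.0, w2=1.0):
    """Weighted combination of the L1 and L2 norms between real and
    simulated state trajectories, usable as a cost for updating the
    simulator's domain parameter distribution."""
    diff = np.asarray(real_traj) - np.asarray(sim_traj)
    return w1 * np.abs(diff).sum() + w2 * np.sqrt((diff ** 2).sum())
```

Minimizing this cost over domain parameters, e.g., with an episodic RL or black-box optimizer, pulls the simulated rollouts toward the observed real-world ones.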
The policy optimization was done using Proximal Policy Optimization (PPO) (Schulman et al., 2017), a step-based model-free RL algorithm. Removing Restrictions on the Domain Parameter Distribution Ramos et al. (2019) perform a fully Bayesian treatment of the simulator's parameters by employing Likelihood-Free Inference (LFI) with a Mixture Density Network (MDN) as model for the density estimator. Analyzing the obtained posterior over domain parameters, they showed that the proposed method is, in a sim-to-sim scenario, able to simultaneously infer different parameter configurations which can explain the observed trajectories. An evaluation over a grid of domain parameters confirms that the policies trained with the inferred posterior are more robust to model uncertainties. The key benefit over previous approaches is that the domain parameter distribution is not restricted to belong to a specific family, e.g., normal or uniform. Instead, the true posterior is approximated by the density estimator, fitted using LFI (Papamakarios and Murray, 2016). In follow-up work, Possas et al. (2020) addressed the problem of learning the behavioral policies which are required for the collection of target domain data. By integrating policy optimization via model-free RL, the authors created an online variant of the original method. The sim-to-real experiments were carried out using MPC where (only) the model parameters are updated based on the result from the LFI routine. Matl et al. (2020) scaled the Bayesian inference procedure of Ramos et al. (2019) to the simulation of granular media, estimating parameters such as friction and restitution coefficients. Barcelos et al. (2020) presented a method that interleaves domain randomization, LFI, and policy optimization. The controller is updated via nonlinear MPC while using the unscented transform to simulate different domain instances for the control horizon.
Hence, this algorithm allows one to calibrate the uncertainty as the system evolves with the passage of time, attributing higher costs to more uncertain paths. For performing the essential LFI, the authors build upon the work of Ramos et al. (2019) to identify the posterior domain parameters, which are modeled by a mixture of Gaussians. The approach was validated on a simulated inverted pendulum swing-up task as well as a real trajectory following task using a wheeled robot. Since the density estimation problem is the centerpiece of LFI-based domain randomization, improving the estimator's flexibility is of great interest. Muratore et al. (2021c) employed a sequential neural posterior estimation algorithm (Greenberg et al., 2019) which uses normalizing flows to estimate the (conditional) posterior over simulators. In combination with a segment-wise synchronization between the simulations and the recorded real-world trajectories, Muratore et al. (2021c) demonstrated the neural inference method's ability to learn the posterior belief over contact-rich black-box simulations. Moreover, the proposed approach was evaluated with policy optimization in the loop on an underactuated swing-up and balancing task, showing improved results compared to BayesSim as well as Bayesian linear regression. Adversarial Domain Randomization Extensive prior studies have shown that deep NN classifiers are vulnerable to imperceptible perturbations of their inputs, obtained via adversarial optimization, leading to significant drops in accuracy (Szegedy et al., 2014; Fawzi et al., 2015; Goodfellow et al., 2015; Kurakin et al., 2017; Ilyas et al., 2019). This line of research has been extended to reinforcement learning, showing that small (adversarial) perturbations are enough to significantly degrade the policy performance (Huang et al., 2017).
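A small perturbation of the kind referenced above can be sketched as a fast-gradient-sign step projected onto an L-infinity ball, applied to the observations rather than the simulation state. This is a common attack form from the cited adversarial-examples literature, not any single method's implementation; the gradient of the loss w.r.t. the observation is assumed to be given.

```python
import numpy as np

def linf_adversarial_obs(obs, loss_grad, epsilon):
    """Perturb an observation along the sign of the loss gradient and
    project the perturbation back onto an L-infinity ball of radius epsilon."""
    delta = np.clip(epsilon * np.sign(loss_grad), -epsilon, epsilon)
    return obs + delta
```

Training against such bounded perturbations is one way to harden a policy against test-time attacks, at the cost of the balancing issues discussed below.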
To defend against such attacks, the training data can be augmented with adversarially-perturbed examples, or the adversarial inputs can be detected and neutralized at test-time (Figure 6). However, studies of existing defenses have shown that adversarial examples are harder to detect than originally believed (Carlini and Wagner, 2017). It is safe to assume that this insight gained from computer vision problems transfers to the RL setting, on which we focus here. Adversary Available Analytically Mandlekar et al. (2017) proposed physically plausible perturbations by randomly deciding when to add a scaled gradient of the expected return w.r.t. the state. Their sim-to-sim evaluation on four MuJoCo tasks showed that agents trained with the suggested adversarial randomization generalize slightly better to domain parameter configurations than agents trained with a static randomization scheme. Lutter et al. (2021a) derived the optimal policy together with different optimal disturbances from the value function in a continuous state, action, and time RL setting. Despite outstanding sim-to-real transferability of the resulting policies, the presented approach is conceptually restricted by assuming access to a compact representation of the state domain, typically obtained through exhaustive sampling, which hinders the scalability to high-dimensional tasks. Adversary Learned via Two-Player Games Domain randomization can be described using a game theoretic framework. Focusing on two-player games for model-based RL, Rajeswaran et al. (2020) define a "policy player" which maximizes rewards in the learned model and a "model player" which minimizes the prediction error of data collected by the policy player. This formulation can be transferred to the sim-to-real scenario in different ways. One example is to make the "policy player" model-agnostic and to let the "model player" control the domain parameters. Pinto et al.
(2017) introduced the idea of a second agent whose goal it is to hinder the first agent from fulfilling its task. This adversary has the ability to apply force disturbances at predefined locations of the robot's body, while the domain parameters remain unchanged. Both agents are trained in alternation using RL, making this a zero-sum game. Similarly, Zhang et al. (2021) aim to train an agent using adversarial examples such that it becomes robust against test-time attacks. As in the approach presented by Pinto et al. (2017), the adversary and the protagonist are trained alternately until convergence at every meta-iteration. Unlike prior work, Zhang et al. (2021) build on state-adversarial MDPs, manipulating the observations but not the simulation state. Another key property of their approach is that the perturbations are applied after a projection to a bounded set. The proposed observation-based attack as well as the training algorithm are supported by four sim-to-sim validations in MuJoCo environments. Jiang et al. (2021) employed GANs to distinguish between source and target domain dynamics, sharing the concept of a learned domain discriminator with Mehta et al. (2019). Moreover, the authors proposed to augment an analytical physics simulator with a NN that is trained to maximize the similarity between simulated and real trajectories, turning the identification of the hybrid simulator into an RL problem. The comparison on a sim-to-real quadruped locomotion task showed an advantage over static domain randomization baselines. On the other hand, this method added noise to the behavioral policy in order to obtain diverse target domain trajectories for the simulator identification, which can be considered dangerous. DISCUSSION AND OUTLOOK To conclude this review, we discuss practical aspects of choosing among the existing domain randomization approaches (Section 6.1), emphasizing that sim-to-real transfer can also be achieved without randomizing (Section 6.2).
Finally, we sketch out several promising directions for future sim-to-real research (Section 6.3). Choosing a Suitable Domain Randomization Approach Every publication on sim-to-real robot learning presents an approach that surpasses its baselines. So, how should we select the right algorithm given a task? Up to now, there is no benchmark for sim-to-real methods based on the policy's target domain performance, and it is highly questionable if such a comparison could be fair, given that these algorithms have substantially different requirements and goals. The absence of one common benchmark is not necessarily bad, since bundling a set of environments to define a metric would bias research to pursue methods which optimize solely for that metric. A prominent example of this mechanism is the OpenAI Gym (Brockman et al., 2016), which became the de facto standard for RL. Contrarily, a similar development for sim-to-real research is not desirable since the overfitting to a small set of scenarios would be detrimental to the desired transferability to the vast number of other scenarios. When choosing from the published algorithms, the practitioner is advised to check if the approach has been tested on at least two different sim-to-real tasks, and if the (sometimes implicit) assumptions can be met. Adaptive domain randomization methods, for example, will require operating the physical device in order to collect real-world data. After all, we can expect that approaches with randomization will be more robust than the ones only trained on a nominal model. This has been shown consistently (Section 5). However, we cannot expect that these approaches work out of the box on novel problems without adjusting the hyper-parameters. Another starting point could be the set of sim-to-sim benchmarks released by Mehta et al. (2020), targeting the problem of system identification for state-of-the-art domain randomization algorithms.
Sim-To-Real Transfer Without Domain Randomization

Domain randomization is one way to successfully transfer control policies learned in simulation to the physical device, but by no means the only way.

Action Transformation

In order to cope with the inaccuracies of a simulator, Christiano et al. (2016) propose to train a deep inverse dynamics model that maps the action commanded by the policy to a transformed action. When applying the original action to the real system and the transformed action to the simulated system, both lead to the same next robot state, thus bridging the reality gap. To generate the data for training the inverse dynamics model, preliminary policies are augmented with hand-tuned exploration noise and executed in the target domain. Their approach is based on the observation that a policy's high-level strategy remains valid after sim-to-real transfer, and assumes that the simulator provides a reasonable estimate of the next state. With the same goal in mind, Hanna and Stone (2017) suggest an action transformation that is learned such that applying the transformed actions in simulation has the same effects as applying the original actions had on the real system. At the core of the approach is the estimation of neural forward and inverse models based on rollouts executed with the real robot.

Novel Neural Policy Architectures

Rusu et al. (2017) employ a progressively growing NN architecture (Rusu et al., 2016b) to learn an end-to-end approach mapping from pixels to discretized joint velocities. This NN framework enables the reuse of previously gained knowledge as well as the adaptation to new input modalities. The first part of the NN policy is trained in simulation, while the part added at transfer time needs to be trained using real-world data. For a relatively simple reaching task, the authors reported requiring approximately 4 h of runtime on the physical robot. Kaspar et al. (2020) propose to combine operational space control and RL.
After carefully identifying the simulator's parameters, the RL agent learns to control the end-effector via forces on a unit mass-spring-damper system. The constraints and nullspace behavior are abstracted away from the agent, making the RL problem easier and the policy more transferable.

Promising Future Research Directions

Learning from randomized simulations still offers abundant possibilities to enable or improve the sim-to-real transfer of control policies. In the following, we describe multiple opportunities for future work in this area of research.

Real-To-Sim-To-Real Transfer

Creating randomizable simulation environments is time-intensive, and the initial guesses for the domain parameters as well as their variances are typically very inaccurate. It is therefore of great interest to automate this process, grounded by real-world data. One viable scenario could be to record an environment with an RGB-D camera and subsequently use the information to reconstruct the scene. Moreover, the recorded data can be processed to infer the domain parameters, which then specify the domain parameter distributions. When devising such a framework, we could start from prior work on 3D scene reconstruction (Kolev et al., 2009; Haefner et al., 2018) as well as methods to estimate the degrees of freedom of rigid bodies (Martin-Martin and Brock, 2014). A data-based automatic generation of simulation environments (real-to-sim-to-real) not only promises to reduce the workload, but would also yield a meaningful initialization for the domain distribution parameters.

Policy Architectures With Inductive Biases

Deep NNs are by far the most common policy type, favored because of their flexibility and expressiveness. However, they are also brittle w.r.t. changes in their inputs (Szegedy et al., 2014; Goodfellow et al., 2015; Huang et al., 2017). Due to the inevitable domain shift in sim-to-real scenarios, this input sensitivity is magnified.
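A toy calculation makes this brittleness concrete. The linear policy layer below is hypothetical, chosen so that a small observation shift is amplified by the layer's largest singular value:

```python
import numpy as np

# A (hypothetical) linear policy layer with large, nearly-cancelling weights.
# Small observation shifts are amplified by up to the top singular value of W.
W = np.array([[40.0, -39.0],
              [-39.0, 40.0]])
policy = lambda obs: W @ obs

obs = np.array([1.0, 1.0])
shift = np.array([0.01, -0.01])      # tiny sim-to-real observation shift
change = np.linalg.norm(policy(obs + shift) - policy(obs))
amplification = change / np.linalg.norm(shift)
print(amplification)  # 79.0, the largest singular value of W
```

Physically-grounded architectures of the kind discussed in this subsection constrain such amplification by construction, e.g., via stability or passivity guarantees.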
The success of domain randomization methods for robot learning can largely be attributed to their ability to regularize deep NN policies by diversifying the training data. Generally, one may also introduce regularization by designing alternative models for the control policies, e.g., linear combinations of features and parameters, (time-varying) mixtures of densities, or movement primitives. All of these have their individual strengths and weaknesses. We believe that pairing the expressiveness of deep NNs with physically-grounded prior knowledge leads to controllers that achieve high performance and suffer less when transferring to the real world, since they are able to bootstrap from their prior. There are multiple ways to incorporate abstract knowledge about physics. We can, for example, restrict the policy to obey stable system dynamics derived from first principles (Greydanus et al., 2019; Lutter et al., 2019). Another approach is to design the model class such that the closed-loop system is passive for all parameterizations of the learned policy, thus guaranteeing stability in the sense of Lyapunov as well as bounded output energy given bounded input energy (Brogliato et al., 2007; Yang et al., 2013; Dai et al., 2021). All these methods would require significant exploration in the environment, making it even more challenging to learn successful controllers in the real world directly. Leveraging randomized simulation is likely going to be a critical component in demonstrating the solving of sequential problems on real robots.

Towards Dual Control via Neural Likelihood-Free Inference

Continuing the direction of adaptive domain randomization, we are convinced that neural LFI methods powered by normalizing flows are promising approaches.
The combination of highly flexible density estimators with widely applicable and sample-efficient inference methods allows one to identify multimodal distributions over simulators under very mild assumptions (Barcelos et al., 2021; Muratore et al., 2021c). By introducing an auxiliary optimality variable and making the policy parameters subject to the inference, we obtain a posterior over policies quantifying their likelihood of being optimal. While this idea is well known in the control-as-inference community (Rawlik et al., 2012; Levine and Koltun, 2013; Watson et al., 2021), prior methods were limited to less powerful density estimation procedures. Taking this idea one step further, we could additionally include the domain parameters in the inference, and thereby establish connections to dual control (Feldbaum, 1960; Wittenmark, 1995).

Accounting for the Cost of Information Collection

Another promising direction for future research is the combination of simulated and real-world data collection with explicit consideration of the different costs when sampling from the two domains, subject to a restriction of the overall computational budget. One part of this problem was already addressed by Marco et al. (2017), showing how simulation can be used to alleviate the need for real-world samples when finding a set of policy parameters. However, the question of how to schedule the individual (simulated or real) experiments and when to stop the procedure, i.e., when the cost of gathering information exceeds its expected benefit, has not yet been answered for sim-to-real transfer. This question relates to the problems of optimal stopping (Chow and Robbins, 1963) as well as multi-fidelity optimization (Forrester et al., 2007), and can be seen as a reformulation thereof in the context of simulation-based learning.
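Returning to the likelihood-free identification theme above: a rejection-ABC toy (the simulator and all numbers are illustrative) shows the principle that neural LFI refines with learned conditional density estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy 'simulator': a rollout summary that depends on domain parameter theta."""
    return theta + rng.normal(0.0, 0.05)

def rejection_abc(observed, n=5000, tol=0.1):
    """Keep domain parameters whose simulated summary lies within tol of the
    real-world observation; the retained samples approximate the posterior."""
    thetas = rng.uniform(0.0, 3.0, n)          # prior over the domain parameter
    sims = np.array([simulate(t) for t in thetas])
    return thetas[np.abs(sims - observed) < tol]

posterior = rejection_abc(observed=1.5)
print(posterior.mean())  # concentrates near the observed value, 1.5
```

In neural LFI, the hard accept/reject threshold is replaced by a conditional density estimator (e.g., a normalizing flow) trained on pairs of domain parameters and simulated summaries, yielding the full posterior and thus a data-driven randomization distribution.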
Solving Sequential Problems

The problem settings considered in the overwhelming majority of related publications are (continuous) control tasks which do not have a sequential nature. In contrast, most real-world tasks, such as the ones posed at the DARPA Robotics Challenge (Krotkov et al., 2017), consist of (disconnected) segments, e.g., a robot needs to turn the knob before it can open a door. One possible way to address these more complicated tasks is by splitting the control into high- and low-level policies, similar to the options framework (Sutton et al., 1999). The higher-level policy is trained to orchestrate the low-level policies, which could be learned or fixed. Existing approaches typically realize this with discrete switches between the low-level policies, leading to undesirable abrupt changes in the behavior. An alternative would be a continuous blending of policies, controlled by a special kind of recurrent NN which was originally proposed by Amari (1977) to model activities in the human brain. Used as policy architectures, they can be constructed to exhibit asymptotically stable nonlinear dynamics (Kishimoto and Amari, 1979). The main benefits of this structure are its easy interpretability via excitation and inhibition of neural potentials, as well as the relatively low number of parameters necessary to create complex and adaptive behavior. A variation of this idea with hand-tuned parameters, i.e., without machine learning, has been applied by Luksch et al. (2012) to coordinate the activation of pre-defined movement primitives.

SELECTION OF REFERENCES

We chose the references based on multiple criteria: 1) Our primary goal was to cover all milestones of sim-to-real research for robotics. 2) In the process, we aimed at diversifying over subfields and research groups. 3) A large proportion of papers came to our attention by running Google Scholar alerts on "sim-to-real" and "reality gap" since 2017.
4) Another source was reverse searches starting from highly influential publications. 5) Some papers came to our attention because of citation notifications we received on our work. 6) Finally, a few of the selected publications are recommendations from reviewers, colleagues, or researchers met at conferences. 7) Peer-reviewed papers were strongly preferred over pre-prints.

FIGURE 2 | Topological overview of the sim-to-real research and a selection of related fields.
FIGURE 3 | Topological overview of domain randomization methods.
FIGURE 4 | Conceptual illustration of static domain randomization.
FIGURE 5 | Conceptual illustration of adaptive domain randomization.
FIGURE 6 | Conceptual illustration of adversarial domain randomization.

AUTHOR CONTRIBUTIONS

FM: main author; FR: added and edited text, suggested publications, proofread; GT: added and edited text, suggested publications, proofread; WY: added and edited text, suggested publications, proofread; MG: edited text, proofread (Ph.D. supervisor of FM); JP: added and edited text, suggested publications, proofread (Ph.D. supervisor of FM).

FUNDING

FM gratefully acknowledges the financial support from Honda Research Institute Europe. JP received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 640554. WY and GT have been supported by NSF award IIS-1514258.

Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893

6.2.3 Identifying and Improving the Simulator

Xie et al. (2019) describe an iterative process including motion tracking, system identification, RL, and knowledge distillation to learn control policies for humanoid walking on the physical system.
This way, the authors can rely on known building blocks, resulting in initial and intermediate policies which are reasonably safe to execute. To run a policy on the real robot while learning without the risk of damaging or stopping the device, Muratore et al.

The authors declare that this study received funding from the Honda Research Institute Europe. The funder had the following involvement in the study: the structuring and improvement of this article jointly with the authors, and the decision to submit it for publication.

Conflict of Interest: Author FM was employed by the Technical University of Darmstadt in collaboration with the Honda Research Institute Europe. Author FR was employed by NVIDIA. Author WY was employed by Google. Author MG was employed by the Honda Research Institute Europe. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

REFERENCES

Abdulsamad, H., Dorau, T., Belousov, B., Zhu, J., and Peters, J. (2021). Distributionally Robust Trajectory Optimization under Uncertain Dynamics via Relative-Entropy Trust Regions. arXiv 2103.15388.

Alghonaim, R., and Johns, E. (2020). Benchmarking Domain Randomisation for Visual Sim-To-Real Transfer. arXiv 2011.07112.

Allevato, A., Short, E. S., Pryor, M., and Thomaz, A. (2019). TuneNet: One-Shot Residual Tuning for System Identification and Sim-To-Real Robot Task Transfer. In Conference on Robot Learning (CoRL), October 30 - November 1, Osaka, Japan (PMLR), vol. 100 of Proceedings of Machine Learning Research, 445-455.

Amari, S.-i. (1977). Dynamics of Pattern Formation in Lateral-Inhibition Type Neural Fields. Biol. Cybern. 27, 77-87. doi:10.1007/bf00337259.

Andrychowicz, M., Crow, D., Ray, A., Schneider, J., Fong, R., Welinder, P., et al. (2017). Hindsight Experience Replay. In Conference on Neural Information Processing Systems (NIPS), December 4-9, Long Beach, CA, USA, 5048-5058.

Andrychowicz, O. M., Baker, B., Chociej, M., Józefowicz, R., McGrew, B., Pachocki, J., et al. (2020). Learning Dexterous In-Hand Manipulation. Int. J. Robotics Res. 39, 3-20. doi:10.1177/0278364919887447.

Antonova, R., Rai, A., Li, T., and Kragic, D. (2019). Bayesian Optimization in Variational Latent Spaces with Dynamic Compression. In Conference on Robot Learning (CoRL), October 30 - November 1, Osaka, Japan (PMLR), vol. 100 of Proceedings of Machine Learning Research, 456-465.

Asada, M., Noda, S., Tawaratsumida, S., and Hosoda, K. (1996). Purposive Behavior Acquisition for a Real Robot by Vision-Based Reinforcement Learning. Mach. Learn. 23, 279-303. doi:10.1023/A:1018237008823.

Åström, K. J., and Wittenmark, B. (2008). Adaptive Control. 2nd edn. Dover Publications.

Atkeson, C. G., An, C. H., and Hollerbach, J. M. (1986). Estimation of Inertial Parameters of Manipulator Loads and Links. Int. J. Robotics Res. 5, 101-119. doi:10.1177/027836498600500306.

Baker, B., Kanitscheider, I., Markov, T. M., Wu, Y., Powell, G., McGrew, B., et al. (2020). Emergent Tool Use from Multi-Agent Autocurricula. In International Conference on Learning Representations (ICLR), April 26-30, Addis Ababa, Ethiopia (OpenReview.net).

Barcelos, L., Lambert, A., Oliveira, R., Borges, P., Boots, B., and Ramos, F. (2021). Dual Online Stein Variational Inference for Control and Dynamics. In Robotics: Science and Systems (RSS), July 12-16, Virtual Event. doi:10.15607/RSS.2021.XVII.068.

Barcelos, L., Oliveira, R., Possas, R., Ott, L., and Ramos, F. (2020). DISCO: Double Likelihood-Free Inference Stochastic Control. In International Conference on Robotics and Automation (ICRA), May 31 - August 31, Paris, France (IEEE), 10969-10975. doi:10.1109/ICRA40945.2020.9196931.

Battaglia, P. W., Hamrick, J. B., and Tenenbaum, J. B. (2013). Simulation as an Engine of Physical Scene Understanding. Proc. Natl. Acad. Sci. 110, 18327-18332. doi:10.1073/pnas.1306572110.

Battaglia, P. W., Pascanu, R., Lai, M., Rezende, D. J., and Kavukcuoglu, K. (2016). Interaction Networks for Learning about Objects, Relations and Physics. In Conference on Neural Information Processing Systems (NIPS), December 5-10, Barcelona, Spain, 4502-4510.

Bayraksan, G., and Morton, D. P. (2006). Assessing Solution Quality in Stochastic Programs. Math. Program. 108, 495-514. doi:10.1007/s10107-006-0720-x.

Beaumont, M. A., Cornuet, J.-M., Marin, J.-M., and Robert, C. P. (2009). Adaptive Approximate Bayesian Computation. Biometrika 96, 983-990. doi:10.1093/biomet/asp052.

Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum Learning. In International Conference on Machine Learning (ICML), June 14-18, Montreal, Quebec, Canada (ACM), 41-48. doi:10.1145/1553374.1553380.

Bin Peng, X., Coumans, E., Zhang, T., Lee, T.-W., Tan, J., and Levine, S. (2020). Learning Agile Robotic Locomotion Skills by Imitating Animals. In Robotics: Science and Systems (RSS), July 12-16, Virtual Event / Corvallis, Oregon, USA. doi:10.15607/RSS.2020.XVI.064.

Bongard, J., Zykov, V., and Lipson, H. (2006). Resilient Machines through Continuous Self-Modeling. Science 314, 1118-1121. doi:10.1126/science.1133687.

Bousmalis, K., Irpan, A., Wohlhart, P., Bai, Y., Kelcey, M., Kalakrishnan, M., et al. (2018). Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping. In International Conference on Robotics and Automation (ICRA), May 21-25, Brisbane, Australia, 4243-4250. doi:10.1109/ICRA.2018.8460875.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., et al. (2016). OpenAI Gym. arXiv 1606.01540.

Brogliato, B., Maschke, B., Lozano, R., and Egeland, O. (2007). Dissipative Systems Analysis and Control: Theory and Applications. 2nd edn. doi:10.1007/978-1-84628-517-2.

Brooks, R. A. (1992). Artificial Life and Real Robots. In European Conference on Artificial Life (ECAL), December 11-13, Paris, France, 3-10.

Brosseit, J., Hahner, B., Muratore, F., Gienger, M., and Peters, J. (2021). Distilled Domain Randomization. arXiv 2112.03149.

Calandra, R., Ivaldi, S., Deisenroth, M. P., Rueckert, E., and Peters, J. (2015). Learning Inverse Dynamics Models with Contacts. In International Conference on Robotics and Automation (ICRA), May 26-30, Seattle, WA, USA (IEEE), 3186-3191. doi:10.1109/ICRA.2015.7139638.

Carlini, N., and Wagner, D. (2017). Adversarial Examples Are Not Easily Detected. In Workshop on Artificial Intelligence and Security (AISec), November 3, Dallas, TX, USA (ACM), 3-14. doi:10.1145/3128572.3140444.

Chatzilygeroudis, K., and Mouret, J.-B. (2018). Using Parameterized Black-Box Priors to Scale up Model-Based Policy Search for Robotics. In International Conference on Robotics and Automation (ICRA), May 21-25, Brisbane, Australia (IEEE), 1-9. doi:10.1109/ICRA.2018.8461083.

Chatzilygeroudis, K., Rama, R., Kaushik, R., Goepp, D., Vassiliades, V., and Mouret, J.-B. (2017). Black-Box Data-Efficient Policy Search for Robotics. In International Conference on Intelligent Robots and Systems (IROS), September 24-28, Vancouver, BC, Canada (IEEE), 51-58. doi:10.1109/IROS.2017.8202137.

Chatzilygeroudis, K., Vassiliades, V., Stulp, F., Calinon, S., and Mouret, J.-B. (2020). A Survey on Policy Search Algorithms for Learning Robot Controllers in a Handful of Trials. IEEE Trans. Robot. 36, 328-347. doi:10.1109/TRO.2019.2958211.

Chebotar, Y., Handa, A., Makoviychuk, V., Macklin, M., Issac, J., Ratliff, N., et al. (2019). Closing the Sim-To-Real Loop: Adapting Simulation Randomization with Real World Experience. In International Conference on Robotics and Automation (ICRA), May 20-24, Montreal, QC, Canada, 8973-8979. doi:10.1109/ICRA.2019.8793789.

Chen, G., Choi, W., Yu, X., Han, T. X., and Chandraker, M. (2017). Learning Efficient Object Detection Models with Knowledge Distillation. In Conference on Neural Information Processing Systems (NIPS), December 4-9, Long Beach, CA, USA, 742-751.

Chow, Y. S., and Robbins, H. (1963). On Optimal Stopping Rules. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 2, 33-49. doi:10.1007/bf00535296.

Christiano, P. F., Shah, Z., Mordatch, I., Schneider, J., Blackwell, T., Tobin, J., et al. (2016). Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model. arXiv 1610.03518.

Chung, S.-J., and Pollard, N. (2016). Predictable Behavior during Contact Simulation: A Comparison of Selected Physics Engines. Comp. Anim. Virtual Worlds 27, 262-270. doi:10.1002/cav.1712.

Ciresan, D., Meier, U., and Schmidhuber, J. (2012). Multi-Column Deep Neural Networks for Image Classification. In Conference on Computer Vision and Pattern Recognition (CVPR), June 16-21, RI, USA (IEEE Computer Society), 3642-3649. doi:10.1109/CVPR.2012.6248110.

Collins, J., Howard, D., and Leitner, J. (2019). Quantifying the Reality Gap in Robotic Manipulation Tasks. In International Conference on Robotics and Automation (ICRA), May 20-24, Montreal, QC, Canada (IEEE), 6706-6712. doi:10.1109/ICRA.2019.8793591.

Coumans, E. (2020). Tiny Differentiable Simulator. Available at: https://github.com/google-research/tiny-differentiable-simulator.

Craik, K. J. W. (1943). The Nature of Explanation.

Cranmer, K., Brehmer, J., and Louppe, G. (2020). The Frontier of Simulation-Based Inference. Proc. Natl. Acad. Sci. USA 117, 30055-30062. doi:10.1073/pnas.1912789117.

Cui, J., Kingsbury, B., Ramabhadran, B., Saon, G., Sercu, T., Audhkhasi, K., et al. (2017). Knowledge Distillation across Ensembles of Multilingual Models for Low-Resource Languages. In ICASSP, March 5-9, New Orleans, LA, USA (IEEE), 4825-4829. doi:10.1109/ICASSP.2017.7953073.

Cully, A., Clune, J., Tarapore, D., and Mouret, J.-B. (2015). Robots that Can Adapt like Animals. Nature 521, 503-507. doi:10.1038/nature14422.

Cutler, M., and How, J. P. (2016). Autonomous Drifting Using Simulation-Aided Reinforcement Learning. In International Conference on Robotics and Automation (ICRA), May 16-21, Stockholm, Sweden (IEEE), 5442-5448. doi:10.1109/ICRA.2016.7487756.

Cutler, M., and How, J. P. (2015). Efficient Reinforcement Learning for Robots Using Informative Simulated Priors. In International Conference on Robotics and Automation (ICRA), May 26-30, Seattle, WA, USA (IEEE), 2605-2612. doi:10.1109/ICRA.2015.7139550.

Czarnecki, W. M., Pascanu, R., Osindero, S., Jayakumar, S. M., Swirszcz, G., and Jaderberg, M. (2019). Distilling Policy Distillation. In International Conference on Artificial Intelligence and Statistics (AISTATS), April 16-18, Naha, Okinawa, Japan (PMLR), vol. 89 of Proceedings of Machine Learning Research, 1331-1340.

Dai, H., Landry, B., Yang, L., Pavone, M., and Tedrake, R. (2021). Lyapunov-Stable Neural-Network Control. In Robotics: Science and Systems (RSS), July 12-16, Virtual Event. doi:10.15607/RSS.2021.XVII.063.

Dai, T., Arulkumaran, K., Tukra, S., Behbahani, F., and Bharath, A. A. (2019). Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation. arXiv 1912.08324.

Degrave, J., Hermans, M., Dambre, J., and Wyffels, F. (2019). A Differentiable Physics Engine for Deep Learning in Robotics. Front. Neurorobot. 13, 6. doi:10.3389/fnbot.2019.00006.

Deisenroth, M. P., Neumann, G., and Peters, J. (2011). A Survey on Policy Search for Robotics. FNT in Robotics 2, 1-142. doi:10.1561/2300000021.

Deisenroth, M. P., and Rasmussen, C. E. (2011). PILCO: A Model-Based and Data-Efficient Approach to Policy Search. In International Conference on Machine Learning (ICML), June 28 - July 2, Bellevue, Washington, USA, 465-472.

Delage, E., and Ye, Y. (2010). Distributionally Robust Optimization under Moment Uncertainty with Application to Data-Driven Problems. Operations Res. 58, 595-612. doi:10.1287/opre.1090.0741.

Dennett, D. C. (1975). Why the Law of Effect Will Not Go Away. J. Theor. Soc. Behav. doi:10.1111/j.1468-5914.1975.tb00350.x.

Du, Y., Watkins, O., Darrell, T., Abbeel, P., and Pathak, D. (2021). Auto-Tuned Sim-To-Real Transfer. arXiv 2104.07662.

Durkan, C., Murray, I., and Papamakarios, G. (2020). On Contrastive Learning for Likelihood-Free Inference. In International Conference on Machine Learning (ICML), July 13-18, Virtual Event (PMLR), vol. 119 of Proceedings of Machine Learning Research, 2771-2781.

Erez, T., and Smart, W. D. (2008). What Does Shaping Mean for Computational Reinforcement Learning? In International Conference on Development and Learning (ICDL), Monterey, CA, USA (IEEE), 215-219.

Erez, T., Tassa, Y., and Todorov, E. (2015). Simulation Tools for Model-Based Robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX. In International Conference on Robotics and Automation (ICRA), May 26-30, Seattle, WA, USA, 4397-4404. doi:10.1109/ICRA.2015.7139807.

Fawzi, A., Fawzi, O., and Frossard, P. (2015). Fundamental Limits on Adversarial Robustness. In International Conference on Machine Learning (ICML), Workshop on Deep Learning.

Fearnhead, P., and Prangle, D. (2012). Constructing Summary Statistics for Approximate Bayesian Computation: Semi-Automatic Approximate Bayesian Computation. J. R. Stat. Soc. 74, 419-474. doi:10.1111/j.1467-9868.2011.01010.x.

Feldbaum, A. A. (1960). Dual Control Theory. I. Avtomatika i Telemekhanika 21, 1240-1249.

Finn, C., Abbeel, P., and Levine, S. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In International Conference on Machine Learning (ICML), August 6-11, Sydney, NSW, Australia, 1126-1135.

Forrester, A. I. J., Sóbester, A., and Keane, A. J. (2007). Multi-Fidelity Optimization via Surrogate Modelling. Proc. R. Soc. A 463, 3251-3269. doi:10.1098/rspa.2007.1900.

Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations (ICLR), May 7-9, San Diego, CA, USA, Conference Track.

Grant, E., Finn, C., Levine, S., Darrell, T., and Griffiths, T. L. (2018). Recasting Gradient-Based Meta-Learning as Hierarchical Bayes. In International Conference on Learning Representations (ICLR), April 30 - May 3, Vancouver, BC, Canada, Conference Track (OpenReview.net).

Greenberg, D. S., Nonnenmacher, M., and Macke, J. H. (2019). Automatic Posterior Transformation for Likelihood-Free Inference. In International Conference on Machine Learning (ICML), June 9-15, Long Beach, California, USA (PMLR), vol. 97 of Proceedings of Machine Learning Research, 2404-2414.

Hamiltonian Neural Networks.
S Greydanus, M Dzamba, Yosinski , J , Conference on Neural Information Processing Systems (NeurIPS). Vancouver, BC, CanadaGreydanus, S., Dzamba, M., and Yosinski, J. (2019). "Hamiltonian Neural Networks," in Conference on Neural Information Processing Systems (NeurIPS), December 8-14 (Vancouver, BC, Canada, 15353-15363. S Höfer, K E Bekris, A Handa, J C G Higuera, F Golemo, M Mozifian, Perspectives on Sim2real Transfer for Robotics: A Summary of the R:SS 2020 Workshop. arXiv 2012. 3806Höfer, S., Bekris, K. E., Handa, A., Higuera, J. C. G., Golemo, F., Mozifian, M., et al. (2020). Perspectives on Sim2real Transfer for Robotics: A Summary of the R:SS 2020 Workshop. arXiv 2012.03806. Fight Ill-Posedness with Ill-Posedness: Single-Shot Variational Depth Superresolution from Shading. B Haefner, Y Queau, T Mollenhoff, D Cremers, 10.1109/CVPR.2018.00025Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USAIEEE Computer SocietyHaefner, B., Queau, Y., Mollenhoff, T., and Cremers, D. (2018). "Fight Ill- Posedness with Ill-Posedness: Single-Shot Variational Depth Super- resolution from Shading," in Conference on Computer Vision and Pattern Recognition (CVPR), June 18-22 (Salt Lake City, UT, USA: IEEE Computer Society), 164-174. doi:10.1109/CVPR.2018.00025 Grounded Action Transformation for Robot Learning in Simulation. J P Hanna, P Stone, AAAI Conference on Artificial Intelligence. San Francisco, California, USAHanna, J. P., and Stone, P. (2017). "Grounded Action Transformation for Robot Learning in Simulation," in AAAI Conference on Artificial Intelligence, February 4-9 (San Francisco, California, USA, 3834-3840. NeuralSim: Augmenting Differentiable Simulators with Neural Networks. E Heiden, D Millard, E Coumans, Y Sheng, G S Sukhatme, 10.1109/icra48506.2021.9560935International Conference on Robotics and Automation (ICRA). Xi'an, ChinaHeiden, E., Millard, D., Coumans, E., Sheng, Y., and Sukhatme, G. S. (2021). 
"NeuralSim: Augmenting Differentiable Simulators with Neural Networks," in International Conference on Robotics and Automation (ICRA), May 30 -June 5 (Xi'an, China. doi:10.1109/icra48506.2021.9560935 J Hermans, V Begy, G Louppe, PMLRof Proceedings of Machine Learning Research.Likelihood-free MCMC with Amortized Approximate Ratio EstimatorsInternational Conference on Machine Learning (ICML). 119Hermans, J., Begy, V., and Louppe, G. (2020)., 119. PMLR, 4239-4248. of Proceedings of Machine Learning Research.Likelihood-free MCMC with Amortized Approximate Ratio EstimatorsInternational Conference on Machine Learning (ICML), Virtual Event13-18 July G E Hinton, O Vinyals, J Dean, Distilling the Knowledge in a Neural Network. arXiv 1503. 2531Hinton, G. E., Vinyals, O., and Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv 1503.02531. Chainqueen: A Real-Time Differentiable Physical Simulator for Soft Robotics. Y Hu, J Liu, A Spielberg, J B Tenenbaum, W T Freeman, J Wu, 10.1109/ICRA.2019.8794333International Conference on Robotics and Automation (ICRA). Montreal, QC, CanadaIEEEHu, Y., Liu, J., Spielberg, A., Tenenbaum, J. B., Freeman, W. T., Wu, J., et al. (2019). "Chainqueen: A Real-Time Differentiable Physical Simulator for Soft Robotics," in International Conference on Robotics and Automation (ICRA), May 20-24 (Montreal, QC, Canada: IEEE), 6265-6271. doi:10.1109/ICRA. 2019.8794333 S H Huang, N Papernot, I J Goodfellow, Y Duan, Abbeel , P , International Conference on Learning Representations (ICLR) Toulon. France. OpenReviewAdversarial Attacks on Neural Network Policies, Workshop Track. netHuang, S. H., Papernot, N., Goodfellow, I. J., Duan, Y., and Abbeel, P. (2017). "Adversarial Attacks on Neural Network Policies, Workshop Track," in International Conference on Learning Representations (ICLR) Toulon, April 24-26 (France. OpenReview.net). Adversarial Examples Are Not Bugs, They Are Features. 
A Ilyas, S Santurkar, D Tsipras, L Engstrom, B Tran, A Madry, Conference on Neural Information Processing Systems (NeurIPS). Vancouver, BC, CanadaIlyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A. (2019). "Adversarial Examples Are Not Bugs, They Are Features," in Conference on Neural Information Processing Systems (NeurIPS), December 8-14 (Vancouver, BC, Canada, 125-136. Tools for Simulating Humanoid Robot Dynamics: A Survey Based on User Feedback. S Ivaldi, J Peters, V Padois, F Nori, 10.1109/HUMANOIDS.2014.7041462doi:10.1109/ HUMANOIDS.2014.7041462Tools for simulating humanoid robot dynamics: A survey based on user feedback. Humanoids, Madrid, SpainIEEEIvaldi, S., Peters, J., Padois, V., and Nori, F. (2014). "Tools for Simulating Humanoid Robot Dynamics: A Survey Based on User Feedback," in Tools for simulating humanoid robot dynamics: A survey based on user feedback, November 18-20 (Humanoids, Madrid, Spain: IEEE), 842-849. doi:10.1109/ HUMANOIDS.2014.7041462 Evolutionary Robotics and the Radical Envelope-Of-Noise Hypothesis. N Jakobi, 10.1177/105971239700600205Adaptive Behav. 6Jakobi, N. (1997). Evolutionary Robotics and the Radical Envelope-Of-Noise Hypothesis. Adaptive Behav. 6, 325-368. doi:10.1177/105971239700600205 Noise and the Reality gap: The Use of Simulation in Evolutionary Robotics. N Jakobi, P Husbands, I Harvey, 10.1007/3-540-59496-5_337Advances in Artificial Life. Granada, SpainJakobi, N., Husbands, P., and Harvey, I. (1995). "Noise and the Reality gap: The Use of Simulation in Evolutionary Robotics," in Advances in Artificial Life, June 4-6 (Granada, Spain, 704-720. 704-720. doi:10.1007/3-540-59496-5_337 Transferring End-To-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task. S James, A J Davison, E Johns, Research.78Conference on Robot Learning (CoRL). Mountain View, California, USAof Proceedings of Machine LearningJames, S., Davison, A. J., and Johns, E. (2017). 
"Transferring End-To-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task," in Conference on Robot Learning (CoRL), November 13-15 (Mountain View, California, USA: PMLR), 334-343. of Proceedings of Machine Learning Research.78 Sim-to-real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks. S James, P Wohlhart, M Kalakrishnan, D Kalashnikov, A Irpan, J Ibarz, 10.1109/CVPR.2019.01291doi:10. 1109/CVPR.2019.01291Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USAComputer Vision Foundation/IEEEJames, S., Wohlhart, P., Kalakrishnan, M., Kalashnikov, D., Irpan, A., Ibarz, J., et al. (2019). "Sim-to-real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks," in Conference on Computer Vision and Pattern Recognition (CVPR), June 16-20 (Long Beach, CA, USA: Computer Vision Foundation/IEEE), 12627-12637. doi:10. 1109/CVPR.2019.01291 Simgan: Hybrid Simulator Identification for Domain Adaptation via. Y Jiang, T Zhang, D Ho, Y Bai, C K Liu, S Levine, Adversarial Reinforcement Learning. arXiv. 21016005Jiang, Y., Zhang, T., Ho, D., Bai, Y., Liu, C. K., Levine, S., et al. (2021). Simgan: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning. arXiv 2101.06005 M Körber, J Lange, S Rediske, S Steinmann, R Glück, Comparing Popular Simulation Environments in the Scope of Robotics and Reinforcement Learning. arXiv 2103. 4616Körber, M., Lange, J., Rediske, S., Steinmann, S., and Glück, R. (2021). Comparing Popular Simulation Environments in the Scope of Robotics and Reinforcement Learning. arXiv 2103.04616 Sim2real Predictivity: Does Evaluation in Simulation Predict Real-World Performance? IEEE Robot. A Kadian, J Truong, A Gokaslan, A Clegg, E Wijmans, S Lee, 10.1109/LRA.2020.3013848Autom. Lett. 5Kadian, A., Truong, J., Gokaslan, A., Clegg, A., Wijmans, E., Lee, S., et al. (2020). 
Sim2real Predictivity: Does Evaluation in Simulation Predict Real-World Performance? IEEE Robot. Autom. Lett. 5, 6670-6677. doi:10.1109/LRA. 2020.3013848 Methods of Reducing Sample Size in Monte Carlo Computations. H Kahn, A W Marshall, 10.1287/opre.1.5.263Or. 1Kahn, H., and Marshall, A. W. (1953). Methods of Reducing Sample Size in Monte Carlo Computations. Or 1, 263-278. doi:10.1287/opre.1.5.263 VancouverBritish Columbia, Canada. S M Kakade, A Natural Policy GradientConference on Neural Information Processing Systems (NIPS) December. Kakade, S. M. (2001). VancouverBritish Columbia, Canada, 1531-1538.A Natural Policy GradientConference on Neural Information Processing Systems (NIPS) December 3-8. Sim2real Transfer for Reinforcement Learning without Dynamics Randomization. M Kaspar, J D Munoz Osorio, J Bock, 10.1109/IROS45743.2020.9341260International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV, USAIEEEKaspar, M., Munoz Osorio, J. D., and Bock, J. (2020). "Sim2real Transfer for Reinforcement Learning without Dynamics Randomization," in International Conference on Intelligent Robots and Systems (IROS), October 24 -January 24 (Las Vegas, NV, USA: IEEE), 4383-4388. doi:10.1109/IROS45743.2020.9341260 Existence and Stability of Local Excitations in Homogeneous Neural fields. K Kishimoto, Amari , S , 10.1007/bf00275151J. Math. Biol. 7Kishimoto, K., and Amari, S. (1979). Existence and Stability of Local Excitations in Homogeneous Neural fields. J. Math. Biol. 7, 303-318. doi:10.1007/bf00275151 A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning. P Klink, H Abdulsamad, B Belousov, C D&apos;eramo, J Peters, J Pajarinen, Klink, P., Abdulsamad, H., Belousov, B., D'Eramo, C., Peters, J., and Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, 13176. arXiv 2102. Self-paced Contextual Reinforcement Learning. 
P Klink, H Abdulsamad, B Belousov, J Peters, PMLRof Proceedings of Machine Learning Research. Osaka, Japan100Conference on Robot Learning (CoRL)Klink, P., Abdulsamad, H., Belousov, B., and Peters, J. (2019). "Self-paced Contextual Reinforcement Learning," in Conference on Robot Learning (CoRL), October 30 -November 1 (Osaka, Japan: PMLR), 513-529. of Proceedings of Machine Learning Research.100. Reinforcement Learning in Robotics: A Survey. J Kober, J A Bagnell, J Peters, 10.1177/0278364913495721Int. J. Robotics Res. 32Kober, J., Bagnell, J. A., and Peters, J. (2013). Reinforcement Learning in Robotics: A Survey. Int. J. Robotics Res. 32, 1238-1274. doi:10.1177/0278364913495721 Continuous Global Optimization in Multiview 3d Reconstruction. K Kolev, M Klodt, T Brox, D Cremers, 10.1007/s11263-009-0233-1Int. J. Comput. Vis. 84Kolev, K., Klodt, M., Brox, T., and Cremers, D. (2009). Continuous Global Optimization in Multiview 3d Reconstruction. Int. J. Comput. Vis. 84, 80-96. doi:10.1007/s11263-009-0233-1 Crossing the Reality gap in Evolutionary Robotics by Promoting Transferable Controllers. S Koos, J.-B Mouret, S Doncieux, 10.1145/1830483.1830505Genetic and Evolutionary Computation Conference (GECCO). Portland, Oregon, USAACMKoos, S., Mouret, J.-B., and Doncieux, S. (2010). "Crossing the Reality gap in Evolutionary Robotics by Promoting Transferable Controllers," in Genetic and Evolutionary Computation Conference (GECCO), July 7-11 (Portland, Oregon, USA: ACM), 119-126. doi:10.1145/1830483.1830505 The Transferability Approach: Crossing the Reality gap in Evolutionary Robotics. S Koos, J.-B Mouret, S Doncieux, 10.1109/TEVC.2012.2185849IEEE Trans. Evol. Computat. 17Koos, S., Mouret, J.-B., and Doncieux, S. (2013). The Transferability Approach: Crossing the Reality gap in Evolutionary Robotics. IEEE Trans. Evol. Computat. 17, 122-145. doi:10.1109/TEVC.2012.2185849 Imagenet Classification with Deep Convolutional Neural Networks. 
A Krizhevsky, I Sutskever, G E Hinton, Conference on Neural Information Processing Systems (NIPS). Lake TahoeKrizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). "Imagenet Classification with Deep Convolutional Neural Networks," in Conference on Neural Information Processing Systems (NIPS), 1106-1114.Lake Tahoe, Nev. United States December3-6 The DARPA Robotics challenge Finals: Results and Perspectives. E Krotkov, D Hackett, L Jackel, M Perschbacher, J Pippine, J Strauss, 10.1002/rob.21683J. Field Robotics. 34Krotkov, E., Hackett, D., Jackel, L., Perschbacher, M., Pippine, J., Strauss, J., et al. (2017). The DARPA Robotics challenge Finals: Results and Perspectives. J. Field Robotics 34, 229-240. doi:10.1002/rob.21683 RMA: Rapid Motor Adaptation for Legged Robots. A Kumar, Z Fu, D Pathak, Malik , J , 10.15607/RSS.2021.XVII.011Robotics: Science and Systems (RSS), Virtual Event. Kumar, A., Fu, Z., Pathak, D., and Malik, J. (2021). "RMA: Rapid Motor Adaptation for Legged Robots," in Robotics: Science and Systems (RSS), Virtual Event, July 12-16. doi:10.15607/RSS.2021.XVII.011 Self-paced Learning for Latent Variable Models. M P Kumar, B Packer, D Koller, Conference on Neural Information Processing Systems (NIPS). Vancouver, British Columbia, CanadaKumar, M. P., Packer, B., and Koller, D. (2010). "Self-paced Learning for Latent Variable Models," in Conference on Neural Information Processing Systems (NIPS), 6-9 December (Vancouver, British Columbia, Canada, 1189-1197. Adversarial Examples in the Physical World. A Kurakin, I J Goodfellow, S Bengio, International Conference on Learning Representations (ICLR) Toulon. France. Workshop Track (OpenReview.netKurakin, A., Goodfellow, I. J., and Bengio, S. (2017). "Adversarial Examples in the Physical World," in International Conference on Learning Representations (ICLR) Toulon, April 24-26 (France. Workshop Track (OpenReview.net). Adaptive Control: Algorithms, Analysis and Applications. 2 edn. 
I D Landau, R Lozano, M Saad, A Karimi, Springer Science & Business MediaLandau, I. D., Lozano, R., M'Saad, M., and Karimi, A. (2011). Adaptive Control: Algorithms, Analysis and Applications. 2 edn. Springer Science & Business Media. Variational Policy Search via Trajectory Optimization. S Levine, V Koltun, Conference on Neural Information Processing Systems (NIPS). Lake Tahoe, Nevada, USALevine, S., and Koltun, V. (2013). "Variational Policy Search via Trajectory Optimization," in Conference on Neural Information Processing Systems (NIPS), December 5-8 (Lake Tahoe, Nevada, USA, 207-215. Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. S Levine, P Pastor, A Krizhevsky, J Ibarz, D Quillen, 10.1177/0278364917710318doi:10.1177/ 0278364917710318Int. J. Robotics Res. 37Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J., and Quillen, D. (2018). Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large- Scale Data Collection. Int. J. Robotics Res. 37, 421-436. doi:10.1177/ 0278364917710318 Frontiers in Robotics and AI | www.frontiersin.org. 9799893Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893 Continuous Control with Deep Reinforcement Learning. T P Lillicrap, J J Hunt, A Pritzel, N Heess, T Erez, Y Tassa, International Conference on Learning Representations (ICLR). San Juan, Puerto RicoConference Track (OpenReview.netLillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., et al. (2016). "Continuous Control with Deep Reinforcement Learning," in International Conference on Learning Representations (ICLR), May 2-4 (San Juan, Puerto Rico. Conference Track (OpenReview.net). Stein Variational Policy Gradient. Y Liu, P Ramachandran, Q Liu, J Peng, Association for Uncertainty in Artificial Intelligence (UAI). Sydney, AustraliaLiu, Y., Ramachandran, P., Liu, Q., and Peng, J. (2017). 
"Stein Variational Policy Gradient," in Association for Uncertainty in Artificial Intelligence (UAI) (Sydney, Australia, August 11-15. Reinforcement Learning for Non-prehensile Manipulation: Transfer from Simulation to Physical System. K Lowrey, S Kolev, J Dao, A Rajeswaran, E Todorov, 10.1109/SIMPAR.2018.8376268Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR). Brisbane, AustraliaLowrey, K., Kolev, S., Dao, J., Rajeswaran, A., and Todorov, E. (2018). "Reinforcement Learning for Non-prehensile Manipulation: Transfer from Simulation to Physical System," in Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), May 16-19 (Brisbane, Australia, 35-42. doi:10.1109/SIMPAR.2018.8376268 Flexible Statistical Inference for Mechanistic Models of Neural Dynamics. J Lueckmann, P J Gonçalves, G Bassetto, K Öcal, M Nonnenmacher, J H Macke, Conference on Neural Information Processing Systems. Long Beach, CA, USALueckmann, J., Gonçalves, P. J., Bassetto, G., Öcal, K., Nonnenmacher, M., and Macke, J. H. (2017). "Flexible Statistical Inference for Mechanistic Models of Neural Dynamics," in Conference on Neural Information Processing Systems, December 4-9 (Long Beach, CA, USA: NIPS), 1289-1299. Adaptive Movement Sequences and Predictive Decisions Based on Hierarchical Dynamical Systems. T Luksch, M Gienger, M Mühlig, Yoshiike , T , 10.1109/iros.2012.6385651International Conference on Intelligent Robots and Systems (IROS). Vilamoura, Algarve, PortugalIEEELuksch, T., Gienger, M., Mühlig, M., and Yoshiike, T. (2012). "Adaptive Movement Sequences and Predictive Decisions Based on Hierarchical Dynamical Systems," in International Conference on Intelligent Robots and Systems (IROS), October 7-12 (Vilamoura, Algarve, Portugal: IEEE), 2082-2088. doi:10.1109/iros.2012.6385651 M Lutter, S Mannor, J Peters, D Fox, A Garg, Robust Value Iteration for Continuous Control Tasks. arXiv 2105. 12189Lutter, M., Mannor, S., Peters, J., Fox, D., and Garg, A. (2021a). 
Robust Value Iteration for Continuous Control Tasks. arXiv 2105.12189. Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning. M Lutter, C Ritter, J Peters, International Conference on Learning Representations (ICLR). New Orleans, LA, USAConference Track (OpenReview.netLutter, M., Ritter, C., and Peters, J. (2019). "Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning," in International Conference on Learning Representations (ICLR), May 6-9 (New Orleans, LA, USA. Conference Track (OpenReview.net). Differentiable Physics Models for Real-World Offline Model-Based Reinforcement Learning. M Lutter, J Silberbauer, J Watson, J Peters, 1734Lutter, M., Silberbauer, J., Watson, J., and Peters, J. (2021b). Differentiable Physics Models for Real-World Offline Model-Based Reinforcement Learning. arXiv 2011.01734 Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs. W.-K Mak, D P Morton, R K Wood, 10.1016/S0167-6377(98)00054-6Operations Res. Lett. 24Mak, W.-K., Morton, D. P., and Wood, R. K. (1999). Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs. Operations Res. Lett. 24, 47-56. doi:10.1016/S0167-6377(98)00054-6 A Mandlekar, Y Zhu, A Garg, L Fei-Fei, S Savarese, 10.1109/IROS.2017.82062458206245Adversarially Robust Policy Learning: Active Construction of Physically-Plausible PerturbationsInternational Conference on Intelligent Robots and Systems (IROS). Vancouver, BC: CanadaMandlekar, A., Zhu, Y., Garg, A., Fei-Fei, L., and Savarese, S. (2017). Vancouver, BC: Canada. September 24-28. 3932-3939. doi:10.1109/IROS.2017. 8206245Adversarially Robust Policy Learning: Active Construction of Physically-Plausible PerturbationsInternational Conference on Intelligent Robots and Systems (IROS) Virtual vs. Real: Trading off Simulations and Physical Experiments in Reinforcement Learning with Bayesian Optimization. 
A Marco, F Berkenkamp, P Hennig, A P Schoellig, A Krause, S Schaal, 10.1109/icra.2017.7989186International Conference on Robotics and Automation (ICRA). Marina Bay Sands; SingaporeMarco, A., Berkenkamp, F., Hennig, P., Schoellig, A. P., Krause, A., Schaal, S., et al. (2017). "Virtual vs. Real: Trading off Simulations and Physical Experiments in Reinforcement Learning with Bayesian Optimization," in International Conference on Robotics and Automation (ICRA), May 29 -Jun 3 (Marina Bay Sands, Singapore. doi:10.1109/icra.2017.7989186 Markov Chain Monte Carlo without Likelihoods. P Marjoram, J Molitor, V Plagnol, S Tavare, 10.1073/pnas.0306899100doi:10. 1073/pnas.0306899100Proc. Natl. Acad. Sci. 100Marjoram, P., Molitor, J., Plagnol, V., and Tavare, S. (2003). Markov Chain Monte Carlo without Likelihoods. Proc. Natl. Acad. Sci. 100, 15324-15328. doi:10. 1073/pnas.0306899100 Online Interactive Perception of Articulated Objects with Multi-Level Recursive Estimation Based on Taskspecific Priors. Martin Martin, R Brock, O , 10.1109/IROS.2014.6942902doi:10.1109/ IROS.2014.6942902International Conference on Intelligent Robots and Systems (IROS). Chicago, IL, USAIEEEMartin Martin, R., and Brock, O. (2014). "Online Interactive Perception of Articulated Objects with Multi-Level Recursive Estimation Based on Task- specific Priors," in International Conference on Intelligent Robots and Systems (IROS), September 14-18 (Chicago, IL, USA: IEEE), 2494-2501. doi:10.1109/ IROS.2014.6942902 Sim-to-real Reinforcement Learning for Deformable Object Manipulation. J Matas, S James, A J Davison, PMLRof Proceedings of Machine Learning Research. Zürich, Switzerland87Conference on Robot Learning (CoRL)Matas, J., James, S., and Davison, A. J. (2018). "Sim-to-real Reinforcement Learning for Deformable Object Manipulation," in Conference on Robot Learning (CoRL), October 29-31 (Zürich, Switzerland: PMLR), 734-743. 
of Proceedings of Machine Learning Research.87 Inferring the Material Properties of Granular media for Robotic Tasks. C Matl, Y Narang, R Bajcsy, F Ramos, D Fox, 10.1109/ICRA40945.2020.9197063International Conference on Robotics and Automation (ICRA). Paris, FranceMay 31 -AugustIEEE31Matl, C., Narang, Y., Bajcsy, R., Ramos, F., and Fox, D. (2020). "Inferring the Material Properties of Granular media for Robotic Tasks," in International Conference on Robotics and Automation (ICRA) (Paris, FranceMay 31 - August 31: IEEE), 2770-2777. doi:10.1109/ICRA40945.2020.9197063 Active Domain Randomization. B Mehta, M Diaz, F Golemo, C J Pal, L Paull, Research.100Conference on Robot Learning (CoRL). Osaka, Japanof Proceedings of Machine LearningMehta, B., Diaz, M., Golemo, F., Pal, C. J., and Paull, L. (2019). "Active Domain Randomization," in Conference on Robot Learning (CoRL), October 30 - November 1 (Osaka, Japan: PMLR), 1162-1176. of Proceedings of Machine Learning Research.100 A User's Guide to Calibrating Robotics Simulators. B Mehta, A Handa, D Fox, F Ramos, Proceedings of Machine Learning Research. Machine Learning ResearchPMLRConference on Robot Learning (CoRL)Mehta, B., Handa, A., Fox, D., and Ramos, F. (2020). "A User's Guide to Calibrating Robotics Simulators," in Conference on Robot Learning (CoRL), Virtual Event, November 16 -18 (PMLR). Proceedings of Machine Learning Research. The Monte Carlo Method. N Metropolis, S Ulam, 10.1080/01621459.1949.10483310J. Am. Stat. Assoc. 44Metropolis, N., and Ulam, S. (1949). The Monte Carlo Method. J. Am. Stat. Assoc. 44, 335-341. doi:10.1080/01621459.1949.10483310 Human-level Control through Deep Reinforcement Learning. V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, 10.1038/nature14236Nature. 518Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level Control through Deep Reinforcement Learning. Nature 518, 529-533. 
doi:10.1038/nature14236 Sim-to-(multi)-real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors. A Molchanov, T Chen, W Honig, J A Preiss, N Ayanian, G S Sukhatme, 10.1109/IROS40897.2019.8967695doi:10. 1109/IROS40897.2019.8967695International Conference on Intelligent Robots and Systems (IROS). Macau, SAR, ChinaIEEEMolchanov, A., Chen, T., Honig, W., Preiss, J. A., Ayanian, N., and Sukhatme, G. S. (2019). "Sim-to-(multi)-real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors," in International Conference on Intelligent Robots and Systems (IROS), November 3-8 (Macau, SAR, China: IEEE), 59-66. doi:10. 1109/IROS40897.2019.8967695 Ensemble-cio: Full-Body Dynamic Motion Planning that Transfers to Physical Humanoids. I Mordatch, K Lowrey, E Todorov, 10.1109/IROS.2015.7354126doi:10.1109/ IROS.2015.7354126International Conference on Intelligent Robots and Systems (IROS). Hamburg, GermanyMordatch, I., Lowrey, K., and Todorov, E. (2015). "Ensemble-cio: Full-Body Dynamic Motion Planning that Transfers to Physical Humanoids," in International Conference on Intelligent Robots and Systems (IROS), September 28 -October 2 (Hamburg, Germany, 5307-5314. doi:10.1109/ IROS.2015.7354126 Learning to Plan Hierarchically from Curriculum. P Morere, L Ott, F Ramos, 10.1109/LRA.2019.2920285IEEE Robot. Autom. Lett. 4Morere, P., Ott, L., and Ramos, F. (2019). Learning to Plan Hierarchically from Curriculum. IEEE Robot. Autom. Lett. 4, 2815-2822. doi:10.1109/LRA.2019. 2920285 Learning Domain Randomization Distributions for Training Robust Locomotion Policies. M Mozian, J Camilo Gamboa Higuera, D Meger, G Dudek, 10.1109/IROS45743.2020.9341019International Conference on Intelligent Robots and Systems (IROS) Las Vegas. NV, USAIEEEMozian, M., Camilo Gamboa Higuera, J., Meger, D., and Dudek, G. (2020). 
"Learning Domain Randomization Distributions for Training Robust Locomotion Policies," in International Conference on Intelligent Robots and Systems (IROS) Las Vegas, October 24 -January 24 (NV, USA: IEEE), 6112-6117. doi:10.1109/IROS45743.2020.9341019 Data-efficient Domain Randomization with Bayesian Optimization. F Muratore, C Eilers, M Gienger, J Peters, 10.1109/LRA.2021.3052391IEEE Robot. Autom. Lett. 6Muratore, F., Eilers, C., Gienger, M., and Peters, J. (2021a). Data-efficient Domain Randomization with Bayesian Optimization. IEEE Robot. Autom. Lett. 6, 911-918. doi:10.1109/LRA.2021.3052391 Assessing Transferability from Simulation to Reality for Reinforcement Learning. F Muratore, M Gienger, J Peters, 10.1109/TPAMI.2019.2952353IEEE Trans. Pattern Anal. Mach. Intell. 43Muratore, F., Gienger, M., and Peters, J. (2021b). Assessing Transferability from Simulation to Reality for Reinforcement Learning. IEEE Trans. Pattern Anal. Mach. Intell. 43, 1172-1183. doi:10.1109/TPAMI.2019.2952353 Neural Posterior Domain Randomization. F Muratore, T Gruner, F Wiese, B B M Gienger, J Peters, Conference on Robot Learning (CoRL). London, EnglandVirtual EventMuratore, F., Gruner, T., Wiese, F., Gienger, B. B. M., and Peters, J. (2021c). "Neural Posterior Domain Randomization," in Conference on Robot Learning (CoRL), Virtual Event, November 8-11 (London, England. Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment. F Muratore, F Treede, M Gienger, J Peters, of Proceedings of Machine Learning Research. Zürich, SwitzerlandOctoberPMLR87Conference on Robot Learning (CoRL)Muratore, F., Treede, F., Gienger, M., and Peters, J. (2018). "Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment," in Conference on Robot Learning (CoRL) (Zürich, SwitzerlandOctober 29-31: PMLR), 700-713. 
of Proceedings of Machine Learning Research.87 Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning. A Nagabandi, I Clavera, S Liu, R S Fearing, P Abbeel, S Levine, International Conference on Learning Representations (ICLR). New Orleans; LA, USANagabandi, A., Clavera, I., Liu, S., Fearing, R. S., Abbeel, P., Levine, S., et al. (2019). "Learning to Adapt in Dynamic, Real-World Environments through Meta- Reinforcement Learning," in International Conference on Learning Representations (ICLR) New Orleans, May 6-9 (LA, USA. OpenReview.net). PEGASUS: a Policy Search Method for Large Mdps and Pomdps. A Y Ng, Jordan , M I , UAI. Stanford, California, USAMorgan KaufmannNg, A. Y., and Jordan, M. I. (2000). "PEGASUS: a Policy Search Method for Large Mdps and Pomdps," in UAI, June 30 -July 3 (Stanford, California, USA: Morgan Kaufmann), 406-415. Solving Rubik's Cube with a Robot Hand. I Openaiakkaya, M Andrychowicz, M Chociej, M Litwin, B Mcgrew, arXiv 1910.07113OpenAIAkkaya, I., Andrychowicz, M., Chociej, M., Litwin, M., McGrew, B., et al. (2019). Solving Rubik's Cube with a Robot Hand. arXiv 1910.07113 A Survey on Transfer Learning. S J Pan, Yang , Q , 10.1109/TKDE.2009.191IEEE Trans. Knowl. Data Eng. 22Pan, S. J., and Yang, Q. (2010). A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 22, 1345-1359. doi:10.1109/TKDE.2009.191 Fast ϵ-free Inference of Simulation Models with Bayesian Conditional Density Estimation. G Papamakarios, Murray , I , Conference on Neural Information Processing Systems (NIPS). Barcelona, SpainPapamakarios, G., and Murray, I. (2016). "Fast ϵ-free Inference of Simulation Models with Bayesian Conditional Density Estimation," in Conference on Neural Information Processing Systems (NIPS), December 5-10 (Barcelona, Spain, 1028-1036. Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows. 
G Papamakarios, D C Sterratt, Murray , I , PMLRInternational Conference on Artificial Intelligence and Statistics (AISTATS). Naha, Okinawa, Japan89of Proceedings of Machine Learning ResearchPapamakarios, G., Sterratt, D. C., and Murray, I. (2019). "Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows," in International Conference on Artificial Intelligence and Statistics (AISTATS), April 16-18 (Naha, Okinawa, Japan: PMLR), 837-848. of Proceedings of Machine Learning Research.89 Actor-mimic: Deep Multitask and Transfer Reinforcement Learning. E Parisotto, L J Ba, R Salakhutdinov, International Conference on Learning Representations (ICLR). San Juan; Puerto RicoConference TrackParisotto, E., Ba, L. J., and Salakhutdinov, R. (2016). "Actor-mimic: Deep Multitask and Transfer Reinforcement Learning," in International Conference on Learning Representations (ICLR) San Juan, May 2-4 (Puerto Rico. Conference Track. Fingerprint Policy Optimisation for Robust Reinforcement Learning. S Paul, M A Osborne, S Whiteson, International Conference on Machine Learning (ICML). Long Beach California, USA; June97Paul, S., Osborne, M. A., and Whiteson, S. (2019). Fingerprint Policy Optimisation for Robust Reinforcement Learning. In International Conference on Machine Learning (ICML), Long Beach California, USA, 9-15 June (PMLR), vol. 97 Sim-to-real Transfer of Robotic Control with Dynamics Randomization. X B Peng, M Andrychowicz, W Zaremba, Abbeel , P , 10.1109/ICRA.2018.8460528International Conference on Robotics and Automation (ICRA). Brisbane, AustraliaPeng, X. B., Andrychowicz, M., Zaremba, W., and Abbeel, P. (2018). "Sim-to-real Transfer of Robotic Control with Dynamics Randomization," in International Conference on Robotics and Automation (ICRA), May 21-25 (Brisbane, Australia, 1-8. doi:10.1109/ICRA.2018.8460528 Improving Noise. K Perlin, 10.1145/566654.566636doi:10.1145/ 566654.566636ACM Trans. Graph. 21Perlin, K. (2002). Improving Noise. 
ACM Trans. Graph. 21, 681-682. doi:10.1145/ 566654.566636 Frontiers in Robotics and AI | www.frontiersin.org. 9799893Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893 Relative Entropy Policy Search. J Peters, K Mülling, Altun , Y , AAAI Conference on Artificial Intelligence. Atlanta, Georgia, USAPeters, J., Mülling, K., and Altun, Y. (2010). "Relative Entropy Policy Search," in AAAI Conference on Artificial Intelligence, July 11-15 (Atlanta, Georgia, USA. Asymmetric Actor Critic for Image-Based Robot Learning. L Pinto, M Andrychowicz, P Welinder, W Zaremba, Abbeel , P , 10.15607/RSS.2018.XIV.008Robotics: Science and Systems (RSS). Pittsburgh, Pennsylvania, USAPinto, L., Andrychowicz, M., Welinder, P., Zaremba, W., and Abbeel, P. (2018). "Asymmetric Actor Critic for Image-Based Robot Learning," in Robotics: Science and Systems (RSS), June 26-30 (Pittsburgh, Pennsylvania, USA. doi:10.15607/RSS.2018.XIV.008 Robust Adversarial Reinforcement Learning. L Pinto, J Davidson, R Sukthankar, A Gupta, PMLRInternational Conference on Machine Learning (ICML). Sydney, NSW, AustraliaPinto, L., Davidson, J., Sukthankar, R., and Gupta, A. (2017). "Robust Adversarial Reinforcement Learning," in International Conference on Machine Learning (ICML), August 6-11 (Sydney, NSW, Australia: PMLR), 2817-2826. Sim-to-real Quadrotor landing via Sequential Deep Q-Networks and Domain Randomization. R Polvara, M Patacchiola, M Hanheide, G Neumann, 10.3390/robotics9010008Robotics. 9Polvara, R., Patacchiola, M., Hanheide, M., and Neumann, G. (2020). Sim-to-real Quadrotor landing via Sequential Deep Q-Networks and Domain Randomization. Robotics 9, 8. doi:10.3390/robotics9010008 Online Bayessim for Combined Simulator Parameter Inference and Policy Improvement. R Possas, L Barcelos, R Oliveira, D Fox, F Ramos, 10.1109/IROS45743.2020.9341401International Conference on Intelligent Robots and Systems (IROS) Las Vegas. 
NV, USAIEEEPossas, R., Barcelos, L., Oliveira, R., Fox, D., and Ramos, F. (2020). "Online Bayessim for Combined Simulator Parameter Inference and Policy Improvement," in International Conference on Intelligent Robots and Systems (IROS) Las Vegas, October 24 -January 24 (NV, USA: IEEE), 5445-5452. doi:10.1109/IROS45743.2020.9341401 Language Models Are Unsupervised Multitask Learners. A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever, Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language Models Are Unsupervised Multitask Learners. Epopt: Learning Robust Neural Network Policies Using Model Ensembles. A Rajeswaran, S Ghotra, B Ravindran, S Levine, International Conference on Learning Representations (ICLR). ToulonFrance. Conference Track (OpenReview.netRajeswaran, A., Ghotra, S., Ravindran, B., and Levine, S. (2017). "Epopt: Learning Robust Neural Network Policies Using Model Ensembles," in International Conference on Learning Representations (ICLR), Toulon, April 24-26 (France. Conference Track (OpenReview.net). A Game Theoretic Framework for Model Based Reinforcement Learning. A Rajeswaran, I Mordatch, V Kumar, International Conference on Machine Learning (ICML). 119of Proceedings of Machine Learning ResearchRajeswaran, A., Mordatch, I., and Kumar, V. (2020). "A Game Theoretic Framework for Model Based Reinforcement Learning," in International Conference on Machine Learning (ICML), Virtual Event, 13-18 July (PMLR), 7953-7963. of Proceedings of Machine Learning Research.119 Bayessim: Adaptive Domain Randomization via Probabilistic Inference for Robotics Simulators. F Ramos, R Possas, D Fox, 10.15607/RSS.2019.XV.029Robotics: Science and Systems (RSS). Germany: Freiburg im BreisgauRamos, F., Possas, R., and Fox, D. (2019). "Bayessim: Adaptive Domain Randomization via Probabilistic Inference for Robotics Simulators," in Robotics: Science and Systems (RSS), June 22-26 (Germany: Freiburg im Breisgau). 
doi:10.15607/RSS.2019.XV.029 C E Rasmussen, C K I Williams, Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT PressRasmussen, C. E., and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press. K Rawlik, M Toussaint, S Vijayakumar, 10.15607/RSS.2012.VIII.045Stochastic Optimal Control and Reinforcement Learning by Approximate InferenceRobotics: Science and SystemsJuly. Sydney, NSW, Australia9Rawlik, K., Toussaint, M., and Vijayakumar, S. (2012). Sydney, NSW, Australia: RSS. doi:10.15607/RSS.2012.VIII.045On Stochastic Optimal Control and Reinforcement Learning by Approximate InferenceRobotics: Science and SystemsJuly 9-13 Learning to Simulate. N Ruiz, S Schulter, M Chandraker, International Conference on Learning Representations (ICLR). New Orleans, LA, USAOpenReview.netRuiz, N., Schulter, S., and Chandraker, M. (2019). "Learning to Simulate," in International Conference on Learning Representations (ICLR), May 6-9 (New Orleans, LA, USA. (OpenReview.net). . O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). Imagenet Large Scale Visual Recognition challenge. 10.1007/s11263-015-0816-yInt. J. Comput. Vis. 115Imagenet Large Scale Visual Recognition challenge. Int. J. Comput. Vis. 115, 211-252. doi:10.1007/s11263-015-0816-y Policy Distillation. A A Rusu, S G Colmenarejo, Ç Gülçehre, G Desjardins, J Kirkpatrick, R Pascanu, Conference Track.International Conference on Learning Representations (ICLR)May. San Juan, Puerto RicoRusu, A. A., Colmenarejo, S. G., Gülçehre, Ç., Desjardins, G., Kirkpatrick, J., Pascanu, R., et al. (2016a). "Policy Distillation," in (San Juan, Puerto Rico. Conference Track.International Conference on Learning Representations (ICLR)May 2-4 A A Rusu, N C Rabinowitz, G Desjardins, H Soyer, J Kirkpatrick, K Kavukcuoglu, 1606.04671Progressive Neural Networks. 
Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., et al. (2016b). Progressive Neural Networks. arXiv 1606.04671 Sim-to-real Robot Learning from Pixels with Progressive Nets. A A Rusu, M Vecerik, T Rothörl, N Heess, R Pascanu, R Hadsell, Research.78Conference on Robot Learning (CoRL). Mountain View; California, USAof Proceedings of Machine LearningRusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., and Hadsell, R. (2017). "Sim-to-real Robot Learning from Pixels with Progressive Nets," in Conference on Robot Learning (CoRL), Mountain View, November 13-15 (California, USA: PMLR), 262-270. of Proceedings of Machine Learning Research.78 CAD2RL: Real Single-Image Flight without a Single Real Image. F Sadeghi, S Levine, 10.15607/RSS.2017.XIII.034Robotics: Science and Systems (RSS). Cambridge, Massachusetts, USASadeghi, F., and Levine, S. (2017). "CAD2RL: Real Single-Image Flight without a Single Real Image," in Robotics: Science and Systems (RSS), July 12-16 (Cambridge, Massachusetts, USA. doi:10.15607/RSS.2017.XIII.034 Meta-learning with Memory-Augmented Neural Networks. A Santoro, S Bartunov, M Botvinick, D Wierstra, T P Lillicrap, International Conference on Machine Learning (ICML). New York City, NY, USA48Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and Lillicrap, T. P. (2016). "Meta-learning with Memory-Augmented Neural Networks," in International Conference on Machine Learning (ICML), June 19-24 (New York City, NY, USA: JMLR.org), 1842-1850.48 . J Schulman, F Wolski, P Dhariwal, A Radford, O Klimov, Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal Policy Optimization Algorithms. arXiv 1707. 6347Proximal Policy Optimization Algorithms. arXiv 1707.06347. Blind Bipedal Stair Traversal via Sim-To-Real Reinforcement Learning. J Siekmann, K Green, J Warila, A Fern, J Hurst, 10.15607/RSS.2021.XVII.061Robotics: Science and Systems (RSS), Virtual Event. 
Siekmann, J., Green, K., Warila, J., Fern, A., and Hurst, J. (2021). "Blind Bipedal Stair Traversal via Sim-To-Real Reinforcement Learning," in Robotics: Science and Systems (RSS), Virtual Event, July 12-16. doi:10.15607/RSS.2021.XVII.061 Mastering the Game of Go with Deep Neural Networks and Tree Search. D Silver, A Huang, C J Maddison, A Guez, L Sifre, G Van Den Driessche, 10.1038/nature16961Nature. 529Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., et al. (2016). Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature 529, 484-489. doi:10.1038/nature16961 Approximate Bayesian Computation. M Sunnåker, A G Busetto, E Numminen, J Corander, M Foll, C Dessimoz, 10.1371/journal.pcbi.1002803Plos Comput. Biol. 91002803Sunnåker, M., Busetto, A. G., Numminen, E., Corander, J., Foll, M., and Dessimoz, C. (2013). Approximate Bayesian Computation. Plos Comput. Biol. 9, e1002803. doi:10.1371/journal.pcbi.1002803 Encoding Physical Constraints in Differentiable newton-euler Algorithm. G Sutanto, A S Wang, Y Lin, M Mukadam, G S Sukhatme, A Rai, Research.120of Proceedings of Machine Learning. Berkeley, CA, USAL4DC, Virtual EventSutanto, G., Wang, A. S., Lin, Y., Mukadam, M., Sukhatme, G. S., Rai, A., et al. (2020). "Encoding Physical Constraints in Differentiable newton-euler Algorithm," in L4DC, Virtual Event, 11-12 June (Berkeley, CA, USA: PMLR), 804-813. of Proceedings of Machine Learning Research.120 R S Sutton, 10.1145/122344.122377Dyna, an Integrated Architecture for Learning, Planning, and Reacting. SIGART Bull. 2Sutton, R. S. (1991). Dyna, an Integrated Architecture for Learning, Planning, and Reacting. SIGART Bull. 2, 160-163. doi:10.1145/122344.122377 Between Mdps and Semi-mdps: A Framework for Temporal Abstraction in Reinforcement Learning. R S Sutton, D Precup, S Singh, 10.1016/S0004-3702(99)00052-1Artif. Intelligence. 112Sutton, R. S., Precup, D., and Singh, S. (1999). 
Between Mdps and Semi-mdps: A Framework for Temporal Abstraction in Reinforcement Learning. Artif. Intelligence 112, 181-211. doi:10.1016/S0004-3702(99)00052-1 Intriguing Properties of Neural Networks. C Szegedy, W Zaremba, I Sutskever, J Bruna, D Erhan, I J Goodfellow, Conference Track.International Conference on Learning Representations (ICLR)April. Banff, CanadaSzegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. J., et al. (2014). "Intriguing Properties of Neural Networks," in (Banff, Canada. Conference Track.International Conference on Learning Representations (ICLR)April 14-16 Sim-toreal: Learning Agile Locomotion for Quadruped Robots. J Tan, T Zhang, E Coumans, A Iscen, Y Bai, D Hafner, 10.15607/RSS.2018.XIV.010doi:10.15607/ RSS.2018.XIV.010Robotics: Science and Systems (RSS). Pittsburgh, Pennsylvania, USATan, J., Zhang, T., Coumans, E., Iscen, A., Bai, Y., Hafner, D., et al. (2018). "Sim-to- real: Learning Agile Locomotion for Quadruped Robots," in Robotics: Science and Systems (RSS), June 26-30 (Pittsburgh, Pennsylvania, USA. doi:10.15607/ RSS.2018.XIV.010 Distral: Robust Multitask Reinforcement Learning. Y W Teh, V Bapst, W M Czarnecki, J Quan, J Kirkpatrick, R Hadsell, Conference on Neural Information Processing Systems (NIPS). Long Beach, CA, USATeh, Y. W., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., et al. (2017). "Distral: Robust Multitask Reinforcement Learning," in Conference on Neural Information Processing Systems (NIPS) (Long Beach, CA, USA, 4496-4506. Bayesian Robot System Identification with Input and Output Noise. J.-A Ting, A Souza, S Schaal, 10.1016/j.neunet.2010.08.011Neural Networks. 24Ting, J.-A., D'Souza, A., and Schaal, S. (2011). Bayesian Robot System Identification with Input and Output Noise. Neural Networks 24, 99-108. doi:10.1016/j.neunet.2010.08.011 A Bayesian Approach to Nonlinear Parameter Identification for Rigid Body Dynamics. 
J Ting, M Mistry, J Peters, S Schaal, J Nakanishi, 10.15607/RSS.2006.II.032Robotics: Science and Systems (RSS). Philadelphia, Pennsylvania, USAThe MIT PressTing, J., Mistry, M., Peters, J., Schaal, S., and Nakanishi, J. (2006). "A Bayesian Approach to Nonlinear Parameter Identification for Rigid Body Dynamics," in Robotics: Science and Systems (RSS), August 16-19 (Philadelphia, Pennsylvania, USA: The MIT Press). doi:10.15607/RSS. 2006.II.032 Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World. J Tobin, R Fong, A Ray, J Schneider, W Zaremba, Abbeel , P , 10.1109/IROS.2017.8202133International Conference on Intelligent Robots and Systems (IROS). Vancouver, BC: CanadaTobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World," in International Conference on Intelligent Robots and Systems (IROS), September 24-28 (Vancouver, BC: Canada), 23-30. doi:10.1109/IROS.2017.8202133 A Van Den Oord, Y Li, O Vinyals, arXiv 1807.03748Representation Learning with Contrastive Predictive Coding. van den Oord, A., Li, Y., and Vinyals, O. (2018). Representation Learning with Contrastive Predictive Coding. arXiv 1807.03748 Distributionally Robust Control of Constrained Stochastic Systems. B Van Parys, D Kuhn, P Goulart, M Morari, 10.1109/TAC.2015.2444134IEEE Trans. Automat. Contr. 61Van Parys, B., Kuhn, D., Goulart, P., and Morari, M. (2015). Distributionally Robust Control of Constrained Stochastic Systems. IEEE Trans. Automat. Contr. 61, 1. doi:10.1109/TAC.2015.2444134 Learning to Reinforcement Learn. J Wang, Z Kurth-Nelson, H Soyer, J Z Leibo, D Tirumala, R Munos, Cognitive Science. London, UK. cognitivesciencesociety.orgWang, J., Kurth-Nelson, Z., Soyer, H., Leibo, J. Z., Tirumala, D., Munos, R., et al. (2017). "Learning to Reinforcement Learn," in Cognitive Science, 16-29 July (London, UK. cognitivesciencesociety.org). 
Optimizing Walking Controllers for Uncertain Inputs and Environments. J M Wang, D J Fleet, A Hertzmann, 10.1145/1833351.177881010.1145/1778765.1778810ACM Trans. Graphics. 29Wang, J. M., Fleet, D. J., and Hertzmann, A. (2010). Optimizing Walking Controllers for Uncertain Inputs and Environments. ACM Trans. Graphics 29, 73-78. doi:10.1145/1833351.177881010.1145/1778765.1778810 J Watson, H Abdulsamad, R Findeisen, J Peters, Stochastic Control through Approximate Bayesian Input Inference. arXiv 2105. 7693Watson, J., Abdulsamad, H., Findeisen, R., and Peters, J. (2021). Stochastic Control through Approximate Bayesian Input Inference. arXiv 2105.07693 Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. R J Williams, 10.1007/BF00992696doi:10. 1007/BF00992696Mach Learn. 8Williams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Mach Learn. 8, 229-256. doi:10. 1007/BF00992696 Adaptive Dual Control Methods: An Overview. B Wittenmark, 10.1016/b978-0-08-042375-3.50010-xAdaptive Systems in Control and Signal Processing. ElsevierWittenmark, B. (1995). "Adaptive Dual Control Methods: An Overview," in Adaptive Systems in Control and Signal Processing 1995 (Elsevier), 67-72. doi:10.1016/b978-0-08-042375-3.50010-x On the Effectiveness of Common Random Numbers. R D Wright, T E Ramsay, 10.1287/mnsc.25.7.649Manag. Sci. 25Wright, R. D., and Ramsay, T. E. (1979). On the Effectiveness of Common Random Numbers. Manag. Sci. 25, 649-656. doi:10.1287/mnsc.25.7.649 Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. J Wu, I Yildirim, J J Lim, B Freeman, J B Tenenbaum, Conference on Neural Information Processing Systems (NIPS). Montreal, Quebec, CanadaWu, J., Yildirim, I., Lim, J. J., Freeman, B., and Tenenbaum, J. B. (2015). 
"Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning," in Conference on Neural Information Processing Systems (NIPS), December 7-12 (Montreal, Quebec, Canada, 127-135. Learning Locomotion Skills for Cassie: Iterative Design and Sim-To-Real. Z Xie, P Clary, J Dao, P Morais, J W Hurst, M Van De Panne, Research.100Conference on Robot Learning (CoRL). Osaka, Japanof Proceedings of Machine LearningXie, Z., Clary, P., Dao, J., Morais, P., Hurst, J. W., and van de Panne, M. (2019). "Learning Locomotion Skills for Cassie: Iterative Design and Sim-To-Real," in Conference on Robot Learning (CoRL), October 30 -November 1 (Osaka, Japan: PMLR), 317-329. of Proceedings of Machine Learning Research.100 Frontiers in Robotics and AI | www.frontiersin.org. 9799893Frontiers in Robotics and AI | www.frontiersin.org April 2022 | Volume 9 | Article 799893 Dynamics Randomization Revisited: A Case Study for Quadrupedal Locomotion. Z Xie, X Da, M Van De Panne, B Babich, A Garg, arXiv 2011.02404Xie, Z., Da, X., van de Panne, M., Babich, B., and Garg, A. (2020). Dynamics Randomization Revisited: A Case Study for Quadrupedal Locomotion. arXiv 2011.02404 Learning Predictive Representations for Deformable Objects Using Contrastive Estimation. W Yan, A Vangipuram, P Abbeel, L Pinto, PMLRof Proceedings of Machine Learning Research. Virtual Event/Cambridge, MA, USA155Conference on Robot Learning (CoRL)Yan, W., Vangipuram, A., Abbeel, P., and Pinto, L. (2020). "Learning Predictive Representations for Deformable Objects Using Contrastive Estimation," in Conference on Robot Learning (CoRL), Virtual Event, November 16 -18 (Virtual Event/Cambridge, MA, USA: PMLR), 564-574. of Proceedings of Machine Learning Research.155. Lyapunov Stability and strong Passivity Analysis for Nonlinear Descriptor Systems. C Yang, J Sun, Q Zhang, X Ma, 10.1109/TCSI.2012.2215396IEEE Trans. Circuits Syst. 60Yang, C., Sun, J., Zhang, Q., and Ma, X. (2013). 
Lyapunov Stability and strong Passivity Analysis for Nonlinear Descriptor Systems. IEEE Trans. Circuits Syst. 60, 1003-1012. doi:10.1109/TCSI.2012.2215396 Sim-to-real Transfer for Biped Locomotion. W Yu, V C Kumar, G Turk, C K Liu, 10.1109/IROS40897.2019.8968053doi:10. 1109/IROS40897.2019.8968053International Conference on Intelligent Robots and Systems (IROS). Macau, SAR, ChinaIEEEYu, W., Kumar, V. C., Turk, G., and Liu, C. K. (2019a). "Sim-to-real Transfer for Biped Locomotion," in International Conference on Intelligent Robots and Systems (IROS), November 3-8 (Macau, SAR, China: IEEE), 3503-3510. doi:10. 1109/IROS40897.2019.8968053 Policy Transfer with Strategy Optimization. W Yu, C K Liu, G Turk, International Conference on Learning Representations (ICLR). New Orleans, LA, USAConference Track (OpenReview.netYu, W., Liu, C. K., and Turk, G. (2019b). "Policy Transfer with Strategy Optimization," in International Conference on Learning Representations (ICLR), May 6-9 (New Orleans, LA, USA. Conference Track (OpenReview.net). Preparing for the Unknown: Learning a Universal Policy with Online System Identification. W Yu, J Tan, Karen Liu, C Turk, G , 10.15607/RSS.2017.XIII.048doi:10. 15607/RSS.2017.XIII.048Robotics: Science and Systems (RSS). Cambridge, Massachusetts, USAYu, W., Tan, J., Karen Liu, C., and Turk, G. (2017). "Preparing for the Unknown: Learning a Universal Policy with Online System Identification," in Robotics: Science and Systems (RSS), July 12-16 (Cambridge, Massachusetts, USA. doi:10. 15607/RSS.2017.XIII.048 Robust Reinforcement Learning on State Observations with Learned Optimal Adversary. H Zhang, H Chen, D S Boning, C Hsieh, International Conference on Learning Representations (ICLR), Virtual Event. Zhang, H., Chen, H., Boning, D. S., and Hsieh, C. (2021). "Robust Reinforcement Learning on State Observations with Learned Optimal Adversary," in International Conference on Learning Representations (ICLR), Virtual Event, May 3-7 (Austria. 
OpenReview.net). Predicting Sim-To-Real Transfer with Probabilistic Dynamics Models. L M Zhang, M Plappert, W Zaremba, 12864Zhang, L. M., Plappert, M., and Zaremba, W. (2020). Predicting Sim-To-Real Transfer with Probabilistic Dynamics Models, 12864. arXiv 2009. Mathematical Foundations of Robust and Distributionally Robust Optimization. J Zhen, D Kuhn, W Wiesemann, arXiv 2105.00760Zhen, J., Kuhn, D., and Wiesemann, W. (2021). Mathematical Foundations of Robust and Distributionally Robust Optimization. arXiv 2105.00760 Essentials of Robust Control. K Zhou, J C Doyle, Prentice-Hall104Zhou, K., and Doyle, J. C. (1998). Essentials of Robust Control, 104. Prentice-Hall. . F Zhuang, Z Qi, K Duan, D Xi, Y Zhu, H Zhu, 10.1109/JPROC.2020.3004555doi:10. 1109/JPROC.2020.3004555Comprehensive Survey on Transfer Learning. Proc. IEEE. 109Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., et al. (2021). A Comprehensive Survey on Transfer Learning. Proc. IEEE 109, 43-76. doi:10. 1109/JPROC.2020.3004555
arXiv:2106.15023 (preprint; PDF: https://arxiv.org/pdf/2106.15023v1.pdf)
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Oliver Bryniarski* (UC Berkeley), Nabeel Hingun* (UC Berkeley), Pedro Pachuca* (UC Berkeley), Vincent Wang* (UC Berkeley), Nicholas Carlini (Google)

* Equal contributions. Authored alphabetically. Preprint. Under review.

Abstract

Evading adversarial example detection defenses requires finding adversarial examples that must simultaneously (a) be misclassified by the model and (b) be detected as non-adversarial. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at the cost of satisfying another. We introduce Orthogonal Projected Gradient Descent, an improved attack technique to generate adversarial examples that avoids this problem by orthogonalizing the gradients when running standard gradient-based attacks. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate.

Introduction

Generating adversarial examples [SZS+14, BCM+13], inputs designed by an adversary to cause a neural network to behave incorrectly, is straightforward. By performing input-space gradient descent [CW17b, MMS+17], it is possible to maximize the loss of arbitrary examples at test time. This process is both efficient and highly effective. Despite great efforts by the community, attempts at designing defenses against adversarial examples have been largely unsuccessful, and gradient-descent attacks continue to circumvent new defenses, even those that attempt to make finding gradients difficult or impossible [ACW18, TCBM20].
As a result, many defenses aim to make generating adversarial examples more difficult by imposing additional constraints that inputs must satisfy to be considered successful. Detection defenses, for example, reject inputs if a secondary detector model determines the input is adversarial [MGFB17, XEQ17]. Turning a benign input x into an adversarial example x′ thus requires fooling both the original classifier f and the detector g simultaneously. Traditionally, this is done by constructing a single loss function L that jointly penalizes the loss on f and the loss on g, e.g., by defining L(x′) = L(f) + λL(g), and then minimizing L(x′) with gradient descent [CW17a]. Unfortunately, defense evaluations built on this strategy have often been unreliable: not only must λ be tuned appropriately, but the gradients of f and g must also be well behaved.

Our contributions. We develop a new attack technique designed to construct adversarial examples that simultaneously satisfy multiple constraints. Our attack is a modification of standard gradient descent [MMS+17] and requires changing just a few lines of code. Given two objective functions f and g, instead of taking gradient descent steps that optimize the joint loss function f + λg, we selectively take gradient descent steps on either f or g. This makes our attack both simpler and easier to analyze than prior attack approaches.

We use our technique to evade four state-of-the-art and previously unbroken defenses against adversarial examples: the Honeypot defense (CCS'20) [SWW+20], Dense Layer Analysis (IEEE Euro S&P'20) [SKCB19], the Sensitivity Inconsistency Detector (AAAI'21) [TZLD21], and the SPAM detector presented in Detection by Steganalysis (CVPR'19) [LZZ+19]. In all cases, we successfully reduce the accuracy of the protected classifier to 0% while maintaining a detection AUC of less than 0.5, meaning the detector performs worse than random guessing.
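The selective-step idea can be sketched in a few lines. The following is our own simplified NumPy illustration (the function name, the `fooled_f` switch, and the flat-vector gradients are expository assumptions, not the authors' implementation): whichever objective is attacked, the component of its gradient lying along the other objective's gradient is projected away, so a step on one constraint does not undo progress on the other.

```python
import numpy as np

def orthogonal_step(grad_f, grad_g, fooled_f, step_size):
    """One simplified orthogonalized attack step.

    Instead of descending on a joint loss f + lambda*g, pick ONE
    objective per step and remove from its gradient the component
    along the other objective's gradient.
    """
    if fooled_f:
        # Classifier already fooled: attack the detector g, but move
        # orthogonally to grad_f so the misclassification is preserved.
        d = grad_g - (grad_g @ grad_f) / (grad_f @ grad_f) * grad_f
    else:
        # Otherwise attack the classifier f, orthogonally to grad_g.
        d = grad_f - (grad_f @ grad_g) / (grad_g @ grad_g) * grad_g
    # Descent direction; in the full attack this is then projected
    # back onto the perturbation ball, exactly as in standard PGD.
    return -step_size * d
```

By construction the returned step has zero inner product with the gradient of the objective being "protected" on that iteration, which is the property the joint-loss formulation fails to guarantee.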
The code we used to produce the results in this paper is published on GitHub at the following URL: https://github.com/v-wangg/OrthogonalPGD.git.

Background

Notation

We consider classification neural networks f : R^d → R^n that receive a d-dimensional input vector (in this paper, images) x ∈ R^d and output an n-dimensional prediction vector f(x) ∈ R^n. We use the notation g : R^d → R to denote some other constraint which must also be satisfied, where g(x) < 0 when the constraint is satisfied and g(x) > 0 if it is violated. For detection defenses, this function g is the detector, with higher values corresponding to a higher likelihood that the input is adversarial.

We write c(x) = y to say that the true label for input x is the label y. When it is clear from context, we abuse notation and write y = f(x) to denote the arg-max most likely label under the model f. We use L to denote the loss for our classifier (e.g., cross-entropy loss). Finally, we let e(x) represent the embedding of an input x at an intermediate layer of f. Unless specified otherwise, e returns the logit vector that immediately precedes the softmax activation.

Adversarial Examples

Adversarial examples [SZS+14, BCM+13] have been demonstrated in nearly every domain in which neural networks are used [ASE+18, CW18, HPG+17]. Given an input x with true label c(x) and a classifier f, an adversarial example is a perturbed input x′ such that d(x, x′) < ε and f(x′) = t ≠ c(x), for some metric d and target label t. The metric d is most often induced by an ℓp norm, typically either ||·||_2 or ||·||_∞. With small enough perturbations under these metrics, the adversarial example x′ is not perceptibly different from the original input x.

Datasets. We attack each defense on the dataset it performs best on. All of the defenses we study operate on images. For three of these defenses, this is the CIFAR-10 dataset [KH09], and for one, it is the ImageNet dataset [DDS+09].
We constrain our adversarial examples for each paper under the threat model it originally considered, in order to perform a fair re-evaluation, but we also generate adversarial examples under the standard norm bounds used extensively in prior work, so that cross-defense comparisons are meaningful. We perform all evaluations on a single GPU. Our attacks on CIFAR-10 require just a few minutes, and on ImageNet a few hours.

Detection Defenses

We focus our study on detection defenses. Rather than improving the robustness of the model to adversarial examples directly (e.g., through adversarial training [MMS+17] or certified approaches [RSL18, LAG+19, CRK19]), detection defenses attempt to classify inputs as adversarial or benign [MGFB17, XEQ17]. However, it is often possible to generate adversarial examples which simultaneously fool both the classifier and the detector [CW17a]. Several different strategies for detecting adversarial examples have been proposed over the past few years [GSS15, MGFB17, FCSG17, XEQ17, MC17, MLW+18, RKH19]. Consistent with prior work, in this paper we work in the perfect-knowledge scenario: the adversary has direct access to both functions f and g.

Generating Adversarial Examples with Projected Gradient Descent

Projected Gradient Descent (PGD) [MMS+17] is a powerful first-order method for finding such adversarial examples. Given a loss L(f, x, t) that takes a classifier, an input, and a desired target label, we optimize over the constraint set S_ε = {z : d(x, z) < ε} and solve

    x′ = arg min_{z ∈ S_ε} L(f, z, t)    (1)

by taking steps of the form

    x_{i+1} = Π_{S_ε}(x_i − α ∇_{x_i} L(f, x_i, t)).

Here, Π_{S_ε} denotes projection onto the set S_ε, and α is the step size. For example, for d(x, z) = ||x − z||_∞, the projection Π_{S_ε}(z) is given by clipping z to [x − ε, x + ε]. In this paper, we adapt PGD to solve optimization problems that involve minimizing multiple objective functions simultaneously.
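As a concrete reference point for this update rule, here is a minimal ℓ∞ PGD loop in NumPy. This is our own sketch: `grad_loss` stands in for the gradient oracle ∇L, and the quadratic loss in the usage note below is purely illustrative.

```python
import numpy as np

def pgd_linf(x, grad_loss, eps, alpha, steps):
    """Projected gradient descent in an l_inf ball of radius eps around x.

    grad_loss(z) returns the gradient of the attack loss at z.  Each
    iteration takes a descent step and then projects (clips) back onto
    S_eps = {z : ||z - x||_inf <= eps}.
    """
    z = x.copy()
    for _ in range(steps):
        z = z - alpha * grad_loss(z)          # gradient descent step
        z = np.clip(z, x - eps, x + eps)      # projection Pi_{S_eps}
    return z
```

For instance, with the toy loss ||z − t||² (gradient 2(z − t)) the loop drives each coordinate toward t but never leaves the ε-ball, so coordinates of t outside the ball saturate at the boundary.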
Wherever we describe gradient descent steps in later sections, we do not explicitly write Π_S; it is assumed that all steps are projected onto the constraint set.

Related Attacks

Recent work has shown that it is possible to attack models with adaptive attacks that target specific aspects of defenses. For detection defenses this process is often ad hoc, involving alterations specific to each given defense [TCBM20]. An independent line of work develops automated attack techniques that are reliable indicators of robustness [CH20]; however, in general, these attack approaches are difficult to apply to detection defenses. One useful output of our paper is a scheme that may help these automated tools evaluate detection defenses.

Rethinking Adversarial Example Detection

Before we develop our improved attack technique to break adversarial example detectors, it will be useful to understand why evaluating adversarial example detectors is more difficult than evaluating standard classifiers. Early work on adversarial examples often set up the problem slightly differently than we do above in Equation 1. The initial formulation of an adversarial example [SZS+14, CW17b] asks for the smallest perturbation δ such that f(x + δ) is misclassified. That is, these papers solved for

argmin ‖δ‖₂ such that f(x + δ) = t

Solving this problem as stated is intractable. It requires searching over a nonlinear constraint set, which is not feasible for standard gradient descent. As a result, these papers reformulate the search with the standard Lagrangian relaxation

argmin ‖δ‖₂ + λ L(f, x + δ, t)    (2)

This formulation is simpler, but still (a) requires tuning λ to work well, and (b) is only guaranteed to be correct for convex functions L; that it works for non-convex models like deep neural networks is not theoretically justified. It additionally requires carefully constructing loss functions L [CW17b]. Equation 1 simplifies the setup considerably by just exchanging the constraint and objective.
Whereas in Equation 2 we search for the smallest perturbation that results in misclassification, Equation 1 instead finds an input x + δ that maximizes the classifier's loss. This is a simpler formulation because now the constraint is convex, and so we can run standard gradient descent optimization.

Evading detection defenses is difficult because there are now two non-linear constraints. Not only must the input be constrained by a distortion bound and be misclassified by the base classifier, but we must also have that it is not detected, i.e., that g(x) < 0. This new requirement is nonlinear, and now it becomes impossible to side-step the problem by merely swapping the objective and the constraint as we did before: there will always be at least one constraint that is a non-linear function, and so standard gradient descent techniques cannot directly apply. In order to resolve this difficulty, the existing literature applies the same Lagrangian relaxation as was previously applied to constructing minimum-distortion adversarial examples. That is, breaking a detection scheme involves solving

argmin_{x ∈ S} L(f, x, t) + λ g(x)    (3)

where λ is a hyperparameter that controls the relative importance of fooling the classifier versus fooling the detector. This formulation again brings back all of the reasons why the community moved past minimum-distortion adversarial examples.

Perturbation Waste

The fundamental failure mode for attacks on detection defenses that build on Equation 3 is what we call perturbation waste. Intuitively, we say that an adversarial example has wasted its perturbation budget if it has over-optimized against (for example) the detector, so that g(x) is well below 0 but the input is still correctly classified. More formally, if an adversarial example x′ must satisfy two constraints c₁(x′) ≤ 0 ∧ c₂(x′) ≤ 0, then we say it has perturbation waste if (without loss of generality) c₁(x′) < −α < 0 but c₂(x′) > 0.
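The condition above can be phrased as a small predicate (our own sketch; `c1` and `c2` stand for the two constraint functions, and `alpha` is the over-optimization margin from the definition):

```python
def has_perturbation_waste(c1, c2, x, alpha=0.0):
    """Check the perturbation-waste condition: one constraint is
    over-satisfied (c_i(x) < -alpha) while the other is still
    violated (c_j(x) > 0)."""
    v1, v2 = c1(x), c2(x)
    return (v1 < -alpha and v2 > 0) or (v2 < -alpha and v1 > 0)
```

For example, an input whose detector score is far below the threshold but which is still correctly classified satisfies the predicate.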
We can now talk precisely about why generating adversarial examples that break detection defenses through Equation 3 is not always optimal: doing this often causes perturbation waste. Consider a benign input x, a target label t ≠ c(x), and its corresponding (not yet known) adversarial example x′. This input definitionally satisfies f(x′) = t and g(x′) < 0. Assuming the gradient descent search succeeds and optimizing Equation 3 reaches a global minimum, we can derive upper and lower bounds on what the value of λ should have been. However, this range of acceptable λ values is not going to be known ahead of time, and so requires additional search. Worse, there is a second set of constraints: because the loss function is non-convex, the value of λ must be valid not only at the end of optimization but also at the start of optimization. In the worst case this might introduce incompatibilities where no single value of λ works throughout the generation process, requiring tuning λ during a single adversarial example search.

Our Attack Approaches

We now present our attack strategy designed to generate adversarial examples that do not exhibit perturbation waste. We develop two related attack strategies that are designed to minimize perturbation waste. Then, in the following section, we will apply these two attacks on defenses from the literature and show that they are indeed effective. As before, each of our attack strategies defined below generates a targeted adversarial example x′ so that f(x′) = t while g(x′) < 0. Constructing an untargeted attack is nearly identical except for the substitution of maximization instead of minimization.

Selective Gradient Descent

Instead of minimizing the weighted sum of f and g, our first attack completely eliminates the possibility of perturbation waste by never optimizing against a constraint once it becomes satisfied.
That is, we write our attack as

A(x, t) = argmin_{x′ : ‖x − x′‖ < ε} L_update(x′, t), where
L_update(x′, t) = L(f, x′, t) · 1[f(x′) ≠ t] + g(x′) · 1[f(x′) = t].    (4)

The idea here is that instead of minimizing a convex combination of the two loss functions, we selectively optimize either f or g depending on whether f(x) = t, ensuring that updates are always helping to improve either the loss on f or the loss on g. Another benefit of this style is that it decomposes the gradient step into two updates, which prevents imbalanced gradients, where the gradients for two loss functions are not of the same magnitude and result in unstable optimization [JMW+20]. In fact, our loss function can be viewed directly in this lens as following the margin decomposition proposal [JMW+20] by observing that

∇L_update(x, t) = ∇L(f, x, t) if f(x) ≠ t;  ∇g(x) if f(x) = t.    (5)

That is, with each iteration, we either take gradients on f or on g depending on whether f(x) = t or not. The equivalence can be shown by computing ∇L_update(x, t) from Equation 4.

Orthogonal Gradient Descent

The prior attack, while mathematically correct, might encounter numerical stability difficulties. Often, the gradients of f and g point in opposite directions, that is, ∇f ≈ −∇g. As a result, every step spent optimizing f causes backwards progress on optimizing against g. This results in the optimizer constantly "undoing" its own progress after each step that is taken. We address this problem by giving a slightly different update rule that again will solve Equation 5, however this time by optimizing

∇L_update(x, t) = ∇L(f, x, t) − proj_{∇g(x)} ∇L(f, x, t) if f(x) ≠ t;  ∇g(x) − proj_{∇L(f,x,t)} ∇g(x) if f(x) = t.    (6)

Note that ∇L(f, x, t)^⊥ = ∇L(f, x, t) − proj_{∇g(x)} ∇L(f, x, t) is orthogonal to the gradient ∇g(x), and similarly ∇g(x)^⊥ = ∇g(x) − proj_{∇L(f,x,t)} ∇g(x) is orthogonal to ∇L(f, x, t). The purpose of this update is to take gradient descent steps with respect to one of f or g in such a way that we do not significantly disturb the loss of the function not chosen.
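The selective and orthogonal update rules above can be sketched in a few lines of NumPy (a minimal sketch; `predict`, `grad_L`, and `grad_g` are hypothetical callables standing in for the model's arg-max prediction and the two loss gradients, and the projection Π_S onto the distortion ball is omitted for brevity):

```python
import numpy as np

def proj(a, b):
    # Projection of b onto a: proj_a(b) = (a.b / a.a) a.
    return (a @ b) / (a @ a) * a

def selective_step(x, t, predict, grad_L, grad_g, alpha):
    """Selective update: follow the classifier gradient until
    f(x) = t, then follow the detector gradient."""
    d = grad_L(x, t) if predict(x) != t else grad_g(x)
    return x - alpha * d

def orthogonal_step(x, t, predict, grad_L, grad_g, alpha):
    """Orthogonal update: same selection rule, but remove from the
    chosen gradient its component along the other objective's
    gradient, so the step (to first order) does not disturb the
    other loss."""
    gL, gg = grad_L(x, t), grad_g(x)
    if predict(x) != t:
        d = gL - proj(gg, gL)   # orthogonal to grad g
    else:
        d = gg - proj(gL, gg)   # orthogonal to grad L
    return x - alpha * d
```

A quick sanity check of the design: the orthogonal step direction has zero inner product with the gradient of the objective that was not selected, so progress on one loss does not immediately undo progress on the other.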
In this way, we prevent our attack from taking steps that undo work done in previous iterations of the attack.

Case Studies

We validate the efficacy of our attack by using it to circumvent four previously unbroken, state-of-the-art defenses accepted at top computer security or machine learning venues. Three of the case studies use models and code obtained directly from their respective authors. In the final case, the original authors provided us with MATLAB source code that was not easily used, which we re-implemented.

One factor we have not yet mentioned is that implicit to the setup of g is a threshold that adjusts the trade-off between true positives and false positives. Until now we have said that g(x) < 0 implies the input is classified as benign. However, when considering alternate thresholds, we use the notation φ, so that if g(x) > φ, then x is flagged as adversarial. The choice of φ is made empirically, as it determines the false positive rate of the detector; it is up to the defender to choose an acceptable threshold depending on the situation.

Attack Success Rate Definition. We evaluate the success of our attack by a metric that we call attack success rate at N (SR@N for short). SR@N is defined as the fraction of targeted attacks that succeed when the defense's false positive rate is set to N%. For example, a 94% SR@5 could either be achieved through 94% of inputs being misclassified as the target class and 0% being detected as adversarial, or by 100% of inputs being misclassified as the target class and 6% being detected as adversarial, or some combination thereof. We report SR@5 and SR@50 for our main results. The value 5% is used in many prior defenses in the literature [MLW+18, XEQ17], and 50% is an extreme upper bound that would reduce the model's accuracy by half. We also give the full ROC curve of the detection rate for a more complete analysis.
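The SR@N metric can be computed as follows (our own sketch: choosing the threshold φ as a percentile of benign detector scores is our assumed convention for fixing the false positive rate at N%):

```python
import numpy as np

def success_rate_at_n(reached_target, detector_scores, benign_scores, n_percent):
    """SR@N: fraction of targeted attacks that both reach the target
    class and evade the detector, when the detection threshold phi is
    set so the false positive rate on benign inputs is n_percent.

    reached_target: boolean array, attack reached the target class.
    detector_scores: detector outputs g(x') on the attacked inputs.
    benign_scores: detector outputs g(x) on clean inputs.
    """
    # Pick phi as the (100 - n)-th percentile of benign scores, so
    # roughly n% of benign inputs are (falsely) flagged: g(x) > phi.
    phi = np.percentile(benign_scores, 100 - n_percent)
    evaded = detector_scores <= phi
    return np.mean(reached_target & evaded)
```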
Finally, note that all of our attacks are targeted attacks where we choose the target uniformly at random from among the incorrect class labels. Untargeted attacks are in general an order of magnitude easier (because there are more possible incorrect labels). We apply targeted attacks for the reasons listed in prior attack work [ACW18], primarily because if targeted attacks succeed, then untargeted attacks certainly will.

Honeypot Defense

The first paper we consider is the Honeypot Defense [SWW+20]. Instead of preventing attackers from directly constructing adversarial examples, the authors propose to lure attackers into producing specific perturbations that are easy to find and hard to ignore. These perturbations are called "honeypots" or trapdoors and can be easily identified by a detector. For their evaluation on the MNIST and CIFAR-10 datasets, the authors use 5 sets of randomly selected 3 × 3 squares per label.

Formally, consider an input x to the classifier f. During training, f is injected with a honeypot ∆. The signature of a particular honeypot, S_∆, is the expectation of the neuron activations of f over multiple sample inputs containing ∆. During inference, the neuron activation pattern e(x) is compared to S_∆ using cosine similarity. Specifically, for a predefined threshold φ, if cos(e(x), S_∆) > φ, then x is flagged as adversarial. One additional modification the authors make is to use neuron randomization: when creating the signatures, a random sample of neuron activations is ignored, and the same set of neurons is ignored when calculating the cosine similarity during adversarial attack detection.

We test our method of generating adversarial images on CIFAR-10 with an ℓ∞ bound of ε = 8/255. The original model has a baseline accuracy of .87. To test the defense, we generate targeted attacks, choosing the target from among the incorrect labels.
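The detection rule cos(e(x), S_∆) > φ reduces to a few lines (a sketch of the comparison step only; the signature construction and neuron randomization described above are not modeled):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two activation vectors.
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def trapdoor_detect(activation, signature, phi):
    """Flag an input as adversarial when the cosine similarity between
    its activation pattern e(x) and the trapdoor signature S_Delta
    exceeds the threshold phi."""
    return cosine(activation, signature) > phi
```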
As a simple baseline attack, we confirm that FGSM achieves an accuracy of .06 on the classifier, but the ROC AUC of the detector is .994. To evaluate the efficacy of our attack, we use the authors' pre-trained code³ and convert their models from TensorFlow to PyTorch to run our attacks. We find that our attack is extremely effective and has an attack success rate above 99.9%, even at a 50% false positive rate. In Table 1 we give the attack results for several configurations of our attack. We shared our results with the original defense authors, who confirmed our adversarial examples successfully evaded their detection (an important step, given that we had converted the model to PyTorch).

Dense Layer Analysis

Many recent defenses analyze the hidden activations of neural networks while processing benign and adversarial inputs (e.g., following [MGFB17]). These defenses aim to capitalize on differences in activation patterns among benign and adversarial inputs to train a separate classifier capable of detecting inputs as adversarial. In the most recent paper in this direction, Sperl et al. extract dense-layer activation patterns among benign and adversarial inputs and train a secondary binary classifier that detects adversarial examples [SKCB19]. The authors do this by first performing a forward pass through a target neural network with both adversarial and benign inputs to create a mixed-feature dataset of activation-label pairs. Then, using the mixed-feature dataset, they train a secondary binary classifier capable of discerning between adversarial and benign inputs.

When evaluating their models, the authors pass an input through the target model to obtain the activation feature vector for a particular input as well as a potential classification. They then pass this feature vector through the secondary classifier. If the secondary classifier alerts that the input was adversarial, the classification is thrown away. Otherwise, classification proceeds as normal. Sperl et al. evaluate this defense with 5 leading adversarial attacks on the MNIST and CIFAR-10 datasets using several models, and report high accuracies for benign inputs and high detection rates for adversarial inputs. The authors report a worst-case individual attack accuracy of 0.739.

In accordance with our framework, we assign the cross-entropy loss of the classifier to our primary function and the binary cross-entropy loss of the detector to our secondary function. We obtain source code and pre-trained defense models from the authors in order to ensure that our attack matches the defense as closely as possible. We detail the results of our attack at ε = .01 and at ε = .03, at false positive rates of 5% and 50%, in Figure 2. We find that our attack is extremely effective, resulting in an accuracy of 0 at a detection rate of 0 with a false positive rate of 5% under ε = .03 bounds. Finally, to validate that our attack succeeded, we again shared the resulting adversarial examples with the authors, who confirmed our attack results.

Sensitivity Inconsistency of Spatial-Transform Domain

We next evaluated our attack on the Sensitivity Inconsistency Detector. To measure sensitivity against decision boundary transformations, Tian et al. design a dual classifier which is the composition of a weighted additive wavelet transform layer and a DNN classifier with the same structure as the original classifier. When doing a forward pass of the system, the authors run an input through both the primal and the dual model, then pass both results to a detector that discriminates between adversarial and benign classes. With these models, the authors then define their so-called feature of sensitivity inconsistency S(x₀):

S(x₀) = {f_i(x₀) − g_i(x₀)}_{i=1}^{K}

where f_i(x₀) and g_i(x₀) are the predictions of the primal and the dual, respectively. Input x₀ is classified as adversarial if S(x₀) is greater than a threshold φ.
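The inconsistency feature and a thresholded decision can be sketched as follows (our own simplification: the actual defense feeds S(x₀) to a learned detector, so comparing the magnitude of S(x₀) to φ is an assumed stand-in for that detector):

```python
import numpy as np

def sensitivity_inconsistency(primal_logits, dual_logits):
    # S(x0) = {f_i(x0) - g_i(x0)} for i = 1..K: per-class difference
    # between the primal classifier and its wavelet-transformed dual.
    return primal_logits - dual_logits

def sid_flag(primal_logits, dual_logits, phi):
    """Flag as adversarial when the inconsistency magnitude exceeds
    phi (a sketch; the real SID trains a detector on S(x0))."""
    return np.linalg.norm(sensitivity_inconsistency(primal_logits, dual_logits)) > phi
```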
SID achieves improved adversarial example detection performance, especially in cases with small perturbations in inputs. The authors report a worst-case individual attack detection AUC of 0.95. Now, we want to create adversarial examples that are misclassified by the original model and not flagged as adversarial by the Sensitivity Inconsistency Detector. We assign the loss of our target model to our primary function and the loss of the Sensitivity Inconsistency Detector to our secondary function. The initial target model had an accuracy of .94 and deemed .06 of all inputs adversarial. We again obtain source code from the authors along with pre-trained models to ensure evaluation correctness. We describe our attack's results at ε = .01 and at ε = .03, at false positive rates of 5% and 50%, in Figure 3.

Detection through Steganalysis

Since adversarial perturbations alter the dependence between pixels in an image, Liu et al. [LZZ+19] propose a defense which uses a steganalysis-inspired approach to detect "hidden features" within an image. These features are then used to train binary classifiers to detect the perturbations. Unlike the prior defenses, this paper evaluates on ImageNet, reasoning that small images such as those from CIFAR-10 and MNIST do not provide enough inter-pixel dependency samples to construct efficient features for adversarial detection, so we attack this defense on ImageNet.

As a baseline, the authors use two feature extraction methods: SPAM and Spatial Rich Model. For each pixel X_{i,j} of an image X, SPAM takes the difference between adjacent pixels along 8 directions. For the rightward direction, a difference matrix A→ is computed so that A→_{i,j} = X_{i,j} − X_{i,j+1}. A transition probability matrix M→ between pairs of differences can then be computed with

M→_{x,y} = Pr(A→_{i,j+1} = x | A→_{i,j} = y)

where x, y ∈ {−T, ..., T}, with T being a parameter used to control the dimensionality of the final feature set F.
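The difference matrix and transition probability matrix for a single (rightward) direction can be computed as follows (a minimal sketch of the non-differentiable baseline described above, not the authors' implementation):

```python
import numpy as np

def spam_transition(X, T=3):
    """Rightward SPAM statistics: difference matrix A and transition
    probability matrix M over truncated differences in {-T, ..., T}."""
    A = (X[:, :-1] - X[:, 1:]).astype(int)  # A_{i,j} = X_{i,j} - X_{i,j+1}
    A = np.clip(A, -T, T)                   # truncate to {-T, ..., T}
    k = 2 * T + 1
    M = np.zeros((k, k))
    # Count pairs so that M_{x,y} ~ Pr(A_{i,j+1} = x | A_{i,j} = y);
    # indices are shifted by +T to map {-T, ..., T} onto {0, ..., 2T}.
    for y_prev, x_next in zip(A[:, :-1].ravel(), A[:, 1:].ravel()):
        M[x_next + T, y_prev + T] += 1
    col = M.sum(axis=0, keepdims=True)
    return np.divide(M, col, out=np.zeros_like(M), where=col > 0)
```

Each non-empty column of the returned matrix is a conditional distribution and therefore sums to 1.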
We use T = 3, in accordance with that used by the authors. The features themselves are calculated by concatenating the average of the non-diagonal matrices with the average of the diagonal matrices:

F_{1,...,k} = (M→ + M← + M↑ + M↓) / 4
F_{k+1,...,2k} = (M↗ + M↘ + M↖ + M↙) / 4

In order to use the same attack implementation across all defenses, we re-implemented this defense in PyTorch (the authors' implementation was in MATLAB). Instead of re-implementing the full FLD ensemble [KFH12] used by the authors, we train a 3-layer fully connected neural network on SPAM features and use this as the detector. This allows us to directly investigate the claim that SPAM features can be reliably used to detect adversarial examples; FLD is a highly non-differentiable operation and is not a fundamental component of the defense proposal. The paper also proposes a second feature extraction method named "Spatial Rich Model" (SRM) that we do not evaluate against. This scheme follows the same fundamental principle as SPAM in modeling inter-pixel dependencies; there is only a marginal benefit from using these more complex models, and so we analyze the simplest variant of the scheme.

Notice that SPAM requires the difference matrices A to be discretized in order for the dimensionality of the transition probability matrices M to be finite. To make this discretization step differentiable and compatible with our attacks, we define a count matrix X̃ where, for example, X̃→_{x,y} counts, for every pair i, j, the number of occurrences of y in A→_{i,j} and x in A→_{i,j+1}. M→_{x,y} is then defined by:

M→_{x,y} = P(A→_{i,j+1} = x | A→_{i,j} = y) = X̃→_{x,y} / Σ_{x′} X̃→_{x′,y}

To construct a differentiable approximation, consider without loss of generality the rightward difference matrix A→₁ for an image. We construct a shifted copy of it, A→₂, so that A→₂_{i,j} = A→₁_{i,j+1}. We then define a mask K so that

K_{i,j} = 1[x ≤ A→₂_{i,j} < x + 1] · 1[y ≤ A→₁_{i,j} < y + 1]

Each element of the intermediate matrix X̃→ counts the number of pairs in A→₁ and A→₂ which would be rounded to x and y respectively after discretization:

X̃→_{x,y} = Σ_{i,j} (K ∘ A→₂)_{i,j} / x

where ∘ is the Hadamard product. If we normalize X̃→ so that the sum of elements in each column is equal to 1, we get the probability of difference values x ∈ A→₂ conditioned on column y ∈ A→₁. Thus, for any pair of indices i, j,

M→_{x,y} = P(A→₂_{i,j} = x | A→₁_{i,j} = y) = X̃→_{x,y} / Σ_{x′} X̃→_{x′,y}

Using this differentiable formulation of SPAM feature extraction, we train an auxiliary detector as described above and use its gradients to apply our attack on the original, non-differentiable detector. The authors evaluate their defense on 4 adversarial attacks and report high accuracy for benign inputs and high detection rates for adversarial inputs. The best attack they develop still has a success rate of less than 3%. In contrast, our attack on SPAM using the differentiable approximation has a success rate of 98.8% when considering a 5% false positive rate, with an AUC again less than the random-guessing threshold of 0.5.

Figure 4: Steganalysis attack evaluation. We find it difficult to decrease the detection score below the original score on the non-adversarial input; thus the AUC is almost exactly 0.5.

Conclusion

Generating adversarial examples that satisfy multiple constraints simultaneously (e.g., requiring that an input is both misclassified and deemed non-adversarial) requires more care than generating adversarial examples that satisfy only one constraint (e.g., requiring only that an input is misclassified). We find that prior attacks unnecessarily over-optimize one constraint when another constraint has not yet been satisfied.
Our new attack methodology of orthogonal and selective gradient descent prevents perturbation waste, and ensures that the available perturbation budget is always "spent" on optimizing the terms that are strictly necessary. Our attack reduces the accuracy of four previously unbroken detection methods to 0% while maintaining a 0% detection rate at a 5% false positive rate. We believe our attack approach is generally useful. For example, we believe that automated attack tools [CH20] would benefit from adding our optimization trick to their collection of known techniques that could compose with other attacks. However, we discourage future work from blindly applying this attack without properly understanding its design criteria. While this attack does eliminate perturbation waste for the defenses we consider, it is not the only way to do so, and may not be the correct way to do so in future defense evaluations. Evaluating adversarial example defenses will necessarily require adapting any attack strategies to the defense's design.

Figure 1: Honeypot attack evaluation. Compared to the originally reported 2% success rate, our attack reaches a 100% attack success rate under the same distortion bound. While the ROC curve does cross over the x = y line, this only occurs after an FPR of 70%, which is completely unusable in practice.

Figure 2: DLA attack evaluation: attack success rate for our two proposed attacks. Our attack succeeds with 83% probability compared to the original evaluation of 13% (with ε = 0.01), and 100% of the time under the more typical 8/255 constraint. (The original paper did not report at a 5% FPR; the closest we could use was a 13% TPR at a 20% FPR. However, our attack succeeds 83% of the time even with a 4× lower false positive rate.)

The Sensitivity Inconsistency Detector (SID) proposed by Tian et al. [TZLD21] relies on the observations of Fawzi et al. [FMDFS18] that adversarial examples are movements, in the form of perturbations, of benign inputs in a decision space along an adversarial direction. Tian et al. then conjecture that, because adversarial examples are likely to lie near highly-curved decision boundaries, and benign inputs lie away from such boundaries, fluctuations in said boundaries will often result in a change in classification of adversarial examples but not in classification of benign inputs.

Figure 3: SID attack evaluation: attack success rate for our two proposed attacks. Our attack works well in this case and induces an accuracy of 0 at a detection rate of 0 with a false positive rate of 5% under ε = .03 bounds; it succeeds with 91% probability compared to the original evaluation of 9% under an ε = 0.01 norm constraint. (The original paper only reports AUC values and does not report true positive/false positive rates. The value of 9% was obtained by running PGD on the authors' defense implementation.)

(a) Attack success rate for our proposed attack on the steganalysis defense. For computational efficiency, we only run our Orthogonal attack, as the detection model has a throughput of one image per second.

Attack      | ε=0.01 SR@5 | ε=0.01 SR@50 | ε=0.031 SR@5 | ε=0.031 SR@50
[LZZ+19]    | 0.03        | -            | 0.03         | -
Orthogonal  | 0.988       | 0.54         | 1.0          | 0.62

² These definitions are without loss of generality. For example, a two-class detector g can be converted to a one-class detector by subtracting p_adversarial from p_benign.
³ https://github.com/Shawn-Shan/trapdoor
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, 2018. . Yash Ase + 18] Moustafa Alzantot, Ahmed Sharma, Bo-Jhang Elgohary, Mani B Ho, Kai-Wei Srivastava, Chang, Generating natural language adversarial examples. CoRR, abs/1804.07998ASE + 18] Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivas- tava, and Kai-Wei Chang. Generating natural language adversarial examples. CoRR, abs/1804.07998, 2018. Evasion attacks against machine learning at test time. + 13] Battista, Igino Biggio, Davide Corona, Blaine Maiorca, Nedim Nelson, Pavel Šrndić, Giorgio Laskov, Fabio Giacinto, Roli, Joint European conference on machine learning and knowledge discovery in databases. Springer+ 13] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pages 387-402. Springer, 2013. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Francesco Croce, Matthias Hein, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine LearningPMLRFrancesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proceedings of the 37th International Conference on Machine Learning, pages 2206-2216. PMLR, 2020. Elan Jeremy M Cohen, J Zico Rosenfeld, Kolter, arXiv:1902.02918Certified adversarial robustness via randomized smoothing. arXiv preprintJeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019. 
Adversarial examples are not easily detected: Bypassing ten detection methods. Nicholas Carlini, David Wagner, Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. the 10th ACM Workshop on Artificial Intelligence and SecurityNicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3-14, 2017. Towards evaluating the robustness of neural networks. Nicholas Carlini, David Wagner, 2017 IEEE symposium on security and privacy. IEEENicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy, pages 39-57. IEEE, 2017. Audio adversarial examples: Targeted attacks on speech-to-text. Nicholas Carlini, David Wagner, 2009 IEEE conference on computer vision and pattern recognition. Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-FeiIeee2018 IEEE Security and Privacy Workshops (SPW)Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW), pages 1-7, 2018. [DDS + 09] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee, 2009. Reuben Feinman, Saurabh Ryan R Curtin, Andrew B Shintre, Gardner, arXiv:1703.00410Detecting adversarial samples from artifacts. arXiv preprintReuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017. Empirical study of the topology and geometry of deep networks. Alhussein Fawzi, Pascal Seyed-Mohsen Moosavi-Dezfooli, Stefano Frossard, Soatto, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Empirical study of the topology and geometry of deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Adversarial attacks on neural network policies. CoRR. Ian Goodfellow, Jonathon Shlens, Christian Szegedy, ; Sandy, H Huang, Nicolas Papernot, Ian J Goodfellow, Yan Duan, Pieter Abbeel ; Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang, arXiv:2006.13726Imbalanced gradients: A new cause of overestimated adversarial robustness. arXiv preprintInternational Conference on Learning RepresentationsIan Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations, 2015. [HPG + 17] Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. CoRR, abs/1702.02284, 2017. [JMW + 20] Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, and Yu-Gang Jiang. Imbal- anced gradients: A new cause of overestimated adversarial robustness. arXiv preprint arXiv:2006.13726, 2020. Ensemble classifiers for steganalysis of digital media. Jan Kodovsky, Jessica Fridrich, Vojtěch Holub, IEEE Transactions on Information Forensics and Security. Jan Kodovsky, Jessica Fridrich, and Vojtěch Holub. Ensemble classifiers for steganalysis of digital media. In IEEE Transactions on Information Forensics and Security, pages 432-444, 2012. Certified robustness to adversarial examples with differential privacy. A Krizhevsky, G Hinton ; Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana, 2019 IEEE Symposium on Security and Privacy (SP). IEEEDepartment of Computer Science, University of TorontoLearning multiple layers of features from tiny imagesA. Krizhevsky and G. Hinton. 
Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009. [LAG + 19] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pages 656-672. IEEE, 2019. Detection based defense against adversarial examples from the steganalysis point of view. Lzz + 19] Jiayang, Weiming Liu, Yiwei Zhang, Dongdong Zhang, Yujia Hou, Hongyue Liu, Nenghai Zha, Yu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionLZZ + 19] Jiayang Liu, Weiming Zhang, Yiwei Zhang, Dongdong Hou, Yujia Liu, Hongyue Zha, and Nenghai Yu. Detection based defense against adversarial examples from the steganalysis point of view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4825-4834, 2019. Magnet: a two-pronged defense against adversarial examples. Dongyu Meng, Hao Chen, Proceedings of the 2017 ACM SIGSAC conference on computer and communications security. the 2017 ACM SIGSAC conference on computer and communications securityDongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security, pages 135-147, 2017. . Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff, arXiv:1702.04267On detecting adversarial perturbations. arXiv preprintJan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267, 2017. Characterizing adversarial subspaces using local intrinsic dimensionality. 
Mlw + 18] Xingjun, Bo Ma, Yisen Li, Wang, M Sarah, Sudanthi Erfani, Grant Wijewickrema, Dawn Schoenebeck, Song, E Michael, James Houle, Aleksandar Bailey ; Aleksander Madry, Ludwig Makelov, Schmidt, arXiv:1801.02613Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. International Conference on Learning Representations. arXiv preprintMMS + 17MLW + 18] Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. Characterizing adver- sarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613, 2018. [MMS + 17] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. International Conference on Learning Representations, 2017. The odds are odd: A statistical test for detecting adversarial examples. Kevin Roth, Yannic Kilcher, Thomas Hofmann, International Conference on Machine Learning. PMLRKevin Roth, Yannic Kilcher, and Thomas Hofmann. The odds are odd: A statistical test for detecting adversarial examples. In International Conference on Machine Learning, pages 5498-5507. PMLR, 2019. Aditi Raghunathan, Jacob Steinhardt, Percy Liang, arXiv:1801.09344Certified defenses against adversarial examples. arXiv preprintAditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018. Dla: Dense-layer-analysis for adversarial example detection. + 20] Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei, Konstantin Böttinger, 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE+ 20] Philip Sperl, Ching-Yu Kao, Peng Chen, Xiao Lei, and Konstantin Böttinger. Dla: Dense-layer-analysis for adversarial example detection. In 2020 IEEE European Symposium on Security and Privacy (EuroS&P), pages 198-215. IEEE, 2020. 
DLA: dense-layeranalysis for adversarial example detection. Philip Sperl, Ching-Yu Kao, Peng Chen, Konstantin Böttinger, abs/1911.01921CoRRPhilip Sperl, Ching-yu Kao, Peng Chen, and Konstantin Böttinger. DLA: dense-layer- analysis for adversarial example detection. CoRR, abs/1911.01921, 2019. Gotta catch'em all: Using honeypots to catch adversarial attacks on neural networks. Emily + 20] Shawn Shan, Bolun Wenger, Bo Wang, Haitao Li, Zheng, Y Ben, Zhao, Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. the 2020 ACM SIGSAC Conference on Computer and Communications Security+ 20] Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, and Ben Y Zhao. Gotta catch'em all: Using honeypots to catch adversarial attacks on neural networks. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pages 67-83, 2020. Ian an d Fergus. Intriguing properties of neural networks. Szs + 14] Christian, Wojciech Szegedy, Ilya Zaremba, Joan Sutskever, Dumitru Bruna, Rob Erhan, Goodfellow, International Conference on Learning Representations (ICLR). SZS + 14] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, and Rob Goodfellow, Ian an d Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014. On adaptive attacks to adversarial example defenses. CoRR, abs. Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry, Florian Tramèr, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. CoRR, abs/2002.08347, 2020. Detecting adversarial examples from sensitivity inconsistency of spatial-transform domain. Jinyu Tian, Jiantao Zhou, Yuanman Li, Jia Duan, arXiv:2103.04302arXiv preprintJinyu Tian, Jiantao Zhou, Yuanman Li, and Jia Duan. Detecting adversarial ex- amples from sensitivity inconsistency of spatial-transform domain. arXiv preprint arXiv:2103.04302, 2021. 
Weilin Xu, David Evans, Yanjun Qi, arXiv:1704.01155Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprintWeilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
[ "The ZERO Regrets Algorithm: Optimizing over Pure Nash Equilibria via Integer Programming" ]
[ "Gabriele Dragotto [email protected]", "Rosario Scatamacchia [email protected]" ]
[]
[]
Designing efficient algorithms to compute Nash equilibria poses considerable challenges in Algorithmic Game Theory and Optimization. In this work, we employ integer programming techniques to compute Nash equilibria in Integer Programming Games, a class of simultaneous and non-cooperative games where each player solves a parametrized integer program. We introduce ZERO Regrets, a general and efficient cutting plane algorithm to compute, enumerate, and select Nash equilibria. Our framework leverages the concept of equilibrium inequality, an inequality valid for any Nash equilibrium, and the associated equilibrium separation oracle. We evaluate our algorithmic framework on a wide range of practical and methodological problems from the literature, providing a solid benchmark against the existing approaches.
10.1287/ijoc.2022.0282
[ "https://export.arxiv.org/pdf/2111.06382v4.pdf" ]
243,985,647
2111.06382
c706de52fd0fd596158d808b5862097fdc2254f2
The ZERO Regrets Algorithm: Optimizing over Pure Nash Equilibria via Integer Programming
Gabriele Dragotto, Rosario Scatamacchia
15 Sep 2022. arXiv:2111.06382v4 [math.OC]
Introduction

Several real-world problems often involve a series of selfish agents optimizing their benefits while mutually affecting each other's decisions. The concept of Nash equilibrium [38,39] revolutionized the understanding of the agents' strategic behavior by proposing a flexible and interpretable solution, with consequences and applications in many different contexts. The Nash equilibrium constitutes a stable solution, meaning that no single agent has an incentive to defect from it profitably. Nash equilibria, however, may intrinsically differ in their features, for instance, in terms of a given welfare function measuring the common good for the collectivity of the agents. Above all, the quality of equilibria often does not match the quality of the social optimum, i.e., the best possible solution for the collectivity. In general, the social optimum is not a stable solution and, therefore, does not emerge naturally from the agents' interactions. Nevertheless, in numerous contexts, a central authority may suggest solutions to the agents, preferably ensuring that such solutions satisfy two foremost properties. First, the authority should ensure that little to no incentive exists for the agents to refuse the proposed solution. Second, the solution should be sufficiently close, in terms of quality, to the social optimum. The best trade-off between these two properties is the best Nash equilibrium, i.e., a solution that optimizes a welfare function among the equilibria. Often, the main focus is on selecting a Pure Nash Equilibrium (PNE), a stable solution where each agent selects one alternative with probability one (in contrast to a Mixed-Strategy equilibrium, where agents randomize over the set of their alternatives).
In this context, the Algorithmic Game Theory (AGT) community pioneered the study of the interplay between Game Theory and algorithms with a focus on equilibria's efficiency [40]. The discipline attracted significant attention from the computer science and optimization communities, especially to study games where agents solve optimization problems (e.g., Facchinei and Pang [25]). Several recent works [14,18,24,29,31,35,45,48] considered Integer Programming Games (IPGs), namely games where the agents solve parametrized integer programs. In this work, we focus on a class of simultaneous and non-cooperative IPGs among n players (agents), as described in Definition 1, where each player controls m integer variables.

Definition 1 (IPG). Each player i = 1, 2, . . . , n solves (1), where u^i(x^i, x^{-i}), given x^{-i}, is a function in x^i with integer coefficients, A^i ∈ Z^{r×m}, and b^i ∈ Z^r:

max_{x^i} { u^i(x^i, x^{-i}) : x^i ∈ X^i },   X^i := { A^i x^i ≤ b^i, x^i ∈ Z^m }.   (1)

As standard game-theory notation, let x^i denote the vector of variables of player i, and let the operator (·)^{-i} be (·) except i. The vector x^{-i} = (x^1, . . . , x^{i-1}, x^{i+1}, . . . , x^n) represents the variables of i's opponents (all players but i), and the set of linear constraints A^i x^i ≤ b^i defines the feasible region X^i of player i. We assume all integer variables are lower and upper bounded, and thus that X^i is finite. In IPGs, the strategic interaction occurs in the players' objective functions, and not within their feasible regions. Specifically, players choose their strategies simultaneously, and each player i's utility (or payoff) u^i(x^i, x^{-i}) is a function in x^i parametrized in i's opponents' variables x^{-i}. Without loss of generality, we assume the entries of A^i and b^i and the coefficients of u^i(x^i, x^{-i}) are integers. Further, considering the space of all players' variables (x^1, . . . , x^n), we assume one can always linearize the non-linear terms in each u^i with a finite number of inequalities and auxiliary variables (e.g., Sherali and Adams [49], Vielma [52]). We remark that this assumption is not restrictive; on the contrary, it enables us to tackle several games where the players' utilities are not linear (see Section 5). Besides, we assume (i.) players have complete information about the structure of the game, i.e., each player knows the other players' optimization problems via their feasible regions and objectives, (ii.) each player is rational, namely it always selects the best possible strategy given the information available on its opponents, and (iii.) common knowledge of rationality, namely each player knows its opponents are rational, and there is complete information. IPGs extend traditional resource-allocation tasks and combinatorial optimization problems to a multi-agent setting, and their modeling power lies precisely in the discrete variables and game dynamics they can model. Indeed, in several real-world applications, requirements such as indivisible quantities and fixed production costs often require the use of discrete variables (see, for instance, Bikhchandani and Mamer [8]). Several recent works explored the application of IPGs in various contexts. To name a few, Gabriel et al. [27] modeled energy production games, David Fuller and Çelebi [20] proposed a discrete unit commitment problem with fixed production costs, Anderson et al. [2] modeled a game where firms reserve discrete blocks of capacities from their suppliers, Federgruen and Hu [26] proposed a price competition framework with n competitors offering a discrete number of substitutable products, and Carvalho et al. [11] exploited IPGs in the context of kidney exchange programs.
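The single-player problem of Definition 1 can be made concrete with a small enumeration-based sketch. The helper names (`feasible_set`, `best_responses`) and the toy data (A = [[3, 2]], b = [4], binary bounds, a bilinear payoff) are illustrative assumptions, not the paper's implementation; explicit enumeration is only viable for tiny strategy sets.

```python
from itertools import product

def feasible_set(A, b, bounds):
    """Enumerate X^i = {x integer : A x <= b, lo_j <= x_j <= hi_j} (tiny instances only)."""
    box = [range(lo, hi + 1) for lo, hi in bounds]
    return [x for x in product(*box)
            if all(sum(a * v for a, v in zip(row, x)) <= rhs
                   for row, rhs in zip(A, b))]

def best_responses(u, X_i, x_minus_i):
    """BR(i, x^{-i}): the strategies of X^i maximizing u^i(., x^{-i})."""
    best = max(u(x, x_minus_i) for x in X_i)
    return [x for x in X_i if u(x, x_minus_i) == best]

# A toy player with X^i = {x in {0,1}^2 : 3 x_1 + 2 x_2 <= 4} and a bilinear payoff
# parametrized in the opponent's variables y.
X = feasible_set([[3, 2]], [4], [(0, 1), (0, 1)])
u = lambda x, y: 6 * x[0] + x[1] - 4 * x[0] * y[0] + 6 * x[1] * y[1]
```

With the opponent playing (1, 0), `best_responses(u, X, (1, 0))` returns the single strategy (1, 0) here; a solver-based implementation would replace the enumeration with a parametrized integer program.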
Despite the high potential impact of IPGs in many domains, practitioners and researchers often make restrictive assumptions about the game's structure to guarantee that solutions are unique or computationally tractable. This is mainly due to the lack of a general, scalable and reliable methodology to select efficient solutions in IPGs, which could potentially open new opportunities in terms of applications. This lack is the core motivation behind our work: providing a general-purpose algorithmic framework to optimize over the solutions of IPGs. Specifically, we focus on optimizing over the set of PNE s for the IPGs defined above and on characterizing the polyhedral structure of the set containing the PNE s. The algorithmic framework possesses a solid theoretical foundation, and it integrates with the existing tools from the theory and practice of integer programming and combinatorial optimization. From a computational perspective, it is highly flexible, and it generally outperforms the algorithms available in the present literature. Our framework is problem-agnostic and general, yet, it can be customized to address problem-specific needs. Literature. Köppe et al. [35] pioneered IPGs by laying down their first formal definition. The authors also provided an algorithmic framework to enumerate PNE s when the players' utilities are differences of piecewise linear convex payoff functions. Although their approach is theoretically well-grounded, there is no computational evidence of its effectiveness. Indeed, even in some 2-player games (e.g., normal-form [6,43] and bimatrix [5] games) there are considerable computational challenges involved in the design of efficient algorithms for computing and selecting equilibria. Sagratella [45] proposed a branching method to enumerate the PNE s in IPGs where each u i (x i , x −i ) is convex in x i . 
More recently, Schwarze and Stein [48] extended the work of Sagratella [45] by proposing an improved branch-and-prune scheme that also drops the convexity assumption on the players' utilities. Del Pia et al. [21] focused on totally-unimodular congestion games, namely IPGs where players have totally-unimodular constraint sets X^i. They propose a strongly-polynomial time algorithm to find a PNE and derive some computational complexity results. Their results have been extended by Kleer and Schäfer [34]. More recently, Carvalho et al. [13] proposed a general-purpose cutting plane algorithm to compute a PNE in IPGs where each player's utility is linear in their variables and bilinear with respect to the other players' variables. However, their approach does not handle equilibria selection, and requires a specific structure on the players' objectives to derive the Karush-Kuhn-Tucker conditions associated with the linear relaxation of their optimization problems. An important family of techniques for computing Mixed-Strategy equilibria is that of support enumeration algorithms. The core idea is to determine if an equilibrium with a given support for each player - e.g., a subset of its strategies - exists in a normal-form game by solving a linear system of inequalities. Porter et al. [42] and Sandholm et al. [46] exploited this idea in the context of n-player normal-form games. Since equilibria in such games tend to have small supports, as proved theoretically by McLennan [37], support enumeration algorithms tend to be practically efficient in normal-form games. Inspired by the approach of Porter et al. [42], Carvalho et al. [14] introduced the sample generation method (SGM) to compute an equilibrium in separable IPGs (i.e., where each player's payoff takes the form of a sum-of-products) where players have bounded strategy sets. Their algorithm iteratively refines a sample of players' supports to compute an equilibrium or a correlated equilibrium (i.e., a generalization of the Nash equilibrium). However, the SGM does not handle the enumeration or selection of equilibria, nor can it prove that no equilibrium exists. Cronert and Minner [18] modified the SGM, extending the work of Carvalho et al. [14], by proposing an enumerative algorithm to compute all the equilibria under the additional assumption that all the players' variables are integer. They further complemented their approach with some considerations stemming from the theory of equilibria selection of Harsanyi [30]. Nevertheless, identifying the correct samples leading to equilibria in IPGs could be computationally cumbersome. While our approach shares a few elements with Cronert and Minner [18], it does not require any sampling in order to compute and select equilibria. This fundamental aspect leads to significant differences in terms of practical effectiveness and performance of the algorithms (see Section 5). Although the previous methodological works provide an insightful perspective on the computability and the selection of equilibria in IPGs, there are other significant intrinsic questions concerning the general nature of equilibria. Indeed, from the AGT standpoint, not all equilibria are created equal. Three paradigmatic questions in AGT and Game Theory are often: (i.) Does at least one PNE exist? (ii.) How good (or bad) is a PNE compared to the social optimum? (iii.) If more than one equilibrium exists, can one select the best PNE according to a given measure of quality? Establishing that a PNE does not exist may turn out to be a difficult task [19]. Nash proved that there always exists a Mixed-Strategy equilibrium in finite games, i.e., games with a finite number of strategies and players. In IPGs, where the set of players' strategies is large, deciding if a PNE exists is generally a Σ^p_2-hard decision problem in the polynomial hierarchy [12]. To measure the efficiency of equilibria, Koutsoupias and Papadimitriou [36] introduced the concept of Price of Anarchy (PoA), the ratio between the welfare value of the worst-possible equilibrium and the welfare value of a social optimum. Similarly, Anshelevich et al. [3], Schulz and Stier-Moses [47] introduced the Price of Stability (PoS), the ratio between the welfare value of the best-possible equilibrium and a social optimum's one. In the AGT literature, many works focus on providing theoretical bounds for the PoS and the PoA, often by exploiting the game's structural properties [3,4,15,40,44]. However, in practice, one may be interested in establishing the exact values of such prices in order to characterize the efficiency of equilibria in specific applications. This further highlights the need for general and effective algorithmic frameworks to select equilibria.

Contributions. In this work, we shed new light on the intersection between AGT and integer programming. We propose a new theoretical and algorithmic framework to efficiently and reliably compute, enumerate, and select PNEs for the IPGs in Definition 1. We summarize our contributions as follows: (i.) From a theoretical perspective, we provide a polyhedral characterization of the convex hull of the PNEs. We adapt the concepts of valid inequality, closure, and separation oracle to the domain of Nash equilibria. Specifically, we introduce the concept of equilibrium inequality to guide the
Furthermore, our framework smoothly integrates with existing mathematical programming solvers, allowing practitioners to exploit the capabilities of the available optimization technologies. (iii.) We evaluate our algorithmic framework on a range of applications and problems from the relevant works in the literature. We provide a solid benchmark against the existing approaches and show the flexibility and effectiveness of ZERO Regrets. The classes of games we select derive from practical applications (e.g., competitive facility locations, network design) and methodological studies and the associated benchmark instances (e.g., games among quadratic programs). First, we consider the Knapsack Game, an IPG where each player solves a binary knapsack problem. For this problem, we also provide theoretical results on the computational complexity of establishing the existence of PNE s and two problem-specific equilibrium inequalities. Second, we focus on a Network Formation Game, a well-known and intensely investigated problem in AGT , where players build a network over a graph via a cost-sharing mechanism. Third, we consider a Competitive Facility Location and Design game, where several sellers strategically decide the location and design of their facilities in order to maximize their revenues. Finally, we test our algorithm on a game where players solve integer problems with convex and non-convex quadratic objectives. ZERO Regrets outperforms any baseline, proving to be highly efficient in both enumerating and selecting PNE s. We remark that our framework can be extended to the non-linear case, i.e., when u i is non-linearizable. However, we focus on the linear case (i.) to provide geometrical, polyhedral, and combinatorial insights on the structure of Nash equilibria in IPGs, and (ii.) to foster the interaction with existing streams of research in Combinatorial Optimization. We structure the paper as follows. 
In Section 2, we introduce the fundamental definitions and terminology. In Section 3, we introduce the theoretical elements of our algorithmic framework. In Section 4, we describe our cutting plane algorithm, its separation oracle, and their extensions to compute approximate equilibria. In Section 5, we present an extensive computational campaign on the applications mentioned above, and, in Section 6, we provide some concluding remarks.

Definitions

We assume the reader is familiar with basic concepts of polyhedral theory and integer programming [17]. We introduce the notation and definitions related to an IPG instance G, where we omit explicit references to G when unnecessary. Let X^i be the set of feasible strategies (or the feasible set) of player i, and let any x̄^i ∈ X^i be a (pure) strategy for i. Any x̄ = (x̄^1, . . . , x̄^n), with x̄^i ∈ X^i for any i, is a strategy profile. The vector x^{-i} = (x^1, . . . , x^{i-1}, x^{i+1}, . . . , x^n) denotes the vector of i's opponents' (pure) strategies. The payoff for i under the profile x̄ is u^i(x̄^i, x̄^{-i}). We define S(x̄) = Σ_{i=1}^n u^i(x̄^i, x̄^{-i}) as the social welfare corresponding to a given strategy profile x̄.

Equilibria and Prices. A strategy x̄^i is a best-response strategy for player i given its opponents' strategies x̄^{-i} if u^i(x̄^i, x̄^{-i}) ≥ u^i(x̂^i, x̄^{-i}) for any x̂^i ∈ X^i; equivalently, we say i cannot profitably deviate from x̄^i to any x̂^i. Given a best-response x̂^i to x̄^{-i}, the difference u^i(x̂^i, x̄^{-i}) − u^i(x̄^i, x̄^{-i}) is called the regret of strategy x̄^i under x̄^{-i}. Let BR(i, x̄^{-i}) = {x̄^i ∈ X^i : u^i(x̄^i, x̄^{-i}) ≥ u^i(x̂^i, x̄^{-i}) ∀x̂^i ∈ X^i} be the set of best-responses for i under x̄^{-i}. A strategy profile x̄ is a PNE if, for any player i and any strategy x̂^i ∈ X^i, u^i(x̄^i, x̄^{-i}) ≥ u^i(x̂^i, x̄^{-i}), i.e., every x̄^i is a best-response to x̄^{-i} (all regrets are 0). Equivalently, in a PNE, no player i can unilaterally improve its payoff by deviating from its strategy x̄^i.
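The best-response, regret, and PNE definitions translate directly into a brute-force check over all profiles. The following is a minimal sketch under our own naming conventions, not the paper's algorithm; full enumeration is only feasible for very small games.

```python
from itertools import product

def regret(i, profile, strategy_sets, utilities):
    """Regret of player i's strategy in `profile` under the opponents' strategies:
    best unilateral-deviation payoff minus the payoff actually obtained."""
    deviations = (profile[:i] + (s,) + profile[i + 1:] for s in strategy_sets[i])
    return max(utilities[i](d) for d in deviations) - utilities[i](profile)

def pure_nash_equilibria(strategy_sets, utilities):
    """A profile is a PNE iff every player's regret is zero."""
    return [p for p in product(*strategy_sets)
            if all(regret(i, p, strategy_sets, utilities) == 0
                   for i in range(len(strategy_sets)))]
```

Selecting the best equilibrium for a welfare S(x) then amounts to maximizing the sum of utilities over the returned list; a two-strategy coordination game, for instance, yields both the efficient and the inefficient equilibrium.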
We define the optimal social welfare as OSW = max_{x^1,...,x^n} {S(x) : x^i ∈ X^i ∀i = 1, 2, . . . , n}. Given G, we denote by N = {x = (x^1, . . . , x^n) : x is a PNE for G} the set of its PNEs. Also, let N^i := {x^i : (x^i, x^{-i}) ∈ N}, with N^i ⊆ X^i, be the set of equilibrium strategies for i, namely the strategies of i appearing in at least one PNE. If N is not empty, let (i.) ẋ ∈ N be such that S(ẋ) ≤ S(x̄) for any x̄ ∈ N (i.e., the PNE with the worst welfare), and (ii.) ẍ ∈ N be such that S(ẍ) ≥ S(x̄) for any x̄ ∈ N (i.e., the PNE with the best welfare). Assuming w.l.o.g. OSW > 0 and S(ẍ) > 0, the PoA of G is OSW/S(ẋ), and the PoS is OSW/S(ẍ). These definitions of PoA and PoS hold when agents maximize a welfare function; when agents instead minimize their costs (e.g., the costs of routing packets in a network), we exchange numerator and denominator in both the PoA and the PoS.

Polyhedral Theory. For a set S, let conv(S) be its convex hull. For a polyhedron P, bd(P), ext(P), and int(P) are the boundary, the set of vertices (extreme points), and the interior of P, respectively. Let P ⊆ R^p and let x̄ ∉ P be a point in R^p. A cut is a valid inequality π^⊤ x ≤ π_0 for P violated by x̄, i.e., π^⊤ x̄ > π_0 and π^⊤ x ≤ π_0 for any x ∈ P. Given a point x̄ ∈ R^p and P, we define the separation problem as the task of determining that either (i.) x̄ ∈ P, or (ii.) x̄ ∉ P, returning a cut π^⊤ x ≤ π_0 for P and x̄. For each player i, the set conv(X^i) is the perfect formulation of X^i, namely an integral polyhedron whose vertices are in X^i.

Lifted Space and Equilibrium Inequalities

Cutting plane methods are attractive tools for integer programs, both from a theoretical and an applied perspective. The essential idea is to iteratively refine a relaxation of the original problem by cutting off fractional solutions via valid inequalities for the integer program's perfect formulation.
Nevertheless, in an IPG where the solution paradigm is the Nash equilibrium, we argue there exist stronger families of cuts, yet not necessarily valid for each player's perfect formulation conv(X^i). In fact, for any player i, some of its best-responses in bd(conv(X^i)) may never appear in a PNE, since no equilibrium strategies N^{-i} of i's opponents induce i to play such best-responses. In this work, we introduce a general class of inequalities to characterize the nature of conv(N). Such inequalities play a pivotal role in the cutting plane algorithm of Section 4.

Dominance and Rationality. We ground our reasoning in the concepts of rationality and dominance [7,41]. Given two strategies x̄^i ∈ X^i and x̂^i ∈ X^i for player i, x̄^i is strictly dominated by x̂^i if, for any choice of opponents' strategies x^{-i}, u^i(x̂^i, x^{-i}) > u^i(x̄^i, x^{-i}). A rational player will never play dominated strategies. This also implies that no player i would play any strategy in int(conv(X^i)). Since dominated strategies, by definition, are never best-responses, they will never be part of any PNE. In Example 1, the set X^2 consists of 3 strategies (x^2_1, x^2_2) = (0, 0), (1, 0), (0, 1). Yet, (x^2_1, x^2_2) = (0, 0) is dominated by (x^2_1, x^2_2) = (0, 1), and the latter is dominated by (x^2_1, x^2_2) = (1, 0). However, when considering player 1, we need the assumption of common knowledge of rationality to conclude which strategy the player will play. Player 1 needs to know that player 2 would never play x^2_2 = 1 to declare (x^1_1, x^1_2) = (0, 1) dominated by (x^1_1, x^1_2) = (1, 0). When searching for a PNE in this example, it follows that N^1 = {(x^1_1, x^1_2) = (1, 0)} and N^2 = {(x^2_1, x^2_2) = (1, 0)}. This inductive (and iterative) process of removal of strictly dominated strategies is known as the iterated elimination of dominated strategies (IEDS). This process produces tighter sets of strategies and never excludes any PNE from the game [50, Ch. 4].
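IEDS can be sketched as a fixed-point loop; below we run it on the data of Example 1 (stated next). The function name and the profile representation are our own illustrative choices. As the text states, the process reduces both players' sets to the single strategy (1, 0).

```python
from itertools import product

def iterated_elimination(strategy_sets, utilities):
    """Repeatedly delete strictly dominated strategies from each player's set.
    utilities[i] maps a full profile (one strategy per player) to player i's payoff."""
    sets = [list(X) for X in strategy_sets]
    changed = True
    while changed:
        changed = False
        for i in range(len(sets)):
            opponents = [sets[j] for j in range(len(sets)) if j != i]

            def payoff(s, opp):
                profile = list(opp)
                profile.insert(i, s)  # place i's strategy back at position i
                return utilities[i](tuple(profile))

            survivors = [s for s in sets[i]
                         if not any(t != s and
                                    all(payoff(t, o) > payoff(s, o)
                                        for o in product(*opponents))
                                    for t in sets[i])]
            if len(survivors) < len(sets[i]):
                sets[i], changed = survivors, True
    return sets

# Example 1 data: X^1 = X^2 = {(0,0), (1,0), (0,1)} under 3 x_1 + 2 x_2 <= 4.
u1 = lambda p: 6 * p[0][0] + p[0][1] - 4 * p[0][0] * p[1][0] + 6 * p[0][1] * p[1][1]
u2 = lambda p: 4 * p[1][0] + 2 * p[1][1] - p[1][0] * p[0][0] - p[1][1] * p[0][1]
X = [(0, 0), (1, 0), (0, 1)]
```

Here `iterated_elimination([X, X], [u1, u2])` returns [[(1, 0)], [(1, 0)]], matching the sets N^1 and N^2 reported in the text.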
Example 1. Consider the IPG where player 1 solves max x 1 {6x 1 1 + x 1 2 − 4x 1 1 x 2 1 + 6x 1 2 x 2 2 : 3x 1 1 + 2x 1 2 ≤ 4, x 1 ∈ {0, 1} 2 }, and player 2 solves max x 2 {4x 2 1 + 2x 2 2 − x 2 1 x 1 1 − x 2 2 x 1 2 : 3x 2 1 + 2x 2 2 ≤ 4, x 2 ∈ {0, 1} 2 }. The only PNE is (x 1 1 ,x 1 2 ) = (1, 0), (x 2 1 ,x 2 2 ) = (1, 0) with a welfare of S(x) = 5, u 1 (x 1 ,x 2 ) = 2, and u 2 (x 2 ,x 1 ) = 3. In the same fashion of IEDS , we propose a family of inequalities that cuts off -from each player's feasible set -the strategies that never appear in a PNE . Thus, from an IPG instance G, we aim to derive an instance G ′ where N i replaces each player's feasible set X i . Note that, since all X i are finite sets, all N i are finite as well as the number of PNE s. A Lifted Space Given the social welfare S(x), we aim to find the PNE maximizing it, namely, we aim to perform equilibria selection. In this context, the first urgent question is what space should we work in. Since mutually optimal strategies define PNE s, a natural choice is to consider a space of all players' variables x. As mentioned in the introduction, we assume the existence of a higher-dimensional (lifted) space where we linearize the non-linear terms in any u i (·) via auxiliary variables z and corresponding constraints (e.g., Sherali and Adams [49], Vielma [52]). Although our scheme holds for an arbitrary f (x) : n i=1 X i → R we can linearize to f (x, z), we focus on S(x) and the corresponding higherdimensional S(x, z) defined in the lifted space. Let L be the set of (i.) linear constraints necessary to linearize the non-linear terms, and (ii.) the integrality requirements and bounds on the z variables. The lifted space is then K = {(x 1 , . . . , x n , z) ∈ L, x i ∈ X i for any i = 1, . . . , n}.(2) Any vector x 1 , . . . , x n , z in (2) corresponds to a unique strategy profile x = (x 1 , . . . , x n ), since x induces z. 
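The claim that any point of (2) corresponds to a unique strategy profile can be verified by enumerating the integer points of $K$ for Example 1's game; the bilinear terms $x^1_j x^2_j$ are linearized with the standard McCormick-style constraints used later in Examples 2 and 3. A minimal sketch:

```python
from itertools import product

# Integer points of the lifted set K for Example 1: z_j linearizes the
# product x^1_j * x^2_j via z_j <= x^1_j, z_j <= x^2_j,
# z_j >= x^1_j + x^2_j - 1.
X = [x for x in product((0, 1), repeat=2) if 3 * x[0] + 2 * x[1] <= 4]
K = [(x1, x2, z)
     for x1 in X for x2 in X for z in product((0, 1), repeat=2)
     if all(z[j] <= x1[j] and z[j] <= x2[j] and z[j] >= x1[j] + x2[j] - 1
            for j in (0, 1))]
```

For binary variables these constraints force $z_j = x^1_j x^2_j$ exactly, so each of the 9 feasible strategy profiles induces one and only one $z$.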
K is then a set defined by linear constraints and integer requirements, and thus it is reasonable to deal with conv(K) and some of its projections. For brevity, let proj x conv(K) = {x = (x 1 , . . . , x n ) : ∃z s.t. (x 1 , . . . , x n , z) ∈ conv(K)}, and let u i (x i , x −i ) include the z variables when working in the space of conv(K). Equilibrium Inequalities The integer points in proj x (conv(K)) encompass all the game's strategy profiles. However, we need to focus on E = {(x 1 , . . . , x n , z) ∈ conv(K) : (x 1 , . . . , x n ) ∈ conv(N )}, since projecting out z yields the convex hull of PNE profiles conv(N ). By definition E is a polyhedron, and proj x i (E) = conv(N i ). The role of E is similar to the one of a perfect formulation for an integer program. As optimizing a linear function over a perfect formulation results in an integer optimum, optimizing a linear function S(x, z) over E results in a PNE . For this reason, we call E the perfect equilibrium formulation for G. Also, the equivalent of the integrality gap in integer programming is the PoS , namely the ratio between the optimal value of f (x, z) over conv(K) and E, respectively. All considered, we establish the concept of equilibrium inequality, a valid inequality for E. Definition 2 (Equilibrium Inequality). Consider an IPG instance G. An inequality is an equilibrium inequality for G if it is a valid inequality for E. A Class of Equilibrium Inequalities. We introduce a generic class of equilibrium inequalities that are linear in the space of conv(K). Consider any strategyx i ∈ X i for i: for any i's opponents' strategy x −i , u i (x i , x −i ) provides a lower bound on i's payoff sincex i ∈ X i (i.e.,x i is a feasible point). Then, u i (x i , x −i ) ≤ u i (x i , x −i ) holds for every player i. We introduce such inequalities in Proposition 1. Proposition 1. Consider an IPG instance G. 
For any player i andx i ∈ X i , the inequality u i (x i , x −i ) ≤ u i (x i , x −i ) is an equilibrium inequality. Proof. If a point (x,z) ∈ E, thenx ∈ conv(N ). First, consider the case wherex ∈ ext(conv(N )), namelyx ∈ N by definition. Assume (x,z) violates the inequality associated with at least a player i, then, u i (x i ,x −i ) > u i (x i ,x −i ) . Therefore, i can profitably deviate fromx i tox i underx −i , which contradictsx ∈ N and (x,z) ∈ E. Thus, no point (x,z) ∈ E withx ∈ ext(conv(N )) violates the inequality. Since we can represent any point (x,z) ∈ E as a convex combination of the extreme points of conv(N ), the proposition holds by iterating the previous reasoning for each extreme point in the support of (x,z). A fundamental issue is whether the inequalities of Proposition 1 are sufficient to define the set E. By modulating the concept of closure introduced by Chvátal [16], we prove this is indeed the case. We define the equilibrium closure as the convex hull of the points in K satisfying the equilibrium inequalities of Proposition 1. Theorem 1. Consider an IPG instance G where |N | = 0. Let the equilibrium closure given by the equilibrium inequalities of Proposition 1 be P e := conv (x, z) ∈ K : u i (x i , x −i ) ≤ u i (x i , x −i ) ∀x :x i ∈ BR(i,x −i ), i = 1, . . . , n , where the equilibrium inequalities consider only the best-responsesx i for any player i. Then, (i.) P e is a rational polyhedron, (ii.) there exists no point (x, z) ∈ int(P e ) such that x ∈ Z nm , (iii.) P e = E. Proof. Proof of (i.) The set K is finite since any X i is finite, the number of best-responses and, correspondingly, of equilibrium inequalities, is finite. Both equilibrium inequalities and the inequalities defining X i have integer coefficients. Therefore, P e is a rational polyhedron. Proof of (ii.) Assume there exists a point (x,z) ∈ int(P e ) such thatx ∈ Z nm . By definition of Nash equilibrium,x ∈ N since (x,z) satisfies all the equilibrium inequalities in P e . 
However, since (x,z) ∈ int(P e ), then no equilibrium inequality can be tight, contradicting the factx is a PNE . Therefore, there cannot exist any (x,z) ∈ int(P e ) such thatx ∈ Z nm . This also implies that all PNE s lie on the boundary of P e . Proof of (iii.) Since P e contains all the equilibrium inequalities generated by the players' best-responses, then any (x,z) ∈ E belongs to P e as of Proposition 1, and E ⊆ P e . Let (x,ẑ) be a point in ext(P e ). By definition, (x,ẑ) is an integer point, and it corresponds to a PNE . Indeed, non-equilibria integer points cannot belong to P e since they would violate at least one equilibrium inequality associated with the players' best-responses. Equivalently, for any (x,ẑ) ∈ ext(P e ), its projection proj x =x is in N . Since all PNE s are on the boundary of P e , P e = E necessarily. Throughout the proof of Theorem 1, we show that P e yields indeed the perfect equilibrium formulation E. Although the description of P e may contain an exponential number of possibly redundant equilibrium inequalities, it precisely describes the set of PNE s in the lifted space. In Example 2, we showcase the construction P e via Theorem 1 for a small IPG. Example 2. Consider an IPG where player 1 solves max x 1 {x 1 1 + 3x 1 2 + 7x 1 3 − 6x 1 1 x 2 1 + 3x 1 2 x 2 2 + 2x 1 3 x 2 3 : 6x 1 1 + 4x 1 2 + 5x 1 3 ≤ 7, x 1 ∈ {0, 1} 3 }, and player 2 solves max x 2 {9x 2 1 + 9x 2 2 + 2x 2 3 − 6x 2 1 x 1 1 + 5x 2 2 x 1 2 + 7x 2 3 x 1 3 : 4x 2 1 + 2x 2 2 + 5x 2 3 ≤ 5, x 2 ∈ {0, 1} 3 }. There are 4 feasible strategies for each player i, namely, (x i 1 , x i 2 , x i 3 ) = (0, 0, 0) ∨ (0, 0, 1) ∨ (0, 1, 0) ∨ (1, 0, 0). The 3 PNE s of this game are: (i.)x 1 = (0, 0, 1) andx 2 = (0, 0, 1) with u 1 (x 1 ,x 2 ) = 9 and u 2 (x 2 ,x 1 ) = 9, (ii.)x 1 = (0, 0, 1) andx 2 = (0, 1, 0) with u 1 (x 1 ,x 2 ) = 7 and u 2 (x 2 ,x 1 ) = 9, (iii.)x 1 = (0, 0, 1) andx 2 = (1, 0, 0) with u 1 (x 1 ,x 2 ) = 7 and u 2 (x 2 ,x 1 ) = 9. 
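The three equilibria listed for Example 2 can be confirmed by brute-force enumeration of mutual best-responses; the payoff values below match the ones reported in the example. A sketch, not part of the paper's method:

```python
from itertools import product

# Feasible sets: binary triples satisfying each player's knapsack constraint.
X1 = [x for x in product((0, 1), repeat=3) if 6*x[0] + 4*x[1] + 5*x[2] <= 7]
X2 = [x for x in product((0, 1), repeat=3) if 4*x[0] + 2*x[1] + 5*x[2] <= 5]

def u1(a, b):
    return a[0] + 3*a[1] + 7*a[2] - 6*a[0]*b[0] + 3*a[1]*b[1] + 2*a[2]*b[2]

def u2(b, a):
    return 9*b[0] + 9*b[1] + 2*b[2] - 6*b[0]*a[0] + 5*b[1]*a[1] + 7*b[2]*a[2]

# a profile is a PNE iff each component is a best response to the other
pnes = [(a, b) for a in X1 for b in X2
        if u1(a, b) == max(u1(d, b) for d in X1)
        and u2(b, a) == max(u2(d, a) for d in X2)]
```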
We linearize the game by introducing 3 variables z j ∈ {0, 1} for any player's variable j ∈ {1, 2, 3} such that z j = 1 if and only if x 1 j = x 2 j = 1. We model these implications through the constraints z j ≤ x i j and z j ≥ x 1 j + x 2 j − 1 for any player i and variable j. Hence, K = x 1 ∈ {0, 1} 3 , x 2 ∈ {0, 1} 3 , z ∈ {0, 1} 3 : 6x 1 1 + 4x 1 2 + 5x 1 3 ≤ 7, 4x 2 1 + 2x 2 2 + 5x 2 3 ≤ 5 z j ≤ x 1 j , z j ≤ x 2 j , z j ≥ x 1 j + x 2 j − 1 ∀j ∈ {1, 2, 3} . Correspondingly, the two players' utility functions in the linearized space are given by the two linear expressions u 1 (x 1 , x 2 ) = x 1 1 +3x 1 2 +7x 1 3 −6z 1 +3z 2 +2z 3 and u 2 (x 2 , x 1 ) = 9x 2 1 +9x 2 2 +2x 2 3 −6z 1 +5z 2 +7z 3 , respectively. On the one hand, the best-response of player 1 to any of player 2's feasible strategies isx 1 = (0, 0, 1), i.e., BR(1,x 2 ) = {(0, 0, 1)} for any feasible strategyx 2 . The equilibrium inequality associ- ated withx 1 = (0, 0, 1) is 7 + 2x 2 3 ≤ x 1 1 + 3x 1 2 + 7x 1 3 − 6z 1 + 3z 2 + 2z 3 . The left-hand side of the inequality represents u 1 (x 1 , x 2 ), namely player 1's utility function evaluated onx 1 . On the other hand, player 2's best-responses and the associated equilibrium inequalities are: (i.) x 2 = (1, 0, 0) with the inequality 9 − 6x 1 1 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3 , (ii.)x 2 = (0, 1, 0) with the inequality 9 + 5x 1 2 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3 , (iii.)x 2 = (0, 0, 1) with the inequality 2 + 7x 1 3 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3 . Therefore, P e = conv        (x, z) ∈ K : 7 + 2x 2 3 ≤ x 1 1 + 3x 1 2 + 7x 1 3 − 6z 1 + 3z 2 + 2z 3 9 − 6x 1 1 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3 9 + 5x 1 2 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3 2 + 7x 1 3 ≤ 9x 2 1 + 9x 2 2 + 2x 2 3 − 6z 1 + 5z 2 + 7z 3        . 
By explicitly computing the above convex hull, we obtain P e = (x, z) : x 2 1 ≥ 0, x 2 2 ≥ 0, x 2 3 ≥ 0, x 1 1 = 0, x 1 2 = 0, x 1 3 = 1, x 2 1 + x 2 2 + x 2 3 = 1, z 1 = 0, z 2 = 0, x 2 1 + x 2 2 + z 3 = 1 . The projections onto the x space of the extreme points of P e correspond to the 3 PNE s, and thus P e = E. The Cutting Plane Algorithm and its Oracle If an oracle gives us E in the form of a set of linear inequalities, then an optimal solution to max x 1 ,...,x n ,z {f (x, z) : (x, z) ∈ E} (i.e., a linear program) that is also an extreme point of E is a PNE for G for any function f (x, z). However, there are two major issues. First, E ⊆ conv(K), and conv(K) is a perfect formulation described by a possibly large number of inequalities. Second, retrieving E through Theorem 1 may still require a large number of inequalities. In practice, we actually do not need E nor conv(K): a more reasonable goal is to get a polyhedron containing conv(K) over which we can optimize f (x, z) efficiently and obtain an integer solution (i.e., x ∈ K) that is also a PNE . The first step is to obtain an integer solution. We could deploy branching schemes and known families of integer programming cutting planes, which are also equilibrium inequalities since they are valid for E. Equivalently, we can exploit a Mixed-Integer Programming (MIP ) solver to solve max x 1 ,...,x n ,z {f (x, z) : (x, z) ∈ K}. If the maximizer is a PNE , the algorithm terminates. Otherwise, the second step is to cut off such maximizer, since it is not a PNE , by separating at least an equilibrium inequality of Proposition 1. Equilibrium Separation Oracle. Given a point (x,z), for instance, the point returned by a MIP solver, the central question is to decide ifx ∈ N , and, if not, to derive an equilibrium inequality to cut off (x,z). If we use the equilibrium inequalities of Proposition 1, the process terminates in a finite number of iterations, since Theorem 1. In the spirit of Grötschel et al. 
[28], Karp and Papadimitriou [32], we define a separation oracle for the equilibrium inequalities and $E$. The equilibrium separation oracle solves the equilibrium separation problem of Definition 3. Definition 3 (Equilibrium Separation Problem). Consider an IPG instance $G$. Given a point $(\bar x, \bar z)$, the equilibrium separation problem is the task of determining that either: (i.) $(\bar x, \bar z) \in E$, or (ii.) $(\bar x, \bar z) \notin E$, returning an equilibrium inequality violated by $(\bar x, \bar z)$. Algorithm 1 presents our separation oracle for the inequalities of Proposition 1. Given $(\bar x, \bar z)$ and an empty set of linear inequalities $\phi$, the algorithm outputs either (i.) yes if $(\bar x, \bar z) \in E$, or (ii.) no and a set $\phi$ of violated equilibrium inequalities if $(\bar x, \bar z) \notin E$. The algorithm separates at most one inequality per player. By definition, $\bar x^i$ must be a best-response for $\bar x$ to be part of a PNE. Therefore, for any player $i$, the algorithm solves $\max_{x^i} \{u^i(x^i, \bar x^{-i}) : A^i x^i \le b^i, x^i \in \mathbb{Z}^m\}$, and lets $\tilde x^i$ be one of its maximizers. If $u^i(\tilde x^i, \bar x^{-i}) = u^i(\bar x^i, \bar x^{-i})$, then $\bar x^i$ is also a best-response. However, if $u^i(\tilde x^i, \bar x^{-i}) > u^i(\bar x^i, \bar x^{-i})$, the algorithm adds to $\phi$ the equilibrium inequality $u^i(\tilde x^i, x^{-i}) \le u^i(x^i, x^{-i})$, which is violated by $(\bar x, \bar z)$. After considering all players, if $|\phi| = 0$, then $\bar x$ is a PNE and the answer is yes. Otherwise, the algorithm returns no and $\phi \ne \emptyset$, i.e., at least one equilibrium inequality cutting off $(\bar x, \bar z)$. Algorithm 1: Equilibrium Separation Oracle. Data: An IPG instance $G$, a point $(\bar x, \bar z)$, and a set of cuts $\phi = \emptyset$. Result: Either: (i.) yes if $(\bar x, \bar z) \in E$, or (ii.) no and $\phi$. 1: for $i \leftarrow 1$ to $n$ do 2: $\tilde x^i \leftarrow \arg\max_{x^i} \{u^i(x^i, \bar x^{-i}) : A^i x^i \le b^i, x^i \in \mathbb{Z}^m\}$; 3: if $u^i(\tilde x^i, \bar x^{-i}) > u^i(\bar x^i, \bar x^{-i})$ then 4: add $u^i(\tilde x^i, x^{-i}) \le u^i(x^i, x^{-i})$ to $\phi$; 5: if $|\phi| = 0$ then return yes; 6: else return no and $\phi$. ZERO Regrets. We present our cutting plane algorithm ZERO Regrets in Algorithm 2.
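A brute-force analogue of Algorithm 1 replaces the integer-programming best-response computation with enumeration over each player's (small, explicit) strategy set; it is a sketch of the oracle's logic, not the MIP-based implementation. Tested on Example 1's game:

```python
from itertools import product

def separation_oracle(X, utils, profile):
    """For every player i, compute a best response to the others' strategies
    in `profile`; if it strictly improves on profile[i], record the data
    (i, best response) of a violated equilibrium inequality."""
    cuts = []
    for i in range(len(profile)):
        others = tuple(p for k, p in enumerate(profile) if k != i)
        best = max(X[i], key=lambda s: utils[i](s, others))
        if utils[i](best, others) > utils[i](profile[i], others):
            cuts.append((i, best))
    return len(cuts) == 0, cuts

# Example 1's game: two players, each seeing a 1-tuple of opponent strategies.
X1 = [x for x in product((0, 1), repeat=2) if 3 * x[0] + 2 * x[1] <= 4]
utils = [lambda s, o: 6*s[0] + s[1] - 4*s[0]*o[0][0] + 6*s[1]*o[0][1],
         lambda s, o: 4*s[0] + 2*s[1] - s[0]*o[0][0] - s[1]*o[0][1]]

ok, cuts = separation_oracle([X1, X1], utils, ((1, 0), (0, 1)))  # not a PNE
ok2, _ = separation_oracle([X1, X1], utils, ((1, 0), (1, 0)))    # the PNE
```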
The inputs are an instance G, and a function f (x), while the output is either the PNEẍ maximizing f (x), or a certificate that no PNE exists. Let Φ be a set of equilibrium inequalities, and Q = max x 1 ,...,x n ,z {f (x, z) : (x, z) ∈ K, (x, z) s.t. Φ}. We assume Q is feasible and bounded. Otherwise, there would be no rationale behind getting a PNE with an arbitrarily bad welfare. At each iteration, we compute an optimal solution (x,z) of Q. Afterwards, the equilibrium separation oracle (Algorithm 1) evaluates (x,z). If the oracle returns yes, thenẍ =x is the PNE maximizing f (x) in G. Otherwise, the oracle returns a set φ of equilibrium inequalities cutting off (x,z), and the algorithm adds φ to Φ. Therefore, the process restarts by solving Q with the additional set of constraints. If at any iteration Q becomes infeasible, then G has no PNE . We remark that Theorem 1 implies both correctness and finite termination of Algorithm 2. Although it is sufficient to add just one equilibrium inequality in φ cutting off Algorithm 2: ZERO Regrets Data: An IPG instance G, and a function f (x). Result: Either: (i.) the PNEẍ maximizing f (x), or (ii.) no PNE 1 Φ = {0 ≤ 1}, and Q = max x 1 ,...,x n ,z {f (x, z) : (x, z) ∈ K, (x, z) s.t. Φ}; 2 while true do 3 if Q is infeasible then return no PNE ; 4 (x,z) = arg max Q; φ = ∅ ; 5 if EquilibriumSeparationOracle(G, (x,z), φ) is yes then 6 returnẍ =x; 7 else add φ to Φ ; the incumbent solution (x,z), we expect that a good trade-off between |φ| = 1 and |φ| = n may speed up the convergence of the algorithm. This includes, for instance, also adding non-violated equilibrium inequalities. In Example 3, we provide a toy example of ZERO Regrets. Example 3. Consider the game in Example 1 where player 1 solves max x 1 {6x 1 1 + x 1 2 − 4x 1 1 x 2 1 + 6x 1 2 x 2 2 : 3x 1 1 + 2x 1 2 ≤ 4, x 1 ∈ {0, 1} 2 }, and player 2 solves max x 2 {4x 2 1 + 2x 2 2 − x 2 1 x 1 1 − x 2 2 x 1 2 : 3x 2 1 + 2x 2 2 ≤ 4, x 2 ∈ {0, 1} 2 }. 
As in Example 2, to linearize the players' utility functions we introduce two binary variables $z_1$ and $z_2$, equal to 1 if both players select items 1 and 2, respectively. The linearization constraints are $z_1 \le x^1_1$, $z_1 \le x^2_1$, $z_1 \ge x^1_1 + x^2_1 - 1$, $z_2 \le x^1_2$, $z_2 \le x^2_2$, $z_2 \ge x^1_2 + x^2_2 - 1$. Thus, player 1's utility function is $6x^1_1 + x^1_2 - 4z_1 + 6z_2$, and player 2's utility function is $4x^2_1 + 2x^2_2 - z_1 - z_2$. Correspondingly, the problem $Q$ maximizing the social welfare is $\max_{(x^1, x^2, z)} \; 6x^1_1 + x^1_2 + 4x^2_1 + 2x^2_2 - 5z_1 + 5z_2$ subject to $3x^1_1 + 2x^1_2 \le 4$, $3x^2_1 + 2x^2_2 \le 4$, $z_j \le x^1_j$, $z_j \le x^2_j$, $z_j \ge x^1_j + x^2_j - 1$ for $j = 1, 2$, and $x^1_j, x^2_j, z_j \in \{0, 1\}$ for $j = 1, 2$. An optimal solution of the problem is $(\bar x^1_1, \bar x^1_2) = (1, 0)$, $(\bar x^2_1, \bar x^2_2) = (0, 1)$, $\bar z_1 = \bar z_2 = 0$, with a social welfare of 8 and players' utility values of 6 and 2, respectively. However, this solution is not a PNE. In fact, player 2's best-response to $\bar x^1$ is $(x^2_1, x^2_2) = (1, 0)$, with a utility value of 3 instead of 2 (player 1, too, could profitably deviate from $\bar x^1$ to $(0, 1)$, with a utility of 7 instead of 6). From player 2, we derive the equilibrium inequality $4 - x^1_1 \le 4x^2_1 + 2x^2_2 - z_1 - z_2$ cutting off $(\bar x, \bar z)$. After adding this equilibrium inequality to $Q$, the optimal solution is $(\bar x^1_1, \bar x^1_2) = (1, 0)$, $(\bar x^2_1, \bar x^2_2) = (1, 0)$, $\bar z_1 = 1$, $\bar z_2 = 0$, with utility values 2 and 3 and a welfare of 5. Since $\bar x$ is a PNE, the algorithm terminates, having found a PNE with a PoS of 8/5. Game-theoretical Interpretation. We provide a straightforward game-theoretical interpretation of ZERO Regrets. When optimizing $f(x, z)$ over $K$, the algorithm acts as a central authority (e.g., a central planner) producing a solution that optimizes the welfare. Afterward, it proposes the solution to each player, who evaluates it through the equilibrium separation oracle.
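The full cutting-plane loop of Example 3 can be mimicked in miniature: below, "solving $Q$" becomes an argmax over explicitly enumerated profiles, and "adding a violated equilibrium inequality" becomes discarding the profiles that violate it. This is a toy rendition of ZERO Regrets under these substitutions, not the MIP-based algorithm.

```python
from itertools import product

def zero_regrets_bf(X, utils, welfare):
    """Brute-force sketch of the ZERO Regrets loop."""
    candidates = list(product(*X))
    while candidates:
        best = max(candidates, key=welfare)            # optimal solution of Q
        cuts = []
        for i in range(len(X)):
            others = tuple(p for k, p in enumerate(best) if k != i)
            br = max(X[i], key=lambda s: utils[i](s, others))
            if utils[i](br, others) > utils[i](best[i], others):
                cuts.append((i, br))                   # violated inequality

        if not cuts:
            return best                                # PNE maximizing welfare

        def satisfies(p, i, br):                       # u_i(br,.) <= u_i(p_i,.)
            others = tuple(q for k, q in enumerate(p) if k != i)
            return utils[i](p[i], others) >= utils[i](br, others)

        candidates = [p for p in candidates
                      if all(satisfies(p, i, br) for i, br in cuts)]
    return None                                        # no PNE exists

# Example 1 / Example 3's game.
X1 = [x for x in product((0, 1), repeat=2) if 3 * x[0] + 2 * x[1] <= 4]
utils = [lambda s, o: 6*s[0] + s[1] - 4*s[0]*o[0][0] + 6*s[1]*o[0][1],
         lambda s, o: 4*s[0] + 2*s[1] - s[0]*o[0][0] - s[1]*o[0][1]]
welfare = lambda p: utils[0](p[0], (p[1],)) + utils[1](p[1], (p[0],))
pne = zero_regrets_bf([X1, X1], utils, welfare)
```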
The latter acts as a rationality blackbox, in the sense that the oracle advises each player i whether the proposed strategy is acceptable or not. In other words, the rationality blackbox tells the player i if it should selfishly (and rationally) deviate to a better strategy, ignoring the central authority's advice. On the one hand, if the rationality blackbox says the solution is acceptable for player i, then the player knows through the oracle that it should accept the proposed strategy. On the other hand, if at least one player i refuses the proposed solution, the central authority should exclude such a solution and formulate a new proposal. Namely, it should cut off the non-equilibrium strategy and compute a new solution maximizing the welfare. Extensions We showcase the flexibility of our algorithmic framework by proposing two extensions to ZERO Regrets. Specifically, to address broader practical needs, we propose two extensions for enumerating PNE s and computing approximate PNE s. Enumerating PNEs. We can easily tune ZERO Regrets to enumerate all the PNE s in N as follows. In Line 6 of Algorithm 2, instead of terminating and returningẍ, we memorizeẍ and add an (invalid) inequality cutting off the PNE from E. Since all x variables are integer-constrained, such inequality can be, for instance, an hamming distance fromx. The algorithm will possibly compute a new PNE , cut it off (e.g., through a hamming distance constraint), and move the search towards the next equilibrium. Eventually, Q will become infeasible, thus certifying that the algorithm enumerated all the existing PNE s. Approximating PNEs. An absolute ǫ-PNE is a PNE where each player can deviate at most by a value ǫ for any best-response [40], namely, where the regret for each player is at most ǫ. Absolute ǫ-PNE s may be a reasonable compromise whenever no PNE exists. 
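The two extensions just outlined, enumerating equilibria by cutting off each PNE found and measuring per-player regret for absolute ε-PNEs, can both be illustrated on Example 2's game. Cutting off a found PNE is emulated here by excluding it from the search rather than by a hamming-distance constraint; a sketch, not the paper's implementation:

```python
from itertools import product

# Example 2's game.
X1 = [x for x in product((0, 1), repeat=3) if 6*x[0] + 4*x[1] + 5*x[2] <= 7]
X2 = [x for x in product((0, 1), repeat=3) if 4*x[0] + 2*x[1] + 5*x[2] <= 5]

def u1(a, b):
    return a[0] + 3*a[1] + 7*a[2] - 6*a[0]*b[0] + 3*a[1]*b[1] + 2*a[2]*b[2]

def u2(b, a):
    return 9*b[0] + 9*b[1] + 2*b[2] - 6*b[0]*a[0] + 5*b[1]*a[1] + 7*b[2]*a[2]

def regrets(a, b):
    """Best-response payoff minus current payoff, per player. (a, b) is a
    PNE iff both regrets are 0, and an absolute eps-PNE iff both <= eps."""
    return [max(u1(d, b) for d in X1) - u1(a, b),
            max(u2(d, a) for d in X2) - u2(b, a)]

def enumerate_pnes():
    """Find a PNE, exclude it, repeat until the search comes up empty."""
    found = []
    while True:
        nxt = next(((a, b) for a in X1 for b in X2
                    if (a, b) not in found and regrets(a, b) == [0, 0]), None)
        if nxt is None:
            return found
        found.append(nxt)

all_pnes = enumerate_pnes()
eps_profile = ((0, 1, 0), (0, 1, 0))  # not a PNE, but an absolute 1-PNE
```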
Although any PNE is an absolute ǫ-PNE with ǫ = 0, one may be interested in computing an absolute ǫ-PNE with an upper bound on ǫ while maximizing f (x, z). We can adapt our algorithmic framework to compute an absolute ǫ-PNE as follows. We introduce a bounded continuous variable ǫ in Q, and we let Algorithm 1 separate equilibrium inequalities in the form of u i (x i , x −i ) − ǫ ≤ u i (x i , x −i ). Depending on the application of interest, one may still optimize the function f (x, z) or minimize ǫ without affecting the correctness of the algorithm. A similar modification enables the algorithm to handle relative ǫ-PNE , i.e., a profile of strategies where the payoff of each player's strategy is at least ǫ times the best-response payoff. Given a constant ǫ, the corresponding equilibrium inequalities are u i (x i , x −i )ǫ ≤ u i (x i , x −i ). Applications We evaluate ZERO Regrets on a wide range of problems from relevant works in the literature. We aim to provide a solid benchmark against the existing solution approaches and show the effectiveness of ZERO Regrets in selecting and enumerating equilibria. The games we select stem from practical applications (e.g., competitive facility locations, network design) and methodological studies with the associated benchmark instances (e.g., games among quadratic programs). Specifically, we consider the following games: (i.) The Knapsack Game (KPG) [10,13,14], where each player solves a binary knapsack problem. We select the equilibrium maximizing the social welfare, and we provide theoretical results on the complexity of deciding whether a PNE exists. We also introduce two problem-specific equilibrium inequalities. (ii.) The Network Formation Game (NFG) [4,15] -a paradigmatic game in AGT with plenty of applications in network design -where players seek to build a network through a cost-sharing mechanism. We select the equilibrium maximizing the social welfare. (iii.) 
The Competitive Facility Location and Design game (CFLD ) [1,18], where each player decides both the location of its facilities and their "design" (i.e., the facilities' features) while competing for customer demand. As in the KPG and the NFG, we focus on finding the PNE maximizing the social welfare. (iv.) The Quadratic IPGs (qIPGs) introduced by Sagratella [45] and recently considered in Schwarze and Stein [48], where each player optimizes a (non-convex) quadratic function over box constraints and integrality requirements. As in the original papers, we focus on enumerating all the existing PNE s, or determine that none exists. In what follows, we briefly describe the previous games and present the associated computational results 1 . The Knapsack Game The KPG is an IPG among n players, where each player i solves a binary knapsack problem with m items in the form of max x i m j=1 p i j x i j + n k=1,k =i m j=1 C i k,j x i j x k j : m j=1 w i j x i j ≤ b i , x i ∈ {0, 1} m .(3) As in the classical knapsack problem [33], we assume that the profits p i j , weights w i j and capacities b i are in Z + 0 . The selection of an item j by a player k = i impacts either negatively or positively the profit p i j for player i through integer coefficients C i k,j . Clearly, given the strategies of the other players x −i , computing a corresponding best-response for player i is N P-hard. We can apply our algorithmic framework by linearizing the bilinear products x i j x k j (for any i, k, j) with O(mn 2 ) auxiliary variables and additional constraints (see Example 2). Carvalho [10] introduced the game with n = 2 and p i j = 0 ∀j = 1, . . . , m, i = 1, 2. Carvalho et al. [13,14] consider a more general game variant allowing p i j and w i j to take negative integer values. However, their algorithms focus on Mixed-Strategy equilibria and cannot perform exact equilibria selection. 
In Theorem 2, we formally prove that deciding if a KPG instance has a PNE, even with two players, is $\Sigma^p_2$-complete in the polynomial hierarchy, matching the result of Carvalho et al. [14] for general IPGs. Theorem 2. Deciding if a KPG instance has a PNE is a $\Sigma^p_2$-complete problem. The proof, where we perform a reduction from the $\Sigma^p_2$-complete DeNegre Bilevel Knapsack Problem [9,22,23], is in the appendix. Furthermore, we show that, even when at least one PNE exists, the PoS and the PoA can be arbitrarily bad. Proposition 2. There exist KPG instances admitting a PNE where both the PoA and the PoS are arbitrarily large. Proof. Consider the following KPG instance with $n = 2$: player 1 solves $\max_{x^1} \{M x^1_1 + x^1_2 - (M - 2)x^1_1 x^2_1 - x^1_2 x^2_2 : 3x^1_1 + 2x^1_2 \le 4, x^1 \in \{0, 1\}^2\}$, where $M$ is an arbitrarily large value; player 2 solves $\max_{x^2} \{4x^2_1 + x^2_2 - x^2_1 x^1_1 - x^2_2 x^1_2 : 3x^2_1 + 2x^2_2 \le 4, x^2 \in \{0, 1\}^2\}$. The only PNE is $(\dot x^1_1, \dot x^1_2, \dot x^2_1, \dot x^2_2) = (1, 0, 1, 0)$, with $u^1(\dot x^1, \dot x^2) = 2$, $u^2(\dot x^2, \dot x^1) = 3$, and $S(\dot x) = 5$. The maximum welfare $OSW = M + 1$ is attained at $(x^1_1, x^1_2, x^2_1, x^2_2) = (1, 0, 0, 1)$; i.e., $OSW$ is arbitrarily large, and there is no bound on either the PoA or the PoS. Strategic Inequalities. We further strengthen our cutting plane algorithm by introducing two classes of problem-specific equilibrium inequalities for the KPG. Strategic Dominance Inequalities. In the binary knapsack problem, a well-known hierarchy of dominance relationships exists among items, as we formalize in Definition 4. Definition 4 (Dominance Rule). Given two items $j$ and $j'$ with profits $\bar p_j$, $\bar p_{j'}$ and weights $w_j$, $w_{j'}$, if $w_j \le w_{j'}$ and $\bar p_j > \bar p_{j'}$, then we say item $j$ dominates item $j'$. This concept of dominance implies that, in any optimal knapsack solution, if one packs a dominated item $j'$, then one should also pack item $j$; otherwise, the solution could be improved by selecting $j$ instead of $j'$. This reasoning translates into the inequality $x_{j'} \le x_j$, which is always valid for any optimal knapsack solution.
We aim to extend this concept of dominance to the KPG by incorporating the strategic interactions among players. To derive such inequalities, we reason about how, for any player $i$, the decisions in $x^{-i}$ affect the profits of $i$'s items. More formally, for any player $i$ and item $j$, let $p^{i,\min}_j$ and $p^{i,\max}_j$ be the minimum and the maximum profit the strategies of the other players can induce, respectively. We claim the dominance rule of Definition 4 extends to the KPG as in Proposition 3. Proposition 3. For any player $i$ and items $j$, $j'$, if $w^i_j \le w^i_{j'}$ and $p^{i,\min}_j > p^{i,\max}_{j'}$, then $x^i_{j'} \le x^i_j$ is an equilibrium inequality. Proof. Since, for any $x^{-i}$, no best-response of player $i$ can select the dominated item $j'$ without also selecting item $j$, the claim holds. We denote the inequalities of Proposition 3 as Strategic Dominance Inequalities. We further extend the previous reasoning to derive other forms of dominance inequalities by evaluating how the strategic interaction (i.e., the items the other players select) affects the items' profits for each player $i$. In other words, we derive equilibrium inequalities that incorporate the strategic interaction by including the variables of multiple players. For instance, consider the case with two players. If the profits of two items $j$ and $j'$ for player 1 fulfill the dominance rule whenever player 2 selects item $j$ and does not select item $j'$, then $x^1_{j'} \le x^1_j + (1 - x^2_j) + x^2_{j'}$ is an equilibrium inequality: if there exists a PNE with $x^2_j = 1$ and $x^2_{j'} = 0$, the dominance rule between items $j$ and $j'$ applies to player 1; otherwise, the inequality is not binding. Strategic Payoff Inequalities. We introduce a second class of strategic inequalities by exploiting two observations on the knapsack problem. For any player $i$, the all-zeros strategy $x^i = (0, \dots, 0)$ is always feasible under the packing constraint and yields a payoff of 0. Therefore, for any player $i$ and item $j$, if $p^i_j + \sum_{k=1, k \ne i}^{n} C^i_{k,j} < 0$, player $i$ may not select item $j$, depending on its opponents' choices $x^{-i}$.
More generally, let $S^i_j$ be an interaction set of $i$'s opponents inducing a negative profit for item $j$, namely a set such that $p^i_j + \sum_{k \in S^i_j} C^i_{k,j} < 0$. (4) The interaction set is minimal if, for any proper subset $\bar S^i_j$ of $S^i_j$, $p^i_j + \sum_{k \in \bar S^i_j} C^i_{k,j} > 0$. Inequality (4) implies that if $x^k_j = 1$ for all $k \in S^i_j$, then $x^i_j = 0$. In general, this means that, for any interaction set, the inequality $x^i_j + \sum_{k \in S^i_j} x^k_j \le |S^i_j|$ is an equilibrium inequality. We define the latter inequality as a Strategic Payoff Inequality. In practice, the inequalities generated by minimal interaction sets are stronger than those generated by non-minimal interaction sets, since fewer opponents' selections suffice to force $x^i_j = 0$. Clearly, the effort to separate and include all the previous strategic inequalities may not be negligible as $n$ and $m$ increase. In practice, at each iteration of Algorithm 2, we separate and add to $Q$ only the inequalities violated by the incumbent solution $(\bar x, \bar z)$. Computational Results. We generate KPG instances with $n = 2, 3$ and $m = 25, 50, 75, 100$, with $p^i_j$ and $w^i_j$ being random integers uniformly distributed in $[1, 100]$ for any $i$. The knapsack capacities $b^i$ are equal to $0.2 \sum_{j=1}^m w^i_j$, $0.5 \sum_{j=1}^m w^i_j$, or $0.8 \sum_{j=1}^m w^i_j$, respectively. Concerning the strategic interaction, we focus on three different distributions for the integer interaction coefficients $C^i_{k,j}$. For any player $i$, they can be: A) equal and uniformly distributed in $[1, 100]$, or B) random and uniformly distributed in $[1, 100]$, or C) random and uniformly distributed in $[-100, 100]$. In Table 1, we report the results; the averages also consider the instances where we hit the time limit, which we set to 1800 seconds. ZERO Regrets solves almost all instances with $n = 2$, regardless of the type of strategic interaction. Both the running times and the number of equilibrium inequalities are significantly modest for a $\Sigma^p_2$-hard game.
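The separation of both families of strategic inequalities reduces, on small instances, to simple enumeration. The sketch below uses hypothetical single-opponent data; the dominance test follows the $p^{\min}/p^{\max}$ reading of Proposition 3, and the interaction-set search follows (4).

```python
from itertools import combinations

def strategic_dominance_pairs(p, w, C):
    """Item j strategically dominates j' if w[j] <= w[j'] and j's
    worst-case profit beats j''s best-case profit; C[k][j] is opponent k's
    interaction coefficient on item j. Each pair yields x_{j'} <= x_j."""
    m = len(p)
    pmin = [p[j] + sum(min(0, Ck[j]) for Ck in C) for j in range(m)]
    pmax = [p[j] + sum(max(0, Ck[j]) for Ck in C) for j in range(m)]
    return [(j, jp) for j in range(m) for jp in range(m)
            if j != jp and w[j] <= w[jp] and pmin[j] > pmax[jp]]

def minimal_interaction_sets(pj, Cj):
    """Minimal opponent sets S making item j's profit negative:
    pj + sum_{k in S} Cj[k] < 0, while every proper subset keeps it > 0."""
    sets = []
    for r in range(1, len(Cj) + 1):
        for S in combinations(range(len(Cj)), r):
            if (pj + sum(Cj[k] for k in S) < 0 and
                    all(pj + sum(Cj[k] for k in T) > 0
                        for rr in range(r) for T in combinations(S, rr))):
                sets.append(set(S))
    return sets

# hypothetical data: one player with 3 items and a single opponent
pairs = strategic_dominance_pairs(p=[10, 4, 6], w=[2, 3, 3], C=[[-2, 5, -7]])
# hypothetical data: item with profit 5 and three opponents' coefficients
msets = minimal_interaction_sets(pj=5, Cj=[-3, -4, 2])
```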
The PoS is generally low and increases with distribution C due to the nature of the complex strategic interactions stemming from both negative and positive C i k,j coefficients. We remind that a PoS close to 1 does not mean the instance is computationally "easy". On the contrary, a PoS ≈ 1 highlights the existence of a high-quality PNE (i.e., with a welfare close to the one of the OSW ) and also provides further evidence concerning the urgency of selecting efficient PNE s. ZERO Regrets performs robustly even in large instances, establishing a significant computational advantage over the previously developed approaches in the literature. Carvalho et al. [13,14] consider up to m = 40 items with n = 3 by just computing an equilibrium, while Cronert and Minner [18] only perform equilibria selection with m < 5. (Table 6 and Table 7). The Network Formation Game Network design games are paradigmatic problems in Algorithmic Game Theory [4,15,40]. Their natural application domain is often the one of computer networks and the Internet itself, where several selfish agents opportunistically decide how to share a scarce resource, for instance, the bandwidth. Tardos [51] accurately claims that the impact and future of the complex technology we develop through the Internet critically depend on the ability to balance the diverse needs of the selfish agents in the network. We consider a (weighted) NFG-similar to the one of Chen and Roughgarden [15] -where n players are interested in building a computer network. Let G(V, E) be a directed graph representing a network layout, where V , E are the sets of vertices and edges, respectively. Each edge (h, l) ∈ E has a construction cost c hl ∈ Z + , and each player i wants to connect an origin s i with a destination t i while minimizing its construction costs. A cost-sharing mechanism determines the cost of each edge (h, l) for a player as a function of the number of players crossing (h, l). 
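One concrete such mechanism, the equal-split rule discussed next, divides each edge's cost evenly among the players crossing it. A minimal sketch with hypothetical toy data (two parallel s-t edges, three players):

```python
def edge_cost_shares(cost, paths):
    """Per-player construction cost when players using an edge split its
    cost equally; `paths` maps each player to its set of selected edges."""
    n = len(paths)
    share = [0.0] * n
    for e, c in cost.items():
        users = [i for i in range(n) if e in paths[i]]
        for i in users:
            share[i] += c / len(users)
    return share

cost = {'top': 90, 'bottom': 40}           # hypothetical edge costs
paths = [{'top'}, {'top'}, {'bottom'}]     # players 1 and 2 share 'top'
shares = edge_cost_shares(cost, paths)
```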
Arguably, the most common and widely-adopted mechanism is the Shapley cost-sharing mechanism, where players using (h, l) equally share its cost c hl . The goal is to find a PNE minimizing the sum of construction costs for each player or determine that no PNE exists. We model the NFG as an IPG as follows. For any player i and edge (h, l), let the binary variables x i hl be 1 if i uses the edge. We employ standard flow constraints to model a path between s i and t i . For conciseness, we represent these constraints and binary requirements with a set F i for each i. Thus, each player i solves: min x i { (h,l)∈E c hl x i hl n k=1 x k hl : x i ∈ F i }.(5) For any player i, the cost contribution of each edge (h, l) to the objective is not linear in x and may not be defined for some choices of x (i.e., n k=1 x k hl = 0). However, we can linearize the fractional terms and eliminate the indefiniteness. For instance, consider a game with n = 3 and the objective of player 1. Let the binary variable z j,...,k hl be 1 if only players j, . . . , k select the edge (h, l). Then, x 1 hl = z 1 hl + z 12 hl + z 13 hl + z 123 hl , x 2 hl = z 2 hl + z 12 hl + z 23 hl + z 123 hl , x 3 hl = z 3 hl + z 13 hl + z 23 hl + z 123 hl along with a clique constraint z 1 hl + z 2 hl + z 3 hl + z 12 hl + z 13 hl + z 23 hl + z 123 hl ≤ 1. The term for edge (h, l) in the objective of player 1 is then c hl z 1 hl + c hl 2 (z 12 hl + z 13 hl ) + c hl 3 z 123 hl . In our tests, we consider the general weighted NFG [15], where each player i has a weight w i , and the cost share of each selected (h, l) is w i c hl divided by the weights of all players using (h, l). Specifically, we consider the 3-player weighted NFG, where a PNE may not exist, and selecting one if multiple equilibria exist is generally an N P-hard problem [4,15]. Computational Results. 
In order to tackle challenging instances, we consider the NFG with n = 3 on grid-based directed graphs G(V, E), where each player i has to cross the grid from left to right to reach its destination. Compared to a standard grid graph, we randomly add some edges between adjacent layers to increase the number of paths and to facilitate the interaction among players. The instances are such that |V| ∈ [50, 500], and the cost c_{hl} of each edge (h, l) is a random integer uniformly distributed in [20, 100]. We consider three distributions of player weights: (i.) the Shapley mechanism with w_1 = w_2 = w_3 = 1, where a PNE always exists, yet selecting the most efficient PNE is NP-hard; (ii.) w_1 = 0.6, w_2 = 0.2, and w_3 = 0.2; or (iii.) w_1 = 0.45, w_2 = 0.45, and w_3 = 0.1. Table 2 reports the results, where we average over the distributions of the players' weights. For each graph, the table reports the graph size (|V|, |E|), whereas the other columns have the same meaning as in Table 1. Our algorithm solves all but 3 of the instances within a time limit of 1800 seconds and consistently selects high-efficiency PNEs. Further, it finds the first PNE in considerably limited computing times. The previous literature generally does not consider this problem from a computational perspective, and only provides theoretical (and possibly pessimistic) bounds on the PoS and PoA. Nevertheless, we can compute efficient PNEs even on large graphs (i.e., PoS ≈ 1), with a limited number of equilibrium inequalities and modest running times, showing the practical effectiveness of our algorithm on a paradigmatic AGT problem. The complete table of results is in the Appendix (Table 8).

The Competitive Facility Location and Design Game

The CFLD [1] is a game where sellers (players) compete for the demand of customers located in a given geographical area. Each seller makes two fundamental choices: where to open its selling facilities, and the product assortment of such facilities, i.e., their "design".
Symmetrically, the customers select their favorite facilities depending on their relative distance from a facility and its attractiveness in terms of design. We consider a variant recently presented by Cronert and Minner [18], where n competitors simultaneously choose the location and design of their facilities. Let L be the set of potential facility locations, let J be the set of customers, and let R_l denote the set of design alternatives for each location l ∈ L. Each player i has an available budget B_i and incurs a fixed cost f^i_{lr} when opening a facility at location l ∈ L with design r ∈ R_l. Each player i acquires a share of the demand w_j of a customer j ∈ J according to a utility u^i_{ljr}, whose value depends on the distance of customer j from a facility in location l and on the design choice of that facility (see Cronert and Minner [18] for more details). The CFLD formulates as an IPG where each player i solves

max_{x^i}  \sum_{j \in J} w_j ( \sum_{l \in L} \sum_{r \in R_l} u^i_{ljr} x^i_{lr} ) / ( \sum_{k=1}^n \sum_{l \in L} \sum_{r \in R_l} u^k_{ljr} x^k_{lr} )   (6a)
s.t.  \sum_{l \in L} \sum_{r \in R_l} f^i_{lr} x^i_{lr} ≤ B_i,   (6b)
      \sum_{r \in R_l} x^i_{lr} ≤ 1  ∀l ∈ L,   (6c)
      x^i_{lr} ∈ {0, 1}  ∀l ∈ L, ∀r ∈ R_l.   (6d)

The binary variable x^i_{lr} is 1 if player i opens a facility in location l ∈ L with design r ∈ R_l. The objective function (6a) is the share of customer demand player i maximizes, the constraint (6b) is the budget constraint of player i, and the constraints (6c) enforce that player i can open at most one facility in each location l. As in the NFG, the objective is not linear in x, and the denominator can be zero; however, we can linearize it with tailored fractional-programming techniques.

Computational Results.

We test ZERO Regrets on a representative set of instances from Cronert and Minner [18], to which we refer for the details concerning the distributions of locations and customers and the entries w_j, u_{ljr}, f_{lr}.
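To make the demand-splitting in (6a) concrete, the following sketch evaluates the objective at a fixed binary profile on made-up toy data; for brevity it assumes a single design alternative per location (i.e., it drops the index r), which is our simplification, not the paper's.

```python
def demand_share(i, x, u, w):
    # objective (6a) for player i at a fixed profile x, one design per location:
    # sum_j w_j * (sum_l u[i][j][l] * x[i][l]) / (sum_k sum_l u[k][j][l] * x[k][l])
    share = 0.0
    for j, wj in enumerate(w):
        mine = sum(u[i][j][l] * x[i][l] for l in range(len(x[i])))
        total = sum(u[k][j][l] * x[k][l]
                    for k in range(len(x)) for l in range(len(x[k])))
        if total > 0:  # the fractional term is undefined when no facility attracts j
            share += wj * mine / total
    return share

# toy data: 2 players, 1 customer with demand 10, 2 candidate locations
w = [10.0]
u = [            # u[player][customer][location]
    [[3.0, 0.0]],
    [[0.0, 2.0]],
]
x = [[1, 0],     # player 1 opens location 0
     [0, 1]]     # player 2 opens location 1
```

With these numbers the customer's demand splits 3:2, so player 1 captures 6 units and player 2 captures 4; the guard on a zero denominator is the same indefiniteness that the paper's fractional-programming linearization removes.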
The resulting 64 instances with n = 2, 3 have 50 locations and 50 customers, with budgets B_1 ∈ [10, 40], B_2 = B_1, B_1 + 10, . . . , 100, and B_3 = 10. Table 3 summarizes the results, where we aggregate and average over the values of B_1. We benchmark our results against the performance of eSGM-WM from Cronert and Minner [18, Table 2], which ran on a machine with similar hardware characteristics. Although the authors compute both pure and mixed welfare-maximizing equilibria, we focus on computing only the welfare-maximizing PNE. Generally, ZERO Regrets outperforms the algorithm eSGM-WM even on instances where only PNEs exist. Our algorithm solves 62 of the 64 instances within a time limit of 3600 seconds. The running times of ZERO Regrets are sensibly smaller than those of eSGM-WM, and never hit the time limit on the instances with n = 2. Occasionally, the running times are dramatically smaller; e.g., on the instance with n = 2, B_1 = 40, B_2 = 80, where only one PNE exists, our algorithm finds the most efficient PNE in about 1636 seconds, while eSGM-WM requires 163315 seconds.

Table 3: Results overview for the CFLD from the instances of Cronert and Minner [18, Table 2]. The complete table of results is in the Appendix (Table 9).

The Quadratic Game

The qIPG is a simultaneous non-cooperative IPG introduced by Sagratella [45], where each player i solves the problem

min_{x^i} { (1/2) (x^i)^T Q^i x^i + (C^i x^{-i})^T x^i + (c^i)^T x^i : LB ≤ x^i ≤ UB, x^i ∈ Z^m }.   (7)

Specifically, each player i controls m integer variables bounded by the vectors LB and UB. The strategic interaction involves the term (C^i x^{-i})^T x^i, while the linear and quadratic terms solely depend on each player's own choices. While Sagratella [45] considers only instances with positive-definite Q^i matrices (i.e., the problem is convex in x^i for any i), Schwarze and Stein [48] consider arbitrary matrices Q^i (i.e., non-convex objectives). In particular, the latter generalizes the former by dropping the convexity requirement w.r.t. x^i on the payoffs u^i(x^i, x^{-i}).
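On toy sizes, the structure of (7) can be explored by plain enumeration. The sketch below (our illustration, far from the paper's branch-and-cut machinery) brute-forces all PNEs of a symmetric 2-player qIPG with m = 1, Q^i = 2, C^i = 1, c^i = 0, and the integer box [-2, 2].

```python
def cost(own, other, Q=2.0, C=1.0, c=0.0):
    # player objective in (7) specialized to m = 1:
    # (1/2) * Q * x^2 + C * x_other * x + c * x
    return 0.5 * Q * own * own + C * other * own + c * own

box = range(-2, 3)  # LB = -2, UB = 2, integer strategies

def is_pne(x1, x2):
    # neither player can strictly improve by a unilateral integer deviation
    best1 = min(cost(v, x2) for v in box)
    best2 = min(cost(v, x1) for v in box)
    return cost(x1, x2) <= best1 and cost(x2, x1) <= best2

pnes = sorted((x1, x2) for x1 in box for x2 in box if is_pne(x1, x2))
```

This tiny game already has three PNEs, (0, 0) plus the two tie-induced profiles (1, -1) and (-1, 1), hinting at why enumeration and selection matter even in the convex case.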
In contrast with the aforementioned applications, we let the MIP solver manage the linearization of the quadratic terms in each player's payoff, in order to fully integrate ZERO Regrets with the features of the existing MIP technology. As in Sagratella [45] and Schwarze and Stein [48], we set up ZERO Regrets to enumerate all PNEs or to certify that no PNE exists.

Computational Results.

We test our algorithm on both convex and non-convex benchmarks of the qIPG. First, we consider the qIPG from Schwarze and Stein [48] and test our algorithm on the same instance set; we refer to the original paper for the details on instance generation. Besides the bounds on the x^i variables, these instances also include m non-redundant linear inequalities A^i x^i ≤ b^i for each player i. Table 4 reports an overview of the results, with notation similar to that of the previous tables. In the first column, we report the tuple (n, m, t), where t is either C when each player's problem is convex or NC otherwise. We additionally report the average number of PNEs in the column #EQs. We solve all 56 instances in less than 416 seconds overall, with the most computationally difficult instance requiring 56 seconds. As in the previous tests, our algorithm strongly outperforms the one of Schwarze and Stein [48]: their algorithm runs out of time on 25 instances (time limit of 3600 seconds) and solves the remaining 31 instances with non-negligible computational times (about 1302 seconds on average).

Table 4: Results overview for the qIPG from the instances of Schwarze and Stein [48]. The complete table of results is in the Appendix (Table 10).

To get a broader perspective, we also consider the convex instances of the game generated according to the scheme proposed in Sagratella [45].
The matrices Q^i and C^i are, respectively, a random positive-definite matrix and a random matrix with rational entries in the range [-25, 25], while the entries of c^i are integers in the range [-5, 5]. We generate our instances with n ∈ [1, 6], m ∈ [1, 10], LB ∈ [-1000, 0], and UB ∈ [5, 1000], similarly to Sagratella [45]. We report the average results in Table 5, where we aggregate by n. ZERO Regrets finds the first PNE in less than a second on average and solves every instance in less than 12 seconds, even when more than 10 PNEs exist (see Table 11). Although the algorithm of Sagratella [45] ran on a less powerful machine, the results in Table 5 highlight the remarkable effectiveness of ZERO Regrets: the speedup appears to be considerably larger than the improvement attributable to different hardware and software specifications (our algorithm is roughly 100 times faster in terms of computing times).

Table 5: Results for the qIPG from the instances of Sagratella [45]. The complete table of results is in the Appendix (Table 11).

Concluding Remarks

This paper presents a general framework to compute, enumerate, and select equilibria for a class of IPGs. These games are a fairly natural multi-agent extension of traditional problems in Operations Research, such as resource allocation, pricing, and combinatorial problems, and are powerful modeling tools for various applications. We provide a theoretical characterization of our framework through the concepts of equilibrium inequality and equilibrium closure. We explore the interplay between rationality and cutting planes by introducing a series of general and special-purpose classes of equilibrium inequalities, and we provide an interpretable criterion to frame the strategic interaction among players. The algorithm we introduce is general, and it smoothly integrates with the existing optimization technology.
Practically, we apply our framework to various problems from the relevant literature and from significant application domains. We perform an extensive computational campaign and demonstrate the high efficiency and scalability of ZERO Regrets. Our computational results also provide evidence of the existence of efficient PNEs, further motivating the need for suitable algorithms to select or enumerate them. We also remark that our algorithm could practically work (up to a numerical tolerance) even when some of the players' variables are continuous and bounded, e.g., by dropping the integrality requirement on some variables. We are prudently optimistic about the impact our framework may have in applied domains and about the future methodological research directions it may open. We envision the potential for a series of theoretical contributions regarding the structure of new classes of general and problem-specific equilibrium inequalities, as well as computational methods to further improve the algorithm's performance. Above all, we hope our framework will foster future academic research and clear the way for novel and impactful applications of IPGs.

Proof. First, note that deciding whether the KPG admits a PNE is in Σ^p_2, since we ask whether there exists a strategy profile in which no player can improve its payoff with any of its strategies, and we can compute the payoff of such strategies in polynomial time. Given a BKP instance, we construct a KPG instance with 2 players as follows. We consider m + 1 items and associate the elements of the vectors x and y with the first m elements of the vectors x^1 and x^2, respectively. Then, player 1 solves the problem in (8), whereas player 2 solves the problem in (9).
max_{x^1} { \sum_{j=1}^m b_j x^1_j x^2_j + x^1_{m+1} x^2_{m+1} : \sum_{j=1}^m a_j x^1_j ≤ A, x^1 ∈ {0, 1}^{m+1} }.   (8)

max_{x^2} { (B - 1) x^2_{m+1} + \sum_{j=1}^m b_j x^2_j - \sum_{j=1}^m b_j x^2_j x^1_j : \sum_{j=1}^m b_j x^2_j + B x^2_{m+1} ≤ B, x^2 ∈ {0, 1}^{m+1} }.   (9)

In order to prove the theorem, we show that the KPG instance has a PNE if and only if the corresponding BKP instance admits a solution.

BKP admits a solution. Assume the BKP instance has a solution x. We prove that x̄^1 = (x, 1), x̄^2 = (0, 1) (with 0 being an m-dimensional vector of zeros) is a PNE. First, both strategies x̄^1 and x̄^2 are feasible by construction. Given x̄^2, player 1 attains the maximum payoff of 1 by playing the strategy x̄^1. The strategy x̄^2 yields a payoff of B - 1 for player 2 when player 1 plays x̄^1. Player 2 cannot profitably deviate by setting x^2_{m+1} = 0: since the BKP instance has a solution x and x̄^1_j = x_j for j = 1, . . . , m, the following inequality must hold:

\sum_{j=1}^m b_j x^2_j - \sum_{j=1}^m b_j x^2_j x̄^1_j ≤ B - 1.

Thus, the pair of strategies (x̄^1, x̄^2) is a PNE for the KPG instance.

BKP has no solution. If the BKP instance has no solution, player 2 never plays x^2_{m+1} = 1 in a best response, as it can always obtain a payoff of B with the variables x^2_1, . . . , x^2_m for any feasible strategy of player 1. Consider any best response x̄^2 of player 2, with x̄^2_{m+1} = 0, and assume the KPG instance has a PNE (x̄^1, x̄^2). Then, in player 1's best response x̄^1, there exists at least one x̄^1_j = 1 with x̄^2_j = 1 and b_j > 0 (since a_j ≤ A for any j). However, in this case, player 2 would deviate from x̄^2, since x̄^2 yields a payoff < B under x̄^1. Thus, no PNE exists in the KPG instance.

Extended Computational Results

In the following sections, we report the full results of our computational tests. The columns are similar to the ones reported in the previous tables, possibly with the following additions: (i.) #It, indicating the number of iterations of ZERO Regrets, and (ii.)
PNE*, reporting the social welfare of the most efficient PNE, (iii.) PNE°, reporting the social welfare of the least efficient PNE (if computed), (iv.) OSW, reporting the optimal social welfare in the game, and (v.) Bound, reporting the last proven bound on Q before the latter becomes infeasible (or before the algorithm hits the time limit), irrespective of whether the algorithm enumerated PNEs or not.

Full KPG Results

We report the two tables with the full KPG results. In the first column of Tables 6 and 7, we add the field I to specify the instance type. Specifically, the knapsack capacity of player i is given by (I/10) \sum_{j=1}^m w^i_j.

Full NFG Results

In Table 8, we report the full results for the NFG. In the second and third columns, we report the weights of players 1 and 2 as w_1 and w_2, respectively. We remark that w_3 = 1 - w_1 - w_2.

Full CFLD Results

We report the results for a set of instances from Cronert and Minner [18, Table 2] (i.e., β = 0.5 and d_max = 20). We report the full set of our results in Table 9, where, in the second and third columns, we report the budgets of players 1 and 2 as B_1 and B_2. When n = 3, B_3 = 10.

Table 9: Full results for the CFLD from the instances of Cronert and Minner [18, Table 2].

Full qIPG Results

We report the full results for the instances of Schwarze and Stein [48] in Table 10, and for the ones generated following the scheme of Sagratella [45] in Table 11. In the latter table, we refer to Sagratella [45] for an overview of the instance acronyms.

Table 11: Full results for the qIPG from the instances of Sagratella [45].

Proposition 2. The PoA and the PoS in the KPG can be arbitrarily bad.

Proposition 3. For each player i, if the dominance rule applies for two items j and j′ with p̄_j = p^min_j and p̄_{j′} = p^max_{j′}, then the inequality x^i_{j′} ≤ x^i_j is an equilibrium inequality.

In Table 1, we present the results for the 72 resulting instances.
For any given number of players n, number of items m, and distribution of the coefficients C^i_{k,j} (i.e., (n, m, d)), we report the performance over the 3 instances with different capacities, in terms of: (i.) the average number of Equilibrium Inequalities of Proposition 1 added (#EI), (ii.) the average number of Strategic Payoff Inequalities (#EI P), which we only compute for the instances with distribution C, (iii.) the average number of Strategic Dominance Inequalities (#EI D), (iv.) the average computational time (Time), (v.) the average computational time to find the first PNE, if any (Time-1st), (vi.) the average PoS for the best PNE, if any (PoS), and (vii.) the number of time-limit hits (Tl). The average values in #EI, #EI P, #EI D, Time, and Time-1st (…)

Definition 5 (BKP). Given two m-dimensional non-negative integer vectors a and b and two non-negative integers A and B, the BKP asks whether there exists a binary vector x̄, with \sum_{j=1}^m a_j x̄_j ≤ A, satisfying \sum_{j=1}^m b_j y_j (1 - x̄_j) ≤ B - 1 for any binary vector y such that \sum_{j=1}^m b_j y_j ≤ B.

Without loss of generality, we assume a_j ≤ A for any j. If this is not the case, we can always modify the original BKP instance as follows: (i.) we replace A with 2A + 1, any a_j ≤ A with 2a_j, and any a_j > A with 2A + 1, and (ii.) we add a new element m + 1 (i.e., a new item) with a_{m+1} = 1 and b_{m+1} = B. In any solution of this modified instance, we must have x_{m+1} = 1, since otherwise \sum_{j=1}^{m+1} b_j y_j (1 - x_j) ≤ B - 1 would never hold, because \sum_{j=1}^{m+1} b_j y_j (1 - x_j) = B when x_{m+1} = 0 and y_{m+1} = 1. Setting x_{m+1} = 1 leaves a residual capacity of 2A for the packing constraint on x. Indeed, every subset of x variables with original a_j ≤ A that satisfied \sum_{j=1}^m a_j x_j ≤ A now satisfies \sum_{j=1}^m 2 a_j x_j ≤ 2A. On the contrary, we cannot select any x_j variable with original a_j > A. Thus, a solution (if any) to the original instance corresponds to a solution to the modified instance, and vice versa.
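The exists-forall structure of Definition 5 can be made tangible by brute force on tiny instances. The sketch below (our illustration; viable only for very small m, since it enumerates both quantifier levels) decides a BKP instance exactly as the definition reads.

```python
from itertools import product

def bkp(a, b, A, B):
    # Definition 5 by exhaustive enumeration: does there EXIST a binary x
    # with a·x <= A such that, for ALL binary y with b·y <= B,
    # sum_j b_j * y_j * (1 - x_j) <= B - 1 ?
    m = len(a)
    for x in product((0, 1), repeat=m):
        if sum(aj * xj for aj, xj in zip(a, x)) > A:
            continue  # x violates its own packing constraint
        if all(
            sum(bj * yj * (1 - xj) for bj, yj, xj in zip(b, y, x)) <= B - 1
            for y in product((0, 1), repeat=m)
            if sum(bj * yj for bj, yj in zip(b, y)) <= B
        ):
            return True
    return False
```

For example, with a = [1, 1], b = [2, 3], A = 1, B = 3, the vector x̄ = (0, 1) "blocks" every feasible y, so the instance is a yes-instance; shrinking A so that no item fits turns it into a no-instance. This two-level quantification is precisely what places the problem (and, via the reduction, PNE existence in the KPG) at the second level of the polynomial hierarchy.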
Table 1: Results overview for the KPG. The complete tables of results are in the Appendix (Table 6 and Table 7).

(n, m, d)     #EI     #EI P   #EI D    Time     Time-1st   PoS    Tl
(2, 25, A)    14.67   0.00    3.00     0.06     0.05       1.07   0/3
(2, 25, B)    17.33   0.00    3.67     0.12     0.09       1.02   0/3
(2, 25, C)    29.33   9.67    7.67     0.39     0.04       1.06   0/3
(2, 50, A)    20.00   0.00    2.67     0.21     0.21       1.02   0/3
(2, 50, B)    26.67   0.00    19.67    0.51     0.39       1.01   0/3
(2, 50, C)    72.67   27.00   11.33    6.34     0.92       1.08   0/3
(2, 75, A)    38.00   0.00    31.00    0.60     0.44       1.00   0/3
(2, 75, B)    100.67  0.00    34.00    8.35     5.71       1.02   0/3
(2, 75, C)    112.67  38.33   67.00    47.75    3.96       1.08   0/3
(2, 100, A)   25.33   0.00    14.67    0.76     0.58       1.01   0/3
(2, 100, B)   205.33  0.00    79.67    220.42   143.45     1.01   0/3
(2, 100, C)   697.33  55.33   119.67   1205.29  11.33      1.05   2/3
(3, 25, A)    31.00   0.00    9.33     0.21     0.21       1.01   0/3
(3, 25, B)    44.00   0.00    14.67    0.33     0.33       1.02   0/3
(3, 25, C)    91.00   29.67   33.67    29.78    5.64       1.26   0/3
(3, 50, A)    95.00   0.00    24.33    18.39    11.68      1.03   0/3
(3, 50, B)    206.00  0.00    44.33    626.45   167.01     1.01   1/3
(3, 50, C)    148.00  63.00   224.67   382.24   -          -      0/3
(3, 75, A)    64.00   0.00    119.00   4.65     2.07       1.02   0/3
(3, 75, B)    278.00  0.00    92.67    982.97   272.69     1.01   1/3
(3, 75, C)    173.00  87.33   319.67   658.77   -          -      1/3
(3, 100, A)   261.00  0.00    144.67   1200.65  666.13     1.00   2/3
(3, 100, B)   479.00  0.00    168.33   tl       -          -      3/3
(3, 100, C)   184.00  171.00  1019.67  1200.31  -          -      2/3

Table 2: Results overview for the NFG. The complete table of results is in the Appendix (Table 8).

Table 3: Results overview for the CFLD from the instances of Cronert and Minner [18, Table 2].

Table 8: Full results for the NFG.

Table 9: Full results for the CFLD from the instances of Cronert and Minner [18, Table 2].

Table 10: Full results for the qIPG from the instances of Schwarze and Stein [48]. (Columns: Instance, #EQs, #EI, #It, Time, Time-1st, PNE*, PNE°, OSW.)

We performed our tests on an Intel Xeon Gold 6142 equipped with 128GB of RAM and 8 threads, employing Gurobi 9.5 as the MIP solver for Algorithm 2. We remark that the same observation holds on all our experiments.

Table 7: Full results for the KPG with n = 3.
Acknowledgements

We would like to thank Ulrich Pferschy for the valuable discussions concerning our work.

Appendix

KPG Complexity Proof

We perform a reduction from the DeNegre Bilevel Knapsack Problem (BKP) below, which is Σ^p_2-complete [9].

References

R. Aboolian, O. Berman, and D. Krass. Competitive facility location and design problem. European Journal of Operational Research, 182(1):40-62, 2007.

E. Anderson, B. Chen, and L. Shao. Supplier Competition with Option Contracts for Discrete Blocks of Capacity. Operations Research, 65(4):952-967, 2017.

E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler. Near-optimal network design with selfish agents. In Proceedings of the Thirty-Fifth ACM Symposium on Theory of Computing (STOC '03), page 511. ACM Press, 2003.

E. Anshelevich, A. Dasgupta, J. Kleinberg, E. Tardos, T. Wexler, and T. Roughgarden. The Price of Stability for Network Design with Fair Cost Allocation. SIAM Journal on Computing, 38(4):1602-1623, 2008.

C. Audet, S. Belhaiza, and P. Hansen. Enumeration of All the Extreme Equilibria in Game Theory: Bimatrix and Polymatrix Games. Journal of Optimization Theory and Applications, 129(3):349-372, 2006.

D. Avis, G. D. Rosenberg, R. Savani, and B. von Stengel. Enumeration of Nash equilibria for two-player games. Economic Theory, 42(1):9-37, 2010.

B. D. Bernheim. Rationalizable Strategic Behavior. Econometrica, 52(4):1007, 1984.

S. Bikhchandani and J. W. Mamer. Competitive Equilibrium in an Exchange Economy with Indivisibilities. Journal of Economic Theory, 74(2):385-413, 1997.

A. Caprara, M. Carvalho, A. Lodi, and G. J. Woeginger. A Study on the Computational Complexity of the Bilevel Knapsack Problem. SIAM Journal on Optimization, 24(2):823-838, 2014.

M. Carvalho. Computation of equilibria on integer programming games. PhD thesis, Universidade do Porto, Faculdade de Ciências, 2016.

M. Carvalho, A. Lodi, J. P. Pedroso, and A. Viana. Nash equilibria in the two-player kidney exchange game. Mathematical Programming, 161(1-2):389-417, 2017.

M. Carvalho, A. Lodi, and J. P. Pedroso. Existence of Nash Equilibria on Integer Programming Games. In A. I. F. Vaz, J. P. Almeida, J. F. Oliveira, and A. A. Pinto, editors, Operational Research, volume 223, pages 11-23. Springer International Publishing, Cham, 2018.

M. Carvalho, G. Dragotto, A. Lodi, and S. Sankaranarayanan. The Cut and Play Algorithm: Computing Nash Equilibria via Outer Approximations. arXiv:2111.05726, 2021.

M. Carvalho, A. Lodi, and J. Pedroso. Computing equilibria for integer programming games. European Journal of Operational Research, 2022.

H.-L. Chen and T. Roughgarden. Network design with weighted players. In Proceedings of the Eighteenth Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '06), page 29. ACM Press, 2006.

V. Chvátal. Edmonds polytopes and a hierarchy of combinatorial problems. Discrete Mathematics, 4(4):305-337, 1973.

M. Conforti, G. Cornuéjols, and G. Zambelli. Integer Programming, volume 271 of Graduate Texts in Mathematics. Springer International Publishing, Cham, 2014.

T. Cronert and S. Minner. Equilibrium identification and selection in integer programming games. SSRN pre-print, 2020.

C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. Communications of the ACM, 52(2):89-97, 2009.

J. David Fuller and E. Çelebi. Alternative models for markets with nonconvexities. European Journal of Operational Research, 261(2):436-449, 2017.

A. Del Pia, M. Ferris, and C. Michini. Totally Unimodular Congestion Games. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 577-588. Society for Industrial and Applied Mathematics, 2017.

F. Della Croce and R. Scatamacchia. An exact approach for the bilevel knapsack problem with interdiction constraints and extensions. Mathematical Programming, 2020.

S. DeNegre. Interdiction and discrete bilevel linear programming. PhD thesis, Lehigh University, 2011.

G. Dragotto, S. Sankaranarayanan, M. Carvalho, and A. Lodi. ZERO: Playing Mathematical Programming Games. arXiv:2111.07932, 2021.

F. Facchinei and J.-S. Pang, editors. Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research and Financial Engineering. Springer New York, 2004.

A. Federgruen and M. Hu. Multi-Product Price and Assortment Competition. Operations Research, 63(3):572-584, 2015.

S. A. Gabriel, S. A. Siddiqui, A. J. Conejo, and C. Ruiz. Solving Discretely-Constrained Nash-Cournot Games with an Application to Power Markets. Networks and Spatial Economics, 13(3):307-326, 2013.

M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169-197, 1981.

T. Harks and J. Schwarz. Generalized Nash Equilibrium Problems with Mixed-Integer Variables. In WINE 2021, The 17th Conference on Web and Internet Economics, 2022.

J. C. Harsanyi. A new theory of equilibrium selection for games with complete information. Games and Economic Behavior, 8(1):91-122, 1995.

R. Hemmecke, S. Onn, and R. Weismantel. Nash-equilibria and N-fold integer programming. arXiv:0903.4577, 2009.

R. Karp and C. Papadimitriou. On Linear Characterizations of Combinatorial Optimization Problems. SIAM Journal on Computing, 11(4):620-632, 1982.

H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack Problems. Springer Berlin Heidelberg, 2004.

P. Kleer and G. Schäfer. Computation and efficiency of potential function minimizers of combinatorial congestion games. Mathematical Programming, 190(1-2):523-560, 2021.

M. Köppe, C. T. Ryan, and M. Queyranne. Rational Generating Functions and Integer Programming Games. Operations Research, 59(6):1445-1460, 2011.

E. Koutsoupias and C. Papadimitriou. Worst-Case Equilibria. In STACS 99, volume 1563 of Lecture Notes in Computer Science, pages 404-413. Springer Berlin Heidelberg, 1999.

A. McLennan. The Expected Number of Nash Equilibria of a Normal Form Game. Econometrica, 73(1):141-174, 2005.

J. Nash. Non-Cooperative Games. The Annals of Mathematics, 54(2):286, 1951.

Equilibrium Points in n-Person Games.
J F Nash, Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America36J. F. Nash. Equilibrium Points in n-Person Games. Proceedings of the Na- tional Academy of Sciences of the United States of America, 36(1):48-49, 1950. URL http://www.jstor.org/stable/88031. Algorithmic Game Theory. 10.1017/CBO9780511800481978-0-521-87282-9N. NisanCambridge Univ. PressCambridgerepr., [nachdr.] editionN. Nisan, editor. Algorithmic Game Theory. Cambridge Univ. Press, Cambridge, repr., [nachdr.] edition, 2008. ISBN 978-0-521-87282-9. URL https://doi.org/10.1017/CBO9780511800481. Rationalizable Strategic Behavior and the Problem of Perfection. Econometrica. D G Pearce, 10.2307/191119700129682521029D. G. Pearce. Rationalizable Strategic Behavior and the Problem of Perfection. Econo- metrica, 52(4):1029, July 1984. ISSN 00129682. https://doi.org/10.2307/1911197. URL https://www.jstor.org/stable/1911197. Simple search methods for finding a Nash equilibrium. R Porter, E Nudelman, Y Shoham, 10.1016/j.geb.2006.03.01508998256Games and Economic Behavior. 632R. Porter, E. Nudelman, and Y. Shoham. Simple search methods for find- ing a Nash equilibrium. Games and Economic Behavior, 63(2):642-662, July 2008. ISSN 08998256. https://doi.org/10.1016/j.geb.2006.03.015. URL https://linkinghub.elsevier.com/retrieve/pii/S0899825606000935. Enumeration of All Extreme Equilibria of Bimatrix Games with Integer Pivoting and Improved Degeneracy Check. G D Rosenberg, LSE-CDAM-2004-1868CDAM Research ReportG. D. Rosenberg. Enumeration of All Extreme Equilibria of Bimatrix Games with Integer Pivoting and Improved Degeneracy Check. CDAM Research Report LSE-CDAM-2004-18, page 68, 2005. URL http://www.cdam.lse.ac.uk/Reports/Files/cdam-2005-18.pdf. Bounding the inefficiency of equilibria in nonatomic congestion games. T Roughgarden, E Tardos, 08998256Games and Economic Behavior. 472T. Roughgarden and E. Tardos. 
Bounding the inefficiency of equilibria in nonatomic conges- tion games. Games and Economic Behavior, 47(2):389-403, May 2004. ISSN 08998256. URL https://linkinghub.elsevier.com/retrieve/pii/S089982560300188X. Computing All Solutions of Nash Equilibrium Problems with Discrete Strategy Sets. S Sagratella, http:/epubs.siam.org/doi/10.1137/15M10524451052-6234SIAM Journal on Optimization. 264S. Sagratella. Computing All Solutions of Nash Equilibrium Problems with Discrete Strategy Sets. SIAM Journal on Optimization, 26(4):2190-2218, Jan. 2016. ISSN 1052-6234, 1095-7189. https://doi.org/10.1137/15M1052445. URL http://epubs.siam.org/doi/10.1137/15M1052445. Mixed-Integer Programming Methods for Finding Nash Equilibria. T Sandholm, A Gilpin, V Conitzer, https:/dl.acm.org/doi/10.5555/1619410.16194131-57735-236-XProceedings of the 20th National Conference on Artificial Intelli. the 20th National Conference on Artificial IntelliPittsburgh, PennsylvaniaAAAI Press2AAAI'05T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-Integer Programming Methods for Find- ing Nash Equilibria. In Proceedings of the 20th National Conference on Artificial Intelli- gence -Volume 2, AAAI'05, pages 495-501. AAAI Press, 2005. ISBN 1-57735-236-X. URL https://dl.acm.org/doi/10.5555/1619410.1619413. event-place: Pittsburgh, Pennsylvania. On the performance of user equilibria in traffic networks. A S Schulz, N E Stier-Moses, https:/dl.acm.org/doi/10.5555/644108.644121Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms. the Fourteenth Annual ACM-SIAM Symposium on Discrete AlgorithmsBaltimore, Maryland, USAA. S. Schulz and N. E. Stier-Moses. On the performance of user equilibria in traf- fic networks. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 12-14, 2003, Baltimore, Maryland, USA., 2003. URL https://dl.acm.org/doi/10.5555/644108.644121. A branch-and-prune algorithm for discrete Nash equilibrium problems. Optimization Online. 
S Schwarze, O Stein, ID 2022-03-8836:27S. Schwarze and O. Stein. A branch-and-prune algorithm for discrete Nash equi- librium problems. Optimization Online, Preprint ID 2022-03-8836:27, 2022. URL http://www.optimization-online.org/DB_FILE/2022/03/8836.pdf. A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems. H D Sherali, W P Adams, Nonconvex Optimization and Its Applications. Springer US. 31H. D. Sherali and W. P. Adams. A Reformulation-Linearization Technique for Solv- ing Discrete and Continuous Nonconvex Problems, volume 31 of Nonconvex Op- timization and Its Applications. Springer US, Boston, MA, 1999. ISBN 978-1- . http:/link.springer.com/10.1007/978-1-4757-4388-3https://doi.org/10.1007/978-1-4757-4388-3. URL http://link.springer.com/10.1007/978-1-4757-4388-3. Game theory: an introduction. S Tadelis, 978-0-691-12908-2Princeton University PressPrinceton ; OxfordS. Tadelis. Game theory: an introduction. Princeton University Press, Princeton ; Oxford, 2013. ISBN 978-0-691-12908-2. Network games. E Tardos, 10.1145/1007352.1007356Proceedings of the thirty-sixth annual ACM symposium on Theory of computing. the thirty-sixth annual ACM symposium on Theory of computingE. Tardos. Network games. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 341-342, 2004. URL https://doi.org/10.1145/1007352.1007356. Mixed Integer Linear Programming Formulation Techniques. J P Vielma, http:/epubs.siam.org/doi/10.1137/1309153031095-7200SIAM Review. 571J. P. Vielma. Mixed Integer Linear Programming Formulation Techniques. SIAM Review, 57 (1):3-57, Jan. 2015. ISSN 0036-1445, 1095-7200. https://doi.org/10.1137/130915303. URL http://epubs.siam.org/doi/10.1137/130915303.
[]
[ "Extreme-scale many-against-many protein similarity search" ]
[ "Oguz Selvitopi", "Saliya Ekanayake", "Giulia Guidi", "Muaaz G Awan", "Georgios A Pavlopoulos", "Ariful Azad", "Nikos Kyrpides", "Leonid Oliker", "Katherine Yelick", "Aydın Buluç" ]
[ "Microsoft Corporation\nUSA", "University of California\nBerkeleyUSA", "NERSC\nLawrence Berkeley National Laboratory\nUSA", "Institute for Fundamental Biomedical Research\nBSRC \"Alexander Fleming\"\n34 Fleming Street16672VariGreece", "Joint Genome Institute\nLawrence Berkeley National Laboratory\nIndiana University\nUSA, USA", "University of California\nBerkeleyUSA", "University of California\nBerkeleyUSA", "Applied Mathematics & Computational Research Division\nLawrence Berkeley National Laboratory\nUSA" ]
[]
Similarity search is one of the most fundamental computations that are regularly performed on ever-increasing protein datasets. Scalability is of paramount importance for uncovering novel phenomena that occur at very large scales. We unleash the power of over 20,000 GPUs on the Summit system to perform all-vs-all protein similarity search on one of the largest publicly available datasets with 405 million proteins, in less than 3.5 hours, cutting the time-to-solution for many use cases from weeks. The variability of protein sequence lengths, as well as the sparsity of the space of pairwise comparisons, make this a challenging problem in distributed memory. Due to the need to construct and maintain a data structure holding indices to all other sequences, this application has a huge memory footprint that makes it hard to scale the problem sizes. We overcome this memory limitation by innovative matrix-based blocking techniques, without introducing additional load imbalance.
10.5555/3571885.3571887
[ "https://export.arxiv.org/pdf/2303.01845v1.pdf" ]
257,158,064
2303.01845
607afd32a64eb998364501a68796600cd533fbbb
Extreme-scale many-against-many protein similarity search

3 Mar 2023

Oguz Selvitopi, Saliya Ekanayake, Giulia Guidi, Muaaz G Awan, Georgios A Pavlopoulos, Ariful Azad, Nikos Kyrpides, Leonid Oliker, Katherine Yelick, Aydın Buluç

Abstract--Similarity search is one of the most fundamental computations that are regularly performed on ever-increasing protein datasets. Scalability is of paramount importance for uncovering novel phenomena that occur at very large scales. We unleash the power of over 20,000 GPUs on the Summit system to perform all-vs-all protein similarity search on one of the largest publicly available datasets with 405 million proteins, in less than 3.5 hours, cutting the time-to-solution for many use cases from weeks. The variability of protein sequence lengths, as well as the sparsity of the space of pairwise comparisons, make this a challenging problem in distributed memory. Due to the need to construct and maintain a data structure holding indices to all other sequences, this application has a huge memory footprint that makes it hard to scale the problem sizes. We overcome this memory limitation by innovative matrix-based blocking techniques, without introducing additional load imbalance.

I. JUSTIFICATION FOR ACM GORDON BELL PRIZE

We unleash the power of over 20,000 GPUs to perform many-against-many protein similarity search on one of the largest publicly available datasets with 405 million proteins in 3.4 hours with an unprecedented rate of 691 million alignments per second, cutting the time-to-solution for many use cases from weeks.

II. PERFORMANCE ATTRIBUTES

Performance Attribute             Value
Category of achievement           Time to solution, alignments per second, cell updates per second (CUPs)
Type of method used               N/A
Results reported on the basis of  Whole application for time to solution and alignments per second; kernel time for cell updates per second
Precision reported                Integer
System scale                      3364 nodes (141,288 CPU cores and 20,184 GPUs)
Measurement mechanism             Timers

III. OVERVIEW OF THE PROBLEM

Comparative genomics studies the evolutionary and biological relationships between different organisms by exploiting similarities over the genome sequences. A common task, for example, is to find out the functional or taxonomic contents of the samples collected from an environment, often by querying the collected sequences against an established reference database. The importance of enabling and building fast computational infrastructure for comparative genomics becomes more critical as more and more genomes are sequenced.

Our work addresses the computational challenges posed by searching similarities between two sets of proteins in the sequence domain. The use cases of this task in computational biology are numerous and include functional annotation [1], gene localization, and studying protein evolution [2]. In metagenomics, the DNA sequences collected from the environment enable the study of a diverse microbial genome pool that is often missed by cultivation-based methods. Such samples contain millions of protein sequences [3], and a major component of many biological workflows is to find the existing genes by aligning them against a reference database. With sequencing costs dropping and the technology becoming more available, the bottlenecks in metagenomics research are gradually shifting towards computation and storage [4], [5].

We focus on the problem of aligning a set of sequences against another set of sequences.
This problem often occurs within the context of identifying sequences in one set (the query sequences) by using another set of sequences whose functions are already known (the reference sequences). Another context is to find the similar sequences in a given set by clustering them. In this variant, a many-against-many search is performed over a set of sequences to find the similar sequences in the set (often followed by clustering of the sequences). This variant can also be seen as aligning the given set against itself, where the query and the reference set are the same. We refer to this as many-against-many protein similarity search and focus on this search problem in our work.

Our work demonstrates that HPC is a viable fast alternative for enabling tree-of-life scale metagenomics research. Harnessing the power of accelerators, which are well suited to the SIMD-type parallelism required by the alignment operations in the search, we develop novel parallel algorithms and optimization techniques that simultaneously utilize all resources on the nodes and attain high performance. The key points in our approach for addressing the described challenges can be summarized as follows:

• The immense computational resources required by the large-scale search operations are met by distributed utilization of accelerators. Compute-intensive alignment operations form the main computational bottleneck, and popular tools in this field [6]-[8] make use of SIMD parallelism on the CPUs with vector instructions but do not utilize accelerators, which are better suited to these types of operations.

• We take advantage of the heterogeneous architecture of the nodes and hide the overhead of the memory-bound distributed overlap-detection component of the search by performing it on the CPUs simultaneously with the alignment operations on the accelerators.
• To avoid IO during the search (which is often the method of choice for distributed tools in this domain when the scale of the search becomes infeasible), we develop a distributed 2D Blocked Sparse SUMMA algorithm that performs the search incrementally and hence can effectively control the maximum amount of memory required by the entire search. In this way, our approach only uses IO at the beginning and at the end, both done in parallel and constituting at most 3% of the entire search time.

• By relying on custom load-balancing techniques and on distributed sparse matrices, founding structures whose parallel performance is well studied in numerical linear algebra, we obtain good scalability, attaining more than 75% strong-scaling and 80% weak-scaling parallel efficiency.

The biggest protein sequence similarity search reported on a supercomputer system, to the best of our knowledge, was performed in 2021 by DIAMOND [6]. That search queried 281 million sequences against 39 million sequences on 520 nodes of the Cobra supercomputer at the Max Planck Society and took 5.42 hours, performing a total of 23.0 billion pairwise alignments in the very sensitive mode (1.2 million alignments per second). With our search tool PASTIS (Protein Alignment via Sparse Matrices), we significantly improve on this by searching 405 million sequences against 405 million sequences on 3364 compute nodes of the Summit supercomputer at the Oak Ridge Leadership Computing Facility. Our search took 3.44 hours, performing a total of 8.6 trillion pairwise alignments at a rate of 690.6 million alignments per second. Overall, we increase the scale of the solved problem by an order of magnitude (15.0x) and improve the alignments performed per second by more than two orders of magnitude (575.5x).

IV. CURRENT STATE OF THE ART

There are many protein similarity search tools in the literature, and each of them has different search techniques that have been refined over the years.
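As a quick sanity check, the rates and ratios quoted in the comparison above follow directly from the reported raw figures; the short script below only restates numbers from this section, with the 15.0x scale ratio computed as the ratio of the sizes of the pairwise search spaces:

```python
# Figures reported above for DIAMOND (2021) and PASTIS (this work).
diamond_alignments, diamond_hours = 23.0e9, 5.42
pastis_alignments, pastis_hours = 8.6e12, 3.44

diamond_rate = diamond_alignments / (diamond_hours * 3600)
pastis_rate = pastis_alignments / (pastis_hours * 3600)
print(round(diamond_rate / 1e6, 1))   # ~1.2 million alignments per second

# Scale ratio: pairwise search-space size, 405M x 405M vs 281M x 39M.
print(round((405e6 * 405e6) / (281e6 * 39e6), 1))   # ~15.0

# Rate ratio from the reported per-second rates.
print(round(690.6 / 1.2, 1))   # ~575.5
```

Note that 8.6 trillion is itself a rounded count, so recomputing the PASTIS rate from it gives roughly 694 million alignments per second rather than the reported 690.6.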
Among the more popular of these tools are BLASTP [9], MMSeqs2 [7], LAST [10], DIAMOND [11], and USearch [12]. In terms of parallelism, almost all of the mentioned tools support some form of parallelism with varying degrees of efficiency. Libraries such as DIAMOND, LAST, and MMSeqs2 have great support for on-node parallelism: they can take advantage of vector instructions, use multiple cores, employ cache-friendly algorithms, etc. Some of these tools, such as DIAMOND, MMSeqs2, and mpiBLAST [13], also run in a distributed setting. The main shortcomings of these tools' distributed-memory parallelization can be summarized as follows:

• In LAST and MMSeqs2, the index data structures for at least one set of the sequences (queries or targets) are replicated on each compute node before the search phase, which limits the largest problems that can be solved. In DIAMOND, they are written as partitioned chunks to disk, which severely increases the pressure on the file system.

• The existing software does not have a global view of these replicas/chunks, and the parameters are set per replica, which results in changing sensitivity with increased parallelism or memory constraints. For example, the DIAMOND guide states that "this [block size] parameter affects the algorithm and results will not be completely identical for different values of the block size". By contrast, the PASTIS algorithm gives identical results irrespective of the amount of parallelism utilized and the blocking size chosen.

• These search tools do not support GPUs. The GPUs harbor a much higher level of SIMD parallelism, which is a perfect fit for the pairwise alignments usually utilized in the search.

Among the protein search tools that have distributed-memory support, we further examine MMSeqs2 and DIAMOND, as these two tools are the current state of the art in distributed protein similarity search.
MMSeqs2 [7] uses hybrid MPI/OpenMP for distributed-memory parallelism and has support for the SSE and AVX2 vector instruction sets. Two modes of parallelism are provided, according to whether the reference or the query sequence set is distributed among the parallel processes. In the first mode, the reference sequence set is divided into chunks and distributed among the parallel nodes; each process searches all query sequences against its chunk of the reference. In the second mode, the query set is divided into chunks and distributed among the parallel nodes; each process searches its chunk of queries against all reference sequences. In our earlier CPU-based PASTIS work, we found our approach to be more scalable than MMSeqs2 [14], with MMSeqs2 suffering from high IO overheads.

The distributed-memory parallelism in DIAMOND [6] is geared more towards providing the capability to run on commodity clusters, i.e., cloud computing. In this regard, it avoids using MPI, which may not be found on such clusters, and instead relies heavily on IO operations supported by POSIX-compliant parallel file systems. DIAMOND divides both the reference and the query sequence sets into chunks, and an element of the Cartesian product of these two sets of chunks is referred to as a work package. These packages are processed in parallel by worker processes. This workflow makes a distinction between the parallel file system shared among nodes and the disks local to nodes. Once the processing of a query chunk against all reference chunks is complete, the final worker process joins the results and writes them to an output file. These choices may have serious performance implications for HPC systems. Nevertheless, the focus of distributed-memory parallelism in DIAMOND is the capability to run on commodity clusters and fault tolerance, rather than high performance.

V.
PROTEIN SIMILARITY SEARCH PIPELINE

Our approach to the protein similarity search problem consists of three main components: (i) discovery of candidate pairwise sequences which may harbor a certain degree of homology, (ii) batch alignment of the discovered candidate sequences, and (iii) formation of the protein similarity graph from the information obtained in the alignment. In the discovery of the candidate pairwise sequences, PASTIS has the option to introduce substitute k-mers that are m-nearest neighbors of a k-mer or to plug in a reduced alphabet [15], both of which can enhance the sensitivity. It can make use of different alignment libraries and of different algorithms within them, and it can seamlessly integrate common sequence alignment metrics, such as average nucleotide identity and coverage, in the formation of the similarity graph. These options enable PASTIS to reach different regions of the overall search space and increase the effectiveness of the search.

The basic information storage and manipulation medium in PASTIS is sparse matrices. They are used to represent different types of information required throughout the search. For instance, the k-mer information of the sequences is captured in a sparse matrix whose rows and columns respectively correspond to sequences and k-mers, and a nonzero element in this matrix indicates the existence of a specific k-mer in a specific sequence. Apart from being one of the most well-studied and optimized structures in parallel linear algebra, sparse matrices provide flexibility in the sense that arbitrary information, such as location, score, etc., can be encapsulated within their elements. Figure 1 illustrates some of the sparse matrices utilized in PASTIS; it can be seen that these matrices make it easy to store and manipulate any information required by our similarity search pipeline.
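As a toy illustration of the sequences-by-k-mers matrix described above (a minimal Python sketch of the idea, not PASTIS's CombBLAS-based data structures), each row maps the k-mers of one sequence to the positions at which they occur:

```python
# Build a toy sparse sequences-by-k-mers matrix: row i is a dict mapping
# each k-mer of sequence i to the list of positions where it occurs.
def kmer_matrix(seqs, k):
    rows = []
    for seq in seqs:
        row = {}
        for pos in range(len(seq) - k + 1):
            row.setdefault(seq[pos:pos + k], []).append(pos)
        rows.append(row)
    return rows

A = kmer_matrix(["MKVLAT", "KVLQMK"], k=3)
print(A[0]["KVL"])   # [1]: k-mer "KVL" starts at position 1 of sequence 0
```

Storing position lists in the nonzeros mirrors the flexibility noted above: any per-(sequence, k-mer) payload can live inside the matrix element.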
Although sparse matrices are very common and widely used in the field of linear algebra, their utilization and importance have recently gained momentum in graph computations thanks to the GraphBLAS standardization efforts [16]. The basic motivation is to express graph computations in the language of linear algebra and, by doing so, utilize decades of algorithmic and optimization work in sparse linear algebra within graph computation frameworks. Different from the matrix operations of classical linear algebra, operations on graphs usually require different operators to perform computations on sparse matrices. For example, in PASTIS the discovery of candidate pairwise sequences is expressed through an overloaded sparse matrix-sparse matrix "multiplication", in which the elements involved in the operation are custom data types and the conventional "multiply-add" operation is overloaded with custom operators, known as semirings. Semiring algebra allows graph operations to be expressed through operations on sparse matrices, and we utilize various semirings to enable different types of alignments (Figure 2).

A. Software stack and parallelism

Our protein similarity search pipeline utilizes several libraries and orchestrates them in a distributed setting. For distributed sparse matrices and computations on them, it relies on CombBLAS [17], a distributed-memory parallel graph library that is based on arbitrary user-defined semirings on sparse matrices and vectors. For parallel alignment, it utilizes the SeqAn C++ library [8] and ADEPT [18]. Among these libraries, CombBLAS supports MPI/OpenMP hybrid parallelism, SeqAn supports node-level shared-memory parallelism with vectorization, and ADEPT supports node-level many-core parallelism. Apart from those, PASTIS itself directly makes use of MPI/OpenMP hybrid parallelism. The software stack of PASTIS is illustrated in Figure 3.
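The semiring idea can be illustrated with a small self-contained sketch (the operators here are hypothetical choices of ours, not the actual semirings used in PASTIS or CombBLAS): in an A * A^T product over a sequences-by-k-mers matrix, "multiply" fires when two sequences share a k-mer and emits a pair of positions, while "add" aggregates the hits for a sequence pair:

```python
# Toy semiring-style A * A^T: A is a list of {k-mer: position} dicts, one per
# sequence. "Multiply" emits the position pair of a shared k-mer; "add"
# accumulates a shared-k-mer count plus one witness position pair.
def candidate_pairs(A):
    pairs = {}
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            for kmer, pi in A[i].items():
                if kmer in A[j]:                                  # "multiply" fires
                    cnt, _ = pairs.get((i, j), (0, None))
                    pairs[(i, j)] = (cnt + 1, (pi, A[j][kmer]))   # "add"
    return pairs

A = [{"MKV": 0, "KVL": 1}, {"KVL": 0, "VLQ": 1}, {"AAA": 0}]
print(candidate_pairs(A))   # {(0, 1): (1, (1, 0))}: sequences 0 and 1 share "KVL"
```

Swapping in a different "add" (e.g., keeping all position pairs instead of a count) changes what the downstream alignment stage sees, which is what makes the semiring abstraction useful.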
An important design choice in our approach is to separate the parallelism level used for alignment from that used for the other components. For alignment, we deliberately prefer on-node libraries that are able to exploit different aspects of parallelism found on the node, such as threads, CPU vector instructions, or fine-grained parallelism on GPUs. These on-node alignment libraries are coordinated in PASTIS through distributed sparse matrix computations, for which there already exist fast and optimized data structures and algorithms. The key to high performance, as we demonstrate in our work, is the good orchestration of on-node and node-level parallelism with techniques that are able to overcome the performance bottlenecks. CombBLAS [17] uses a 2D decomposition for distributed sparse matrices in which the matrices are partitioned into rectangular blocks. It uses a square process grid, which requires the number of processes to be a perfect square. It supports compressed sparse column and doubly-compressed sparse column storage formats [19] and contains fast, state-of-the-art algorithms for complicated operations like SpGEMM, being able to run such operations efficiently both at node level [20] and at massive scale [21]. ADEPT [18] is a GPU-accelerated sequence alignment library that supports both DNA and protein sequence alignments. It uses a combination of inter- and intra-task parallelism to realize the full Smith-Waterman sequence alignment on GPUs. ADEPT derives its performance from efficient use of the GPU's memory hierarchy, exploiting fast register-to-register data transfers for inter-cell communication while computing the dynamic programming matrix. ADEPT has CUDA, HIP, and SYCL ports, enabling it to utilize NVIDIA, AMD, and Intel GPUs, respectively.
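The 2D decomposition used by CombBLAS can be illustrated with a small ownership function: with p processes arranged in a √p × √p grid, an n × n matrix is cut into rectangular tiles and each element belongs to exactly one grid cell. This is a minimal sketch of the idea, not CombBLAS's actual distribution code.

```python
import math

def owner(i, j, n, p):
    """Grid coordinates of the process owning element (i, j) of an
    n x n matrix distributed over a sqrt(p) x sqrt(p) process grid
    (p is assumed to be a perfect square)."""
    q = math.isqrt(p)               # process grid is q x q
    tile = math.ceil(n / q)         # rows/cols per rectangular block
    return (i // tile, j // tile)

# With n = 8 and p = 4, element (5, 2) lands on grid cell (1, 0).
```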
ADEPT's driver class works as an interface between the calling application and the GPU kernels; the driver detects all the available GPUs on a node and distributes alignments across them. A unique C++ thread handles data packing and transfers (to and from the GPU) for each GPU.

B. Performance characteristics

Computational patterns. The two basic types of computations performed in our protein similarity search pipeline are the sparse computations and the edit distance computations required in the alignment. The former is memory-bound, having low computational intensity and a high memory footprint with irregular access patterns, while the latter is compute-bound, having high computational intensity and a uniform pattern in computing the edit distance matrices, which are small and dense. Taking into account the fact that most alignment libraries achieve high performance through SSE and AVX vector instruction sets (as is the case for SeqAn, utilized by PASTIS) and that these are not supported by the IBM PowerPC processors on Summit, we dedicate GPU resources solely to alignment and utilize CPU resources for memory-intensive sparse computations. Considering the mentioned characteristics of these computations, the accelerators are more suited for alignment than for sparse computations. For this reason, we rely on ADEPT (see Figure 3) for alignment, which has on-node multiple-accelerator support. This library also uses resources on the CPU, but they constitute a small percentage of the alignment. Memory requirements. Many-against-many protein similarity search requires a huge amount of memory, and the existing libraries in this area have various techniques to deal with this issue, ranging from writing intermediate files to disk to performing the search in stages. For a modest dataset containing 20 million sequences, one usually needs to store hundreds of billions of candidate alignments and to perform tens of billions of alignments.
The memory required by such a relatively small-scale search can quickly exceed the amount of memory found on a node. Moreover, one also needs additional data structures to perform the search efficiently. For example, the method to discover candidate alignments in our approach uses a parallel SpGEMM, which usually needs much more intermediate memory than the actual storage required by the found candidates. This factor, the average amount of intermediate results computed and stored per output element, is called the compression factor; even with the modest values between 1 and 10 that are often seen in genomics datasets, it is clear that memory management must be given special attention in many-against-many search. Finally, the number of candidate pairs that need to be stored and aligned grows quadratically with the number of sequences in the search, which makes similarity search over huge datasets even more challenging in terms of memory requirements. I/O and communication. PASTIS uses parallel MPI I/O for input and output files. The input to PASTIS is a file in FASTA format (a very common file format in bioinformatics for representing nucleotide and protein sequences) and the output is the similarity graph in triplets whose entries indicate two sequences and the similarity between them. The output file is typically larger than the input file. The communication in PASTIS can be categorized into two kinds: the communication required for the sequences and that required for the sparse computations. The overhead of the former is effectively hidden by performing it in a non-blocking manner until the sequences are required, which is when the pairwise alignments are to be performed. The communication (and computation) required by most of the sparse computations can also be hidden, given that the nodes have accelerator support. We investigate this issue in Section VI-C.
Compared to the memory and computational issues described so far, I/O and communication bottlenecks usually constitute less of a problem in our approach to performing many-against-many protein similarity search. Typically, I/O takes no more than 3% of the overall execution time in PASTIS.

VI. INNOVATIONS REALIZED

We address the computational challenges posed by distributed protein similarity search mainly with three novel techniques. The main performance bottleneck, the huge memory requirement of the search, is addressed by proposing a blocked variant of the 2D Sparse SUMMA utilized in the distributed formation of the overlap matrix (Section VI-A). Relying on the observation that the overlap matrix is symmetric (the similarity graph is undirected), we propose techniques to avoid a significant amount of sparse computations, along with two different load-balancing schemes that exhibit different behavior based on the blocking factors and are able to achieve good computational load balance (Section VI-B). We then describe a technique that hides the overhead of memory-bound sparse computations as well as certain communication operations (Section VI-C). We validate the proposed innovations on small-scale datasets containing a few tens of millions of sequences. All experiments are conducted on the Summit system (see Section VIII for specs). For all the experiments reported, we use 1 MPI task per node and utilize all 42 cores and 6 GPUs on each node.

A. Blocked 2D Sparse SUMMA

In our approach, the candidate sequences are discovered through a parallel SpGEMM which produces an overlap matrix that contains the pairs of sequences to be aligned. The memory required by the candidate pairs is huge, and the motivation for blocked formation of the candidate pairs rests on the observation that only a fraction of them are actually similar.
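The incremental, blocked formation of the overlap matrix can be sketched as follows: rather than computing C = AB in one shot, form one output block at a time from a row stripe of A and a column stripe of B, and filter each block before the next one is computed, so that only surviving candidates stay in memory. This is a single-process stand-in (matrices as dicts, stripes taken by index modulo for brevity); the real algorithm distributes the stripes via Blocked 2D Sparse SUMMA.

```python
def spgemm(A, B):
    """Plain SpGEMM over dict-of-(row, col) sparse matrices."""
    C = {}
    for (i, k), a in A.items():
        for (k2, j), b in B.items():
            if k2 == k:
                C[(i, j)] = C.get((i, j), 0) + a * b
    return C

def blocked_spgemm(A, B, br, bc, keep):
    """Form C = A @ B in br x bc blocks, filtering each block with `keep`
    before moving on (rows/cols assigned to stripes by index mod br/bc)."""
    kept = {}
    for r in range(br):
        for c in range(bc):
            Ar = {ik: v for ik, v in A.items() if ik[0] % br == r}  # row stripe
            Bc = {kj: v for kj, v in B.items() if kj[1] % bc == c}  # col stripe
            block = spgemm(Ar, Bc)                                  # C(r, c)
            kept.update({ij: v for ij, v in block.items() if keep(v)})
    return kept

A = {(0, 0): 1, (0, 2): 1, (1, 2): 1, (2, 1): 1, (3, 0): 1, (3, 2): 1}
B = {(j, i): v for (i, j), v in A.items()}        # B = A^T
pairs = blocked_spgemm(A, B, br=2, bc=2, keep=lambda v: v >= 2)
# Only pairs sharing >= 2 "k-mers" survive: (0, 0), (0, 3), (3, 0), (3, 3).
```

Because each block is discarded after filtering, the peak memory is governed by one block's intermediate results rather than the whole overlap matrix.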
Using the information available before and after the alignment (common number of k-mers, nucleotide identity, coverage, etc.), typically less than 5% of the candidate pairs end up in the final similarity graph. Therefore, incremental similarity search can greatly reduce the memory used. The algorithm for the parallel SpGEMM of the form C = AB used for the computation of the overlap matrix is the 2D Sparse SUMMA algorithm [22]. For our analyses, we assume the matrices are square with dimension n and that the elements of the sparse matrices are distributed uniformly. Given p parallel processes, this algorithm proceeds in √p stages in which certain sub-matrices of the input matrices are broadcast and partial results for the output matrix are computed. Assuming that the collective broadcasts use a tree algorithm [23], its communication cost is given by 2α√p log √p + 2βs√p log √p, where s is the number of nonzeros in a sparse sub-matrix of dimensions n/√p × n/√p, α is the message startup time, and β is the per-word transfer time. We generalize the 2D Sparse SUMMA algorithm with arbitrary blocking factors in order to form the output matrix in blocks. We form the output matrix of the SpGEMM in br × bc blocks, where br and bc are respectively the row and column blocking factors. The computation of the output block C(r, c) requires the row stripe A(r, ∗) and the column stripe B(∗, c). Originally, the input matrices are distributed among p processes in a √p × √p grid. Therefore, to be able to compute the output matrix in blocks, A must be split into br row stripes and B must be split into bc column stripes. Each of these row and column stripes must be distributed among the √p × √p process grid. The left of Figure 4 displays how the input matrices are distributed among four processes organized into a 2 × 2 grid for a 3 × 4 blocking, along with the sub-matrices used in the computation of C(1, 1). Communication costs.
Compared against the plain SUMMA, the blocked variant increases the communication overhead, as the input matrices need to be broadcast multiple times. As mentioned earlier, the memory requirement is one of the main bottlenecks in the similarity search. In addition, as will be described in Section VI-C, the overhead of the output block computations, i.e., of both the broadcasts and the local sparse computations, can be hidden to a large degree. Nevertheless, the increase in the communication cost might be prohibitive when there are many blocks and its overhead cannot be hidden. The overall communication cost of the blocked variant is given by 2α(br · bc)√p log √p + βs(br + bc)√p log √p. From the similarity search perspective, the advantages and disadvantages of Blocked 2D Sparse SUMMA are as follows:
• It opens the path for different types of optimizations and hence is able to further reduce the overall time required by the search significantly. Up to 30% reduction in the overall runtime can be obtained with the techniques described in Sections VI-B and VI-C, which complement the blocked formation.
• It increases the time required to discover candidate alignments due to increased communication and split sparse computations.
Figure 5 plots the parallel runtime of various components of the similarity search against increasing number of blocks on a dataset containing 20 million sequences over 100 nodes of Summit. Compared to performing the entire search at once, i.e., when the number of blocks is 1, there is an average increase of 10-15% and 40-45% in the runtime of alignment and multiplication, respectively, with the increase in overall runtime being around 30%. We note that this search could not be performed on fewer nodes using only one block, which indicates the severity of the memory requirement.

B. Load balancing techniques

The overlap matrix C in the similarity search is computed to generate the candidate sequences that will be aligned.
The rows and the columns of this sparse matrix represent the sequences, and each nonzero element of it corresponds to a pairwise alignment that needs to be performed. The nonzero elements contain custom information needed by the alignment or filtering (such as seed locations in the sequences, common k-mer counts, etc.). When computed in parallel using the Blocked 2D Sparse SUMMA algorithm, each block of the overlap matrix is distributed among all p processes in the √p × √p grid. The overlap matrix is symmetric: the nonzeros C_ij and C_ji indicate that an alignment needs to be performed between sequences i and j. This has computational implications for how the search is performed in our work. First, roughly half of the elements in this matrix may not need to be computed in the multiplication. Second, these elements also need not be aligned. Finally, with blocked formation of this matrix, good load balancing necessitates custom methods that take all the mentioned conditions into account. To this end, we propose two different schemes. Triangularity-based load balancing. In the first load balancing scheme, we only compute the blocks whose intersection with the strictly upper triangular portion of the overlap matrix is non-empty. The blocks can thus be categorized into three kinds: full, partial, and avoidable, as illustrated in the left matrix in Figure 6, where full blocks are colored in green, partial blocks in yellow, and avoidable blocks in white. The full and the partial blocks need to be computed in the Blocked 2D Sparse SUMMA, while the avoidable blocks are neither computed nor aligned. The elements in the full blocks all require an alignment, while the elements in the partial blocks may or may not require an alignment depending on whether they are in the lower or upper triangular portion of the overlap matrix.
As these blocks are distributed among all processes in the process grid, the partial blocks may lead to load imbalance, especially when their intersection with the strictly upper triangular portion is small. For instance, in the overlap matrix on the left of Figure 6, assuming a 2 × 2 process grid, three processes will stay idle when performing alignments in the block at the intersection of the second row and column. In contrast, the load balance of full blocks is better than that of the partial blocks, and the number of full blocks grows quadratically with increasing number of blocks while the number of partial blocks grows linearly. Index-based load balancing. In the second load-balancing scheme, we compute all the blocks and prune them in a manner that preserves the original nonzero distribution of the overlap matrix, which is usually uniform. In the lower triangular portion of the matrix, we keep a nonzero if its row and column indices are both odd or both even; in the upper triangular portion of the matrix, we keep a nonzero if its row index is odd and its column index is even, or vice versa. This process is illustrated in the right matrix in Figure 6 for a 3 × 3 blocking. This scheme prunes roughly half of each block while respecting the symmetry of the matrix and ensuring that each pair of sequences will be aligned only once. Comparison. The two proposed load-balancing schemes incur the same amount of alignment computations. The triangularity-based load balancing scheme favors saving sparse computations at the expense of sacrificing load balance in the partial blocks. The load balance in the full blocks of this scheme, however, should be as good as the load balance in the blocks of the index-based load balancing. The index-based load balancing aims for better load balance at the expense of computing every block. Another aspect in which these two schemes differ is how they change the structure of the blocks of the overlap matrix.
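The index-based pruning rule above can be written as a small predicate; the check at the end confirms that for every unordered pair exactly one of the two symmetric nonzeros survives, so each pair of sequences is aligned once.

```python
def keep(i, j):
    """Index-based pruning of the symmetric overlap matrix: keep a lower-
    triangular nonzero iff i and j have the same parity, and an upper-
    triangular nonzero iff their parities differ."""
    if i == j:
        return False
    same_parity = (i % 2) == (j % 2)
    return same_parity if i > j else not same_parity

# Exactly one of (i, j) and (j, i) is kept for every pair:
for i in range(6):
    for j in range(6):
        if i != j:
            assert keep(i, j) != keep(j, i)
```

Because parity classes are evenly spread over the matrix, roughly half of each block is pruned while the uniform nonzero distribution of the blocks is preserved.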
The index-based scheme preserves the uniform structure of the blocks in the overlap matrix to a large extent, which should lead to a better memory access pattern compared to the triangularity-based load balancing. We compare these two load-balancing schemes on a dataset containing 20 million sequences on 64 nodes of Summit in Figure 7. In Figures 7a, 7b, and 7c, each vertical bar illustrates the average, minimum, and maximum quantities attained by the parallel processes in the respective metric. As seen in Figure 7a, the index-based method is able to attain better load balance than the triangularity-based method for all tested block counts. The load balance of the triangularity-based method tends to get better with increasing number of blocks. This is because the ratio of the partial blocks, which are the main cause of load imbalance in this method, decreases with increasing number of blocks. Although alignment time can be said to be directly proportional to the number of alignments, a better metric is the sum of the sizes of the edit distance matrices, as the sequence lengths differ from each other and the alignment algorithm used in this work is a variant of the Smith-Waterman algorithm [18], which computes the entire distance matrix. The load imbalance in this metric is presented in Figure 7b. The index-based method again has better load balance than the triangularity-based method in this metric. Note that the objective of both load balancing methods is the number of aligned pairs (i.e., Figure 7a), and the index-based method achieves very good load balance in it. This is reflected in the load imbalance in actual time spent in alignment in Figure 7c, where the index-based method attains better performance. Finally, the effect of the main advantage of the triangularity-based method, being able to save sparse computations, can be seen in Figure 7d.
It is able to attain a shorter runtime for sparse computations, as it avoids a great deal of such computations. When the effect of this method's load imbalance is low (i.e., when the number of blocks is high), it is able to attain better total runtime than the index-based method. As seen in Figure 7d, for block counts {5, 10, 15, 20} the index-based method attains better runtime, while for the rest the triangularity-based method attains better runtime due to its ability to avoid sparse computations despite its longer alignment time.

C. Pre-blocking

The blocked formation of the similarity graph results in an iterative pipeline in which the graph is constructed incrementally (as seen in Figure 4). The components of this pipeline are executed on the CPU or GPU resources of the nodes. The heterogeneous node architecture and the capability to perform the compute-bound batch pairwise alignments, which are amenable to SIMD parallelism, on GPUs allow our approach to perform similarity search by utilizing all compute and memory resources on a node. We further propose an optimization technique, which we refer to as pre-blocking, based on the pre-computation of the sparse blocks containing candidate pairs. The goal of pre-blocking is to increase the efficiency of resource utilization on a node and hence reduce the overall search time. In the incremental formation of the similarity graph, the alignments are performed on GPUs after being discovered through sparse computations, which are performed on CPUs. While the alignments are performed in a distributed manner, a big portion of the CPU resources is idle, and the sparse computations for the next block or blocks can actually begin. In this way, the candidate pairs for the next set of alignments can be discovered in advance and made ready for alignment. Note that this discovery is a fully-fledged distributed SpGEMM with its collective communication operations and memory-bound computations.
Hence, the ability to hide them can prove invaluable by resolving both of these bottlenecks at the same time. Thread management. PASTIS and CombBLAS, the graph library used for sparse computations, heavily rely on OpenMP for on-node parallelism. Although the alignment library ADEPT performs alignments on GPUs, it still uses CPU resources for pre- and post-processing. Specifically, it uses as many C++ threads as there are GPUs on a node. We dedicate as many threads as the number of devices on a node to ADEPT and use the rest of the threads for pre-blocking. The affinities of the threads used by ADEPT are set according to which CPU socket they are attached to. Comparison. The main trade-off of the proposed pre-blocking technique is that, at the expense of using slightly more memory (the memory required to compute and store the next block or blocks), it hides the SpGEMM overhead of discovering candidate sequence pairs. Although the similarity search requires an extensive amount of memory, the extra memory consumption of pre-blocking should be low as long as the number of pre-computed blocks is small. On Summit, we found that in most of our runs the ratio of time spent in alignment to time spent in sparse computations is no more than 2:1. The number of blocks to pre-compute should depend on this ratio and can be adjusted. In our experiments we always pre-compute only the next block. Pre-blocking is expected to increase the time spent in both the alignment and the sparse computations because these components are now computed concurrently and the computational and memory resources on the CPU have to be shared by both. However, the overall runtime with pre-blocking is reduced from the sum of these components to the maximum of them, even if that maximum is slightly increased. Table I presents various metrics that evaluate the efficiency of the pre-blocking technique.
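The pre-blocking overlap can be sketched with a thread pool: while block b is being aligned (on GPUs in PASTIS; a stub here), the candidate discovery for block b + 1 runs concurrently on spare CPU threads. Function names and the fake timings are illustrative only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def discover(block):           # stand-in for the SpGEMM on CPU threads
    time.sleep(0.01)
    return f"candidates-{block}"

def align(cands):              # stand-in for batch alignment on GPUs
    time.sleep(0.02)
    return f"aligned-{cands}"

def search(nblocks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as cpu:
        nxt = cpu.submit(discover, 0)
        for b in range(nblocks):
            cands = nxt.result()
            if b + 1 < nblocks:               # pre-compute the next block...
                nxt = cpu.submit(discover, b + 1)
            results.append(align(cands))      # ...while this block aligns
    return results

out = search(3)
# out == ['aligned-candidates-0', 'aligned-candidates-1', 'aligned-candidates-2']
```

As in the text, the total time approaches max(alignment, discovery) per block instead of their sum, at the cost of holding one extra block in memory.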
We compare the index- and triangularity-based load balancing methods with and without pre-blocking for five different block counts {10, 20, 30, 40, 50}. We assess the time spent in alignment, sparse multiplication, the sum of these two, and the overall execution time (under the first two big column titles). Note that the "sum" column with pre-blocking gives the actual obtained time instead of the plain sum of the "align" and "sparse" columns. Under the normalized column title, the values obtained with pre-blocking are normalized with respect to those obtained without pre-blocking. Finally, the last column evaluates the efficiency of the pre-blocking technique. As seen in Table I, although the pre-blocking scheme increases the time spent in alignment and sparse computations, it is able to hide the overhead of the latter to a great extent and reduce the runtime by 30% for the index-based scheme and 20% for the triangularity-based scheme. The efficiency of pre-blocking in the triangularity-based load balancing scheme is lower than that of the index-based scheme (around 80% vs. 95%), which can be explained by the fact that the load imbalance found in that scheme adversely affects the efficiency of pre-blocking. In summary, the pre-blocking technique can be said to be more effective for the index-based load balancing, and in both load balancing schemes it is able to reduce the overall runtime significantly.

VII. HOW PERFORMANCE WAS MEASURED

There are three types of reporting mechanisms used in our work. The first type is the timers. These simply measure the elapsed time for certain components, such as the time spent in alignment, sparse computations, I/O, or the total runtime. The load imbalance for some of the experiments in Section VI is obtained by measuring the minimum, average, and maximum time spent in the respective component by all the processes. The second reported metric is alignments performed per second.
In this metric, we consider the entire parallel runtime, record the total number of pairwise alignments performed, and divide the latter by the former. The final metric is cell updates per second. This metric is typically utilized in measuring the performance of alignment algorithms, and it indicates how many cells the algorithm updates per second. For this metric we only use the time spent in the alignment kernel, i.e., the forward scoring time in the Smith-Waterman algorithm, and divide the number of updated cells by this value.

VIII. PERFORMANCE RESULTS

This section scales up the experiments compared to the evaluations in Section VI to examine the parallel performance of PASTIS. We conduct our evaluation on the IBM system Summit at OLCF. This system consists of 4608 IBM Power System AC922 nodes, and each node is equipped with two 22-core 3.8 GHz IBM POWER9 processors and six NVIDIA Tesla V100 accelerators, each of which has 80 streaming multiprocessors. On each node there is 512 GB of CPU memory and a total of 96 GB of HBM2 memory for the accelerators. The nodes are connected with a dual-rail InfiniBand network in a non-blocking fat tree topology. We investigate the strong and weak scaling behavior in Sections VIII-A and VIII-B, respectively. For these experiments, we stay below 1000 nodes, with the largest number of nodes used being 400 for strong scaling and 784 for weak scaling. In the last part of this section (Section VIII-C), we demonstrate our full-scale run using 3364 nodes of Summit.

A. Strong scaling

We assess the strong scaling performance for both the index-based and the triangularity-based load balancing schemes. We use a dataset containing 50 million sequences and scale our approach on {49, 81, 100, 144, 196, 289, 400} nodes. We use 1 MPI task per node and use all the cores and accelerators on each node. We use a blocking factor of 8 × 8 in forming the similarity graph, with pre-blocking enabled.
The number of performed alignments for this dataset is 86.5 billion, and the entire overlap matrix (i.e., with all blocks) contains 1.99 trillion elements in the index-based scheme and 1.12 trillion elements in the triangularity-based scheme. Figure 8a illustrates the strong scaling of PASTIS by plotting the parallel runtime. The dashed line in the figure indicates the ideal case. Scaling from 49 nodes to 400 nodes, the index-based load balancing scheme attains a 66% parallel efficiency and the triangularity-based load balancing scheme attains a 76% parallel efficiency. The better efficiency of the latter can be attributed to its avoidance of a significant amount of sparse computations, despite having worse load balance. In Figures 8b and 8c we plot the speedup rates of different components. For both schemes, it is seen that the computationally intensive component "align" (which is performed on accelerators) exhibits better scalability: 78% and 87% parallel efficiency for the index-based and triangularity-based schemes, respectively, on 400 nodes. The efficiency of sparse operations is around 60% for both schemes. With the proposed algorithmic innovations, we are able to overcome, to a large extent, the overhead of the sparse computations (whose runtime constitutes a significant portion of the overall runtime, Figure 7d) and lose only 11%-12% efficiency due to them. We note that there are sparse computations that cannot be avoided via pre-blocking. Although the I/O scalability is somewhat erratic, this component constitutes too minor a portion of the overall runtime to be a bottleneck. Table II presents the overall percentage of I/O time and wait time for sequence communication to complete. PASTIS does not need the sequences until the alignments begin, and it uses non-blocking communication for them by starting their transfer right after reading the input sequences. The waiting time for these communication operations is negligible.
I/O also does not constitute a bottleneck, largely due to efficient MPI I/O. The sum of the percentages of these two components is usually less than 3% of the overall runtime.

B. Weak scaling

We examine the weak scaling behavior of our similarity search pipeline for the index-based load balancing scheme. We vary the number of sequences as we increase the number of nodes over {25, 49, 100, 196, 400, 784}. The number of alignments scales quadratically with the number of sequences. The same can be said of the majority of the sparse computations, as the flop count of the sparse matrix multiplication is proportional to the number of output elements (assuming the compression factor stays the same). Hence, we scale the number of sequences by a factor of √x when we increase the number of processes by a factor of x. As a result, we start with 20 million sequences on 25 nodes and use 28, 40, 56, 80, and 112 million sequences for 49, 100, 196, 400, and 784 nodes, respectively. Figure 9 and Table III present the obtained results. The alignment component exhibits better weak scaling efficiency, as seen in Figure 9. Overall, all components except I/O can be said to exhibit good weak scaling behavior. As mentioned before, I/O constitutes a very small portion of the parallel runtime, and this issue does not seem to affect the overall weak scaling efficiency, which stays above 80%.

C. Similarity search at scale

In this section we perform a many-against-many similarity search on a dataset containing 405 million protein sequences. This dataset was created by clustering and assembling 1.59 billion protein sequence fragments from more than two thousand metagenomic and metatranscriptomic datasets [24]. We use a subset of the non-redundant variant in which sub-fragments that can be aligned to a longer sequence with 99% of their residues and a sequence identity of 95% are eliminated.
Our production run used 3364 nodes (73% of the whole Summit system) and completed the entire search in 3.44 hours. In total, it discovered 95.9 trillion candidate pairwise alignments, of which it performed 8.6 trillion, and 1.1 trillion of these passed the ANI and coverage thresholds, ending up in the final result of the search. It sustained a rate of 690.6 million alignments per second and achieved a peak rate of 176.3 TCUPS. We used 1 MPI task per node and, on each node, 42 cores and 6 GPUs. We used a total of 400 blocks with a blocking factor of 20 × 20 for the Blocked 2D Sparse SUMMA. For the performance-related parameters described in this work, we enabled pre-blocking and utilized the triangularity-based load balancing scheme due to its better performance at larger block counts. A further breakdown of the overall execution time is presented at the bottom of Table IV. We attempted to run both DIAMOND and MMseqs2 on sizable datasets containing 50, 100, and 200 million sequences to perform many-against-many search. Both of these search tools rely on SSE and AVX vector instructions for fast alignments. These instruction sets do not exist on the processors of the Summit system. For this reason, we tried the Cori system at NERSC, which is a CPU-based system with Intel processors and support for these types of vector instructions. For MMseqs2, we started with a small number of nodes, i.e., 64 nodes for the 50 million sequence subset, but it was not able to complete this run in 6 hours. We also tried the 50 million and 100 million sequence subsets on 256 nodes, but again they were not able to complete in 12 hours. We used a sensitivity value of 5.7 for MMseqs2 in these tests. For DIAMOND, we tried the 100 million sequence subset on 150 nodes and the 200 million sequence subset on 400 nodes, with both failing with errors. We tried both the very sensitive and ultra sensitive modes for DIAMOND.
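The headline figures of the production run can be reproduced from the quoted numbers; small differences come from rounding in the reported values.

```python
# All values as stated in the text for the full-scale run.
seqs = 405e6
search_space = seqs * seqs                 # many-against-many pairs
candidates = 95.9e12                       # discovered candidate alignments
performed = 8.6e12                         # alignments actually performed
final = 1.1e12                             # pairs passing ANI/coverage
hours = 3.44

rate = performed / (hours * 3600)          # ~6.9e8 alignments per second
sensitivity = performed / search_space     # ~5.2e-5 alignments per pair
survivors = final / candidates             # ~1.1% of candidates kept
```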
We were able to complete the 50 million sequence subset for DIAMOND on 4 nodes in the default (fast) mode. This run completed in 22 minutes, sustaining around 60k alignments per second. This value was much lower than what the authors obtained when running in higher sensitivity modes on larger numbers of nodes. For this reason, we compare the performance results of our run with those of DIAMOND reported very recently on another supercomputer system, Cobra, at the Max Planck Society [6]. As reported in their work, DIAMOND completed a search of 281 million query sequences against a reference database of 39 million sequences on 520 nodes, taking 5.42 hours and performing 23.0 billion alignments in the very sensitive mode, and taking 17.77 hours and performing 23.1 billion alignments in the ultra sensitive mode. Considering the former, this translates into 1.2 million alignments per second in a many-against-many search space of 281 × 39 × 10^12 sequence pairs. Our run attained 690.6 million alignments per second in a search space of 405 × 405 × 10^12 sequence pairs. Our experiment conducted the search in a space that is 15.0x bigger while increasing the rate of alignments per second by two orders of magnitude, i.e., 575.5x. As for the total number of alignments performed, if we scale the reported DIAMOND run to the search space in which our experiment was performed, this becomes equal to a projected value of 345.7 billion alignments. Our approach can be said to perform a more sensitive search, with alignments per unit of search space being 5.2e-5 compared to DIAMOND's 2.1e-6, amounting to a factor of 24.8x difference. Time to solution for DIAMOND, with an assumption of linear scaling to 2025 nodes, results in a projected time to solution of 12.53 hours, compared to 3.44 hours for our experiment, which shows our search is 3.6x faster despite performing an order of magnitude more alignments.
IX.
IMPLICATIONS

Many-against-many protein sequence search is a form of sparse and irregular all-vs-all comparison. The sparsity is data dependent; hence it is known only at runtime which pairwise comparisons are worth performing. Consequently, the problem puts significant stress on the network interconnect. Performing this task naively would amount to a giant MPI_Alltoallv call, also known as a personalized all-to-all broadcast. PASTIS significantly reduces this pressure on the interconnect network by regularizing the computation. It does so by casting the problem in terms of sparse matrix operations. This technique could also be applied to other irregular applications, as has been done successfully for graph and combinatorial problems [17].

The in-node computation involves sparse matrix-matrix multiplications and a large set of pairwise alignments per node. While the latter maps well to the wide vector units of GPUs, NVIDIA's introduction of Dynamic Programming (DPX) instructions with its newly announced Hopper architecture promises up to 40x speedup for the most expensive part of protein sequence search. PASTIS running on such architectures with DPX instructions would be significantly faster, but it would also be bound by communication costs at scale as the computation speeds up drastically. Hence, future supercomputers that employ accelerators such as Hopper need to provision higher bisection and network injection bandwidth.

Since PASTIS uses a semiring in its SpGEMM, and all the high-performance GPU implementations of SpGEMM are hard-coded for floating-point arithmetic, we performed those steps on the CPU. Thanks to decades of research on high-performance SpGEMM implementations on the CPU, this did not become a performance bottleneck. However, with the aforementioned changes coming to accelerators, support for sparse matrix operations over other algebras would be very welcome.
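The idea of casting candidate discovery as a sparse matrix operation can be sketched in a few lines. This is an illustrative toy, not the PASTIS implementation: it uses SciPy's floating-point SpGEMM over the ordinary (plus, times) semiring to count shared k-mers, whereas PASTIS employs a custom semiring that additionally carries k-mer positions. The common k-mer threshold of 2 matches the production-run parameter in Table IV.

```python
import numpy as np
from scipy.sparse import csr_matrix

def candidate_pairs(seqs, k=3, common_kmer_threshold=2):
    """Find sequence pairs sharing >= threshold k-mers via A @ A.T."""
    vocab, rows, cols = {}, [], []
    for i, s in enumerate(seqs):
        for km in {s[j:j + k] for j in range(len(s) - k + 1)}:
            rows.append(i)
            cols.append(vocab.setdefault(km, len(vocab)))
    # A is the 0/1 sequence-by-kmer matrix; A @ A.T counts shared k-mers,
    # so its (i, j) entry is the number of k-mers sequences i and j share.
    A = csr_matrix((np.ones(len(rows)), (rows, cols)),
                   shape=(len(seqs), len(vocab)))
    C = (A @ A.T).tocoo()
    return {(int(i), int(j)) for i, j, v in zip(C.row, C.col, C.data)
            if i < j and v >= common_kmer_threshold}

print(candidate_pairs(["ABCDEF", "ABCDXY", "QWERTY"]))  # {(0, 1)}
```

Only the pairs surviving this threshold are handed to the (much more expensive) alignment stage, which is what makes the overall computation regular and communication-friendly.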
NVIDIA's cuSPARSE library took a big step in this direction with its latest release, which provides an optimized implementation of the multiplication of a sparse matrix and a dense matrix with custom operators. We suggest extending this ability to other functions in the sparse matrix libraries provided by GPU vendors.

Fig. 1: Examples of various sparse matrices used in PASTIS. The types of the elements in each matrix are different, and a sparse matrix can utilize different element types according to the options provided (such as alignment type).

Fig. 2: Semiring algebra allows PASTIS to express computations in similarity search through sparse operations. Illustrated here is a simple example of discovering a candidate pair for alignment.

Fig. 3: PASTIS utilizes different libraries and is able to make efficient use of both CPU and GPU resources found on a node.

Fig. 4: Discovery of the candidate alignments via Blocked 2D Sparse SUMMA and incremental similarity search.

Fig. 5: The effect of increasing the number of blocks on the runtime of the sparse and alignment components.

Fig. 6: Triangularity-based (left) vs. index-based load balancing (right).

Fig. 7: Comparison of two load balancing schemes on 64 processes. The three points on a vertical line in the plots at the top and bottom left illustrate the load imbalance by capturing the minimum, average, and maximum values attained by the parallel processes in the respective metric that is measured.

Fig. 8: Strong scaling performance. The annotated values in the plots indicate the attained parallel efficiency (%).

Table I: The effect of pre-blocking for index- and triangularity-based load balancing methods.
Columns: time w/o pre-blocking (align, sparse, sum, total; sec.) | time w/ pre-blocking (align, sparse, sum, total; sec.) | normalized (align, sparse, total) | load balancing efficiency (%).

index-based:
10 | 627 582 1209 1555 | 722 663 740 1090 | 1.15 1.14 0.70 | 97.6
20 | 667 582 1249 1606 | 765 726 793 1123 | 1.15 1.25 0.70 | 96.4
30 | 705 586 1291 1659 | 804 767 842 1163 | 1.14 1.31 0.70 | 95.5
40 | 740 590 1330 1724 | 836 801 873 1203 | 1.13 1.36 0.70 | 95.7
50 | 774 596 1370 1774 | 871 841 919 1245 | 1.13 1.41 0.70 | 94.8

triangularity-based:
10 | 610 465 1076 1812 | 674 610 864 1468 | 1.10 1.31 0.81 | 78.0
20 | 634 411 1045 1641 | 694 571 844 1320 | 1.09 1.39 0.80 | 82.2
30 | 658 394 1052 1602 | 716 574 857 1287 | 1.09 1.46 0.80 | 83.5
40 | 674 388 1062 1609 | 731 585 867 1286 | 1.08 1.51 0.80 | 84.3
50 | 692 362 1053 1548 | 749 568 844 1243 | 1.08 1.57 0.80 | 88.7

Table II: Sequence communication wait (cwait) and IO time percentage in overall runtime.

#nodes | index-based (cwait%, IO%) | triangularity-based (cwait%, IO%)
49 | 0.14 0.68 | 0.14 1.37
81 | 0.17 0.70 | 0.17 1.39
100 | 0.18 0.78 | 0.19 1.39
144 | 0.21 0.87 | 0.22 1.45
196 | 0.23 0.97 | 0.25 1.54
289 | 0.23 1.48 | 0.27 1.62
400 | 0.27 1.98 | 0.31 2.77

Table III: Number of sequences and alignments.

Table IV presents the parameters used in the experiment and the program, and gives details about the obtained results.
Table IV: Parameters, results, and statistics of our large-scale production run.

Experiment parameters: System: Summit at OLCF; Number of nodes: 3364; Process grid (2D): 58 × 58; Cores per process: 42; GPUs per process: 6; Compiler (CPU): GNU gcc 9.1.0; Compiler (GPU): CUDA nvcc 11.0.3; MPI: Spectrum MPI 10.4.

Program parameters: Number of input sequences: 404,999,880; k-mer length: 6; Gap open penalty: 11; Gap extension penalty: 2; Common k-mer threshold: 2; ANI threshold: 0.30; Coverage threshold: 0.70; Blocking factor: 20 × 20; Load balancing: Triangularity-based; Pre-blocking: Enabled.

Results: Discovered candidates: 95,855,955,765,012; Performed alignments: 8,552,623,259,518 (8.9%); Similar pairs (output elements): 1,048,288,620,764 (12.3%); Search space: 1.6e17; Alignment space: 5.2e-5; Output (file size): 27 TB; Runtime: 3.44 hours; Alignments per second: 690,609,577; Cell updates per second: 176.3 TCUPs.

Breakdown & other: Time: Align 2.62 hours; SpGEMM 2.06 hours; Sparse (all) 2.22 hours; Pre-blocking 2.62 hours; IO 12.0 minutes; Communication wait 0.2 minutes. Imbalance (%): Alignment 7.1; Sparse 3.1. Sequence-by-kmer matrix: Dimensions 404,988,624 × 244,140,625; Elements 48,824,292,733.

• Enables similarity search over huge datasets and reduces the consumed memory required by many-against-many sequence alignment.

https://metaclust.mmseqs.org/current release/

REFERENCES

[1] N. Glover, C. Dessimoz, I. Ebersberger, S. K. Forslund, T. Gabaldón, J. Huerta-Cepas, M.-J. Martin, M. Muffato, M. Patricio, C. Pereira, A. S. da Silva, Y. Wang, E. Sonnhammer, and P. D. Thomas, for the Quest for Orthologs Consortium, "Advances and Applications in the Quest for Orthologs," Molecular Biology and Evolution, vol. 36, no. 10, pp. 2157-2164, 2019.
[2] G. Caetano-Anollés and D. Caetano-Anollés, "An evolutionarily structured universe of protein architecture," Genome Research, vol. 13, no. 7, pp. 1563-1571, 2003.
[3] A. Godzik, "Metagenomics and the protein universe," Current Opinion in Structural Biology, vol. 21, no. 3, pp. 398-403, 2011.
[4] M. B. Scholz, C.-C. Lo, and P. S. Chain, "Next generation sequencing and bioinformatic bottlenecks: the current state of metagenomic data analysis," Current Opinion in Biotechnology, vol. 23, no. 1, pp. 9-15, 2012.
[5] T. Prakash and T. D. Taylor, "Functional assignment of metagenomic data: challenges and applications," Briefings in Bioinformatics, vol. 13, no. 6, pp. 711-727, 2012.
[6] B. Buchfink, K. Reuter, and H.-G. Drost, "Sensitive protein alignments at tree-of-life scale using DIAMOND," Nature Methods, vol. 18, no. 4, pp. 366-368, 2021.
[7] M. Steinegger and J. Söding, "MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets," Nature Biotechnology, vol. 35, no. 11, pp. 1026-1028, 2017.
[8] A. Döring, D. Weese, T. Rausch, and K. Reinert, "SeqAn: an efficient, generic C++ library for sequence analysis," BMC Bioinformatics, vol. 9, no. 1, p. 11, 2008.
[9] S. F. Altschul, T. L. Madden, A. A. Schäffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman, "Gapped BLAST and PSI-BLAST: a new generation of protein database search programs," Nucleic Acids Research, vol. 25, no. 17, pp. 3389-3402, 1997.
[10] S. M. Kiełbasa, R. Wan, K. Sato, P. Horton, and M. C. Frith, "Adaptive seeds tame genomic sequence comparison," Genome Research, vol. 21, no. 3, pp. 487-493, 2011.
[11] B. Buchfink, C. Xie, and D. H. Huson, "Fast and sensitive protein alignment using DIAMOND," Nature Methods, vol. 12, no. 1, p. 59, 2015.
[12] R. C. Edgar, "Search and clustering orders of magnitude faster than BLAST," Bioinformatics, vol. 26, no. 19, pp. 2460-2461, 2010.
[13] A. E. Darling, L. Carey, and W. C. Feng, "The design, implementation, and evaluation of mpiBLAST," 2003. [Online]. Available: https://www.osti.gov/biblio/976625
[14] O. Selvitopi, S. Ekanayake, G. Guidi, G. A. Pavlopoulos, A. Azad, and A. Buluç, "Distributed many-to-many protein sequence alignment using sparse matrices," in SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, 2020, pp. 1-14.
[15] L. R. Murphy, A. Wallqvist, and R. M. Levy, "Simplified amino acid alphabets for protein fold recognition and implications for folding," Protein Engineering, Design and Selection, vol. 13, no. 3, pp. 149-152, 2000.
[16] J. Kepner, P. Aaltonen, D. Bader, A. Buluç, F. Franchetti, J. Gilbert, D. Hutchison, M. Kumar, A. Lumsdaine, H. Meyerhenke, S. McMillan, C. Yang, J. D. Owens, M. Zalewski, T. Mattson, and J. Moreira, "Mathematical foundations of the GraphBLAS," in 2016 IEEE High Performance Extreme Computing Conference (HPEC), 2016, pp. 1-9.
[17] A. Azad, O. Selvitopi, M. T. Hussain, J. R. Gilbert, and A. Buluç, "Combinatorial BLAS 2.0: Scaling combinatorial algorithms on distributed-memory systems," IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 4, pp. 989-1001, 2022.
[18] M. G. Awan, J. Deslippe, A. Buluc, O. Selvitopi, S. Hofmeyr, L. Oliker, and K. Yelick, "ADEPT: a domain independent sequence alignment strategy for GPU architectures," BMC Bioinformatics, vol. 21, no. 1, p. 406, 2020.
[19] A. Buluç and J. R. Gilbert, "On the Representation and Multiplication of Hypersparse Matrices," in IPDPS. IEEE, 2008.
[20] Y. Nagasaka, S. Matsuoka, A. Azad, and A. Buluc, "High-performance sparse matrix-matrix products on Intel KNL and multicore architectures," in ICPP Workshops. ACM, 2018, pp. 34:1-34:10.
[21] M. T. Hussain, O. Selvitopi, A. Buluç, and A. Azad, "Communication-avoiding and memory-constrained sparse matrix-matrix multiplication at extreme scale," in 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2021, pp. 90-100.
[22] A. Buluç and J. R. Gilbert, "Parallel sparse matrix-matrix multiplication and indexing: Implementation and experiments," SIAM Journal on Scientific Computing, vol. 34, no. 4, pp. C170-C191, 2012.
[23] E. Chan, M. Heimlich, A. Purkayastha, and R. van de Geijn, "Collective communication: theory, practice, and experience," Concurrency and Computation: Practice and Experience, vol. 19, no. 13, pp. 1749-1783, 2007.
[24] M. Steinegger and J. Söding, "Clustering huge protein sequence sets in linear time," Nature Communications, vol. 9, no. 1, p. 2542, 2018.
[ "A COMPARATIVE STUDY OF DEEP LEARNING LOSS FUNCTIONS FOR MULTI-LABEL REMOTE SENSING IMAGE CLASSIFICATION" ]
[ "Hichame Yessou \nFaculty of Electrical Engineering and Computer Science\nTechnische Universität Berlin\nGermany\n", "Gencer Sumbul \nFaculty of Electrical Engineering and Computer Science\nTechnische Universität Berlin\nGermany\n", "Begüm Demir \nFaculty of Electrical Engineering and Computer Science\nTechnische Universität Berlin\nGermany\n" ]
[ "Faculty of Electrical Engineering and Computer Science\nTechnische Universität Berlin\nGermany" ]
This paper analyzes and compares different deep learning loss functions in the framework of multi-label remote sensing (RS) image scene classification problems. We consider seven loss functions: 1) cross-entropy loss; 2) focal loss; 3) weighted cross-entropy loss; 4) Hamming loss; 5) Huber loss; 6) ranking loss; and 7) sparseMax loss. All the considered loss functions are analyzed for the first time in RS. After a theoretical analysis, an experimental analysis is carried out to compare the considered loss functions in terms of their: 1) overall accuracy; 2) class imbalance awareness (for which the number of samples associated with each class significantly varies); 3) convexity and differentiability; and 4) learning efficiency (i.e., convergence speed). On the basis of our analysis, some guidelines are derived for a proper selection of a loss function in multi-label RS scene classification problems.
10.1109/igarss39084.2020.9323583
[ "https://export.arxiv.org/pdf/2009.13935v1.pdf" ]
221,995,561
2009.13935
89d5ef3ccf5682d8c8b8fa2e7929761b202fffe5
A COMPARATIVE STUDY OF DEEP LEARNING LOSS FUNCTIONS FOR MULTI-LABEL REMOTE SENSING IMAGE CLASSIFICATION

Hichame Yessou, Gencer Sumbul, Begüm Demir
Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Germany

Index Terms- Multi-label image classification, deep learning, loss functions, remote sensing

INTRODUCTION

Recent advances in remote sensing (RS) instruments have led to a significant growth of RS image archives. Accordingly, multi-label image scene classification (MLC), which aims at automatically assigning multiple class labels (i.e., multi-labels) to each RS image scene in an archive, has attracted great attention in RS.
In recent years, deep learning (DL) based methods have been introduced for MLC problems due to the high generalization capabilities of DL models (e.g., convolutional neural networks (CNNs) and recurrent neural networks (RNNs)). As an example, in [1] the conventional use of CNNs developed for single-label image classification is adapted to MLC. In this method, the sigmoid function is suggested for the MLC adaptation instead of the softmax function as the activation of the last CNN layer. In [2], a data augmentation strategy is proposed to employ a shallow CNN in the framework of MLC. This method aims to apply an end-to-end training of the shallow CNN, while avoiding the use of a pretrained network. In [3], a multi-attention driven approach is introduced for high-dimensional, high-spatial-resolution RS images. In this approach, a branch-wise CNN is jointly exploited with an RNN to characterize a global image descriptor based on the extraction and exploitation of importance scores of image local areas. All the existing approaches utilize the conventional combination of the sigmoid activation and cross-entropy loss functions to simultaneously learn multi-labels for each image in the framework of DL. The sigmoid activation function provides Bernoulli distributions and thus allows multiple class predictions. The cross-entropy loss function has strong foundations in information theory and its effectiveness has been widely proven. However, it is not fully suitable when: i) imbalanced training sets are present; and ii) there is a time constraint on the training phase of a DL based method. Since a loss function guides the whole learning procedure throughout the training, its proper selection is important for DL based MLC. Thus, in this paper, we present a study to analyze and compare different loss functions in the context of MLC and propose a scheme to guide the choice of loss functions based on a set of properties.
All the considered loss functions are analyzed for the first time in RS in terms of their: 1) overall accuracy; 2) class imbalance awareness; 3) convexity and differentiability; and 4) learning efficiency. BigEarthNet [4], which is a large-scale multi-label benchmark archive, is employed to validate our theoretical findings within experiments.

DEEP LEARNING LOSS FUNCTIONS FOR MULTI-LABEL IMAGE CLASSIFICATION

Let X = {x_1, ..., x_M} be an archive that consists of M images, where x_i is the i-th image in the archive. Each image in the archive is associated with one or more classes from a label set {l_1, ..., l_C}. Let y_{i,c} be a binary variable that indicates the presence or absence of the label l_c for the image x_i. Thus, the multi-labels of the image are given by the binary vector y_i = [y_{i,1}, ..., y_{i,C}]. An MLC task can be formulated as a function F(x_i) = g(f(x_i)) that maps the image x_i to multiple classes based on the function f(x_i) = p_i (which provides a classification score for each class in the label set) and the function g(·) (which defines the multi-labels of the image based on the probabilities). The learning process is performed by minimizing the empirical loss L(y, y*) = h(g(f(x_i)), y_i), which compares the multi-label predictions with the ground reference samples. For a comparative analysis, we consider seven DL loss functions: cross-entropy loss (CEL) [5]; focal loss (FL) [6]; weighted cross-entropy loss (W-CEL) [5]; Hamming loss (HAL) [7]; Huber loss (HL) [8]; ranking loss (RL) [9]; and sparseMax loss (SML) [10]. For the image x_i we define its class probabilities p_i as follows:

  p_i = ŷ, if y = 1;  1 - ŷ, otherwise   (1)

where ŷ is the output of the sigmoid activation function defined as δ(x) = 1/(1 + e^{-x}). The CEL is formulated as:

  CEL = -log(p_i).   (2)

For the CEL, easily classified images may significantly affect the value of the loss function and thus control the gradient, which limits the learning from hard images.
The FL adds a modulating factor to the CEL, shifting the objective from easy negatives to hard negatives by down-weighting the easily classified images as follows:

  FL = -(1 - p_i)^γ log(p_i)   (3)

where γ is a focusing parameter, which increases the importance of correcting wrongly classified examples. Another way to guide the learning procedure is to consider class weighting, which allows exploiting the importance of each class. The W-CEL is defined by setting a weighting vector inversely proportional to the class distribution. The HAL aims at reducing the fraction of wrongly predicted labels with respect to the total number of labels as follows:

  HAL = (1/C) Σ_{c=1}^{C} y_{i,c} ⊕ g(p_{i,c})   (4)

where ⊕ denotes the XOR logical operation. The HL consists of: i) a quadratic function for values in the target proximity; and ii) a linear function for larger values, as follows:

  HL = Σ_{c=1}^{C} { max(0, 1 - y_{i,c} z_{i,c})^2, if y_{i,c} z_{i,c} ≥ -1;  -4 y_{i,c} z_{i,c}, otherwise }   (5)

where z_{i,c} is the class score (i.e., logit) of the label c without applying any activation function. It is worth noting that to utilize the HL, the value of y_i is replaced by y_i ∈ {-1, +1}^C. The SML is coupled with the sparseMax activation function, which provides sparse distributions while holding a separation margin for classification. Its generalization to multi-label classification is defined as follows:

  SML = -y_i^T z_i + (1/2) Σ_{j∈S} (z_{i,j}^2 - τ^2(z_i)) + (1/2) ||y_i||^2   (6)

where τ is a thresholding function that defines which class scores will be further leveraged (denoted as S), while the remaining class scores are truncated to zero (for a detailed explanation, see [10]). The RL aims to provide an accurate ordering of class probabilities, and thus to assign higher probabilities to ground reference classes than to the others.
This is achieved with pairwise comparisons as follows:

  RL = Σ_{v∉y_i} Σ_{u∈y_i} max(0, α + z_{i,v} - z_{i,u})   (7)

where u indexes the ground reference class labels associated with the image x_i and v the remaining labels from the label set of the archive.

A COMPARATIVE ANALYSIS

We analyze and compare the above-mentioned loss functions in the framework of MLC based on their: 1) class imbalance awareness; 2) convexity and differentiability; and 3) learning efficiency. Our analysis of DL loss functions under these criteria aims at providing a guideline to select the most appropriate loss function for MLC applications.

Most operational RS applications include a degree of class imbalance, which is associated with the fact that classes are not equally represented in the archive. This is more evident in the case of MLC. When the number of images for a given class is not sufficient in the training set, characterization of this class can be more difficult compared to the others. This may lead to misclassification of images. To overcome this limitation, the modulating factor defined in (3) significantly down-weights the effect of well-classified images on the value of the loss function (e.g., when p_i → 1, the modulating factor shrinks towards 0). Since the FL focuses more on hard samples, minority classes can be better characterized. In addition to the FL, the W-CEL considers images with minority classes more than the vastly represented classes in the training set. This is due to the fact that the weighting vector applied to the loss function is inversely proportional to the class distribution.

The optimization problems of DL methods are generally non-convex, while convex properties exist in the trajectory of gradient minimizers [11]. The convexity of a DL loss function is an important property for an effective training procedure and better generalization capability. In addition to convexity, another factor that supports the optimization of a loss function is its differentiability.
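To make Eqs. (2), (3) and (7) concrete, the per-image computations can be sketched as follows; this is an illustrative NumPy sketch, not the paper's implementation, and the logits z and labels y are made-up values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cel_fl(z, y, gamma=2.0):
    """Per-label CEL (Eq. 2) and FL (Eq. 3) from logits z and labels y."""
    # Eq. (1): probability assigned to the ground-reference state of each label.
    p = np.where(y == 1, sigmoid(z), 1.0 - sigmoid(z))
    cel = -np.log(p)
    fl = ((1.0 - p) ** gamma) * cel  # modulating factor down-weights easy labels
    return cel, fl

def ranking_loss(z, y, alpha=1.0):
    """Pairwise RL (Eq. 7): hinge on each (present, absent) label pair."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    return sum(max(0.0, alpha + z[v] - z[u]) for u in pos for v in neg)

z = np.array([3.0, -2.0, 0.5])   # class scores (logits) for one image
y = np.array([1, 0, 1])          # ground-reference multi-labels
cel, fl = cel_fl(z, y)
```

For a confidently classified label (p close to 1), the factor (1 - p)^γ shrinks the FL contribution towards zero, which is exactly the down-weighting of easy examples described above; the RL term vanishes once every ground-reference score exceeds every other score by the margin α.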
It is worth noting that differentiability is not a sufficient condition for guaranteeing convergence to a global minimum. However, it is a required condition for providing a non-zero gradient back to the DL model during backpropagation. There are several strategies that allow the training of non-differentiable loss functions. However, these strategies may undesirably change the aim of the loss functions and introduce additional complexity. Among the considered loss functions, only the HAL and RL do not embrace convexity and differentiability. This is due to the fact that they are non-convex and discontinuous, and thus difficult to optimize directly.

The learning efficiency is another criterion, which we evaluate as the rate at which the iterative training procedure reaches a high MLC performance. With a more efficient learning procedure, similar MLC accuracies can be obtained with fewer iterations. Thus, a fast convergence reduces the total training time required to reach a high MLC performance. Accordingly, it is crucial for a DL loss function particularly when there is a time constraint on the training phase. In this work, we use the same optimization strategy for all loss functions, and thus do not assess the effect of optimizers on the learning efficiency.

EXPERIMENTAL RESULTS

Experiments have been carried out on the BigEarthNet [4] large-scale benchmark archive. We used the BigEarthNet-19 class nomenclature proposed in [12] instead of the original BigEarthNet classes. For a detailed explanation of the archive and the class nomenclature, the reader is referred to [4] and [12], respectively. For the experiments, we considered a standard CNN architecture in order not to lose generality.
To this end, the CNN architecture given in the first step of the classification approach proposed in [3] is used, with the difference being the number of units (1024) in the last two fully connected layers. We applied the same training procedure and hyperparameters to all considered loss functions for 80 epochs. The initial learning rate was selected as 10^-4 for the RMSprop optimizer. The performance of each loss function is provided in terms of precision (P), recall (R) and F1-Score. We did not apply early stopping with the validation set in order not to change the actual characteristics of the loss functions. We applied the Layer-wise Relevance Propagation (LRP) [13] technique to the RGB spectral bands of the images. This technique allows propagating the multi-label predictions backward through CNNs and providing heatmaps, which indicate the most informative areas in RS images for each class. The heatmaps provide an accurate way to explain the characteristics of the different loss functions. Low and high heatmap values are highlighted in blue and red tones, respectively.

To analyze the overall accuracy of the considered loss functions, Table 1 shows the overall multi-label classification performances. As one can see from Table 1, the CNNs trained with the HL and RL achieve the highest values of precision and recall, respectively. However, since the CNN trained with the HL provides a low recall, it does not lead to a high F1-Score. Similar to the HL, the CNN trained with the RL leads to a low F1-Score. Since the CNN trained with the SML achieves both high precision and recall, it leads to the highest F1-Score compared to the other loss functions.

To analyze the class imbalance and the convexity and differentiability criteria, Figure 1 shows two examples of the BigEarthNet images, their multi-labels and LRP heatmaps with the multi-label predictions of the considered loss functions. From Fig.
1.a, one can see the behavior of different loss functions when an image is associated with classes that are not equally represented in the archive. In detail, on the heatmap of the CEL, the semantic content associated with one of the well represented classes (Urban fabric) overwhelms the heatmap values. However, using the FL and W-CEL shows a more regular distribution of heatmap values. On the other hand, using the HAL and RL provides high values associated with most of the image regions on the heatmap of the Urban fabric class, while showing the highest values for the semantic content associated with the Industrial or commercial units class.

In Fig. 1.b, one can see that convex loss functions provide a more accurate distribution of heatmap values in terms of the correlation between the semantic content of the image and the heatmap values. Loss functions that hold convexity and differentiability have more reliable heatmap values. However, applying a weighting factor to a relatively smooth loss function such as the CEL introduces significant uncertainty in the heatmap values of W-CEL. In contrast to W-CEL, the modulating factor of the FL provides more regular values for the same regions. The RL and HAL show an irregular profile of predictions, having both high and low heatmap values associated with the same regions of the image. Although the LRP heatmaps are given for two examples, similar behavior is also observed for other images in BigEarthNet.

To compare the learning efficiency of the considered loss functions, Figure 2 shows the overall F1-Scores on the validation set at different epochs of the training phase. As one can see from Figure 2, the CNNs trained with the SML and RL lead to considerably better F1-Scores from the initial epochs compared to the other loss functions.

CONCLUSION

This paper analyzes and compares different loss functions in the framework of MLC problems in RS. In particular, we have presented advantages and limitations of different DL loss functions in terms of their: 1) overall accuracy; 2) class imbalance awareness; 3) convexity and differentiability; and 4) learning efficiency. In Table 2, a comparison of the considered loss functions is given on the basis of our experimental and theoretical analysis. In greater detail, experimental results show that the highest overall accuracy is achieved when the SML is utilized as a loss function. The FL and W-CEL can be more convenient to utilize as loss functions when imbalanced training sets are present. For MLC applications that require a training phase with convex and differentiable loss functions, the HAL and the RL are less suitable. The SML and RL can be more convenient when a lower computational time is preferred for the training phase of a DL based MLC method. This study shows that for MLC problems in RS, DL loss functions should be chosen according to the needs of the considered problem. As a future work, we plan to further analyze the differences of the MLC loss functions by visualizing their 3D trajectories under different network architectures.

ACKNOWLEDGEMENTS

This work is funded by the European Research Council (ERC) through the ERC-2017-STG BigEarth Project under Grant 759764.

Fig. 1: An example of the BigEarthNet images, their multi-labels and LRP heatmaps with the multi-label predictions of the considered loss functions. LRP heatmaps are given for the classes of a) Urban Fabric; and b) Coniferous Forest.

Fig. 2: F1-Scores over the validation set obtained by considering different loss functions during the different epochs of training.

Table 1: Overall Precision, Recall and F1-Score obtained using the CEL, FL, W-CEL, HAL, HL, RL and SML.

Metric   CEL   FL    W-CEL  HAL   HL    RL    SML
P (%)    75.2  72.2  76.2   75.9  76.6  58.0  70.7
R (%)    58.2  57.8  64.5   60.4  59.9  76.5  74.4
F1 (%)   62.3  61.1  66.2   64.2  64.1  62.9  69.9

Table 2: Comparison of the considered MLC loss functions. Different marks are provided: "H" (High), "M" (Medium), "L" (Low) or "NA" (Not Applied).

Loss Function  Overall Accuracy  Class Imbalance Awareness  Convexity and Differentiability  Learning Efficiency
CEL [5]        L                 L                          M                                M
FL [6]         L                 H                          M                                L
W-CEL [5]      M                 H                          M                                L
HAL [7]        M                 L                          NA                               M
HL [8]         M                 L                          H                                M
RL [9]         L                 L                          NA                               H
SML [10]       H                 M                          M                                H

REFERENCES

[1] I. Shendryk, Y. Rist, R. Lucas, P. Thorburn, and C. Ticehurst, "Deep learning - a new approach for multi-label scene classification in planetscope and sentinel-2 imagery," in IEEE Intl. Geosci. Remote Sens. Symp., 2018, pp. 1116-1119.
[2] R. Stivaktakis, G. Tsagkatakis, and P. Tsakalides, "Deep learning for multilabel land cover scene categorization using data augmentation," IEEE Geosci. Remote Sens. Lett., vol. 16, no. 7, pp. 1031-1035, 2019.
[3] G. Sumbul and B. Demir, "A deep multi-attention driven approach for multi-label remote sensing image classification," IEEE Access, vol. 8, pp. 95934-95946, 2020.
[4] G. Sumbul, M. Charfuelan, B. Demir, and V. Markl, "BigEarthNet: A large-scale benchmark archive for remote sensing image understanding," in IEEE Intl. Geosci. Remote Sens. Symp., 2019, pp. 5901-5904.
[5] G. Hinton, P. Dayan, B. Frey, and R. Neal, "The 'wake-sleep' algorithm for unsupervised neural networks," Science, vol. 268, no. 5214, pp. 1158-1161, 1995.
[6] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, "Focal loss for dense object detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318-327, 2020.
[7] E. Frank and M. Hall, "A simple approach to ordinal classification," in European Conf. Machine Learning, 2001, pp. 145-156.
[8] P. J. Huber, "Robust estimation of a location parameter," Ann. Math. Statist., vol. 35, no. 1, pp. 73-101, 1964.
[9] Y. Li, Y. Song, and J. Luo, "Improving pairwise ranking for multi-label image classification," in IEEE Conf. Comput. Vis. Pattern Recog., 2017, pp. 3617-3625.
[10] A. F. T. Martins and R. F. Astudillo, "From softmax to sparsemax: A sparse model of attention and multi-label classification," in Intl. Conf. Mach. Learn., 2016, pp. 1614-1623.
[11] I. J. Goodfellow, O. Vinyals, and A. M. Saxe, "Qualitatively characterizing neural network optimization problems," in Intl. Conf. Learn. Represent., 2015.
[12] G. Sumbul, J. Kang, T. Kreuziger, F. Marcelino, H. Costa, P. Benevides, M. Caetano, and B. Demir, "BigEarthNet dataset with a new class-nomenclature for remote sensing image understanding," 2020. [Online]. Available: arXiv:2001.06372.
[13] M. Alber, S. Lapuschkin, P. Seegerer, M. Hägele, K. T. Schütt, G. Montavon, W. Samek, K. R. Müller, S. Dähne, and P.-J. Kindermans, "iNNvestigate neural networks!," J. Mach. Learn. Res., vol. 20, no. 93, pp. 1-8, 2019.
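For reference, the precision, recall and F1-Score reported in Table 1 follow the standard multi-label definitions; a minimal sketch computing micro-averaged metrics from binary label matrices (the averaging convention is an assumption — the text does not specify which averaging the paper uses):

```python
import numpy as np

def micro_prf(y_true, y_pred):
    """Micro-averaged precision, recall and F1 over a binary multi-label matrix.

    y_true, y_pred: (num_samples, num_classes) arrays of 0/1 labels.
    """
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# toy example: 2 images, 3 classes
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])
P, R, F1 = micro_prf(y_true, y_pred)
print(P, R, F1)
```

Here the single missed label and the single spurious label give P = R = F1 = 2/3.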
Universality for Random Matrices

Simona Diaconu
Traces of large powers of real-valued Wigner matrices are known to have Gaussian fluctuations: for $A = \frac{1}{\sqrt{n}}(a_{ij})_{1\leq i,j\leq n} \in \mathbb{R}^{n\times n}$, $A = A^T$ with $(a_{ij})_{1\leq i\leq j\leq n}$ i.i.d., symmetric, subgaussian, $E[a_{11}^2] = 1$, and $p = o(n^{2/3})$, as $n, p \to \infty$. This work shows the entries of $A^{2p}$, properly scaled, also have normal limiting laws when $n \to \infty$, $p = n^{o(1)}$: some normalizations depend on $E[a_{11}^4]$, contributions that become negligible as $p \to \infty$, whereas the behavior of the diagonal entries of $A^{2p+1}$ depends substantially on all the moments of $a_{11}$ when $p$ is bounded or the moments of $a_{11}$ grow relatively fast compared to it. This result demonstrates large powers of Wigner matrices are roughly Wigner matrices with normal entries, providing another perspective on eigenvector universality, which until now has been justified exclusively using local laws. The last part of this paper finds the first-order terms of traces of Wishart matrices in the random matrix theory regime, rendering yet another connection between Wigner and Wishart ensembles as well as an avenue to extend the results herein for the former to the latter. The primary tools employed are the method of moments and a simple identity the Catalan numbers satisfy.
arXiv:2305.04687
8 May 2023

Introduction

Random matrices are employed in a plethora of disciplines, including statistics, physics, genetics, and computer science. Two of the most commonly studied families are Wigner and Wishart ensembles: the former are named after Eugene Wigner, who in 1955 proposed them as a model for the organization of heavy nuclei ([39]), while the latter were introduced by John Wishart in 1928, his main motivation being multivariate populations, i.e., observations or feature measurements ([40]). Initially the entries of these matrices were assumed to be normally distributed (real- or complex-valued), and there has been a great interest in understanding the sensitivity of these results to their laws, in particular, how much Gaussianity could be weakened. Mehta [27] conjectured in 1991 the asymptotic behavior of large random matrices to be universal: for each family of ensembles, it should depend on few moments of the entry distributions. Ever since, considerable progress has been made in this direction, primarily with three tools, (i) orthogonal polynomials (bulk eigenvalues), (ii) method of moments (edge eigenvalues), and (iii) local laws (most of the eigenspectrum and its corresponding eigenvectors): a work that does not fit within these categories is Johansson's [23], where the author deals with matrices having a Gaussian component and computes eigenvalue densities exactly. Inasmuch as the results of this paper rely on the method of moments and are concerned primarily with eigenvectors, the reader is referred to [8], [13], [14], [28] for (i), while (ii) and (iii) are looked at next in more detail in the context of Wigner matrices, after which Wishart ensembles are discussed.

Consider first solely real-valued Wigner matrices, i.e., $A = \frac{1}{\sqrt{n}}(a_{ij})_{1\leq i,j\leq n} \in \mathbb{R}^{n\times n}$, $A = A^T$ with $(a_{ij})_{1\leq i\leq j\leq n}$ i.i.d. random variables (results for their complex-valued counterparts exist as well), and denote by $\lambda_1(M) \geq \lambda_2(M) \geq \dots \geq \lambda_n(M)$ the eigenvalues of a symmetric matrix $M \in \mathbb{R}^{n\times n}$, with corresponding eigenvectors $(u_i)_{1\leq i\leq n}$, chosen to form an orthonormal basis of $\mathbb{R}^n$ if need be, $u_{ik} = (u_i)_k$ for $1 \leq i, k \leq n$.

The second technique (ii), combinatorial in nature, is based on the seminal work of Sinai and Soshnikov [32], where the authors computed the order of $E[\mathrm{tr}(A^p)]$ for $p = o(n^{1/2})$ under the assumption that the distribution of $a_{11}$ is symmetric and subgaussian (i.e., there exists $C > 0$ such that $E[|a_{11}|^k] \leq (Ck)^{k/2}$ for all $k \in \mathbb{N}$). When $E[a_{11}^2] = 1$, $p = o(n^{1/2})$, as $n, p \to \infty$,

$$E[\mathrm{tr}(A^p)] = \begin{cases} 0, & p = 2s-1, \ s \in \mathbb{N}, \\ \frac{4^s n}{\sqrt{\pi}\, s^{3/2}}\,(1 + o(1)), & p = 2s, \ s \in \mathbb{N}. \end{cases} \tag{1.1}$$

Results of this flavor transfer to Wishart matrices $\frac{1}{N}XX^T$, $X \in \mathbb{R}^{n\times N}$ with i.i.d. entries ($\lim_{n\to\infty} \frac{n}{N} = \gamma$), by a symmetrization trick: employ $H = \begin{pmatrix} 0 & X \\ X^T & 0 \end{pmatrix}$ instead of $A$. Especially, eigenvalue universality for such Wishart matrices with $\gamma \neq 1$ has been justified exclusively through local laws: the primary reason underlying this discrepancy between the two families is related to combinatorics. The key in the method of moments, when applied to a random matrix $A \in \mathbb{R}^{n\times n}$, is computing $E[\mathrm{tr}(A^p)]$ for $p \in \mathbb{N}$ growing with $n$. For Wigner matrices, $p = \lfloor t n^{2/3} \rfloor$ for $t > 0$ is needed; these traces can be understood using the structure of cycles of length $p$:

$$E[\mathrm{tr}(A^p)] = \sum_{(i_0, i_1, \dots, i_{p-1})} E[a_{i_0 i_1} a_{i_1 i_2} \cdots a_{i_{p-1} i_0}], \tag{1.5}$$

and due to the i.i.d. behavior, the expectations of such products are easy to compute as they factor into moments of $a_{11}$. On the other hand, when $A = \frac{1}{N} XX^T$ for $X \in \mathbb{R}^{n\times N}$ with i.i.d. entries, the contributions of individual cycles in (1.5) are more cumbersome. When opening the parentheses in

$$E[\mathrm{tr}(A^p)] = \sum_{(i_0, i_1, \dots, i_{p-1})} E\Big[\Big(\sum_{1\leq k\leq N} x_{i_0 k} x_{i_1 k}\Big)\Big(\sum_{1\leq k\leq N} x_{i_1 k} x_{i_2 k}\Big)\cdots\Big(\sum_{1\leq k\leq N} x_{i_{p-1} k} x_{i_0 k}\Big)\Big], \tag{1.6}$$

many products are created, the graphs underlying them being no longer cycles, and there is an additional asymmetry due to having two dimensions to deal with, $n, N$.

Historically, traces as in (1.6) were handled in two contexts, almost sure limits of the edge eigenspectrum of $A$ and edge universality. Assuming $\lim_{n\to\infty} \frac{n}{N} = \gamma \in (0,\infty)$, the first almost sure limits were obtained from bounds of the type (1.7) on $E[\mathrm{tr}(A_n^{k_n})]$, for $k_n \in \mathbb{N}$ growing slightly faster than $\log n$, and $A_n := \frac{1}{N}XX^T$. Subsequently, Bai and Yin [3] proved $\lambda_1(A) \xrightarrow{a.s.} (1 - \sqrt{\gamma})^2$ when $\gamma \in (0,1)$ (for $\gamma > 1$, the same result holds but for $\lambda_{n-N+1}(A)$: this is due to the duality between $XX^T$ and $X^TX$) by justifying the analog of (1.7) for a family of matrices $T(l)$, whose linear combinations yield, roughly speaking, the powers of $A - (1+\gamma)I$. The authors of the aforementioned works considered the directed graphs underlying (1.6), which could be visualized as two (parallel) lines $L_1, L_2$, each containing $p$ points, with $2p$ segments between them. Soshnikov [34] led another analysis of such traces: he found their asymptotic behavior for $N = n + o(n^{1/3})$ and $p = o(n^{2/3})$, $p = \lfloor t n^{2/3} \rfloor$, by an entirely different approach than the one just described, based on a comparison between traces of powers of $A$ and those of a Wigner matrix (the latter ranges of $p$ demonstrate universality at the right edge of $A$). At a high level, $N$ and $n$ must be roughly equal in [34] for the transition to a Wigner matrix to be tight: in such ensembles, a key feature of the cycles underlying (1.5) is all the vertices belonging to the same set, $\{1, 2, \dots, n\}$, whereas in the graphs underlying (1.6), there are two types, elements of $\{1, 2, \dots, n\}$ and $\{1, 2, \dots, N\}$, respectively. This suggests a close connection between these families of random matrices when the two sets are roughly equal, whereas it is not a priori clear what occurs when this condition is violated.
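The symmetrization trick mentioned above works because the nonzero eigenvalues of $H$ are exactly the singular values of $X$ together with their negatives; a quick numerical check (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 6
X = rng.standard_normal((n, N))

# symmetrized matrix H = [[0, X], [X^T, 0]]
H = np.block([[np.zeros((n, n)), X],
              [X.T, np.zeros((N, N))]])

eig = np.sort(np.linalg.eigvalsh(H))
sv = np.linalg.svd(X, compute_uv=False)  # n singular values of X

# spectrum of H: {±sigma_i} plus |N - n| zeros
expected = np.sort(np.concatenate([sv, -sv, np.zeros(N - n)]))
print(np.max(np.abs(eig - expected)))
```

Thus spectral information about $\frac{1}{N}XX^T$ can be read off a symmetric matrix, to which Wigner-type arguments apply.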
This paper relies on the technique introduced by Sinai and Soshnikov in [32], used for universality results ([34], [22]) as well as upper bounds on eigenvalues ([1], [4]), and on a simple observation from [17] that allows computing the leading terms in analogs of (1.6), originally employed for deriving the asymptotic law of the largest eigenvalue of Wigner matrices with entries at the boundary between light- and heavy-tailed regimes. Concretely, this work contains three types of results: CLTs in the spirit of (1.2), asymptotic Haar behavior of the matrix consisting of eigenvectors of a Wigner ensemble, and trace expectations for Wishart matrices, which translate into CLTs, revealing a novel connection between Wishart and Wigner ensembles and providing a means to extend the results herein to the former (despite this not being pursued here). Loosely speaking, the eigenvector behavior comes forth in this case through traces: it is yet another product of the methodology from [32], originally devised for the latter and up to this point exploited primarily for eigenspectra.

Main Results

The first step towards comprehending eigenvectors of Wigner matrices is comprised of the following convergences, closely connected to (1.2).

$$2C_{p-1} + c_1 \cdot 2^{p/2} p \cdot \sqrt{\delta} \leq c(p, 1+\delta) \leq 2C_{p-1} + c_2 \cdot 2^{p/2} p \cdot (1 + \sqrt{\delta})$$

for $c_1, c_2 > 0$ universal, and all $p \in \mathbb{N}$, $\delta \geq 0$. Specifically, $c_2(p, 1+\delta) = 2A_1(p) + 2^{p-6}(p+2)(p+4)\chi_{2|p} \cdot (\delta - 1)$, where $A_1(p)$ is given by (3.16).
This implies that when p is bounded, the behavior is not universal (it depends on all moments of a 11 : this is illustrated also by the simple case p = 1) and turns asymptotically Gaussian when the moments a 11 do not grow too fast relative to it, lim n→∞ (log p − 4 log E[a 2l 11 χ |a11|≤n δ ] 3l ) = ∞, (2.1) for some δ ∈ ( 2 8+ǫ0 , 1 4 ) and all l ∈ N (see end of subsection 3.3) suffice (e.g., p → ∞ and E[a 2l 11 ] ≤ C(l)). The moment condition in (W1) is a weakening of subgaussianity, the assumption behind (1.2) and comes at the cost of a lower range for p due to the trade-off between the range of p and tails of a 11 : the larger the former, the more first-order terms in the quantities of interest (see discussion at the beginning of section 3). The function c appears when computing the variance of e T i A p e i − 1 n E[tr(A p )] (once it is expressed as a sum over cycles of length p), whose leading components are identified with a simple property underlying the Catalan numbers ((3.9) and the description preceding it): an interesting feature of this result is the fourth moment appearing in the normalization. This statistic is known to separate the heavy-and light-tailed regimes: when the law of a 11 is regularly varying with index α ∈ (0, 4) (in particular, E[|a 11 | α+ǫ ] = ∞ for all ǫ > 0), its (right) edge eigenvalues, properly normalized (roughly by n 2/α , not √ n), follow a Poisson point process and their corresponding eigenvectors are completely delocalized (equidistributed on two entries: see [35], [1]), staying in stark contrast with the case E[a 4 11 ] < ∞. 
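The "simple property underlying the Catalan numbers" invoked here is the identity (3.9) appearing later in the paper: summing products $C_{l_1}\cdots C_{l_k}$ over all compositions $l_1+\dots+l_k = l-k$ with $l_i \geq 0$ recovers $C_l$. It is easy to confirm by brute force:

```python
from math import comb
from itertools import product

def catalan(m):
    return comb(2 * m, m) // (m + 1)

for l in range(1, 8):
    total = 0
    for k in range(1, l + 1):
        # sum of C_{l1}...C_{lk} over l1 + ... + lk = l - k, li >= 0
        for parts in product(range(l - k + 1), repeat=k):
            if sum(parts) == l - k:
                prod = 1
                for li in parts:
                    prod *= catalan(li)
                total += prod
    assert total == catalan(l)
print("identity (3.9) verified for l = 1..7")
```

The identity reflects the decomposition of dominant even cycles into vertex-disjoint sub-cycles glued at a common base point.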
A consequence of $a_{11} \stackrel{d}{=} -a_{11}$ is $E[(\tilde{A}_p)_{ij} \cdot (\tilde{A}_p)_{i'j'}] = 0$ when $i < j$, $i' < j'$, $\{i,j\} \neq \{i',j'\}$, where

$$\tilde{A}_p = \sqrt{\frac{n}{C_p - C_{p/2}^2 \chi_{2|p}}}\Big( A^p - \frac{1}{n} E[\mathrm{tr}(A^p)] I \Big),$$

thereby suggesting

$$\tilde{A}_p \approx A^G \tag{2.2}$$

when $2|p$, for $A^G = (a_{ij}^G)_{1\leq i,j\leq n}$ a symmetric matrix with $(a_{ij}^G)_{1\leq i\leq j\leq n}$ independent, normally distributed, centered, and

$$E[(a_{ii}^G)^2] = \frac{1}{2} = \lim_{p\to\infty} \frac{2C_{p-1}}{C_p - C_{p/2}^2 \chi_{2|p}}, \qquad E[(a_{ij}^G)^2] = 1, \ i \neq j.$$

To make (2.2) rigorous, a distance between the metrics induced by $\tilde{A}_p$ and $A^G$, respectively, must be considered as their dimensions increase with $n$. Let $m = \frac{n(n+1)}{2}$, $\mathbb{R}^m = \{(a_{ij})_{1\leq i\leq j\leq n},\ a_{ij} \in \mathbb{R}\}$, denote by $\mu_{n,p}$ the law of $X = ((\tilde{A}_p)_{ij})_{1\leq i\leq j\leq n}$, i.e., $\int_{\mathbb{R}^m} f \, d\mu_{n,p} = E[f(X)]$ when $f: \mathbb{R}^m \to \mathbb{R}_{\geq 0}$ is Borel measurable, and $\nu_{n,p}$ a multivariate Gaussian distribution of dimension $m$, whose entries $(x_{ij})_{1\leq i\leq j\leq n}$ have diagonal covariance with

$$E[x_{ii}^2] = \frac{c_2(p, E[a_{11}^4])}{C_p}, \qquad E[x_{ij}^2] = 1, \ i \neq j.$$

Additionally, let $\mu_n$ be the joint law of $(a_{ij})_{1\leq i\leq j\leq n}$ and $\nu_n$ that of $(a_{ij}^G)_{1\leq i\leq j\leq n}$, with $(a_{ij}^G)_{1\leq i\leq j\leq n}$ independent centered Gaussians, $E[(a_{ii}^G)^2] = \frac{1}{2}$, $E[(a_{ij}^G)^2] = 1$ when $i < j$. Recall the distance giving weak convergence:

$$d(\eta_1, \eta_2) = \sup_{\|f\|_\infty = 1,\ f \in C(\mathbb{R}^m)} \Big| \int f \, d\eta_1 - \int f \, d\eta_2 \Big|,$$

where $\|f\|_\infty = \sup_{x \in \mathbb{R}^m} |f(x)|$, and $C(\mathbb{R}^m)$ is the set of continuous functions $f: \mathbb{R}^m \to \mathbb{R}$. A rigorous interpretation of $\mu_{n,p} \approx \nu_{n,p}$ would be $d(\mu_{n,p}, \nu_{n,p}) \xrightarrow{n\to\infty} 0$. The following result is a weaker version of this ideal convergence.

Theorem 2. Under the assumptions of Theorem 1, there exist universal constants $c_0, c_1, c_2 \in (0,1)$ such that for $p \in \mathbb{N}$, $p \leq c_0 (\frac{\log n}{\epsilon_0})^{1/7}$,

$$d_n(\mu_{n,2p}, \nu_{n,2p}) := \sup_{f \in R_{n,p}} \Big| \int f \, d\mu_{n,2p} - \int f \, d\nu_{n,2p} \Big| \leq c_1^m + 2C(\epsilon_0) n^{-\epsilon_0/8}, \tag{2.3}$$

where $R_{n,p} = \{f \in C(\mathbb{R}^m):\ \mathrm{supp}(f) \subset B_m(0, c_2\sqrt{mp}),\ \|f\|_\infty \leq 1\}$.
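The normalization in $\tilde{A}_p$ can be sanity-checked by simulation: for $p = 2$ one has $C_p - C_{p/2}^2 = C_2 - C_1^2 = 1$, so off-diagonal entries of $A^2$, scaled by $\sqrt{n}$, should have variance close to 1. A Monte Carlo sketch with Gaussian entries (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 500
vals = []
for _ in range(trials):
    M = rng.standard_normal((n, n))
    A = (np.triu(M) + np.triu(M, 1).T) / np.sqrt(n)  # a_ij i.i.d. N(0,1), i <= j
    vals.append(np.sqrt(n) * (A[0] @ A[:, 1]))       # sqrt(n) * (A^2)_{01}
v = np.var(vals)
print(v)  # close to 1
```

For this entry distribution the scaled variance is in fact exactly $1$ for every $n$, so only sampling noise separates the estimate from its limit.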
Since the compactly supported elements of $C_b(\mathbb{R}^m) = C(\mathbb{R}^m) \cap \{f: \mathbb{R}^m \to \mathbb{R},\ \|f\|_\infty < \infty\}$, a metric space equipped with the $L^\infty$ norm, are dense in it, Theorem 2 restricts the sizes of the domains of the functions underlying the distance $d_n$. Despite this condition being likely suboptimal, it already renders nontrivial results: by the law of large numbers, $\nu_{n,p}$ concentrates around the boundary of the ball centered at the origin of radius $\sqrt{m}$, suggesting most functions of interest in $C(\mathbb{R}^n)$ with compact support are captured by Theorem 2. A consequence of this result is stated next.

Corollary 1. For $(f_l)_{l\in\mathbb{N}} \subset C(\mathbb{R}^m)$ with $\lim_{l\to\infty} \|f - f_l\|_\infty = 0$, $\|f\|_\infty \leq 1$, $f_l \geq 0$ for $l \in \mathbb{N}$, and $p \geq \frac{16}{c_2^2}$,

$$\Big| \int f \, d\mu_{n,2p} - \int f \, d\nu_{n,2p} \Big| \leq 2c_1^m + 4C(\epsilon_0) n^{-\epsilon_0/8} + \frac{c_3 m}{p^2}.$$

Because for any $\lambda \in \mathbb{R}$, $A$ and $A^p - \lambda I$ share their eigenvectors, a notion of asymptotically Haar distributed for them ensues from the two results above.

Theorem 3. Let $R_n$ be the set of functions $f: \mathrm{Sym}(n) := \{M \in \mathbb{R}^{n\times n},\ M = M^T\} \to \mathbb{R}$ with $\|f\|_\infty \leq 1$, the restriction $f|_{\mathrm{Sym}_d(n)}$ continuous, where $\mathrm{Sym}_d(n) := \{M \in \mathbb{R}^{n\times n},\ M = M^T,\ \lambda_1(M) > \dots > \lambda_n(M)\}$, given by $f(M) = h(u_1, u_2, \dots, u_n)$, for $u_1, u_2, \dots, u_n$ unit eigenvectors of $M$, $h: (S^{n-1})^n \to \mathbb{R}$ even in each of its components, and Borel measurable. Under the assumptions in Theorem 1, $A \in \mathrm{Sym}_d(n)$, and

$$\sup_{f \in R_n} \Big| \int f \, d\mu_n - \int f \, d\nu_n \Big| \to 0$$

with high probability as $n \to \infty$.

Theorem 3 offers a new perspective on universality: eigenvectors of a Wigner matrix $A \in \mathbb{R}^{n\times n}$ behave like their counterparts for $A^G = (a_{ij}^G)_{1\leq i,j\leq n}$, and up to an extent, $\tilde{A}_p \approx A^G$ is a central limit theorem.
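The Haar-like behavior described by Theorem 3 can be probed numerically: entries of Haar-distributed eigenvectors, rescaled by $\sqrt{n}$, are approximately standard Gaussian, so their empirical fourth moment should be near 3. A rough check on a Wigner matrix with Gaussian entries (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
M = rng.standard_normal((n, n))
A = (np.triu(M) + np.triu(M, 1).T) / np.sqrt(n)

_, U = np.linalg.eigh(A)           # columns are orthonormal eigenvectors
entries = np.sqrt(n) * U.ravel()   # rescaled entries of all eigenvectors
m2 = np.mean(entries**2)           # exactly 1 by orthonormality
m4 = np.mean(entries**4)           # approx 3 (Gaussian fourth moment)
print(m2, m4)
```

For an exactly Haar-distributed eigenbasis the fourth moment equals $3n/(n+2)$, which tends to $3$; the second moment is $1$ identically, since each column has unit norm.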
Although this result is justified by the same combinatorial device as (1.2), requiring growth conditions on all moments (out of which subgaussianity is the most common), the current context permits a relaxation of such constraints, primarily because eigenvectors are considerably more invariant than eigenvalues: properties of the former for $A^p - \lambda I$ stream down to $A$, whereas the latter are evidently highly dependent on $p$. Furthermore, edge eigenvalue universality using the method of moments relies on the asymptotic behavior of $E[\mathrm{tr}(A^p)]$ for $p \approx t n^{2/3}$ and all $t > 0$ being universal: this forces the tails of $a_{11}$ to be light (else the trace expectations might depend on all of its moments). Regarding tail conditions, an $(8+\epsilon_0)$th moment is required chiefly for a truncation: a fourth is necessary as discussed above, and comparatively, $E[a_{11}^{2k}] \leq C(k)$ for all $k \in \mathbb{N}$ is usually assumed when employing local laws. The symmetry condition is mainly technical and might be dispensable, despite this relaxation not being pursued here (several works employ the method of moments and deal with distributions that are only centered: e.g., [22], [15], [16]; it also must be pointed out symmetry plays no role in local laws). For the sake of completeness, Haar measures on the orthogonal group $O(n) := \{O \in \mathbb{R}^{n\times n},\ OO^T = I\}$ and their connection with Gaussian distributions are discussed next: this is meant to shed light into why matrices with normal entries are easier to analyze than generic ones, even when it comes to eigenvectors. One definition of the normalized Haar measure on $O(n)$ is the distribution of $W_1$, where $Z \in \mathbb{R}^{n\times n}$ has i.i.d. centered standard normal entries, and $Z = W_1 D W_2^T$ is an SVD decomposition. Several parameterizations of $O(n)$ have been discovered: for instance, if $O \in O(n)$, $\det(O) = 1$, then $O = \exp(T)$ and $O = (I - S)(I + S)^{-1}$ (the latter is called Cayley's transform) for some $T, S$ skew-symmetric.
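The Cayley parameterization is easy to verify numerically: for any skew-symmetric $S$, the matrix $(I - S)(I + S)^{-1}$ is orthogonal with determinant $1$ (note $I + S$ is always invertible, since the eigenvalues of $S$ are purely imaginary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
T = rng.standard_normal((n, n))
S = T - T.T                          # skew-symmetric: S^T = -S
I = np.eye(n)
O = (I - S) @ np.linalg.inv(I + S)   # Cayley transform
print(np.linalg.det(O))              # +1 (proper rotation)
```

Orthogonality follows because $(I - S)$ and $(I + S)^{-1}$ commute, and $O^T = (I - S)^{-1}(I + S)$; the determinant is $\det(I - S)/\det(I + S) = 1$ since $\det(I + S) = \det(I + S)^T = \det(I - S)$.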
Another construction, generalized Eulerian angles, was introduced by Raffenetti and Ruedenberg [29]; it is a recurrent procedure in which the elements of $O(n)$ are decomposed into products of $\frac{n(n-1)}{2}$ factors, each depending solely on one parameter: the building blocks are of the form $a_{pq}(\alpha) \in \mathbb{R}^{n\times n}$, $\alpha \in [0, 2\pi]$,

$$(a_{pq}(\alpha))_{ij} = \begin{cases} 1, & i = j,\ i \notin \{p, q\} \\ \cos\alpha, & i = j,\ i \in \{p, q\} \\ \sin\alpha, & i = p,\ j = q \\ -\sin\alpha, & i = q,\ j = p \\ 0, & \text{else} \end{cases}$$

for $1 \leq p < q \leq n$; notwithstanding the inherent elegance of this decomposition, neither obtaining these parameters for a given matrix nor defining a measure on them leading to left or right invariance on $O(n)$, the primary feature of Haar measures, is clear (see [24]). A myriad of definitions can be nevertheless concocted for the Haar measures on $O(n)$ using multivariate Gaussian distributions, whose equivalence up to multiplication by a scalar, guaranteed by the uniqueness of such objects up to constant factors, is not evident. One recipe, (2.4), relies on choosing a well-behaved function $F$, where $Z \in \mathbb{R}^{n\times n}$ has i.i.d. centered standard normal entries, $dZ := \prod_{1\leq i,j\leq n} dz_{ij}$, $F: \mathrm{Sym}(n) \to [0,\infty)$ with $F(M) = F(VMV^T)$ for all $V \in O(n)$, $M \in \mathrm{Sym}(n)$, and $S_\alpha := \{M \in \mathbb{R}^{n\times n}:\ \exists M_0 \in S,\ F((M - M_0)^T(M - M_0)) \leq \alpha\}$ for some $\alpha > 0$ (note the resulting measure is both left and right invariant under multiplication by elements of $O(n)$). Concentration and a careful selection of $F$ ensure most random matrices $Z \in \mathbb{R}^{n\times n}$ belong to $(O(n))_{\alpha_n}$ for a deterministic $\alpha_n$, producing a probability measure from the right-hand side of (2.4) upon scaling. Consider a slightly different approach: in (2.4) let

$$S_\alpha = \{X \in \mathbb{R}^{n\times n}:\ \forall i \in \{1, 2, \dots, n\},\ \exists M_i \in S,\ \|X_i - M_i\| \leq \alpha\}, \tag{2.5}$$

where $Y_i$ is the $i$th row of $Y \in \mathbb{R}^{n\times n}$. This is right invariant because $(YV)_i = V^T Y_i$. This definition can be employed to derive delocalization properties of the eigenvectors of Wigner matrices: loosely speaking, one expects orthogonal matrices to be close to random matrices of the type $\frac{1}{\sqrt{n}}Z$ (by the law of large numbers), entailing any $S \subset O(n)$ with positive measure has elements that are small perturbations of some $\frac{1}{\sqrt{n}}Z$, whose largest entries are of order $\sqrt{\frac{\log n}{n}}$. Subsection 4.4 presents in further detail how this representation renders, for instance,

$$P\Big( U \in O(n):\ \max_{1\leq i,j\leq n} |u_{ij}| \geq t \cdot \sqrt{\frac{\log n}{n}} \Big) \leq n^2 \big(1 - \exp(-cn^{1-ct^2})\big), \tag{2.6}$$

where $c > 0$ is universal, as well as (1.3) and (1.4) (this discussion is included for completeness, primarily because no reference with explicit justifications for Gaussian ensembles seems available).

The last part of this paper is concerned with Wishart ensembles. Let $X = (x_{ij})_{1\leq i\leq n,\ 1\leq j\leq N}$ have i.i.d. entries with $E[x_{11}] = 0$, $E[x_{11}^2] = 1$, and $A = \frac{1}{N}XX^T$. Silverstein [31] extended the results of the seminal paper [25] by Marchenko and Pastur and showed the almost sure weak convergence of the empirical spectral distribution of $A = \frac{1}{N}XX^T$ to a probability distribution as $n, N \to \infty$ with $\frac{n}{N} \to \gamma \in (0,\infty)$: for $F_n(x) = \frac{1}{N}\sum_{1\leq i\leq N} \chi_{x \geq \lambda_i(A)}$, $F_n \xrightarrow{a.s.} F_\gamma$, where $F_\gamma$ is the cdf of the probability distribution with density $f_\gamma(x)dx$, and

$$f_\gamma(x) = \begin{cases} \frac{\sqrt{(b(\gamma)-x)(x-a(\gamma))}}{2\pi\gamma x}, & x \in [a(\gamma), b(\gamma)] \\ 0, & x \notin [a(\gamma), b(\gamma)] \end{cases}$$

for $a(\gamma) = (1-\sqrt{\gamma})^2$, $b(\gamma) = (1+\sqrt{\gamma})^2$. Lemma 3.1 in [2] gives its $k$th moment:

$$\beta(k, \gamma) := \int_{a(\gamma)}^{b(\gamma)} x^k f_\gamma(x)\,dx = \sum_{0\leq r\leq k-1} \frac{1}{r+1} \binom{k}{r} \binom{k-1}{r} \gamma^r. \tag{2.7}$$

An elementary result unveils the recurrence satisfied by these moments, similar in spirit to the one satisfied by the Catalan numbers (corresponding to $\gamma = 1$).

Lemma 1. For $k \in \mathbb{N}$, $k \geq 2$, and $\gamma > 0$,

$$\beta(k, \gamma) = (1+\gamma)\beta(k-1, \gamma) + \gamma \sum_{1\leq a\leq k-2} \beta(a, \gamma) \cdot \beta(k-a-1, \gamma). \tag{2.8}$$
Lastly, the analog of (1.1) for Wishart matrices is comprised below.

Theorem 4. Let $X = (x_{ij})_{1\leq i\leq n,\ 1\leq j\leq N}$ have i.i.d. entries with $x_{11} \stackrel{d}{=} -x_{11}$, subgaussian, $E[x_{11}^2] = 1$, $A = \frac{1}{N}XX^T$, and $\lim_{n\to\infty} \frac{n}{N} = \gamma \in (0,\infty)$. Then for $p \in \mathbb{N}$, $p = o(n^{1/2})$, $\gamma_n = \frac{n}{N}$,

$$E[\mathrm{tr}(A^p)] = n\beta(p, \gamma_n) \cdot \big(1 + O(\tfrac{p^2}{n})\big). \tag{2.9}$$

With (2.9) under the belt, a rationale analogous to the one employed by Sinai and Soshnikov [32] to infer (1.2) from (1.1) can be used to derive a trace CLT for $\mathrm{tr}(A^p)$. As previously mentioned, extending the results above for Wigner matrices to Wishart ensembles seems feasible, despite this extension not being pursued here; furthermore, the arguments below can be utilized for complex-valued matrices as well (the structure of the sets of dominant cycles, $C(l)$, makes this transparent). The remainder of the paper is organized as follows: Theorem 1, Theorem 2 with its two consequences (Corollary 1, Theorem 3), and Theorem 4 are presented in sections 3, 4, and 5, respectively.

Entry CLTs

This section consists of the proof of Theorem 1: as mentioned in the introduction, the primary ingredients behind it are the counting device introduced by Sinai and Soshnikov [32], and the dominant cycles underlying the traces below, looked at in detail in [17]. The crux of the former is summarized later in this section; the observation from [17], employed here, is:

$$E[\mathrm{tr}(B^{2q})] = n\,(E[b_{11}^2])^q\, C_q \cdot (1 + o(1)),$$

for a random matrix $B = (b_{ij})_{1\leq i,j\leq n} \in \mathbb{R}^{n\times n}$, $B = B^T$. Suppose $p \in \mathbb{N}$ and $B = (b_{ij})_{1\leq i,j\leq n} \in \mathbb{R}^{n\times n}$, $B = B^T$, $(b_{ij})_{1\leq i\leq j\leq n}$ i.i.d., $b_{11} \stackrel{d}{=} -b_{11}$, $E[b_{11}^2] \leq 1$, $E[b_{11}^{2l}] \leq L(n)n^{\delta(2l-4)}$ for $2 \leq l \leq p$, $\delta > 0$, and $L: \mathbb{N} \to [1,\infty)$, $L(n) < n^{2\delta}$. Then for $\delta_1 = \frac{1}{4} - \delta$,

$$E[\mathrm{tr}(B^{2p})] \leq C_p n^{p+1} + C_p n^{p+1} L(n) \cdot Cp^2 n^{-2\delta_1} \tag{3.1}$$

when $p \leq n^{\delta_1}$, $n \geq n(\delta)$. Let $\delta = \frac{2}{8+\epsilon_0} + \frac{\epsilon_0}{8(8+\epsilon_0)} \in (\frac{2}{8+\epsilon_0}, \frac{1}{4})$, and $A_s = \frac{1}{\sqrt{n}}(a_{ij}\chi_{|a_{ij}| \leq n^\delta})$.
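Lemma 1 can be verified directly against the closed form (2.7); a small script checking the recurrence (2.8) for several values of $k$ and $\gamma$:

```python
from math import comb

def beta(k, g):
    # Marchenko-Pastur moments, formula (2.7)
    return sum(comb(k, r) * comb(k - 1, r) * g**r / (r + 1) for r in range(k))

for g in (0.25, 1.0, 2.5):
    for k in range(2, 10):
        rhs = (1 + g) * beta(k - 1, g) + g * sum(
            beta(a, g) * beta(k - a - 1, g) for a in range(1, k - 1))
        assert abs(beta(k, g) - rhs) < 1e-9 * max(1.0, beta(k, g))
print("recurrence (2.8) verified")
```

At $\gamma = 1$ the moments reduce to the Catalan numbers ($\beta(3,1) = 5 = C_3$), and (2.8) collapses to the familiar Catalan recurrence.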
A union bound entails $A = A_s$ with high probability, since

$$P(A \neq A_s) = P\big(\max_{1\leq i\leq j\leq n} |a_{ij}| > n^\delta\big) \leq n^2 \cdot P(|a_{11}| > n^\delta) \leq n^2 \cdot \frac{C(\epsilon_0)}{n^{\delta(8+\epsilon_0)}} = C(\epsilon_0)n^{-\epsilon_0/8} = o(1). \tag{3.2}$$

Note (3.1) holds for $\sqrt{n}A_s$: its entries are i.i.d., symmetric, and for $l \in \mathbb{N}$, $l \geq 2$,

$$E[a_{11}^2 \chi_{|a_{11}|\leq n^\delta}] \in [1 - n^{-2\delta}, 1], \qquad E[a_{11}^{2l} \chi_{|a_{11}|\leq n^\delta}] \leq C(\epsilon_0) n^{\delta(2l-4)}: \tag{3.3}$$

when $2 \leq l \leq 4 + \epsilon_0/2$, the last claim follows from Hölder's inequality,

$$E[a_{11}^{2l}] \leq \big(E[|a_{11}|^{8+\epsilon_0}]\big)^{\frac{2l-2}{6+\epsilon_0}} \cdot \big(E[a_{11}^2]\big)^{\frac{8+\epsilon_0-2l}{6+\epsilon_0}} \leq C(\epsilon_0),$$

and for $l > 4 + \epsilon_0/2$,

$$E[a_{11}^{2l} \chi_{|a_{11}|\leq n^\delta}] \leq E[a_{11}^{8+\epsilon_0} \cdot n^{\delta(2l-8-\epsilon_0)} \chi_{|a_{11}|\leq n^\delta}] \leq C(\epsilon_0) n^{\delta(2l-8-\epsilon_0)}. \tag{3.4}$$

Since Slutsky's lemma and the Carleman condition entail that

$$\lim_{n\to\infty} \Big(\frac{n}{c(2p, E[a_{11}^4])}\Big)^{l/2} E\big[(e_i^T A_s^{2p} e_i - \tfrac{1}{n}E[\mathrm{tr}(A_s^{2p})])^l\big] = \begin{cases} 0, & l = 2l_0 - 1,\ l_0 \in \mathbb{N} \\ (l-1)!!, & l = 2l_0,\ l_0 \in \mathbb{N} \end{cases} \tag{3.5}$$

and

$$\lim_{n\to\infty} \frac{n^{l/2}}{(C_p - C_{p/2}^2 \chi_{2|p})^{l/2}} E\big[(e_i^T A_s^p e_j)^l\big] = \begin{cases} 0, & l = 2l_0 - 1,\ l_0 \in \mathbb{N} \\ (l-1)!!, & l = 2l_0,\ l_0 \in \mathbb{N} \end{cases} \tag{3.6}$$

suffice to deduce (3) (see Lemmas B.1, B.2 in [2]), by a slight abuse of notation, $A = A_s$ in what follows: in particular, $P(|a_{11}| \leq n^\delta) = 1$. The goal of this section is justifying (3.5) and (3.6); before proceeding, a summary of the technique developed in [32], upon which these convergences rely, is in order. Specifically, its outcome is a change of summation in

$$E[\mathrm{tr}(B^q)] = \sum_{(i_0, i_1, \dots, i_{q-1})} E[b_{i_0 i_1} b_{i_1 i_2} \cdots b_{i_{q-1} i_0}] := \sum_{\mathbf{i} := (i_0, i_1, \dots, i_{q-1}, i_0)} E[b_{\mathbf{i}}], \tag{3.7}$$

from cycles $\mathbf{i} := (i_0, i_1, \dots, i_{q-1}, i_0)$ to tuples of nonnegative integers $(n_1, n_2, \dots, n_q)$, employed to infer the main contributors in (3.7). Recall terminology and notation from [32], necessary in what is to come. Interpret $\mathbf{i} := (i_0, i_1, \dots, i_{q-1}, i_0)$ as a directed cycle with vertices among $\{1, 2, \dots, n\}$, call $(i_{k-1}, i_k)$ its $k$th edge for $1 \leq k \leq q$, where $i_q := i_0$, and say $\mathbf{i}$ is an even cycle if each undirected edge appears an even number of times in it (otherwise $\mathbf{i}$ is odd).
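The first-order trace behavior that drives these computations, $E[\mathrm{tr}(A^{2q})] \approx n C_q$ for a Wigner matrix normalized as above with unit-variance entries, is already visible at moderate $n$; a Monte Carlo sketch for $q = 2$ (so $C_2 = 2$; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 200, 50
acc = 0.0
for _ in range(trials):
    M = rng.standard_normal((n, n))
    A = (np.triu(M) + np.triu(M, 1).T) / np.sqrt(n)
    A2 = A @ A
    acc += np.sum(A2 * A2)   # tr(A^4), using symmetry of A^2
est = acc / (trials * n)
print(est)  # approx C_2 = 2
```

The $O(1/n)$ corrections (and the fluctuations of the trace, which are only $O(1)$ per sample) keep the estimate within a few percent of the Catalan-number limit.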
By convention, for $u, v \in \{1, 2, \dots, n\}$, $(u, v)$ denotes a directed edge from $u$ to $v$, whereas $uv$ is undirected (the former are the building blocks of the cycles underlying the trace in (3.7), while the latter determine their expectations): consequently $uv = vu$; additionally, denote by $m(uv)$ its multiplicity in $\mathbf{i}$: i.e., $m(uv) = |\{1 \leq r \leq 2q,\ i_{r-1} i_r = uv\}|$. Since the entries of $A$ are symmetric, solely even cycles have non-vanishing contributions in (3.7) (such trace expectations have also been analyzed when the entries do not have symmetric distributions: e.g., [22], [15], [16]). The desired change of summation is achieved by mapping even cycles $\mathbf{i}$ to tuples of nonnegative integers $(n_1, n_2, \dots, n_q)$ and bounding the sizes of the preimages of this transformation as well as the expectations of their elements. For $\mathbf{i}$, call an edge $(i_k, i_{k+1})$ and its right endpoint $i_{k+1}$ marked if an even number of copies of $i_k i_{k+1}$ precedes it: i.e., $|\{t \in \mathbb{Z}:\ 0 \leq t \leq k-1,\ i_t i_{t+1} = i_k i_{k+1}\}| \in 2\mathbb{Z}$. Each even cycle $\mathbf{i}$ has $q/2$ marked edges, and any vertex $j \in \{1, 2, \dots, n\}$ of $\mathbf{i}$, apart perhaps from $i_0$, is marked at least once (the first edge of $\mathbf{i}$ containing $j$ is of the form $(i, j)$ because $i_0 \neq j$, and no earlier edge is adjacent to $j$). For $0 \leq k \leq q$, denote by $N_{\mathbf{i}}(k)$ the set of $j \in \{1, 2, \dots, n\}$ marked exactly $k$ times in $\mathbf{i}$, with $n_k := |N_{\mathbf{i}}(k)|$. Then

$$\sum_{0\leq k\leq q} n_k = n, \qquad \sum_{1\leq k\leq q} k n_k = q/2. \tag{3.8}$$

Having constructed $(n_1, n_2, \dots, n_q)$, it remains to bound the number of cycles mapped to a given tuple and their expectations, a task steps 1-5 summarized below undertake (see subsection 2.1 in [17] for details). Fix $(n_1, n_2, \dots, n_p)$, and let $\mathbf{i}$ be an even cycle mapped to it by the procedure just described.

Step 1. Map $\mathbf{i}$ to a Dyck path $(s_1, s_2, \dots, s_{2p})$, where $s_k = +1$ if $(i_{k-1}, i_k)$ is marked, and $s_k = -1$ if $(i_{k-1}, i_k)$ is unmarked (there are $C_p = \frac{1}{p+1}\binom{2p}{p}$ such paths).

Step 2.
Once the positions of the marked edges in i are chosen (i.e., a Dyck path), establish the order of their marked vertices. Step 3. Select the vertices appearing in i, V (i) := ∪ 0≤k≤2p−1 {i k }, one at a time, by reading the edges of i in order, starting at (i 0 , i 1 ). Step 4. Choose the remaining vertices of i from V (i), by reading anew the edges of i in order, beginning at (i 0 , i 1 ) (step 3 only established the first appearance of each element of V (i) in i). Solely the right ends of the unmarked edges are yet to be decided: (i 0 , i 1 ) is fixed as i 0 , i 1 have already been chosen (i 1 is marked); by induction, any subsequent edge has its left end fixed, and therefore only its right end must be selected. This yields marked edges are fully labeled: step 2 determines their positions, while step 3 appoints their right endpoints. Step 5. Bound the expectation generated by i. Regarding the recursion referred to earlier, this concerns the even cycles giving the first-order term in (3.7): keeping the notation from [17], let C(l) be the set of pairwise non-isomorphic even cycles of length 2l, with n 1 = l, and the first vertex unmarked (call two cycles i, j of length q isomorphic if i s = i t ⇐⇒ j s = j t for all 0 ≤ s, t ≤ q). This collection of sets has two key properties: 1. |C(l)| = C l , 2. a recursive description: C(l + 1) consists of three pairwise disjoint families, (i) i = (v 0 , v 1 , ... , v 2l−1 , v 0 ) ∈ C(l) with a loop at v 0 : (v 0 , u, v 0 , v 1 , ... , v 2l−1 , v 0 ) and u new (i.e. , not among the vertices of i); (ii) i = (v 0 , v 1 , ... , v 2l−1 , v 0 ) ∈ C(l) with a loop at v 1 : (v 0 , v 1 , u, v 1 , v 2 , ... , v 2l−1 , v 0 ) and u new; (iii) (u 0 , u 1 , u 2 , S 1 , u 2 , u 1 , S 2 ) with (u 2 , S 1 , u 2 ) ∈ C(a), (u 0 , u 1 , S 2 ) ∈ C(l−a) , no vertex appearing in both, u 0 , u 1 , u 2 pairwise distinct, and 1 ≤ a ≤ l − 1. One straightforward consequence of 1. 
is essential when justifying Theorem 1: the elements of C(l) are unions of cycles of the form (i 0 , L, i 0 ), where L ∈ ∪ q≤l−1 C(q), any two sharing no vertex but i 0 , and i 0 ∉ L. These elements belong to C(l) as long as their lengths add up to 2l (in light of the definition of C(l)), and their number is

1≤k≤l l1+...+l k =l−k,li≥0 C l1 C l2 ...C l k = C l : (3.9)

this follows from C l+1 = 0≤k≤l C k C l−k , and induction on l after rewriting the left-hand side term as

1≤k≤l l1+...+l k =l−k,li≥0 C l1 C l2 ...C l k = C l−1 + 0≤l1≤l−2 C l1 2≤k≤l−l1 l2+...+l k =l−k−l1 C l2 ...C l k ,

the first term corresponding to k = 1. Property 2. is critical in the proof of Theorem 4 (subsection 5.2) since it allows counting the dominant cycles in (1.6) by induction on p. In the rest of this section, • 3.1 presents the proof of (3.5) for l ≤ 2; • 3.2 introduces lemmas on which computing high moments relies; • 3.3 completes the justification of (3.5) by considering l > 2; • 3.4 argues (3.6).

Diagonal Entries

The goal is (3.5) for l ≤ 2 : the parity of the power in this case is irrelevant. Suppose without loss of generality i = 1 : l = 1 is clear by symmetry. Let l = 2 : similarly to Sinai and Soshnikov [32], the key is gluing i, j into an even cycle P of length 2p − 2. This is done by using the first common undirected edge e and cutting two of its copies: specifically, let

E[(e T 1 A p e 1 − 1 n E[tr(A p )]) 2 ] = n −p (i,j)∈S(p,1) (E[a i · a j ] − E[a i ] · E[a j ]), (3.10)

and take e to be the first common undirected edge: i t−1 i t = j s−1 j s with t, s minimal in this order (i.e., t = min {1 ≤ k ≤ p, ∃1 ≤ q ≤ p, i k−1 i k = j q−1 j q }, s = min {1 ≤ q ≤ p, j q−1 j q = i t−1 i t }). P is obtained by fusing both cycles along this common edge, which is erased: it traverses i up to i t−1 i t , uses it as a bridge to switch to j, traverses all of it, and gets back to the rest of i upon returning to j s−1 j s = i t−1 i t : if (i t−1 , i t ) = (j s−1 , j s ), then P := (i 0 , ... , i t−1 , j s−2 , ... , j 0 , j p−1 , ... , j s , i t+1 , ...
, i p ); else (i t−1 , i t ) = (j s , j s−1 ), and P := (i 0 , ... , i t−1 , j s+1 , ... , j p−1 , j 0 , ... , j s−1 , i t+1 , ... , i p ). P is an even cycle of length 2 · p − 2 = 2p − 2, and e has endpoints whose indices differ by p − 1 in P. By convention, all graphs G in what follows are directed, and e = uv ∈ G is a shorthand for (u, v) ∈ G. Take first the leading components among the contributors in (3.10), cycles of length 2p−2, and determine their preimages under the mapping described above: since the dominant terms (i.e., yielding the largest contribution in expectation) among even cycles of length 2l are the elements of C(l), this remains true here as well (the cycles ρ resulting from these merges have one vertex fixed, ρ 0 = 1) with the same rationale as for the trace going through to show these dominate; the main difference is that now there is an additional factor of n −1 in step 3, accounting for the set vertex 1, which is no longer to be chosen. Consider the preimage of ρ ∈ C(2p − 2) with ρ 0 = 1. Suppose ρ is the union of k cycles, with corresponding vertices i 1 , i 2 , ... , i k and loops attached to them of lengths 2l 1 , 2l 2 , ... , 2l k , called L 1 , L 2 , ... , L k (see description above (3.9)); continue denoting by i 0 the first vertex of ρ (i 0 = 1) for the sake of simplicity. The condition on the lengths is i≤k l i = p − 1 − k. Denote by ρ r the vertex with e := ρ r ρ r+p−1 the first shared edge between i and j, where 0 ≤ r ≤ p − 1. Case 1: ρ r = i 0 . Let 1 ≤ q ≤ k, ρ r+1 = i q ; the only condition is (2l 1 + 2)... + (2l q−1 + 2) ≤ p − 1 (the first q − 1 loops are contained in i) , and 0 ≤ t ≤ k − q contributes 2 to the number of preimages if and only if (2l q + 2) + ... + (2l q+t−1 + 2) ≤ p − 1 for t > 0, t = 0 always producing 2 (once r is chosen, i is fully determined, whereas for j, its first vertex, an appearance of i 0 , and its orientation are yet to be chosen; t tracks the former, and the factor 2 accounts for the latter). 
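The combinatorial identity (3.9), used to enumerate the elements of C(l) as concatenations of loops, can be checked by brute force for small l; the following is an illustrative sketch (function names are hypothetical, not the paper's code).

```python
from math import comb

def catalan(n):
    # n-th Catalan number C_n
    return comb(2 * n, n) // (n + 1)

def compositions(total, parts):
    # all tuples of `parts` nonnegative integers summing to `total`
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def lhs_of_39(l):
    # sum over 1 <= k <= l and l1+...+lk = l-k of C_{l1} * ... * C_{lk}
    total = 0
    for k in range(1, l + 1):
        for comp in compositions(l - k, k):
            term = 1
            for li in comp:
                term *= catalan(li)
            total += term
    return total

# identity (3.9): the left-hand side equals C_l
for l in range(1, 9):
    assert lhs_of_39(l) == catalan(l)
```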
Case 2: ρ r ≠ i 0 . Suppose ρ r ∈ L q for 1 ≤ q ≤ k. Since i 0 ∉ L q , and there is t ∈ [0, p − 1] with i 0 = ρ r+t (the first vertex of j), let t be minimal with this property; then ρ r+t−1 = i q . Because ρ r ρ r+p−1 is the first shared edge between i and j, r = 1 + i<q (2l i + 2) (i.e., ρ r is the first appearance of i q in ρ): otherwise i 0 i q is a shared edge with its right endpoint i q preceding ρ r . This entails the (t + q) th appearance of i 0 for 0 ≤ t ≤ k − q contributes 2 to the number of preimages if and only if (2l q + 1) + ... + (2l q+t + 1) + 1 ≤ p − 1 for t > 0, while for t = 0 if and only if 2l q + 1 ≤ p − 1. The remaining condition is r = 1 + i<q (2l i + 2) ≤ p − 1. Given the preimages above, consider their contributions, E[a i · a j ] − E[a i ] · E[a j ]; the first term is either E[a 4 11 ] or 1, depending on whether e ∈ ρ or e ∉ ρ. Take first Case 1 : (i) e ∉ ρ : then E[a i ] · E[a j ] = 0, E[a i · a j ] = 1 (e has multiplicity 1 in both i, j). (ii) e ∈ ρ : this entails ρ r+p−1 = i w since these are the only edges of ρ adjacent to i 0 , and E[a i ] = E[a j ] = 1, E[a i · a j ] = E[a 4 11 ]. Similarly, in Case 2, the contribution is 1 : it cannot be E[a 4 11 ] − 1 since the only edge that could appear more than twice in i ∪ j is i 0 i q , entailing ρ r+p−1 = i 0 ; then ρ r−1 ρ r = i 0 i q is a shared edge, contradicting the minimality of r in i.
Therefore, the first-order term in the variance is 2( A 1 + (E[a 4 11 ] − 2)A 2 ), where A 1 (p) = 1≤k≤p−1,1≤q≤k,0≤t≤k−q, i≤k li=p−1−k χ l1+...+lq−1≤ p+1 2 −q · χ lq+...+lq+t−1≤ p−1 2 −t + (3.11) + 2≤k≤p−1,1≤q≤k,0≤t≤k−q, i≤k li=p−1−k χ l1+...+lq−1≤ p 2 −q · χ lq+...+lq+t≤ p−t−χ t>0 2 −1 := A 1,1 (p) + A 1,2 (p), A 2 (p) = 2 i≤k 1 li=p−2k1,2 i≤k 2 l k 1 +i =p−2k2−2 k 1 (k 2 + 1),(3.12) the last identity being justified as follows: each tuple k 1 , k 2 , (l i ) 1≤i≤k1+k2 with the stated properties generates a configuration of length 2p − 2 as above (because 1≤i≤k1+k2 l i = p − k 1 − k 2 − 1) and it remains to choose the first vertices of ρ and j : there are k 1 possibilities for the former and k 2 + 1 for the latter. Look next at each sum separately, beginning with the last as it is the easiest. Induction on m + n yields a m,n := |{(x 1 , x 2 , ... , x m ) ∈ Z m ≥0 , x 1 + x 2 + ... + x m = n}| = m + n − 1 m − 1 : (3.13) (ã m+1,n =ã m,n +ã m+1,n−1 because x 1 = 0 or x 1 ≥ 1), whereby for q ≥ 1, 1≤k≤q, i≤k l k =q−k k = 1≤k≤q k · q − 1 k − 1 = = 1≤k≤q [ q − 1 k − 1 + (q − 1) q − 2 k − 2 ] = 2 q−1 + 2 q−2 (q − 1) = 2 q−2 (q + 1), 1≤k≤q, i≤k l k =q−k (k + 1) = 2 q−2 (q + 1) + 1≤k≤q q − 1 k − 1 = 2 q−2 (q + 3). This entails A 2 (p) = 2 p/2−2 ( p 2 + 1) · 2 p/2−1−2 ( p − 2 2 + 3) · χ 2|p = 2 p−7 (p + 2)(p + 4)χ 2|p . Continue with A 1,1 (p) in (3.11). Denote by a = l 1 + ... + l q−1 , b = l q + ... + l q+t−1 ; employing (3.13), the first sum in (3.11) is (k,q,t,a,b)∈T (p) a + q − 2 q − 2 b + t − 1 t − 1 p − a − b − q − t − 1 k − q − t , with the convention n −1 = 1, n ∈ Z, where T (p) is the set of ordered integer tuples (k, q, t, a, b) satisfying 1 ≤ k ≤ p − 1, 1 ≤ q ≤ min (k, p + 1 2 ), 0 ≤ t ≤ min (k − q, p − 1 2 ), 0 ≤ a ≤ min ( p + 1 2 − q, p − 1 − k), 0 ≤ b ≤ min ( p − 1 2 − t, p − 1 − k − a). (3.14) Since in the current situation n −1 = 1, not zero, these edge cases must be separated from the rest: they are given by q = 1 or t = 0. 
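The closed form A 2 (p) = 2 p−7 (p + 2)(p + 4)χ 2|p can be verified numerically from (3.12) via the stars-and-bars count (3.13); a short sketch with hypothetical helper names (not the paper's code):

```python
from fractions import Fraction
from itertools import product
from math import comb

def tilde_a(m, n):
    # (3.13): number of m-tuples of nonnegative integers summing to n
    return comb(m + n - 1, m - 1)

# small brute-force check of (3.13)
for m in range(1, 5):
    for n in range(0, 6):
        brute = sum(1 for xs in product(range(n + 1), repeat=m) if sum(xs) == n)
        assert brute == tilde_a(m, n)

def A2(p):
    # (3.12) for even p, evaluated through the two weighted sums above:
    # sum_k k * tilde_a(k, q1-k) and sum_k (k+1) * tilde_a(k, q2-k)
    q1, q2 = p // 2, p // 2 - 1
    s1 = sum(k * tilde_a(k, q1 - k) for k in range(1, q1 + 1))
    s2 = sum((k + 1) * tilde_a(k, q2 - k) for k in range(1, q2 + 1))
    return s1 * s2

# closed form 2^{p-7}(p+2)(p+4) (exact rational arithmetic for small p)
for p in range(4, 21, 2):
    assert A2(p) == Fraction(2) ** (p - 7) * (p + 2) * (p + 4)
```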
If q = 1, t = 0, then the contribution is 1, yielding C p−1 (each element of C(2p − 2) contributes 1); if q = 1, t ≥ 1, then k ≥ 2 (i.e., i 0 has multiplicity at least 2 in ρ), and the total is

(k,1,t,a,b)∈T (p),t≥1 b + t − 1 t − 1 p − b − t − 2 k − t − 1 ≤ 2≤k≤p−1 ( p − 1 2 + 1) p − 3 k − 2 = (p + 1)2 p−4 ,

employing b + t ≤ (p−1)/2 and 0≤i≤p n i m p − i = n + m p ; t = 0, q ≥ 2 produces k ≥ 2 and

(k,q,0,a,b)∈T (p),q≥2 a + q − 2 q − 2 p − a − q − 1 k − q ≤ 2≤k≤p−1 ( p + 1 2 + 1) p − 3 k − 2 = (p + 3)2 p−4 .

Finally, by summing over a + q ≤ (p+1)/2 , b + t ≤ (p−1)/2 , the remainder of A 1,1 (p) is

(k,q,t,a,b)∈T (p),q≥2,t≥1 a + q − 2 q − 2 b + t − 1 t − 1 p − a − b − q − t − 1 k − q − t ≤ k≥3 (p + 3)(p + 1) 4 p − 4 k − 3 = 2 p−6 (p+3)(p+1);

putting these together yields C p−1 ≤ A 1,1 (p) ≤ C p−1 + 2 p−6 (p 2 + 12p + 19). Similarly, by looking separately at q = 1, t = 0 and q = 1, t ≥ 1 for p ≥ 2,

A 1,2 (p) = 0≤l1≤min ( p 2 −1,p−1−k),2≤k≤p−1 p − l 1 − 3 k − 2 + 1≤t≤k−2,2≤k≤p−1,0≤a≤min ( p−t−1 2 ,p−1−k), a + t t p − a − t − 3 k − t − 2 + + (k,q,t,a,b)∈T ′ (p),q≥2,t≥0 a + q − 2 q − 2 b + t t p − a − b − q − t − 2 k − q − t − 1 ≤ 2 p−4 p + p 2≤k≤p−1 p − 3 k − 2 + ( p 2 + 1)p 3≤k≤p−1 p − 4 k − 3 = 2 p−5 (p 2 + 8p),

where T ′ (p) is the set of ordered integer tuples (k, q, t, a, b) satisfying

1 ≤ k ≤ p − 1, 2 ≤ q ≤ min (k, p 2 ), 0 ≤ t ≤ min (k − q, p 2 ), 0 ≤ a ≤ min ( p 2 − q, p − 1 − k), 0 ≤ b ≤ min ( p − t 2 − 1, p − 1 − k − a). (3.15)

This concludes the analysis of l = 2 since the merges of length 2p − 2 that do not belong to C(p − 1) are negligible (use Lemma 4: this corresponds to the second term in the upper bound, and since in this situation max L ′ V (X, L ′ 02 ) = 1 as there are only two paths, these cycles yield overall O(p 4 n −2δ1 ), after normalization) and the upper bound is tight when the merges are in C(p − 1) (recall E[a 2 11 χ |a11|≤n δ ] ∈ [1 − n −2δ , 1]); besides, it completes the justification of the claims made in Theorem 1 on c(p, E[a 4 11 ]).
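The binomial manipulations above rely on Vandermonde's identity and on row sums of Pascal's triangle; a quick numerical confirmation (illustrative only):

```python
from math import comb

# Vandermonde's identity, used above to collapse the sums over k:
# sum_i comb(n, i) * comb(m, p - i) = comb(n + m, p)
for n in range(0, 8):
    for m in range(0, 8):
        for p in range(0, n + m + 1):
            assert sum(comb(n, i) * comb(m, p - i) for i in range(p + 1)) == comb(n + m, p)

# the row sum behind the 2^{p-4} and 2^{p-5} bounds:
# sum_{2 <= k <= p-1} comb(p-3, k-2) = 2^{p-3}
for p in range(4, 16):
    assert sum(comb(p - 3, k - 2) for k in range(2, p)) == 2 ** (p - 3)
```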
In light of the above, a more explicit form for A 1 (p) than (3.11) is

A 1 (p) = C p−1 + (k,1,t,a,b)∈T (p),t≥1 b + t − 1 t − 1 p − b − t − 2 k − t − 1 + (k,q,0,a,b)∈T (p),q≥2 a + q − 2 q − 2 p − a − q − 1 k − q + + (k,q,t,a,b)∈T (p),q≥2,t≥1 a + q − 2 q − 2 b + t − 1 t − 1 p − a − b − q − t − 1 k − q − t + + 0≤l1≤min ( p 2 −1,p−1−k),2≤k≤p−1 p − l 1 − 3 k − 2 + 1≤t≤k−2,2≤k≤p−1,0≤a≤min ( p−t−1 2 ,p−1−k), a + t t p − a − t − 3 k − t − 2 + + (k,q,t,a,b)∈T ′ (p),q≥2,t≥0 a + q − 2 q − 2 b + t t p − a − b − q − t − 2 k − q − t − 1 , (3.16)

where T (p), T ′ (p) are given by (3.14) and (3.15), respectively.

Patched Paths

Consider l > 2 in (3.5): out of cycles i 1 , i 2 , ... , i l of length p, construct a simple undirected graph G with i a i b ∈ E(G) if and only if i a , i b share an edge. The contributors to

E[(e T i A p e i − 1 n E[tr(A p )]) l ] = n −pl/2 (i1,i2, ... ,i l ) E[(a i1 − E[a i1 ]) · (a i2 − E[a i2 ]) · ... · (a i l − E[a i l ])] (3.17)

are tuples for which all connected components of G have size at least two (else the expectation vanishes by independence). The key observation in [32] is that solely graphs containing l/2 components are first-order terms in (3.17): each must be of size 2, and this generates (l − 1)!!, the number of partitions of {1, 2, ... , l} into l/2 pairs. Given the additional factor of n 1/2 in the normalization ((1.2) does not contain any power of n), the gluing from [32] is not tight enough: Lemma 2 below treats a more general case that allows controlling the moments of both e T i A p e i and e T i A p e j . There are several reasons for considering a wider class of configurations than those underlying (3.17): the moments of e T i A p e j are also needed, induction is more amenable in this enlarged universe than it is in the original one, and this generalization can be adapted to handle mixed moments of off-diagonal entries, i.e., E[e T i1 A p e j1 · e T i2 A p e j2 · ... 
· e T iL A p e jL ] with i k ≠ j k for all 1 ≤ k ≤ L, relevant in forthcoming sections (this is the content of Lemma 3, the tool behind many of the expectations to come). Another extension, Lemma 4, addresses the aforementioned situation when some entries are diagonal: this latter scenario differs from the former primarily because diagonal entries can be not centered (E[e T i A 2p+1 e i ] = 0, whereas E[e T i A 2p e i ] = 1 n E[tr(A 2p )]). Before proceeding with these results, additional terminology and notation are necessary. For i 1 , i 2 , ... , i L , j 1 , j 2 , ... , j L ∈ {1, 2, ... , n}, call the tuple of undirected edges (i r j r ) 1≤r≤L even if |{1 ≤ k ≤ L : i k = v}| + |{1 ≤ k ≤ L : j k = v}| ∈ 2Z for all 1 ≤ v ≤ n (i.e., v appears an even number of times among i 1 , i 2 , ... , i L , j 1 , j 2 , ... , j L ). The contributors to the forthcoming expectations of interest are even tuples, and to understand which dominate, another combinatorial object is needed. For l 1 , l 2 , ... , l L ∈ N and (i k j k ) 1≤k≤L an even tuple, let

M(((i k j k , l k )) 1≤k≤L ) = { {L0,L1, ... ,Lt} 1≤r≤t D(Q(L r )), if max 1≤r≤L l r > 1; 1, if l 1 = l 2 = ... = l L = 1 }, (3.18)

where the summation is over sets {L 0 , L 1 , L 2 , ... , L t } with the property that for each 1 ≤ q ≤ t, (i) L q = (k q1 , k q2 , ... , k qTq ), k q1 = min 1≤r≤Tq k qr , k qr ∈ {1, 2, ... , L} for 1 ≤ r ≤ T q , (ii) i kqr j kqr = v qr v q(r+1) (as undirected edges) for 1 ≤ r ≤ T q , and some v q1 , v q2 , ... , v qTq ∈ {1, 2, ... , n}, pairwise distinct with v q(Tq +1) := v q1 , (iii) Q(L q ) = ({(l kq1 + ... + l k q(r−1) , v qr ), 1 ≤ r ≤ T q }, 1≤r≤Tq l kqr ) (the sum is 0 when r = 1), with D(S, q) the number of Dyck paths of length 2q to which i ∈ C(q) with i s = v s for (s, v s ) ∈ S are mapped in step 1 below (3.8) (S ⊂ {0, 1, ... , 2q − 1} × {1, 2, ...
, n}), and (iv) L 0 = {1 ≤ k ≤ L : i k = j k } ∪ (∪ 1≤k≤L S(i k j k )), where S(uv) is the set of the smallest 2 · l(uv) elements of S ′ (uv) = {1 ≤ k ≤ L : i k j k = uv, l k = 1}, l(uv) := ⌊ |S ′ (uv)| 2 ⌋, (v) for each k ∈ {1, 2, ... , L} − L 0 , there exist unique s, r with 1 ≤ s ≤ t, 1 ≤ r ≤ T s , k = k sr . More succinctly, in the non-degenerate scenario min 1≤k≤L l k > 1, M counts the configurations of Dyck paths for a collection of cycles belonging to ∪ q≤1 C(q), each formed by patching paths with endpoints i k , j k and of length l k for 1 ≤ k ≤ L : the summands in it are the weights of the leading terms in n −( 1≤k≤L l k −L ′ )/2 (i k ) 1≤k≤L ∈P E[ 1≤k≤L a i k ], where L ′ = |{1 ≤ k ≤ L : u k = v k }|, and P consists of tuples of paths ((u k , i k1 , ... , i k(l k −1) , v k )) 1≤k≤L := (i k ) 1≤k≤L , i kh ∈ {1, 2, ... , n} (this is the content of Lemma 3; although it consists solely of an upper bound, it can be easily shown these are tight when M(((i k j k , l k )) 1≤k≤L ) > 1 : see end of subsection 4.1; note all such expectations vanish unless the original tuple is even due to symmetry). These weights arise in step 1 of the counting procedure leading to the change of summation in (3 .7): what justifies the definition of M is the description above identity (3.9), entailing any loop in an element of C(l) belongs to ∪ 1≤k≤l C(k) (if (i 0 , i 1 , ... , i 2l−1 , i 0 ) ∈ C(l) and i a = i b for a < b, then (i a , i a+1 , ... , i b ) ∈ C( b−a 2 ) : this occurs because C(l) is invariant under shifts in Z/2lZ, (i 0 , i 1 , ... , i 2l−1 , i 0 ) ∈ C(l) ⇔ (i a , i a+1 , ... , i 2l−1 , i 0 , i 1 , ... , i a−1 , i a ) ∈ C(l)), implying, for instance, D(S, q) = 0≤i≤s χ 2|qi+1−qi · 0≤i≤s C q i+1 −q i 2 , when S = {(q i , v), 1 ≤ i ≤ s, 0 ≤ q 1 < q 2 < ... < q s ≤ 2q − 1} , q s+1 := 2q + q 1 (the relevant Dyck paths return at the origin after q 2 − q 1 , q 3 − q 1 , ... , q s − q 1 steps). 
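The product-of-Catalan-numbers evaluation of D(S, q) reflects the classical factorization of a Dyck path at prescribed returns to the origin; the brute-force check below illustrates it for small q (function names are hypothetical, not the paper's code).

```python
from itertools import product
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def heights(steps):
    # running heights of a +-1 path, starting at 0
    hs = [0]
    for s in steps:
        hs.append(hs[-1] + s)
    return hs

def count_dyck_with_zeros(q, positions):
    # Dyck paths of length 2q (nonnegative, ending at 0) whose height
    # is 0 at every index in `positions`
    total = 0
    for steps in product((1, -1), repeat=2 * q):
        hs = heights(steps)
        if min(hs) >= 0 and hs[-1] == 0 and all(hs[t] == 0 for t in positions):
            total += 1
    return total

def catalan_product(q, positions):
    # product of Catalan numbers over the gaps between consecutive
    # required zeros (endpoints 0 and 2q included)
    pts = sorted({0, 2 * q} | set(positions))
    result = 1
    for a, b in zip(pts, pts[1:]):
        if (b - a) % 2:
            return 0
        result *= catalan((b - a) // 2)
    return result

for q in (3, 4, 5):
    for positions in ([2], [4], [2, 4], [2, 2 * q - 2]):
        assert count_dyck_with_zeros(q, positions) == catalan_product(q, positions)
```

Each segment between two consecutive prescribed zeros is itself an arbitrary Dyck path, which is why the counts multiply.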
For example, there are M((i 1 , i 1 , 2l 1 )) = C l1 = |C(l 1 )| elements in ∪ q≥1 C(q) of length 2l 1 , and M(((i 1 , i 1 , 2l 1 ), (i 1 , i 1 , 2l 2 ))) = C l1 C l2 elements in ∪ q≥1 C(q) of length 2l 1 + 2l 2 with i 0 = i 2l1 (i.e., concatenations of one element of C(l 1 ) and one of C(l 2 )). Certain degeneracy occurs when some of lengths are 1 as the set P might be nonempty, whereas M can vanish, e.g., i r = j r , l r = 1 (no element of ∪ q≥1 C(q) has two consecutive vertices equal), and the second branch in (3.18) is meant to account for these cases. Having introduced even multisets and the combinatorial function M, proceed with the statement and proof of Lemma 2. (i k ) 1≤k≤L ∈P E[ 1≤k≤L b i k ] ≤ n (p−L+l)/2 E[b 2l0 11 ] · [M(((u k v k , l k )) 1≤k≤L ) + C ⌊p/2⌋ L(n)(Cp 2 ) 2L−l · L! · n −2δ1 ], (3.19) where δ 1 = 1 4 − δ, l = |{1 ≤ k ≤ L : u k = v k }|, l 0 = |{1 ≤ k ≤ L : l k = 1}| , and C ≥ 16 satisfies (3.1). Proof. Assume 2|p : otherwise the claim is immediate (all expectations vanish). L = 1 forces u 1 = v 1 , l = 1, M((u 1 u 1 , l 1 )) = C p/2 , whereby the result follows from (3.1) (l 0 = 0, and one vertex is fixed, furnishing an additional factor of n −1 in step 3). Suppose L ≥ 2, and use induction on p ≥ L to show (3.19) holds with the second term encompassing the contribution of all configurations but those in which the paths with indexes in {1, 2, ... , L} − ({1 ≤ k ≤ L : i k = j k , l k = 1} ∪ (∪ 1≤k≤L S(i k j k ))) (see (3.18)) are assembled in a collection of edge-disjoint elements of ∪ q≥1 C(q), sharing vertices solely trivially (i.e., if w is a vertex belonging to two distinct cycles, then w ∈ {u, v}, and no collection of loops at u ′ contains v ′ for {u ′ , v ′ } = {u, v}). 
If p = L, then l 0 = p, (i k ) 1≤k≤L ∈P E[ 1≤k≤L b i k ] = E[ 1≤k≤p b u k v k ], yielding (3.19) together with Hölder's inequality: E[ 1≤k≤p b u k v k ] ≤ E[ 1≤k≤p b 2 u k v k ] ≤ 1≤k≤p (E[b 2p u k v k ]) 1/p = E[b 2l0 11 ]; the second part is trivially satisfied since M(((u k v k , l k )) 1≤k≤L ) = M(((u k v k , 1)) 1≤k≤L ) = 1. Assume p ≥ L + 1. Glue as many paths of length 1 as possible in cycles (all loops are included, and at most one copy of uv is left out): denote these cycles by L 1 , L 2 , ... , L T1 , and take T 1 minimal (i.e., no two distinct cycles share a vertex). Consider the remaining paths, and assume their indices are 1, 2, ... , T 2 ; among them, 2m have distinct endpoints since the original tuple is even: without loss of generality, let u k v k = uv for 1 ≤ k ≤ 2m, and construct m cycles by merging the paths with indices 2r − 1, 2r for 1 ≤ r ≤ m; the rest of the paths are already cycles. Hence the elements of P are unions of T 0 := T 1 + m + (T 2 − 2m) cycles (L i ) 1≤i≤T0 ; let

D 0 = 1≤i≤m χ 2|l2i−1+l2i C (l 2i−1 +l 2i )/2 · 2m<i≤T2 χ 2|li C l i /2 . (3.20)

Case 1 : (L i ) 0≤i≤T0 are pairwise edge disjoint. This entails the contributions split into a product with T 0 factors, and an upper bound is

n (p−L+l)/2 E[b 2l0 11 ] · D 0 · (1 + CL(n)p 2 n −2δ1 ) L , (T1)

where d(L k ) = 1 if L k has its two set vertices distinct, otherwise d(L k ) = 0, because in step 3 there is another factor of n −1 (n −2 ) when L k has its two fixed vertices equal (distinct), and

p − L + l = 1≤k≤L l k χ u k =v k + 1≤k≤L (l k − 1)χ u k ≠v k ≥ T1<k≤T0 (|L k | − 2d(L k ))

(if L t contains u, v, then both of the paths forming it belong to the second sum: otherwise, to the first). Case 2 : (L i ) 1≤i≤T0 are not pairwise edge disjoint.
Take t 1 < t 2 with L t1 , L t2 sharing an edge e = xy, whose two shared copies are not both among the paths of length 1, and if possible, choose one of them to be a loop; take s 1 < s 2 arbitrary with L s1 , L s2 sharing an edge e; if both of its copies are paths of length 1, then e = uv and s 1 ≤ T 0 ; either uv has odd multiplicity in ∪ 1≤k≤T0 L(k), in which case all expectations vanish, or there is a path of length at least 2 containing uv (its multiplicity in ∪ T1<k≤T0 L(k) is even and positive, hence at least 2), yielding a pair with the desired properties. The edge e belongs to two of the original paths, call them ρ 1 , ρ 2 , and by deleting the first copy of e in both of them, two edges are lost and the configuration is of the same type (i.e., a collection of paths with endpoints in {u, v}) by gluing them at the x, y. Consider what can occur with this merge: (i) ρ 1 , ρ 2 are loops at the same vertex, say u : L − l remains constant (they can both decrease if ρ 1 or ρ 2 has length 1); (ii) ρ 1 , ρ 2 are loops at different vertices: L − l increases by at least 1 as l decreases by 2, and L can only decrease by 1 (the result is empty if and only if both ρ 1 , ρ 2 have length 1, impossible); (iii) ρ 1 is a loop, say at u, and ρ 2 contains u, v : L − l stays constant or decreases by 1 (l decreases if ρ 1 = (u, u), and uu is adjacent to the endpoints of ρ 2 , in which case L also drops by 1; L cannot decrease by 2 since ρ 1 or ρ 2 has length at least 2); the case ρ 1 containing u, v, ρ 2 being a loop is analogous; (iv) ρ 1 , ρ 2 contain u, v : the new paths can either both contain u, v, or be loops, one at u and one at v; in the former case, L − l can only decrease by 1 (l does not change), while in the latter, L − l decreases by 2 (when ρ 1 , ρ 2 have length at least 2, l increases by 2 and L stays constant; otherwise say ρ 1 = (u, v) : then either L stays constant, l increases by 2, or L decreases by 1, l increases by 1 : the latter happens when the copy of uv 
in ρ 2 is adjacent to one its endpoints); denote this case by ( * ). Since the new paths can be assembled back to their original format in at most 4p 2 ways (choose a vertex on each, split there into two components, and glue them back), a bound for (i) − (iv), apart from ( * ), is, by the induction hypothesis, 4p 2 n −1/2 · n 2δ · n (p−L+l)/2 · C p/2 · E[b 2l0 11 ] · [(1 + (L − l − 1)!!χ L>l ) + L(n)(Cp 2 ) 2L−l · L! · n −2δ1 ] : (T2) in all situations, at least a factor of n −1/2 is gained (p decreases by 2, and l − L increases by at most 1), the number of paths of length 1 does not increase (such paths can appear only when ρ 1 or ρ 2 has length 1, in which case their number remains constant unless the other path has length 3 : if the path of length 1 is a loop, then L remains the same and l can only decrease, yielding an extra factor of n −1/2 , which can account for the two new paths of length 1; else l does not change since it could only increase when ( * ) is satisfied), the two erased copies of xy can be accounted by a factor of n 2δ , the uniform bound M(((u k v k , l k )) 1≤k≤L ) ≤ (1 + (L − l − 1)!!χ L>l ) · C p/2 can be employed (solely M(((u k v k , l k )) 1≤k≤L ) > 1 must be justified; when L − l = |{1 ≤ k ≤ L : u k = v k , l k = 1}|, the claim ensues because M(((u k v k , l k )) 1≤k≤L ) ≤ 1≤k≤L:l k >1 χ 2|l k C l k /2 ≤ C p/2 ; otherwise the sets L contributing to M have the sets (L r ) 1≤r≤t of length 2 because this is the only way of merging paths with endpoints in {u, v} into cycles, and any loop in an element of ∪ q≥1 C(q) belongs to ∪ q≥1 C(q); there are thus at most (L − l − 1)!! possibilities for (L r ) 1≤r≤t , and each summand in (3.18) is at most max 4p 2 · n (p−L+l)/2 · E[b 2l0 11 ] · C p/2 L(n) · (Cp 2 ) 2L−l−2 · L! · n −2δ1 (T3) if the cycles do not fall within the exceptional set described in the second part of the induction hypothesis as 2L − l decreases by at least 1 (l increases by 2, and L does not change). 
Suppose ρ is among those maximal configurations: the two newly formed loops are edge-disjoint and have no nontrivial vertex intersections. If there are no summands in M for ρ, then either (c1) all the loops in ρ have length 1, or (c2) there is also no summand in M for the original arrangement (∪ q≥1 C(q) is invariant under removing loops): (c1) the original configuration has all paths of length one, apart from two of length 2, u, x, v and u, y, v; then (i k ) 1≤k≤L ∈P E[ 1≤k≤L b i k ] ≤ E[b 2l0 11 ] · (n + 3n 4δ ) = n E[b 2l0 11 ] · (1 + 4n −4δ1 ) since the expectations vanish unless x = y or x, y ∈ {u, v}, n accounting for x = y ∈ {u, v}, p − L + l ≥ 2; in this situation, M(((u k , v k , l k )) 1≤k≤L ) = 1 (for the paths to form elements of ∪ q≥1 C(q), x, y ∈ {u, v}, and thus x = y : this yields at most one configuration) and both parts of the conclusion hold; (c2) this can occur solely when there is 1 ≤ r ≤ L, u r = v r , 2|l r + 1, l r > 1, or |{1 ≤ k ≤ L : u k v k = uv, 2|l k + 1}| is odd; in the both situations, all expectations vanish (ρ 1 , ρ 2 are not loops, and thus all paths with identical endpoints of length at least 2 are edge-disjoint from the rest). Assume now M > 1 for the paths comprising ρ : by adding back the two copies of xy, the configuration remains maximal ((x, y, L 1 , y, x, L 2 , x) ∈ ∪ q≥1 C(q) because L 1 , L 2 ∈ ∪ q≥1 C(q) are vertex-disjoint, and x = y from the second part of the induction hypothesis on the clipped configuration), entailing these terms are accounted by the first component in (3.19). 
Putting together (T1)-(T3) completes the induction step: n (p−L+l)/2 · E[b 2l0 11 ] · [M(((u k v k , l k )) 1≤k≤L ) + C p/2 · [(1 + CL(n)p 2 n −2δ1 ) L − 1]+ 4p 2 n −2δ1 C p/2 ·L(n)(Cp 2 ) 2L−l ·L!·n −2δ1 +4p 2 n −2δ1 C p/2 (1+(L−l−1)!!χ L>l )+4p 2 n −2δ1 C p/2 (Cp 2 ) 2L−l−1 ·L!], and for p ≥ 4 (p = 2 yields either L = p or L = 1), (1+CL(n)p 2 n −2δ1 ) L −1 ≤ L·CL(n)p 2 n −2δ1 (1+CL(n)p 2 n −2δ1 ) L−1 ≤ L·CL(n)p 2 n −2δ1 exp(CL(n)p 2 n −2δ1 (L−1)) and L · CL(n)p 2 exp( L−1 4 ) ≤ L!CL(n)p 2 ≤ L(n)(Cp 2 ) L L!/4, 4p 2 n −2δ1 ≤ 1 4 , 4p 2 (1 + (L − l − 1)!!χ L>l ) ≤ 8p 2 L! ≤ (Cp 2 ) L L!/4, 4p 2 (Cp 2 ) 2L−l−1 · L! ≤ (Cp 2 ) 2L−l L!/4. The generalization of Lemma 2 employed below reads as follows. Lemma 3. (3.19) continues to hold in the setup of Lemma 2 after dropping the condition u k , v k ∈ {u, v} for all 1 ≤ k ≤ L with L! replaced by (2L)!. Proof. The proof is almost identical to the case {u i , v i : 1 ≤ i ≤ L} ⊂ {u, v}, and solely the differences are included. Glue the paths of length 1 in stages: first, all loops with a shared vertex into one cycle, second, for each undirected edge with distinct endpoints, create cycles by merging them in pairs, and third, for the remainder, consisting of pairwise distinct undirected edges, assemble as many paths as possible in cycles: denote the outcomes of such a procedure by (L r ) 0≤r≤T1 . 
The endpoints of the paths of length at least 2 and the leftovers of length 1 form an even tuple, and induction on their number r yields they can be assembled into a collection of cycles (suppose without loss of generality these are (u k v k ) 1≤k≤r ; if r = 1, then u 1 = v 1 ; suppose r ≥ 2; if u 1 = v 1 , use the induction hypothesis on the remaining r − 1 cycles; else there is 2 ≤ k ≤ r such that u 1 ∈ {u k } ∪ {v k }; merge these two paths into one, and apply the induction hypothesis on the new collection of r − 1 paths, possible because the new tuple of endpoints continues to be even when ignoring the vertex at which the two paths were joined); by choosing a gluing with as many components as possible, all the resulting cycles have pairwise distinct vertices (if two vertices were equal, then the number of cycles could be increased by 1): denote them by (L r ) T1<r≤T0 . Change Case 1 to no pair of distinct cycles sharing an edge containing a path of length at least 2 (Case 2 is its complement). An analogous rationale goes through for Case 1 : for the exponent of n, the relevant function is d(L k ) = q k − 1, where q k is the number of paths forming L k (because the number of cycles is maximal, q k vertices are fixed in step 3), and the desired inequality is justified in the same vein. Regarding Case 2, a pair t 1 , t 2 as originally described continues to exist: an identical argument yields the desired claim unless (x, y) is a path of length 1 with x = y, xy having multiplicity 1 in ∪ T1<r≤T0 L(r) and thus odd in ∪ r≤T1 L(r); since the endpoints of the paths underlying ∪ r≤T1 L(r) form an even tuple, there must be an edge (y, z) of odd multiplicity in ∪ k≤T1 L(k); this entails a path among those of ∪ T1<k≤T0 L(k) contains a copy of it, giving either the needed pair or another path (y, z) of length 1; iterating this procedure finitely many times produces either t 1 , t 2 as wanted, or a sequence x 1 , x 2 , ... , x m with x 1 x 2 , x 2 x 3 , ... 
, x m x 1 paths of length 1, each among the components of ∪ T1<k≤T0 L(k), contradicting the maximality of T 1 . Continuing with the argument in Case 2, the other possibilities apart from (i) − (iv) can only increase l by 1, and thus generate a factor of n −1/2 . An upper bound on M is

M(((u k v k , l k )) 1≤k≤L ) ≤ (1 + (2L − 2l − 1)!!χ L>l ) · C p/2 : (3.21)

in any relevant configuration underlying M, the resulting cycles have all the endpoints of multiplicity 2 (i.e., after fusing them at the shared endpoints, the remaining vertices are pairwise distinct); each contains at least 2 paths, implying their number is at most 1≤r≤L ′ (m r − 1)!!, where L ′ = |{u k v k , 1 ≤ k ≤ L}|, {u k v k , 1 ≤ k ≤ L} = {u σ(1) v σ(1) , u σ(2) v σ(2) , ... , u σ(L ′ ) v σ(L ′ ) }, m r = |{1 ≤ k ≤ L : ∃u, v, w, u r v r = uv, u σ(r) v σ(r) = uw}|, 1 ≤ r ≤ L ′ , because the mapping taking each path to its right neighbor in the cycle it belongs to (take in each cycle the path of smallest index and let its right neighbor have minimal index as well: this defines an orientation) is injective, and 1≤r≤L ′ (m r − 1)!! ≤ ( 1≤r≤L ′ m r − 1)!! ≤ (2L − 2l − 1)!! employing (a − 1)!!(b − 1)!! ≤ (a + b − 1)!! and a → (a − 1)!! non-decreasing. Lastly, consider the situation with the second part of the induction hypothesis satisfied: (c1) the expectations vanish unless x = y or x, y ∈ ∪ 1≤k≤L ({u k } ∪ {v k }), yielding (i k ) 1≤k≤L ∈P E[ 1≤k≤L b i k ] ≤ E[b 2l0 11 ] · (n + (2L) 2 n 4δ ) = n E[b 2l0 11 ] · (1 + 4L 2 n −4δ1 ), with the bound continuing to hold because 4L 2 ≤ L 2L (p ≥ L, 2L − l ≥ L) for L ≥ 2. The remainder of the argument continues to hold, providing the desired claim. Before stating and justifying the last extension of Lemma 2, consider another version of M. For t 0 ∈ Z ≥0 and X a real-valued random variable with finite moments, let M X (t 0 /2, ((i k j k , l k )) 1≤k≤L ) = 0, t 0 ∉ 2Z, 1, l 1 = l 2 = ... 
= l L = 1, L V (X, L 02 ) {ar ,br}∈L02 D 2 (l ar , l br ) · 1≤i≤t D(Q(L i )), else, (3.22) where the summation is over L = {L 00 , L 01 , L 02 , L 03 , L 1 , L 2 , ... , L t }, satisfying (i)−(iii), (v) (beneath (3.18)) with L 0 = L 00 ∪ L 01 , (vi ′ ) L 00 = {1 ≤ k ≤ L : i k = j k , l k = 1} ∪ (∪ 1≤k≤L S(i k j k )), where S(uv) is the set of the smallest 2 · l(uv) elements of S ′ (uv) = {1 ≤ k ≤ L : uv = i k j k , l k = 1}, l(uv) := ⌊ |S ′ (uv)| 2 ⌋, (vii ′ ) L 01 = {1 ≤ k ≤ L : i k = j k , l k > 1}, L 02 = ∪ 1≤r≤t0/2 {a r , b r : i ar = i br } ⊂ L 01 with a s = b s , |{a s , b s } ∩ {a r , b r }| > 0 ⇔ r = s, and D(a, b) the number of Dyck paths of length a + b − 2 corresponding to elements of ∪ q≥1 C(q), each being the merge of two cycles of lengths a, b, respectively, sharing the first vertex and weighted by the size of its preimage of the map described in subsection 3.1: a rationale analogous to the one presented therein yields D(a, b) ≤ C ′ · C (a+b−2)/2 (3.23) for C ′ > 0 universal, (viii ′ ) V (X, L 02 ) = v∈{i k ,j k ,1≤k≤L} (vr) r∈V (v) ,vr ∈S(la r ,l br ) E[ r∈V (v) X 2vr+|{1≤k≤L,i k =j k =v,l k =1}| ], where V (v) = {1 ≤ s ≤ t 0 /2 : i as = v}, S(a, b) = {1, 2}, 2|a + 1, {1}, else. M X differs from M in one regard: it forces some loops to be non-edge disjoint from the rest. These configurations are relevant inasmuch as the diagonal entries e T i A p e i are not centered when 2|p, entailing the first-order terms in products containing them must account for this. Nevertheless, the core idea behind M X remains identical to that underlying M : the paths are pasted in elements of ∪ q≥1 C(q), with no nontrivial vertex intersections (i.e., if two distinct cycles share a vertex, then each contains a path with it as one of its endpoints). 
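The double-factorial bounds used here and in the proof of (3.21) rest on two elementary facts, (a − 1)!!(b − 1)!! ≤ (a + b − 1)!! and the monotonicity of a → (a − 1)!!; a quick check (illustrative, not the paper's code):

```python
def double_factorial(n):
    # n!! for odd n >= -1, with (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# (a-1)!!(b-1)!! <= (a+b-1)!! for even a, b, and a -> (a-1)!! non-decreasing
for a in range(2, 14, 2):
    for b in range(2, 14, 2):
        assert double_factorial(a - 1) * double_factorial(b - 1) <= double_factorial(a + b - 1)
    assert double_factorial(a - 1) <= double_factorial(a + 1)
```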
Since a pair of edges is deleted in each merge in (vii ′ ), there can be edge multiplicities larger than 2 for loops at the endpoints, which is what (viii ′ ) tracks: e.g., for L = 4, u k = v k = u, l k = 2p k + 1, 1 ≤ k ≤ 4, the maximal configurations include unions of four paths belonging to ∪ q≥1 C(q) with one loop at u, overall leading to uu having multiplicity 4. (3.24) where C ′ > 0 satisfies (3.23), and the maximum is taken over sets L ′ underlying (3.22) for a sub-tuple of ((u k v k , l k )) 1≤k≤L . (i k ) 1≤k≤L ∈P E[ 1≤k≤L b i k ] ≤ n (p−L+l)/2 · n −t0/2 E[b 2l0 11 ] · [M b11 (t 0 /2, ((u k v k , l k )) 1≤k≤L )+ + C ⌊p/2⌋ L(n) · (CC ′ p 2 ) 2L−l · (2L)! · n −2δ1 · max L ′ V (X, L ′ 02 )], Proof. Proceed in the same vein as in Lemmas 2, 3: the second part of the induction hypothesis remains the same for the cycles with indices {1, 2, ... , L} − ({1 ≤ k ≤ L : i k = j k , l k = 1} ∪ (∪ 1≤k≤L S(i k j k ))) − L 02 and the merges of the pairs in L 02 (the common edges left out). When t 0 = 0, Lemma 3 yields the conclusion (M b11 (0, ((u k v k , l k )) 1≤k≤L ) = M(((u k v k , l k )) 1≤k≤L ). In virtue of (3.23), reasoning as for (3.21) gives M X (t 0 /2, ((u k v k , l k )) 1≤k≤L ) ≤ (1 + (2L − 2l − 1)!!χ L>l ) · (C ′ ) L/2 C p/2 · max L ′ V (X, L ′ 02 ). (3.25) Suppose t 0 > 0. The base case for induction does not change since t 0 = 0 when L = p. Case 1 is impossible (t 0 > 0), and in Case 2, select ρ 1 among the paths giving t 0 , and denote by xy the (first) common edge with ρ 2 , and their endpoints (u, u), (v, w), respectively; furthermore, if there are loops containing two copies of xy among the paths, choose ρ 1 one of them. By deleting two copies of xy, t 0 decreases by at most 2 because either ρ 1 contains another copy of it or there are two other paths containing xy (its multiplicity is even: otherwise no contribution ensues). Call this new configuration of paths ρ ′ . 
(i ′′ ) v = w : the remaining t 0 − 1 cycles continue to be non-edge disjoint with the rest. Hence for ρ ′ , t 0 decreases by 1, and l − L does not increase (l can only decrease, and when L decreases, so does l : this occurs when ρ 2 has length 1 and vw = uu ′ ), whereby p − L + l − t 0 decreases by 1; the induction hypothesis yields the claim in the same fashion as before; (ii ′′ ) v = w, v = u : t 0 decreases by 2, whereby p − L + l − t 0 decreases by 2 (2 loops are lost), and anew the induction hypothesis and the previous rationale are effective; (iii ′′ ) v = w = u : denote by ρ 3 the merge of ρ 1 , ρ 2 (keep them into one cycle as in subsection 3.1), and suppose ρ 1 , ρ 2 are the first two paths for the sake of notational simplicity. If ρ 3 shares no edge with the rest of the paths, either 2|l 1 + l 2 + 1, in which case all expectations vanish, or the induction hypothesis on the remainder of the paths and independence yield the conclusion: n (l1+l2)/2−1 D(l 1 , l 2 )(1 + CL(n)n −2δ1 ) · n (p−L+l)/2−t0/2−(l1+l2)/2+1 · · E[b 2l0 11 ]·[M b11 ((t 0 −2)/2, ((u k v k , l k )) 3≤k≤L )+C (p−m1−m2)/2 L(n)(CC ′ p 2 ) 2L−l−2 (2L−2)!n −2δ1 ·max L ′ V (X, L ′ 02 )] since p− L + l − t 0 for the clipped configuration decreases by at least −l 1 − l 2 + 2 : the first-order term remains included in M b11 (t 0 /2, ((u k v k , l k )) 1≤k≤L ) because vertex-disjoint configurations underlying M b11 are closed under unions, while the second-order terms are at most, using (3.23) and (3.25), C ′ C (m1+m2)/2−1 (1 + CL(n)n −2δ1 ) · C (p−m1−m2)/2 L(n)(CC ′ p 2 ) 2L−l−2 (2L − 2)!n −2δ1 · max L ′ V (X, L ′ 02 )+ +C ′ C (m1+m2)/2−1 · CL(n)n −2δ1 · (1 + (2L − 2l − 1)!!χ L>l ) · (C ′ ) L/2−1 C (p−l1−l2)/2 · max L ′ V (X, L ′ 02 ) producing the desired inequality. Lastly, suppose ρ 3 shares some edge with the remaining paths; use the induction hypothesis on them and ρ 3 : t 0 decreases by 1, whereby so does p − L + l − t 0 , and again the conclusion ensues from the induction hypothesis. 
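The estimates below lean repeatedly on the Stirling-type asymptotics 4^s / (s^{3/2} C_s) → √π for the Catalan numbers C_s. A quick numerical confirmation (illustrative only, not part of the proof):

```python
# Numerical confirmation of 4^s / (s^{3/2} C_s) -> sqrt(pi) as s -> infinity,
# the consequence of Stirling's formula invoked repeatedly in this section.
import math
from fractions import Fraction

def catalan(s: int) -> int:
    return math.comb(2 * s, s) // (s + 1)

def ratio(s: int) -> float:
    # Form the exact integer ratio first, to avoid float overflow for large s.
    return float(Fraction(4**s, catalan(s))) / s**1.5

errors = [abs(ratio(s) - math.sqrt(math.pi)) for s in (10, 100, 2000)]
assert errors[0] > errors[1] > errors[2]  # the error shrinks with s
assert errors[2] < 2e-3
```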
Higher Moments Return to (3.17) for l > 2 : n l/2 E[(e T i A p e i − 1 n E[tr(A p )]) l ] = n −(p−1)l/2 (i1,i2, ... ,i l ) E[(a i1 − E[a i1 ]) · (a i2 − E[a i2 ]) · ... · (a i l − E[a i l ])] The additional normalization for each factor is c(p, E[a 4 11 ]) ≥ (2C p−1 ) 1/2 , and 1 n |E[tr(A p )]| ≤ 2χ 2|p · C p/2 , whereby 1 n |E[tr(A p )]| c(p, E[a 4 11 ]) ≤ 2χ 2|p · C p/2 2C p−1 = O(1) using lim s→∞ s 3/2 Cs 4 s = √ π, a consequence of Stirling's formula; in light of the case l = 2, this entails showing connected components of G with L > 2 cycles are negligible, i.e., the sum of their corresponding expectations is o(n (p−1)L/2 (2C p−1 ) −L/2 ), suffices to conclude (3.5) for l > 2. Take first 2|p, and consider a connected component with L > 1 cycles, i 1 , i 2 , ... , i L . Apply Lemma 4 to B = 1 E[a 2 11 χ |a11|≤n δ ] A s ,(3.26) (L(n) = 2C(ǫ 0 ) suffices for n ≥ n(ǫ 0 ) since E[a 2 11 χ |a11|≤n δ ] ∈ [1 − n −2δ , 1]), and the even tuple formed by the endpoints of i 1 , i 2 , ... , i L (i.e., u k = v k = 1, 1 ≤ k ≤ L); in this situation, t 0 = l = L, l 0 = 0, and the conclusion follows because the first order term in (3.24) corresponds to configurations with pairs of cycles sharing no vertex apart from 1, entailing connected components with L > 2 are negligible (the second term is negligible since C pL/2 (2Cp−1) L/2 = O(p 3L/2 ), and V = 1 when there are no paths of odd length since elements of ∪ q≥1 C(q) can only be split into cycles in the same set, entailing the common edges from (vii ′ ) are pairwise distinct among the pairs ({a r , b r }) 1≤r≤L/2 ). This concludes 2|p. Some comments are in order for 2|p+1 : there are additional configurations yielding leading contributions. 
For instance, a cluster with 2m cycles, each an element of C( p−1 2 ) with a loop attached at some appearance of 1 in it; these give, after normalization, E[a 2m 11 ] · ( 2≤t≤ p+1 2 t(f p−1 2 ,t − 1) c(p, E[a 4 11 ]) ) 2m , where f l,t is the number of elements in C(l) with the first vertex of multiplicity t − 1. Lemma 4 in [17] shows for 1 ≤ t ≤ l, f l,t = 2l − t l − 1 − 2l − t l , whereby C l ≤ 2≤t≤l tf l,t ≤ C ′′ · C l (f l,1 = C l−1 , and (t+1)f l,t+1 tf l,t = 1 − t(l+1)−(3l+1) (t−1)(2l−t) ≤ 2 3 for t 0 ≤ t ≤ l). This shows all moments of a 11 contribute to those of e T i A p e i when 2|p + 1 : as p → ∞, the higher moments can turn negligible if p 3l/4 grows faster than E[a 2l 11 ]. Although Lemma 4 does not shed light on this case, it can be shown that when (2.1) holds, the sole non-negligible clusters are of the two types described above (i.e., of size 2, or 2m with each containing 11). One implication of this growth condition and p = n o(1) is E[a 2l 11 χ |a11|≤n δ ] = n o l (1) . (3.27) To see this, consider a connected component G 0 of G of size L > 1, and suppose first 11 appears in at most one of its vertices. Let {u 1 v 1 , u 2 v 2 , ..., u k v k } be a set of edges whose copies in i 1 , i 2 , ... , i L underlie the edges of a spanning tree of G 0 : particularly, for any 1 ≤ r ≤ k, u r v r belongs to at least two cycles, and so u r v r ≠ 11. Split i 1 , i 2 , ... , i L into paths using the shared edges necessary for such a merge and their first vertices (i.e., by cutting out these edges, a set of paths is formed, and if need be, cut again the path to which the first vertex belongs into two). Fix the vertices of the edges (u r v r ) 1≤r≤k , and apply Lemma 3 to B given by (3.26) and this configuration of paths. This entails the overall contribution (after accounting for the set vertices) is at most E[a 4L 2 11 χ |a11|≤n δ ] · n pL/2−(L ′ −l ′ )/2 · n |{ur,vr ,1≤r≤k}∩{2,3, ...

,n}| · (2L)!C pL/2 (3.28) (using (3.21)) since there are at most L · 2L paths of length 1 (each cycle has at most L − 1 edges employed for this procedure as G is a simple graph). Lemma 5. A cycle i r with s ≥ 1 pairwise distinct vertices in {2, 3, ... , n} among the endpoints has at least s + 1 paths that are not loops at the end of any split as described above. Proof. Use induction on s. The base case s = 1 holds because there is an edge uv = 11 cut out, yielding at least 2 paths that are not loops: recall the first vertex 1 also generates a rupture: if u = v, then uv is not a loop and there is another such path because if it were not, then by traversing the cycle clockwise and counterclockwise, all the endpoints and 1 would be equal to both u and v, absurd; similarly, when u = v > 1, at least two paths with distinct endpoints are created due to the split at the first vertex, 1. Suppose s ≥ 2 : if only one edge is cut out, then it must be uv with u > v > 1, whereby s = 2 and the claim follows as there is a path with distinct vertices in each of the three segments in which u, v, and the first vertex separate the cycle. Assume at least two edges are drawn out, and take (x, y) = (i s1 , i s1+1 ), (z, t) = (i s2 , i s2+1 ) with s 1 < s 2 and y = x or y = z (if this were impossible, then all the cut edges would be uu for some u, apart perhaps from the one with largest index which must be adjacent to u; since s ≥ 2, the last edge must be uv, u = v, and u, v > 1, in which case s = 2; the conclusion ensues as there are three paths with endpoints (1, u), (1, v), (u, v), respectively); the induction hypothesis yields s − 1 + 1 = s paths that are not loops in the abridged cycle with (i s1 , ... , i s2 ) replaced by (i s1 , i s2 ) as an unerased edge; finally, y = x or y = z entails at least another path with distinct endpoints is created in the original cycle, completing the induction step. Lemma 5 entails − (L ′ − l ′ )/2 + L/2 + |{u r , v r , 1 ≤ r ≤ k} ∩ {2, 3, ... 
, n}| ≤ 0 (3.29) inasmuch as it gives L ′ − l ′ − L ≥ 1≤h≤L V (i h ), where V (i h ) is the number of pairwise distinct endpoints among {2, 3, ... , n} adjacent to the cut out edges in i h , and each element of {u r , v r , 1 ≤ r ≤ k} ∩ {2, 3, ... , n} belongs to at least two cycles among i 1 , i 2 , ... , i L . In particular, the bound (3.28) is at most O L (p c(L) E[a 4L 2 11 χ |a11|≤n δ ]). (3.30) If the inequality is strict, then the contribution is negligible ((3.27) entails E[a 2l 11 χ |a11|≤n δ ] = n o(1) ) for any fixed l ∈ N, and the contribution of M under the normalization is at most c L p cL in light of (3.21)). Suppose equality holds and L > 2 : each edge must appear exactly twice, no two can be adjacent, at least one cycle has two distinct edges cut out (L > 2), in which case the inequality becomes strict unless one of its cut out edges is a loop uu (return to the proof of Lemma 5: this situation falls within s ≥ 2, and hence at least s + 2 paths that are not loops are formed since y = x as xy is not a loop, and y = z as distinct edges are not adjacent). Discarding two copies of uu produces a new configuration with L loops and 2 fewer edges; apply Lemma 3 to it, which generates a factor of n −1 , counteracting the cost of gluing and the dropped copies p 2 n 2δ as well as (3.30), turning anew the contribution negligible. This completes the claim of clusters with L > 2 cycles being negligible as long as 11 appears in at most one of them. The remaining situation can be handled similarly by putting aside the edges 11 : this entails the remaining configuration is negligible, unless the cycles with 11 are elements of ∪ q≥1 C(q) (by taking out one edge, the normalization per cycle is its length divided by 2 : hence any cluster is negligible from the rationale yielding (3.29)). 
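The statistic f_{l,t} from the cluster analysis above counts elements of C(l) by the multiplicity of the first vertex; under the standard Dyck-path encoding this is the number of returns to the starting level (the indexing convention here may be shifted by one relative to the text's). A brute-force enumeration, offered only as an illustration, recovers two facts used above: the counts sum to C_l, and exactly C_{l−1} paths have a single return:

```python
# Enumerate Dyck paths of semilength l and bucket them by number of returns
# to the starting level; totals recover the Catalan number C_l, and the
# single-return bucket has size C_{l-1} (cf. f_{l,1} = C_{l-1} in the text,
# modulo the indexing convention for the first-vertex multiplicity).
import math

def catalan(n: int) -> int:
    return math.comb(2 * n, n) // (n + 1)

def dyck_paths(l: int):
    """All Dyck paths of semilength l as tuples of +1/-1 steps."""
    if l == 0:
        return [()]
    out = []
    for k in range(l):  # the first return encloses a path of semilength k
        for inner in dyck_paths(k):
            for rest in dyck_paths(l - 1 - k):
                out.append((1,) + inner + (-1,) + rest)
    return out

l = 7
by_returns = {}
for path in dyck_paths(l):
    height, returns = 0, 0
    for step in path:
        height += step
        if height == 0:
            returns += 1
    by_returns[returns] = by_returns.get(returns, 0) + 1

assert sum(by_returns.values()) == catalan(l)   # C_7 = 429
assert by_returns[1] == catalan(l - 1)          # C_6 = 132
```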
In conclusion, under the additional assumption (2.1), for 2|L, the first-order term of the L th moment is (2C p−1 ) −L/2 k≤L/2 L 2k (2k − 1)!!(2C p−1 ) k C L−2k (p−1)/2 E[a L−2k 11 χ |a11|≤n δ ], with the term corresponding to k = L/2 dominating: k≤L/2 L 2k (2k − 1)!!( C (p−1)/2 2C p−1 ) L−2k E[a L−2k 11 χ |a11|≤n δ ] ≤ ≤ (L − 1)!! + k<L/2 L 2k (2k − 1)!!( c p 3/4 ) L−2k E[a L−2k 11 χ |a11|≤n δ ] ≤ ≤ (L − 1)!! + k<L/2 L 2k k c M −(L−2k) ≤ (L − 1)!! + 2L L+1 · M −1 for all M ≥ 2. Off-Diagonal Entries Suppose without loss of generality that i = 1, j = 2, and consider a cluster of L paths from 1 to 2, i 1 , i 2 , ... , i L . If 2|L + 1, then the expectation is 0 as there exists an edge in the union of i 1 , i 2 , ... , i L of odd multiplicity (adjacent to either 1 or 2). Assume next 2|L : apply Lemma 2 to B given by (3.26) , u k = 1, v k = 2, l k = p, whereby E[(e T 1 A p e 2 ) L ] ≤ n pL/2−L/2 · [M(((u k , v k , l k )) 1≤k≤L ) + C pL/2 · (CpL 2 ) 2L−l · L! · n −2δ1 ] and M(((u k , v k , l k )) 1≤k≤L ) = (L − 1)!! · C L/2 p since the cycles underlying the contributors in M consist of L/2 loops, each formed with two paths, and there are (L − 1)!! possibilities for forming these pairs. The second term is of smaller order than the first because C −L/2 p C pL/2 · (CpL 2 ) 2L−l · L! · n −2δ1 ≤ (Cp) 3L/2 (CpL 2 ) 2L−l · L! · n −2δ1 ≤ c L p 4L n −2δ1 employing lim p→∞ p 3/2 Cp 4 p = √ π. Conversely, this bound is tight since the arrangements yielding M contribute at least n pL/2−L/2 · (L − 1)!! · C L/2 p (E[a 2 11 χ |a11|≤n δ ]) pL/2 ≥ n pL/2−L/2 · (L − 1)!! · C L/2 p (1 − n −2δ ) pL/2 and (1 − n −2δ ) pL/2 ≥ 1 − Cn −2δ pL. This completes the proof of part (b) in Theorem 1. Weak Convergence This section contains the proof of Theorem 2, from which Corollary 1 and Theorem 3 are inferred. The key towards (4) is Lemma 6, quantifying µ n,p ≈ ν n,p at the level of characteristic functions (subsection 4.1). 
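The count (L − 1)!! entering M for the off-diagonal entries above is the number of ways to pair up L paths into L/2 loops. A brute-force check of this matching count (illustrative only):

```python
# Brute-force count of perfect matchings of L labeled objects, the quantity
# behind M = (L-1)!! * C_p^{L/2} for the off-diagonal entries.

def count_pairings(L: int) -> int:
    def rec(items):
        if not items:
            return 1
        first, rest = items[0], items[1:]
        # pair `first` with each remaining item in turn
        return sum(rec(rest[:i] + rest[i + 1:]) for i in range(len(rest)))
    return rec(tuple(range(L)))

def double_factorial(n: int) -> int:
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for L in (2, 4, 6, 8, 10):
    assert count_pairings(L) == double_factorial(L - 1)
```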
Theorem 5 builds on this and gives the mechanism behind its claimed inequality (subsection 4.2). Lastly, Corollary 1 and Theorem 3 are justified (subsection 4.3), and delocalization with high probability under the Haar (probability) measure on the orthogonal group O(n) is discussed in more depth (subsection 4.4). In what follows, the subscripts of the two measures of interest are dropped: µ = µ n,p , ν = ν n,p , and E η [·] denotes the expectation with respect to a measure η. Characteristic Functions This subsection presents Lemma 6, the backbone of Theorem 2. Proof. For T ∈ N, x ∈ R, |x| ≤ T 2e , |e ix − m≤T (ix) m m! | ≤ m>T |x| m m! ≤ m>T |x| m (m/e) m ≤ 2 · ( e|x| T ) T +1 , whereby |E µ [e iθ·X ] − E ν [e iθ·X ]| ≤ |E µ [ j≤T (iθ · X) j j! ] − E ν [ j≤T (iθ · X) j j! ]|+ +2E µ [( e|θ · X| T ) T +1 ]+2E ν [( e|θ · X| T ) T +1 ]+E µ [|e iθ·X − j≤T (iθ · X) j j! |·χ |θ·X|> T 2e ]+E ν [|e iθ·X − j≤T (iθ · X) j j! |·χ |θ·X|> T 2e ] := := B 1 + B 2 + B 3 + B 4 + B 5 . (4.2) Begin with the expectations with respect to ν : Cauchy-Schwarz inequality gives B 3 ≤ 2( e · ||θ|| T ) T +1 · (2T + 1)!! ≤ 2( e · ||θ|| T ) T +1 · ( √ 2T + 2) T +1 ≤ 2 · ( 2e · ||θ|| √ T ) T +1 ; (4.3) for M ∈ N, Markov's inequality entails B 5 /2 ≤ E ν [ j≤T (|θ · X|) j j! ·χ |θ·X|> T 2e ] ≤ ( 2e T ) M E ν [ j≤T (|θ · X|) j+M j! ] ≤ ( 2e||θ|| T ) M j≤T ||θ|| j (2j + 2M + 1)!! j! ≤ ≤ ( 4e||θ|| T ) M j≤T ||θ|| j · 2 j (j + M )! j! ≤ ( 4e||θ|| T ) M j≤T ||θ|| j · 2 j (j + M ) (j+M)/2 j! ≤ ≤ ( 4e||θ|| √ T + M T ) M j≤T ||θ|| j · 2 j (j + M ) j/2 j! ≤ ( 4e||θ|| √ T + M T ) M · exp(2||θ|| · √ T + M ),(4.4) employing (2t + 1)!! ≤ 2 t (t + 1)!. Consider next the mixed term, B 1 : B 1 = E µ−ν [ j≤T (iθ · X) j j! ] = l≤m r l =r i r r! · r! r 1 !r 2 !...r m ! · θ r1 l1 θ r2 l2 ...θ rm lm · E µ−ν [X r1 l1 X r2 l2 ...X rm lm ]. Let l k correspond to (u k , v k ) under the bijection {1, 2, ... 
, m} → {(i, j) : 1 ≤ i ≤ j ≤ n} used for R m = {(a ij ) T 1≤i≤j≤n , a ij ∈ R}; if (u k v k ) 1≤k≤r (including multiplicities) is not an even set, then the expectation above vanishes by symmetry. Otherwise apply Lemma 4 to this collection of paths for µ, ν : their first-order terms are universal because they are tight when l 0 = 0, up to an O(C pr/2 (1 − (1 − n −2δ ) pr/2 )) error, entailing |E µ−ν [X r1 l1 X r2 l2 ...X rm lm ]| ≤ (C p−1 ) −r/2 C pr/2 [(1−(1−n −2δ ) pr/2 )+C ′ (ǫ 0 )(CC ′ p 2 ) 2r (2r)!n −2δ1 ] ≤ n −2δ2 (2r) 2r (Cp) 4r for δ 2 = min (δ, δ 1 ) > 0, using lim s→∞ s 3/2 Cs 4 s = √ π. Therefore, in this situation, the error is at most n −2δ2 (exp(c||θ|| 2 p 4 T 2 ) − 1) in B 1 (regarding the odd elements in the multiset r i form, they can be absorbed by powers of ||θ|| because these are split in cycles of even length, and (i1,i2, ... ,i 2l ) θ i1i2 θ i2i3 ...θ i 2l i1 ≤ (i1,i2, ... ,i 2l ) ( θ 2 i1i2 2 + θ 2 i2i3 2 )...( θ 2 i 2l−1 i 2l 2 + θ 2 i 2l i1 2 ) ≤ 2 l ( 1 2 i,j θ 2 ij ) l = ||θ|| 2l by opening the parentheses in the products of the first upper bound). Collecting all the errors for B 1 renders B 1 ≤ n −2δ2 (exp(c||θ|| 2 p 4 T 2 ) − 1). (4.5) Consider the two remaining errors in (4.2): B 2 can be absorbed by the upper bound for B 1 , upon multiplying it by c √ T (choose 2|T + 1), in which case e T +1 · ( |θ · X| T ) T +1 = (θ · X) T +1 (T + 1)! · e T +1 · (T + 1)! T T +1 ≤ c √ T · (θ · X) T +1 (T + 1)! . For B 4 , use a similar rationale as for B 5 . Use Markov's inequality for the first component: when k ∈ N, P µ (|θ · X| > T 2e ) ≤ E µ [(θ · X) 2k ] (T /2e) 2k ≤ E ν [(θ · X) 2k ] + n −2δ2 (exp(c||θ|| 2 p 4 k 2 ) − 1) (T /2e) 2k = = ||θ|| 2k · (2k − 1)!! + n −2δ2 (exp(c||θ|| 2 p 4 k 2 ) − 1) (T /2e) 2k . Gathering all the inequalities gives for T = 2T 0 − 1, T 0 , k ∈ N, M > 0, |E µ [e iθ·X ] − E ν [e iθ·X ]| ≤ 2n −2δ2 (exp(c||θ|| 2 p 4 T 2 ) − 1) + 2 · ( 2||θ|| √ T ) T +1 + + 2||θ|| 2k · (2k − 1)!! 
+ n −2δ2 (exp(c||θ|| 2 p 4 k 2 ) − 1) (T /2e) 2k + ( 4e||θ|| √ T + M T ) M · exp(2||θ|| · √ T + M ). Since ||θ|| ≤ c 4 √ p, p 7 ≤ c 5 log n, let T = ⌊C ′ (||θ|| 2 + p)⌋, M = c ′ T 2 ||θ|| 2 +1 , k = ⌊T ⌋ for some universal c ′ , C ′ > 0 : this makes the terms above exp(C ′′ p 7 + C ′′ ||θ|| 6 p 2 − log n), exp(−c ′′ T ), exp(−c ′′ T ), exp(−c ′′ p) and renders the conclusion of the lemma. Pointwise Errors Return to the distance d n : this subsection contains the proof of Theorem 5, which entails Theorem 1. Theorem 5. Suppose f : R m → R ≥0 is continuous, supp(f ) ⊂ B m (R) := {x ∈ R m , ||x|| ≤ R}. Then there exist c 8 , c 9 > 0 universal (i.e., independent of m, p, R) such that for p ∈ N, p ≤ c 9 ( log (m+1) ǫ0+1 ) 1/7 , | f dµ − f dν| ≤ ( R 2 c 8 mp ) m/2 · ||f || ∞ . (4.6) Proof. Let c 10 , c 11 > 0 such that G(p, a) ≤ exp(−c 10 pa 2 ) for a ∈ [0, 1], p ≤ c 11 ( log n ǫ0+1 ) 1/7 (recall (4.1)). Take ǫ = ǫ(f ) > 0 such that sup ||x−y||≤ǫ √ m |f (x) − f (y)| ≤ ( R 2 mp ) m/2 · ||f || ∞ (4.7) (f is uniformly continuous), φ̃ as in Lemma 7 for c = c(m, p, R, ǫ), φ : R m → [0, ∞) given by φ(x) = i≤m φ̃(x i ), and f ǫ = f * φ ǫ : i.e., f ǫ (x) = R m f (x − y)ǫ −m φ(ǫ −1 y)dy. Since R m φ(x)dx = 1, φ ≥ 0, supp(φ) ⊂ [0, 1] m , | f dµ − f dν| ≤ | f ǫ dµ − f ǫ dν| + 2 sup x∈R m |f (x) − f ǫ (x)| ≤ ≤ | f ǫ dµ − f ǫ dν| + 2 sup ||x−y||≤ǫ √ m |f (x) − f (y)|. (4.8) Because supp(f ǫ ) ⊂ B m (R + ǫ √ m), the Fourier inversion formula holds for f ǫ and any of its partial derivatives (use the Fourier transform given by ĝ(ξ) = R m g(x)e −2πix·ξ dx). Fubini's theorem and (4.1) yield | f ǫ dµ − f ǫ dν| = | R m f̂ ǫ (θ)E µ−ν [e 2πiθ·X ]dθ| ≤ ||θ||≤2π |f̂ ǫ (θ)| · G(p, ||θ|| 2π )dθ + 2 ||θ||>2π |f̂ ǫ (θ)|dθ. The desired conclusion ensues from this last inequality, (4.8), (4.7), and ||θ||>2π |f̂ ǫ (θ)|dθ ≤ (c(m, R, ǫ)c) m · ||f || ∞ , (4.9) ||θ||≤2π |f̂ ǫ (θ)| · G(p, ||θ|| 2π )dθ ≤ ( R 2 c 12 · mp ) m/2 · ||f || ∞ , (4.10) for c 12 > 0 universal.
Proof of (4.9): For any k ∈ N, Plancherel formula yields R m |f ǫ (ξ)| 2 · ||ξ|| 2k dξ = (2π) −k |κ|=k m κ 1 , κ 2 , ... , κ m R m (f (κ) ǫ (x)) 2 dx,(4.11) where for κ = (κ 1 , κ 2 , ... , κ m ) ∈ Z m ≥0 , k = |κ| : = κ 1 + κ 2 + ... + κ m , f (κ) (y) = ∂ k f ∂y κ 1 1 ...∂y κm m . Note that f (κ) ǫ (x) = R m f (y)ǫ −m φ (κ) (ǫ −1 (x − y))dy = R m f (y)ǫ −m−|κ| i≤mφ (κi) (ǫ −1 (x i − y i ))dy = = R m f (x − zǫ)ǫ −|κ| i≤mφ (κi) (z i )dz, whereby |f (κ) ǫ (x)| ≤ ǫ −|κ| · V (B m (R)) · ||f || ∞ · c |κ| which together with supp(f ǫ ) ⊂ B(R + ǫ √ m) gives R m (f (κ) ǫ (x)) 2 dx ≤ V (B m (R + ǫ √ m)) · V 2 (B m (R)) · ǫ −2|κ| · ||f || 2 ∞ · c 2|κ| . Cauchy-Schwarz and (4.11) entail ||θ||>2π |f ǫ (θ)|dθ ≤ ||f || ∞ · (c(m, R, ǫ)c) m · a>2π a −2k+m−1 da; let k = ⌊ m 2 ⌋ + 1 to complete the justification of (4.9). Proof of (4.10): Recall ||θ||≤2π |f ǫ (θ)| · G(p, ||θ|| 2π )dθ ≤ ||θ||≤2π |f ǫ (θ)| · exp(− c 8 p 4π 2 · ||θ|| 2 )dθ. Because |f ǫ (θ)| ≤ ||f ǫ || L1 ≤ R m · V (B m (1)) · ||f ǫ || ∞ ≤ R m · V (B m (1)) · ||f || ∞ , and R m exp(−c||θ|| 2 )dθ = ( R exp(−cx 2 )dx) m = ( √ π 2 √ c ) m ≤ c −m/2 , (4.10) ensues from V (B m (1)) = π m/2 Γ( m 2 +1) . Lemma 7. For c > 0, there exists φ : R → [0, ∞), φ ∈ C ∞ , supp(φ) ⊂ [0, 1] such that 1 0 φ(x)dx = 1, 1 0 |φ (k) (x)|dx ≤ c k for all k ∈ N. Proof. For α > 0, let φ, g, G : R → R be given by φ(x) = R g(y)G(x − y)dy, g(y) = cos 2 (αy)χ y∈[0,1] , G(y) = e − 1 y(1−y) χ y∈(0,1) . Because G is smooth and compactly supported, so is φ (g is bounded) and clearly φ ≥ 0. Fix δ ∈ (0, 1 2 ) : g is smooth on [δ, 1 − δ], and for any k ∈ N, φ(x) = (g * G)(x) = R g(x − y)G(y)dy = x+δ−1 x−1 g(x − y)G(y)dy+ x−δ x+δ−1 g(x − y)G(y)dy+ x x−δ g(x − y)G(y)dy = = 1 1−δ g(y)G(x − y)dy + x−δ x+δ−1 g(x − y)G(y)dy + δ 0 g(y)G(x − y)dy, whereby φ (k) (x) = 1 1−δ g(y)G (k) (x − y)dy + x−δ x+δ−1 g (k) (x − y)G(y)dy + δ 0 g(y)G (k) (x − y)dy. 
This entails 1 0 |φ (k) (x)|dx ≤ ||g (k) [δ,1−δ] || L 1 · ||G|| L1 + ( δ 0 |g(y)|dy + 1 1−δ |g(y)|dy) · ||G (k) || L 1 . Since ||g (k) [δ,1−δ] || L 1 ≤ 2 k−1 α k (1 − 2δ), δ 0 |g(y)|dy + 1 1−δ |g(y)|dy ≤ 2δ, letting δ → 0 yields 1 0 |φ (k) (x)|dx ≤ 2 k−1 α k . The result follows by setting α = min ( c·C 4 , π 2 ) as ||φ|| L 1 = ||g|| L 1 · ||G|| L 1 = 1 2 (1 + sin (2α) 2α ) · C ≥ C 2 . Two Implications This subsection contains the proofs of Corollary 1 and Theorem 3. Proof. (Corollary 1) Since µ n,p (R m ) = ν n,p (R m ) = 1, both measures are nonnegative, and ||f l || ∞ < ∞, lim l→∞ ( f l dµ n,p − f l dν n,p ) = f dµ n,p − f dν n,p . Let φ : R m → [0, 1], φ| Bm( c 2 √ mp 2 ) = 1, φ| B c m (c2 √ mp) = 0 continuous (e.g., for K 1 ⊂ K 2 , K 1 = K 2 , K 1 closed, φ(x) = d(x, K c 2 ) d(x, K 1 ) + d(x, K c 2 ) (4.12) is continuous with φ(R m ) ⊂ [0, 1], φ(K 1 ) = 1, φ(K c 2 ) = 0) . For l sufficiently large, ||f l || ∞ ≤ 2, and thus φf l 2 ∈ R n,p , whereby Theorem 2 entails | φf l dµ n,p − φf l dν n,p | ≤ 2c m 1 + 4C(ǫ 0 )n −ǫ0/8 ; (see (4.12)), φ : R m → [0, 1] smooth, with R m φ(x)dx = 1, and G k : (the integral with respect to ν n,p is zero because this measure is absolutely continuous with respect to the Lebesgue measure η), giving the first part of the theorem. The second part follows because then | f l dµ n,p − f l dν n,p | ≤ | φf l dµ n,p − φf l dν n,p | + (B( c 2 √ mp 2 )) c |f l |dµ n,p + (B( c 2 √ mp 2 )) c |f l |dν n,p ≤ ≤ 2c m 1 + 4C(ǫ 0 )n −ǫ0/8 + P µn,p (||X|| > c 2 √ mp 2 ) + P νn,p (||X|| > c 2 √ mp 2 ) ≤ 2c m 1 + 4C(ǫ 0 )n −ǫ0/8 + 3 · 2 + 3 m · (R m → R, G k = F k • φ 1/k ; for x ∈ R m , G k (x) = R m F k (y)k −m φ(k −1 (x − y))dy,f χ F =1 dν n,p = f χ F =1 dν n = 0, | f χ F =0 dν n,p − f χ F =0 dν n | ≤ C p ,| exp(−x 2 ) − 1 a exp(− x 2 a 2 )| = | exp(−x 2 ) · (1 − exp(− x 2 (1 − a 2 ) a 2 )) + (1 − a) · 1 a exp(− x 2 a 2 )| ≤ ≤ exp(−x 2 ) · x 2 |1 − a 2 | a 2 + |1 − a| · 1 a exp(− x 2 a 2 ) . 
Corollary 1 yields the desired convergence because f χ F =1 = f F is the pointwise limit of f (1 − G k ). Haar Measures Since for 1 ≤ i ≤ n, ||( 1 √ n Z) i || 2 = 1 + 1 n 1≤i≤n (z 2 ij − 1), Bernstein's inequality (theorem 2.8.1 in Vershynin [38]) entails P( 1 n | 1≤i≤n (z 2 ij − 1)| ≥ t) ≤ 2 exp(−C min ( nt 2 ||z 2 11 − 1|| 2 ψ1 , nt ||z 2 11 − 1|| ψ1 )), whereby for some C, T > 0, and all t ≥ T, P( 1 n max 1≤i≤n | 1≤i≤n (z 2 ij − 1)| ≥ t log n n ) ≤ n 1−Ct . Choose α = α n = ( 2 C + T + 1) log n n , for which Z∈(O(n))α dZ ≥ 1 − n −1 . A union bound entails for t ≥ T ′ = 2 · ( 2 C + T + 1), P(U ∈ O(n) : max 1≤i,j≤n |u ij | ≥ t · log n n ) ≤ n 2 P( 1 √ n max 1≤i≤n |z 1i | ≥ t 2 · log n n ) ≤ n 2 (1 − exp(−cn 1−c ′ t 2 )) as P(max 1≤i≤n |z 1i | ≤ t √ log n) ≥ (1 − exp(− t 2 log n 4 )) n from 1 √ 2π ∞ a exp(− y 2 2 )dy ≤ 1 √ 2π · exp(− a 2 2 ) ∞ 0 exp(−ay)dy = 1 a √ 2π · exp(− a 2 2 ) ≤ 1 √ 2π · exp(− a 2 4 ). The quantum unique ergodicity version (1.3) proved by Bourgade and Yau [10] and the strong version (1.4) by Cipolloni, Erdös and Schröder [12] can be justified by a similar rationale (Gaussianity allows to reduce (1.4) to M diagonal with trace zero and max 1≤i≤n |M ii | ≤ C; for the dot products u T i u j , use Cauchy-Schwarz to deal with the errors). Trace Expectations This section constains the proofs of Lemma 1 and Theorem 4 (subsections 5.1 and 5.2, respectively). Proof of Lemma 1 Consider the power series S(X) = k≥1 β(k, γ)X k , whose radius of convergence is 1 b(γ) = (1 + √ γ) −2 (β(k, γ) is the k th moment of a probability distribution supported on [a(γ), b(γ)], with positive mass on [b(γ) − ǫ, b(γ)] for all ǫ > 0). Then S 2 (X) = r≥2 ( 1≤k≤r−1 β(k, γ)β(r − k, γ))X r , whereby (2.8) is equivalent to S 2 (X) = r≥2 β(r + 1, γ) − (1 + γ)β(r, γ) γ · X r = 1 γX (S(X) − X − X 2 (γ + 1)) − 1 + γ γ (S(X) − X) or γXS 2 (X) + ((γ + 1)X − 1)S(X) + X = 0. for z ∈ C + , and S, m being analytic in {z ∈ C, ||z|| ≤ (1 + √ γ) −2 } ∩ C + (lemma 3.11 in [2]). 
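The proof of Lemma 1 above rests on the recursion (2.8) being equivalent to the quadratic γXS²(X) + ((γ+1)X − 1)S(X) + X = 0. Assuming (a hypothesis of this check, since the text only specifies β(k, γ) as the moments of a law supported on [a(γ), b(γ)]) that β(k, γ) is the k-th Marchenko-Pastur moment Σ_{r<k} γ^r/(r+1) · C(k,r) C(k−1,r), the recursion can be verified in exact arithmetic:

```python
# Exact-arithmetic check of the recursion (2.8),
#   sum_{k=1}^{r-1} beta(k) beta(r-k) = (beta(r+1) - (1+gamma) beta(r)) / gamma,
# under the (hedged) assumption that beta(k, gamma) is the k-th
# Marchenko-Pastur moment.
from fractions import Fraction
from math import comb

def beta(k: int, gamma: Fraction) -> Fraction:
    return sum(Fraction(gamma**r, r + 1) * comb(k, r) * comb(k - 1, r)
               for r in range(k))

for gamma in (Fraction(1), Fraction(1, 2), Fraction(3)):
    for r in range(2, 8):
        lhs = sum(beta(k, gamma) * beta(r - k, gamma) for k in range(1, r))
        rhs = (beta(r + 1, gamma) - (1 + gamma) * beta(r, gamma)) / gamma
        assert lhs == rhs
```

For γ = 1 these moments reduce to the Catalan numbers 1, 2, 5, 14, …, consistent with the combinatorics of subsection 5.2.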
Proof of Theorem 4 E[x i0k0 x i1k0 x i1k1 x i2k1 ...x ip−1kp−1 x i0kp−1 ]. By convention, i l ∈ {1, 2, ... , n}, k l ∈ {1, 2, ... , N } : denote by G(p, n, N ) the set of directed graphs with edges (i 0 , k 0 ), (i 1 , k 0 ), (i 1 , k 1 ), (i 2 , k 1 ), ... , (i p−1 , k p−1 ), (i 0 , k p−1 ), and vertices i l ∈ {1, 2, ... , n}, k l ∈ {1, 2, ... , N }. Let g = (i, j), i = (i 0 , i 1 , ... , i p−1 ), j = (j 0 , j 1 , ... , j p−1 ) be a generic element of G(p, n, N ), and x g := x i0k0 x i1k0 x i1k1 x i2k1 ...x ip−1kp−1 x i0kp−1 . Consider the map taking graphs in G(p, n, N ) and reversing the orientation of the even-indexed edges: W : G(p, n, N ) → D(p, n, N ), for D(p, n, N ) the set of directed graphs with edges (i 0 , k 0 ), (k 0 , i 1 ), (i 1 , k 1 ), (k 1 , i 2 ), ... , (i p−1 , k p−1 ), (k p−1 , i 0 ), and vertices i l ∈ {1, 2, ... , n}, k l ∈ {1, 2, ... , N }. Take the expectations induced by the elements of D(p, n, N ) : symmetry yields only those with edges of even multiplicity have non-vanishing contributions, a property invariant under this mapping. Call the subset of such cycles E(p, n, N ), which in turn is contained in E(p, n + N ), where E(l, m) denotes the set of even cycles of size 2l and vertices in {1, 2, ... , m}. Return to the image of G(p, n, N ), and consider the dominant graphs in (1.6). Since such graphs g have edges solely of even multiplicity, their images under W are elements of E(n + N ). Note ]. If W(x g ) ∈ C(p), then it has p pairwise distinct undirected edges, each of multiplicity 2; using the recursion 2. in section 3 that C(l) satisfies, induction on l yields no element of it contains two copies of a directed edge, entailing equality occurs in (5.2). It suffices to consider C(p, n, N ) := C(p) ∩ W(D(p, n, N )), i.e., the simple even cycles whose odd numbered vertices belong to {1, 2, ... , N }, and its even numbered to {1, 2, ... 
, n} : all the expectations of such elements is 1, and take β(p, n, N ) = 0≤i≤p−1 E[x g ] ≤ E[W(x g )],(5.|C i (p)| · n i+1 N p−i , where C j (p) = {i ∈ C(p), j + 1 = |{i 2k , 0 ≤ k ≤ p − 1}|, p − j = |{i 2k+1 , 0 ≤ k ≤ p − 1}|}.j 1 j 2 ...j k ≤ 1≤k≤j n j−k · j 2k k! ≤ n j (exp( j 2 n )−1). The recurrent description 2. in section 3 of C(p) gives a partition of C j (p) into three subsets, implying β(p+1, n, N ) = nN p+1 +n p+1 N + Let β(p, X) = 0≤i≤p−1 |C i (p)| · X i+1 , for which this, upon division by N p+2 , entails when X = n/N, β(p + 1, X) = 1 + X p+1 + 1≤i≤p−1 (|C i (p)| + |C i−1 (p)| + 1≤k≤i,1≤a≤p−1 |C k (a)| · |C i−k−1 (p − a)|) · X i+1 = = X p+1 + β(p, X) + X(β(p, X) − X p ) + 1≤i≤p−1 1≤k≤i,1≤a≤p−1 |C k (a)| · |C i−k−1 (p − a)| · X i+1 or β(p + 1, X) = (1 + X)β(p, X) + Theorem 1 . 1For ǫ 0 > 0 fixed, let A = 1 √ n (a ij ) 1≤i,j≤n ∈ R n×n , A = A T , (a ij ) 1≤i≤j≤n i.i.d. with a 11 d = −a 11 , E[a 2 11 ] = 1, , E[a 8+ǫ0 11 ] ≤ C(ǫ 0 ),(W1)and e 1 , e 2 , ... , e n the standard basis in R n . Then for p ∈ N, i, j ∈ {1, ... , n}, i = j, p = n o(1) , as n → ∞, 2|p · e T i A p e j ⇒ N (0, 1),where c : N × [1, ∞) → [1, ∞), Corollary 1 . 1Under the assumptions in Theorem 1, there exists c 3 > 0 such that for any Haar measures (both left and right) are unique up to constant factors, and there exists one both right and left invariant (e.g., F (M ) = tr(M 2 )), (2.4) and (2.5) provide the Haar measures on O(n) under growth conditions on q relative to n and the moments of the entries of B (e.g., q = o(n 1/2 ), o(1) = O(q 2 n −1 ) when B has i.i.d. entries with b 11 d = −b 11 , b 11 subgaussian). Another version, proved in detail in subsection 2.1 of 10) where S(q, 1) := {i = (1, i 1 , i 2 , ... , i q−1 , 1), i 1 , i 2 , ... , i q−1 ∈ {1, 2, ... , n}}. 
By independence, the contribution of a pair (i, j) does not vanish if and only if i, j share some edge, with their union being an even cycle of length 2p : all the pairs considered next are implicitly assumed to satisfy this property. Lemma 2 . 2Suppose B = (b ij ) 1≤i,j≤n ∈ R n×n , B = B T , (b ij ) 1≤i≤j≤n i.i.d. with b 11 d = −b 11 , E[b 2 11 ] = 1, E[b 2l 11 ] ≤ L(n)n δ(2l−4) , P(|b 11 | ≤ n δ ) = 1 for 2 ≤ l ≤ p, δ > 0, and L : N → [1, ∞), L(n) < n 2δ .For u, v ∈ {1, 2, ... , n}, u = v, an even tuple (u k v k ) 1≤k≤L , u k , v k ∈ {u, v}, p, L ∈ N, p = 1≤k≤L l i , (l i ) 1≤k≤L ⊂ N, let P consist of all tuples of paths ((u k , i k1 , ... , i k(l k −1) , v k )) 1≤k≤L := (i k ) 1≤k≤L , i kh ∈ {1, 2, ... , n}. Then for p ≤ n δ 1 4(C+1)·(L(n)+1) , n ≥ n(δ), T1) from (3.1): if any of the indicator functions in (3.20) is 0, then one factor in the expectations always vanishes as the cycles underlying it are odd; consider the remaining situations; the first T 1 factors yield at most E[b 2l0 11 ] by Hölder inequality and l → E[b 2l 11 ] being non-decreasing (E[b 2 11 ] = 1), and lastly, by (3.1), the remaining T 0 − T 1 give at most n T 1 <k≤T 0 (|L k |/2−d(L k )) · D 0 · (1 + CL(n)p 2 n −2δ1 ) T0−T1 ≤ n (p−L+l)/2 · D 0 · (1 + CL(n)p 2 n −2δ1 ) L q,(pj ) 1≤j≤q ): 1≤j≤q pj =p 1≤j≤t C pj /2 ≤ C p/2 employing C(a 1 ) × C(a 2 ) × ... × C(a k ) ⊂ C( 1≤r≤k a r )), and lastly, when 2L − l increases, then it does so by at most 2 in (ii), in which case there is an additional factor of n −1 because p − L + l decreases by at least 3 from l − L = −(2L − l) + L (l can decrease in (iii) as well: however, it drops by 1 only when the copies of xy are adjacent to the endpoints of ρ 1 , ρ 2 , reducing L by 1 as well).Finally, take ( * ) : the induction hypothesis for the new configuration ρ yields the bound Lemma 4 . 4Under the assumptions in Lemma 3, impose the elements of P have at least t 0 paths with u k = v k , l k > 1 share some edge with other paths. Then Lemma 6 . 6Let θ ∈ R m . 
Under the assumptions of Theorem 2, there exist c 4 , c 5 , c 6 , c 7 > 0 such that if p ∈ N, 2|p, ||θ|| ≤ c 7 √ p, p 7 ≤ c 5 · log n ǫ0+1 , then |E µ [e iθ·X ] − E ν [e iθ·X ]| ≤ exp(c 6 (p 7 (ǫ 0 + 1) − log n)) + exp(−c 7 p) := G(p, ||θ||).(4.1) EFχ µn,p [||X|| 2 ] = m(1 + o(1)), V ar µn,p (||X|| 2 ) = 3m(1 + o(1)), consequences of the proof of Lemma 6. Proof. (Theorem 3) A ∈ Sym d (n) if and only if F (A) = 1, where (λi(M) =λi+1(M) . The set of discontinuities of F, D := {M : F (M ) = 0}, is a null set under the Lebesgue measure η on R m because η is absolutely continuous with respect to the distribution of a Wigner matrix with i.i.d. entries that are centered standard normal random variables, and this assigns measure zero to sets {M ∈ R n×n , M = M T , λ i (M ) = λ i+1 (M )} from the change of variables taking the entries of M to its eigenvalues and eigenvectors: for k ∈ N, let F k : R m → [0, 1] be continuous with F k (D) = 1, F k ({M : min 1≤i≤n−1 (λ i (M ) − λ i+1 (M )) > 1 k 2 }) = 0 |G k (x) − ( 1 − 1F i (M ) − λ i+1 (M ))) ≤ 1 k } = c k → 0as k → ∞, using η is translation invariant. Corollary 1 entailsP(A ∈ Sym d (n)) ≤ (1 − F )dµ n,p = | (1 − F )dµ n,p − (1 − F )dν n,p | ≤ c m 1 + 2C(ǫ 0 )n f γ (x)dx is the Stieltjes transform of f γ , entailing (5.1) as γzm 2 (z) − (1 − γ − z)m(z) + 1 = 0 This constitutes a good estimate of the quantity of interest because for p = o( 1≤i≤p ( 1≤i≤p|C i (p)| + |C i−1 (p)| + 1≤k≤i,1≤a≤p−1 |C k (a)| · |C i−k−1 (p − a)|) · n i+1 N p+1−i . e.g., F nonzero and continuous), and letting for S ⊂ O n measurable,S dμ := Z∈Sα dZ, (2.4) Fix p : (1.6) yields E[tr(A p )] = (i0,k0,i1,k1, ... ,ip−1,kp−1) 2 ) 2a consequence of independence and Hölder's inequality, E[x 2q 11 ] · E[x 2l 11 ] ≤ E[x 2q+2l 11 Poisson convergence for the largest eigenvalues of heavy tailed random matrices. A Auffinger, G Ben-Arous, S Péché, Ann. Inst. H. Poincaré Probab. Statist. 453610A. Auffinger, G. Ben-Arous, and S. Péché. 
[]
[ "Fluid surface self-propulsion via confined Hocking radiation fields", "Fluid surface self-propulsion via confined Hocking radiation fields" ]
[ "Steven W Tarr \nSchool of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA\n", "Joseph S Brunner \nSchool of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA\n", "Daniel Soto \nSchool of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA\n", "Daniel I Goldman \nSchool of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA\n" ]
[ "School of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA", "School of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA", "School of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA", "School of Physics\nGeorgia Institute of Technology\n837 State Street30332AtlantaGeorgiaUSA" ]
[]
Diverse physical systems, from maritime to quantum mechanical, experience forces mediated by asymmetries in surrounding fields. Here we discover a new phenomenon in which a vertically oscillating, floating robot can be attracted to or repelled from a boundary via a complex interference of generated and reflected gravity-capillary (GC) waves, whose dynamics were studied by Hocking in the 1980s. Force measurements on the robot reveal that attraction increases as oscillation frequency increases or particle-boundary separation decreases. Reconstruction of Hocking wave dynamics rationalizes a field-based asymmetry in wave amplitudes near boundaries that drives robot motion.
null
[ "https://export.arxiv.org/pdf/2305.04390v1.pdf" ]
258,557,277
2305.04390
2052639e37d7338c99996a55ba6d25d7dbb00827
Fluid surface self-propulsion via confined Hocking radiation fields

Steven W. Tarr, Joseph S. Brunner, Daniel Soto, and Daniel I. Goldman
School of Physics, Georgia Institute of Technology, 837 State Street, Atlanta, Georgia 30332, USA
(Dated: May 9, 2023)
Keywords: Gravity-capillary waves; Boundary effect; Field-mediated locomotion; Robophysics

Locomotion via internal driving sufficient to overcome dissipation is well-studied in diverse systems [1-10]. Inertial self-propulsion requires asymmetric momentum generation and is typically produced using changes in shape or mass distribution in the desired direction of motion. For example, boats can use propellers to expel fluid [11] while swimmers can propel via traveling waves of body bending [1-5]. In both cases, Newton's Third Law ensures locomotion. At fluid surfaces, movement is typically accompanied by wave generation [5-9, 12-14].
Such wave fields are an important potential source of locomotor benefits, including asymmetric self-propulsion in animals and robots [6-8], wave-drag reduction of surface-swimming collectives [9], and animal signaling [13]. The physics of coupled waves and sources has been carefully studied and exploited in the transport of a fluid droplet; interactions with its own wave field spontaneously produce asymmetric locomotion despite no imposed directional bias [15-18]. An interesting question is whether an entity that produces a spatially symmetric wave field will move on its own. The answer, of course, must be no for a sufficiently free system. However, in physically confined systems, emergent wave field asymmetries can produce nonzero net radiation forces on boundaries; such forces are observed across scales, from the quantum mechanical vacuum [20-22] to fluids [16-18, 23-28]. In quantum mechanics, the Casimir effect demonstrates that nearby neutral plates confine and modify zero-point-energy wave fields such that they attract one another [20-22]. In driven fluid systems with fluctuating surface waves, boundaries generate an analogous downsampling of wave modes called the "maritime Casimir effect" [23,24]. The downsampled wave modes reduce the radiation pressure between objects at the fluid surface and can be observed as reduced amplitude waves [23-26]. In this Letter, we discover that locomotion can be induced in a free-floating, oscillating robophysical system that does not directly generate an asymmetric wave field. Symmetrically propagating waves undergo a complex interference when reflected at a boundary, breaking symmetry and generating propulsive radiation forces. In lab experiments, we probe these dynamics with a custom-developed robot and map radiation forces as we vary both oscillation frequency and confinement distance. We find that confinement on one side leads to a reduction in wave field amplitude.
The dependence of the consequent radiation force on oscillation frequency can be quantitatively explained by theory for gravity-capillary (GC) waves developed by Hocking in the 1980s [29-31].

Apparatus & fundamental behaviors - The robotic boat (total mass m = 368 g) consists of a circularly symmetric hull of radius R_B = 6 cm, a custom circuit board, two fan motors (uxcell Coreless Micro Motor 412), and an eccentric motor (Vybronics Inc. Cylindrical Vibration Motor VJQ24-35K270B). The boat's hull was 3D-printed in Polymaker PolyLite™ PLA and waterproofed with marine epoxy. All electronics and batteries were mounted onboard, and additional weights were added such that a free-floating boat at rest is level to within 1°. We mounted the eccentric motor beneath the electronics; when enabled, the motor vibrates the boat with power-dependent frequency ω, primarily along the fore-aft axis (roll) with minimal vertical motion or induced surface currents. Beyond ω = 20 Hz, the vibration tends toward roll amplitude 0.15° ± 0.02°, pitch (left-right axis) amplitude 0.05° ± 0.01°, and vertical oscillation amplitude 0.09 ± 0.02 mm (see SI). The result is a left-right and fore-aft symmetric, radially emanated (wavevector k = k r̂), monochromatic wave train of wavelength λ(ω) traveling along the fluid surface (Fig. 1A-B, Movie S1). Due to the symmetries of the emitted waves, a boat placed far from boundaries experiences no net radiation force F_W. Upon breaking symmetry by approaching a boundary, F_W becomes nonzero, and the boat self-propels (Figs. 1D1-2, Movie S2). We observe both repulsive and attractive behaviors (Fig. 1E), with repulsion occurring more weakly such that it is often indistinguishable from noise. To probe these dynamics, we placed the boat near a planar boundary extending from the floor above the water (61 cm long, 30 cm tall, vertical to within 1°), varied both ω and initial hull-boundary distance d⊥_0, and allowed the boat to move freely in response to F_W.
Though we were unable to prescribe wave amplitude A independently from ω, we expect it to affect F_W in accord with established theory on the energy of surface waves [32,33]. We chose a wall with length ≫ R_B, λ such that we may treat our system as quasi-1D and study the boat's motion along the axis normal to the wall. Any observed parallel motion had no clear bias. For all experiments, we programmed a motor controller to ramp the eccentric motor up linearly to the target frequency over 10 s to minimize transients. We recorded images of trials with a Logitech C920 webcam at 30 FPS and tracked the boat's lateral motion with color-thresholding code in MATLAB. We extracted the boat's perpendicular acceleration d̈⊥ by fitting a quadratic equation to the position-time data prior to any drag-induced inflection point. Below a threshold d⊥_T(ω), we observed an increasingly attractive force with increasing ω and decreasing d⊥_0 (Figs. 2A-C). Near d⊥_T, a lightly repulsive F_W emerged. Above d⊥_T, the boat was considered to be "far from boundaries"; the wave field symmetry was restored and the boat experienced a near-zero F_W.

Direct measurement of wave force - Having observed O(d̈⊥) ≤ 10² µm/s² across all tested initial conditions, we sought to isolate F_W from any transient effects (e.g., viscous [34] and wave [5,12] drag, inertia [35]) that could dampen the system's evolution and result in such a minuscule acceleration. We investigated F_W alone by restricting the boat's motion to that of a simple pendulum without impeding vibration (Fig. 3A), a method similarly employed to quantify water-wave-analog Casimir forces [25]. The boat was affixed along its central axis 1.3 cm above the waterline to a thin fishing line of length L = 1.4 m via a bowline knot. We calibrated the line such that when the pendulum angle θ was zero, the tension force F_T too was zero.
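For small deflections, this pendulum arrangement converts a measured steady-state displacement into a wave force through the balance F_W = (m − ρV) g ∆x/L, so a millimeter-scale deflection maps to a micronewton-scale force. A minimal sketch of that conversion (the displaced volume and deflection below are illustrative placeholders, not measured values):

```python
# Estimate the lateral wave force from the pendulum deflection,
# F_W = (m - rho*V) * g * dx / L  (small-angle force balance).
# The displaced volume V and deflection dx are illustrative, not measured.

def wave_force(m, rho, V, dx, L, g=9.81):
    """Net horizontal force (N) balancing line tension at steady state."""
    return (m - rho * V) * g * dx / L

# Example: a 368 g boat displacing ~365 mL, deflected 2 mm on a 1.4 m line.
F = wave_force(m=0.368, rho=1000.0, V=365e-6, dx=2e-3, L=1.4)
print(f"F_W ~ {F * 1e6:.0f} uN")  # prints 42
```

The (m − ρV) factor is the boat's weight net of buoyancy, which sets the pendulum's restoring force; as the boat nearly floats, this difference is small, which is why millimeter deflections resolve micronewton forces.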
For nonzero F_W, the boat's resultant displacement ∆x caused F_T to increase until reaching force balance (Fig. 3B). We measured ∆x for a variety of ω (0-42 Hz) and d⊥_0 (1.9-3.8 cm) and observed typical values within 0-3 mm. Since L ≫ ∆x, we assume the boat undergoes negligible vertical displacement. By measuring ∆x in steady state, we can estimate the perpendicular wave force F_W = (m − ρV) g ∆x/L, where ρ is the fluid density and V is the liquid volume displaced by the boat. We plotted F_W as a function of ω and the steady-state hull-wall distance d⊥_ss for 195 trials alongside a few fixed-parameter slices (Figs. 3C-E). As expected, the qualitative behavior of F_W closely resembles the acceleration heatmap from Fig. 2C, with increasing attraction below, light repulsion near, and near-zero effects above d⊥_T(ω). Despite the removal of transient effects, attractive and repulsive forces remained small, respectively demonstrating O(F_W) ∈ [10¹, 10²] and [10⁰, 10¹] µN. We note that most measurements with d⊥_ss < d⊥_T fall outside the experimental noise floor 2.9 ± 13.1 µN.

Surface wave measurements - To better understand the role of the emanated waves in generating a locomotive force, we employed the synthetic [36] Schlieren visualization technique Fast Checkerboard Demodulation [19] (FCD, see SI) to obtain quantitative measurements of the wave field (Figs. 1C, 4A-B, Movie S3). For optimal visualization quality, we minimized the water's depth to h_rest = 5 cm for all experiments. Imaging was performed with a high-speed camera (AOS X-PRI) at 500 FPS when the system had reached steady state and processed using custom MATLAB code derived from Refs. [19,37]. We observed the wave train to possess A ∝ r^(−1/2) in accord with established surface wave theory and follow the known dispersion relation for GC waves:

ω² = (g k + γ k³/ρ) tanh(h_rest k),    (1)
FCD analysis of steady-state waves between the boat and wall reveals a net field propagating outward from the boat (Fig. 4B). These waves share ω with those emitted on the boat's far side and far from boundaries, but possess reduced A regardless of ω (Fig. 4C). We surmise that when the boat is sufficiently close to the wall, reflected waves return with non-negligible energy and modulate the free surface height at the hull. This modulation impedes concurrent wave generation on the side nearest the wall while minimally affecting the opposite side. Consequently, the steady-state amplitude between the hull and wall is reduced. We liken these dynamics to the reductions in height when jumping off a deformable medium [38] or pumping a swing [39] with poor timing. A hydrodynamic model -Armed with an understanding of the wave fields both near and far from boundaries, we motivate the boat's locomotive behavior as it relates to d ⊥ (Fig. 5A). The existence of a radiative force incident on a wave emitter and proportional to square amplitude is a classical result [32,33] observed in many systems with asymmetric wave generation [8,24,25]. When the boat is far from boundaries, the generated waves are spatially symmetric, leading to a net-zero F W . Near boundaries, reflected waves induce an amplitude imbalance, resulting in a finite F W . We postulate that at a perfect d ⊥ T , the reflected wave will have energy insufficient to modulate the amplitude of emission but sufficient to lightly force the boat away from the wall. Though the amplitude dynamics successfully describe the boat's attraction and motionlessness for small and large d ⊥ respectively, they provide insufficient reasoning for F W 's observed frequency dependence. We hypothesize that the unique properties of GC waves are relevant to these complex hydrodynamics. 
Work by Hocking on the interactions of GC waves with hard surfaces emphasizes the importance of wavenumber (alternatively frequency) to radiation and reflection [29-31]. Upon reflecting off a rigid boundary, GC waves dissipate substantial energy through complex contact-line dynamics. Within the accessible wavenumber range for our boat, the reflection coefficient R < 0.22, with R ∝ k³ and k^0.85 for k ≲ 7 m⁻¹ and k ≳ 20 m⁻¹ respectively [29]. Coupled with the aforementioned amplitude modulation, this wavenumber dependence suggests that higher frequency waves will have sufficient energy to induce attraction at farther hull-wall distances. Further, GC waves radiated by a vertically oscillating body have energy given by the following [31]:

E_R = (π/2) (1 + 3γk²/(ρg)) A².    (2)

Considering the boat as two back-to-back, semi-circular wave emitters, this expression implies the following radiation force incident upon one side:

|F_R| = E_R k/(4π) = (k/8 + 3γk³/(8ρg)) A².    (3)

The factor of 4π accounts for projecting the wave momentum normal to the semi-circular boundary (see, e.g., Ref. [24] for a more detailed derivation). For our boat, which has non-trivial A(ω), the expected F_R follows a power law with exponent between 3 and 4 (Fig. 5B Inset). We reiterate that the amplitude measurements were taken within the attractive regime, and so we shift the origin of our power law to the observed threshold frequency for attraction. The difference between F_R on either side of the boat yields a predicted F_W; this prediction matches well with experimental results without any fitting parameters (Fig. 5B). We summarize our postulated model of the boat's boundary-driven locomotion in four regimes. In all cases, when the boat first emits waves, the field is symmetric, leading to a net-zero radiation force on the boat. When d⊥_0 ≫ d⊥_T, the waves reflected off the boundary return to the boat with negligible energy compared to emission.
Consequently, the boat experiences a force negligibly close to zero. When d⊥_0 ≪ d⊥_T, the reflected waves hinder wave generation between the boat and wall, leading to an observed amplitude reduction. Meanwhile, waves on the far side remain unchanged; this broken symmetry yields a net force appearing as a boat-wall attraction. When approaching d⊥_T from above, reflected waves have insufficient amplitude to affect wave generation but still carry non-negligible momentum. Symmetry is again broken and the boat experiences a slight repulsive force. Since the energy of a reflected GC wave increases with k, d⊥_T also increases with k (and consequently ω). Should the original choice of d⊥_0 be retained while increasing ω, the reflected waves will then have sufficient energy to affect wave generation, causing the same result as when d⊥_0 ≪ d⊥_T. Given the importance of the seldom-studied reflection and generation properties of GC waves to the boat's locomotion, we refer to these asymmetric wave fields as "Hocking radiation fields." We note that our model can only explain the boat's steady-state position using wave amplitudes measured in that state. Additionally, Hocking's theories on GC waves require both the emitter and reflecting boundary to be stationary on average. A much harder problem then is computing the system dynamics as the wave field updates; how would one compute the position versus time of the attracting boat in a dynamic environment? Indeed, we find the boat exhibits complex attractive modes like "towing" in response to a moving boundary (see SI, Movie S5). These dynamical experiments will help characterize transient locomotive states owed to Hocking radiation in stationary and active environments.

Conclusion - In this Letter, we revealed how a fluid surface swimmer can use "Hocking radiation" to locomote without the need for a traditional propulsion mechanism and made the first direct measurement of the corresponding force.
In doing so, we add to the growing list of transport phenomena that employ surface wave fields both for propulsion and nonlocal interaction with fellow substrate occupants [6, 8, 9, 13, 14, 16-18, 23-25, 40, 41]. By symmetrically generating waves near a boundary, our boat takes advantage of the reflection dynamics unique to GC waves to self-propel with frequency- and distance-dependent locomotive modes. Though the necessity of a nearby boundary condition seemingly imposes restrictions on the relevance or usefulness of Hocking radiation, manipulation of oscillation frequency and profile in response to transient conditions may prove valuable in overcoming such limitations.

We thank Enes Aydin for designing and constructing the tank apparatus and for helping during the boat design phase. We thank Ryan Hirsh for assisting with pendulum experiments. We thank Paul Umbanhowar for helpful comments and discussion. This work was funded by the Army Research Office GR00008673 (D.I.G.).

SUPPLEMENTARY INFORMATION

Details of boat vibration - The robot boat vibrates in response to the oscillation of an internally mounted eccentric motor (Vybronics Inc. Cylindrical Vibration Motor VJQ24-35K270B). High-speed measurements of the motor in motion reveal a consistent frequency ω_0 in response to power input P_0. We observe two response modes: P_0 ∝ ω_0² and ω_0³ for ω_0 ≲ 30 Hz and ω_0 ≳ 30 Hz respectively (Fig. S1A). The boat resonates at the crossover between modes, which appears experimentally as both the maximum in A-ω space (see main text) and the only significant deviation from linearity in ω-ω_0 space (Fig. S1B). We cast the boat's vibratory response in 1D using the rotational analog of Newton's Second Law. By suspending the boat in midair on a string, we eliminate the need to model the complex feedback mechanisms owed to surface wave generation. Consequently, the relevant torques on the boat hull are produced by gravity, air drag, and the eccentric motor, which we model as a rotating unbalance [42]:

I θ̈ = m_0 ℓ_0 ω_0² R_0 sin(ω_0 t − θ) − m g R_B sin θ − (π/10) ρ c_D R_B⁵ sgn(θ̇) θ̇²,    (4)

where I, m, R_B, and c_D are the boat's moment of inertia, mass, radius, and drag coefficient respectively; m_0 and ℓ_0 are the rotating unbalance's mass and radius respectively; R_0 is the distance between the motor shaft and boat hull; and ρ is the density of air. We simulate Eq. (4) with an ordinary differential equation solver in MATLAB and find an expected response frequency ω = 0.99 ω_0 (Fig. S1B). However, physical measurement of the boat's in-air vibration reveals a reduced response driven at 69% the motor's frequency. Placed in water, the boat's vibration drops further to 60%, with the surface wave frequency nearby at 57%. We attribute these discrepancies to two sources of damping, namely the motor's non-idealized mounting to the boat and the coupling between the fluid surface and the hull. To better understand the boat's vibratory response in 3D, we tracked the oscillation with multiple high-speed cameras (OptiTrack) at 360 FPS (Fig. S2). Unlike many established systems that employ periodic heaving (vertical) motions to generate surface waves [14,31,43-45], our boat undergoes minimal vertical displacement. Instead, the eccentric motor induces oscillations primarily along the fore-aft (roll) axis. Still, the boat's overall vibrational motion is minuscule, with a maximum roll amplitude φ_R = 0.20° ± 0.02° corresponding to a vertical amplitude of 0.21 ± 0.02 mm.

Surface currents - As a further check on our hypothesis that the boat's attraction toward and repulsion from boundaries is the result of surface waves, we investigated the possible existence of surface currents induced by the boat's wave generation.
After fixing the boat's lateral position such that vibration was unimpeded, we seeded the fluid surface with lycopodium powder (CAS number: 8023-70-9) for use with open-source Particle Image Velocimetry (PIV) software in MATLAB [46]. Results are shown in Movie S4. For ω < 19 Hz, no surface currents are produced, and seed particles trace circular paths in the vertical plane as they bob over the waves (Fig. S3A). For 19 Hz ≤ ω ≤ 29 Hz, vortices emerge as seed particles are drawn in at the fore and aft, circulate along the boat perimeter, and eject in jets at the left and right sides with maximum velocity v ≈ 8 mm/s (Fig. S3B). We rationalize the ingress and egress positions as the locations with the weakest and strongest vibratory motion respectively, a result of the orientation of the eccentric motor driving the oscillation. For ω > 29 Hz, a few smaller vortices with maximum velocity v ≈ 1 mm/s emerge inconsistently around the boat perimeter (Fig. S3C). All three regimes persist when the boat is brought near a boundary. The frequency dependence of these distinct surface current modes does not correlate with that of Hocking radiation as described in the main text. Furthermore, when orienting the boat such that the primary surface jets expel toward a nearby boundary, the boat still experiences an attractive force where mechanical intuition suggests a repulsion should emerge. For these reasons, we rule out surface currents as a probable cause for Hocking radiation and reaffirm our surface wave hypothesis.

Synthetic Schlieren imaging [47] - Before starting the experiments, we captured a reference image of the background pattern (checkerboard) as seen through a still free surface with the high-speed camera. During experiments, surface waves appeared as a distortion field u applied to the checkerboard.
We compared the spatial Fourier transform of the distorted checkerboard to that of the reference image to find how the carrier peaks were modulated. When the free-surface curvature had a focal length greater than the distance to the background pattern (i.e., the invertibility condition is met [37]), we filtered the modulated signal to extract u(t, r), which is proportional to the gradient of the free-surface height. Moisy and colleagues quantify this invertibility condition as follows:

h_p < h_{p,c} = λ² / (4π² α η_0), (5)

where h_p is the effective surface-pattern distance, h_{p,c} is the free-surface focal length, λ is the wavelength, α is the ratio of indices of refraction given by 1 − n_air/n_fluid, and η_0 is the wave amplitude [37]. Further, we adapted the open-source code in Ref. [19] for use with our apparatus, incorporating a scale factor to account for additional interfaces between the background pattern and the fluid free surface [37]. Quantitatively identifying where the invertibility condition fails requires knowledge of wave properties that are not known a priori and cannot be reliably obtained from the reconstruction itself. However, we note that failed surface-height reconstructions are typically highly discontinuous, both with themselves and with successfully reconstructed surface heights. We used this characteristic to estimate regions where the reconstruction failed per video frame with an autocorrelation method described by the following steps:

1. Perform a 2D spatially-moving variance with a square kernel given by the 8-way nearest pixel neighbors.
2. Compare the moving variance to a threshold value. We obtained our threshold through trial-and-error but postulate that it is related to the effective distance between the free surface and the background pattern.
3. Convert any pixels for which the variance exceeds the threshold to a mask.
4. Perform minor cleanup on the mask using morphological operations.
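Steps 1-3 above can be sketched as follows. The 3x3 kernel is the 8-way neighborhood described in step 1, while the threshold value and the synthetic height field are illustrative stand-ins (not the paper's data); step 4's morphological cleanup is omitted.

```python
import numpy as np

# Flag discontinuous surface-height pixels with a 3x3 moving variance (steps 1-3).
def failure_mask(height, threshold):
    h, w = height.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = height[i-1:i+2, j-1:j+2]   # center + 8-way nearest neighbors
            if patch.var() > threshold:        # step 2: compare to threshold
                mask[i, j] = True              # step 3: convert to mask
    return mask

# Smooth synthetic "surface", then the same surface with one discontinuous pixel,
# mimicking a failed reconstruction.
smooth = np.fromfunction(lambda i, j: np.sin(0.3 * i) + np.cos(0.3 * j), (20, 20))
noisy = smooth.copy()
noisy[10, 10] += 5.0
print(failure_mask(smooth, 0.5).sum(), failure_mask(noisy, 0.5).sum())
```

The smooth field produces an empty mask, while every 3x3 patch touching the discontinuous pixel is flagged.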
The result is an estimate of all failed surface reconstructions in the frame.

Probing response to moving boundaries - Though the constrained pendulum system enabled measurement of the Hocking radiation force, it restricted phenomenological exploration to a firmly asymmetric-field regime. To probe the existence of transition dynamics between the asymmetric and symmetric regimes of the Hocking radiation phenomenon, we measured the response of a free-floating boat with constant ω to a wall retreating with constant speed v_wall in 1D (Fig. S4, Movie S5). When initiated with a boat-wall attraction, we posit the existence of a bifurcation in v_wall at which the wall would "tow" the boat with constant d_⊥. We mounted the wall from previous experiments on a linear actuator (Firgelli® FA-240-S-12-18) powered with constant current and pulse-width-modulated voltage. Velocimetry measurements [46] revealed minimal surface currents (v < 5 mm/s) near the wall's center; consequently, we performed experiments in this central region. We chose d_⊥0 = 1.18 ± 0.28 cm and ω = 41.9 Hz such that the boat started firmly within the attractive regime. Once v_boat ≈ v_wall^min = 2 mm/s, we initialized the wall and recorded d_⊥(t) using an onboard Logitech C920 webcam (Fig. S4B). When v_wall ≤ 4.9 mm/s, the acceleration induced by Hocking radiation was sufficient for the boat to catch the retreating boundary. For large v_wall, the wall swiftly outpaced the boat's locomotion. We most closely approached our expected bifurcation when v_wall = 7.7 mm/s. During this trial, the wall towed the boat with d_⊥ < 2 cm for a total distance of 43 cm. Despite the stringent boundary requirement for Hocking radiation to appear, proper choice of (ω, v_wall) enabled the boat to travel over 10x further than the corresponding d_⊥T(ω).

FIG. 1. Wave-generating robot boat. (A) Photo of the boat generating 17.1 Hz waves.
(B) Schematic of the eccentric motor vibrating the boat to generate waves; the propellers shown in (A) are not used in this study and thus omitted in (B). (C) Diagram of the tank wherein all experiments were performed. A backlit checkerboard enables Fast Checkerboard Demodulation (FCD) for spatiotemporal surface reconstruction [19]. (Inset) FCD determines fluid surface height using the instantaneous distortion of a checkerboard by surface perturbations. (D1-2) Time series of repulsion from (17.1 Hz) and attraction toward (33.5 Hz) the wall, respectively. (E) Perpendicular hull-wall distance for repulsive and attractive trials at 17.1 Hz.

FIG. 2. Wave-generating boat experiences attraction and repulsion near boundaries. (A-B) d_⊥ versus ω and d_⊥0. Red dotted lines denote the noise floor determined by behavior far from boundaries. Simultaneous dependence on both parameters is shown in (C), where each box corresponds to the average of 5 trials.

FIG. 3. Hocking radiation field measurements. (A) Force diagram for pendulum experiments used to directly measure F_W. (B) Archetypal boat displacement plots for pendulum experiments at 33.5 Hz. Oscillations are attributed to the interplay between F_W and F_T. (C-D) F_W versus ω and d_⊥ss. Red dotted lines denote the noise floor determined by behavior far from boundaries. Simultaneous dependence on both parameters is shown for 195 trials in (E).

FIG. 4. Surface wave reconstructions reveal amplitude imbalance near boundaries. (A-B) Archetypal reconstructions of 17.1 Hz waves far from and near a boundary, respectively, with space-time heatmaps corresponding to dotted yellow lines. η(t, r) describes the free surface height with respect to h_rest. Dark gray regions were occupied by solid objects (e.g., boat, wall). Light gray regions were deemed unreconstructable (see SI). (C) FCD measurements reveal the net field near the wall to have reduced A(ω).

FIG. 5. Hocking radiation hypothesis and model prediction.
(A) When d_⊥0 ≫ d_⊥T, the reflected waves have insufficient energy to affect the boat. When d_⊥0 ≲ d_⊥T, the reflected waves perturb the free-surface height at the boat hull, yielding a reduced-amplitude field. This amplitude asymmetry produces a net radiation force toward the boundary. (B) Despite the non-monotonic nature of A(ω), the predicted net radiation force near the wall compares favorably with experimental results from Fig. 3C. (Inset) Radiation forces on boat sides due to emitted GC waves as predicted by Eq. (3) and Fig. 4C.

FIG. S1. Eccentric motor oscillation drives boat vibration and consequent wave generation. (A) The motor frequency response falls into two distinct modes, with a resonance emerging at the crossover. (B) The boat's vibration is damped significantly due to coupling effects between the motor and boat and between the fluid surface and hull. (C) Wave amplitude measurements yielded a dispersion relation comparable to established theory on GC waves [32].

FIG. S2. Boat vibrates primarily along the roll axis with small amplitude. (A) Archetypal boat vibration at 38.1 Hz. (B) Amplitudes of oscillation along roll, pitch, and vertical displacement axes. For nearly all accessible frequencies, the primary boat oscillation occurs along the roll axis.

FIG. S3. Boat vibration generates minimal surface currents. (A-C) Surface currents induced by wave generation at 6.3, 19.6, and 33.5 Hz respectively. Currents shown are representative of behaviors within the three distinct regimes: ω ∈ (0, 19), [19, 29], and (29, 42] Hz.

FIG. S4. Attractive Hocking radiation enables towing by moving boundary. (A) Force diagram for towing experiments. (B) d_⊥ versus t, with t_0 corresponding to the wall's initialization. For low v_wall, the boat catches the wall within 20 s. For high v_wall, the wall rapidly outpaces the boat. Slight variance of d_⊥0 caused a non-monotonic trend with v_wall. (C) Lab- and (Inset) wall-frame time series of the towing experiment closest to the v_wall bifurcation.
Snapshots correspond to red points in (B).

References

[1] E. M. Purcell. Life at low Reynolds number. American Journal of Physics, 45(1):3-11, 1977.
[2] R. D. Maladen, Y. Ding, C. Li, and D. I. Goldman. Undulatory swimming in sand: subsurface locomotion of the sandfish lizard. Science, 325(5938):314-318, 2009.
[3] V. Kantsler, J. Dunkel, M. Polin, and R. E. Goldstein. Ciliary contact interactions dominate surface scattering of swimming eukaryotes. Proceedings of the National Academy of Sciences, 110(4):1187-1192, 2013.
[4] B. Chong, J. He, S. Li, E. Erickson, K. Diaz, T. Wang, D. Soto, and D. I. Goldman. Self-propulsion via slipping: frictional swimming in multilegged locomotors. Proceedings of the National Academy of Sciences, 120(11):e2213698120, 2023.
[5] K. Diaz, B. Chong, S. Tarr, E. Erickson, and D. I. Goldman. Water surface swimming dynamics in lightweight centipedes. arXiv preprint arXiv:2210.09570, 2022.
[6] C. Roh and M. Gharib. Honeybees use their wings for water surface locomotion. Proceedings of the National Academy of Sciences, 116(49):24446-24451, 2019.
[7] K. Y. Lee, L. Wang, J. Qu, and K. R. Oldham. Milli-scale biped vibratory water strider. In 2019 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages 1-6. IEEE, 2019.
[8] E. Rhee, R. Hunt, S. J. Thomson, and D. M. Harris. Surferbot: a wave-propelled aquatic vibrobot. Bioinspiration & Biomimetics, 17(055001), 2022.
[9] Z.-M. Yuan, M. Chen, L. Jia, C. Ji, and A. Incecik. Wave-riding and wave-passing by ducklings in formation swimming. Journal of Fluid Mechanics, 928, 2021.
[10] A. Karsai, D. Kerimoglu, D. Soto, S. Ha, T. Zhang, and D. I. Goldman. Real-time remodeling of granular terrain for robot locomotion. Advanced Intelligent Systems, 2200119, 2022.
[11] D. Gerr. Propeller Handbook. International Marine Publishing, 1989.
[12] T. Burghelea and V. Steinberg. Onset of wave drag due to generation of capillary-gravity waves by a moving object as a critical phenomenon. Physical Review Letters, 86(12):2557, 2001.
[13] R. S. Wilcox. Sex discrimination in Gerris remigis: role of a surface wave signal. Science, 206(4424):1325-1327, 1979.
[14] G. P. Benham, O. Devauchelle, S. W. Morris, and J. A. Neufeld. Gunwale bobbing. Physical Review Fluids, 7(7):074804, 2022.
[15] A. U. Oza, R. R. Rosales, and J. W. M. Bush. A trajectory equation for walking droplets: hydrodynamic pilot-wave theory. Journal of Fluid Mechanics, 737:552-570, 2013.
[16] D. M. Harris. The pilot-wave dynamics of walking droplets in confinement. PhD thesis, Massachusetts Institute of Technology, 2015.
[17] D. M. Harris, P.-T. Brun, A. Damiano, L. M. Faria, and J. W. M. Bush. The interaction of a walking droplet and a submerged pillar: from scattering to the logarithmic spiral. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(9):096105, 2018.
[18] P. J. Sáenz, G. Pucci, S. E. Turton, A. Goujon, R. R. Rosales, J. Dunkel, and J. W. M. Bush. Emergent order in hydrodynamic spin lattices. Nature, 596(7870):58-62, 2021.
[19] S. Wildeman. Real-time quantitative schlieren imaging by fast Fourier demodulation of a checkered backdrop. Experiments in Fluids, 59(6):1-13, 2018.
[20] H. B. G. Casimir and D. Polder. The influence of retardation on the London-van der Waals forces. Physical Review, 73(4):360, 1948.
[21] G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko. The Casimir force between real materials: experiment and theory. Reviews of Modern Physics, 81(4):1827, 2009.
[22] L. M. Woods, D. A. R. Dalvit, A. Tkatchenko, P. Rodriguez-Lopez, A. W. Rodriguez, and R. Podgornik. Materials perspective on Casimir and van der Waals interactions. Reviews of Modern Physics, 88(4):045003, 2016.
[23] S. L. Boersma. A maritime analogy of the Casimir effect. American Journal of Physics, 64(5):539-541, 1996.
[24] A. A. Lee, D. Vella, and J. S. Wettlaufer. Fluctuation spectra and force generation in nonequilibrium systems. Proceedings of the National Academy of Sciences, 114(35):9255-9260, 2017.
[25] B. C. Denardo, J. J. Puda, and A. Larraza. A water wave analog of the Casimir effect. American Journal of Physics, 77(12):1095-1101, 2009.
[26] C. Parra-Rojas and R. Soto. Casimir effect in swimmer suspensions. Physical Review E, 90(1):013024, 2014.
[27] D. Ray, C. Reichhardt, and C. J. Olson Reichhardt. Casimir effect in active matter systems. Physical Review E, 90(1):013019, 2014.
[28] R. Ni, M. A. Cohen Stuart, and P. G. Bolhuis. Tunable long range forces mediated by self-propelled colloidal hard spheres. Physical Review Letters, 114(1):018302, 2015.
[29] L. M. Hocking. Reflection of capillary-gravity waves. Wave Motion, 9(3):217-226, 1987.
[30] L. M. Hocking. The damping of capillary-gravity waves at a rigid boundary. Journal of Fluid Mechanics, 179:253-266, 1987.
[31] L. M. Hocking. Capillary-gravity waves produced by a heaving body. Journal of Fluid Mechanics, 186:337-349, 1988.
[32] H. Lamb. Hydrodynamics. University Press, 1924.
[33] M. S. Longuet-Higgins and R. W. Stewart. Radiation stresses in water waves; a physical discussion, with applications. In Deep Sea Research and Oceanographic Abstracts, volume 11, pages 529-562. Elsevier, 1964.
[34] K. D. Danov, R. Dimova, and B. Pouligny. Viscous drag of a solid sphere straddling a spherical or flat surface. Physics of Fluids, 12(11):2711-2722, 2000.
[35] C. Scholz, S. Jahanshahi, A. Ldov, and H. Löwen. Inertial delay of self-propelled particles. Nature Communications, 9(1):1-9, 2018.
[36] Classical Schlieren techniques rely on the precise alignment of two light-filtering masks. The replacement of a physical mask with a digitally-generated (synthetic) mask circumvents this non-trivial task (see Ref. [48]).
[37] F. Moisy, M. Rabaud, and K. Salsac. A synthetic schlieren method for the measurement of the topography of a liquid interface. Experiments in Fluids, 46(6):1021-1036, 2009.
[38] J. Aguilar and D. I. Goldman. Robophysical study of jumping dynamics on granular media. Nature Physics, 12(3):278-283, 2016.
[39] S. Bae and Y.-H. Kang. Optimal pumping in a model of a swing. European Journal of Physics, 27(1):75, 2005.
[40] H. Ko, M. Hadgu, K. Komilian, and D. L. Hu. Small fire ant rafts are unstable. Physical Review Fluids, 7(9):090501, 2022.
[41] I. Ho, G. Pucci, A. U. Oza, and D. M. Harris. Capillary surfers: wave-driven particles at a fluid interface. arXiv preprint arXiv:2102.11694, 2021.
[42] W. J. Palm III. System Dynamics. McGraw-Hill, New York, third edition, 2014.
[43] M. Tatsuno, S. Inoue, and J. Okabe. Transfiguration of surface waves. Rep. Res. Inst. Appl. Mech., Kyushu University, 17:195-215, 1969.
[44] A. Hulme. The wave forces acting on a floating hemisphere undergoing forced periodic oscillations. Journal of Fluid Mechanics, 121:443-463, 1982.
[45] S. Taneda. Visual observations of the flow around a half-submerged oscillating sphere. Journal of Fluid Mechanics, 227:193-209, 1991.
[46] W. Thielicke and R. Sonntag. Particle image velocimetry for MATLAB: accuracy and enhanced algorithms in PIVlab. Journal of Open Research Software, 9(1), 2021.
[47] Wildeman's open-source code [19] and author SWT's modifications derived from Refs. [19, 37] as described in this section were also used in the analysis of Ref. [5]. The descriptive text is reproduced from the SI accompanying Ref. [5] with minor edits.
[48] B. R. Sutherland, S. B. Dalziel, G. O. Hughes, and P. F. Linden. Visualization and measurement of internal waves by 'synthetic schlieren'. Part 1. Vertically oscillating cylinder. Journal of Fluid Mechanics, 390:93-126, 1999.
[]
[ "Probabilistic Jacobian-based Saliency Maps Attacks" ]
[ "António Loison [email protected]@student-cs.fr \nCentraleSupélec\n3 Rue Joliot-Curie 91192Gif-sur-YvetteFrance\n", "Théo Combey \nCentraleSupélec\n3 Rue Joliot-Curie 91192Gif-sur-YvetteFrance\n", "Hatem Hajri [email protected] \nIRT SystemX\n8 Avenue de la Vauve91120PalaiseauFrance\n" ]
[ "CentraleSupélec\n3 Rue Joliot-Curie 91192Gif-sur-YvetteFrance", "CentraleSupélec\n3 Rue Joliot-Curie 91192Gif-sur-YvetteFrance", "IRT SystemX\n8 Avenue de la Vauve91120PalaiseauFrance" ]
[]
Machine learning models have achieved spectacular performances in various critical fields including intelligent monitoring, autonomous driving and malware detection. Therefore, robustness against adversarial attacks represents a key issue to trust these models. In particular, the Jacobian-based Saliency Map Attack (JSMA) is widely used to fool neural network classifiers. In this paper, we introduce Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), simple, faster and more efficient versions of JSMA. These attacks rely upon new saliency maps involving the neural network Jacobian, its output probabilities and the input features. We demonstrate the advantages of WJSMA and TJSMA through two computer vision applications on 1) LeNet-5, a well-known Neural Network classifier (NNC), on the MNIST database and on 2) a more challenging NNC on the CIFAR-10 dataset. We obtain that WJSMA and TJSMA significantly outperform JSMA in success rate, speed and average number of changed features. For instance, on LeNet-5 (with 100% and 99.49% accuracies on the training and test sets), WJSMA and TJSMA respectively exceed 97% and 98.60% in success rate for a maximum authorised distortion of 14.5%, outperforming JSMA with more than 9.5 and 11 percentage points 3 . The new attacks are then used to defend and create more robust models than those trained against JSMA. Like JSMA, our attacks are not scalable on large datasets such as IMAGENET but despite this fact, they remain attractive for relatively small datasets like MNIST, CIFAR-10 and may be potential tools for future applications.
10.3390/make2040030
[ "https://arxiv.org/pdf/2007.06032v1.pdf" ]
220,496,565
2007.06032
78d326dae1ad671d331222f5fa7ac14d2300086a
Probabilistic Jacobian-based Saliency Maps Attacks

António Loison ([email protected]@student-cs.fr), Théo Combey, CentraleSupélec, 3 Rue Joliot-Curie, 91192 Gif-sur-Yvette, France
Hatem Hajri ([email protected]), IRT SystemX, 8 Avenue de la Vauve, 91120 Palaiseau, France

Keywords: Jacobian-based Saliency Map, Adversarial Attacks, Deep Neural Networks, MNIST, CIFAR-10

Machine learning models have achieved spectacular performances in various critical fields including intelligent monitoring, autonomous driving and malware detection. Therefore, robustness against adversarial attacks represents a key issue to trust these models. In particular, the Jacobian-based Saliency Map Attack (JSMA) is widely used to fool neural network classifiers. In this paper, we introduce Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), simple, faster and more efficient versions of JSMA. These attacks rely upon new saliency maps involving the neural network Jacobian, its output probabilities and the input features. We demonstrate the advantages of WJSMA and TJSMA through two computer vision applications on 1) LeNet-5, a well-known Neural Network classifier (NNC), on the MNIST database and on 2) a more challenging NNC on the CIFAR-10 dataset. We obtain that WJSMA and TJSMA significantly outperform JSMA in success rate, speed and average number of changed features. For instance, on LeNet-5 (with 100% and 99.49% accuracies on the training and test sets), WJSMA and TJSMA respectively exceed 97% and 98.60% in success rate for a maximum authorised distortion of 14.5%, outperforming JSMA by more than 9.5 and 11 percentage points. The new attacks are then used to defend and create more robust models than those trained against JSMA.
Like JSMA, our attacks are not scalable to large datasets such as IMAGENET but, despite this fact, they remain attractive for relatively small datasets like MNIST and CIFAR-10 and may be potential tools for future applications.

Introduction

Deep learning classifiers are used in a wide variety of situations, such as vision, speech recognition, financial fraud detection, malware detection, autonomous driving, defence, and more. The ubiquity of deep learning algorithms in many applications, especially critical ones such as autonomous driving [4,20] or those pertaining to security and privacy [17,21], makes their attack particularly useful. Indeed, this allows firstly to identify possible flaws in the learned intelligent system and secondly to set up a defense strategy to improve its reliability. In this context, adversarial machine learning has appeared as a new branch that aims to thwart intelligent algorithms. Many techniques called adversarial attacks have succeeded in fooling well-known architectures of machine learning algorithms, sometimes in an astonishing way. Examples of adversarial attacks include but are not limited to: Fast Gradient Sign Method (FGSM) [5], Basic Iterative Method (BIM) [8], Projected Gradient Descent (PGD) [12], JSMA [15], DeepFool [13], Universal Adversarial Perturbations (UAP) [14] and Carlini-Wagner (CW) attacks [1]. Adversarial attacks are built upon the idea of adversarial samples. Given a classifier N and an input x with label l, an adversarial sample to x is an input x* close to x but such that label(x*) ≠ l. These attacks can be separated into two types: targeted and non-targeted, depending on whether label(x*) is specified in advance or not. In this paper, we focus on JSMA, a simple, reliable and intuitive targeted adversarial attack against machine learning classifiers.
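The definitions above can be made concrete with a toy score vector (all numbers made up for illustration): the label is the argmax of the output probabilities, a non-targeted attack succeeds when the label merely changes, and a targeted attack succeeds when it equals a chosen target t. The softmax/argmax labeling here anticipates the NNC definition given below.

```python
import math

def softmax(z):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def label(probs):
    # label(x) = argmax_k p_k(x)
    return max(range(len(probs)), key=probs.__getitem__)

p_clean = softmax([2.0, 0.5, -1.0])   # clean input: label(x) = 0
p_adv = softmax([1.2, 1.9, -1.0])     # after a small perturbation: label(x*) = 1
t = 1                                 # chosen target class

print(label(p_clean), label(p_adv))
print(label(p_adv) != label(p_clean))  # non-targeted success condition
print(label(p_adv) == t)               # targeted success condition
```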
Despite the fact that it does not scale to large datasets like IMAGENET [3], JSMA is still relevant on small datasets such as MNIST [10], CIFAR-10 [7] and Fashion-MNIST [25], achieving good results on these datasets [15,6,24]. Relying on its cleverhans implementation [16], JSMA is able to generate 9 adversarial samples on MNIST in only 2 seconds on a laptop with 2 CPU cores. The combination of good performance and speed makes JSMA attractive, although it is less efficient than the CW attack, which is 20 times slower [1]. In multiple other applications in cybersecurity, anomaly detection, intrusion detection and Reinforcement Learning involving small data, JSMA may be preferred over many approaches [18,2,19,11]. Before explaining our contribution, let us introduce some definitions and recall the principle of JSMA.

Neural network classifier (NNC). The goal of a NNC is to predict through a neural network which class an item x belongs to, among a family of K possible classes. It outputs a vector of probabilities p(x) = (p_1(x), ..., p_K(x)), from which the label of x is deduced as follows: label(x) = argmax_k p_k(x).

Jacobian-based Saliency Map Attack (JSMA). To fool NNCs, this attack relies on the Jacobian matrix of outputs with respect to inputs. By analysing this matrix, one can deduce how the output probabilities behave given a slight modification of an input feature. Consider a NNC N as before and denote by F(x) = (F_1(x), ..., F_K(x)) the outputs of the second-to-last layer of N (no longer probabilities, but related to the final output by applying a softmax layer). To craft an adversarial example from a given input x, JSMA first computes the gradient ∇F(x). The next step is constructing a saliency map whose role is to select the most relevant component i to perturb:

S[x, t][i] = 0 if ∂F_t(x)/∂x_i < 0 or Σ_{k≠t} ∂F_k(x)/∂x_i > 0, and
S[x, t][i] = (∂F_t(x)/∂x_i) · |Σ_{k≠t} ∂F_k(x)/∂x_i| otherwise. (1)

Note the role of ∂F_t(x)/∂x_i and Σ_{k≠t} ∂F_k(x)/∂x_i, which is to increase F_t(x) and decrease Σ_{k≠t} F_k(x). Working with the F_k's instead of the probabilities p_k has been justified in [15] by the extreme variations introduced by the logistic regression. Then the algorithm selects the component

i_max = argmax_i S[x, t][i], (2)

and augments x_{i_max} with a default increase value θ: x_{i_max} ← x_{i_max} + θ, clipped to the domain of feature values. In a more advanced form, JSMA selects pairs of components (i_max, j_max) using doubly indexed saliency maps recalled later in the paper.

Contributions. We introduce two new adversarial attacks:

(1) Weighted JSMA (WJSMA): This attack follows the mechanism of JSMA but "rectifies" it by weighting gradients by the respective probabilities of classes. The advantage of this fine-tuning is to reduce the impact of gradients associated with small output probabilities.

(2) Taylor JSMA (TJSMA): It takes into account the output probabilities as WJSMA does and additionally penalises the gradients by θ_max − x_k to encourage the selection of input features that are not close to θ_max.

We give justifications for WJSMA and TJSMA and experimentally demonstrate that they give significantly better results than JSMA. Two illustrations will be considered by targeting the LeNet-5 [9] model on MNIST and a variant of All Convolutional Net [22] on CIFAR-10. Figures 1 and 2 show examples of targeted adversarial samples generated by the three attacks JSMA, WJSMA and TJSMA from an MNIST 0 image and a CIFAR-10 car image, respectively. At first glance, samples provided by WJSMA and TJSMA look less noisy and closer to the original images than those generated by JSMA. In addition to attacks, we present an application to defense. It essentially demonstrates that defending against WJSMA or TJSMA makes the NNC more robust against JSMA, while defending against JSMA has less impact on the performances of WJSMA and TJSMA.
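Returning to Eqs. (1)-(2), one JSMA step can be sketched as below, assuming the Jacobian J[k, i] = ∂F_k/∂x_i of the second-to-last layer has already been computed; the Jacobian and input values here are made up for illustration, not taken from a trained network.

```python
import numpy as np

def jsma_step(x, J, t, theta=1.0, lo=0.0, hi=1.0):
    # Eq. (1): score each feature for target class t.
    alpha = J[t]                    # dF_t/dx_i for every feature i
    beta = J.sum(axis=0) - J[t]     # sum_{k != t} dF_k/dx_i
    S = np.where((alpha < 0) | (beta > 0), 0.0, alpha * np.abs(beta))
    # Eq. (2): pick the highest-scoring feature and apply the clipped increase.
    i_max = int(np.argmax(S))
    x = x.copy()
    x[i_max] = np.clip(x[i_max] + theta, lo, hi)
    return i_max, x

x = np.array([0.2, 0.9, 0.1, 0.4])
J = np.array([[ 0.3, -0.1,  0.2,  0.0],   # class 0
              [ 0.5,  0.4, -0.2,  0.1],   # class 1 (target)
              [-0.6,  0.2, -0.1, -0.3]])  # class 2
i_max, x_adv = jsma_step(x, J, t=1)
print(i_max, x_adv)
```

With these numbers, features 1 and 2 are zeroed out by the conditions in Eq. (1) (positive Σ_{k≠t} gradient and negative target gradient respectively), so feature 0 wins and is pushed to the upper bound.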
Weighted Jacobian-based Saliency Map Attack (WJSMA)

This section presents WJSMA, the first contribution of the paper, together with its motivation and mathematical argumentation.

Motivating example. Assume a number of classes K ≥ 4 and, for some input x: p_1(x) = 0.5, p_2(x) = 0.49, p_3(x) = 0.01 and p_k(x) = 0 for all 4 ≤ k ≤ K. Consider the problem of generating an adversarial sample to x with target label t = 2. In order to decrease Σ_{k≠2} F_k(x), the first iteration step of JSMA relies on the gradients ∇F_k(x), k ≠ 2. The main observation is that, as the probabilities p_k(x) = 0 for 4 ≤ k ≤ K are already at their minimal values, considering ∇F_k(x) for these values of k in the search for i_max is unnecessary. In other words, by acting only on gradients of the second-to-last layer, JSMA does not take into account the crucial constraints on the probabilities: p_k(x) ≥ 0. Moreover, the possible decrease of p_1(x) is high (up to 0.5) and, as p_3(x) is relatively small, it will be hard to decrease it further. In this situation, intuitively, instead of relying equally on ∇F_1(x) and ∇F_3(x), one would "bet more" on ∇F_1(x) than on ∇F_3(x).

To address the previous issue, WJSMA relies on new saliency maps derived quite naturally from the classical log-softmax reasoning. First, we compute the derivative

$$
\frac{\partial}{\partial x_i} \log p_t(x) = (1 - p_t(x)) \frac{\partial F_t(x)}{\partial x_i} - \sum_{k \neq t} p_k(x) \frac{\partial F_k(x)}{\partial x_i},
\tag{3}
$$

with t standing for the targeted class. This formula separates as A − B, where A only depends on the targeted class and B depends on the other classes. To maximise this quantity, one can maximise A and minimise B independently by imposing the constraints A > 0 and B < 0. Note that, unlike for JSMA, these constraints ensure that ∂p_t(x)/∂x_i remains positive.
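The probability weighting of Eq. (3) can be made concrete with a small numpy sketch of the resulting weighted saliency score (the function name and toy numbers are ours; the probabilities mirror the motivating example, so the zero-probability class drops out of the search even if its gradient is huge):

```python
import numpy as np

def wjsma_saliency(grad, probs, t):
    """Weighted saliency map S_W[x, t][i] of Eq. (4): each non-target
    gradient dF_k/dx_i is weighted by its class probability p_k(x)."""
    alpha = grad[t]
    w = probs.copy()
    w[t] = 0.0                  # exclude the target class from the sum
    beta = w @ grad             # sum over k != t of p_k(x) * dF_k/dx_i
    return np.where((alpha < 0) | (beta > 0), 0.0, alpha * np.abs(beta))

# Motivating example: p_4(x) = 0, so class 4's (huge) gradient is ignored.
probs = np.array([0.5, 0.49, 0.01, 0.0])
grad = np.array([[-0.2, 0.1],        # dF_1/dx
                 [0.3, 0.4],         # dF_2/dx (target t = 2, index 1)
                 [0.5, -0.1],        # dF_3/dx
                 [100.0, 100.0]])    # dF_4/dx: irrelevant since p_4 = 0
s_w = wjsma_saliency(grad, probs, t=1)
```

Feature 0 receives a positive score (the weighted non-target sum is negative), while feature 1 is rejected because the weighted sum is positive there.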
This allows us to introduce weighted saliency maps depending on one component, as follows:

$$
S_W[x,t][i] =
\begin{cases}
0 & \text{if } \frac{\partial F_t(x)}{\partial x_i} < 0 \ \text{ or } \ \sum_{k \neq t} p_k(x) \frac{\partial F_k(x)}{\partial x_i} > 0,\\[4pt]
\frac{\partial F_t(x)}{\partial x_i} \cdot \left| \sum_{k \neq t} p_k(x) \frac{\partial F_k(x)}{\partial x_i} \right| & \text{otherwise.}
\end{cases}
\tag{4}
$$

Based on these maps, we present Algorithm 1, the first version of WJSMA, which generates targeted adversarial samples. When the output x* of Algorithm 1 satisfies class(x*) = t, the attack is considered successful.

Algorithm 1 (one-component WJSMA):
    Output: x*: adversarial sample to x.
    x* ← x
    iter ← 0
    Γ ← ⟦1,|x|⟧ \ {p ∈ ⟦1,|x|⟧ | x[p] = θ_max}
    while class(x*) ≠ t and iter < maxIter and Γ ≠ ∅ do
        p_max = argmax_{p∈Γ} S_W[x*, t](p)
        x*[p_max] ← Clip_{[θ_min, θ_max]}(x*[p_max] + θ)   // Clip is the clipping function
        remove p_max from Γ
        iter ← iter + 1
    end while
    return x*

To relax somewhat the search for relevant components, and motivated by an application to computer vision, Papernot et al. [15] introduced saliency maps indexed by pairs of components. Their main observation is that the conditions required in S[x,t][i] (Eq. (1)) may be too severe for some applications, so that very few components satisfy them. By replicating the same one-component WJSMA reasoning, we introduce weighted versions of the doubly indexed saliency maps S_W[x,t][i,j] as follows:

$$
S_W[x,t][i,j] =
\begin{cases}
0 & \text{if } \sum_{a \in \{i,j\}} \frac{\partial F_t(x)}{\partial x_a} < 0 \ \text{ or } \ \sum_{k \neq t} p_k(x) \sum_{a \in \{i,j\}} \frac{\partial F_k(x)}{\partial x_a} > 0,\\[4pt]
\sum_{a \in \{i,j\}} \frac{\partial F_t(x)}{\partial x_a} \cdot \left| \sum_{k \neq t} p_k(x) \sum_{a \in \{i,j\}} \frac{\partial F_k(x)}{\partial x_a} \right| & \text{otherwise.}
\end{cases}
\tag{5}
$$

Based on these maps, we present Algorithm 2, the second version of WJSMA, which generates targeted adversarial samples by operating on pairs of components. In the two previous algorithms, the selected components are always augmented by a positive default value, i.e. features are increased. It is possible to deduce two versions of Algorithms 1 and 2 where relevant components are selected and then decreased according to a similar logic.
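The greedy loop of Algorithm 1 can be sketched on a toy linear model, where the Jacobian of F(x) = Wx is simply W. The model, weights and helper names below are ours and purely illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def wjsma_attack(W, x, t, theta=1.0, theta_min=0.0, theta_max=1.0, max_iter=10):
    """Toy one-component WJSMA loop in the spirit of Algorithm 1,
    run on a linear 'network' F(x) = W x, so dF/dx is just W."""
    x = x.copy()
    gamma = {i for i in range(x.size) if x[i] != theta_max}  # search domain
    iters = 0
    while int(np.argmax(W @ x)) != t and iters < max_iter and gamma:
        p = softmax(W @ x)
        alpha = W[t]
        w = p.copy(); w[t] = 0.0
        beta = w @ W
        s = np.where((alpha < 0) | (beta > 0), 0.0, alpha * np.abs(beta))
        p_max = max(gamma, key=lambda i: s[i])   # argmax restricted to gamma
        x[p_max] = np.clip(x[p_max] + theta, theta_min, theta_max)
        gamma.discard(p_max)
        iters += 1
    return x, int(np.argmax(W @ x)), iters

W = np.array([[1.0, -0.5, 0.0],   # class 0 logit
              [0.0, 2.0, 0.1]])   # class 1 logit
x0 = np.array([1.0, 0.0, 0.0])    # initially classified as class 0
x_adv, label, iters = wjsma_attack(W, x0, t=1)
```

On this toy instance a single feature flip is enough to reach the target class.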
Algorithm 2 (two-component WJSMA):
    Inputs: same inputs as Algorithm 1.
    Output: x*: adversarial sample to x.
    x* ← x
    iter ← 0
    Γ ← {(p,q) | p, q ∈ ⟦1,|x|⟧, x[p] ≠ θ_max, x[q] ≠ θ_max}
    while class(x*) ≠ t and iter < maxIter and Γ ≠ ∅ do
        (p_max, q_max) = argmax_{(p,q)∈Γ} S_W[x*, t](p, q)
        x*[a] ← Clip_{[θ_min, θ_max]}(x*[a] + θ) for a = p_max, q_max
        remove (p_max, q_max) from Γ
        iter ← iter + 1
    end while
    return x*

Taylor Jacobian-based Saliency Map Attack (TJSMA)

This section presents Taylor JSMA, the second contribution of this paper. The idea of this attack is to additionally penalise the choice of feature components that are close to the maximum feature value θ_max, and to favour components that are more distant from θ_max. As a motivating situation, assume two components i and j have the same WJSMA scores S_W[x,t][i] and S_W[x,t][j], and that x_i is very close to θ_max while x_j is far enough from θ_max. In this case, searching for more impact, our saliency maps prefer x_j over x_i. Concretely, we consider maximising the two scores S_1 = θ_max − x_i and S_2 = (∂/∂x_i) log p_t(x), which translates into maximising S = S_1 S_2. Accordingly, we introduce new saliency maps for the one- and two-component attacks as follows:

$$
S_T[x,t][i] =
\begin{cases}
0 & \text{if } \alpha_i < 0 \text{ or } \beta_i > 0,\\
\alpha_i |\beta_i| & \text{otherwise,}
\end{cases}
\tag{6}
$$

where α_i = (θ_max − x_i) ∂F_t(x)/∂x_i and β_i = Σ_{k≠t} p_k(x)(θ_max − x_i) ∂F_k(x)/∂x_i, and

$$
S_T[x,t][i,j] =
\begin{cases}
0 & \text{if } \alpha_{i,j} < 0 \text{ or } \beta_{i,j} > 0,\\
\alpha_{i,j} |\beta_{i,j}| & \text{otherwise,}
\end{cases}
\tag{7}
$$

where α_{i,j} = Σ_{a∈{i,j}} (θ_max − x_a) ∂F_t(x)/∂x_a and β_{i,j} = Σ_{k≠t} Σ_{a∈{i,j}} p_k(x)(θ_max − x_a) ∂F_k(x)/∂x_a.

We call these maps Taylor saliency maps because of the Taylor terms (θ_max − x_a) ∂F_k(x)/∂x_a. One- and two-component TJSMA follow exactly Algorithms 1 and 2, with S_W replaced by S_T. Through Figures 3a and 3b, we observe that WJSMA and TJSMA decrease/increase the predicted/targeted probability of the original/targeted class much sooner than JSMA.
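A numpy sketch of the one-component Taylor map of Eq. (6) illustrates the disambiguation: the toy values below (ours, not from the paper) give two features identical gradients, so WJSMA cannot tell them apart, while TJSMA prefers the one farther from θ_max:

```python
import numpy as np

def tjsma_saliency(grad, probs, x, t, theta_max=1.0):
    """One-component Taylor saliency map S_T[x, t][i] of Eq. (6):
    WJSMA's alpha and beta, each scaled by the headroom theta_max - x_i."""
    room = theta_max - x
    alpha = room * grad[t]
    w = probs.copy()
    w[t] = 0.0
    beta = room * (w @ grad)
    return np.where((alpha < 0) | (beta > 0), 0.0, alpha * np.abs(beta))

probs = np.array([0.4, 0.6])        # t = 0 is the target class
grad = np.array([[1.0, 1.0],        # dF_0/dx: identical for both features
                 [-1.0, -1.0]])     # dF_1/dx: identical for both features
x = np.array([0.9, 0.1])            # feature 0 is almost saturated
s_t = tjsma_saliency(grad, probs, x, t=0)
```

Both features pass the sign conditions, but the headroom factor boosts feature 1 by (0.9/0.1)² relative to feature 0.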
In this example, it is worth noting how TJSMA behaves like WJSMA until it finds a more vulnerable component that makes it converge much faster.

Experiments

In the following, we give attack and defense applications to illustrate the interest of WJSMA and TJSMA over JSMA. In doing so, we also compare WJSMA and TJSMA against each other and report better overall results for TJSMA, even though on a large share of samples WJSMA outperforms TJSMA. We use the following standard datasets.

MNIST [10]. This dataset contains 70,000 28 × 28 greyscale images in 10 classes, divided into 60,000 training images and 10,000 test images. The possible classes are the digits from 0 to 9.

CIFAR-10 [7]. This dataset contains 60,000 32 × 32 × 3 RGB images. There are 50,000 training images and 10,000 test images. The images are divided into 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck), with 6,000 images per class.

DNN on MNIST. For the first experiment, we use LeNet-5 [9,15], whose architecture is given in the supplementary material. We implement and train this model using cleverhans, which optimises the crafting of adversarial examples. The number of epochs is fixed to 20, the batch size to 128, the learning rate to 0.001, and the Adam optimizer is used. Training results in 100% accuracy on the training dataset and 99.49% accuracy on the test dataset.

DNN on CIFAR-10. For the second experiment, a more complex DNN is trained to reach a good performance on CIFAR-10, which is more challenging than MNIST. Its architecture is inspired by the All Convolutional model proposed in cleverhans and is described in the supplementary material. Likewise, this model is implemented and trained using cleverhans for 10 epochs, with a batch size of 128, a learning rate of 0.001 and the Adam optimizer. Training results in 99.96% accuracy on the training dataset and 83.81% accuracy on the test dataset.
To compare our results with [15], we use the original implementation of JSMA available in cleverhans. We have also adapted this code to WJSMA and TJSMA, obtaining fast implementations of these two attacks. We only test the attacks (i.e. Algorithm 2 in its three variants: original, weighted and Taylor) on samples that are correctly predicted by their respective neural networks. In this way, the attacks are applied to the whole training set and to the 9,949 well-predicted images of the MNIST test dataset. Similarly, CIFAR-10 adversarial examples are crafted from the 9,995 well-predicted images among the first 10,000 training images, and from the 8,381 well-predicted test images. To compare the three attacks, we rely on the notion of maximum distortion of adversarial samples, defined as the ratio of altered components to the total number of components. Following [15], we choose a maximum distortion of γ = 14.5% on the adversarial samples from MNIST, corresponding to

maxIter = ⌊(784 · γ) / (2 · 100)⌋.

On CIFAR-10, we fix γ = 3.7% in order to have the same maximum number of iterations in both experiments. This allows a comparison between the attacks in two different settings. Furthermore, for both experiments, we set θ = 1 (note that θ_min = 0, θ_max = 1). We report the following metrics: (1) success rate, (2) mean L0 distance, (3) strict dominance of an attack and (4) run-time (precise definitions are given below). Results for metrics (1) and (2) are shown in Table 1 for MNIST and Table 2 for CIFAR-10. Overall, WJSMA and TJSMA significantly outperform JSMA according to metrics (1)-(2).

On MNIST. The results in terms of success rate are quite remarkable, with WJSMA and TJSMA outperforming JSMA by nearly 9.46 and 10.98 percentage points (pp) respectively on the training set, and by 9.46 and 11.34 pp on the test set. The gain in the average number of altered components exceeds 6 components for WJSMA and 9 components for TJSMA in both experiments.

On CIFAR-10. Similar results are obtained on this dataset. WJSMA and TJSMA outperform JSMA in success rate by nearly 9.74 and 11.23 pp on the training set, and by more than 10 and 12 pp on the test set.
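Returning to the distortion budget above, the iteration cap implied by a distortion ratio γ is easy to reproduce. The helper below is our own sketch; it checks that γ = 14.5% on MNIST (784 features) and γ = 3.7% on CIFAR-10 (3,072 features) yield the same iteration budget, as claimed:

```python
def max_iter(n_features, gamma_percent):
    """Iteration budget for a pairwise attack under a distortion cap:
    each iteration alters 2 of n_features components, gamma is in percent."""
    return int(n_features * gamma_percent / (2 * 100))

mnist_budget = max_iter(28 * 28, 14.5)      # 784 features
cifar_budget = max_iter(32 * 32 * 3, 3.7)   # 3072 features
```

Both evaluate to the same cap, which is what makes the two experimental settings comparable.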
For both the training and test sets, we also report better mean L0 distances, with improvements exceeding 7 features in all cases and reaching 10.14 features for TJSMA on the training set.

Dominance of the attacks. The next figures illustrate the (strict) dominance of the attacks for the two experiments. In these statistics, we do not count the samples for which TJSMA and WJSMA realise the same number of iterations, strictly less than JSMA. For both experiments, TJSMA has a noteworthy advantage over WJSMA and JSMA. The advantage of WJSMA over JSMA is also considerable. This shows that, in most cases, WJSMA and TJSMA craft better adversarial examples than JSMA, while being faster. Our results are actually even better when directly comparing WJSMA or TJSMA with JSMA. As additional results, we give in the supplementary material the statistics for the pairwise dominance between the attacks. As might be expected, both WJSMA and TJSMA dominate JSMA, and TJSMA dominates WJSMA.

Run-time comparison. In order to have a meaningful speed comparison between the three attacks, we evaluated the run-time needed for each attack to craft adversarial samples from the first 1,000 test images of MNIST in the targeted mode. The results shown in Table 3 reveal that TJSMA and WJSMA are respectively 1.41 and 1.28 times faster than JSMA. These performance tests were realised on a machine equipped with an Intel Xeon 6126 processor and an Nvidia Tesla P100 graphics processor. Based on a previous analysis [1], TJSMA and WJSMA are at least 28 and 24 times faster than the L0 CW attack. Note that for WJSMA and TJSMA, the additional computations in one iteration compared to JSMA are negligible (simple multiplications). Thus the difference in speed between the attacks is mainly due to the number of iterations of each attack. Note that to compare the attacks, the adversarial samples were crafted one by one. In practice, it is possible to generate samples by batch. In this case, the algorithm stops when all samples are treated.
Most of the time, with a batch of large size, the three attacks take approximately the same time to converge. For example, on the same machine as previously, with a batch size equal to 1,000, we were able to craft the same number of samples in about 250 s with each of the attacks.

Defense

The objective of this section is to train neural networks in such a way that the attacks fail as often as possible. One way of doing this is to add adversarial samples crafted by JSMA, WJSMA and TJSMA to the training set. This way of training may imply a decrease in model accuracy, but adversarial examples will be more difficult to generate. We experiment with this idea on MNIST with LeNet-5 in every possible configuration. To this end, 2,000 adversarial samples per class (20,000 additional images in total), with distortion under 14.5%, are added to the original MNIST training set, crafted by either JSMA, WJSMA or TJSMA. Then, three distinct models are trained on these three augmented datasets. The models roughly achieve an accuracy of 99.9% on the training set and 99.3% on the test set, showing a slight loss compared to our previous MNIST model accuracy. Nevertheless, the obtained neural networks are more robust to the attacks, as shown in Table 4. Note that each experiment is made over the well-predicted samples of the test images; for each model and image, nine adversarial examples are generated by each of the three attacks. Overall, the attacks are less efficient on each of these models compared to Table 1. The success rates drop by about 8 pp, whereas the number of iterations increases by approximately 26%. From the defender's point of view, the networks trained against JSMA and TJSMA give the best performance: the JSMA-trained model yields the lowest success rates, while the TJSMA-trained network is more robust from the L0-distance point of view. From the attacker's point of view, TJSMA remains the most efficient of the three attacks regardless of the augmented dataset used.
Avoid confusion

In this section, we argue that our results do not contradict [15]. First, we stress that we use a more performant LeNet-5 model than the one in [15] (with 98.93% and 99.41% accuracies on the training and test sets). For completeness, we also generated a less performant model (with 99.34% and 98.94% accuracies on the training and test sets) and evaluated the three attacks on it through the first 1,000 test MNIST images. We obtain a 96.7% success rate for JSMA (very similar to [15]) and more than 99.5% for WJSMA and TJSMA. These results are also included in our experiments. Instead of presenting two models, we preferred to use the more performant one, as this makes the paper shorter and, moreover, shows our approach to better advantage (giving us more of an advantage with respect to JSMA). Finally, we note that for both experiments, and contrary to [15] (see Appendix A in [15]), our results were obtained without simplifications of the model, which is an additional advantage of our attacks.

Conclusion

This paper has introduced WJSMA and TJSMA, two new probabilistic adversarial attacks that are variants of JSMA. It has demonstrated that WJSMA and TJSMA significantly outperform JSMA on two standard DNNs on MNIST and CIFAR-10, after analysing more than 88,200 × 9 adversarial images. It has also demonstrated that defending against WJSMA and TJSMA is more advantageous than defending against JSMA. It is important to recall that our attacks are derived quite naturally from a classical log-softmax reasoning and benefit from substantial investigations of doubly indexed saliency maps. Based on the analysis of 9,000 adversarial samples, WJSMA and TJSMA are at least 1.2 and 1.4 times faster than JSMA, and accordingly at least 24 and 28 times faster than the L0 CW attack. We believe these results are quite reassuring and make the new attacks promising tools for future applications.
Finally, non-targeted versions of our attacks have not been discussed in this paper; they may be the subject of future work and of comparison with existing approaches such as [23].

Supplementary material

Architectures of the DNNs. Supplementary comments. Further analysis of the results on MNIST reveals that, even for examples where JSMA is better than WJSMA or TJSMA, on average fewer than 10 additional components are changed by WJSMA or TJSMA, whereas JSMA changes more than 17 additional components on average when it is dominated by WJSMA or TJSMA. A similar gap can be observed on CIFAR-10.

Fig. 1: Original image with label 0 and its adversarial samples generated by JSMA, WJSMA and TJSMA (from top to bottom).
Fig. 2: Adversarial examples crafted by JSMA, WJSMA and TJSMA on a car image.
Algorithm 1: Generating adversarial samples by WJSMA, version 1. Inputs: N: a NNC; F: second-to-last output of N; x: input to N; t: target label (t ≠ class(x)); maxIter: maximum number of iterations; θ_min, θ_max: lower and upper bounds for feature values; θ: positive default increase value.
Fig. 3: Evolution of the origin and target class probabilities until the target class is reached, for JSMA, WJSMA and TJSMA, changing an image of a one into a five.
Figures 4 and 5 display one sample per class, from MNIST and CIFAR-10 respectively.
Fig. 4: MNIST image examples. Fig. 5: CIFAR-10 image examples.

On each dataset, a deep neural network classifier (DNN) is trained and the performances of the three attacks are evaluated on it.
(1) Success rate: the percentage of successful adversarial examples, i.e. those crafted before reaching the maximal number of iterations maxIter.
(2) Mean L0 distance: the average number of altered components over the successful adversarial examples.
(3) Strict dominance of an attack: the percentage of adversarial attacks for which this attack performs strictly fewer iterations than the two other attacks.
(4) Run-time of an attack on a set of samples, targeting every possible class.

Fig. 6: Distribution of the (strict) dominance of JSMA, WJSMA and TJSMA over the MNIST and CIFAR-10 datasets (training and test sets included).
Fig. 7: Pairwise dominance on MNIST ("=" corresponds to samples with the same number of iterations by both attacks, including when both attacks fail).
Fig. 8: Pairwise dominance on CIFAR-10 ("=" has the same meaning as before).

Table 1: Comparison between JSMA, WJSMA and TJSMA on MNIST.
Targeted (training dataset: 60,000 well-predicted images)
  Success rate:                            JSMA 87.68%   WJSMA 97.14%   TJSMA 98.66%
  Mean L0 distance on successful samples:  JSMA 44.34    WJSMA 37.86    TJSMA 35.22
Targeted (test dataset: 9,949 well-predicted images)
  Success rate:                            JSMA 87.34%   WJSMA 96.98%   TJSMA 98.68%
  Mean L0 distance on successful samples:  JSMA 44.63    WJSMA 38.10    TJSMA 35.50

Table 2: Comparison between JSMA, WJSMA and TJSMA on CIFAR-10.
Targeted (training dataset: 9,995 well-predicted images)
  Success rate:                            JSMA 86.17%   WJSMA 95.91%   TJSMA 97.40%
  Mean L0 distance on successful samples:  JSMA 47      WJSMA 38.54    TJSMA 36.86
Targeted (test dataset: 8,381 well-predicted images)
  Success rate:                            JSMA 84.91%   WJSMA 94.99%   TJSMA 96.96%
  Mean L0 distance on successful samples:  JSMA 46.13    WJSMA 38.82    TJSMA 37.45

Table 3: Time comparison between JSMA, WJSMA and TJSMA.
  Time (seconds): JSMA 3964, WJSMA 3092, TJSMA 2797.

Table 4: Metrics (1) and (2) on the JSMA-, WJSMA- and TJSMA-augmented sets.
Model trained over the JSMA-augmented set (9,940 well-predicted samples)
  Success rate:                            JSMA 77.94%   WJSMA 84.79%   TJSMA 85.08%
  Mean L0 distance on successful samples:  JSMA 54.48    WJSMA 52.66    TJSMA 52.83
Model trained over the WJSMA-augmented set (9,936 well-predicted samples)
  Success rate:                            JSMA 77.61%   WJSMA 90.05%   TJSMA 92.01%
  Mean L0 distance on successful samples:  JSMA 56.29    WJSMA 52.72    TJSMA 52.18
Model trained over the TJSMA-augmented set (9,991 well-predicted samples)
  Success rate:                            JSMA 76.42%   WJSMA 86.18%   TJSMA 87.36%
  Mean L0 distance on successful samples:  JSMA 54.26    WJSMA 54.20    TJSMA 54.49

Table 5: Architecture of the used DNN on MNIST (LeNet-5).
  Layer          Parameters
  Input Layer    size: (28 × 28)
  Conv2D         kernel size: (5 × 5), 20 kernels, no stride
  ReLu
  MaxPooling2D   kernel size: (2 × 2), stride: (2 × 2)
  Conv2D         kernel size: (5 × 5), 50 kernels, no stride
  ReLu
  MaxPooling2D   kernel size: (2 × 2), stride: (2 × 2)
  Flatten
  Dense          size: 500
  ReLu
  Dense          size: number of classes (10 for MNIST)
  Softmax

Table 6: Architecture of the used DNN on CIFAR-10.
  Layer                 Parameters
  Input Layer           size: (32 × 32)
  Conv2D                kernel size: (3 × 3), 64 kernels, no stride
  ReLu
  Conv2D                kernel size: (3 × 3), 128 kernels, no stride
  ReLu
  MaxPooling2D          kernel size: (2 × 2), stride: (2 × 2)
  Conv2D                kernel size: (3 × 3), 128 kernels, no stride
  ReLu
  Conv2D                kernel size: (3 × 3), 256 kernels, no stride
  ReLu
  MaxPooling2D          kernel size: (2 × 2), stride: (2 × 2)
  Conv2D                kernel size: (3 × 3), 256 kernels, no stride
  ReLu
  Conv2D                kernel size: (3 × 3), 512 kernels, no stride
  ReLu
  MaxPooling2D          kernel size: (2 × 2), stride: (2 × 2)
  Conv2D                kernel size: (3 × 3), 10 kernels, no stride
  GlobalAveragePooling  kernel size: (2 × 2), stride: (2 × 2)
  Softmax

Pairwise dominance.

Footnotes:
  To avoid confusion, we explain in the paper that our results do not contradict [15], which achieved a success rate of 97% on LeNet-5, but with a less performant model than ours. More discussion of this point can be found in Section 6.
  Codes and paper are under review.
  In the supplementary material, we give more statistics on the dominance between any two attacks.

Acknowledgements. This collaboration was done in the context of a second-year engineering students' internship by A. Loison and T. Combey, supervised by H. Hajri. We thank Gabriel Zeller for his assistance.
We are grateful to Wassila Ouerdane and Jean-Philippe Poli at CentraleSupélec for their support. We thank the mésocentre de calcul Fusion, the Metz computing center of CentraleSupélec, and Stéphane Vialle for providing us with effective computing resources. H. Hajri is grateful to Sylvain Lamprier for very useful discussions.

References

[1] Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. CoRR abs/1608.04644 (2017), https://arxiv.org/pdf/1608.04644v2
[2] Chio, C., Freeman, D.: Machine Learning and Security. O'Reilly (2018)
[3] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255 (2009)
[4] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 1625-1634 (2018)
[5] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. ICLR (2015), https://arxiv.org/pdf/1412.6572v3
[6] Jakubovitz, D., Giryes, R.: Improving DNN robustness to adversarial attacks using Jacobian regularization. CoRR abs/1803.08680 (2018), http://arxiv.org/abs/1803.08680
[7] Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research), http://www.cs.toronto.edu/~kriz/cifar.html
[8] Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. ICLR (2017), https://arxiv.org/pdf/1607.02533v4
[9] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. In: Proceedings of the IEEE, pp. 2278-2324 (1998)
[10] LeCun, Y., Cortes, C.: MNIST handwritten digit database (2010), http://yann.lecun.com/exdb/mnist/
[11] Lin, J., Dzeparoska, K., Zhang, S.Q., Leon-Garcia, A., Papernot, N.: On the robustness of cooperative multi-agent reinforcement learning. ArXiv abs/2003.03722 (2020)
[12] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks (2017), https://arxiv.org/pdf/1706.06083v3
[13] Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. CoRR abs/1511.04599 (2015), https://arxiv.org/pdf/1511.04599
[14] Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 86-94 (2017), https://doi.org/10.1109/CVPR.2017.17
[15] Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Berkay Celik, Z., Swami, A.: The limitations of deep learning in adversarial settings. IEEE (2015), https://arxiv.org/pdf/1511.07528v1
[16] Papernot, N., Faghri, F., Carlini, N., Goodfellow, I.J., Feinman, R., Kurakin, A., Xie, C., Sharma, Y., Brown, T.H., Roy, A., Matyasko, A., Behzadan, V., Hambardzumyan, K., Zhang, Z., Juang, Y.L., Li, Z., Sheatsley, R., Garg, A., Uesato, J., Gierke, W., Dong, Y., Berthelot, D., Hendricks, P.N.J., Rauber, J., Long, R., McDaniel, P.D.: Technical report on the cleverhans v2.1.0 adversarial examples library (2016)
[17] Papernot, N., Song, S., Mironov, I., Raghunathan, A., Talwar, K., Erlingsson, Ú.: Scalable private learning with PATE. CoRR abs/1802.08908 (2018), http://arxiv.org/abs/1802.08908
[18] Parisi, A.: Hands-On Artificial Intelligence for Cybersecurity. Packt Publishing (2019)
[19] Sethi, K., Edupuganti, S., Kumar, R., Bera, P., Madhav, Y.: A context-aware robust intrusion detection system: a reinforcement-learning-based approach. International Journal of Information Security (2019), https://doi.org/10.1007/s10207-019-00482-7
[20] Sitawarin, C., Bhagoji, A.N., Mosenia, A., Chiang, M., Mittal, P.: DARTS: deceiving autonomous cars with toxic signs. CoRR abs/1802.06430 (2018), http://arxiv.org/abs/1802.06430
[21] Song, L., Shokri, R., Mittal, P.: Privacy risks of securing machine learning models against adversarial examples. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS 2019), London, UK, November 11-15, 2019, pp. 241-257 (2019), https://doi.org/10.1145/3319535.3354211
[22] Springenberg, J., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net. In: ICLR (workshop track) (2015), http://lmb.informatik.uni-freiburg.de/Publications/2015/DB15a
[23] Wiyatno, R., Xu, A.: Maximal Jacobian-based saliency map attack (2018), https://arxiv.org/pdf/1808.07945v1
[24] Wiyatno, R., Xu, A.: Maximal Jacobian-based saliency map attack. CoRR abs/1808.07945 (2018), http://arxiv.org/abs/1808.07945
[25] Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms (2017)
[]
[]
[ "Alakabha Datta ", "Jiajun Liao ", "Danny Marfatia ", "\nDepartment of Physics and Astronomy\nDepartment of Physics and Astronomy\nUniversity of Mississippi\n108 Lewis Hall38677OxfordMSUSA\n", "\nDepartment of Physics and Astronomy\nUniversity of Hawaii at Manoa\n96822HonoluluHIUSA\n", "\nUniversity of Hawaii at Manoa\n96822HonoluluHIUSA\n" ]
[ "Department of Physics and Astronomy\nDepartment of Physics and Astronomy\nUniversity of Mississippi\n108 Lewis Hall38677OxfordMSUSA", "Department of Physics and Astronomy\nUniversity of Hawaii at Manoa\n96822HonoluluHIUSA", "University of Hawaii at Manoa\n96822HonoluluHIUSA" ]
[]
We show that the RK puzzle in LHCb data and the discrepancy in the anomalous magnetic moment of the muon can be simultaneously explained if a 10 MeV mass Z boson couples to the muon but not the electron, and that clear evidence of the nonstandard matter interactions of neutrinos induced by this coupling may be found at DUNE.
10.1016/j.physletb.2017.02.058
[ "https://arxiv.org/pdf/1702.01099v3.pdf" ]
119,258,091
1702.01099
ca513b323394b895f25fdcacb6c8c73129136f07
A light Z′ for the R_K puzzle and nonstandard neutrino interactions

Alakabha Datta (Department of Physics and Astronomy, University of Mississippi, 108 Lewis Hall, Oxford, MS 38677, USA); Jiajun Liao, Danny Marfatia (Department of Physics and Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822, USA)

We show that the R_K puzzle in LHCb data and the discrepancy in the anomalous magnetic moment of the muon can be simultaneously explained if a 10 MeV mass Z′ boson couples to the muon but not the electron, and that clear evidence of the nonstandard matter interactions of neutrinos induced by this coupling may be found at DUNE.

There are several perplexing anomalies related to the muon, including its anomalous magnetic moment [1] and the charge radius of the proton extracted from muonic hydrogen [2]. In B physics, data from b → s decays indicate evidence of a violation of lepton flavor universality, the so-called R_K puzzle. The LHCb Collaboration has found a hint of lepton non-universality in the ratio R_K ≡ B(B⁺ → K⁺μ⁺μ⁻)/B(B⁺ → K⁺e⁺e⁻) = 0.745 ± 0.097 in the dilepton invariant mass-squared range 1 GeV² ≤ q² ≤ 6 GeV² [3]. We take the view that the R_K puzzle may also be a consequence of new physics (NP) affecting the muon. There is also an anomaly in one of the angular observables in B → K*μ⁺μ⁻ decay [4], which may be subject to large hadronic uncertainties [5]. However, unlike for the R_K puzzle, lepton non-universal new physics is not necessary to explain that anomaly [6]. Here we focus on the R_K puzzle, which is a clean probe of the Standard Model (SM) due to its very small hadronic uncertainties. Several NP models with heavy mediators have been considered to explain the R_K puzzle. We consider a simple NP scenario with a Z′ lighter than the muon. The Z′ has flavor-conserving couplings to quarks and leptons, and in addition we assume that there is a flavor-changing bsZ′ vertex.
The Z couplings to the lepton generations are non-universal to solve the R K puzzle. In particular, we assume the Z has suppressed couplings to first generation leptons but has non-negligible couplings to second and third generation leptons. We constrain the bsZ coupling using B → Kνν and B s mixing, and then from R K we fix the Z coupling to muons. We check that the coupling to muons is consistent with the muon a µ ≡ (g − 2) µ /2 measurement, ∆a µ ≡ a exp µ − a SM µ = (29 ± 9) × 10 −10 [7]. B → Kνν does not fix the Z coupling to neutrinos, but assuming SU(2) invariance we set the Z neutrino couplings equal to the charged lepton couplings. Estimates of the Z couplings to light quarks are obtained from non-leptonic b → sqq transitions, where q = u, d, s. After we obtain the constraints from B physics, we study their implications for nonstandard neutrino interactions (NSI) at DUNE [8].

bsZ vertex. We assume there is a light Z with mass of order 10 MeV. The most general form of the bsZ vertex with vector type coupling is H bsZ = F (q 2 ) sγ µ b Z µ ,(1) where the form factor F (q 2 ) can be expanded as F (q 2 ) = a bs + g bs q 2 m 2 B + . . . ,(2) where m B is the B meson mass and the momentum transfer q 2 ≪ m 2 B . The leading order term a bs is constrained by B → Kνν to be smaller than 10 −9 [9]. As will become clear below, the solution to the R K puzzle would then require the Z coupling to muons to be O(1) or larger, which is in conflict with the (g − 2) µ measurement. The absence of flavor-changing neutral currents forces a bs ∼ 0, so that H bsZ = g bs q 2 m 2 Bs sγ µ b Z µ ,(3) where g bs is assumed to be real.

B → Kνν. Assuming Gaussian errors, the 95% C.L. upper limit for B → Kνν is [10] B(B → Kνν) ≤ 1.9 × 10 −5 .(4) From Ref. [11], the SM prediction is B(B → Kνν) SM = (3.98 ± 0.43 ± 0.19) × 10 −6 .
The SM Hamiltonian for each neutrino generation is H eff = − 4G F √ 2 V tb V * ts α 4π sin 2 θ W [C ν 9 O ν 9 + C ν 10 O ν 10 ] ,(5) where O ν 9 = (sγ µ P L b)(νγ µ ν) , O ν 10 = (sγ µ P L b)(νγ µ γ 5 ν) .(6) In the SM, the Wilson coefficients are determined by a box and Z-penguin loop computation, which gives C ν 9 = −C ν 10 = −X(m 2 t /m 2 W ) ,(7) where the loop function X can be found e.g. in Ref. [12]. Now we introduce a Z coupling only to left-handed neutrinos. We further simplify by assuming only flavor conserving couplings but do not assume the couplings to be generation-independent. We write for generation α = µ, τ , H ναναZ = g νανα ν αL γ µ ν αL Z µ .(8) Equations (3) and (8) lead to the Hamiltonian for b → sν ανα decays, H bsνανα = − g bs g * νανα q 2 − m 2 Z q 2 m 2 Bs sγ µ b ν αL γ µ ν αL .(9) We get B(B → Kνν) = 3.96 × 10 −6 for the SM. From Eq. (4) we obtain the 2σ constraint, |g bs | < ∼ 1.4 × 10 −5 .(10) Note that this constraint does not depend on g νν as the NP contribution is dominated by the two body b → sZ transition. In principle, we can also consider B → K * νν but only certain helicity amplitudes are affected by NP. Furthermore, at low q 2 the NP amplitudes are suppressed. Hence this decay provides a weaker constraint than B → Kνν.

B s mixing. Absent knowledge of F (q 2 ) for q 2 ∼ m 2 B , we assume that effects of the longitudinal polarization of the Z are compensated by the form factor, so that the Hamiltonian responsible for B s mixing can be written as H Bs ≈ − g 2 bs m 2 Bs − m 2 Z sγ µ b sγ µ b .(11) The correction to B s mixing is given by ∆M N P s = − g 2 bs m 2 Bs − m 2 Z B 0 s sγ µ b sγ µ b B 0 s .
(12) Using the vacuum insertion approximation [13] and the fact that m Bs ≈ m b + m s , ∆M N P s ≈ g 2 bs m 2 Bs − m 2 Z 1 3 m Bs f 2 Bs .(13) The mass difference in the SM is given by ∆M SM s = 2 3 m Bs f 2 Bs B Bs |N C V LL |,(14) where N = G 2 F m 2 W 16π 2 (V tb V * ts ) 2 , C V LL = η Bs x t [1 + 9/(1 − x t ) − 6/(1 − x t ) 2 − 6x 2 t ln x t /(1 − x t ) 3 ]. In the above, x t ≡ m 2 t /m 2 W , η Bs = 0.551 is the QCD correction [14], and B Bs is the bag parameter. Taking f Bs √ B Bs = (266 ± 18) MeV [15], V tb V * ts = −0.0405 ± 0.0012 [16,17], and m t = 160 GeV [16,18], the SM prediction is [19] ∆M SM s = (17.4 ± 2.6) ps −1 .(15) This is to be compared with the experimental measurement [20], ∆M s = (17.757 ± 0.021) ps −1 ,(16) which is consistent with the SM prediction. To bound the NP coupling g bs we take the NP contribution to be at most the 1σ uncertainty in the SM contribution, i.e., ∆M N P s ∼ 2.6 ps −1 . With the B s decay constant f Bs from Ref. [21], and assuming m Bs ≫ m Z , Eq. (13) yields |g bs | < ∼ 2.3 × 10 −5 .(17) This is consistent with the bound obtained on g bs from B → Kνν.

R K puzzle. Here we follow the discussions in Refs. [22,23]. Within the SM, the effective Hamiltonian for the quark-level transition b → sµ + µ − is [24] H SM eff = − 4G F √ 2 V * ts V tb 6 i=1 C i (µ)O i (µ) + C 7 e 16π 2 [sσ µν (m s P L + m b P R )b] F µν + C 9 α em 4π (sγ µ P L b)μγ µ µ + C 10 α em 4π (sγ µ P L b)μγ µ γ 5 µ ,(18) where P L,R = (1 ∓ γ 5 )/2. The operators O i (i = 1 . . . 6) correspond to the P i in Ref. [25], and m b = m b (µ) is the running b-quark mass in the MS scheme. We use the SM Wilson coefficients as given in Ref. [26].
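As an aside (not part of the original text), the bound in Eq. (17) can be reproduced numerically from Eq. (13). The decay constant value below is an assumed input, roughly the size of the lattice determination of Ref. [21]; the unit conversion uses ħ = 6.582 × 10⁻¹³ GeV·ps.

```python
import math

# Assumed inputs: masses and decay constant in GeV; 1 ps^-1 = 6.582e-13 GeV (hbar).
m_Bs = 5.367                  # B_s meson mass
m_Zp = 0.010                  # light Z mass
f_Bs = 0.228                  # B_s decay constant (assumed, ~ Ref. [21])
dM_NP_max = 2.6 * 6.582e-13   # allowed NP contribution, 2.6 ps^-1 in GeV

# Eq. (13): dM_NP ~ g_bs^2 / (m_Bs^2 - m_Zp^2) * (1/3) * m_Bs * f_Bs^2;
# solve for the coupling that saturates the bound.
g_bs = math.sqrt(dM_NP_max * 3.0 * (m_Bs**2 - m_Zp**2) / (m_Bs * f_Bs**2))
print(f"|g_bs| <~ {g_bs:.2e}")   # ~2.3e-5, as in Eq. (17)
```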
Introducing a Z coupling to charged leptons, H ℓℓZ = g ℓℓ ℓ γ µ ℓ Z µ ,(19) Equations (3) and (19) lead to the Hamiltonian for b → s ℓ + ℓ − decays, H bsℓℓ = − g bs g * ℓℓ q 2 − m 2 Z q 2 m 2 Bs sγ µ b ℓ γ µ ℓ .(20) We can rewrite this as H bsℓℓ = − 4G F √ 2 V * ts V tb α em 4π [R V (q 2 ) sγ µ P L b ℓ γ µ ℓ + R ′ V (q 2 ) sγ µ P R b ℓ γ µ ℓ] ,(21) where R V (q 2 ) = R ′ V (q 2 ) = √ 2π g bs g * ℓℓ G F V * ts V tb α em q 2 m 2 B 1 q 2 − m 2 Z .(22) We assume the Z does not couple to electrons, so B(B + → K + e + e − ) is described by the SM, while B(B + → K + µ + µ − ) is modified by NP. We scan the parameter space of g bs and g µµ for values that are consistent with the experimental measurement of R K ; see Fig. 1.

Muon magnetic moment. The light Z also explains the discrepancy in the muon magnetic moment measurement. From Ref. [27], we have ∆a µ = (g µµ ) 2 8π 2 ∫ 1 0 2x 2 (1 − x) x 2 + (m 2 Z /m 2 µ )(1 − x) dx .(23) For m Z = 10 MeV, the measured value of ∆a µ gives g µµ as in Fig. 1.

Other constraints. We now check that the result is consistent with other b → sµ + µ − transitions. Note that our light Z cannot be produced as a resonance in b → sµ + µ − decays. Also, as we have a vector coupling in Eq. (21), there is no contribution to B 0 s → µ + µ − . The BaBar Collaboration measures B(B → X s µ + µ − ) = (0.66 ± 0.88) × 10 −6 in the range 1 GeV 2 ≤ q 2 ≤ 6 GeV 2 [28]. The differential branching ratio for B 0 d → X s µ + µ − with the SM and the general NP operators can be found in Ref. [22]. We find that the NP contribution to B(B 0 d → X s µ + µ − ) is only 7% of the SM prediction for 1 GeV 2 ≤ q 2 ≤ 6 GeV 2 . Given the current experimental uncertainties, the constraint from this decay is not stringent. The branching fractions for B 0 d → K µ + µ − , B 0 d → K * µ + µ − and the corresponding electron modes are known for the entire kinematical range. However, due to the long distance contributions, we do not use them to directly constrain NP.
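Stepping back to the muon magnetic moment for a moment: Eq. (23) can be inverted numerically for the coupling that reproduces the central value of ∆a_µ. This is a sketch with assumed inputs (m_µ = 105.66 MeV, a simple midpoint quadrature), not part of the original text; the result is close to the g_µµ = 5.4 × 10⁻⁴ benchmark used later.

```python
import math

m_mu = 0.10566   # GeV, muon mass (assumed input)
m_Zp = 0.010     # GeV, light Z mass
da_mu = 29e-10   # central value of the (g-2)_mu discrepancy

r = (m_Zp / m_mu) ** 2

def integrand(x):
    # Integrand of Eq. (23)
    return 2.0 * x**2 * (1.0 - x) / (x**2 + r * (1.0 - x))

# Midpoint rule; the integrand is smooth and bounded on [0, 1]
n = 20000
I = sum(integrand((i + 0.5) / n) for i in range(n)) / n

# Delta a_mu = g^2 I / (8 pi^2)  =>  solve for g
g_mumu = math.sqrt(8.0 * math.pi**2 * da_mu / I)
print(f"integral = {I:.3f}, g_mumu ~ {g_mumu:.2e}")
```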
Finally, the NP amplitude for B 0 d → K * µ + µ − in the low q 2 region is suppressed relative to the leading SM amplitudes by √ q 2 /m B , and so this decay does not provide any constraints on the NP coupling. We note in passing that constraints from b → sτ + τ − decays are very weak [19] and do not produce a meaningful constraint on the NP coupling g τ τ .

b → sqq. We now consider the Z coupling to light quarks, with a focus on the up and down quarks: H qqZ = g qq q γ µ q Z µ .(24) It is reasonable for the Z coupling to quarks to be of the same size as the coupling to the charged leptons, i.e., ∼ 10 −4 . Decays like B → Kπ can constrain the Z coupling to light quarks. In spite of the hadronic uncertainties, approximate bounds are obtainable from these decays. Equations (3) and (24) lead to the Hamiltonian for b → sqq decays, which is similar to Eq. (20) with ℓ replaced by q. The NP can add to the electroweak contribution in the SM. It is interesting to speculate whether such NP can resolve the so-called K − π puzzle [29]. This is the difference in the direct CP asymmetry in the decays B + → π 0 K + and B 0 → π − K + . It is puzzling because the leading amplitudes in both decays are the same in the SM, while the former decay also gets contributions from a small color- and CKM-suppressed tree amplitude, and the electroweak penguins for the two decays are different. It is possible that new contributions to the electroweak penguins may resolve the puzzle. However, the situation is a bit complicated. First, there are two other relevant decays, B + → π + K 0 and B 0 → π 0 K 0 , and one has to fit to all the decays. Since these are non-leptonic decays, one has to account for hadronic uncertainties. In naive factorization, our NP does not contribute at leading order to B + → π 0 K + as the vector quark current does not produce a pion, but it can produce a ρ and will thus contribute to B + → ρ 0 K + and B 0 → ρ − K + .
We can always change the chiral structure of the Z coupling to quarks to get a leading order contribution to B → πK. Our intention here is not to resolve the K − π puzzle, but we can estimate the Z qq coupling in the following way. A reasonable assumption is that NP produces effects of the size of about 10% of the SM electroweak penguin. Both color allowed and color suppressed electroweak penguins are possible in the decay B 0 → ρ 0 K 0 , and we can compare these with the NP amplitude. The ratio of the NP amplitude to the color allowed penguin is r = ⟨ρ 0 K 0 |H N P |B 0 ⟩ / ⟨ρ 0 K 0 |H SM EW |B 0 ⟩ ,(25) where H SM EW is the color allowed SM electroweak Hamiltonian. Using naive factorization, r = g bs (g uu − g dd )m 2 ρ / [(G F /√ 2) a 9 V tb V * ts (3/2)(m 2 ρ − m 2 Z )m 2 B ] ,(26) where we have assumed real couplings. The factor a 9 = C 9 + C 10 /N c , where C 9,10 are the Wilson coefficients and N c = 3 is the number of colors. The ratio of the NP amplitude to the color suppressed electroweak penguin is s = ⟨ρ 0 K 0 |H N P |B 0 ⟩ / ⟨ρ 0 K 0 |H SM,C EW |B 0 ⟩ = g bs (g uu − g dd )m 2 ρ / [(G F /√ 2) a 10 V tb V * ts (m 2 ρ − m 2 Z )m 2 B ] ,(27) where a 10 = C 10 + C 9 /N c and H SM,C EW is the color suppressed SM electroweak Hamiltonian. Using a 9 (µ = m b ) = −1.22α em and a 10 (µ = m b ) = 0.04α em [30], and requiring |r| ∼ 0.1, we find |g bs (g uu − g dd )| ∼ 1.3 × 10 −8 .(28) For |g bs | ∼ 10 −5 we get |g uu − g dd | ∼ 10 −3 . As we discuss next, this leads to nonstandard neutrino interactions that are too large. On the other hand, requiring |s| ∼ 0.1 gives |g bs (g uu − g dd )| ∼ 2.8 × 10 −10 .(29) In this case |g uu − g dd | ∼ 10 −5 . We will assume that g uu is the same size as g dd and take these couplings to be ∼ 10 −5 to discuss neutrino NSI.

NSI at DUNE. The light Z couplings to neutrinos and first generation quarks affect neutrino propagation in matter. The matter NSI can be parameterized by the effective Lagrangian [31], L = −2 √ 2G F ε qC αα [ν α γ ρ P L ν α ] [qγ ρ P C q] + h.c.
,(30) where α = µ, τ , C = L, R, q = u, d, and ε qC αα are dimensionless parameters that represent the strength of the new interaction in units of G F . Since neutrino propagation in matter is affected by coherent forward scattering, ε q αα ≡ ε qL αα + ε qR αα can be written as ε q αα = g qq g νανα /(2 √ 2G F m 2 Z ) ,(31) regardless of the Z mass. For propagation in the earth, neutrino oscillation experiments are only sensitive to the combination ε αα ≈ 3(ε u αα + ε d αα ) .(32) We now use the light Z couplings obtained from B physics to study signatures at neutrino oscillation experiments. We assume g νµνµ = g µµ , which is motivated by an SU(2) invariant realization of Eq. (30). We fix g µµ = 5.4 × 10 −4 and g bs = 1.3 × 10 −5 to explain both the R K and muon g − 2 anomalies; this set of couplings is marked by a cross in Fig. 1. To avoid a fine-tuned cancellation, we take g uu = 1.2 × 10 −5 and g dd = −1.0 × 10 −5 , which satisfies the relation in Eq. (29). For m Z = 10 MeV, these couplings satisfy a plethora of constraints [32]. From Eqs. (31) and (32), we get ε µµ = 1.0. To satisfy constraints from current neutrino oscillation data [33], we assume ε τ τ = ε µµ . Following the procedure in Ref. [34], we simulate 300 kt-MW-years of DUNE data with the normal neutrino mass hierarchy, the neutrino CP phase δ = 0, and ε µµ = 1.0. We scan over both mass hierarchies, the neutrino oscillation parameters, and ε µµ . The expected sensitivity of DUNE to reject the SM scenario is shown in Fig. 2. We see that the SM scenario with ε µµ = 0 is ruled out at the 3.6σ C.L. at DUNE.

Summary. We showed that the R K puzzle in LHCb data can be explained by a light Z . The resulting coupling of the Z to muons also reconciles the muon g − 2 measurement. After carefully examining various constraints from B physics, we find that this Z could yield large NSI in neutrino propagation. We further demonstrated that evidence of NSI induced by the light Z coupling may be found at DUNE.
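The numbers quoted in the last few paragraphs can be reproduced with a short script. This is a sketch with assumed inputs not fixed by the text itself (α_em = 1/137, m_ρ = 0.775 GeV, m_B = 5.28 GeV, |V_tb V*_ts| = 0.0405, G_F = 1.1664 × 10⁻⁵ GeV⁻²).

```python
import math

GF = 1.1664e-5          # GeV^-2, Fermi constant (assumed input)
m_Zp = 0.010            # GeV, the light Z mass
alpha_em = 1.0 / 137.0  # assumed value of alpha_em

# NSI parameters, Eqs. (31)-(32), with the couplings quoted in the text
g_numu = 5.4e-4
g_uu, g_dd = 1.2e-5, -1.0e-5

def eps(g_qq):
    return g_qq * g_numu / (2.0 * math.sqrt(2.0) * GF * m_Zp**2)

eps_mumu = 3.0 * (eps(g_uu) + eps(g_dd))

# Coupling products from Eqs. (26)-(29), requiring |r| ~ 0.1 or |s| ~ 0.1
m_rho, m_B, Vts = 0.775, 5.28, 0.0405   # GeV, GeV, |V_tb V*_ts| (assumed)
a9, a10 = 1.22 * alpha_em, 0.04 * alpha_em
common = (GF / math.sqrt(2.0)) * Vts * (m_rho**2 - m_Zp**2) * m_B**2 / m_rho**2
g_prod_r = 0.1 * common * a9 * 1.5      # Eq. (28): ~1.3e-8
g_prod_s = 0.1 * common * a10           # Eq. (29): ~2.8e-10

print(f"eps_mumu = {eps_mumu:.2f}")     # ~1.0, as quoted in the text
print(f"|g_bs (g_uu - g_dd)| ~ {g_prod_r:.1e} (from |r|), {g_prod_s:.1e} (from |s|)")
```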
A scattering experiment at CERN will also search for such a boson [35].

FIG. 1. The allowed regions in the (g bs , g µµ ) plane for m Z = 10 MeV. The shaded bands are the 1σ and 2σ regions favored by R K . The regions between the horizontal solid and dashed lines explain the discrepancy in the anomalous magnetic moment of the muon at the 1σ and 2σ C.L. The vertical line shows the 2σ upper limit on g bs from B → Kνν. The cross denotes the parameters used for studying neutrino NSI.

FIG. 2. The sensitivity to ε µµ at DUNE. The data are simulated for the normal neutrino mass hierarchy, the neutrino CP phase δ = 0, and ε µµ = 1.0. We assume ε µµ = ε τ τ .

[1] G. W. Bennett et al. [Muon g-2 Collaboration], Phys. Rev. D 73, 072003 (2006) [hep-ex/0602035].
[2] A. Antognini et al., Science 339, 417 (2013).
[3] R. Aaij et al. [LHCb Collaboration], Phys. Rev. Lett. 113, 151601 (2014) [arXiv:1406.6482 [hep-ex]].
[4] See e.g., S. Descotes-Genon, J. Matias and J. Virto, Phys. Rev. D 88, 074002 (2013) [arXiv:1307.5683 [hep-ph]].
[5] See e.g., M. Ciuchini, M. Fedele, E. Franco, S. Mishima, A. Paul, L. Silvestrini and M. Valli, JHEP 1606, 116 (2016) [arXiv:1512.07157 [hep-ph]].
[6] A. Datta, M. Duraisamy and D. Ghosh, Phys. Rev. D 89, no. 7, 071501 (2014) [arXiv:1310.1937 [hep-ph]].
[7] For a review, see F. Jegerlehner and A. Nyffeler, Phys. Rept. 477, 1 (2009).
[8] R. Acciarri et al. [DUNE Collaboration], arXiv:1512.06148 [physics.ins-det].
[9] K. Fuyuto, W. S. Hou and M. Kohda, Phys. Rev. D 93, no. 5, 054021 (2016) [arXiv:1512.09026 [hep-ph]].
[10] J. P. Lees et al. [BaBar Collaboration], Phys. Rev. D 87, 112005 (2013) [arXiv:1303.7465 [hep-ex]]; O. Lutz et al. [Belle Collaboration], Phys. Rev. D 87, 111103 (2013) [arXiv:1303.3719 [hep-ex]].
[11] A. J. Buras, J. Girrbach-Noe, C. Niehoff and D. M. Straub, JHEP 1502, 184 (2015) [arXiv:1409.4557 [hep-ph]].
[12] A. J. Buras, hep-ph/9806471.
[13] D. Atwood, L. Reina and A. Soni, Phys. Rev. D 55, 3156 (1997) [hep-ph/9609279].
[14] G. Buchalla, A. J. Buras and M. E. Lautenbacher, Rev. Mod. Phys. 68, 1125 (1996) [hep-ph/9512380].
[15] S. Aoki et al., Eur. Phys. J. C 74, 2890 (2014) [arXiv:1310.8555 [hep-lat]]; S. Aoki et al., arXiv:1607.00299 [hep-lat].
[16] C. Patrignani et al. [Particle Data Group], Chin. Phys. C 40, no. 10, 100001 (2016).
[17] J. Charles et al., Phys. Rev. D 91, 073007 (2015) [arXiv:1501.05013 [hep-ph]].
[18] K. G. Chetyrkin, J. H. Kuhn and M. Steinhauser, Comput. Phys. Commun. 133, 43 (2000) [hep-ph/0004189].
[19] B. Bhattacharya, A. Datta, J. P. Guevin, D. London and R. Watanabe, JHEP 1701, 015 (2017) [arXiv:1609.09078 [hep-ph]].
[20] Y. Amhis et al. [Heavy Flavor Averaging Group (HFAG) Collaboration], arXiv:1412.7515 [hep-ex].
[21] H. Na, C. J. Monahan, C. T. H. Davies, R. Horgan, G. P. Lepage and J. Shigemitsu, Phys. Rev. D 86, 034506 (2012) [arXiv:1202.4914 [hep-lat]].
[22] A. K. Alok, A. Datta, A. Dighe, M. Duraisamy, D. Ghosh and D. London, JHEP 1111, 121 (2011) [arXiv:1008.2367 [hep-ph]].
[23] A. K. Alok, A. Datta, A. Dighe, M. Duraisamy, D. Ghosh and D. London, JHEP 1111, 122 (2011) [arXiv:1103.5344 [hep-ph]].
[24] A. J. Buras and M. Munz, Phys. Rev. D 52, 186 (1995) [arXiv:hep-ph/9501281].
[25] C. Bobeth, M. Misiak and J. Urban, Nucl. Phys. B 574, 291 (2000) [hep-ph/9910220].
[26] W. Altmannshofer, P. Ball, A. Bharucha, A. J. Buras, D. M. Straub and M. Wick, JHEP 0901, 019 (2009) [arXiv:0811.1214 [hep-ph]].
[27] J. P. Leveille, Nucl. Phys. B 137, 63 (1978).
[28] J. P. Lees et al. [BaBar Collaboration], Phys. Rev. Lett. 112, 211802 (2014) [arXiv:1312.5364 [hep-ex]].
[29] See for example A. J. Buras, R. Fleischer, S. Recksiegel and F. Schwab, Nucl. Phys. B 697, 133 (2004) [hep-ph/0402112]; S. Baek, P. Hamel, D. London, A. Datta and D. A. Suprun, Phys. Rev. D 71, 057502 (2005) [hep-ph/0412086] and references therein.
[30] M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Nucl. Phys. B 606, 245 (2001) [hep-ph/0104110].
[31] L. Wolfenstein, Phys. Rev. D 17, 2369 (1978).
[32] Y. Farzan, Phys. Lett. B 748, 311 (2015) [arXiv:1505.06906 [hep-ph]]; Y. Farzan and I. M. Shoemaker, JHEP 1607, 033 (2016) [arXiv:1512.09147 [hep-ph]].
[33] P. Coloma, P. B. Denton, M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz, arXiv:1701.04828 [hep-ph].
[34] J. Liao, D. Marfatia and K. Whisnant, Phys. Rev. D 93, no. 9, 093016 (2016) [arXiv:1601.00927 [hep-ph]]; JHEP 1701, 071 (2017) [arXiv:1612.01443 [hep-ph]].
[35] S. N. Gninenko, N. V. Krasnikov and V. A. Matveev, Phys. Rev. D 91, 095015 (2015) [arXiv:1412.1400 [hep-ph]].
[]
[ "Acyclic edge-coloring of planar graphs: ∆ colors suffice when ∆ is large", "Acyclic edge-coloring of planar graphs: ∆ colors suffice when ∆ is large" ]
[ "Daniel W Cranston " ]
[]
[]
An acyclic edge-coloring of a graph G is a proper edge-coloring of G such that the subgraph induced by any two color classes is acyclic. The acyclic chromatic index, χ ′ a (G), is the smallest number of colors allowing an acyclic edge-coloring of G. Clearly χ ′ a (G) ≥ ∆(G) for every graph G. Cohen, Havet, and Müller conjectured that there exists a constant M such that every planar graph with ∆(G) ≥ M has χ ′ a (G) = ∆(G). We prove this conjecture.
10.1137/17m1158355
[ "https://arxiv.org/pdf/1705.05023v2.pdf" ]
31,046,382
1705.05023
25284c6e5858841ded9c80f16c0e1d7cf51479b0
Acyclic edge-coloring of planar graphs: ∆ colors suffice when ∆ is large

Daniel W. Cranston

May 2017

An acyclic edge-coloring of a graph G is a proper edge-coloring of G such that the subgraph induced by any two color classes is acyclic. The acyclic chromatic index, χ ′ a (G), is the smallest number of colors allowing an acyclic edge-coloring of G. Clearly χ ′ a (G) ≥ ∆(G) for every graph G. Cohen, Havet, and Müller conjectured that there exists a constant M such that every planar graph with ∆(G) ≥ M has χ ′ a (G) = ∆(G). We prove this conjecture.

Introduction

A proper edge-coloring of a graph G assigns colors to the edges of G such that two edges receive distinct colors whenever they have an endpoint in common. An acyclic edge-coloring is a proper edge-coloring such that the subgraph induced by any two color classes is acyclic (equivalently, the edges of each cycle receive at least three distinct colors). The acyclic chromatic index, χ ′ a (G), is the smallest number of colors allowing an acyclic edge-coloring of G. In an edge-coloring ϕ, if a color α is used incident to a vertex v, then α is seen by v. For the maximum degree of G, we write ∆(G), and simply ∆ when the context is clear. Note that χ ′ a (G) ≥ ∆(G) for every graph G. When we write graph, we forbid loops and multiple edges. A planar graph is one that can be drawn in the plane with no edges crossing. A plane graph is a planar embedding of a planar graph. Cohen, Havet, and Müller [9,4] conjectured that there exists a constant M such that every planar graph with ∆(G) ≥ M has χ ′ a (G) = ∆(G). We prove this conjecture.

Main Theorem. All planar graphs G satisfy χ ′ a (G) ≤ max{∆, 4.2 * 10 14 }. Thus, χ ′ a (G) = ∆ for all planar graphs G with ∆ ≥ 4.2 * 10 14 .

We start by reviewing the history of acyclic coloring and acyclic edge-coloring.
An acyclic coloring of a graph G is a proper vertex coloring of G such that the subgraph induced by any two color classes is acyclic. The fewest colors that allow an acyclic coloring of G is the acyclic chromatic number, χ a (G). This concept was introduced in 1973 by Grünbaum [12], who conjectured that every planar graph G has χ a (G) ≤ 5. This is best possible, as shown (for example) by the octahedron. After a flurry of activity, Grünbaum's conjecture was confirmed in 1979 by Borodin [6]. This result contrasts sharply with the behavior of χ a (G) for a general graph G. Alon, McDiarmid, and Reed [1] found a constant C 1 such that for every ∆ there exists a graph G with maximum degree ∆ and χ a (G) ≥ C 1 ∆ 4/3 (log ∆) −1/3 . This construction is nearly best possible, since they also found a constant C 2 such that χ a (G) ≤ C 2 ∆ 4/3 for every graph G with maximum degree ∆. The best known upper bound is χ a (G) ≤ 2.835∆ 4/3 + ∆, due to Sereni and Volec [14].

Now we turn to acyclic edge-coloring. In contrast to the results above, there does exist a constant C 3 such that χ ′ a (G) ≤ C 3 ∆ for every graph G with maximum degree ∆. Using the Asymmetric Local Lemma, Alon, McDiarmid, and Reed [1] showed that we can take C 3 = 64. This constant has been improved repeatedly, and the current best bound is 3.74, due to Giotis et al. [11]. But this upper bound is still far from the conjectured actual value.

Conjecture 1. Every graph G satisfies χ ′ a (G) ≤ ∆ + 2.

Conjecture 1 was posed by Fiamčík [10] in 1978, and again by Alon, Sudakov, and Zaks [2] in 2001. The value ∆ + 2 is best possible, as shown (for example) by K n when n is even. In an acyclic edge-coloring at most one color class can be a perfect matching; otherwise, two perfect matchings will induce some cycle, by the Pigeonhole principle. Now the lower bound ∆ + 2 follows from an easy counting argument. For planar graphs, the best upper bounds are much closer to the conjectured value.
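As an aside (not from the paper), the counting argument can be checked mechanically for the smallest even complete graph: a brute-force search over all edge-colorings verifies that K 4 , with ∆ = 3, admits no acyclic edge-coloring with 4 colors but does admit one with 5, so χ ′ a (K 4 ) = ∆ + 2.

```python
from itertools import combinations, product

edges = list(combinations(range(4), 2))  # the 6 edges of K4

def cycles_k4():
    cyc = []
    for a, b, c in combinations(range(4), 3):                 # four triangles
        cyc.append([(a, b), (b, c), (a, c)])
    for order in [(0, 1, 2, 3), (0, 1, 3, 2), (0, 2, 1, 3)]:  # three 4-cycles
        cyc.append([tuple(sorted((order[i], order[(i + 1) % 4]))) for i in range(4)])
    return cyc

CYCLES = cycles_k4()

def acyclic_proper(coloring):
    col = dict(zip(edges, coloring))
    for e, f in combinations(edges, 2):      # proper: adjacent edges differ
        if set(e) & set(f) and col[e] == col[f]:
            return False
    # acyclic: every cycle sees at least 3 colors
    return all(len({col[e] for e in cyc}) >= 3 for cyc in CYCLES)

def colorable(k):
    return any(acyclic_proper(c) for c in product(range(k), repeat=len(edges)))

print(colorable(4), colorable(5))  # False True
```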
Cohen, Havet, and Müller [9] proved χ ′ a (G) ≤ ∆ + 25 whenever G is planar. The constant 25 has been frequently improved [3,4,13,16,15]. The current best bound is χ ′ a (G) ≤ ∆ + 6, due to Wang and Zhang [15]. However, for planar graphs with ∆ sufficiently large, Conjecture 1 can be strengthened further. This brings us to the previously mentioned conjecture of Cohen, Havet, and Müller [9].

Conjecture 2. There exists a constant M such that if G is planar and ∆ ≥ M , then χ ′ a (G) = ∆.

Our Main Theorem confirms Conjecture 2. For the proof we consider a hypothetical counterexample. Among all counterexamples we choose one with the fewest vertices, a minimal counterexample. In Section 2 we prove our Structural Lemma, which says that every 2-connected plane graph contains one of four configurations. In Section 3 we show that every minimal counterexample G must be 2-connected, and that G cannot contain any of these four configurations. This shows that no minimal counterexample exists, which finishes the proof of the Main Theorem.

The Structural Lemma

A vertex v is big if d(v) ≥ 8680. For a graph G, a vertex v is very big if d(v) ≥ ∆ − 4(8680). A k-vertex (resp. k + -vertex and k − -vertex ) is a vertex of degree k (resp. at least k and at most k). For a vertex v, a k-neighbor is an adjacent k-vertex; k + -neighbors and k − -neighbors are defined analogously. Similarly, we define k-faces, k + -faces, and k − -faces. For the length of a face f , we write ℓ(f ). A key structure in our proof, called a bunch, consists of two big vertices with many common 4 − -neighbors that are embedded as successive neighbors (for both big vertices); see Figure 1 for an example. Let x 0 , . . . , x t+1 denote successive neighbors of a big vertex v, that are also successive for a big vertex w. We require that d(x i ) ≤ 4 for all i ∈ [t], where [t] denotes {1, . . . , t}.
Further, for each i ∈ [t + 1], we require that the 4-cycle vx i wx i−1 is not separating; so, either the cycle bounds a 4-face, or it bounds the two 3-faces vx i x i−1 and wx i x i−1 . (Each 4-vertex in a bunch is incident to four 3-faces, each 3-vertex in a bunch is incident to a 4-face and two 3-faces, and each 2-vertex in a bunch is incident to two 4-faces.) For such a bunch, we call x 1 , . . . , x t its bunch vertices, and we call v and w the parents of the bunch. (When we refer to a bunch, we typically mean a maximal bunch.) For technical reasons, we exclude x 0 and x t+1 from the bunch. The length of the bunch is t. A horizontal edge is any edge x i x i+1 , with 1 ≤ i ≤ t − 1. Each path vx i w is a thread.

Figure 1: A bunch, with v and w as its parents.

Borodin et al. [8] constructed graphs in which every 5 − -vertex has at least two big neighbors. Begin with a truncated dodecahedron, and subdivide t times each edge that lies on two 10-faces. Now add a new vertex into every 4 + -face, making it adjacent to every vertex on the face boundary. The resulting plane triangulation has ∆ = 5t + 10, minimum degree 4, and every 5 − -vertex has two ∆-neighbors. This final fact motivates our Structural Lemma, by showing that if we omit from it (RC3) and (RC4), then the resulting statement is false. (To illustrate that we cannot omit both (RC3) and (RC4), the above construction can be generalized. Rather than truncating a dodecahedron, we can start by truncating any 3-connected plane graph with all faces of length 5 or 6; the rest of the construction is the same.) Now we state and prove our Structural Lemma.

Proof. We use discharging, assigning d(v) − 6 to each vertex v and 2ℓ(f ) − 6 to each face f . By Euler's formula, the sum of these charges is −12. We assume that G contains none of the four configurations and redistribute charge so that each vertex and face ends with nonnegative charge, a contradiction. We use the following three discharging rules.
(R1) Let v be a 5 − -vertex. If v has a single big neighbor w, then v takes 6 − d(v) from w. If v is in a bunch, then v takes 1 from each parent of the bunch. If v has exactly two big neighbors, and they are not its parents in a bunch, then v takes 1/2 from each of these big neighbors.

(R2) Let v be a 5 − -vertex with a big neighbor w, and let vw lie on a face f . If ℓ(f ) = 4, then v takes 1 from f . If ℓ(f ) ≥ 5 and v has a second big neighbor along f , then v takes 2 from f . Otherwise, if ℓ(f ) ≥ 5, then v takes 1 from f .

(R3) Every 5 − -vertex on a 3-face with two big neighbors takes 2 from a central "bank"; each big vertex gives 12 to the bank.

If a vertex or face ends with nonnegative charge, then it ends happy. We show that each vertex and face (and the bank) ends happy. Let V big denote the set of big vertices. The number of 5 − -vertices that take 2 from the bank is at most 2|E(G[V big ])|. Since G[V big ] is planar, |E(G[V big ])| < 3|V big |. So the bank ends happy, since it receives 12|V big | and gives away less than this.

Consider a face f .

1. ℓ(f ) ≥ 6. Rather than sending charge as in (R2), suppose that f sends 1 to each incident vertex, and then each big incident vertex sends 1 to its successor (in clockwise direction) around f . Now each 5 − -vertex incident to f receives at least as much as in (R2), and f ends happy since 2ℓ(f ) − 6 − ℓ(f ) ≥ 0.

2. ℓ(f ) = 5. If f sends charge to at most two incident vertices, then f ends happy, since 2ℓ(f ) − 6 − 2(2) = 0. So suppose f sends charge to at least three incident vertices. Now two of these receive only 1 from f . So f again ends happy, since 2ℓ(f ) − 6 − 2 − 2(1) = 0.

3. ℓ(f ) = 4. Because f sends charge to at most two incident vertices, it ends happy, since 2(4) − 6 − 2(1) = 0.

4. ℓ(f ) = 3. Now f ends happy, since it starts and ends with 0.

Now we consider vertices. A 5 − -vertex with no big neighbor would satisfy (RC1), so each 5 − -vertex has at least one big neighbor.
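Before continuing to the vertex cases, note that the initial charges in this proof always sum to −12 on a connected plane graph: Σ(d(v) − 6) + Σ(2ℓ(f ) − 6) = 2|E| − 6|V| + 4|E| − 6|F| = 6(|E| − |V| − |F|) = −12 by Euler's formula. A quick check (not from the paper) on two small plane graphs, using the fact that in a 2-connected plane graph a vertex of degree d lies on exactly d faces:

```python
from collections import Counter

def total_charge(faces):
    # Degrees can be read off the face lists: each vertex of degree d
    # appears on exactly d faces of a 2-connected plane graph.
    deg = Counter(v for f in faces for v in f)
    return sum(d - 6 for d in deg.values()) + sum(2 * len(f) - 6 for f in faces)

# Octahedron: vertices 0..5, poles 0 and 5, equator 1-2-3-4
octa = [(0,1,2), (0,2,3), (0,3,4), (0,4,1), (5,2,1), (5,3,2), (5,4,3), (5,1,4)]
# Cube: vertices 0..7
cube = [(0,1,2,3), (4,5,6,7), (0,1,5,4), (1,2,6,5), (2,3,7,6), (3,0,4,7)]

print(total_charge(octa), total_charge(cube))  # -12 -12
```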
Since G is 2-connected, it has minimum degree at least 2.

1. d(v) = 2. If v has only one big neighbor, w, then v receives 4 from w, so v finishes with 2 − 6 + 4 = 0. So assume that v has two big neighbors, w 1 and w 2 . Since G is 2-connected, the path w 1 vw 2 lies on two (distinct) faces. If one of these is a 3-face, then v takes 2 from the bank, at least 1 from its other incident 4 + -face, and 1/2 from each big neighbor; so v ends happy, since 2 − 6 + 2 + 1 + 2(1/2) = 0. If one incident face is a 5 + -face, then v takes 2 from it, again ending happy. So assume that both incident faces are 4-faces. Now v is in a bunch with its two big neighbors, so takes 1 from each. Thus v ends with 2 − 6 + 2(1) + 2(1) = 0.

2. d(v) = 3. If v has only one big neighbor, then it gives 3 to v, and v finishes with 3 − 6 + 3 = 0. If v has three big neighbors, then for each incident face f , either v takes 1 from f or 2 from the bank, and v ends happy. So assume v has exactly two big neighbors, w 1 and w 2 . If w 1 vw 2 lies on a 3-face, then v takes 2 from the bank and 1/2 from each w i , ending happy, since 3 − 6 + 2 + 2(1/2) = 0. So assume w 1 vw 2 lies on a 4 + -face. If it lies on a 5 + -face, or if v lies on two 4 + -faces, then v receives at least 2 from its incident faces and 1/2 from each w i , again ending happy. So assume v lies on a 4-face with w 1 and w 2 and also on two 3-faces. Now v is in a bunch with w 1 and w 2 , so takes 1 from each. Thus, v ends happy, since 3 − 6 + 1 + 2(1) = 0.

3. d(v) = 4. If v has only one big neighbor, w, then v receives 2 from w, and v finishes with 4 − 6 + 2 = 0. Suppose v has at least three big neighbors. So v has two big neighbors along at least two incident faces, f 1 and f 2 . If either f i is a 3-face, then v takes 2 from the bank and ends happy. Otherwise v takes at least 1 from each of f 1 and f 2 , so ends happy. So assume that v has exactly two big neighbors, w 1 and w 2 . Suppose that vw 1 and vw 2 are incident to the same face f .
If f is a 3-face, then v takes 2 from the bank and ends happy. If f is a 4 + -face, then v takes at least 1 from f and at least 1/2 from each big neighbor, ending happy since 4 − 6 + 1 + 2(1/2) = 0. So assume that w 1 and w 2 do not appear consecutively among the neighbors of v. If v is incident to any 4 + -face f , then v takes at least 1 from f and 1/2 from each of its big neighbors. Thus, we assume that v lies on four 3-faces. Now v is in a bunch with w 1 and w 2 , so takes 1 from each, and ends happy.

4. d(v) = 5. If v has only one big neighbor, w, then v receives 1 from w, and ends happy. If v has exactly two big neighbors, w 1 and w 2 , then v receives 1/2 from each, ending with 5 − 6 + 2(1/2) = 0. So assume that v has at least three big neighbors. By the Pigeonhole principle, v lies on at least one face, f , with two big neighbors. So v receives at least 1 from either f or from the bank. Thus, v finishes with at least 5 − 6 + 1 = 0.

5. d(v) ≥ 6, but v is not big. Now v ends happy, since d(v) − 6 ≥ 0.

6. v is a big vertex but not a very big vertex. Suppose that v has a 5 − -neighbor w such that v is the only big neighbor of w. Now x∈N (w) d(x) ≤ 4(8680) + d(v) ≤ 4(8680) + (∆ − 4(8680)) = ∆ ≤ k. Thus, w is an instance of (RC1), a contradiction. So v has no such 5 − -neighbor. As a result, v sends at most 1 to each of its neighbors. Since G has no instance of (RC3), we have n 5 + 2n 6 ≥ 36 (where n 5 and n 6 are defined as in (RC3)). Note that v sends at most 1/2 to each vertex counted by n 5 and sends no charge to each vertex counted by n 6 . Further, v sends at most 1 to each other neighbor. Also, v sends 12 to the bank. So v finishes with at least d(v) − 6 − 12 − (1/2)n 5 − 1(d(v) − n 5 − n 6 ) = −18 + (1/2)(n 5 + 2n 6 ) ≥ −18 + (1/2)(36) = 0.

7. v is a very big vertex. Let W denote the set of 5 − -vertices w for which v is the only big neighbor of w.
Since G has no instance of (RC2), the numbers of 2-vertices, 3 − -vertices, 4 − -vertices, and 5 − -vertices in W are (respectively) at most 8888, 17654, 26400, and 35136. So the total charge that v sends to these vertices is at most 8888 + 17654 + 26400 + 35136 = 88258. Since G has no copy of (RC4), we have n 5 + 2n 6 ≥ 141416. If w is counted by n 5 and is not in W, then v sends w at most 1/2. If w is counted by n 6 , then v sends w nothing. So v ends happy, since d(v) − 6 − 12 − 88258 − (1/2)(n 5 − 35136) − 1(d(v) − n 5 − n 6 ) = −18 − 88258 + (1/2)n 5 + (1/2)(35136) + n 6 = −70708 + (1/2)(n 5 + 2n 6 ) ≥ −70708 + (1/2)(141416) = 0.

Reducibility

In this section we use the Structural Lemma to prove the Main Theorem (its second statement follows immediately from its first, so we prove the first). Throughout, we assume the Main Theorem is false and let G be a counterexample with the fewest vertices. Let k = max{∆, 4.2 × 10^14}. We must show that χ ′ a (G) ≤ k. In Lemma 1, we show that G is 2-connected, so we can apply the Structural Lemma to G. Thus, it suffices to show that G contains none of (RC1), (RC2), (RC3), and (RC4). Lemma 1 forbids (RC1), and Lemma 2 and Corollary 3 forbid (RC2). For (RC3) the argument is longer, so we pull out a key piece of it as Lemma 4, before finishing the proof in Lemma 5. Finally, we handle (RC4) in Lemma 6, using a proof similar to that of Lemma 5.

Lemma 1. Let G be a minimal counterexample to the Main Theorem. Now G is 2-connected and has no instance of configuration (RC1). That is, every vertex v has w∈N (v) d(w) > k. In particular, every 5 − -vertex has a big neighbor.

Proof. Let G be a minimal counterexample. Note that G is connected, since otherwise one of its components is a smaller counterexample. Suppose G has a cut-vertex v, and let G 1 , G 2 , . . . denote the components of G − v. For each i, let H i = G[V (G i ) ∪ {v}], the subgraph formed from G i by adding all edges between v and V (G i ). By minimality, each G i has a good coloring, say ϕ i .
By permuting colors, we can assume that the sets of colors seen by v in the distinct ϕ i are disjoint. Now identifying the copies of v in each H i gives a good coloring of G, a contradiction. Thus, G must be 2-connected.

Suppose that G has a vertex v such that w∈N (v) d(w) ≤ k. By minimality, G − v has a good coloring ϕ. We greedily extend ϕ to each edge incident to v. We color these edges with distinct colors that do not already appear on some edge incident to a vertex w in N (v). This is possible precisely because w∈N (v) d(w) ≤ k. Since each color seen by v is seen by only one neighbor of v, the resulting extension of ϕ is proper and has no 2-colored cycle containing v; thus, it is acyclic. This contradiction shows that w∈N (v) d(w) > k for every vertex v.

Finally, suppose some 5 − -vertex v contradicts the final statement of the lemma. Now w∈N (v) d(w) ≤ d(v)(8680) ≤ 5(8680) ≤ k, a contradiction. Thus, the lemma is true.

Figure 2: A big vertex v and its set W of 5 − -neighbors with v as their unique big neighbor, as in Lemma 2.

Lemma 2. Fix an integer q such that q ≥ 100. Now G cannot have a vertex v such that d(v) − ∆ + |W| ≥ q + √(5q), where W is the set of 5 − -neighbors w of v such that x∈N (w)\v d(x) ≤ q and W is nonempty.

Proof. Suppose the lemma is false, and that q, G, and v witness this. Let W be the set of these neighbors of v; see Figure 2 for an example. Pick an arbitrary w 1 ∈ W (here we use that W is non-empty). By minimality, G − w 1 has a good coloring, ϕ. We can greedily extend this coloring to G − w 1 + vw 1 (and we still call it ϕ). Let w 1 , w 2 , . . . denote the vertices of W. Let S be the set of colors either not used incident to v or else used on an edge from v to a neighbor in W. For each neighbor w i , by symmetry we assume that ϕ(vw i ) = i. For each w i , let C i be the set of colors used on edges incident to vertices in N (w i ) \ v. For each i, let S i = S \ C i .
Set S i contains the colors that are potentially safe to use on an edge incident to w i , as we explain below. Let x 1 be an arbitrary neighbor in G of w 1 , other than v. We now show how to extend the good coloring to w 1 x 1 . This will complete the proof, since the same argument can be repeated to extend the coloring to each other uncolored edge of G incident to w 1 . If we color w 1 x 1 with any i ∈ S 1 , then any 2-colored cycle we create must use edges x 1 w 1 , w 1 v, vw i . Such a cycle is only possible if w i sees color 1. So we assume w i sees color 1, for every i ∈ S 1 (otherwise we can extend the coloring to w 1 x 1 ). Now for each i ∈ S 1 , define x i such that ϕ(w i x i ) = 1. (Note that x i = x 1 for at most one value of i, so we can essentially ignore this case.)

Our goal is to find indices i and j such that i ∈ S 1 and j ∈ S i and w j does not see color i. If we find such i and j, then we color w 1 x 1 with i and recolor w i x i with j. This creates no 2-colored cycles, as we now show. Any 2-colored cycle using w 1 x 1 must also use w 1 v and vw i (since i ∈ S 1 ). But no such 2-colored cycle exists, since w i no longer sees 1. Similarly, any 2-colored cycle using edge w i x i also uses w i v and vw j . Again, no such 2-colored cycle exists, since w j does not see i.

Now we show that we can find such i and j. Suppose not. So for each i ∈ S 1 and j ∈ S i , vertex w j sees color i. Thus, among the at most 4|W| edges incident to some w i , but not to v, each color i ∈ S 1 appears at least |S i | times. Since |S i | ≥ |S| − |C i | ≥ |W| − q, we get (|W| − q)² ≤ 4|W|. Solving this quadratic gives |W| ≤ q + 2 + √(4q + 4). But this quantity is less than q + √(5q) when q > 80, a contradiction.

Corollary 3. Configuration (RC2) cannot appear in a minimal counterexample G.
That is, G has no big vertex v such that among those 5 − -vertices with v as their unique big neighbor we have either (i) at least max{1, d(v) − ∆ + 8889} 2-vertices or (ii) at least max{1, d(v) − ∆ + 17655} 3 − -vertices or (iii) at least max{1, d(v) − ∆ + 26401} 4 − -vertices or (iv) at least max{1, d(v) − ∆ + 35137} 5 − -vertices.

Proof. This is a direct application of the previous lemma.

For a bunch B in a graph G, form G B from G by deleting all horizontal edges of B (recall that this does not delete x 0 x 1 and x t x t+1 ). Now B is long if, given any integer k ≥ 13 and any acyclic k-edge-coloring of G B , there exists an acyclic k-edge-coloring of G. Long bunches are crucial in our proofs that (RC3) and (RC4) cannot appear in a minimal counterexample to the Main Theorem.

Lemma 4. In every planar graph, every bunch of length at least 11 is long.

Proof. Consider a graph G with a bunch, B, of length at least 11. Fix an integer k ≥ 13. By assumption, G B has an acyclic k-edge-coloring; see Figure 3 for an example. Let v and w be the parents of the bunch and let x 1 , . . . , x t denote its vertices. We will reorder the threads of B so that (for each i ∈ [t − 1]) no color appears incident to both x i and x i+1 . (Technically, we reorder the pairs of colors on the edges vx i and x i w, while preserving, in each pair, which color is incident to v and which is incident to w; but this minor distinction will not trouble us.) We also require that the colors seen by x 2 not appear on edge x 0 x 1 and, similarly, the colors seen by x t−1 not appear on edge x t x t+1 . If we can reorder the threads to achieve this property, then it is easy to extend the k-edge-coloring to G, as follows. We greedily color the horizontal edges in any order, requiring that the color used on x i x i+1 not appear on any (colored) edge incident to x i−1 , x i , x i+1 , or x i+2 . Each of these vertices has two incident edges on a thread, for a total of 8 edges.
We must also avoid the colors on at most 4 horizontal edges. Thus, at most 12 colors are forbidden. Since k ≥ 13, we greedily complete the coloring. Given an acyclic k-edge-coloring of G B , suppose that we reorder the threads of G B and greedily extend the coloring to the horizontal edges of B. Call the resulting k-edge-coloring ϕ. Clearly, ϕ is a proper edge-coloring. We must also show that it has no 2-colored cycles. Suppose, to the contrary, that ϕ has a 2-colored cycle, C. By the condition on our ordering of the threads of B, cycle C must use at least two successive horizontal edges of B. But now one of these horizontal edges x i x i+1 of C must share a color with an edge incident to x i−1 or x i+2 , a contradiction. Thus, ϕ is an acyclic k-edge-coloring of G, as desired.

Hence, it suffices to show that we can reorder the threads of B so that no color appears incident to both x i and x i+1 . For each i ∈ [t], we think of putting some thread vx j w into position i (where also j ∈ [t]). We always put thread 1 into position 1 and thread t into position t. We will also initially put threads into the positions with i odd. Let O be the set of threads that we put in the odd positions (and thread t, whether or not t is odd); O is for odd. Note that |O| = ⌈(t + 1)/2⌉. Later, we put threads into the even positions. To do so, after putting threads into the odd positions, we build a bipartite graph, H(B, O), where the vertices of one part are the even-numbered positions (excluding t) and the vertices of the other part are those threads not yet placed. We add an edge between a thread vx i w and a position j if no color used on the thread is also used on a thread already in position j − 1 or j + 1, or used on x 0 x 1 when j = 2, or on x t x t+1 when j = t − 1; see Figure 6 for an example. (The notation H(B, O) is slightly misleading, since the edges of this graph depend not only on our choice of O, but also on which threads we put where.)
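The graph H(B, O) just described can be experimented with directly. The sketch below (illustrative only; the instance and all names are invented, not taken from the paper) builds a small bipartite compatibility graph and finds a maximum matching with the standard augmenting-path (Kuhn) algorithm, which realizes the existence that Hall's Theorem guarantees:

```python
# Sketch: bipartite matching between unplaced threads (left side) and
# even positions (right side), via Kuhn's augmenting-path algorithm.

def kuhn_matching(adj, n_left, n_right):
    """adj[u] = right-vertices compatible with left-vertex u.
    Returns a maximum matching as a dict left -> right."""
    match_r = [-1] * n_right  # right vertex -> its matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-routed elsewhere
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    for u in range(n_left):
        try_augment(u, set())
    return {match_r[v]: v for v in range(n_right) if match_r[v] != -1}

# Toy instance: 3 unplaced threads and 3 even positions.  An edge means
# "no color of the thread clashes with the placed threads around that
# position (or with x0x1 / xtxt+1 at the ends)".
adj = {0: [0, 1], 1: [1, 2], 2: [0, 2]}
m = kuhn_matching(adj, 3, 3)
assert len(m) == 3  # a perfect matching: every thread gets a position
```

A perfect matching here means every remaining thread can be placed; in the proof, the lower bounds on the degrees of positions in H(B, O) are exactly what make Hall's condition hold.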
Thus, to place the remaining threads, it suffices to find a perfect matching in H(B, O). When t ≥ 22, we can put threads into the odd positions essentially arbitrarily, and we are guaranteed a perfect matching in H(B, O) by a straightforward application of Hall's Theorem (this approach allows us to complete the proof, but requires that we replace 4.2 × 10^14 with a larger constant). For smaller t, we use a similar approach, but need more detailed case analysis.

We build a conflict graph, B conf , which has as its vertices the threads of B, that is, vx i w, for all i ∈ [t]. Two vertices are adjacent in B conf if their corresponding threads share a common color; see Figure 4 for an example. Note that B conf is a disjoint union of paths and cycles, since every edge in a thread of B is incident to either v or w. We refer interchangeably to a thread and its corresponding vertex in B conf . To form O we start with an empty set and repeatedly add vertices, subject to the following condition. Each component of B conf with a vertex in O must have all of its vertices in O, except for at most one component; if such a component exists, then its vertices that are in O must induce a path. Thus, at most two threads in O have neighbors in B conf that are not in O (and if exactly two, then each has at most one such neighbor).

First suppose that threads 1 and t are in different components of B conf . We begin by putting into O all threads in the smaller of these components, and then proceed to the other component, beginning with the thread in {1, t}. If threads 1 and t are in the same component of B conf , then we start by putting into O all vertices on a shortest path in B conf from 1 to t, and thereafter continue growing arbitrarily, such that when the set reaches size ⌈(t + 1)/2⌉ it satisfies the desired property. The only exception is if the shortest path from 1 to t has more than ⌈(t + 1)/2⌉ vertices.
In this case the component of B conf is a path; now we add a single edge in B conf joining its endpoints, and proceed as above, which allows us to take a shorter path from 1 to t, including the edge we just added. Thus, we have constructed the desired O.

Now we describe how to place the threads of O in the odd positions. (See Figure 5 for an example of these threads in position, and Figure 6 for the resulting graph H(B, O).) Recall that at most two threads in O have neighbors in B conf that are not in O (and if exactly two, then each has at most one such neighbor). Let r denote the size of each part in H(B, O). Since |O| = ⌈(t + 1)/2⌉ and t ≥ 11, we get that r = ⌊(t − 1)/2⌋ ≥ 5.

Suppose that 1 and t are the two threads in O with neighbors in B conf that are not in O. We put threads 1 and t in their positions and we put the other threads of O in the odd positions arbitrarily, except that if t is even, then we pick a thread for position t − 1 that does not conflict with thread t and does not conflict with the color on x t x t+1 (if it exists); this is easy, since t ≥ 11. Now we must put the remaining threads into the even positions. At most three threads are forbidden from position 2, since at most one thread has a color used on thread 1 and at most two threads have colors used on x 0 x 1 .

Suppose that exactly one of threads 1 and t has a neighbor in B conf that is not in O. By symmetry, assume that it is 1. Further, assume that also i ∈ O and thread i has a neighbor in B conf that is not in O (the case when no such i exists is easier). If t is odd, then we put thread i in position t − 2, and fill the remaining odd positions arbitrarily from O. If t is even, then we put thread i in position t − 2, and fill odd positions 3 through t − 3 arbitrarily from O, except that we require that the thread in position t − 3 not conflict with that in position t − 2 (this is possible, since at most two threads in O conflict with thread i, and |O| ≥ 7).
Again, we use Hall's Theorem to show that H(B, O) has a perfect matching. Now positions 2 and t − 1 each have degree at least r − 3 ≥ 2, and position t − 3 has degree at least r − 1 ≥ 4. All other positions have degree r.

Suppose that one of threads 1 and t has two neighbors in B conf that are not in O, and the other has no such neighbors. (This will happen when B conf consists of two cycles, each of length t/2.) By symmetry, assume that thread 1 has two neighbors in B conf that are not in O. We fill the odd positions arbitrarily with threads from O (here, and in the remaining cases, if t is even, then we also require that the thread in position t − 1 not conflict with thread t or with the color on x t x t+1 ). In H(B, O), position 2 has degree at least r − 4 ≥ 1. Also, position t − 1 has degree at least r − 2 ≥ 3. All other positions have degree r. So H(B, O) has a perfect matching.

Now we can assume that neither of threads 1 and t has neighbors in B conf that are not in O. Suppose that some thread, say i, in O has two neighbors in B conf that are not in O. We put thread i in position 3 and fill the remaining odd positions arbitrarily from O. Position 2 has degree at least r − 4 ≥ 1, and position 4 has degree at least r − 2 ≥ 3. If t is odd, then position t − 1 has degree at least r − 2 ≥ 3. All other positions have degree r. So H(B, O) has a perfect matching.

Finally, suppose that two threads, i and j (neither of which is 1 or t), each have a neighbor in B conf that is not in O. Now we put thread i in position 3 and thread j in position 5, and fill the remaining odd positions from the rest of O. Between them, threads i and j forbid at most two threads from position 4 and at most one thread each from positions 2 and 6. Thus, position 2 has degree at least r − 3 ≥ 2, position 4 has degree at least r − 2 ≥ 3, and position 6 has degree at least r − 1 ≥ 4. Once again H(B, O) has a perfect matching.

Proof of Lemma 5. Suppose G is a minimal counterexample that contains such a vertex v.
Form G ′ from G by deleting all horizontal edges of long bunches for which v is a parent. It suffices to find a good coloring of G ′ since, by definition, we can extend it to G. Let B be the longest bunch that has v as a parent, and let w be the other parent of this bunch. Let x be a bunch vertex in B. By minimality, we have a good coloring of G ′ − x; we can greedily extend this to G ′ − x + wx, and we call this coloring ϕ.

We construct a set of colors C good (v) as follows. Remove from [k] every color that is used on an edge incident to v leading to a vertex that is not a bunch vertex of some bunch with v as a parent. Further, for each such color α used on an edge vp we do the following. Remove either (i) all other colors used incident to p or (ii) every color used on an edge vu, whenever u is a 2-vertex incident to an edge colored with α; for each color α, we pick either (i) or (ii), giving preference to the option that removes fewer colors. Finally, we remove all colors used on edges incident to v that are in short bunches. This completes the construction of C good (v). Starting from ϕ, we remove all colors in C good (v) that are used on edges incident to v. We will gradually recolor all of these edges, as well as vx (first with a proper coloring, and eventually with an acyclic coloring).

Suppose that ϕ(wx) is already used on some edge vy in bunch B. To avoid creating any 2-colored cycles through x, it suffices to color vx with any color in C good (v) \ {ϕ(wx), ϕ(wy)}, which is easy. So assume ϕ(wx) is not used on any edge vy in B. (The hardest case is when ϕ(wx) is used on some edge incident to v leading to a non-bunch vertex. This case motivates most of our effort, so the reader will do well to keep it in mind.) Our goal is to find some color, say α, other than ϕ(wx), such that α ∈ C good (v) and α is already used on an edge wy of B. Given such an α, we use it to color vx, and color vy with some color in C good (v) \ {ϕ(wx), α}.
This ensures that each of vx and wx will never appear in a 2-colored cycle, no matter how we further extend the coloring. Such an α exists by the Pigeonhole principle, because length(B) + |C good (v)| ≥ k + 2. We defer the computation proving this to the end of the proof.

Now we greedily extend our coloring to a proper (not necessarily acyclic) k-edge-coloring of G ′ , essentially assigning the colors of C good (v) to the uncolored edges arbitrarily. We only require that the final two edges we color are in some bunch with at least two uncolored edges incident to v. (For each edge the only forbidden color is the one already used incident to the endpoint of degree 2.) If we get stuck on the final edge, then we can backtrack slightly and complete the coloring.

Now we modify this proper edge-coloring to make it acyclic. It is important to note that any 2-colored cycle must pass through v. Further, it must use some edges e 1 , e 2 , e 3 , e 4 , where v is the common endpoint of e 2 and e 3 and the common endpoints of edges e 1 and e 2 and of edges e 3 and e 4 are both 2-vertices (this follows from our construction of C good ). Suppose that such a 2-colored cycle exists, say with colors β 1 , β 2 . One of these colors must be in C good (v), since the 2-colored cycle did not exist before assigning these colors; say it is β 1 . Suppose that a second such 2-colored cycle exists, with colors γ 1 , γ 2 ; by symmetry, assume that γ 1 ∈ C good . To fix both cycles, we swap colors β 1 and γ 1 on the edges incident to v where they are used. We repeat this process until we have at most one 2-colored cycle through v. Suppose we have one, with edges colored β 1 , β 2 (and β 1 ∈ C good ); when we state the colors on edges of a thread, we always start with the edge incident to v. Now we look for some other thread with edges colored γ 1 , γ 2 (and γ 1 ∈ C good (v)) such that no thread incident to v has edges colored γ 2 , β 1 .
If we find such a thread, then we swap colors β 1 and γ 1 on the edges incident to v where they appear, and this fixes the 2-colored cycle. Since v is a parent in at most 35 bunches, at most 35 incident threads have edges colored γ 2 , β 1 , for some choice of γ 2 . Further, for each choice of γ 2 , v has at most 35 incident threads colored γ 1 , γ 2 , for some choice of γ 1 . Thus, at most 35² = 1225 of these threads are forbidden. However, it is straightforward to check that k − |C good (v)| ≤ 2000 (in fact the difference is much smaller, but this is unimportant). Now we have the desired thread incident to v since d(v) − 1225 − 2000 > 0. Thus, we can recolor the edge colored β 1 to get an acyclic edge-coloring of G ′ , as desired.

Now we prove that length(B) + |C good (v)| ≥ k + 2. Note that k − |C good (v)| + 2 ≤ 5n 5 + n 6 (n 5 + n 6 + 1 − s) + 10s + 2, where s is the number of short bunches with v as a parent. This is because each short bunch causes us to remove at most 10 colors, each vertex counted by n 5 causes us to remove at most 5 colors, and each counted by n 6 causes us to remove at most n 5 + n 6 + 1 − s colors. We must show that the right side of the latter inequality is at most length(B). In fact, we will show that it is no more than the average length of the long bunches (rounded up). Since the number of bunches is at most n 5 + n 6 , we want

(d(v) − (5n 5 + n 6 (n 5 + n 6 + 1 − s) + 10s)) / (n 5 + n 6 − s) > 5n 5 + n 6 (n 5 + n 6 + 1 − s) + 10s + 1,

which is implied by d(v) ≥ (5n 5 + n 6 (n 5 + n 6 + 1 − s) + 10s + 1)(n 5 + n 6 − s + 1). Since n 5 + 2n 6 ≤ 35, it suffices to have d(v) ≥ (5(35 − 2n 6 ) + n 6 ((35 − 2n 6 ) + n 6 + 1 − s) + 10s + 1)(35 − n 6 − s + 1). If we maximize the right side over all integers n 6 and s such that 0 ≤ n 6 ≤ 17 and 0 ≤ s ≤ 35 − n 6 (using nested For loops, for example), then we get 8680.

Lemma 6. Configuration (RC4) cannot appear in a minimal counterexample G.
That is, G cannot contain a very big vertex v such that n 5 + 2n 6 ≤ 141415, where n 5 and n 6 denote the numbers of 5 − -neighbors and 6 + -neighbors of v that are in no bunch with v as a parent.

Proof. Most of the proof is identical to that of Lemma 5, that (RC3) cannot appear in a minimal counterexample. The only difference is our argument showing that length(B) + |C good (v)| ≥ k + 2, which we give now. As in the previous lemma, it suffices to have d(v) ≥ (5n 5 + n 6 (n 5 + n 6 + 1 − s) + 10s + 1)(n 5 + n 6 − s + 1). By hypothesis, we have n 5 ≤ 141415 − 2n 6 . Now substituting for n 5 , we get that it suffices to have

d(v) ≥ (5(141415 − 2n 6 ) + n 6 ((141415 − 2n 6 ) + n 6 + 1 − s) + 10s + 1)((141415 − 2n 6 ) + n 6 − s + 1) = n 6 ^3 + 2n 6 ^2 s − 282822n 6 ^2 + n 6 s^2 − 282832n 6 s + 19996363820n 6 − 10s^2 + 707084s + 99991859616. (1)

We must upper bound the value of (1) over the region where 0 ≤ n 6 ≤ 70707 and 0 ≤ s ≤ n 5 + n 6 − 1 ≤ 141415 − n 6 . Since this domain is much larger than in the previous lemma, we relax the integrality constraints and solve a multivariable calculus problem. The only critical point for this function is outside the domain, so it suffices to find the maximum along the boundary. This occurs when s = 0 and n 6 ≈ 47134; the value is approximately 4.19 × 10^14. Recall that v is very big, so we have d(v) ≥ ∆ − 4(8680). Since we need d(v) ≥ 4.19 × 10^14, it suffices to require that ∆ ≥ 4.2 × 10^14. This completes the proof.

Structural Lemma. Let G be a 2-connected plane graph. Let k = max{∆, 5(8680)}.
Now G contains one of the following four configurations:

(RC1) a vertex v such that w∈N (v) d(w) ≤ k; or

(RC2) a big vertex v such that among those 5 − -vertices that have v as their unique big neighbor we have either (i) at least max{1, d(v) − ∆ + 8889} 2-vertices or (ii) at least max{1, d(v) − ∆ + 17655} 3 − -vertices or (iii) at least max{1, d(v) − ∆ + 26401} 4 − -vertices or (iv) at least max{1, d(v) − ∆ + 35137} 5 − -vertices; or

(RC3) a big vertex v such that n 5 + 2n 6 ≤ 35, where n 5 and n 6 denote the number of 5 − -neighbors and 6 + -neighbors of v that are in no bunch with v as a parent; or

(RC4) a very big vertex v such that n 5 + 2n 6 ≤ 141415, where n 5 and n 6 denote the number of 5 − -neighbors and 6 + -neighbors of v that are in no bunch with v as a parent.

Each 5 − -vertex w with v as its only big neighbor has x∈N (w)\v d(x) ≤ (d(w) − 1)8680. Thus, for 2-vertices, 3-vertices, 4-vertices, and 5-vertices, the sums are (respectively) at most 8680, 17360, 26040, and 34720. Now we are done, since 8680 + √(5 · 8680) ≤ 8889; 17360 + √(5 · 17360) ≤ 17655; 26040 + √(5 · 26040) ≤ 26401; and 34720 + √(5 · 34720) ≤ 35137.

Figure 3: An acyclic edge-coloring of G B , restricted to the edges incident to bunch vertices of B, where B is a bunch of length 12.

Similarly, at most three threads are forbidden from position t − 1. For all other positions, no threads are forbidden. Positions 2 and t − 1 have degree at least r − 3 ≥ 2 in H(B, O) and all other slots have degree r. Thus, by Hall's Theorem, H(B, O) has a perfect matching. We now use similar arguments to handle the other possibilities for which vertices of O have neighbors in B conf that are not in O.

Figure 4: The conflict graph, B conf . We label each vertex with the color the thread uses on the edge to its parent "above". Applying our algorithm to this instance of B conf yields O = {1, 7, 8, 9, 10, 11, 12}.

Figure 5: A partial acyclic edge-coloring of G B , with the threads in O in position.
Figure 6: The auxiliary graph H(B, O), with threads on top and positions on bottom, and a perfect matching shown in bold.

Lemma 5. Configuration (RC3) cannot appear in a minimal counterexample G. That is, G cannot contain a big vertex v such that n 5 + 2n 6 ≤ 35, where n 5 and n 6 denote the numbers of 5 − -neighbors and 6 + -neighbors of v that are in no bunch with v as a parent.

Figure 7: The desired acyclic edge-coloring of G B , ready to be extended greedily to the horizontal edges of B.

Acknowledgments

Thanks to Xiaohan Cheng for proposing acyclic edge-coloring as a possible problem for the 2016 Rocky Mountain-Great Plains Graduate Research Workshop in Combinatorics, and thanks to the organizers for posting the problem write-ups on the website. Reading that problem proposal got me started on this research. Thanks to Howard Community College for their hospitality during the research and writing of this paper. The idea of bunches, which was crucial to these results, comes from two papers of Borodin, Broersma, Glebov, and van den Heuvel [7, 8]. I am especially grateful that they prepared English versions of these papers (which originally appeared in Russian). The main idea in the proof of the structural lemma is that each 5 − -vertex is "sponsored" by its big neighbors. This was inspired by a paper with Marthe Bonamy and Luke Postle [5]. Thanks to Beth Cranston for helpful discussion. Finally, thanks to the National Security Agency for partially funding this research, under grant H98230-15-1-0013.
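As an aside (not part of the paper), the purely numerical inequalities used above are easy to verify by machine: the (RC2) thresholds of Corollary 3 against Lemma 2's bound q + √(5q), and the comparison ending Lemma 2's proof.

```python
import math

# (RC2) thresholds checked against Lemma 2's bound: a 5^-vertex w of
# degree d with a unique big neighbor has q = (d - 1) * 8680, and the
# corresponding threshold must be at least q + sqrt(5q).
for d, threshold in [(2, 8889), (3, 17655), (4, 26401), (5, 35137)]:
    q = (d - 1) * 8680
    assert q + math.sqrt(5 * q) <= threshold

# End of Lemma 2's proof: q + 2 + sqrt(4q + 4) < q + sqrt(5q) once q > 80,
# i.e. 2 + sqrt(4q + 4) < sqrt(5q).
assert all(2 + math.sqrt(4 * q + 4) < math.sqrt(5 * q)
           for q in range(81, 10000))
```

At q = 80 the two sides are equal (both are 20), which is why the proof needs q > 80 strictly.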
Chandran. Acyclic edge coloring of planar graphs. August 2009. Available at: https://arxiv.org/abs/0908.2237. Acyclic edgecoloring of planar graphs. M Basavaraju, L S Chandran, N Cohen, F Havet, T Müller, SIAM J. Discrete Math. 252M. Basavaraju, L. S. Chandran, N. Cohen, F. Havet, and T. Müller. Acyclic edge- coloring of planar graphs. SIAM J. Discrete Math., 25(2):463-478, 2011. Available at: https://hal.inria.fr/inria-00638448. Planar graphs of girth at least five are square (∆ + 2)-choosable. M Bonamy, D W Cranston, L Postle, M. Bonamy, D. W. Cranston, and L. Postle. Planar graphs of girth at least five are square (∆ + 2)- choosable. August 2015. Available at: https://arxiv.org/abs/1508.03663. On acyclic colorings of planar graphs. O V Borodin, Discrete Math. 253O. V. Borodin. On acyclic colorings of planar graphs. Discrete Math., 25(3):211-236, 1979. Stars and bunches in planar graphs. Part I: Triangulations. Originally published as Diskretn. O V Borodin, H J Broersma, A Glebov, J Van Den, Heuvel, Anal. Issled. Oper. Ser. 1. 82in RussianO. V. Borodin, H. J. Broersma, A. Glebov, and J. van den Heuvel. Stars and bunches in planar graphs. Part I: Triangulations. Originally published as Diskretn. Anal. Issled. Oper. Ser. 1 8 (2001) no. 2, 15-39 (in Russian). Available at: http://www.cdam.lse.ac.uk/Reports/Abstracts/cdam-2002-04.html. Stars and bunches in planar graphs. Part II: General planar graphs and colourings. O V Borodin, H J Broersma, A Glebov, J Van Den, Heuvel, Originally published asO. V. Borodin, H. J. Broersma, A. Glebov, and J. van den Heuvel. Stars and bunches in planar graphs. Part II: General planar graphs and colourings. Originally published as . Diskretn. Anal. Issled. Oper. Ser. 1. 84in RussianDiskretn. Anal. Issled. Oper. Ser. 1 8 (2001) no. 4, 9-33 (in Russian). Available at: http://www.cdam.lse.ac.uk/Reports/Abstracts/cdam-2002-05.html. Acyclic edge-colouring of planar graphs. N Cohen, F Havet, T Müller, Research Report] RR-6876, INRIAN. 
Cohen, F. Havet, and T. Müller. Acyclic edge-colouring of planar graphs. [Research Report] RR-6876, INRIA, March 2009. Available at: https://hal.inria.fr/inria-00367394/. The acyclic chromatic class of a graph. I Fiamčík, Math. Slovaca. 282I. Fiamčík. The acyclic chromatic class of a graph. Math. Slovaca, 28(2):139-145, 1978. Acyclic edge coloring through the Lovász local lemma. I Giotis, L Kirousis, K I Psaromiligkos, D M Thilikos, Theoret. Comput. Sci. 665I. Giotis, L. Kirousis, K. I. Psaromiligkos, and D. M. Thilikos. Acyclic edge coloring through the Lovász local lemma. Theoret. Comput. Sci., 665:40-50, 2017. Available at: https://arxiv.org/abs/1407.5374. Acyclic colorings of planar graphs. B Grünbaum, Israel J. Math. 14B. Grünbaum. Acyclic colorings of planar graphs. Israel J. Math., 14:390-408, 1973. An improved bound on acyclic chromatic index of planar graphs. Y Guan, J Hou, Y Yang, Discrete Math. 31310Y. Guan, J. Hou, and Y. Yang. An improved bound on acyclic chromatic index of planar graphs. Discrete Math., 313(10):1098-1103, 2013. Available at: https://arxiv.org/abs/1203.5186. A note on acyclic vertex-colorings. J.-S Sereni, J Volec, J. Comb. 74J.-S. Sereni and J. Volec. A note on acyclic vertex-colorings. J. Comb., 7(4):725-737, 2016. Available at: https://arxiv.org/abs/1312.5600. Further result on acyclic chromatic index of planar graphs. T Wang, Y Zhang, Discrete Appl. Math. 201T. Wang and Y. Zhang. Further result on acyclic chromatic index of planar graphs. Discrete Appl. Math., 201:228-247, 2016. Available at: https://arxiv.org/abs/1405.0713. A new upper bound on the acyclic chromatic indices of planar graphs. W Wang, Q Shu, Y Wang, European J. Combin. 342W. Wang, Q. Shu, and Y. Wang. A new upper bound on the acyclic chromatic indices of planar graphs. European J. Combin., 34(2):338-354, 2013. Available at https://arxiv.org/abs/1205.6869.
[]
[ "Experimental Assessment of Real-time PDCP-RLC V-RAN Split Transmission with 20 Gbit/s PAM4 Optical Access", "Experimental Assessment of Real-time PDCP-RLC V-RAN Split Transmission with 20 Gbit/s PAM4 Optical Access" ]
[ "A El Ankouri [email protected] ", "L Anet Neto ", "S Barthomeuf ", "A Sanhaji ", "B Le Guyader ", "K Grzybowski ", "S Durel ", "P Chanclou ", "\n1) Orange Labs, 2 Avenue Pierre Marzin22300LannionFrance\n", "\n) IMT Atlantique\n655 Avenue du Technopôle29200Plouzané\n" ]
[ "1) Orange Labs, 2 Avenue Pierre Marzin22300LannionFrance", ") IMT Atlantique\n655 Avenue du Technopôle29200Plouzané" ]
[]
We experimentally demonstrate real-time, end-to-end transmission of 3GPP's option 2 functional split RAN interface with virtualized central units through up to 20km using a 20Gbit/s PAM4 link and 10GHz bandwidth optics.
10.1109/ecoc.2018.8535192
[ "https://arxiv.org/pdf/1902.06437v1.pdf" ]
53,429,468
1902.06437
0da00ff70c4a2ddd588aa30e1ca9ab7893dfbb9b
Introduction

Long Term Evolution (LTE) was conceived following a clear trend to push network intelligence towards its edges, with the whole radio protocol stack being processed in the Evolved Node B (eNB) and the backhaul interface (S1) connecting the Evolved Packet Core (EPC) to the antenna sites (Fig. 1, top). Distributed Radio Access Networks (D-RAN) offered real advantages to operators, such as the ability to precisely target capacity increase needs 1. Later, thanks to the rearrangement of the eNB functional blocks, we witnessed the emergence of centralized RAN (C-RAN) topologies, as opposed to D-RAN, with benefits such as reduced footprints at the antenna sites 2. The main limitation of C-RAN is imposed, however, by the cost and availability of suitable low-layer split fronthaul connectivity, due to its stringent bit-rate and latency requirements 2. It was clear that a new interface had to be conceived to accommodate the bandwidths expected for 5G while allowing some degree of network centralization. Based on yet other distributions of the radio protocol stack, different functional splits were proposed, fostered in particular by the rise of software-defined radio solutions.
Indeed, those suit particularly well the highest layers of the radio stack, which are bound by less strict latency constraints. This new virtual RAN (V-RAN) could enable a much faster optimization and evolution of the network thanks to easily (re)configurable and manageable instances on agnostic hardware. Several possible splitting options have been defined by different standardization and industry groups. The 3GPP has defined a high-layer split interface, referred to as V1 and F1 for 4G and 5G respectively, between the Packet Data Convergence Protocol (PDCP) and Radio Link Control (RLC) blocks 3. In Fig. 1 (bottom), a topology with a high-layer split is shown, where a central unit (CU) hosts virtualized layer 3 and part of the layer 2 functions. The CU is connected to a distributed unit (DU) with a V1/F1 interface. The DU, with lower layer 2 and higher layer 1 blocks, is connected to the radio unit (RU), hosting the remainder of the PHY, through a low-layer split (not shown). Previously, we have assessed a V1-like interface in point-to-point and point-to-multipoint passive optical network (PON) topologies 4. Here, we exploit a new solution based on an advanced modulation format for the optical access segment. Indeed, standardization activities on fixed optical access now focus on beyond-XG(S)-PON 5 systems. New multi-level formats such as Pulse Amplitude Modulation (PAM) are good candidates to attain 20 Gbit/s with 10 GHz optics. We experimentally demonstrate an end-to-end real-time transmission of a PDCP-RLC split interface through aggregation and access networks. The EPC and CU are virtualized and managed inside a virtualization environment in a host server. The aggregation segment between the virtual CU (vCU) and the switch in Fig. 1 is emulated by an Ethernet impairment engine that degrades the transmitted packets with variable latency, packet jitter and packet loss linked to bit error rate (BER).
In the access segment, between the switch and the antenna site, we implement a physical layer transmission using real-time 20 Gbit/s PAM4 over 20 km of fibre.

Experimental Setup

Fig. 2 shows our experimental setup, which can be divided into three distinct parts. Our radio plane runs on a server hosting the LTE mobile functions and is implemented on top of a single-node CentOS Openstack virtualization environment. The EPC and radio protocol stack are aggregated in virtual machines, where each machine corresponds to a set of functions performed by an LTE node. For example, the EPC virtual machines, which offer the LTE core network services, contain the domain name system (DNS) server, the mobility management entity (MME), the serving and packet data network gateway (SPGW) and the home subscriber server (HSS). The EPC connects to the CU via the backhaul (S1) interface. The CU contains layer 3 and layer 2 up to the PDCP block of the LTE protocol stack. It generates a V1 interface, which goes out of the server over Ethernet and through the fixed aggregation and access networks before looping back to the same server. In our setup, we don't have a low-layer split interface and thus the DU and RU compose one single functional block. Also, the PHY layers of the RU and user equipment (UE) are abstracted. However, since our main objective is not to assess the mobile transmission through the air interface (Uu) but to evaluate the transmission of a high-layer split through an optical transmission system, such abstraction can be made without loss of generality. The UE node is also implemented on a virtual machine and provisioned in the same server. It is important to notice that even though the various nodes are installed on the same server, they are logically separated and can only communicate via the existing mobile interfaces, also shown in Fig. 2. The second part of our experimental setup refers to the emulation of the aggregation network.
This is done with a network impairment engine that can introduce latency, packet jitter and BER to the V1 interface. The V1 interface then goes to a 10 Gb Ethernet (10 GbE) switch that would also be connected to other cell sites in an actual deployed network. The traffic of those additional sites for both up and downlink (UL/DL) is created with an Ethernet traffic generator, allowing us to reach a symmetric throughput of 10.3125 Gbit/s. The overloading traffic and the V1 interface under evaluation are distinguished with different VLAN tags. Finally, the fixed access plane is represented essentially by the real-time PAM4 encoder and decoder, the optical transceivers and a point-to-point transmission through 20 km of standard single mode fibre (SSMF). We use SFP+ (10G Small Form-factor Pluggable transceiver) modules and evaluation boards to provide connectivity adaptation between the 10 GbE switches and the inputs/outputs of our PAM4 bench. We focus only on the downlink. The uplink does not go through the PAM4 bench and is short-circuited between the evaluation boards. We generate a de-correlated copy of the 10 GbE stream containing the V1 payload and the overloading traffic, and then we inject both streams as the most and least significant bits (MSB and LSB) inputs of our PAM4 encoder. The PAM4 signal is amplified with an electrical driver before modulating a 10 GHz Directly Modulated Laser (DML) emitting at 1311 nm. The optical signal goes through 20 km of SSMF, representing the typical length of a Fixed Access Network segment. An attenuator adjusts the power at the input of an 8 GHz APD (Avalanche Photo Diode) with embedded transimpedance amplifier (TIA). The received electrical signal then feeds the PAM4 decoder, which separates the LSB and MSB flows of the PAM4 signal according to a previous report 6.
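The PAM4 encoder combines the MSB and LSB streams into one of four amplitude levels per symbol. The sketch below illustrates the idea in Python; the specific level values and the Gray-style bit-to-level assignment are assumptions for illustration, since the mapping implemented in the actual hardware encoder is not specified here.

```python
# Illustrative PAM4 mapping: each (MSB, LSB) bit pair selects one of
# four amplitude levels. Gray ordering (00, 01, 11, 10) is assumed so
# that adjacent levels differ in a single bit.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(msb_bits, lsb_bits):
    """Combine two binary streams into a PAM4 symbol sequence."""
    return [LEVELS[(m, l)] for m, l in zip(msb_bits, lsb_bits)]

def pam4_decode(symbols):
    """Separate a PAM4 symbol sequence back into MSB and LSB streams."""
    inv = {v: k for k, v in LEVELS.items()}
    pairs = [inv[s] for s in symbols]
    return [m for m, _ in pairs], [l for _, l in pairs]

msb = [1, 0, 1, 1]
lsb = [0, 0, 1, 1]
syms = pam4_encode(msb, lsb)          # -> [3, -3, 1, 1]
assert pam4_decode(syms) == (msb, lsb)
```

Gray ordering is a common choice because a decision error between adjacent amplitude levels then corrupts only one of the two bits.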
Results and Discussions

In order to assess the effects of the PAM4 modulation in our transmission, we take an optical back-to-back (OB2B) NRZ Ethernet transmission as reference and we compare it with the MSB flow of our PAM4 transmission after 20 km of SSMF, without forward error correction. To evaluate our access transmission and emulate different possible degradation phenomena coming from the aggregation network, we expressly degrade the V1 interface BER and we introduce a normally distributed latency variation in order to insert some packet jitter in our system. Fig. 3 shows the user datagram protocol (UDP) packet error rate (PER) variation between the EPC and the UE for different bit-rates and introduced BER values, for a packet size of 1200 bytes. We can see that the PER of our reference scenario is below 0.3% but increases for bit-rates beyond 150 Mb/s, which is due to limited resources in our virtual machines. We can also see that the PAM4 MSB optical access degrades the PER by ~0.8pp (percentage points) compared to the reference scenario. The introduction of a BER of 10^-6 degrades the PER by about 0.5pp in both NRZ (Ethernet) and PAM4 transmissions compared to their respective transmissions without degradation. In all cases, the PER remains below 5% for bit-rates up to 150 Mb/s, which corresponds to the useful throughputs that can be transmitted in a 20 MHz, 2x2 multiple-input, multiple-output (MIMO) LTE signal with 64QAM. We have also measured the packet jitter between CU and DU with respect to the additional jitter introduced by our emulation engine (not shown here for the sake of conciseness). We found out that the CU-DU jitter varies linearly with the induced jitter and that the additional packet jitter coming from the different equipment in our access transmission chain is ~120 µs.
Also, the measured packet jitter values are roughly the same for the NRZ and PAM4 transmissions, meaning that the optical PHY signal jitter coming from the PAM modulation does not impact the packet jitter of the system. Fig. 4 depicts the impact of the emulated packet jitter on the PER for different bit-rates, with an inset of the transmitted PAM4 eye diagram. We fixed the mean introduced one-way latency to 2 ms and considered two values of packet jitter (latency standard deviation), namely 0.66 ms and 0.10 ms. The effect of the jitter is particularly noticeable and stronger for higher bit-rates. Whereas an introduced jitter of 0.1 ms imposes a linear PER degradation with respect to the bit-rate, a jitter of 0.66 ms imposes a more abrupt signal degradation. For instance, we could measure ~4pp higher PER for an introduced packet jitter of 0.66 ms at 20 Mb/s and ~6pp for 50 Mb/s. Finally, the degradations introduced by the PAM4 modulation are relatively low compared to an ordinary Ethernet transmission. The measured PER is 0.5pp and 0.3pp higher with PAM4 for induced jitters of respectively 0.66 ms and 0.10 ms and bit-rates up to 150 Mb/s.

Conclusions

In this work, we experimentally demonstrated the feasibility of transporting a high-layer mobile PDCP-RLC split interface over an Ethernet aggregation network and a 20 km optical access network using real-time PAM4 modulation. We have also investigated the impacts of different impairments that could come from the aggregation network, namely the BER and packet jitter.

Fig. 1: D-RAN (top), C-RAN (center) and split V-RAN (bottom) topologies (left) and their splitting points (right).
Fig. 2: Experimental setup.
Fig. 3: PER variation with bit-rate.
Fig. 4: PER vs bit-rate for different induced jitters.
Acknowledgements

This work was supported by the European H2020-ICT-2016-2 project 5G-PHOS.

References

Small Cell Virtualization Functional Splits and Use Cases, 159.07.02, Jan. 2016.
A. Pizzinat et al., J. Light. Technol., vol. 33, no. 5, pp. 1077-1083, Mar. 2015.
Z. Tayq et al., paper Th3A.2, OFC 2017.
ITU-T Rec. G.9807.1 (XGS-PON), 2016.
S. Barthomeuf et al., paper M1B.1, OFC 2018.
[]
[ "Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates", "Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates" ]
[ "Tiberiu Harko \nDepartment of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China\n", "Francisco S N Lobo \nDepartment of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China\n", "M K Mak \nDepartment of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China\n" ]
[ "Department of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China", "Department of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China", "Department of Mathematics\nCentro de Astronomia e Astrofísica da Universidade de Lisboa\nDepartment of Computing and Information Management\nInstitute of Vocational Education\nUniversity College London\nGower Street, Campo Grande, Edificío C8, Chai WanWC1E 6BT, 1749-016London, Lisboa, Hong Kong, Hong KongUnited Kingdom, Portugal, P. R. China" ]
[]
In this paper, the exact analytical solution of the Susceptible-Infected-Recovered (SIR) epidemic model is obtained in a parametric form. By using the exact solution we investigate some explicit models corresponding to fixed values of the parameters, and show that the numerical solution reproduces exactly the analytical solution. We also show that the generalization of the SIR model, including births and deaths, described by a nonlinear system of differential equations, can be reduced to an Abel type equation. The reduction of the complex SIR model with vital dynamics to an Abel type equation can greatly simplify the analysis of its properties. The general solution of the Abel equation is obtained by using a perturbative approach, in a power series form, and it is shown that the general solution of the SIR model with vital dynamics can be represented in an exact parametric form.
10.1016/j.amc.2014.03.030
[ "https://arxiv.org/pdf/1403.2160v1.pdf" ]
14,509,477
1403.2160
930161e0a54a68e09f59a12a10fca480bbaa5f8b
Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates

Tiberiu Harko, Francisco S. N. Lobo, M. K. Mak

arXiv:1403.2160v1 [q-bio.PE], 10 Mar 2014

Keywords: Susceptible-Infected-Recovered (SIR) epidemic model; exact solution; Abel equation

In this paper, the exact analytical solution of the Susceptible-Infected-Recovered (SIR) epidemic model is obtained in a parametric form. By using the exact solution we investigate some explicit models corresponding to fixed values of the parameters, and show that the numerical solution reproduces exactly the analytical solution. We also show that the generalization of the SIR model, including births and deaths, described by a nonlinear system of differential equations, can be reduced to an Abel type equation.
The reduction of the complex SIR model with vital dynamics to an Abel type equation can greatly simplify the analysis of its properties. The general solution of the Abel equation is obtained by using a perturbative approach, in a power series form, and it is shown that the general solution of the SIR model with vital dynamics can be represented in an exact parametric form.

I. INTRODUCTION

The outbreak and spread of diseases have been studied for many years. In fact, John Graunt was the first scientist who attempted to quantify causes of death systematically [1], and his analysis of causes of death ended up with a theory that is now well established among modern epidemiologists. Daniel Bernoulli was the first mathematician to propose a mathematical model describing an infectious disease. In 1760 he modelled the spread of smallpox [2], which was prevalent at the time, and argued the advantages of variolation [3]. A simple deterministic (compartmental) model predicting the behavior of epidemic outbreaks was formulated by A. G. McKendrick and W. O. Kermack in 1927 [4]. In their mathematical epidemic model describing the spread of diseases, called the Susceptible-Infected-Recovered (SIR) model, or the xyz model, McKendrick and Kermack proposed the following nonlinear system of ordinary differential equations [4]:

dx/dt = −βx(t)y(t),   (1)
dy/dt = βx(t)y(t) − γy(t),   (2)
dz/dt = γy(t),   (3)

with the initial conditions x(0) = N1 ≥ 0, y(0) = N2 ≥ 0 and z(0) = N3 ≥ 0, Ni ∈ ℝ, i = 1, 2, 3, and where the infection rate β and the mean recovery rate γ are positive constants. In this model a fixed population with only three compartments is considered: susceptible (S) x(t), infected (I) y(t), and recovered (R) z(t), respectively.
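As a quick consistency check, the system (1)-(3) can be integrated numerically. The sketch below uses a standard fourth-order Runge-Kutta step (not one of the methods from the literature cited later) and the parameter values β = 0.01, γ = 0.02 with initial data (20, 15, 10), which are the values adopted later in the paper; it verifies the conservation law x + y + z = N derived in Section II.

```python
# Integrate the SIR system (1)-(3) with a plain RK4 step and check
# that x + y + z stays equal to the total population N (Eq. (11)).
def sir_rhs(state, beta, gamma):
    x, y, z = state
    return (-beta * x * y, beta * x * y - gamma * y, gamma * y)

def rk4_step(state, h, beta, gamma):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = sir_rhs(state, beta, gamma)
    k2 = sir_rhs(add(state, k1, h / 2), beta, gamma)
    k3 = sir_rhs(add(state, k2, h / 2), beta, gamma)
    k4 = sir_rhs(add(state, k3, h), beta, gamma)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

beta, gamma = 0.01, 0.02          # rates used later in the paper
state = (20.0, 15.0, 10.0)        # (N1, N2, N3)
N = sum(state)
for _ in range(2000):             # integrate from t = 0 to t = 20
    state = rk4_step(state, 0.01, beta, gamma)

assert abs(sum(state) - N) < 1e-9          # Eq. (11): x + y + z = N
assert state[0] < 20.0 and state[2] > 10.0  # susceptibles fall, recovered rise
```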
The compartments used for this model consist of three classes: a) x(t) represents the number of individuals not yet infected with the disease at time t, or those susceptible to the disease; b) y(t) denotes the number of individuals who have been infected with the disease, and are capable of spreading the disease to those in the susceptible category; and c) z(t) is the compartment of the individuals who have been infected and then recovered from the disease. Those in this category are not able to be infected again, or to transmit the infection to others. For this model the initial conditions x(0) = N1, y(0) = N2 and z(0) = N3 are not independent, since they must satisfy the condition N1 + N2 + N3 = N, where N is the total fixed number of individuals in the given population. The constants β and γ give the transition rates between compartments. The transition rate between S (Susceptible) and I (Infected) is βI, where β is the contact rate, which takes into account the probability of getting the disease in a contact between a susceptible and an infectious subject [5-8]. The transition rate between I (Infected) and R (Recovered) is γ, which has the meaning of the rate of recovery or death. If the duration of the infection is denoted by D, then γ = 1/D, since an individual experiences one recovery in D units of time [5-8]. Since β and γ are interpreted as transition rates (probabilities), their range is 0 ≤ β ≤ 1 and 0 ≤ γ ≤ 1, respectively. In many infectious diseases, such as in the case of measles, there is an arrival of new susceptible individuals into the population. For this type of situation deaths must be included in the model.
By considering a population characterized by a death rate µ and a birth rate equal to the death rate, the nonlinear system of differential equations representing this epidemic model is given by [5-8]

dx/dt = −βxy + µ(N − x),   (4)
dy/dt = βxy − (γ + µ)y,   (5)
dz/dt = γy − µz,   (6)

and it must be considered with the initial conditions x(0) = N1, y(0) = N2 and z(0) = N3, with the constants Ni, i = 1, 2, 3, satisfying the condition N1 + N2 + N3 = N. The nonlinear systems of differential equations given by Eqs. (1)-(3) and (4)-(6) represent modified three-dimensional competitive Lotka-Volterra type models [5,7]. These systems can also be related to the so-called T systems, introduced recently in [9], which have the form

dx/dt = a0(y − x),   (7)
dy/dt = (c0 − a0)x − a0 xz,   (8)
dz/dt = −b0 z + xy,   (9)

and which are chaotic for a0 = 2.1, b0 = 0.6 and c0 = 30. The mathematical properties of the T-system were studied in [10-12]. In recent years, the mathematical epidemic models given by Eqs. (1)-(3) and (4)-(6) were investigated numerically in a number of papers, with the use of a wide variety of methods and techniques, namely, the Adomian decomposition method [13], the variational iteration method [14], the homotopy perturbation method [15], and the differential transformation method [16], respectively. Very recently, a stochastic epidemic-type model with enhanced connectivity was analyzed in [17], and an exact solution of the model was obtained. With the use of a quantum mechanical approach the master equation was transformed via a quantum spin operator formulation. The time-dependent density of infected, recovered and susceptible populations for random initial conditions was calculated exactly. An exact solution of a particular case of the SIR and Susceptible-Infected-Susceptible (SIS) epidemic models was presented in [18].
Note that in the latter SIS model, the infected return to the susceptible class on recovery, because the disease confers no immunity against reinfection. An SIR epidemic model with nonlinear incidence rate and time delay was investigated in [19], while an age-structured SIR epidemic model with time periodic coefficients was studied in [20]. In fact, the standard pair approximation equations for the Susceptible-Infective-Recovered-Susceptible (SIRS) model of infection spread on a network of homogeneous degree k predict a thin phase of sustained oscillations for parameter values that correspond to diseases that confer long lasting immunity. Indeed, the latter SIRS model has been thoroughly studied, with the results strongly suggesting that its stochastic Markovian version does not yield sustained oscillations [21]. A stochastic model of infection dynamics based on the Susceptible-Infective-Recovered model, where the distribution of the recovery times can be tuned, interpolating between exponentially distributed recovery times, as in the standard SIR model, and recovery after a fixed infectious period, was investigated in [22]. For large populations, the spectrum of fluctuations around the deterministic limit of the model was obtained analytically. An epidemic model with stage structure was introduced in [23], with the period of infection partitioned into the early and later stages according to the developing process of infection. The basic reproduction number of this model is determined by the method of next generation matrix. The global stability of the disease-free equilibrium and the local stability of the endemic equilibrium have been obtained, with the global stability of the endemic equilibrium determined under the condition that the infection is not fatal. Lyapunov functions for classical SIR and SIS epidemiological models were introduced in [24], and the global stability of the endemic equilibrium states of the models was thereby established.
A new Lyapunov function for a variety of SIR and SIRS models in epidemiology was introduced in [25]. Traveling wave trains in generalized two-species predator-prey models and two-component reaction-diffusion equations were considered in [26], and the stability of the fixed points of the traveling wave ordinary differential equations in the usual "spatial" variable was analyzed. For general functional forms of the nonlinear prey birthrate/prey death rate or reaction terms, a Hopf bifurcation occurs at two different critical values of the traveling wave speed. Subcritical Hopf bifurcations yield more complex post-bifurcation dynamics in the wavetrains of the system of the partial differential equations. In order to investigate the post-bifurcation dynamics, all the models were integrated numerically, and chaotic regimes were characterized by computing power spectra, autocorrelation functions, and fractal dimensions, respectively. It is the purpose of this paper to present the exact analytical solution of the SIR epidemic model. The solution is obtained in an exact parametric form. The generalization of the SIR model including births and deaths, described by Eqs. (4)-(6), is also considered, and we show that the nonlinear system of differential equations governing the generalized SIR model can be reduced to an Abel type equation. The general solution of the Abel equation is obtained by using an iterative method and, once the solution of this ordinary differential equation is known, the general solution of the SIR model with vital dynamics can be obtained, similarly to the standard SIR model, in an exact parametric form. The present paper is organized as follows. The exact solution of the SIR epidemic model is presented in Section II. The nonlinear system of differential equations governing the SIR model with deaths is reduced to an Abel type equation, and the general solution of the model equations is obtained in an exact parametric form in Section III.
We conclude our results in Section IV.

II. THE EXACT SOLUTION OF THE SIR EPIDEMIC MODEL

Adding Eqs. (1)-(3) yields the following differential equation,

d[x(t) + y(t) + z(t)]/dt = 0,   (10)

which can be immediately integrated to give

x(t) + y(t) + z(t) = N, ∀t ≥ 0,   (11)

where x(t) > 0, y(t) > 0 and z(t) > 0, ∀t ≥ 0. Hence, the total population N = N1 + N2 + N3 must be an arbitrary positive integration constant. This is consistent with the model, in which only a fixed population N with three compartments is considered.

A. The general evolution equation for the SIR model

As a next step in our study, we differentiate Eq. (1) with respect to the time t, thus obtaining the following second order differential equation,

dy/dt = −(1/β)[x″/x − (x′/x)²],   (12)

where the prime represents the derivative with respect to the time t. By inserting Eqs. (1) and (12) into Eq. (2), the latter is transformed into

x″/x − (x′/x)² + γ(x′/x) − βx′ = 0.   (13)

Eliminating y from Eqs. (1) and (3) yields

dz/dt = −(γ/β)(x′/x),   (14)

which can be integrated to give

x = x0 e^(−(β/γ)z),   (15)

where x0 is a positive integration constant. Evaluating Eq. (15) at t = 0 provides the following value for the integration constant,

x0 = N1 e^((β/γ)N3).   (16)

From Eq. (15), it is easy to obtain the relation

x′ = −x0 (β/γ) z′ e^(−(β/γ)z).   (17)

Now, differentiation of Eq. (14) with respect to the time t leads to the second order differential equation

z″ = −(γ/β)[x″/x − (x′/x)²].   (18)

By inserting Eqs. (14), (17) and (18) into Eq. (13), the latter becomes the basic differential equation describing the spread of a non-fatal disease in a given population,

z″ = x0 β z′ e^(−(β/γ)z) − γz′.   (19)

Eq. (19) is equivalent to the system of differential equations Eqs. (1)-(3), respectively.

B. The general solution of the evolution equation of the SIR model

In order to solve the nonlinear differential equation Eq.
(19), we introduce a new function u(t) defined as

u = e^(−(β/γ)z).   (20)

At t = 0, u has the initial value

u(0) = u0 = e^(−(β/γ)N3).   (21)

Substituting Eq. (20) into Eq. (19), we obtain the following second order differential equation for u,

u(d²u/dt²) − (du/dt)² + (γ − x0βu)u(du/dt) = 0.   (22)

Next we introduce the new function φ, defined as

φ = dt/du.   (23)

With the help of the transformation given by Eq. (23), Eq. (22) becomes a Bernoulli type differential equation,

dφ/du + (1/u)φ = (γ − x0βu)φ²,   (24)

with the general solution given by

φ = 1/[u(C1 − γ ln u + x0βu)],   (25)

where C1 is an arbitrary integration constant. In view of Eqs. (23) and (25), we obtain the integral representation of the time as

t − t0 = ∫_{u0}^{u} dξ / [ξ(C1 − γ ln ξ + x0βξ)],   (26)

where t0 is an arbitrary integration constant, and one may choose t0 = 0 without loss of generality. Hence we have obtained the complete exact solution of the system of Eqs. (1)-(3), describing the SIR epidemic model, given in a parametric form by

x = x0 u,   (27)
y = (γ/β) ln u − x0 u − C1/β,   (28)
z = −(γ/β) ln u,   (29)

with u taken as a parameter. Now, adding Eqs. (27), (28) and (29), we obtain

x + y + z = −C1/β.   (30)

Comparing Eq. (30) with Eq. (11), we have

C1 = −βN,   (31)

and hence C1 is a negative integration constant. Eqs. (26)-(31) give the exact parametric solution of the SIR system of three differential equations, with u taken as parameter. The solution describes exactly the dynamical evolution of the SIR system for any given initial conditions x(0) = N1, y(0) = N2 and z(0) = N3, and for arbitrary values of β and γ. The numerical values of the two constants in the solution, u0 and C1, are determined by the model parameters and the initial conditions. Any change in the numerical values of the initial conditions and/or of the rate parameters will not affect the validity of the solution.
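The parametric solution (26)-(31) can be checked directly: at u = u0 it must reproduce the initial data, the sum x + y + z must equal N for every u, and since dx/du = x0 while dt/du = φ, the ratio dx/dt must satisfy Eq. (1) identically. A sketch of these checks in Python, using the illustrative parameter values β = 0.01, γ = 0.02 and initial data (20, 15, 10) from the paper's numerical comparison:

```python
import math

beta, gamma = 0.01, 0.02
N1, N2, N3 = 20.0, 15.0, 10.0
N = N1 + N2 + N3
x0 = N1 * math.exp(beta / gamma * N3)      # Eq. (16)
u0 = math.exp(-beta / gamma * N3)          # Eq. (21)
C1 = -beta * N                             # Eq. (31)

def solution(u):
    """Parametric solution (27)-(29) evaluated at parameter value u."""
    x = x0 * u
    y = gamma / beta * math.log(u) - x0 * u - C1 / beta
    z = -gamma / beta * math.log(u)
    return x, y, z

# at u = u0 the initial conditions are recovered
x, y, z = solution(u0)
assert abs(x - N1) < 1e-9 and abs(y - N2) < 1e-9 and abs(z - N3) < 1e-9

# x + y + z = N for every u, and dx/dt = -beta*x*y holds identically,
# since dx/dt = (dx/du)/(dt/du) = x0 * u * (C1 - gamma*ln(u) + x0*beta*u)
for u in (u0, 0.5 * u0, 0.1 * u0):
    x, y, z = solution(u)
    assert abs(x + y + z - N) < 1e-9
    dxdt = x * (C1 - gamma * math.log(u) + x0 * beta * u)
    assert abs(dxdt + beta * x * y) < 1e-9
```

Note that u decreases from u0 towards 0 as t increases, since z is monotonically increasing.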
In order to compare the results of the present analytical solution with the results of the numerical integration of the system of differential equations Eqs. (1)-(3), we adopt the initial values and the numerical values of the coefficients considered in [16]. Hence we take N1 = 20, N2 = 15, and N3 = 10, respectively. For the parameter β we take the value β = 0.01, while γ = 0.02. The variations of x(t), y(t) and z(t) obtained by both numerical integration and the use of the analytical solution are represented, as a function of time, in Fig. 1. As one can see from the figure, the analytical solution perfectly reproduces the results of the numerical integration. The exact solution we found here is also in complete agreement with the numerical results obtained in [13]-[16]. The variation of z(t) for different initial conditions x(0), y(0) and z(0) is represented in Fig. 2.

III. THE SIR MODEL WITH EQUAL DEATH AND BIRTH RATES

In the present Section, after a brief discussion of the general properties of the SIR model with births and deaths, we will show that the time evolution of the SIR model with equal birth and death rates can be obtained from the study of a single first order Abel type differential equation. An iterative approach for solving this equation is also presented. We consider the extension of the simple SIR model given by Eqs. (1)-(3) by including equal rates of births and deaths. In this case the system of differential equations we are going to consider is given by Eqs. (4)-(6). It is easy to see that adding Eqs. (4)-(6) and integrating the resulting equation yields the following result for this model,

x(t) + y(t) + z(t) = N + N0 e^(−µt),   (32)

where N0 is an arbitrary integration constant. In order that the total number of individuals be constant,

x(t) + y(t) + z(t) = N, ∀t ≥ 0,   (33)

we must fix the arbitrary integration constant N0 to zero, N0 = 0.
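Equation (32) can be illustrated numerically: summing Eqs. (4)-(6) gives d(x + y + z)/dt = µ[N − (x + y + z)], so a trajectory started off the x + y + z = N plane relaxes to it exponentially. A sketch with illustrative values only (µ = 0.05 and an initial excess N0 = 5 are arbitrary choices, not values from the paper):

```python
# Integrate Eqs. (4)-(6) from an initial state with x + y + z != N and
# verify Eq. (32): the total population follows N + N0*exp(-mu*t).
import math

def rhs(s, beta, gamma, mu, N):
    x, y, z = s
    return (-beta * x * y + mu * (N - x),
            beta * x * y - (gamma + mu) * y,
            gamma * y - mu * z)

def rk4(s, h, *args):
    def add(a, k, c):
        return tuple(ai + c * ki for ai, ki in zip(a, k))
    k1 = rhs(s, *args)
    k2 = rhs(add(s, k1, h / 2), *args)
    k3 = rhs(add(s, k2, h / 2), *args)
    k4 = rhs(add(s, k3, h), *args)
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

beta, gamma, mu, N = 0.01, 0.02, 0.05, 45.0
s = (25.0, 15.0, 10.0)           # total 50 = N + N0 with N0 = 5
h, steps = 0.01, 4000            # integrate up to t = 40
for _ in range(steps):
    s = rk4(s, h, beta, gamma, mu, N)

expected = N + 5.0 * math.exp(-mu * h * steps)
assert abs(sum(s) - expected) < 1e-6
```

This is why N0 = 0 is the only choice compatible with a strictly constant total population: any off-plane excess or deficit decays at the rate µ.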
The variation of x(t), y(t) and z(t) for the SIR model with vital dynamics is represented in Fig. 3.

A. Qualitative properties of the SIR model with vital dynamics

Both the simple SIR model (µ = 0) and the SIR model with vital dynamics (µ ≠ 0) are two dimensional dynamical systems in the x + y + z = N invariant plane. There is no chaotic behaviour in the plane, essentially because the existence and uniqueness theorem prevents (in dimension 2) the existence of transversal homoclinic points. The dynamics of both systems is simple, and well understood, including the bifurcation that takes place at βN = γ + µ. There is also no chaotic behaviour for µ ≠ 0 outside the x + y + z = N invariant plane because, as one can see from Eqs. (4)-(6) for the time evolution of x + y + z, trajectories with initial conditions off the invariant plane tend exponentially fast to the invariant plane; there can be no attractor or invariant set, chaotic or otherwise, outside the invariant plane. For µ = 0, y = 0 is a line of degenerate equilibria, for all parameter values. For µ ≠ 0 and βN < γ + µ, the disease-free state x = N, y = 0, z = 0 is a global attractor (a stable node), while for βN > γ + µ,

x* = (γ + µ)/β , (34)

y* = (µ/β) [Nβ/(γ + µ) − 1] , (35)

and

z* = (γ/β) [Nβ/(γ + µ) − 1] , (36)

is a global attractor (a stable node/focus). In the two-dimensional invariant plane x + y + z = N the basic equations of the SIR model with deaths and births are

dx/dt = µN − βxy − µx , (37)

and

dy/dt = βxy − (γ + µ)y , (38)

respectively. Let x* and y* denote the equilibrium point of the system. In the following we rigorously show that the equilibrium (x*, y*) is globally asymptotically stable, i.e., all initial conditions with x(0) > 0 and y(0) > 0 give solutions that converge to this equilibrium point. We will prove this result by using the Lyapunov direct method. As a first step we scale the variables by the population size, so that x → x/N and y → y/N, respectively.
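These statements are easy to probe numerically. The sketch below (parameter values are the Fig. 3 ones; helper names are ours) checks that (x*, y*, z*) of Eqs. (34)-(36) is a fixed point of Eqs. (4)-(6), that it lies on the invariant plane x + y + z = N, and that the time derivative of the function L(x, y) = x − x* ln x + y − y* ln y used in the stability argument that follows is non-positive on a grid of states.

```python
beta, gamma, mu, N = 0.01, 0.02, 0.20, 45.0

# Endemic equilibrium, Eqs. (34)-(36); here beta*N = 0.45 > gamma + mu = 0.22
x_s = (gamma + mu) / beta
y_s = (mu / beta) * (N * beta / (gamma + mu) - 1.0)
z_s = (gamma / beta) * (N * beta / (gamma + mu) - 1.0)

def rhs(x, y, z):
    """Right-hand sides of Eqs. (4)-(6), with equal birth and death rates mu."""
    return (mu * N - beta * x * y - mu * x,
            beta * x * y - (gamma + mu) * y,
            gamma * y - mu * z)

def L_dot(x, y):
    """dL/dt along the in-plane flow (37)-(38), for L = x - x* ln x + y - y* ln y."""
    fx = mu * N - beta * x * y - mu * x
    fy = beta * x * y - (gamma + mu) * y
    return (1.0 - x_s / x) * fx + (1.0 - y_s / y) * fy
```

Algebraically dL/dt = −(µ + βy*)(x − x*)²/x, so the grid check below should never find a positive value.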
Next we introduce the function L(x, y) defined as [8]

L(x, y) = x − x* ln x + y − y* ln y , x, y ∈ (0, 1) . (39)

Then, with the use of Eqs. (37) and (38), it immediately follows that

dL(x, y)/dt = ∇L(x, y) · (dx/dt, dy/dt) < 0 , x, y ∈ (0, 1) . (40)

L(x, y) is therefore a Lyapunov function for the basic SIR model with vital dynamics. The existence of a Lyapunov function L ensures the global asymptotic stability of the equilibrium point (x*, y*) [8].

B. The evolution equation of the SIR model with vital dynamics

We derive now the basic differential equation describing the dynamics of the SIR model with equal birth and death rates. Differentiating Eq. (6) with respect to the time t first gives the second order differential equation

y′ = (1/γ)(z″ + µz′) . (41)

Inserting Eq. (6) into Eq. (4) leads to the following differential equation,

x′ = −(β/γ) x (z′ + µz) + µ(N − x) . (42)

Now, substituting Eqs. (6) and (41) into Eq. (5) yields the differential equation

βx = (z″ + µz′)/(z′ + µz) + γ + µ . (43)

Then, differentiating Eq. (43) with respect to the time t gives the third order differential equation

βx′ = (z‴ + µz″)/(z′ + µz) − [(z″ + µz′)/(z′ + µz)]² . (44)

Finally, substituting Eqs. (43) and (44) into Eq. (42) gives the basic differential equation describing the SIR model with equal rates of deaths and births,

(z‴ + µz″)/(z′ + µz) − [(z″ + µz′)/(z′ + µz)]² = −[(z″ + µz′)/(z′ + µz) + γ + µ][(β/γ)(z′ + µz) + µ] + βµN . (45)

Eq. (45) must be integrated by taking into account the initial conditions of the SIR model with equal rates of deaths and births, given by z(0) = N_3, z′(0) = γN_2 − µN_3, and z″(0) = βγN_1N_2 − γ(γ + 2µ)N_2 + µ²N_3, respectively.

C. Reduction of the evolution equation for the SIR model with vital dynamics to an Abel type equation

In order to simplify Eq. (45), we introduce a set of transformations defined as

ψ = z′ + µz , (46)

ψ′ = z″ + µz′ , (47)

ψ″ = z‴ + µz″ . (48)

By substituting Eqs. (46)-(48) into Eq.
(45) leads to a second order differential equation,

(ψ′/ψ)² − ψ″/ψ = (β/γ)ψ′ + µ(ψ′/ψ) + β(µ/γ + 1)ψ + µ(µ + γ − βN) , (49)

which can be rewritten as

(dψ/dt)² − ψ(d²ψ/dt²) = (µψ + cψ²)(dψ/dt) + bψ² + aψ³ , (50)

where we have denoted

a = β(µ/γ + 1) , (51)

b = µ(µ + γ − βN) , (52)

and

c = β/γ , (53)

respectively. The initial conditions for the integration of Eq. (50) are ψ(0) = γN_2 and ψ′(0) = γ[βN_1 − (γ + µ)]N_2. With the help of the transformation defined as

w = dt/dψ = 1/ψ′ , (54)

Eq. (50) takes the form of the standard first order Abel differential equation of the first kind,

dw/dψ = (aψ² + bψ)w³ + (cψ + µ)w² − (1/ψ)w , (55)

with the corresponding initial condition given by w(γN_2) = 1/{γ[βN_1 − (γ + µ)]N_2}. By introducing a new function v defined as

v = wψ = ψ/ψ′ , (56)

the Abel Eq. (55) reduces to the form

dv/dψ = (a + b/ψ)v³ + (c + µ/ψ)v² , (57)

which is equivalent to the non-linear system of Eqs. (4)-(6), and must be integrated with the initial condition v(γN_2) = 1/[βN_1 − (γ + µ)]. The mathematical properties of the Abel type equation, and its applications, have been intensively investigated in a series of papers [27]-[31]. Note that when the average death rate µ is zero, b = 0 in view of Eq. (52), and the Abel Eq. (57) becomes a separable differential equation of the form

dv/dψ = av³ + cv² . (58)

Eq. (58) is equivalent to the system of differential equations Eqs. (1)-(3) describing the SIR epidemic model without deaths; we shall not present its simple solution here, since the complete exact solution of Eqs. (1)-(3) was already given in Section II.

D. The iterative solution of the Abel equation

By introducing a new independent variable Ψ defined as

Ψ = ln v , (59)

Eq. (57) takes the form

dΨ/dψ = (a + b/ψ)e^{2Ψ} + (c + µ/ψ)e^{Ψ} , (60)

or, equivalently,

dΨ/dψ = (a + b/ψ)[1 + 2Ψ + (2Ψ)²/2! + (2Ψ)³/3! + ...] + (c + µ/ψ)[1 + Ψ + Ψ²/2! + Ψ³/3! + ...] , (61)

and must be integrated with the initial condition given by Ψ(γN_2) = −ln|βN_1 − (γ + µ)|. In the limit of small Ψ, in the zero order of approximation Eq.
(61) becomes a first order linear differential equation of the form

dΨ_0/dψ = [2a + c + (2b + µ)/ψ]Ψ_0 + a + c + (b + µ)/ψ , (62)

with the general solution given by

Ψ_0(ψ) = e^{(2a+c)ψ} ψ^{2b+µ} { C_0 + ∫ e^{−(2a+c)ψ} ψ^{−(2b+µ)} [a + c + (b + µ)/ψ] dψ } , (63)

where C_0 is an arbitrary constant of integration, determined from the initial condition. The integral in Eq. (63) can be expressed in closed form in terms of the exponential integral E_n(x) = ∫_1^∞ e^{−xt}/t^n dt. In order to obtain the solution of the Abel equation in the next order of approximation, we write Eq. (61) as

dΨ/dψ = [2a + c + (2b + µ)/ψ]Ψ + a + c + (b + µ)/ψ + Σ_{k=2}^∞ [2^k(a + b/ψ) + (c + µ/ψ)] Ψ^k/k! . (64)

To obtain the first order approximation Ψ_1 of the solution of the Abel equation, we substitute in Eq. (64) the non-linear terms containing Ψ with Ψ_0. Therefore the first order approximation Ψ_1 for Eq. (61) or Eq. (64) satisfies the following linear differential equation,

dΨ_1/dψ = [2a + c + (2b + µ)/ψ]Ψ_1 + a + c + (b + µ)/ψ + Σ_{k=2}^∞ [2^k(a + b/ψ) + (c + µ/ψ)] Ψ_0^k/k! , (65)

with the general solution given by

Ψ_1(ψ) = e^{(2a+c)ψ} ψ^{2b+µ} { C_0 + ∫ e^{−(2a+c)ψ} ψ^{−(2b+µ)} [a + c + (b + µ)/ψ + Σ_{k=2}^∞ (2^k(a + b/ψ) + (c + µ/ψ)) Ψ_0^k/k!] dψ }
= Ψ_0(ψ) + e^{(2a+c)ψ} ψ^{2b+µ} ∫ e^{−(2a+c)ψ} ψ^{−(2b+µ)} Σ_{k=2}^∞ [2^k(a + b/ψ) + (c + µ/ψ)] Ψ_0^k/k! dψ . (66)

The n-th order approximation of the Abel equation satisfies the following linear differential equation,

dΨ_n/dψ = [2a + c + (2b + µ)/ψ]Ψ_n + a + c + (b + µ)/ψ + Σ_{k=2}^∞ [2^k(a + b/ψ) + (c + µ/ψ)] Ψ_{n−1}^k/k! , (67)

with the general iterative solution given by

Ψ_n(ψ) = e^{(2a+c)ψ} ψ^{2b+µ} { C_0 + ∫ e^{−(2a+c)ψ} ψ^{−(2b+µ)} [a + c + (b + µ)/ψ + Σ_{k=2}^∞ (2^k(a + b/ψ) + (c + µ/ψ)) Ψ_{n−1}^k/k!] dψ }
= Ψ_0(ψ) + e^{(2a+c)ψ} ψ^{2b+µ} ∫ e^{−(2a+c)ψ} ψ^{−(2b+µ)} Σ_{k=2}^∞ [2^k(a + b/ψ) + (c + µ/ψ)] Ψ_{n−1}^k/k!
dψ . (68)

Therefore the general solution of the Abel equation can be obtained as

Ψ(ψ) = lim_{n→∞} Ψ_n(ψ) . (69)

Once the function Ψ is known, we obtain immediately v(ψ) = e^{Ψ(ψ)} and w(ψ) = e^{Ψ(ψ)}/ψ, respectively. Therefore the time evolution of the SIR model with deaths can be obtained, as a function of the parameter ψ, as

t − t_0 = ∫ w dψ = ∫ [e^{Ψ(ψ)}/ψ] dψ . (70)

In terms of the variable ψ, Eq. (42) for x becomes

dx/dψ = −e^{Ψ(ψ)}(c + µ/ψ)x + µN e^{Ψ(ψ)}/ψ , (71)

and must be integrated with the initial condition x(γN_2) = N_1. Eq. (71) has the general solution given by

x(ψ) = e^{−∫_1^ψ e^{Ψ(ξ)}(cξ+µ)/ξ dξ} { ∫_1^ψ [µN e^{Ψ(χ) + ∫_1^χ e^{Ψ(ξ)}(cξ+µ)/ξ dξ}/χ] dχ − ∫_1^{γN_2} [µN e^{Ψ(χ) + ∫_1^χ e^{Ψ(ξ)}(cξ+µ)/ξ dξ}/χ] dχ + N_1 e^{∫_1^{γN_2} e^{Ψ(ξ)}(cξ+µ)/ξ dξ} } . (72)

With the use of Eq. (47), Eq. (41) for y takes the form

y′ = (1/γ)ψ′ , (73)

with the general solution

y(ψ) = ψ/γ + Y_0 , (74)

where Y_0 is an arbitrary constant of integration. Evaluating Eq. (74) at t = 0, corresponding to ψ|_{t=0} = γN_2, we obtain

y(γN_2) = y(t = 0) = N_2 = N_2 + Y_0 , (75)

a condition that fixes the integration constant Y_0 as Y_0 = 0. Finally, in the new variable ψ, Eq. (46) for z takes the form

ψ = (dz/dψ)(dψ/dt) + µz , (76)

or, equivalently,

dz(ψ)/dψ = −µ [e^{Ψ(ψ)}/ψ] z(ψ) + e^{Ψ(ψ)} , (77)

which must be integrated with the initial condition z(γN_2) = N_3; its general solution gives z(ψ), Eq. (78). Eqs. (70), (72), (74), and (78) give the general solution of the SIR model with vital dynamics, in a parametric form, with ψ taken as the parameter.

In Fig. 4 we present the comparison of the exact numerical solution for y(t) with the different order approximations obtained by iteratively solving the Abel Eq. (61). After twenty steps the iterative and the numerical solutions approximately overlap.

IV. CONCLUSIONS

In the present paper we have considered two versions of the SIR model, describing the spread of an epidemic in a given population. For the SIR model without births and deaths the exact analytical solution was obtained in a parametric form.
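The limit in Eq. (69) is a fixed-point (Picard-type) iteration, where each step solves a linear equation sourced by the previous iterate. The same convergence mechanism can be illustrated on a toy example, y′ = y with y(0) = 1, whose Picard iterates are polynomials converging to e^t (all names below are ours; this illustrates the iteration idea, not Eq. (68) itself).

```python
import math

def picard_exp(n_iter):
    """Picard iteration y_{k+1}(t) = 1 + integral_0^t y_k(s) ds for y' = y, y(0) = 1.
    Iterates are stored as polynomial coefficient lists [c0, c1, ...]."""
    y = [1.0]                                           # y_0(t) = 1
    for _ in range(n_iter):
        integral = [0.0] + [c / (i + 1.0) for i, c in enumerate(y)]
        integral[0] += 1.0                              # add the initial condition
        y = integral
    return y

def poly_eval(coeffs, t):
    return sum(c * t ** i for i, c in enumerate(coeffs))
```

After n steps the iterate is the degree-n Taylor polynomial of e^t, so the error at t = 1 falls off factorially, mirroring the rapid convergence seen in Fig. 4.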
The main properties of the exact solution were investigated numerically, and it was shown that it reproduces exactly the numerical solution of the model equations. For the SIR model with births and deaths we have shown that the non-linear system of differential equations governing it can be reduced to the Abel Eq. (57). This Abel equation can be easily studied by means of semianalytical/numerical methods, thus leading to a significant simplification in the study of the model. Once the general solution of the Abel equation is known, the general solution of the SIR epidemic model with deaths can be obtained in an exact parametric form. The exact solution is important because biologists could use it to run experiments to observe the spread of infectious diseases with natural initial conditions, and through such experiments one can learn how to control the spread of epidemics.

FIG. 1: Variation of x(t) (solid curve), y(t) (dotted curve) and z(t) (dashed curve), obtained by the numerical integration of the differential equations Eqs. (1)-(3), and with the use of the analytical solution, for β = 0.01 and γ = 0.02. The initial conditions are x(0) = N_1 = 20, y(0) = N_2 = 15, and z(0) = N_3 = 10, respectively. The numerical and the analytical solutions completely overlap.

FIG. 2: Time variation of z(t) obtained with the use of the analytical solution, for β = 0.01 and γ = 0.02, and for different initial conditions: x(0) = N_1 = 20, y(0) = N_2 = 15, and z(0) = N_3 = 10 (solid curve); x(0) = N_1 = 19, y(0) = N_2 = 16, and z(0) = N_3 = 11 (dotted curve); x(0) = N_1 = 22, y(0) = N_2 = 11, and z(0) = N_3 = 11 (dashed curve); and x(0) = N_1 = 24, y(0) = N_2 = 9, and z(0) = N_3 = 9 (long dashed curve), respectively.

FIG. 3: Variation of x(t) (solid curve), y(t) (dotted curve) and z(t) (dashed curve), obtained by the numerical integration of the differential equations Eqs. (4)-(6) of the SIR model with equal death and birth rates, for β = 0.01, γ = 0.02, and µ = 0.20.
The initial conditions are x(0) = N_1 = 20, y(0) = N_2 = 15, and z(0) = N_3 = 10, respectively.

FIG. 4: Comparison of y(t), obtained from the numerical integration of the equations of the SIR model with vital dynamics (solid curve), and of y_n(t), obtained by iteratively solving the Abel Eq. (61), for different orders of approximation: n = 1 (dotted curve), n = 2 (small dashed curve), n = 3 (medium dashed curve), n = 5 (dashed curve), n = 10 (long dashed curve), and n = 20 (ultra-long dashed curve). The initial value of y(t) is y(0) = 15.

Acknowledgments

We would like to thank the two anonymous referees for comments and suggestions that helped us to significantly improve our manuscript. We also express our special thanks to Ana Nunes for a careful reading of the manuscript, and for very helpful comments and suggestions.

[1] John Graunt, Natural and political observations made upon the bills of mortality (1662).
[2] D. Bernoulli, Essai d'une nouvelle analyse de la mortalite causee par la petite verole et des avantages de l'inoculation pour la prevenir, Mem. Math. Phys. Acad. Roy. Sci., Paris, 1 (1760).
[3] D. Bernoulli, Reflexions sur les avantages de l'inoculation, Mercure de France, June issue, 173 (1760).
[4] W. O. Kermack and A. G. McKendrick, Contribution to the mathematical theory of epidemics, Proc. Roy. Soc. Lond. A 115, 700-721 (1927).
[5] F. Brauer and C. Castillo-Chávez, Mathematical Models in Population Biology and Epidemiology, Springer, New York (2001).
[6] J. D. Murray, Mathematical Biology: I. An Introduction, Springer-Verlag, New York, Berlin, Heidelberg (2002).
[7] D. J. Daley and J. Gani, Epidemic Modeling: An Introduction, Cambridge University Press, Cambridge (2005).
[8] F. Brauer, P. van den Driessche, and J. Wu (Editors), Lecture Notes in Mathematical Epidemiology, Springer-Verlag, Berlin, Heidelberg (2008).
[9] G. Tigan, Analysis of a dynamical system derived from the Lorenz system, Sci. Bull. Politehnica Univ. Timisoara, Transactions on Mathematics and Physics 50, 61-72 (2005).
[10] G. Tigan and D. Opris, Analysis of a 3D chaotic system, Chaos, Solitons & Fractals 36, 1315 (2008).
[11] R. A. Van Gorder and S. Roy Choudhury, Analytical Hopf bifurcation and stability analysis of T system, Commun. Theor. Phys. 55, 609 (2011).
[12] R. Zhang, Bifurcation analysis for T system with delayed feedback and its application to control of chaos, Nonlinear Dynamics 72, 629 (2013).
[13] J. Biazar, Solution of the epidemic model by Adomian decomposition method, Applied Mathematics and Computation 173, 1101-1106 (2006).
[14] M. Rafei, H. Daniali and D. D. Ganji, Variational iteration method for solving the epidemic model and the prey and predator problem, Applied Mathematics and Computation 186, 1701-1709 (2007).
[15] M. Rafei, D. D. Ganji and H. Daniali, Solution of the epidemic model by homotopy perturbation method, Applied Mathematics and Computation 187, 1056-1062 (2007).
[16] Abdul-Monim Batiha and Belal Batiha, A new method for solving epidemic model, Australian J. of Basic and Applied Sciences 5, 3122-3126 (2011).
[17] H. T. Williams, I. Mazilu and D. A. Mazilu, Stochastic epidemic-type model with enhanced connectivity: exact solution, Journal of Statistical Mechanics: Theory and Experiment 01, 01017 (2012).
[18] G. Shabbir, H. Khan and M. A. Sadiq, An exact solution of a particular case of SIR and SIS epidemic models, arXiv:1012.5035 (2010).
[19] R. Xu and Z. Ma, Global stability of a SIR epidemic model with nonlinear incidence rate and time delay, Nonlinear Analysis: Real World Applications 10, 3175-3189 (2009).
[20] T. Kuniya, Existence of a nontrivial periodic solution in an age-structured SIR epidemic model with time periodic coefficients, Applied Mathematics Letters 27, 15-20 (2014).
[21] G. Rozhnova and A. Nunes, SIRS dynamics on random networks: Simulations and analytical models, in J. Zhou (editor), Complex Sciences 4, page 792, Springer, Berlin, Heidelberg (2009).
[22] A. J. Black, A. J. McKane, A. Nunes, and A. Parisi, Stochastic fluctuations in the susceptible-infective-recovered model with distributed infectious periods, Physical Review E 80, 021922 (2009).
[23] J. Li, Z. Ma, and F. Zhang, Stability analysis for an epidemic model with stage structure, Nonlinear Analysis: Real World Applications 9, 1672-1679 (2008).
[24] A. Korobeinikov and G. C. Wake, Lyapunov Functions and Global Stability for SIR, SIRS, and SIS Epidemiological Models, Applied Mathematics Letters 15, 955-960 (2002).
[25] S. M. O'Regan, T. C. Kelly, A. Korobeinikov, M. J. A. O'Callaghan, and A. V. Pokrovskii, Lyapunov functions for SIR and SIRS epidemic models, Applied Mathematics Letters 23, 446-448 (2010).
[26] S. C. Mancas and R. S. Choudhury, Periodic and Chaotic Traveling Wave Patterns in Reaction-Diffusion/Predator-Prey Models with General Nonlinearities, Far East Journal of Dynamical Systems 11, 117-142 (2009).
[27] T. Harko and M. K. Mak, Relativistic dissipative cosmological models and Abel differential equation, Computers and Mathematics with Applications 46, 849-853 (2003).
[28] M. K. Mak and T. Harko, New method for generating general solution of Abel differential equations, Computers and Mathematics with Applications 43, 91-94 (2002).
[29] M. K. Mak, H. W. Chan and T. Harko, Solutions generating technique for Abel type non-linear ordinary differential equations, Computers and Mathematics with Applications 41, 1395-1401 (2001).
[30] S. C. Mancas and H. C. Rosu, Integrable dissipative nonlinear second order differential equations via factorizations and Abel equations, Phys. Lett. A 377, 1434-1438 (2013).
[31] S. C. Mancas and H. C. Rosu, Ermakov-Pinney equations with Abel-induced dissipation, arXiv:1301.3567 (2013).
On the Positive Geometry of Conformal Field Theory

Nima Arkani-Hamed (School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA)
Yu-Tin Huang (Department of Physics and Astronomy, National Taiwan University, Taipei 10617, Taiwan; Physics Division, National Center for Theoretical Sciences, National Tsing-Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan)
Shu-Heng Shao (School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA)

arXiv:1812.07739, doi:10.1007/JHEP06(2019)124

Abstract: It has long been clear that the conformal bootstrap is associated with a rich geometry. In this paper we undertake a systematic exploration of this geometric structure as an object of study in its own right. We study conformal blocks for the minimal SL(2, R) symmetry present in conformal field theories in all dimensions.
Prepared for submission to JHEP (arXiv version of 29 Jan 2019)

1 Introduction

Conformal field theories are characterized by a set of operators O_i with dimensions ∆_i and spins s_i, together with the three-point function coefficients c_ijk. Polyakov's dream [1] of the conformal bootstrap program was to determine the space of allowed {∆_i, s_i, c_ijk} from locality and unitarity, via the consistency of the operator product expansion (OPE) for four-point correlators in two different channels. This puts a quadratic constraint on the c_ijk depending on the putative spectrum for ∆_i, s_i, somewhat analogous to the Jacobi identity for the structure constants of a Lie group. Indeed Polyakov's vision was that the conformal bootstrap would lead to a classification of possible conformal field theories (CFTs) mirroring the classification of Lie groups.
Unlike the older S-matrix bootstrap program-where the constraints from locality and unitarity on the analytic structure of four-particle scattering amplitudes were not well-understood (and are indeed still not well-understood to this day)-the conformal bootstrap poses a completely well-defined mathematical problem. Given some putative CFT data {∆ i , s i , c ijk }, it is possible to unambiguously check whether it does or does not define a unitary CFT. The fundamental challenge of determining the space of consistent CFTs in this way lies in the infinite nature of the problem. The four-point functions depends continuously on conformal cross-ratios, and we must contend with the infinite number of operators and the continuous spectrum of dimensions. For instance, if we are only given operators with dimensions ∆ i < 100, say, how can we check whether just this spectrum of low-lying dimensions is consistent? How can we make definite statements about them that are completely robust and independent of the details associated with all the even higher-dimensional operators? To make progress, it is necessary to render this infinite problem finite in some reliable way. For instance, instead of dealing with the full four-point function, we can take a finite approximation to it by considering a truncation of its Taylor expansion to n terms, around some convenient kinematic point. Schematically this gives us an n-dimensional vector F, and the conformal bootstrap turns into a well-defined geometry problem in this n-dimensional space. The modern revival of the conformal bootstrap [2] began by numerically studying this geometry problem using techniques from linear programming, and both this numerical approach as well as various analytic avatars of this program had some spectacular successes over the past decade. See [3][4][5][6] for reviews on this subject. 
Stimulated by these developments, in this paper we initiate a systematic study of the geometry associated with the conformal bootstrap. Working with some fixed number n of Taylor coefficients is analogous to using a microscope with fixed resolution to look at both the four-point function and the space of CFTs. We would then like to ask: how is the space of consistent operator dimensions {∆ i , s i } "carved out" by the bootstrap at this resolution? Given some set of consistent {∆ i , s i } at this order, what constraints do we find on the Taylor coefficients F of the four-point function? And finally, clearly as we increase the resolution by increasing n, the allowed space of {∆ i , s i } is further refined. How can we systematically see this refinement as we increase n one step at a time? We begin by giving a trivial interpretation of the conformal bootstrap equations using the language of polytopes in projective space. Given a putative spectrum {∆ i , s i }, unitarity implies that F must be in the interior of the convex hull U(∆ i , s i ) of a collection of points associated with the Taylor coefficient of conformal blocks G(∆ i , s i ), while crossing symmetry implies that F must lie on a fixed hyperplane X depending only on the external operator dimensions. The bootstrap then demands that the crossing hyperplane X must intersect the unitarity polytope U. If this intersection is non-trivial, the Taylor coefficients F must lie inside it, i.e. F ⊂ U ∩ X. In this language, the non-trivial aspect of the problem is to determine what the polytope U (and ultimately the intersection U ∩ X) "looks like". This is a standard challenge in the study of polytopes. A polytope can be defined as the convex hull of a collection of points, and this is precisely how it arises in our example. But this definition does not allow us to easily check whether some given point F is or isn't inside the polytope. 
For this, we need a dual definition, where the polytope is cut out by giving a collection of inequalities W a · F ≥ 0, where a is the index that runs over the boundaries of the polytope. In general, given a random collection of vertices, the only systematic method for determining the facets of their convex hull is by exhaustion: simply trying all possible (n−1)-tuples of vertices as candidates for a face, and seeing whether the remaining vertices all lie on the same side of this plane. Thus if the vertices V i are given as n dimensional vectors, the facet structure is entirely determined by specifying, for all n-tuples {i 1 , · · · , i n }, whether the determinants V i 1 · · · V in are positive, negative, or zero. This data is known as the oriented matroid associated with the configuration of vectors V i . Of course, for a random collection of V i , this method is hopelessly impractical, and linear programming provides a more effective way of determining the face structure numerically. From a mathematical point of view, however, it is often possible to do much better than this, and various infinite classes of polytopes have been studied and "understood" by mathematicians over the past century. This requires some special structure for the vertices, which allows the faces of all co-dimensions to be fully understood analytically. From the point of view of physics, we should not expect geometric problems handed to us by quantum field theory to be random and unstructured, so it is natural to ask whether there is something special about the polytopes associated with the conformal bootstrap problem, reflecting some special structure of the conformal block vectors G(∆ i , s i ), which allows the face structure of these polytopes to be understood analytically. One famous class of polytopes that have been "understood" directly generalize convex polygons in two dimensions, and are known as cyclic polytopes. 
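The exhaustion procedure is simple to spell out in code. The toy sketch below (our own construction, not taken from the paper) works with projectivized vectors V = (1, x, y) in R³, so that candidate facets of a two-dimensional polytope are planes W = V_i × V_j through pairs of vertices; a candidate is kept exactly when all remaining vertices lie weakly on one side, and the resulting inequalities W_a · F ≥ 0 cut out the polytope.

```python
from itertools import combinations

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def facets_by_exhaustion(verts):
    """Facets of the convex hull of projective 3-vectors, by trying every pair
    of vertices as a candidate plane and checking the signs of the rest."""
    facets = []
    for i, j in combinations(range(len(verts)), 2):
        w = cross(verts[i], verts[j])
        signs = [dot(w, v) for k, v in enumerate(verts) if k not in (i, j)]
        if all(s >= 0 for s in signs):
            facets.append(w)
        elif all(s <= 0 for s in signs):
            facets.append(tuple(-c for c in w))
    return facets

def inside(point, facets):
    """A point F lies in the polytope iff W_a . F >= 0 for every facet W_a."""
    return all(dot(w, point) >= 0 for w in facets)

square = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)]   # unit square, projectivized
facets = facets_by_exhaustion(square)
```

For the square, the two diagonals produce mixed signs and are rejected, leaving exactly the four edges as facets; the combinatorial cost of this brute force is what makes the method impractical for random vertex sets.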
Here the vertices have a natural ordering V_1, V_2, · · ·, and the special property they enjoy is that all the ordered determinants are positive, V_{i_1} · · · V_{i_n} > 0 for i_1 < i_2 < · · · < i_n. The convex hull of these vertices is known as a cyclic polytope, and the positivity allows a beautiful characterization of their full face structure. This notion of positivity (and the closely related ideas of the positive Grassmannian) has played a prominent role in the development of the Amplituhedron [7] approach to reformulating the physics of scattering amplitudes starting from primitive combinatoric-geometric ideas of positive geometry. We will see that precisely this same "positive" structure makes a striking appearance in the geometry of the conformal bootstrap. We will work in the simplest possible setting, by exploring the geometry of the conformal bootstrap with the minimal degree of conformal symmetry possible, associated with the SL(2, R) subgroup corresponding to conformal transformations in d = 1 dimension. The SL(2, R) conformal blocks are labeled only by the dimension ∆, and so the conformal block vectors G_∆ also depend only on ∆. Note that the ∆'s have a natural ordering from small to large dimensions. Remarkably, we find, with a small but fascinating caveat, that the conformal block vectors enjoy the positivity property G_{∆_1} · · · G_{∆_n} > 0, and thus the unitarity polytope U is a cyclic polytope! We will explore the conformal bootstrap from this geometric point of view. As an illustration of the main ideas, we will fully solve the geometry problem leading to the determination of the intersection U ∩ X, when keeping up to six terms in the Taylor expansion of the four-point function, corresponding to a two-dimensional geometry for U ∩ X.
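The canonical example of such a totally positive vertex configuration is the moment curve, V(t) = (1, t, t², ..., t^{n−1}): for any increasing sequence t_1 < t_2 < · · ·, every ordered n × n determinant is a Vandermonde determinant and hence positive. The check below is a standalone illustration of this structure (it does not compute actual conformal block vectors G_∆).

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion; fine for the small matrices used here."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def moment_vertex(t, n):
    return [t ** k for k in range(n)]    # (1, t, t^2, ..., t^{n-1})

n = 4
ts = [0.3, 0.7, 1.1, 1.9, 2.4, 3.6]      # an increasing sequence, like ordered dimensions
verts = [moment_vertex(t, n) for t in ts]
ordered_dets = [det([verts[i] for i in idx])
                for idx in combinations(range(len(ts)), n)]
```

Every ordered determinant equals a product of positive differences ∏_{i<j}(t_j − t_i), so all C(6, 4) = 15 of them come out positive, which is precisely the defining property of a cyclic polytope's vertex set.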
This will let us transparently see how the spectrum of consistent operator dimensions is "carved out" by unitarity and crossing symmetry, and will lead to a number of new, exact results for the spectrum and four-point functions of any CFT. In particular, the intersection U ∩ X, which bounds the value of the four-point function F, obeys an interesting combinatoric rule, and its shape undergoes various intricate "phase transitions" as the spectrum is continuously varied. We will also sketch the nature of the recursive method that allows us to "increase resolution" as we keep more terms in the Taylor expansion and consider higher-dimensional geometries. Looking ahead, we make some preliminary comments on the more general geometries appropriate to higher-dimensional conformal symmetries. The positive geometry associated with the conformal bootstrap is relevant to the physics of conformal field theory, while the positivity seen in the conformal blocks, and the problem of determining the intersection of cyclic polytopes with the crossing plane, is mathematically interesting in its own right. We have thus endeavored to present our results in a self-contained way, not assuming any prior knowledge of either conformal field theory or positive geometry, keeping both physicist and mathematician readers in mind. All the necessary background on polytopes and positivity is included in the main text. In Appendix A, we also give an introduction to conformal blocks, starting with a Lorentzian integral representation going back to Polyakov's pioneering paper [1,8,9]. The necessary integrals are trivial to carry out in this representation, leading to a conceptually transparent and rapid computation of conformal blocks in even dimensions.

Conformal Bootstrap in One Dimension

In this section we set up the physical problem: the conformal bootstrap of four-point functions in one dimension [10][11][12][13][14][15][16][17][18][19]. Unitary 1d CFTs arise in various physical contexts.
For example, the theory on a conformal line defect in a higher dimensional CFT can be described by a 1d CFT [10] (see also [20]). They can also arise as the boundary theory of a QFT in a classical AdS 2 background [11]. Moreover, since the 1d conformal group SL(2, R) is a subgroup of the higher-dimensional conformal group, every four-point function in higher-dimensional CFT can be thought of as a 1d conformal four-point function and can be decomposed into the SL(2, R) conformal blocks. See Appendix A of [15] for a comprehensive overview of 1d CFTs. In one dimension, the conformal symmetry is SL(2, R), which acts on the 1d spacetime coordinate x ∈ R as x → x (x) = ax + b cx + d , (2.1) where a, b, c, d ∈ R with ad − bc = 1. A conformal primary operator φ(x) with scaling dimension ∆ φ transforms as φ(x) → φ (x) under SL(2, R), where ∂x ∂x ∆ φ φ (x ) = φ(x) . (2.2) The physical observables in a unitary 1d CFT includes the correlation functions of conformal primary operators. The correlation functions transform covariantly under SL(2, R), obey reflection positivity, and factorize via the operator product expansion (OPE). Let us consider the four-point function φ( x 1 )φ(x 2 )φ(x 3 )φ(x 4 ) of identical, real conformal primary operators φ with scaling dimension ∆ φ . We will assume x 1 < x 2 < x 3 < x 4 and define the SL(2, R) invariant cross ratio z as z = x 12 x 34 x 13 x 24 ∈ (0, 1) , (2.3) where x ij = x i − x j . We note that 1 − z = x 14 x 23 x 13 x 24 . The SL(2, R) covariance of the four-point function implies that, up to an overall factor, it can be written as a function of the cross ratio φ(x 1 )φ(x 2 )φ(x 3 )φ(x 4 ) = 1 |x 12 | 2∆ φ |x 34 | 2∆ φ F (z) . 
(2.4)

Using the OPE between φ(x_1) and φ(x_2), the four-point function F(z) can be written as a sum over all the intermediate conformal primary operators O_i in the φ × φ OPE channel:

Unitarity: F(z) = Σ_i p_i G_{∆_i}(z), p_i > 0, (2.5)

where the SL(2, R) conformal block G_∆(z) is [21]

G_∆(z) = z^∆ 2F1(∆, ∆, 2∆, z). (2.6)

Here p_i is the square of the three-point function coefficient of φ, φ, O_i. Unitarity implies that the p_i's are positive. Hence in a unitary 1d CFT, the four-point function F(z) can be expanded on the conformal blocks with positive coefficients. In Appendix A we review the derivation of the conformal blocks in general dimensions. Alternatively, we can use the OPE between φ(x_2) and φ(x_3) to expand the four-point function, expressing ⟨φ(x_1)φ(x_2)φ(x_3)φ(x_4)⟩ as in (2.4) and (2.5) but with 1 ↔ 3 exchanged. Since the OPE channels are identical in the two expansions, the coefficients p_i are the same as well. Comparing the two expansions, we conclude that

Crossing: F(z) = (z/(1−z))^{2∆_φ} F(1−z). (2.7)

To summarize, a four-point function F(z) in a unitary 1d CFT satisfies the unitarity condition (2.5) and the crossing equation (2.7). The central goal in the conformal bootstrap program is to study the solution space of ∆_i and p_i. The simplest analytic examples of the unitary 1d four-point function are the generalized free boson and fermion. The generalized free field theory contains a fundamental bosonic/fermionic conformal primary operator φ(x), together with a tower of double-trace operators that are quadratic in φ. The theory is free in the sense that all correlation functions in the generalized free field theory are computed by Wick contractions.
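The block (2.6) and the crossing equation (2.7) are easy to check numerically. The following sketch (ours, assuming mpmath; the function names are illustrative) evaluates G_∆(z) and verifies crossing for the generalized free boson four-point function (2.9):

```python
import mpmath as mp

def block(delta, z):
    # SL(2,R) conformal block G_Delta(z) = z^Delta * 2F1(Delta, Delta, 2*Delta; z), eq. (2.6)
    return mp.mpf(z)**delta * mp.hyp2f1(delta, delta, 2*delta, z)

def F_plus(z, dphi):
    # Generalized free boson four-point function, eq. (2.9)
    z = mp.mpf(z)
    return 1 + z**(2*dphi) + (z/(1 - z))**(2*dphi)

dphi, z = mp.mpf("0.7"), mp.mpf("0.3")
# Crossing (2.7): F(z) = (z/(1-z))^(2*dphi) * F(1-z); residual should vanish
crossing_residual = abs(F_plus(z, dphi) - (z/(1 - z))**(2*dphi) * F_plus(1 - z, dphi))
```

As a sanity check on the block itself, G_1(z) = z 2F1(1, 1, 2, z) = −log(1 − z), so block(1, 1/2) should return log 2.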
The four-point function of φ in the generalized free boson (+) and fermion (−) theory is simply given by the sum of three different Wick contractions (with signs):

⟨φ(x_1)φ(x_2)φ(x_3)φ(x_4)⟩ = ⟨φ(x_1)φ(x_2)⟩⟨φ(x_3)φ(x_4)⟩ ± ⟨φ(x_1)φ(x_3)⟩⟨φ(x_2)φ(x_4)⟩ + ⟨φ(x_1)φ(x_4)⟩⟨φ(x_2)φ(x_3)⟩
= 1/(x_{12}² x_{34}²)^{∆_φ} ± 1/(x_{13}² x_{24}²)^{∆_φ} + 1/(x_{14}² x_{23}²)^{∆_φ}, (2.8)

which gives

F_±(z) = 1 ± z^{2∆_φ} + (z/(1−z))^{2∆_φ}. (2.9)

F_±(z) manifestly satisfies the crossing symmetry (2.7). The conformal block decomposition of the generalized free boson four-point function is

F_+(z) = 1 + Σ_{n=0}^∞ p⁺_n G_{2∆_φ+2n}(z), (2.10)

with positive coefficients (see, for example, [10])

p⁺_n = (2∆_φ)_n (2∆_φ)_{2n} / (2^{2n−1} (2n)! (2∆_φ + n − 1/2)_n), (2.11)

where (a)_b = Γ(a+b)/Γ(a). The φ × φ OPE channel contains the identity plus an infinite tower of double-trace operators of the form φ∂^{2n}φ + · · ·, where the · · · represents other distributions of the derivatives such that the operator is a conformal primary. The scaling dimensions of these intermediate operators are

∆ = 2∆_φ + 2n, n ∈ Z_{≥0}. (2.12)

On the other hand, the block decomposition of the four-point function in the generalized free fermion is

F_−(z) = 1 + Σ_{n=0}^∞ p⁻_n G_{2∆_φ+2n+1}(z), (2.13)

where

p⁻_n = (2∆_φ)_n (2∆_φ)_{2n+1} / (4^n (2n+1)! (2∆_φ + n + 1/2)_n). (2.14)

The intermediate operators in the φ × φ OPE are schematically of the form φ∂^{2n+1}φ + · · ·, with scaling dimension

∆ = 2∆_φ + 2n + 1, n ∈ Z_{≥0}. (2.15)

Note that in the fermionic theory, φφ vanishes due to Fermi statistics, so the lowest non-identity operator is φ∂φ with ∆ = 2∆_φ + 1. In fact, given the external operator dimension ∆_φ, ∆_gap = 2∆_φ + 1 is the maximum possible gap in a unitary 1d CFT four-point function [10,16,22]. Finding a solution to the conformal bootstrap equations (2.5) and (2.7) is quite challenging in general.
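The fermionic decomposition (2.13)-(2.14) can be verified numerically: partial sums of the blocks should reproduce F_−(z), whose closed form follows from (2.8). A minimal mpmath sketch (our own function names; mp.rf is the Pochhammer symbol (a)_b):

```python
import mpmath as mp

def block(delta, z):
    # SL(2,R) conformal block G_Delta(z) = z^Delta * 2F1(Delta, Delta, 2*Delta; z), eq. (2.6)
    return mp.mpf(z)**delta * mp.hyp2f1(delta, delta, 2*delta, z)

def p_minus(n, dphi):
    # Generalized free fermion OPE coefficients, eq. (2.14)
    return (mp.rf(2*dphi, n) * mp.rf(2*dphi, 2*n + 1)
            / (4**n * mp.factorial(2*n + 1) * mp.rf(2*dphi + n + mp.mpf("0.5"), n)))

dphi, z = mp.mpf("0.6"), mp.mpf("0.2")
exact = 1 - z**(2*dphi) + (z/(1 - z))**(2*dphi)   # F_-(z) from eq. (2.9)
partial = 1 + sum(p_minus(n, dphi) * block(2*dphi + 2*n + 1, z)
                  for n in range(30))              # blocks at Delta = 2*dphi + 2n + 1
residual = abs(partial - exact)
```

At z = 0.2 the sum converges very quickly, so thirty terms already match the exact answer to working precision.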
To tame this infinite-dimensional problem in the space of all four-point functions, let us discretize the problem to start with. For example, we can discretize the four-point function F(z) by its first 2N + 1 derivatives around z = 1/2:

F(z) → F = (F_0, F_1, · · ·, F_{2N+1})^T ∈ P^{2N+1}, (2.16)

where

F_I ≡ (1/I!) ∂^I_z F(z)|_{z=1/2}, I = 0, 1, · · ·, 2N + 1. (2.17)

Let us now ask what are the constraints on the (2N + 2)-dimensional vector F from unitarity (2.5) and crossing (2.7). Given any solution F to this truncated problem, λF is again a solution for any real λ. Hence it is convenient to think of F as a vector in P^{2N+1}. The constraint from crossing symmetry (2.7) is easy to state in the finite-dimensional problem. Expanding (2.7) around z = 1/2, we have

F_0 + F_1 y + F_2 y² + · · · = ((1 + 2y)/(1 − 2y))^{2∆_φ} (F_0 − F_1 y + F_2 y² − · · ·), (2.18)

where y ≡ z − 1/2. Matching the coefficients of powers of y, we obtain N + 1 linear relations (that depend on ∆_φ) among the F_I's. We can use these relations to solve F_odd in terms of F_even. For example, when N = 2, we obtain the following 3 linear relations:

F_1 = 4∆_φ F_0,
F_3 = (16/3)(∆_φ − 4∆³_φ)F_0 + 4∆_φ F_2,
F_5 = (64/15)∆_φ(32∆⁴_φ − 20∆²_φ + 3)F_0 − (16/3)∆_φ(4∆²_φ − 1)F_2 + 4∆_φ F_4.

To say it more geometrically, the crossing equation restricts the four-point function F to lie on an N-dimensional plane, denoted as X[∆_φ], in P^{2N+1}. We will call this plane the crossing plane. Let us switch gears to the unitarity constraint (2.5). We first Taylor expand the conformal block G_∆(z) around z = 1/2 to form a (2N + 2)-dimensional block vector G_∆:

G_∆(z) → G_∆ = (G⁰_∆, G¹_∆, · · ·, G^{2N+1}_∆)^T ∈ P^{2N+1}. (2.20)

The unitarity constraint (2.5) demands the four-point function vector F to lie in the positive span of the block vectors G_∆:

F = Σ_∆ p_∆ G_∆, (2.21)

with p_∆ all positive.
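The N = 2 crossing-plane relations quoted above hold for any crossing-symmetric F(z), so they can be checked symbolically by feeding in, e.g., the generalized free boson four-point function (2.9). A sympy sketch (ours; we fix the sample value ∆_φ = 3/2 so that all derivatives evaluate to exact rationals):

```python
import sympy as sp

z = sp.symbols("z", positive=True)
a = sp.Rational(3, 2)                        # sample external dimension Delta_phi
F = 1 + z**(2*a) + (z/(1 - z))**(2*a)        # crossing-symmetric GFF four-point function

# Taylor coefficients F_I = (1/I!) d^I F / dz^I at z = 1/2, eq. (2.17)
FI = [sp.diff(F, z, I).subs(z, sp.Rational(1, 2)) / sp.factorial(I) for I in range(6)]

# The three N = 2 crossing-plane relations; each should vanish identically
rel1 = sp.simplify(FI[1] - 4*a*FI[0])
rel2 = sp.simplify(FI[3] - sp.Rational(16, 3)*(a - 4*a**3)*FI[0] - 4*a*FI[2])
rel3 = sp.simplify(FI[5] - sp.Rational(64, 15)*a*(32*a**4 - 20*a**2 + 3)*FI[0]
                   + sp.Rational(16, 3)*a*(4*a**2 - 1)*FI[2] - 4*a*FI[4])
```

All three residuals reduce to zero exactly, confirming that this F indeed lies on the crossing plane X[∆_φ].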
Polytopes and Positive Geometry

Geometric problems of the character of equation (2.21), asking for a vector to be expressible as a positive linear combination of some fixed set of vectors, are ubiquitous in mathematics and physics. In this section we give a brief introduction to the elementary ideas of projective polytopes, which is a useful language for thinking about such problems. A nice reference for discussing polytopes from this perspective (usually couched in terms of "cones") can be found in [23]. To emphasize the generality of the discussion, let's switch notation, and ask to characterize the space of (d + 1)-dimensional vectors A which can be expressed as a positive linear combination of a given set of vectors V_i:

A = Σ_i c_i V_i, where c_i > 0. (3.1)

We assume for simplicity that the number of vectors is greater than or equal to (d + 1), so that the space of all A's is top-dimensional. Clearly, for this equation to put any interesting constraint on the space of allowed A's, all the vectors V_i must lie on the same side of some hyperplane. If this is not the case, then any vector can be expressed as such a positive combination of the V_i and there are no constraints on A at all. But if the V_i are all on the same side of some hyperplane, A is constrained to lie inside a cone spanned by the V_i. Examples of "bad" and "good" configurations of vectors in (d + 1) = 2 dimensions are shown in Figure 2. It is convenient to think of this cone projectively in the following way. Clearly, the space of all possible A's is invariant under A → tA for any (positive) rescaling t > 0. Similarly, the cone spanned by vectors V_i is exactly the same as that spanned by the vectors V'_i = t_i V_i for all t_i > 0. Thus, the data associated with this problem is naturally projective: we can think of the equivalence class of all A ∼ tA. All these vectors are on the same side of some hyperplane.
Thus we can write V i = t i 1 V i , A = t 1 A . (3.2) where t i > 0, t > 0. Note that we have arranged the top component to be positive for all the vectors, reflecting the fact that they are on the same side of the hyperplane (1, 0), i.e. (1, 0) · A = t > 0. Because of the projective invariance, the data of the problem is given by V i and we want to determine the space of allowed A. Since i c i V i = t i c i i t i c i V i = i c i i c i V i (3.3) where c i = t i c i are also positive, the space of A is given by A = i c i V i i c i , with c i > 0 (3.4) This is a weighted sum of the vectors V i , familiar for instance from the notion of "center of mass". This defines the convex hull of the vectors V i , which is a polytope in R d . More directly, we can think of the equation A = i c i V i as a projective polytope in P d . This is a special case of the well-known general fact, that any geometric questions in R d that do not involve a notion of a metric are more usefully posed in P d . The description in R d manifests the affine symmetries of translations and linear transformations, T d × SL(d), but these are just a subgroup of the larger group SL(d + 1) of symmetries that map linear spaces to linear spaces, which is made manifest in the projective space. Being explicit with the indices, we denote A I with an upstairs index I = 0, · · · , d to be a point in the projective space, with the symmetry under A I → L I J A J . A hyperplane is denoted by W I with a lower SL(d + 1) index. The T d × SL(d) subgroup of SL(d + 1) is the one that keeps the hyperplane at infinity-in our notation above the co-vector (1, 0), fixed, while the full SL(d+1) allows the most general transformations that also move infinity. Since the projective symmetry is SL(d + 1), the only invariant tensor we have is the antisymmetric tensor. Any invariant must involve the antisymmetric contraction of (d + 1) vectors which we will denote by V 1 , V 2 , · · · , V d+1 . 
This allows us to easily characterize the notions of "incidence". For instance, if these d + 1 vectors are coplanar, then this bracket must vanish. Note that this statement is invariant under arbitrary (positive or not) rescaling of the vectors. It is also easy to describe linear subspaces of any dimensionality p: they are simply antisymmetric tensors with either (p + 1) upstairs indices or (d + 1) − (p + 1) downstairs indices. There are further advantages of the projective picture. One rather trivial one is that we don't need to awkwardly normalize the weights to add up to one, as we do when talking about the usual convex hull in R^d. But the projective language is most useful for thinking about the boundary structure of the polytope. Indeed, so far we have defined a polytope by the convex hull of its vertices. It is also possible to define the polytope by a collection of hyperplanes W_{aI}, the facets, that cut out the polytope by the linear inequalities

A · W_a = A^I W_{aI} > 0, ∀ a, (3.5)

where the index a labels the facet. This is useful, since given the convex hull definition, it is not easy to check whether a given point A is or isn't inside the polytope. The reason is that the expression A = Σ_i c_i V_i is highly redundant: there are many more c_i than the dimensionality of the space, and hence there is no unique representation of A in this form. All we ask is that there exist some representation for which all the c_i are positive, but there are also (infinitely) many others for which some of the coefficients can be negative. By contrast, the facet definition does allow us to practically check whether or not a given A is in the polytope: we simply check whether it satisfies all the inequalities defined by the boundary hyperplanes. Given the convex hull definition of a polytope, how can we determine the facets?
Let's see how the projective picture helps us do this in the simplest case of a 2d polygon, where the answer is pictorially obvious: the facets are edges connecting consecutive vertices (V_i, V_{i+1}), so W_{iI} = ε_{IJK} V^J_i V^K_{i+1}. To begin with, consider points a, b, c in the two-dimensional plane, projectively associated with three-vectors a, b, c. We can distinguish three situations, where c is on one side of the line (ab), on the line (ab), or on the other side of (ab), as having ⟨a, b, c⟩ > 0, ⟨a, b, c⟩ = 0, ⟨a, b, c⟩ < 0 respectively. Note that these signs (and zeros) are invariant under the positive rescalings of all the vectors we allow for our projective polytopes. Let's now discuss how to understand the boundaries of a 2d polygon, which are just edges specified by the vertices at their end points. But not every pair of points gives rise to an edge. As shown in Figure 3, while the line segment (V_1, V_2) is an edge of the polygon, (V_2, V_n) is not. What is the condition on a pair of points such that they give rise to an edge? Since the line segment (V_2, V_n) is in the interior of the polygon, there are points on either side of it. It follows that the determinant ε_{I_1 I_2 I_3} A^{I_1} V^{I_2}_{i_1} V^{I_3}_{i_2} can take either sign as A ranges over the interior of the polygon, where ε is the three-dimensional Levi-Civita tensor. Thus, for (V_{i_1}, V_{i_2}) to be an edge, we must have

⟨A, V_{i_1}, V_{i_2}⟩ ≡ ε_{I_1 I_2 I_3} A^{I_1} V^{I_2}_{i_1} V^{I_3}_{i_2} have the same sign, (3.6)

for all A in the interior of the polytope. Since A can be any point in the convex hull of {V_i}, it follows that (3.6) amounts to

⟨V_i, V_{i_1}, V_{i_2}⟩ have the same sign, ∀ i. (3.7)

This straightforwardly generalizes to higher dimensions. In P^d, the facets for the convex hull of a set of vectors {V_i} are given by the sets of d vectors (V_{i_1}, · · ·, V_{i_d}) such that

⟨V_i, V_{i_1}, V_{i_2}, · · ·, V_{i_d}⟩ ≡ ε_{I_0 ··· I_d} V^{I_0}_i V^{I_1}_{i_1} · · · V^{I_d}_{i_d} have the same sign, ∀ i.
(3.8) This means that the points V i 1 , · · · , V i d lie on a facet of the polytope, as W aI = II 1 ···I d V I 1 i 1 · · · V I d i d . (3.9) Note that in general dimensions, a facet may well have more than d vertices on it. Any (d + 1) vertices V i 1 , · · · , V i d + 1 lying on the same facet will then satisfy V i 1 , · · · , V i d+1 = 0, and any d of them will define exactly the same facet W aI projectively (i.e. up to rescaling). We see that the boundary structure of a polytope, defined as the convex hull of n points {V i } n i=1 , is fully fixed by specifying the zeroes and signs of all n d+1 brackets V i 1 , · · · , V i d+1 . This is known as the oriented matroid of the configuration of vectors {V i }. Thus, to determine the face structure of a polytope, one needs to compute the signs of the determinants for all possible collections of d+1 vectors. The computational complexity grows polynomially with n. The projective language also lets us easily determine the intersection of planes of various dimensions; we simply write down the various planes as antisymmetric tensors and contract the indices with the epsilon tensor in the only way possible. For instance, a k-plane in P d is defined by k+1 vectors {Z 1 , · · · , Z k }. In d dimensions, in general a k-plane (defined by k+1 vectors) intersects with p-plane (defined by p+1 vectors) on a (p+k−d)-plane. For example, a 2-plane (Z 1 , Z 2 , Z 3 ) intersects a line (Z a , Z b ) in P 3 at the point Z a Z b , Z 1 , Z 2 , Z 3 − Z b Z a , Z 1 , Z 2 , Z 3 . (3.10) For general (p, k), the intersection is given as: Z [I 1 j 1 · · ·Z I d ] j d Z j d +1 , · · ·, Z j p+1 , Z i 1 , · · ·, Z i k+1 +(−) p Z [I 1 j 2 · · · Z I d ] j d +1 Z j d +2 , · · ·, Z j 1 , Z i 1 , · · ·, Z i k+1 +(−) p+1 Z [I 1 j 3 · · ·Z I d ] j d +2 Z j d +3 , · · ·, Z j 2 , Z i 1 , · · ·, Z i k+1 +· · · (3.11) where d = p+1 +k −d. 
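The sign-based facet criterion just stated can be implemented directly. Below is a brute-force numpy sketch (ours, practical only for small examples) applied to a convex pentagon with vertices on the parabola (1, x, x²); the facets it finds are exactly the consecutive edges plus the closing edge:

```python
import numpy as np
from itertools import combinations

def facets_by_exhaustion(V, tol=1e-12):
    """Find facets of the convex hull of projective points V (rows of dim d+1).

    A (d)-tuple of vertices spans a facet iff its determinants with every
    remaining vertex all share one sign, as in eq. (3.8).
    """
    n, dim = V.shape
    facets = []
    for idx in combinations(range(n), dim - 1):
        signs = []
        for i in range(n):
            if i in idx:
                continue
            det = np.linalg.det(np.vstack([V[i], V[list(idx)]]))
            if abs(det) > tol:
                signs.append(np.sign(det))
        if len(set(signs)) == 1:      # all remaining vertices on one side
            facets.append(idx)
    return facets

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
V = np.stack([np.ones_like(x), x, x**2], axis=1)  # pentagon on (1, x, x^2)
edges = facets_by_exhaustion(V)
```

The result is [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4)]: precisely the consecutive pairs, foreshadowing the cyclic-polytope structure discussed below.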
In this paper, we will be interested in the intersection of a k-plane X, with a d-dimensional polytope with vertices {V i }. From the above, we see that it intersects with a (d−k)-face at a point, which is given as V 1 V 2 , V 3 , V 4 , · · · , V d−k , X −V 2 V 1 , V 3 , V 4 , · · · , V d−k , X +V 3 V 1 , V 2 , V 4 , · · · , V d−k , X +· · · . (3.12) By definition, this point is in the interior of the polytope if all the coefficients of V i 's V 2 , V 3 , V 4 , · · · , V d−k , X , − V 1 , V 3 , V 4 , · · · , V d−k , X , V 1 , V 2 , V 4 , · · · , V d−k , X , e.t.c , (3.13) have the same sign. Thus if the k-plane X intersects the polytope, we must be able to find d−k+1 vertices such that the signs of the determinants with X satisfies the above pattern. Finally, we will often reduce the dimension of the problem by projecting the geometry through fixed vectors or planes in the problem. The former include the identity block vector G 0 = (1, 0, · · · , 0), the infinity vector G ∞ = (0, · · · , 0, 1), and the latter involve the crossingplane X. Projecting through G 0 means that we are considering 0, · · · = det       1 * * * 0 * * * . . . * * * 0 * * *       , (3.14) where i, · · · will be a short hand notation for G ∆ i , · · · . We see that projection through G 0 lops off the first component of each vector, and reducing the dimension by 1. Similarly for the projection through G ∞ we simply lob off the last component, which corresponds to the geometry of the previous dimension. For the projection through X, we can understand as implemented by performing some GL(d+1) transformation such that X takes the form k = 0 :       1 0 . . . 0       , k = 1,       1 0 0 1 . . . . . . 0 0       , e.t.c. (3.15) Then one considers the geometry defined through the determinant X, · · · . The net effect is that the first (k+1)-components of all vectors are chopped off, leaving behind a (d−k+1)dimensional space. 
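The intersection formula (3.10) is easy to verify numerically: the resulting point must lie both on the plane (Z_1, Z_2, Z_3) and on the line (Z_a, Z_b). A numpy sketch (our own helper names), using random vectors in P³:

```python
import numpy as np

def bracket(*vs):
    # <V1, ..., V4>: determinant of four P^3 vectors
    return np.linalg.det(np.stack(vs))

rng = np.random.default_rng(0)
Z1, Z2, Z3, Za, Zb = rng.standard_normal((5, 4))

# Intersection of the line (Za, Zb) with the plane (Z1, Z2, Z3), eq. (3.10)
P = Za * bracket(Zb, Z1, Z2, Z3) - Zb * bracket(Za, Z1, Z2, Z3)

on_plane = abs(bracket(P, Z1, Z2, Z3)) < 1e-8                   # coplanar with the plane
on_line = np.linalg.matrix_rank(np.stack([P, Za, Zb]), tol=1e-8) == 2  # P in span(Za, Zb)
```

That ⟨P, Z_1, Z_2, Z_3⟩ = 0 is manifest from antisymmetry of the bracket; the numerical check simply confirms the index bookkeeping.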
Cyclic Polytopes As mentioned in Section 3, the complexity of computing the convex hull of a set of vectors grows polynomially with the number of vectors. For our purpose the number of vectors is associated with the number of conformal primaries, which is infinite. Thus a priori it would appear that the problem is intractable. However as alluded to in the introduction, the science of studying polytopes is precisely about finding situations where the face structure can be fully determined analytically, which is possible when the data of the oriented matroid -all the zeroes and signs of the V i 1 , · · · , V i d+1 -have some nice properties. Indeed, in two dimensions, all polytopes are simply convex polygons. Note that in this case, the vertices have a natural ordering {V 1 , · · · , V n }, and the oriented matroid is simply that all the V i 1 V i 2 V i 3 > 0 for i 1 < i 2 < i 3 . Observe that since the brackets are antisymmetric, the ordering on vertices was crucial to make this statement. When this is satisfied, we immediately conclude that the facets are (V i V i+1 ) I . There is a beautiful class of polytopes in any number of dimensions, known as cyclic polytopes, that directly generalizes this structure for convex polygons. The vertices have an ordering V 1 , · · · , V n , where n can be arbitrarily large, and obey V i 1 , V i 2 , · · · , V i d > 0, ∀ i 1 < i 2 < · · · < i d . (4.1) Remarkably, this allows us to determine the face structure analytically. To see how this works, we return to the case of the polygon, but now understand the facets purely algebraically without drawing a picture, since pictures will not be available to us in general dimensions. Recall that in order for (V a , V b ) to be a facet, V i , V a , V b must have the same sign for all i different than a, b. Without loss of generality we can take a < b. 
Then, we see that for i < a < b the bracket is positive, as well as for a < b < i, since in this case we simply pass i through two vectors to arrange it in manifestly positive (increasing) ordering in the bracket. But if there is any gap between a, b, so that we can have some i with a < i < b, this bracket will be negative. We conclude that (a, b) cannot have a gap between them, and so the facets must be of the form (V_i V_{i+1}). This argument generalizes for all even d. For instance, as seen in Figure 4, consider the case of d = 4. Here a putative facet is (V_a V_b V_c V_d), but we can easily see that they must pair up into two consecutive sets (i, i + 1, j, j + 1) in order for all the brackets to have the same sign. For odd d there is a slight difference. Consider d = 3. Here it is easy to see that all the facets must have either the first vector "1" or the last vector "n"; and ⟨V_1 V_i V_{i+1} V_j⟩ > 0 for all j, and also −⟨V_i V_{i+1} V_n V_j⟩ > 0 for all j (note the extra minus sign). The reason for the difference between even and odd d is also related to the name of these objects. Given V_1, · · ·, V_n that are "positive" in the sense that all the ordered brackets are positive, it is natural to look at what happens to this data under the cyclic shift V_1 → V_2, V_2 → V_3, · · ·, V_n → V_1. This preserves the ordering of everything except between n and 1, so all the positive brackets involving n will pick up a factor of (−1)^{(d+1)−1} = (−1)^d. Thus, for all the brackets to stay positive, we must have a twisted cyclic symmetry under which V_n → (−1)^d V_1. For d even this is an honest cyclic symmetry, and so cyclic polytopes are really cyclically invariant (starting with the familiar polygon example in d = 2).
But for d odd, cyclic polytopes are not literally cyclically invariant, and the vertices V 1 and V n play a special role in the boundary structure. For general d, the facets of the cyclic polytope are d ∈ odd : {W a } = V 1 , V i 1 , V i 1 +1 , V i 2 , V i 2 +1 , · · · , V i d−1 2 , V i d−1 2 +1 (−1) V i 1 , V i 1 +1 , V i 2 , V i 2 +1 , · · · , V i d−1 2 , V i d−1 2 +1 , V n , d ∈ even : {W a } = V i 1 , V i 1 +1 , V i 2 , V i 2 +1 , · · · , V i d 2 , V i d 2 +1 . (4.2) Note in particular that all the vectors V i are vertices. Thus if (4.1) holds true, the task of computing the convex hull is already completed! Furthermore, all the lower dimensional faces simply follows from removing one vertex from the facet at a time. Cyclic polytopes have a number of extremal properties that make them special amongst all polytopes. For instance, for a fixed number of vertices in any dimension, cyclic polytopes have the maximum number of possible facets, of all dimensionality! Cyclic polytopes have also made an important appearance in the physics of scattering amplitudes. If we group the vectors V 1 , · · · , V n into a (d+1)×n matrix, all the minors of this matrix are positive-the matrix gives a point in the positive Grassmannian, which has figured prominently in the study of on-shell diagrams [24] and the Amplituhedron [7]. Indeed, while for general N k M HV amplitudes with k > 1, the Amplituhedron represents a Grassmannian generalization of polytopes, for k = 1 it reduces to the cyclic polytope [25]. There are many natural questions we can ask about general polytopes that cannot be readily answered, for which the results are instead trivially available in the case of cyclic polytopes. Let us begin with a particularly simple example. Given any polytope with vertices (V 1 , · · · , V n ), we can choose to look at the convex hull of any subset of these vertices, to get a new polytope that naturally sits "inside" the old one. 
In general we cannot make any predictions for the face structure of the new polytope. However, given a cyclic polytope, if we choose any subset of the vertices (V_{a_1}, · · ·, V_{a_m}), we trivially get another cyclic polytope (with fewer vertices), since all the ordered determinants are trivially a subset of the old ones and are still positive. As another example, as we have already discussed, it is often useful to project a space of interest through some vector. In particular it is often interesting to take a d-dimensional polytope and project through one of its vertices V to go down to a (d − 1)-dimensional polytope. For general polytopes, we cannot easily predict the face structure of the projected polytope. Indeed, the vertices of the higher dimensional polytope can end up inside the lower dimensional one. However, again everything can be understood for cyclic polytopes. Given some d-dimensional cyclic polytope with vertices (V_1, · · ·, V_n), if we project through V_1, the projected vertices (V_2, · · ·, V_n) form a cyclic polytope in (d − 1) dimensions. This is because the projected minors

⟨V_{a_1}, · · ·, V_{a_d}⟩_proj = ⟨V_1, V_{a_1}, · · ·, V_{a_d}⟩ (4.3)

are positive when the a_i are ordered. The same is similarly true if we project through the last vertex V_n. Thus, for instance, if we take a d = 3 cyclic polytope and project through the vertex V_1, we are left with a d = 2 cyclic polytope, which is just the polygon with vertices (V_2, · · ·, V_n). In this simple example it is easy to see how the faces of the "upstairs" polytope project down to those of the projected one. The (two-dimensional) faces of the upstairs polytope are (1, i, i + 1) and (i, i + 1, n); the (one-dimensional) edges of these are (i, i + 1), together with (1, i), (1, i + 1) and (n, i), (n, i + 1).
Of these, after projection, (i, i + 1) end up as the edges of the polygon, (1, i), (1, i + 1) get projected down to the vertices i, i + 1, while (n, i), (n, i + 1) end up inside the polygon. Let us give a few examples of cyclic polytopes in two and three dimensions. For d = 2, any convex polygon is a cyclic polytope. Indeed, the faces (to be more precise, edges) are all the adjacent pairs (V_i, V_{i+1}), as in (4.2). For d = 3, the polytopes with four vertices (the tetrahedron) and five vertices (the triangular bipyramid) are both cyclic (see Figure 5). Moving on to polytopes with six vertices, the one in Figure 6a is cyclic with the ordering of vertices given there. On the other hand, the square bipyramid in Figure 6b is the simplest example of a non-cyclic simplicial polytope. To see this, let us note that in a 3d cyclic polytope with six vertices, the first vertex 1 is on 5 different faces, i.e. (V_1, V_2, V_3), (V_1, V_3, V_4), (V_1, V_4, V_5), (V_1, V_5, V_6), (V_6, V_1, V_2), according to (4.2). However, the maximum number of faces that share a vertex is 4 in Figure 6b, hence the latter cannot be cyclic. The canonical example of cyclic polytopes in general dimensions is that constructed from a moment curve. A moment curve is the following algebraic curve in P^d:

(1, x, x², · · ·, x^d)^T ∈ P^d, x ∈ R. (4.4)

Consider any n points V_i = (1, x_i, x²_i, x³_i, · · ·, x^d_i)^T labeled by x_i (i = 1, · · ·, n) on the curve, ordered such that x_1 < x_2 < · · · < x_n. We have

⟨V_1, V_2, · · ·, V_{d+1}⟩ = ∏_{i<j} (x_j − x_i). (4.5)

To see this, we note that the determinant vanishes whenever x_i = x_j, and is a polynomial of degree d(d+1)/2 in the x_i's, so it must be proportional to the right-hand side; matching any single coefficient fixes the proportionality constant to 1. It follows that as long as we order x_1 < x_2 < · · · < x_n, the determinant will be positive. Hence, the convex hull of any n points V_i on the moment curve is a cyclic polytope. Another example of a cyclic polytope comes from the monomial z^{∆_i} with a collection of ∆_i's.
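The Vandermonde identity (4.5), and hence the positivity underlying the moment-curve cyclic polytope, can be spot-checked numerically (a numpy sketch, ours):

```python
import numpy as np
from itertools import combinations

def moment_vector(x, d):
    # Point (1, x, x^2, ..., x^d) on the moment curve in P^d, eq. (4.4)
    return np.array([x**k for k in range(d + 1)])

d = 4
xs = [0.3, 1.1, 2.0, 2.7, 3.5, 4.2]        # ordered sample points x_1 < ... < x_n
V = np.array([moment_vector(x, d) for x in xs])

# Every ordered (d+1)-tuple of vertices has positive determinant
dets = [np.linalg.det(V[list(c)]) for c in combinations(range(len(xs)), d + 1)]
all_positive = all(det > 0 for det in dets)

# And the first determinant matches the Vandermonde product of eq. (4.5)
expected = np.prod([xs[j] - xs[i] for i in range(5) for j in range(i + 1, 5)])
vandermonde_match = np.isclose(np.linalg.det(V[:5]), expected)
```

Choosing any other ordered sample of x's gives the same conclusion, as guaranteed by (4.5).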
This is the contribution to the CFT four-point function from a single operator (not necessarily primary) of scaling dimension ∆_i. Let us first consider the vector generated by Taylor expanding z^{∆_i} around some point z_0 > 0:

V_i = (z_0^{∆_i}, ∆_i z_0^{∆_i−1}, (∆_i(∆_i−1)/2) z_0^{∆_i−2}, · · ·, (∏_{a=0}^{d−1}(∆_i−a)/d!) z_0^{∆_i−d})^T. (4.6)

We will often refer to this as the Taylor scheme. Again, since ⟨V_1, V_2, · · ·, V_{d+1}⟩ must vanish whenever ∆_i = ∆_j, it is straightforward to see that

⟨V_1, V_2, · · ·, V_{d+1}⟩ = (∏_{a=1}^d 1/a!) ∏_{i<j} (∆_j − ∆_i) z_0^{Σ_i ∆_i − d(d+1)/2}, (4.7)

which is positive for ordered ∆_i's. We conclude that the convex hull of vectors from the Taylor coefficients of z^{∆_i} forms a cyclic polytope. There is in fact an enlightening way of understanding this positivity. Instead of the Taylor expansion, we will consider the vector constructed by evaluating z^{∆_i} at distinct but ordered z_i > 0:

V_i = (z_0^{∆_i}, z_1^{∆_i}, · · ·, z_d^{∆_i})^T, 0 < z_0 < z_1 < · · · < z_d. (4.8)

This will be referred to as the position scheme. We will prove that the determinants of ordered vectors in the position scheme are also positive. Note that positivity in the position scheme implies positivity in the Taylor scheme, since we can take the z_i's arbitrarily close to z_0, where the determinants of the two schemes become equivalent. We will proceed by proving that ⟨V_1, · · ·, V_{d+1}⟩ in the position scheme (4.8) must have the same sign for any z_i's with 0 < z_0 < z_1 < · · · < z_d and 0 < ∆_1 < ∆_2 < · · · < ∆_{d+1}. The overall sign can be fixed later by considering an arbitrary example. Now the statement that the determinants (4.8) have the same sign is equivalent to the statement that the determinant ⟨V_1, · · ·, V_{d+1}⟩ can never be zero. In other words, for any real c_i's, there is no solution to

Σ_i c_i (z_0^{∆_i}, z_1^{∆_i}, · · ·, z_d^{∆_i})^T = 0.
  (4.9)

Said in yet another way, the function

  g_{d+1}(z) = c_1 z^{\Delta_1} + c_2 z^{\Delta_2} + \cdots + c_{d+1} z^{\Delta_{d+1}}  (4.10)

cannot have d+1 positive roots for any choice of c_i's. We will prove this by induction. For d = 0, this is obviously true because g_1(z) = c_1 z^{\Delta_1} does not have any root for z > 0. Let us assume that g_n(z) has at most n-1 positive roots for any choice of the c_i's. Now if we further assume that g_{n+1} has n+1 positive roots for some choice of the c_i's, we will show that this leads to a contradiction. Since g_{n+1} has n+1 positive roots, so does the product z^{-\Delta_{n+1}} g_{n+1}(z). Now take the derivative of this product:

  \left( z^{-\Delta_{n+1}} g_{n+1}(z) \right)' = c_1 (\Delta_1 - \Delta_{n+1}) z^{\Delta_1 - \Delta_{n+1} - 1} + \cdots + c_n (\Delta_n - \Delta_{n+1}) z^{\Delta_n - \Delta_{n+1} - 1} .  (4.11)

From calculus (Rolle's theorem) we know that if a smooth function has n+1 positive roots, then its derivative must have at least n positive roots. However, the right hand side of the above is, up to an overall power of z, a special case of g_n(z), which cannot have n positive roots, and thus we have a contradiction. We conclude that g_{d+1}(z) cannot have d+1 positive roots for any d, which implies that \langle V_1, \cdots, V_{d+1} \rangle cannot vanish and has a fixed sign. This completes the proof that the convex hull of vectors in the position scheme of z^\Delta forms a cyclic polytope. We can phrase our two examples in the following more general way. We are interested in characterizing functions of two variables F(x, y) such that, in some suitable ranges for x and y, and for any ordered x_1 < x_2 < \cdots < x_n and y_1 < y_2 < \cdots < y_n, the matrix M_{aA} = F(x_a, y_A) has positive determinant, det(M) > 0. Such classes of functions are known as Tchebycheff systems, and were first studied by Tchebycheff in the context of the theory of interpolating functions. The technique above, counting roots of derivatives, is one of several interesting approaches to understanding these functions. We refer the interested reader to the classic text on this subject [26].
In the following section, we will apply similar arguments to the full conformal block in the Taylor scheme.

5 Positivity of the Conformal Block

We now return to the block vectors G^I_\Delta (2.20), obtained by Taylor expanding the conformal block around z = 1/2. Since the block is positive for 0 < z < 1, we will define the block vector G_\Delta projectively by normalizing the first component to be 1, i.e. G^0_\Delta = 1. We have

  G_\Delta = \left( 1,\ 2\Delta\alpha,\ 4\Delta(\Delta-1) + 2\Delta\alpha,\ \tfrac{8}{3}\Delta\alpha(\Delta^2-\Delta+1),\ \tfrac{8}{3}\Delta(\Delta-1)(\Delta^2-\Delta+3) + 4\Delta\alpha,\ \tfrac{16}{15}\Delta\alpha(\Delta^2-\Delta+1)(\Delta^2-\Delta+6) - 2\Delta^2(\Delta-1)^2,\ \cdots \right)^T ,  (5.1)

where

  \alpha \equiv \frac{{}_2F_1\left( \Delta, \Delta+1, 2\Delta, \tfrac{1}{2} \right)}{{}_2F_1\left( \Delta, \Delta, 2\Delta, \tfrac{1}{2} \right)} .  (5.2)

The polynomial dependence of the block vector on \Delta and \alpha follows from the Gauss contiguous relations of the hypergeometric function. While \alpha is a non-polynomial function of \Delta, numerically it is approximately a constant. As shown in Figure 7, the value of \alpha decreases monotonically from 3/2 to \sqrt{2} as we vary \Delta:

  \frac{3}{2} > \alpha > \sqrt{2} , \quad \Delta \in [0, \infty) .  (5.3)

If we approximate \alpha by a constant, the Taylor block vector in (5.1) becomes equivalent to (1, \Delta, \Delta^2, \cdots) up to a \Delta-independent GL transformation. For example, in the large \Delta limit \alpha \to \sqrt{2}, and the 3-dimensional block vector approaches

  G_{\Delta \gg 1} = \left( 1,\ 2\sqrt{2}\,\Delta,\ 4\Delta^2 - 2(2-\sqrt{2})\Delta,\ \tfrac{8\sqrt{2}}{3}(\Delta^3 - \Delta^2 + \Delta) \right)^T .  (5.4)

Applying a \Delta-independent GL(3) rotation on the last three rows, we can convert it to a moment curve (4.4):

  \begin{pmatrix} \frac{1}{2\sqrt{2}} & 0 & 0 \\ \frac{\sqrt{2}-1}{4} & \frac{1}{4} & 0 \\ -\frac{1}{4} & \frac{1}{4} & \frac{3}{8\sqrt{2}} \end{pmatrix} G^I_{\Delta \gg 1} = \left( 1, \Delta, \Delta^2, \Delta^3 \right)^T .  (5.5)

Note that since the GL(3) matrix is a constant matrix, it only changes the determinant by an overall constant. Thus for large scaling dimensions we see that the block vectors form a cyclic polytope, as discussed in Section 3. Motivated by the large \Delta analysis, we would like to see whether the block vectors (5.1) in the Taylor scheme give rise to a cyclic polytope for general \Delta.
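The bounds (5.3) on \alpha are straightforward to verify numerically. A minimal sketch, assuming scipy is available (the helper name `alpha` is ours):

```python
import numpy as np
from scipy.special import hyp2f1

def alpha(delta):
    """alpha(Delta) of eq. (5.2): 2F1(D, D+1, 2D; 1/2) / 2F1(D, D, 2D; 1/2)."""
    return hyp2f1(delta, delta + 1, 2 * delta, 0.5) / hyp2f1(delta, delta, 2 * delta, 0.5)

# check 3/2 > alpha > sqrt(2), decreasing monotonically, cf. (5.3) and Figure 7
deltas = np.linspace(0.05, 50.0, 500)
vals = np.array([alpha(d) for d in deltas])
assert np.all(vals < 1.5) and np.all(vals > np.sqrt(2))
assert np.all(np.diff(vals) < 1e-10)  # monotone decrease, up to float noise
```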
We will proceed with our analysis iteratively in the dimension of the block vectors, as in Section 4. Let us first introduce a shorthand notation that will be used in the rest of this paper:

  \langle i_1, i_2, \cdots, i_{d+1} \rangle \equiv \epsilon_{I_1 I_2 \cdots I_{d+1}} G^{I_1}_{\Delta_{i_1}} \cdots G^{I_{d+1}}_{\Delta_{i_{d+1}}} .  (5.6)

For d = 1, we would like to show

  \langle 1, 2 \rangle > 0 , \quad \forall\ \Delta_1 < \Delta_2 .  (5.7)

This is equivalent to the statement that G_{\Delta_1} and G_{\Delta_2} can never be linearly dependent. Following the discussion in Section 4, this implies that

  c_1 + c_2 G^1_\Delta = c_1 + 2 c_2 \Delta \alpha(\Delta)  (5.8)

cannot have two positive roots for any c_i's. Let us prove the statement by contradiction. If (5.8) had two positive roots, then its derivative would have at least one positive root. The derivative of (5.8) is shown in Figure 8, where we see that it is never zero for \Delta > 0. The absence of a positive root for g_1 = (G^1_\Delta)', with ' = d/d\Delta, then implies that (5.7) holds. Now let us move on to d = 2, where we will show

  \langle 1, 2, 3 \rangle > 0 , \quad \forall\ \Delta_1 < \Delta_2 < \Delta_3 .  (5.9)

Again this is true so long as the following function does not have three distinct positive roots for any choice of the c_i's:

  c_1 + c_2 G^1_\Delta + c_3 G^2_\Delta = c_1 + 2 c_2 \Delta \alpha(\Delta) + c_3 \left( 4\Delta(\Delta-1) + 2\Delta\alpha(\Delta) \right) .  (5.10)

We again proceed by deriving a contradiction. Assuming (5.10) has three positive roots for some c_i's, then its first derivative

  c_2 (G^1_\Delta)' + c_3 (G^2_\Delta)' = (G^1_\Delta)' \left( c_2 + c_3 \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)  (5.11)

must have at least two positive roots. Since we have shown that (G^1_\Delta)' has no positive root, the expression in parentheses must have at least two positive roots. Recycling our argument, this implies that

  g_2 \equiv \left( \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)'  (5.12)

must have at least one positive root. However, we have explicitly checked that this function is never zero for \Delta > 0, and hence we have a contradiction. Thus we have shown (5.9). We can iteratively proceed for higher d.
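The positivity of the brackets (5.6) can be spot-checked directly by building the Taylor-scheme block vectors numerically. This is our own sketch, assuming mpmath; it constructs the Taylor coefficients of the block around z = 1/2 by numerical differentiation, so it only illustrates the claim at a handful of points rather than proving it.

```python
from mpmath import mp, mpf, hyp2f1, taylor, matrix, det

mp.dps = 40  # high precision, since we differentiate numerically

def block(z, delta):
    """1d conformal block z^Delta 2F1(Delta, Delta, 2*Delta; z)."""
    return z ** delta * hyp2f1(delta, delta, 2 * delta, z)

def block_vector(delta, d):
    """Taylor coefficients around z = 1/2, normalized so the first entry is 1."""
    coeffs = taylor(lambda z: block(z, delta), mpf(1) / 2, d)
    return [c / coeffs[0] for c in coeffs]

def bracket(deltas):
    """<i_1, ..., i_{d+1}> of (5.6) for the given list of dimensions."""
    d = len(deltas) - 1
    return det(matrix([block_vector(dd, d) for dd in deltas]))

# ordered dimensions (away from the tiny-Delta regime) give a positive bracket
assert bracket([mpf('0.7'), mpf('1.3'), mpf('2.1'), mpf('3.4')]) > 0
```

Normalizing each column by its positive first component only rescales the determinant by a positive factor, so the sign is unaffected.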
In summary, the condition for positivity boils down to the following conditions for general \Delta:

  d = 1: \quad g_1 = (G^1_\Delta)' > 0 ,
  d = 2: \quad g_2 = \left( \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)' > 0 ,  (5.13)
  d = 3: \quad g_3 = \left( \left( \frac{(G^3_\Delta)'}{(G^1_\Delta)'} \right)' \Big/ \left( \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)' \right)' > 0 ,
  d = 4: \quad g_4 = \left( \left( \left( \frac{(G^4_\Delta)'}{(G^1_\Delta)'} \right)' \Big/ \left( \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)' \right)' \Big/ \left( \left( \frac{(G^3_\Delta)'}{(G^1_\Delta)'} \right)' \Big/ \left( \frac{(G^2_\Delta)'}{(G^1_\Delta)'} \right)' \right)' \right)' > 0 , \quad \cdots .

We have numerically verified that up to d = 5 these conditions hold for general \Delta > 0. On the right of Figure 8 we show the plots of the functions g_1, g_4, g_5, which are always positive. Note that the positivity of g_5 is enough to guarantee all the lower orders, since (as we will discuss in more detail below) we can obtain the lower-dimensional geometries simply by projecting through the infinity block, so that positivity in d dimensions immediately implies positivity in lower dimensions. We show the plots for d = 1, d = 4 and d = 5 to illustrate an interesting trend: for d = 1 the function is clearly positive. For d = 4 it gets "close" to zero for small \Delta, while for d = 5 it is monotonically increasing at low \Delta and is still manifestly positive. However, starting at d = 6, the analogous function g_6 does cross zero at very small \Delta. This does not by itself prove that some brackets will be negative (the condition we are using is sufficient but not necessary), and indeed randomly scanning the brackets for billions of choices of dimensions does not reveal any negativity. However, we have managed to prove that for sufficiently small \Delta's the brackets can indeed become negative. Quite amazingly, the negativity occurs only when all seven block vectors in the bracket are small, \Delta_i \lesssim 1/10, and the most negative value the brackets ever attain is of magnitude \sim 10^{-20}! Let us explain how we know the brackets become negative when all the \Delta's are small enough. We do this by studying the limit \Delta \ll 1 in more detail. Let us first consider the block vectors.
To keep analytic control, let us first expand the block vectors around z = 0:

  z^\Delta\, {}_2F_1(\Delta, \Delta, 2\Delta, z) = z^\Delta \left( \sum_{i=0}^{n_c} c_i(\Delta)\, z^i + O(z^{n_c+1}) \right) ,  (5.14)

where n_c is the cutoff of the expansion, and c_i(\Delta) is an analytic function of \Delta. Next, we re-expand the polynomial around z = 1/2 to obtain the approximate block vector in the Taylor scheme, G_\Delta, and compute the determinant using this approximate block vector. The cutoff n_c can be determined by gradually increasing it until the determinant stabilizes; in general this requires n_c \sim 50. Since we are interested in the regime where \Delta_i \ll 1, we can further Taylor expand the determinant in the \Delta_i. Then to leading order we have

  \langle G_{\Delta_1}, G_{\Delta_2}, \cdots, G_{\Delta_d} \rangle = \alpha_d \prod_{i<j} (\Delta_j - \Delta_i) + O\!\left( \Delta^{\frac{d(d-1)}{2}+1} \right) .  (5.15)

The subleading terms can be suppressed by considering arbitrarily small \Delta_i's, and the sign of the determinant is then given by \alpha_d. For n_c = 50 we find that the resulting \alpha_d are of order 1, and that for the 7 \times 7 determinant we encounter a negative leading term. From numerical experimentation at larger d, up to d = 20, the pattern appears to be that all the brackets are positive unless at least 7 of the \Delta's are in this same range \Delta \lesssim 1/10, and again the (negative) minors are always minuscule. Curiously, while the full conformal blocks are not always positive, the functions {}_2F_1(\Delta, \Delta, 2\Delta; z) do appear to be positive. Indeed, experimentally this appears to be the case for all hypergeometric functions {}_2F_1(a, b, c = a+b, z) with a, b > 0. Statements of positivity of this sort about hypergeometric functions do not appear to have been studied by mathematicians. The tininess of the negative minors and the small dimensions involved mean that the negativity of the brackets for small \Delta and large d is irrelevant from any practical point of view. Nonetheless, we find this phenomenon absolutely fascinating.
After all, we start with a problem with no parameters whatsoever (indeed the only numbers that show up are "1" and "2" in the hypergeometric functions) and yet we produce counter-examples to positivity with exponentially tiny brackets! This is a concrete example of a dream to generate "exponentially large hierarchies from nothing". The application of this observation to inspiring radically new solutions to the cosmological constant and hierarchy problems is left as an exercise for the interested reader.

6 The Unitarity Polytope and the Crossing Plane

To summarize our discussion so far, the bootstrap constraints from unitarity and crossing symmetry can be phrased as follows. Unitarity requires that the Taylor coefficients of the four-point function F(z) expanded around z = 1/2 have to lie inside a polytope spanned by the block vectors (5.1). We will call this polytope the unitarity polytope, denoted as U[{\Delta_i}]. Crossing symmetry, on the other hand, restricts the Taylor coefficients of F(z) to lie on a half-dimensional plane X[\Delta_\phi] defined in Section 2. Here \Delta_\phi is the dimension of the external operators. A consistent CFT four-point function must lie in the region defined by the intersection of the crossing plane X[\Delta_\phi] and the polytope U[{\Delta_i}], as in Figure 9. To solve this intersection problem, we need to know what the facets of U[{\Delta_i}] are. As discussed in Section 3, this is a daunting task for general polytopes. Fortunately, from the previous section we see that the convex hull of the block vectors, at least up to d = 5, is a cyclic polytope, for which all the faces are known. Here d is the number of Taylor coefficients we keep. We will only obtain nontrivial constraints if d is odd, i.e.
d = 2N+1, in which case the crossing plane X[\Delta_\phi] is N-dimensional, and the facets of the unitarity polytope U[{\Delta_i}] are

  (0, i_1, i_1+1, i_2, i_2+1, \cdots, i_N, i_N+1) ,
  (i_1, i_1+1, i_2, i_2+1, \cdots, i_N, i_N+1, \infty) ,  (6.1)

where each entry i represents the block vector G_{\Delta_i} given in (5.1). The lower-dimensional faces are obtained from the facets by knocking out one label at a time. The geometry problem we want to solve is the intersection between an N-dimensional crossing plane X[\Delta_\phi] and a (2N+1)-dimensional unitarity polytope in P^{2N+1}. Generically, an N-dimensional plane intersects an (N+1)-dimensional face of the polytope at a point in 2N+1 dimensions. Such an (N+1)-dimensional face of the cyclic polytope takes the form

  (i_1, i_1+1, i_2, \cdots, i_{N+1}) .  (6.2)

The crossing plane X intersects this face if and only if

  \langle i_1+1, i_2, \cdots, i_{N+1}, X \rangle ,\ -\langle i_1, i_2, \cdots, i_{N+1}, X \rangle ,\ \langle i_1, i_1+1, i_3, \cdots, i_{N+1}, X \rangle ,\ \cdots ,\ (-1)^N \langle i_1, i_1+1, i_2, \cdots, i_N, X \rangle \quad \text{all have the same sign} .  (6.3)

Thus, for a four-point function with external operators of dimension \Delta_\phi, a consistent CFT must contain N+2 operators in its spectrum such that (6.3) holds. This picture of positive geometry is very reminiscent of the (tree) Amplituhedron for scattering amplitudes of N = 4 SYM [7]. In that case, we have a polytope comprised of (4+k)-component vectors that are the bosonized super momentum twistors Z^I_i, and a k-plane Y^I_\alpha, with \alpha = 1, \cdots, k and I = 1, \cdots, 4+k. The Amplituhedron then puts a constraint on the plane Y. In the original formulation, the k-plane Y^I_\alpha must be expressible as Y^I_\alpha = C_{\alpha a} Z^I_a, where the k \times n matrix C is in the positive Grassmannian G_+(k, n), with all positive minors. For k = 1 the Amplituhedron is simply the cyclic polytope, while for higher k it defines a generalization of polytopes into the Grassmannian.
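The sign condition on these brackets is equivalent to a simple linear-algebra statement: the plane meets the interior of the simplicial face exactly when the unique linear relation between the face vertices and the plane basis has all face coefficients of one sign. Here is a sketch of that equivalent test (our own helper, assuming numpy/scipy; it is not the bracket form used in the text, just the same geometric fact):

```python
import numpy as np
from scipy.linalg import null_space

def intersects_face(face_vertices, plane_basis):
    """True iff the plane spanned by `plane_basis` meets the interior of the
    simplicial cone spanned by `face_vertices`: the unique relation
    sum_a c_a V_a = sum_i b_i X_i must have all c_a of one sign."""
    A = np.column_stack(list(face_vertices) + [-x for x in plane_basis])
    ns = null_space(A)
    assert ns.shape[1] == 1, "expected a generic configuration"
    c = ns[:, 0][: len(face_vertices)]
    return bool(np.all(c > 1e-12) or np.all(c < -1e-12))

# toy 3d example (N = 1): a 2-plane in R^4 against a triangular face
e = np.eye(4)
face = [e[0], e[1], e[2]]
assert intersects_face(face, [np.array([1.0, 1.0, 1.0, 0.0]), e[3]])
assert not intersects_face(face, [np.array([1.0, -1.0, 1.0, 0.0]), e[3]])
```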
A more recent definition of the Amplituhedron characterizes the geometry obtained by projecting through Y: the data Z_a becomes 4-dimensional after the projection, and is required to have a maximal "winding number" [27]. The Amplituhedron is then defined as the set of planes Y in a general region for which Y intersects the Z-polytope with the prescribed winding number. But in both definitions, the external data Z^I_a is thought of as fixed, and we are interested in carving out an allowed space of k-planes Y by various conditions. For the CFT case discussed in this paper, we instead have the crossing plane X fixed by the external dimension \Delta_\phi, while the unitarity polytope varies with the spectrum and intersects with the former. See Figure 10.

Figure 10: The comparison between the geometry for CFT and the Amplituhedron. In the CFT case, we have a fixed crossing plane X that intersects with the unitarity polytope. The latter is determined by the spectrum of the CFT operators. In the Amplituhedron case, we have the polytope determined by the momentum twistors Z^I_i held fixed, and we look for k-planes Y, constrained by positivity/projected winding number conditions.

7 N = 1: Three-Dimensional Polytope

We now begin studying in detail the geometry of the crossing plane intersecting the cyclic polytope. In the Taylor scheme, we keep the Taylor coefficients up to order 2N+1 of the four-point function and the conformal block around z = 1/2. The four-point function resides on the plane X, which is N-dimensional and conveniently parameterized as:

  F = \left( F_0,\ 4\Delta_\phi F_0,\ F_2,\ \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) F_0 + 4\Delta_\phi F_2,\ F_4,\ \tfrac{64}{15}\Delta_\phi(32\Delta_\phi^4 - 20\Delta_\phi^2 + 3) F_0 - \tfrac{16}{3}\Delta_\phi(4\Delta_\phi^2 - 1) F_2 + 4\Delta_\phi F_4,\ \cdots \right)^T \in P^{2N+1} .  (7.1)

Here F_0, F_2, F_4, \cdots, F_{2N} parametrize the N-dimensional crossing plane X in P^{2N+1}. Using the rescaling freedom in P^{2N+1}, we can set F_0 = 1.
As we increase N, it suffices to restrict ourselves to the case of odd-dimensional polytopes. This is because in even dimensions the intersection problem merely constrains the new parameter F_{2N}, and we do not learn anything new compared to the analysis in the previous dimensionality. With N = 1, we are considering block vectors given by:

  G_\Delta = \left( 1,\ 2\alpha\Delta,\ 4\Delta(\Delta-1) + 2\Delta\alpha,\ \tfrac{8}{3}\alpha\Delta(\Delta^2 - \Delta + 1) \right)^T ,  (7.2)

where again \alpha = {}_2F_1(\Delta, \Delta+1, 2\Delta, 1/2) / {}_2F_1(\Delta, \Delta, 2\Delta, 1/2). The unitarity polytope is a three-dimensional cyclic polytope, positively spanned by the vectors G_{\Delta_i}. The two-dimensional facets consist of the following two sets:

  (0, i, i+1) , \quad (i, i+1, \infty) .  (7.3)

Here i represents the vector G_{\Delta_i}, 0 is the identity operator G_0 = (1, 0, \cdots, 0), and \infty is G_\infty = (0, 0, \cdots, 1). The subscripts i and i+1 label two operators with \Delta_i < \Delta_{i+1} and nothing in between. Note that here we assume that the last vertex of the cyclic polytope is at \Delta = \infty, which is expected for realistic CFTs. However, for the geometry problem, even with a putative finite spectrum of \Delta's, we can always think of the finite cyclic polytope as living inside a larger one where we add a final vertex at infinity. The crossing plane X is one-dimensional, represented by the 4 \times 2 matrix

  X = \begin{pmatrix} 1 & 0 \\ 4\Delta_\phi & 0 \\ 0 & 1 \\ \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) & 4\Delta_\phi \end{pmatrix} ,  (7.4)

where the two columns are the vectors multiplying F_0 and F_2. The crossing plane X intersects the face (0, i, i+1) if and only if (see (6.3))

  \langle X, i, i+1 \rangle ,\ -\langle X, 0, i+1 \rangle ,\ \langle X, 0, i \rangle \quad \text{all have the same sign} .  (7.5)

Similarly, X intersects the face (i, i+1, \infty) if and only if

  \langle X, i+1, \infty \rangle ,\ -\langle X, i, \infty \rangle ,\ \langle X, i, i+1 \rangle \quad \text{all have the same sign} .  (7.6)

The union of conditions (7.5) and (7.6), scanned over all faces, is the necessary and sufficient condition for the bootstrap problem to have a solution in this dimensionality. To extract useful constraints from these conditions, it is often helpful to derive necessary (but not necessarily sufficient) conditions by projecting the geometry to lower dimensions.
Specifically, we will project our three-dimensional polytope through one of its vertices to reduce it to a two-dimensional polygon. The crossing plane, on the other hand, retains its dimensionality and is still a line. We will discuss the projected problems and derive constraints on the spectrum below.

7.1 Projecting through the Identity 0

Let us start by projecting our three-dimensional cyclic polytope through the identity, G_0 = (1, 0, 0, 0), to obtain a two-dimensional cyclic polytope in P^2. The crossing plane X, on the other hand, is still one-dimensional after the projection. In this way we have reduced the geometric question to asking whether a two-dimensional polygon (or a curve, in the case of a continuous spectrum) intersects with a line or not. The edges of the polygon are (i, i+1)_{proj}, which descend from (0, i, i+1) before the projection. We use a subscript to remind the reader that these are facets of the projected geometry. In practice, the projection through the identity 0 amounts to the following manipulation on the determinant \langle 0, F, i, i+1 \rangle:

  \det \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 4\Delta_\phi & G^1_{\Delta_i} & G^1_{\Delta_{i+1}} \\ 0 & F_2 & G^2_{\Delta_i} & G^2_{\Delta_{i+1}} \\ 0 & \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) + 4\Delta_\phi F_2 & G^3_{\Delta_i} & G^3_{\Delta_{i+1}} \end{pmatrix} = \det \begin{pmatrix} 4\Delta_\phi & G^1_{\Delta_i} & G^1_{\Delta_{i+1}} \\ F_2 & G^2_{\Delta_i} & G^2_{\Delta_{i+1}} \\ \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) + 4\Delta_\phi F_2 & G^3_{\Delta_i} & G^3_{\Delta_{i+1}} \end{pmatrix} \to \det \begin{pmatrix} 1 & 1 & 1 \\ \frac{F_2}{4\Delta_\phi} & \frac{G^2_{\Delta_i}}{G^1_{\Delta_i}} & \frac{G^2_{\Delta_{i+1}}}{G^1_{\Delta_{i+1}}} \\ \tfrac{4}{3}(1 - 4\Delta_\phi^2) + F_2 & \frac{G^3_{\Delta_i}}{G^1_{\Delta_i}} & \frac{G^3_{\Delta_{i+1}}}{G^1_{\Delta_{i+1}}} \end{pmatrix} .  (7.7)

In the last step we have rescaled the vectors by their leading positive components, G^1_{\Delta_i} = 2\Delta_i \alpha for the block columns and 4\Delta_\phi for the F column. If the crossing plane and the polytope intersect before the projection, they must again intersect after the projection, but not vice versa. The condition for the projected crossing plane to intersect the two-dimensional polygon is

  \langle X, i \rangle_{proj} = \langle 0, X, i \rangle ,\ -\langle X, i+1 \rangle_{proj} = -\langle 0, X, i+1 \rangle \quad \text{have the same sign} .  (7.8)
This condition is weaker than the union of (7.5) and (7.6), as it only requires the two geometric objects to intersect after the projection. Indeed, note that our condition keeps only two of the brackets in (7.5), and none of those in (7.6). The absence of the conditions from (7.6) reflects the fact that the edges (i, \infty) and (i+1, \infty) of the two-dimensional faces (i, i+1, \infty) all end up inside the two-dimensional polygon after the projection through the identity. We can immediately say something interesting about the spectrum from (7.8). Geometrically, the block vectors form a curve parametrized by \Delta, and the crossing plane is a line.

Figure 11: The 2d geometry obtained by projecting through the identity 0. The blue curve is the block vectors parametrized by y = G^3_\Delta / G^1_\Delta, x = G^2_\Delta / G^1_\Delta, as in (7.7). The unitarity polytope is constructed from the convex hull of points on this curve. The red (\Delta_\phi = 0.3), green (\Delta_\phi = 0.4) and orange (\Delta_\phi = 0.5) lines are the crossing planes X[\Delta_\phi] for various external dimensions, parametrized as y = \tfrac{4}{3}(1 - 4\Delta_\phi^2) + F_2, x = \tfrac{F_2}{4\Delta_\phi}.

We display the geometry in Figure 11. We see that generically the crossing plane intersects the block curve at two points. These two points are the solutions of the equation

  0 = \langle 0, X, \Delta \rangle = 4\Delta_\phi G^3_\Delta - 16\Delta_\phi^2 G^2_\Delta + \tfrac{16}{3}\Delta_\phi (4\Delta_\phi^2 - 1) G^1_\Delta = \tfrac{32}{3} \Delta_\phi \Delta \left( -6\Delta_\phi(\Delta - 1) + \alpha(\Delta^2 - \Delta + 4\Delta_\phi^2 - 3\Delta_\phi) \right) .  (7.9)

Since \alpha = {}_2F_1(\Delta, \Delta+1, 2\Delta, 1/2) / {}_2F_1(\Delta, \Delta, 2\Delta, 1/2) is a slowly varying function of \Delta (see Figure 7), we can effectively treat it as a constant over a wide range of \Delta. In this approximation we see that the equation has two roots in \Delta, denoted as \Delta_\pm with \Delta_+ > \Delta_-; both \Delta_\pm are functions of \Delta_\phi. The necessary condition (7.8) tells us that there must be a pair of operators such that either \Delta_+ or \Delta_- lies in between. In other words, there must exist at least one operator with dimension \Delta satisfying

  \Delta_- < \Delta < \Delta_+ .  (7.10)

Examples of allowed and disallowed spectra are shown in Figure 12.
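The roots \Delta_\pm are easy to find numerically. A sketch assuming scipy (our own helpers; we solve the bracketed factor of (7.9), with the overall positive factor \tfrac{32}{3}\Delta_\phi\Delta stripped off, and the bracketing intervals chosen by inspection for \Delta_\phi = 0.5):

```python
from scipy.special import hyp2f1
from scipy.optimize import brentq

def alpha(delta):
    """alpha(Delta) of eq. (5.2)."""
    return hyp2f1(delta, delta + 1, 2 * delta, 0.5) / hyp2f1(delta, delta, 2 * delta, 0.5)

def f(delta, dphi):
    """Bracketed factor of (7.9); its positive roots are Delta_-, Delta_+."""
    return -6 * dphi * (delta - 1) + alpha(delta) * (delta ** 2 - delta + 4 * dphi ** 2 - 3 * dphi)

dphi = 0.5
d_minus = brentq(f, 0.05, 1.0, args=(dphi,))   # f changes sign on (0.05, 1)
d_plus = brentq(f, 1.0, 10.0, args=(dphi,))    # and again on (1, 10)
assert 0 < d_minus < 1 < d_plus
```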
This statement in particular implies that the gap in the spectrum between the identity and the lightest operator has to be smaller than \Delta_+. Note that this analysis also gives us constraints on the four-point function for a given putative spectrum. Consider the part of the spectrum that lies between \{\Delta_-, \Delta_+\}, and denote the operator closest to \Delta_- as \Delta_{min}, and that closest to \Delta_+ as \Delta_{max}. The four-point function X is then bounded by:

  \langle 0, F, \Delta_{min-1}, \Delta_{min} \rangle > 0 , \quad \langle 0, F, \Delta_{max}, \Delta_{max+1} \rangle > 0 .  (7.11)

We illustrate this in Figure 13.

7.2 Projecting through \infty

We can derive another necessary condition by projecting through the infinity vertex G_\infty = (0, \cdots, 0, 1). Again, after the projection the polytope is two-dimensional, with edges (i, i+1)_{proj}, and the crossing plane is a one-dimensional line. The condition for the two to intersect in the projected geometry is

  \langle X, i \rangle_{proj} = \langle X, i, \infty \rangle ,\ -\langle X, i+1 \rangle_{proj} = -\langle X, i+1, \infty \rangle \quad \text{have the same sign} .  (7.12)

The intersection of the projected crossing plane with the block curve is now determined by the equation

  0 = \langle X, \Delta, \infty \rangle = -2\alpha\Delta + 4\Delta_\phi .  (7.13)

This equation has a unique positive solution in \Delta, denoted as \Delta_*. This implies that there must be a pair of operators (possibly including the identity) sandwiching \Delta_*. This constraint is always satisfied in a realistic CFT spectrum, since we have the identity operator to the left of \Delta_* and infinitely many operators near infinity.

8 N = 2: Five-Dimensional Polytope

We now consider the five-dimensional polytope, where the block vectors have six components:

  G_\Delta = \left( 1,\ 2\Delta\alpha,\ 4\Delta(\Delta-1) + 2\Delta\alpha,\ \tfrac{8}{3}\Delta\alpha(\Delta^2-\Delta+1),\ \tfrac{8}{3}\Delta(\Delta-1)(\Delta^2-\Delta+3) + 4\Delta\alpha,\ \tfrac{16}{15}\Delta\alpha(\Delta^2-\Delta+1)(\Delta^2-\Delta+6) - 2\Delta^2(\Delta-1)^2 \right)^T .  (8.1)

The crossing plane at this order is two-dimensional, given by the three columns multiplying F_0, F_2, F_4:

  X = \begin{pmatrix} 1 & 0 & 0 \\ 4\Delta_\phi & 0 & 0 \\ 0 & 1 & 0 \\ \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) & 4\Delta_\phi & 0 \\ 0 & 0 & 1 \\ \tfrac{64}{15}\Delta_\phi(32\Delta_\phi^4 - 20\Delta_\phi^2 + 3) & \tfrac{16}{3}(\Delta_\phi - 4\Delta_\phi^3) & 4\Delta_\phi \end{pmatrix} ,  (8.2)

and the boundaries are

  (0, i, i+1, j, j+1) , \quad (i, i+1, j, j+1, \infty) .
  (8.3)

This implies that a putative spectrum is consistent only if there exists a set of four operators (i, i+1, j, k) whose corresponding face intersects the crossing plane at a point, i.e.

  \langle X, i, i+1, j \rangle ,\ -\langle X, i+1, j, k \rangle ,\ \langle X, i, j, k \rangle ,\ -\langle X, i, i+1, k \rangle \quad \text{have the same sign} .  (8.4)

Here j, k can also be 0 or \infty. Following our experience with the three-dimensional polytope, the interesting constraints arise when we view the geometry projected through 0. In this case, we are considering the cyclic polytope with facets (i, i+1, j, j+1). The crossing plane is still two-dimensional, and it intersects the polytope if there exist three operators (i, i+1, j) such that

  \langle X, i, i+1 \rangle ,\ -\langle X, i, j \rangle ,\ \langle X, i+1, j \rangle \quad \text{have the same sign} .  (8.5)

Note that in the above, since we are projecting through 0, each vector has five components, as the leading piece is removed, and the brackets now correspond to determinants of 5 \times 5 matrices. Our geometry is now four-dimensional. We can reduce the dimensionality further by either projecting through, or projecting onto, the two-dimensional crossing plane. Projecting through the crossing plane means that we are looking at the part of the geometry orthogonal to X. As discussed previously, we can consider a GL transformation acting on the five-component vectors such that the crossing plane X takes the following form:

  X = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} .  (8.6)

Acting on all block vectors with the same GL transformation, the orthogonal directions are simply the remaining two components. On the other hand, projecting onto the crossing plane means that we are considering the image of the unitarity polytope on the two-dimensional plane.
In practice, the two components of this projection for the points (i, i+1, j) can be read off from the coefficients of F_2 and F_4 in the following determinant:

  \langle F, i, i+1, j \rangle = \det \begin{pmatrix} \frac{F_2}{4\Delta_\phi} & \tilde G^2_{\Delta_i} & \tilde G^2_{\Delta_{i+1}} & \tilde G^2_{\Delta_j} \\ \tfrac{4}{3}(1 - 4\Delta_\phi^2) + F_2 & \tilde G^3_{\Delta_i} & \tilde G^3_{\Delta_{i+1}} & \tilde G^3_{\Delta_j} \\ \frac{F_4}{4\Delta_\phi} & \tilde G^4_{\Delta_i} & \tilde G^4_{\Delta_{i+1}} & \tilde G^4_{\Delta_j} \\ \tfrac{16}{15}(32\Delta_\phi^4 - 20\Delta_\phi^2 + 3) + \tfrac{4}{3}(1 - 4\Delta_\phi^2) F_2 + F_4 & \tilde G^5_{\Delta_i} & \tilde G^5_{\Delta_{i+1}} & \tilde G^5_{\Delta_j} \end{pmatrix} ,  (8.7)

where we have used the shorthand notation \tilde G^I_\Delta \equiv G^I_\Delta / G^1_\Delta. The two different projections give us complementary information. The projection through X allows us to have a bird's-eye view of all the block vectors, gaining information on how the operators of a consistent spectrum must be distributed along the curve of block vectors. Once the crossing plane and the polytope intersect, the projection onto the crossing plane gives us information regarding the neighboring operators, as well as bounds on the four-point function itself.

8.1 Projecting through X

We begin with the projection through X. Instead of finding the \Delta_\phi-dependent GL(3) transformation that separates out the directions orthogonal to the crossing plane, we can compute the following projection of the block vectors:

  \{ \langle X, 0, v_1, \Delta \rangle ,\ \langle X, 0, v_2, \Delta \rangle \} ,  (8.8)

where v_1, v_2 are arbitrarily chosen auxiliary vectors that do not lie on the crossing plane X. The projected block vector is parametrized by these two coordinates on a two-dimensional plane. By construction, the crossing plane X and the identity 0 are located at the origin of this two-dimensional plane. The block vectors (8.1) form a curve, starting from the origin and parametrized by \Delta. The two points \Delta_\pm, defined as the solutions to (7.9), play a special role in this geometry. In particular, they are collinear with the origin.
To see this, note that (7.9) tells us that \langle X_1, X_2, 0, \Delta_\pm \rangle = 0, where X_1, X_2 are the two vectors spanning the crossing "line" of the N = 1 problem; this implies that the three-dimensional block vector for \Delta_\pm can be spanned by X_1, X_2 and the identity. This then tells us that the four-component determinants \langle X_1, 0, \Delta_-, \Delta_+ \rangle = \langle X_2, 0, \Delta_-, \Delta_+ \rangle = 0. Now, explicitly expanding the five-component determinant \langle X, 0, \Delta_+, \Delta_- \rangle, one finds that it can be written as a linear combination of \langle X_1, 0, \Delta_-, \Delta_+ \rangle, \langle X_2, 0, \Delta_-, \Delta_+ \rangle and \langle X_1, X_2, 0, \Delta_\pm \rangle, which all vanish. Thus

  \langle X, 0, \Delta_+, \Delta_- \rangle = 0 .  (8.9)

In other words, the line between \Delta_\pm must pass through the origin of this two-dimensional plane. This is illustrated in Figure 14 (where the external dimension is chosen to be \Delta_\phi = 1.5), where the two points \Delta_\pm lie on opposite sides of the origin. Rather amusingly, the vertex \Delta_- is always much closer to the origin than \Delta_+. The criterion for the consistency of a spectrum is that there exists a set of three operators such that (8.5) is satisfied, i.e.

  \langle X, i, i+1 \rangle ,\ \langle X, i+1, j \rangle ,\ \langle X, j, i \rangle \quad \text{have the same sign} .

This means that X is on the same side of each of the boundaries (i, i+1), (i+1, j) and (j, i), i.e. the origin must be in the convex hull of the three block vectors. In other words, there have to be three points on the block curve forming a triangle that encloses the origin; see Figure 14 for an example.

Figure 14: The geometry projected through the crossing plane X. The external dimension that defines the crossing plane was chosen to be \Delta_\phi = 1.5. Notice that (0, \Delta_-, \Delta_+) are collinear. The purple dots correspond to the positions of three operators, labeled (i, i+1, j), which form a triangle enclosing the origin 0.

Note that since \Delta_+, \Delta_- are collinear with the origin, this criterion automatically requires that there is at least one operator between \Delta_+ and \Delta_-, which is the consistency condition from the N = 1 geometry in Section 7. The above criterion imposes global constraints on the spectrum.
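In the projected two-dimensional plane, the consistency criterion is just an origin-inside-triangle test: the three 2 \times 2 determinants of consecutive vertex pairs must share one sign. A minimal sketch (numpy assumed; the helper names are ours):

```python
import numpy as np

def cross2(a, b):
    """2x2 determinant <a, b> of two 2d vectors."""
    return a[0] * b[1] - a[1] * b[0]

def contains_origin(p1, p2, p3):
    """True iff the origin lies strictly inside triangle (p1, p2, p3):
    the determinants <p_i, p_j>, taken cyclically, all share one sign."""
    s = [np.sign(cross2(a, b)) for a, b in ((p1, p2), (p2, p3), (p3, p1))]
    return s[0] != 0 and s[0] == s[1] == s[2]

# a triangle surrounding the origin, and one entirely in the first quadrant
assert contains_origin(np.array([1.0, 0.0]), np.array([-1.0, 1.0]), np.array([-1.0, -1.0]))
assert not contains_origin(np.array([1.0, 0.0]), np.array([2.0, 1.0]), np.array([1.0, 2.0]))
```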
Let the dimension of the heaviest operator between \Delta_\pm be \Delta_{max}. Since \Delta_\pm are on opposite sides of the origin, we can draw a line that is tangent to the curve before \Delta_- and passes through the origin. This line intersects the curve in between \Delta_\pm, at a point we denote \Delta_T, as shown in Figure 15. Now if \Delta_{max} > \Delta_T, then we have the same conclusion as before: either there exists a \Delta_{max+1} in between \Delta_+ and \Delta_c, or there is a light operator between \Delta_a and \Delta_b. However, if \Delta_{max} < \Delta_T, the line between \Delta_{max} and the origin only intersects the block curve above \Delta_+, i.e. \Delta_{b,c} no longer exist. Thus we conclude that we must have \Delta_{max+1} in between \Delta_+ and \Delta_a.

Figure 15: There are two possible configurations of the spectrum, depending on where \Delta_{max} lies with respect to \Delta_T, the point where the line passing through the origin and tangent to the curve below \Delta_- intersects the block curve. On the LHS we have \Delta_{max} > \Delta_T, for which we conclude that either \Delta_{max+1} is between \Delta_+ and \Delta_c, or there is a light operator between \Delta_a and \Delta_b. On the RHS, for \Delta_{max} < \Delta_T, we must have \Delta_{max+1} in between \Delta_+ and \Delta_a.

Finally, we consider the constraints on the first two primaries \Delta_1 and \Delta_2 in the spectrum. For \Delta_1 < \Delta_-, we know that \Delta_2 cannot be greater than \Delta_+, since otherwise there would be no operators between \Delta_\pm. Similarly, the region with \Delta_1 > \Delta_+ is also ruled out by the same argument. For \Delta_1 between \Delta_- and \Delta_+, \Delta_2 cannot lie above the contour \langle X, 0, \Delta_1, \Delta_2 \rangle = 0, since otherwise there would not exist a triplet of operators \Delta_i, \Delta_{i+1}, \Delta_j satisfying (8.4). Combining all these statements, we carve out the space of consistent 1d CFTs for the first two primaries (\Delta_1, \Delta_2) in Figure 16.

8.2 Projecting onto X

As before, we start with the geometry projected through the identity 0. The cross section of the four-dimensional polytope by the crossing plane X is a two-dimensional polygon.
The polygon gives a bound on the actual values of the four-point function, parametrized by F_0, F_2, F_4 as in (7.1). The shape of the polygon is determined by the spectrum through some interesting combinatorics inherited from the cyclic polytope. The edges of the polygon come from the facets of the four-dimensional cyclic polytope, which are of the form (i, i+1, j, j+1). A vertex (i, i+1, j), on the other hand, is the intersection of two edges which share exactly three labels. See Figure 17 for an illustration of this combinatoric rule. A triplet (i, i+1, j) is an actual vertex of the polygon if the corresponding facet intersects the crossing plane, i.e. if (8.5) is true. To recap:

  Edges: (i, i+1, j, j+1) , \quad Vertices: (i, i+1, j) .

Figure 16: Constraints on the first two primary operators (\Delta_1, \Delta_2). The region below the blue line is allowed. Before the dashed line, \Delta_1 < \Delta_- and hence \Delta_2 must be smaller than \Delta_+. After the dashed line, \Delta_- < \Delta_1, and \Delta_2 must lie below the curve \langle X, 0, \Delta_1, \Delta_2 \rangle = 0. Finally, for \Delta_1 > \Delta_+ there are no solutions, and this region is ruled out.

Figure 17: The combinatoric rule of the polygon on X inherited from the cyclic geometry. An edge is given by four labels of the form (i, i+1, j, j+1). Two edges, e.g. (i, i+1, j-1, j) and (i, i+1, j, j+1), intersect at a vertex of the form (i, i+1, j) if they share three labels.

In Figure 18 we give two examples of polygons allowed by the above combinatoric rule. We illustrate how the polygon on the crossing plane X changes its shape as we vary the spectrum in Figure 19. The polytope projected through X and the identity 0 is shown on the left. The red curve on the left is the trajectory of the block vector (8.1) as we vary \Delta. The number i on the red curve labels where the block vector for the i-th operator, of dimension \Delta_i, is located. In the figure we show the first 5 operators, but we assume the spectrum continues to infinity.
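The edge/vertex bookkeeping can be made concrete in a few lines. This is our own illustration (labels 0..n-1 stand for the operator indices, ignoring the special vertices 0 and \infty):

```python
def polygon_edges(n):
    """Candidate edges (i, i+1, j, j+1) of the projected polygon on labels 0..n-1."""
    return [(i, i + 1, j, j + 1) for i in range(n - 1) for j in range(i + 2, n - 1)]

def shared_vertex(e1, e2):
    """Two edges meet at a polygon vertex iff they share exactly three labels;
    the shared triple (i, i+1, j) is that vertex (cf. Figure 17)."""
    common = set(e1) & set(e2)
    return tuple(sorted(common)) if len(common) == 3 else None

# e.g. the edges (i, i+1, j-1, j) and (i, i+1, j, j+1) meet at the vertex (i, i+1, j)
assert shared_vertex((0, 1, 2, 3), (0, 1, 3, 4)) == (0, 1, 3)
assert shared_vertex((0, 1, 2, 3), (0, 1, 4, 5)) is None
```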
As we vary the dimension ∆4 of the fourth operator, we see that the polygon changes from a triangle to a polygon, and then to a polygon with infinitely many edges. To make the plot more visible, Figure 19 is not drawn to scale. As a concrete example, the polygon on the crossing plane for the generalized free fermion is shown in Figure 20. The region inside the polygon, which is amazingly thin and tiny, is the allowed four-point function constrained by the bootstrap equation.

Recursive Increase of Resolution

We have completely investigated the bootstrap geometry problem up to the N = 2 case. This is the case where the crossing plane X is two-dimensional in P5, or, stated non-projectively, three-dimensional in six dimensions. We have seen how, as we go to the higher-dimensional problem, our "resolution" on CFT data increases, and we further carve out the space of allowed operator dimensions. In this section we discuss how this recursive procedure increases resolution more systematically. As before, it is natural to work with a (2N+2)-dimensional space of Taylor coefficients for the four-point function. The crossing plane X is a half-dimensional (N+1)-plane in (2N+2) dimensions. We project through the X plane as well as the identity block vector G0 to end up with a geometry in (2N+2) − (N+1) − 1 = N dimensions. As we have discussed, the CFT data {∆} is consistent if and only if there are (N+1) ∆i such that the simplex with vertices ∆i1, ..., ∆iN+1 contains the origin. Now, we would like to systematically relate the solution of this problem with some N to the one where we increase N → (N+1). We have already highlighted an obvious geometric relation between the two problems: we know that if we project the (N+1)-dimensional geometry through the infinity vertex G∞, then we go back to the N-dimensional problem. This motivates us to incorporate the infinity block into our thinking about the geometry problem at hand.
Indeed there is a very natural way to do this, using the following trivial but important piece of basic linear algebra and geometry.

Figure 20: The cyclic polytope projected onto the crossing plane for the free fermion spectrum ∆i = 2∆φ + 2i − 1 (i = 1, 2, ...). The dimension ∆φ of the external operator (the fermion field) is shown in the plot.

Suppose v1, ..., vD+1 are (D+1) vectors in D dimensions, such that the origin is contained inside the simplex defined by them, i.e. such that c1 v1 + ... + cD+1 vD+1 = 0 for some ci > 0. Then the claim is that for an arbitrary vector w, there is some collection of D of the (D+1) v's such that a positive linear combination of {vi1, ..., viD, w} also contains the origin. The proof is trivial. By GL(D) transformations and positive rescalings, we can always bring the vectors v1, ..., vD+1 to the form vi = ei for i = 1, ..., D, and vD+1 = −(e1 + ... + eD). Here the ei are unit vectors with a non-vanishing component in the i-th slot, and we have v1 + ... + vD + vD+1 = 0. Now, we can expand any w in the basis of ei as w = w1 e1 + ... + wD eD, and let us consider the equation c1 v1 + ... + cD vD + cD+1 vD+1 + w = 0, which implies ci = cD+1 − wi. If all the wi are negative, we simply put cD+1 = 0 and ci = −wi > 0. If instead some of the wi are positive, let wi* be the largest of all the w's. Then we can set cD+1 = wi* and ci* = 0, and all the other ci are ci = cD+1 − wi = wi* − wi > 0. Thus in all cases, we have found some positive ci for which ci1 vi1 + ... + ciD viD + w = 0, as claimed. With this simple fact in hand, let us return to our CFT geometry problem, and observe that being able to find (N+1) vectors ∆i1, ..., ∆iN+1 containing the origin is completely equivalent to demanding that there are just N ∆'s which, together with infinity, contain the origin.
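The swap argument is easy to check numerically. The sketch below is stdlib-only and uses an illustrative D = 2 example of my own choosing (the simplex e1, e2, −(e1 + e2) and an arbitrary w); fixing the coefficient of the last listed vector to 1 is harmless, since positive combinations are defined only up to overall positive rescaling.

```python
import itertools

def contains_origin(vecs, tol=1e-9):
    """True if 0 = c_1 v_1 + ... + c_D v_D + 1 * v_{D+1} with all c_i > 0.
    Solves the DxD linear system by Gaussian elimination (stdlib only)."""
    D = len(vecs[0])
    A = [[vecs[j][i] for j in range(D)] for i in range(D)]  # columns = v_1..v_D
    b = [-vecs[D][i] for i in range(D)]
    for col in range(D):
        piv = max(range(col, D), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < tol:
            return False
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, D):
            f = A[r][col] / A[col][col]
            for k in range(col, D):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0] * D
    for r in reversed(range(D)):
        s = b[r] - sum(A[r][k] * c[k] for k in range(r + 1, D))
        c[r] = s / A[r][r]
    return all(ci > tol for ci in c)

# The simplex e1, e2, -(e1+e2) contains the origin (c = (1,1,1)).
v = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
w = (0.3, -0.7)  # an arbitrary extra vector
# Claim: some pair of the v's, together with w, still contains the origin.
good = [pair for pair in itertools.combinations(v, 2)
        if contains_origin(list(pair) + [w])]
print(len(good) >= 1)  # → True
```

Consistent with the proof above, the surviving pair here drops v1 = e1, since w1 = 0.3 is the largest component of w.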
On the one hand, if {∆i1, ..., ∆iN, ∞} contains the origin, we have good CFT data (with ∆iN+1 = ∞). On the other hand, we have just seen that for any set of (N+1) ∆'s containing the origin, there are N of them for which the set {∆i1, ..., ∆iN, ∞} contains the origin. Thus, the set of legal CFT data in the N-dimensional geometry is associated with solving for the space SN of {∆1, ..., ∆N} such that {∆1, ..., ∆N, ∞} contains the origin; we must then demand that the CFT spectrum contains N operators with dimensions inside this set SN. Let us choose some particular {∆1, ..., ∆N} inside SN. Define TN[∆1, ..., ∆N] to be the set of all ∆N+1's such that {∆1, ..., ∆N, ∆N+1} contains the origin. Obviously TN is not empty, since by definition, as ∆N+1 → ∞, {∆1, ..., ∆N, ∞} contains the origin. Note that the union of SN and TN gives a way of labelling every possible simplex that contains the origin: N ∆'s such that {∆1, ..., ∆N, ∞} contains the origin, together with the possible range of ∆N+1's that could go along with these N ∆'s. Now, let's go up to (N+1) dimensions. We are now interested in the space SN+1, i.e. we are looking for some ∆1, ..., ∆N, ∆N+1, ∞ such that

⟨X, 0, ∆1, ..., ∆N, ∆N+1⟩ > 0 ,  ⟨X, 0, ∆1, ..., ∆N, ∞⟩ < 0 ,  ... ,  ⟨X, 0, ∆2, ..., ∆N+1, ∞⟩ > 0 .  (9.1)

Note that the conditions involving ∞ simply demand that ∆1, ..., ∆N+1 is a legal simplex in the N-dimensional problem, since projecting through G∞ sends us back to the N-dimensional geometry. Thus, we learn that ∆1, ..., ∆N+1 must actually be in SN ∪ TN. But importantly, we have one more constraint on ∆N+1! We must have that ⟨X, 0, ∆1, ..., ∆N, ∆N+1⟩ has the opposite sign to ⟨X, 0, ∆1, ..., ∆N, ∞⟩. This is crucial. Recall that TN was always non-empty since ∆N+1 could always go to ∞. But the above condition forbids this.
Indeed, ∆N+1 must be smaller than the largest root, in the variable ∆, of the function f[∆; ∆1, ..., ∆N] = ⟨X, 0, ∆1, ..., ∆N, ∆⟩. So, let us finally define UN[∆1, ..., ∆N] to be the set of all ∆'s such that ⟨X, 0, ∆1, ..., ∆N, ∆⟩ has the opposite sign to ⟨X, 0, ∆1, ..., ∆N, ∞⟩. Again, this set UN is manifestly bounded from above. The recursive expression relating the set of legal CFT data in (N+1) dimensions to the set in N dimensions is

SN+1 = SN ∪ [TN ∩ UN] .  (9.2)

Note that TN ∩ UN may be empty, in which case we are ruling out parts of SN that were consistent in N dimensions but which are seen to be inconsistent with the increased resolution of the (N+1)-dimensional problem. In this way, we can recursively build up the space of allowed ∆'s. In N dimensions we must have some {∆1, ..., ∆N} in some specified ranges. Given any allowed {∆1, ..., ∆N} in N dimensions, there is some manifestly bounded-from-above, and possibly empty, range of ∆N+1's. This manifests the way in which the space of possible operator dimensions is further carved out as we systematically step to higher and higher dimensions.

Outlook

Our investigations in this paper have only scratched the surface of what appears to be a rich set of connections between the geometry and combinatorics of total positivity and the conformal bootstrap program. Even sticking with the d = 1 CFT geometry associated with the SL(2, R) conformal blocks, there are a number of open avenues for future investigation. To begin with, we would like to have a much better understanding of the positivity we have experimentally observed for conformal blocks. In particular, we would like to again draw attention to the extraordinary "fine-tuning" that arises seemingly out of the platonic thin-air of conformal blocks: minors of block vectors are almost always positive, except when 7 or more dimensions are smaller than ∼ 10^−1, and the most negative the minors ever become is about ∼ 10^−20!
It would be fascinating to understand this phenomenon better. And might there be a slightly different basis of conformal blocks for which the positivity is exact? The exploration of total positivity properties involving hypergeometric functions is also interesting from a purely mathematical point of view. The conformal blocks (2.6) are G∆(z) = z^∆ 2F1(∆, ∆, 2∆; z). Of course, as we discussed, the class of functions z^∆ enjoys our total positivity properties. Interestingly, the functions 2F1(∆, ∆, 2∆; z) also appear to be totally positive; indeed, experimentally this appears to be the case for all hypergeometric functions 2F1(a, b, c = a + b, z) with a, b > 0, a fact not studied to our knowledge in the mathematical literature. It would be interesting to find a conceptual proof of this statement, which might give some inroads into understanding some exact statement about positivity for the blocks. Turning to the classification of CFT data, is it possible to more explicitly carry out the recursive procedure for increasing resolution as more Taylor coefficients are kept? Also, the intersection of the unitarity polytope and the crossing plane is in general an interesting polytope, whose facet structure is controlled by the combinatorics of cyclic polytopes, as we illustrated in our low-dimensional examples. But we worked in the simplest cases of one- and two-dimensional geometries, where convex polytopes have a universal shape (line segments and convex polygons), while higher-dimensional polytopes are much more interesting and intricate. Is it possible to characterize all the polytope shapes that can arise in the U ∩ X problem in a nice combinatorial way? Perhaps the most interesting set of questions, from both the physical and positive-geometrical points of view, has to do with understanding the bootstrap problem with higher-dimensional conformal symmetry.
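The positivity observed for the blocks above can be probed numerically. The sketch below is stdlib-only; the dimensions ∆i and points zj are illustrative choices of mine, the 2F1 series is truncated, and what is checked is the Chebyshev-system analog of the total positivity discussed here: the matrix of block values G∆i(zj), for increasing ∆'s and z's, has positive determinant and positive 2×2 minors.

```python
import math
from itertools import combinations

def hyp2f1(a, b, c, z, terms=200):
    """Gauss hypergeometric series (converges for |z| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def block(delta, z):
    # 1d SL(2,R) block G_Delta(z) = z^Delta 2F1(Delta, Delta, 2 Delta; z), as in (2.6).
    return z ** delta * hyp2f1(delta, delta, 2 * delta, z)

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

deltas = [0.5, 1.3, 2.7]  # increasing dimensions (illustrative values)
zs = [0.2, 0.45, 0.7]     # increasing points in (0, 1)
M = [[block(d, z) for z in zs] for d in deltas]
minors2 = [det([[M[i][k] for k in cols] for i in rows])
           for rows in combinations(range(3), 2)
           for cols in combinations(range(3), 2)]
print(det(M) > 0, all(m > 0 for m in minors2))  # → True True
```

These particular ∆'s are well away from the regime (many dimensions below ∼ 10^−1) where the tiny experimental violations mentioned above were observed.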
As we have seen, the most obvious notions of positivity are associated with some ordering: we have block vectors G∆(z), and both the "parameter" ∆ and the one-dimensional space labelled by z have a natural ordering. The situation is more interesting in higher dimensions, where we have blocks that depend on two variables (z, z̄), and two parameters (∆, s) for dimension and spin. Is there a natural extension of positivity involving this pair of two-dimensional spaces (z, z̄) and (∆, s) in a non-trivial way? We close by making some preliminary and elementary observations on this problem, beginning with the bootstrap for d = 2 CFTs. In two dimensions, operators can be labelled by (h, h̄), related to spin and dimension by s = h − h̄ ∈ Z and ∆ = h + h̄. The d = 2 global conformal blocks are simply given as the product of d = 1 blocks via G∆,s(z, z̄) = Gh(z) Gh̄(z̄). The four-point function can then be written as

F(z, z̄) = Σ_{h,h̄} p_{h,h̄} Gh(z) Gh̄(z̄) ,  p_{h,h̄} ≥ 0 ,  h − h̄ = s ∈ Z .  (10.1)

Note that the non-trivial correlation between h and h̄ is enforced by the requirement that the spin h − h̄ = s is an integer. But we can begin with a simpler problem: we consider the same expansion, but relax the requirement on the spin. Said another way, in the real 2d CFT we can think of the sum over all (h, h̄) but with p_{h,h̄} = 0 unless h − h̄ = s is an integer, while we can consider the looser restriction placing no conditions other than the positivity of p_{h,h̄}. Clearly, it is natural to call this the "direct product" of the geometries associated with the z and z̄ problems in d = 1. The notion of the "direct product" of polytopes is a natural one, though not much studied in the mathematical literature. Some general properties can be easily summarized. Let us consider vertices X^i_a and Y^I_A of two projective polytopes P, Q in (N+1) and (M+1) dimensions.
We will define the direct product polytope (P × Q), living in the (N+1) × (M+1)-dimensional direct product space, to be the convex hull of the direct product of the vertices, F^{iI} = Σ_{a,A} p_{aA} (X^i_a Y^I_A). Now, suppose the facets of P are w_i and those of Q are W_I. What can we say about the faces of P × Q? Obviously, the (w_i W_I) bound (P × Q), since trivially (w_i W_I) F^{iI} ≥ 0. Moreover, (w_i W_I) is actually guaranteed to be one of the faces of (P × Q). The reason is that we can find more than (N+1) × (M+1) vertices of (P × Q) that lie on (w_i W_I). Indeed, there are at least (N+1) different X^i_a that lie on w_i and at least (M+1) different Y^I_A that lie on W_I. So at least these (N+1) × (M+1) vertices X^i_{a_i} Y^I_{A_I} lie on (w_i W_I). Now, since (w_i W_I) bounds P × Q and has at least (N+1) × (M+1) vertices on it, it must be a face of (P × Q). In general, there may be further faces of (P × Q) that are not of this form. But with a little extra work, it is possible to prove the following nice fact. A simplicial polytope is one where the neighborhood of every vertex locally looks like a simplex, with the smallest number of faces meeting at the vertex. Then, if P, Q are both simplicial, all the faces of P × Q are the direct products w_i W_I of the faces of P, Q. Cyclic polytopes are simplicial, so we do know the face structure of the loosened geometry problem for 2d CFTs, where we relax the condition p_{h,h̄} = 0 for h − h̄ not equal to an integer. It is interesting to see how the real CFT polytope sits inside the direct product polytope. Let the conformal weights of the CFT be (h_i, h̄_i), with i labeling different global conformal primaries. Then, the vertices of the direct product polytope are G_{h_i}(z) G_{h̄_j}(z̄) for all i, j, but for the CFT polytope we are only keeping the diagonal vertices G_{h_i}(z) G_{h̄_i}(z̄). Now, of course the faces of the direct product polytope still bound the CFT polytope we are interested in.
But in general, there aren't enough vertices of the CFT polytope on these faces for them to correspond to faces of the CFT polytope. In practice, as we have seen, the cyclic polytope constraints are already very restrictive, so even these "looser" direct product polytopes, which bound the real CFT polytopes, are likely highly constraining on CFT data. This is likely the most we can say directly following from the ideas in this paper, without addressing the bigger challenge of determining the positive geometry intrinsic to general CFTs in higher dimensions. Another application of the positive geometry is to the modular bootstrap of the torus partition function in two-dimensional CFT. There the analogs of the block vectors are built from the Virasoro characters, which after a GL transform literally lie on a moment curve. Again, in this case all the techniques presented in the current paper can be directly applied. We leave the direct exploration of these fascinating problems to future work.

There is a simple Lorentzian variation of (A.2) that will directly yield a formula obeying all three properties. Let x^µ_i ∈ R^{1,d−1} with i = 1, ..., 4. We will choose a particular causal structure such that x^µ_{i+1} is in the future lightcone of x^µ_i for i = 1, 2, 3 (see Figure 21). The conformal block then admits the following Lorentzian integral representation [1, 8, 9]:

G_{∆,0}(z, z̄) = N |x_{12}|^∆ |x_{34}|^{−d+∆} ∫_{x_3 < x_5 < x_4} d^d x_5 1 / ( |x_{15}|^∆ |x_{25}|^∆ |x_{35}|^{d−∆} |x_{45}|^{d−∆} ) ,  (A.3)

where we only integrate x_5 over the intersection between the future lightcone of x_3 and the past lightcone of x_4, which is a conformally invariant region in Lorentzian signature. Here |·| is the Lorentzian norm in R^{1,d−1}. N is a normalization constant to be fixed later. Furthermore, the integral is non-singular as we take x_1 → x_2, hence the right-hand side above reproduces the correct small z, z̄ behavior.
In the following we will explicitly evaluate the Lorentzian integral in one dimension and in even spacetime dimensions. By conformal invariance, we can choose the four external points to be at x_1 = (t = 0, x = 0), x_3 = (t = 1, x = 0), x_4 = (t = ∞, x = 0), with x_2 in the intersection between the future lightcone of x_1 and the past lightcone of x_3. The causal structure is shown in Figure 21.

A.1 One Dimension

In one dimension, let χ ≡ x_5 − x_3 = x_5 − 1. The Lorentzian integral reduces to

G_∆(z) = N z^∆ ∫_0^∞ dχ 1 / ( (χ+1)^∆ (χ+1−z)^∆ χ^{1−∆} ) .  (A.4)

Using the integral representation of the Gauss hypergeometric function

2F1(a, b, c, z) = Γ(c) / (Γ(b) Γ(c−b)) ∫_0^∞ du 1 / ( u^{b−c+1} (u+1)^{c−a} (u+1−z)^a ) ,  (A.5)

we obtain the 1d conformal block (2.6) used throughout the main text of the paper:

G_∆(z) = z^∆ 2F1(∆, ∆, 2∆, z) .  (A.6)

Here we have chosen N to normalize the block to start from z^∆ in the small z expansion.

A.2 Two Dimensions

Let us parametrize x_5 as x^µ_5 = (t = τ + 1, x^1 = ρ) and x_2 as x^µ_2 = (t = T, x^1 = X). The cross ratios are z = T + X and z̄ = T − X. In two dimensions the Lorentzian integral takes the form

G_{∆,0}(z, z̄) = N (zz̄)^{∆/2} ∫_0^∞ dτ ∫_0^τ dρ { 1 / ( [(τ+1)² − ρ²]^{∆/2} [(τ+1−T)² − (ρ−X)²]^{∆/2} [τ² − ρ²]^{(d−∆)/2} ) + (ρ → −ρ) } .

Since the integrand is symmetric under exchanging u with v, we can replace the integration range by 0 < u < ∞ and 0 < v < ∞ at the price of a factor 1/2. It follows that the integral factorizes into products of terms like (A.5), and we reproduce the standard scalar SL(2, C) conformal block in two dimensions [21, 29-31]:

G_{∆,0}(z, z̄) = (zz̄)^{∆/2} 2F1(∆/2, ∆/2, ∆, z) 2F1(∆/2, ∆/2, ∆, z̄) ,  (A.9)

where we have chosen the normalization constant N such that the leading term is (zz̄)^{∆/2} in the small z, z̄ expansion.

A.3 Even Dimensions

Let us parametrize the integration point x_5 by x^t_{53} = τ and |x⃗_{53}| = ρ. The second external point x_2 will be parametrized as x^µ_2 = (t = T, x^1 = X, x^⊥ = 0).
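The Euler-type representation (A.5) can be spot-checked against the hypergeometric series. The sketch below is stdlib-only (midpoint quadrature with the substitution u = t/(1 − t); the parameter values are illustrative) and compares the two for a = b = ∆, c = 2∆, the case relevant to the 1d block (A.6).

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    # Truncated Gauss hypergeometric series, valid for |z| < 1.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def hyp2f1_integral(a, b, c, z, n=20000):
    """Representation (A.5), mapped to t in (0,1) via u = t/(1-t) and
    evaluated with the midpoint rule (which avoids both endpoints)."""
    pref = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        u = t / (1.0 - t)
        f = u ** (c - b - 1) * (u + 1) ** (a - c) * (u + 1 - z) ** (-a)
        total += f / (1.0 - t) ** 2 * h  # du = dt / (1-t)^2
    return pref * total

a = b = 1.5; c = 3.0; z = 0.3  # the 1d block has a = b = Delta, c = 2 Delta
print(abs(hyp2f1_integral(a, b, c, z) / hyp2f1_series(a, b, c, z) - 1) < 1e-3)  # → True
```

A convenient independent cross-check is the closed form 2F1(1, 1; 2; z) = −ln(1 − z)/z, which the series reproduces to high accuracy.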
The cross ratios are z = T + X and z̄ = T − X. The Lorentzian formula (A.3) is then

G_{∆,0}(z, z̄) = N V_{d−1} (zz̄)^{∆/2} ∫_0^∞ dτ ∫_0^τ ρ^{d−2} dρ ∫_0^π dθ sin^{d−3}θ × 1 / ( [(τ+1)² − ρ²]^{∆/2} [(τ+1−T)² − (ρ² − 2ρX cos θ + X²)]^{∆/2} [τ² − ρ²]^{(d−∆)/2} ) ,  (A.10)

where V_D = 2π^{(D+1)/2} / Γ((D+1)/2) is the volume of a D-dimensional sphere. Let us first focus on the θ integral (A.14). Back to I_m(α), we then have

I_m(α) = 1/(2ρX)^{m+1} Σ_{k=0}^{m} (−1)^k (m choose k) 1/(α − m + k − 1) [(τ+1−T)² − (ρ² + X²)]^k × { 1/[(u − z̄ + 1)(v − z + 1)]^{α−m+k−1} − 1/[(u − z + 1)(v − z̄ + 1)]^{α−m+k−1} } .  (A.15)

We would like to write the integrand as a sum of powers of (u − z̄ + 1) and (v − z + 1), so that the final integral can be done using (A.5). To achieve this, we rewrite

(τ+1−T)² − (ρ² + X²) = ½ (u − z̄ + 1)(v − z + 1) + ½ (u − z + 1)(v − z̄ + 1) .  (A.16)

Figure 1: A point A inside a polytope is given by a non-negative sum of the vertices v_i.

Figure 3: A polygon. While the line segment (V_n V_2) is not an edge of the polygon, (V_1 V_2) is.

Figure 4: Signs of brackets involving the point V_i depend on its position relative to the other indices. In the top figure we see that for d = 2 the signs can be positive or negative, depending on whether there is a gap between a, b. When there is no gap, the signs are always positive. The middle figure shows the same phenomenon for d = 4. The bottom figure shows the novelty for odd d, where V_1 and −V_n must appear.

Figure 5: Both the tetrahedron and the triangular bipyramid are cyclic polytopes.

Figure 6: Two 3d polytopes with six vertices. (a) Cyclic polytope. (b) Non-cyclic polytope.

Figure 7: The red curve is α (5.2) as a function of ∆. The value of α is bounded in a narrow region, with the upper bound given by 3/2 (∆ → 0) and the lower bound given by √2 (∆ → ∞).

Figure 8: The plots of the functions g_1, g_4, and g_5.

α_2 = 3, α_3 = 10.6137, α_4 = 58.5971, α_5 = 12.8414, α_6 = 689.048, α_7 = −951.43 .
(5.16)

Figure 9: A four-point function is consistent if it lies on the intersection between the crossing plane X[∆φ] and the unitarity polytope U[{∆i}].

The condition that the crossing plane X[∆φ] intersects the above face of the unitarity polytope U[{∆i}] is that the crossing plane X intersects the face (i, i+1, ∞) if and only if ⟨X, i, i+1⟩, −⟨X, ∞, i+1⟩, ⟨X, ∞, i⟩ all have the same sign. (7.6) Since (7.3) are all the faces of the three-dimensional polytope, the crossing plane intersects the polytope iff either one of the two conditions (7.5) and (7.6) is satisfied. Of course, generically if one condition is satisfied the other will not be; the only way X can intersect U[{∆i}] on both kinds of faces is if it only intersects U[{∆i}] on the edge (i, i+1) common to both kinds of faces, forcing ⟨X, i, i+1⟩ = 0.

Figure 12: In order for the unitarity polytope to intersect the crossing plane, we must have operators lying between {∆−, ∆+}. Cases (a) and (b) show configurations where all the operators are outside this region and hence cannot intersect the crossing line. In case (c) it does intersect.

... ∞ have the same sign. (7.12)

Figure 13: For a putative spectrum, the four-point function is bounded by the operators that are closest to {∆−, ∆+}.

Figure 14: Here we present the 2d plot of the block vectors, with coordinates given by (⟨X, 0, v_1, i⟩, ⟨X, 0, v_2, i⟩), where v_1 = (0, 1, 6, 15, 2, −2) and v_2 = (−1, ...

Figure 18: The projection of the cyclic polytope onto the two-dimensional crossing plane X is a polygon. The edges and vertices of the polygon are constrained by the cyclic geometry (8.11). Here we show two examples allowed by the combinatorics.

Figure 19: An illustration of how the polygon on the crossing plane changes its shape as we deform the spectrum. Left: The polytope projected through the crossing plane and the identity 0, as discussed in Section 8.1.
The reference vectors v_i are chosen as before. Right: The polygon from the polytope projected onto the crossing plane X, parametrized by F_2/F_0 and F_4/F_0.

Figure 21: The gray area is the integration region for x^µ_5 in the Lorentzian integral formula (A.3) for the conformal block.

Define u = τ + ρ and v = τ − ρ. We can then write [(τ+1−T)² − (ρ² − 2ρX cos θ + X²)] in terms of these variables, where we have defined u = τ + ρ and v = τ − ρ, and

I_m(α) = ∫ dy y^m / [(τ+1−T)² − (ρ² − 2ρXy + X²)]^α .  (A.12)

Let us compute the integral I_m(α). First we note the following.

Here k labels the helicity sector. For example, the pure gluon amplitude has 2+k negative-helicity states with k ≥ 0.

Figure 15, and similarly Figure 19, has been deformed from the realistic plot (e.g. Figure 14) to make it more visible.

Acknowledgements

We are grateful to J. Kaplan, P. Galashin, Y.

A Scalar Conformal Blocks

In this appendix we give a self-contained derivation of conformal blocks in general spacetime dimension, focusing on the case when all the operators are scalars. We will review a Lorentzian integral formula for conformal blocks [1, 8, 9] and give a closed-form expression in even dimensions. For simplicity, we will focus on the case when the four external operators are all scalars with the same dimension ∆φ. We will also assume that the intermediate operator is a scalar with dimension ∆. This conformal block will be denoted as G_{∆,0}(z, z̄), where z, z̄ are the standard cross ratios. The defining properties of the conformal block are: (1) it is conformally invariant; (2) it is an eigenvector of the quadratic Casimir of the conformal group; (3) for small z, z̄, G_{∆,0}(z, z̄) goes like (zz̄)^{∆/2}. If we only impose the first two conditions, then the function Ψ_{∆,0}, defined via the Euclidean integral (A.2), will do the job. Here |·| is the Euclidean norm and the integral in x_5 runs over the whole of R^d. This is usually known as the shadow representation of the conformal partial wave Ψ_{∆,0}(z, z̄).
The latter is not quite the conformal block G_{∆,0}(z, z̄). Rather, Ψ_{∆,0}(z, z̄) is a linear combination of G_{∆,0} and G_{d−∆,0}. The two terms have different monodromy around z = z̄ = 0, which can be used to extract the conformal block [28]. We then have the corresponding expansion. We should further do the binomial expansion in the numerator by writing u − z + 1 = u − z̄ + 1 − (z − z̄), and so on. We therefore arrive at an expression in terms of I_{2n}(α); note that I_{2n}(α) is an even function of ρ. Let us now return to the conformal block. Since I_{2n}(α) is even under the exchange u ↔ v, we can replace the integration range by 0 < u < ∞ and 0 < v < ∞ at the price of a factor 1/2 (A.20). The u, v integrals can be done by expanding (u − v)^{d−2−2n−1} and using (A.5). We therefore arrive at the final expression for the even-dimensional scalar conformal block as a finite sum of hypergeometric functions (A.21). Let us fix the overall normalization constant N. To do this, we only need to compute the integral at z = z̄ = 0. In four dimensions, d = 4, the various sums in (A.21) collapse and we reproduce the standard scalar conformal block [21, 30, 31], G_{∆,0}(z, z̄) = 2(∆ − 1) ... In six dimensions we have also checked that (A.21) agrees numerically with the known expression in [30].

References

[1] A. M. Polyakov, "Nonhamiltonian approach to conformal quantum field theory," Zh. Eksp. Teor. Fiz. 66 (1974) 23-42 [Sov. Phys. JETP 39, 9 (1974)].
[2] R. Rattazzi, V. S. Rychkov, E. Tonni, and A. Vichi, "Bounding scalar operator dimensions in 4D CFT," JHEP 12 (2008) 031, 0807.0004.
[3] S. Rychkov, EPFL Lectures on Conformal Field Theory in D ≥ 3 Dimensions. SpringerBriefs in Physics, 2016.
[4] D. Simmons-Duffin, "The Conformal Bootstrap,"
in Proceedings, Theoretical Advanced Study Institute in Elementary Particle Physics: New Frontiers in Fields and Strings (TASI 2015), Boulder, CO, USA, June 1-26, 2015, pp. 1-74, 2017, 1602.07982.
[5] D. Poland and D. Simmons-Duffin, "The conformal bootstrap," Nature Phys. 12 (2016), no. 6, 535-539.
[6] D. Poland, S. Rychkov, and A. Vichi, "The Conformal Bootstrap: Theory, Numerical Techniques, and Applications," 1805.04405.
[7] N. Arkani-Hamed and J. Trnka, "The Amplituhedron," JHEP 10 (2014) 030, 1312.2007.
[8] B. Czech, L. Lamprou, S. McCandlish, B. Mosk, and J. Sully, "A Stereoscopic Look into the Bulk," JHEP 07 (2016) 129, 1604.03110.
[9] P. Kravchuk and D. Simmons-Duffin, "Light-ray operators in conformal field theory," JHEP 11 (2018) 102, 1805.00098.
[10] D. Gaiotto, D. Mazac, and M. F. Paulos, "Bootstrapping the 3d Ising twist defect," JHEP 03 (2014) 100, 1310.5078.
[11] M. F. Paulos, J. Penedones, J. Toledo, B. C. van Rees, and P. Vieira, "The S-matrix bootstrap. Part I: QFT in AdS," JHEP 11 (2017) 133, 1607.06109.
[12] J. Maldacena and D. Stanford, "Remarks on the Sachdev-Ye-Kitaev model," Phys. Rev. D94 (2016), no. 10, 106002, 1604.07818.
[13] M. Hogervorst and B. C. van Rees, "Crossing symmetry in alpha space," JHEP 11 (2017) 193, 1702.08471.
[14] J. Qiao and S. Rychkov, "Cut-touching linear functionals in the conformal bootstrap," JHEP 06 (2017) 076, 1705.01357.
[15] J. Qiao and S. Rychkov, "A tauberian theorem for the conformal bootstrap," JHEP 12 (2017) 119, 1709.00008.
[16] D. Mazac and M. F. Paulos, "The Analytic Functional Bootstrap I: 1D CFTs and 2D S-Matrices," 1803.10233.
[17] D. Mazac and M. F. Paulos, "The Analytic Functional Bootstrap II: Natural Bases for the Crossing Equation," 1811.10646.
[18] D. Mazac, "A Crossing-Symmetric OPE Inversion Formula," 1812.02254.
[19] P. Liendo, C. Meneghelli, and V. Mitev, "Bootstrapping the half-BPS line defect," JHEP 10 (2018) 077, 1806.01862.
[20] M. Billò, V. Gonçalves, E. Lauria, and M. Meineri, "Defects in conformal field theory," JHEP 04 (2016) 091, 1601.02883.
[21] F. A. Dolan and H. Osborn, "Conformal Partial Waves: Further Mathematical Results," 1108.6194.
[22] D. Mazac, "Analytic bounds and emergence of AdS2 physics from the conformal bootstrap," JHEP 04 (2017) 146, 1611.10060.
[23] B. Grunbaum, V. Kaibel, V. Klee, and G. M. Ziegler, Convex Polytopes. Springer, New York, 2003.
[24] N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, A. B. Goncharov, A. Postnikov, and J. Trnka, Grassmannian Geometry of Scattering Amplitudes. Cambridge University Press, 2016.
[25] A. Hodges, "Eliminating spurious poles from gauge-theoretic amplitudes," JHEP 05 (2013) 135, 0905.1473.
[26] S. Karlin and W. Studden, Tchebycheff Systems: With Applications in Analysis and Statistics. Pure and Applied Mathematics. Interscience Publishers, 1966.
[27] N. Arkani-Hamed, H. Thomas, and J. Trnka, "Unwinding the Amplituhedron in Binary," JHEP 01 (2018) 016, 1704.05069.
[28] D. Simmons-Duffin, "Projectors, Shadows, and Conformal Blocks," JHEP 04 (2014) 146, 1204.3894.
[29] S. Ferrara, R. Gatto, and A. F. Grillo, "Properties of Partial Wave Amplitudes in Conformal Invariant Field Theories," Nuovo Cim. A26 (1975) 226.
[30] F. A. Dolan and H. Osborn, "Conformal four point functions and the operator product expansion," Nucl. Phys. B599 (2001) 459-496, hep-th/0011040.
[31] F. A. Dolan and H. Osborn, "Conformal partial waves and the operator product expansion," Nucl. Phys. B678 (2004) 491-507, hep-th/0309180.
[]
[ "Peer-reviewed preprint A Catalog of Spectra, Albedos, and Colors of Solar System Bodies for Exoplanet Comparison" ]
[ "J H Madden [email protected] \nCarl Sagan Institute\nAstronomy Department\nCornell University\n311 Space Science Building, 14850 Ithaca, NY\n", "Lisa Kaltenegger \nCarl Sagan Institute\nAstronomy Department\nCornell University\n311 Space Science Building, 14850 Ithaca, NY\n" ]
[ "Carl Sagan Institute\n", "Astronomy Department\nCornell University\n311 Space Science Building14850IthacaNY", "Carl Sagan Institute\n", "Astronomy Department\nCornell University\n311 Space Science Building14850IthacaNY" ]
[]
We present a catalog of spectra and geometric albedos, representative of the different types of Solar System bodies, from 0.45 to 2.5 microns. We analyzed published calibrated, uncalibrated spectra, and albedos for Solar System objects and derived a set of reference spectra and reference albedo for 19 objects that are representative of the diversity of bodies in our Solar System. We also identified previously published data that appears contaminated. Our catalog provides a baseline for comparison of exoplanet observations to 19 bodies in our own Solar System, which can assist in the prioritization of exoplanets for time intensive follow-up with next generation Extremely Large Telescopes (ELTs) and space based direct observation missions. Using high and low-resolution spectra of these Solar System objects, we also derive colors for these bodies and explore how a color-color diagram could be used to initially distinguish between rocky, icy, and gaseous exoplanets. We explore how the colors of Solar System analog bodies would change when orbiting different host stars. This catalog of Solar System reference spectra and albedos is available for download through the Carl Sagan Institute.
10.1089/ast.2017.1763
[ "https://export.arxiv.org/pdf/1807.11442v1.pdf" ]
51,876,700
1807.11442
56cf2c7774559646b7f635fcb1aba03b32714b52
Peer-reviewed preprint A Catalog of Spectra, Albedos, and Colors of Solar System Bodies for Exoplanet Comparison Published to Astrobiology 2018 J H Madden [email protected] Carl Sagan Institute, Astronomy Department, Cornell University, 311 Space Science Building, Ithaca, NY 14850 Lisa Kaltenegger Carl Sagan Institute, Astronomy Department, Cornell University, 311 Space Science Building, Ithaca, NY 14850 Keywords: Solar System, Reflectance Spectroscopy, Planetary Habitability, Biosignatures, Exoplanets, Planetary Environments, Exoplanet Characterization, Photometric Colors, Albedo. We present a catalog of spectra and geometric albedos, representative of the different types of Solar System bodies, from 0.45 to 2.5 microns. We analyzed published calibrated, uncalibrated spectra, and albedos for Solar System objects and derived a set of reference spectra and reference albedo for 19 objects that are representative of the diversity of bodies in our Solar System. We also identified previously published data that appears contaminated. Our catalog provides a baseline for comparison of exoplanet observations to 19 bodies in our own Solar System, which can assist in the prioritization of exoplanets for time intensive follow-up with next generation Extremely Large Telescopes (ELTs) and space based direct observation missions. Using high and low-resolution spectra of these Solar System objects, we also derive colors for these bodies and explore how a color-color diagram could be used to initially distinguish between rocky, icy, and gaseous exoplanets. We explore how the colors of Solar System analog bodies would change when orbiting different host stars. This catalog of Solar System reference spectra and albedos is available for download through the Carl Sagan Institute. INTRODUCTION The first spectra of extrasolar planets have already been observed for gaseous bodies (e.g.
Dyudina et al., 2016; Kreidberg et al., 2014; Mesa et al., 2016; Sing et al., 2016; Snellen et al., 2010). To aid in comparative planetology, exoplanet observations will require an accurate set of disk-integrated reference spectra and albedos of Solar System objects. To establish this catalog for the Solar System we use disk-integrated spectra from several sources. We use un-calibrated and calibrated spectra as well as albedos, when available from the literature, to compile our reference catalog. About half of the spectra and albedos we derive in this paper are based on un-calibrated observations obtained from the Tohoku-Hiroshima-Nagoya Planetary Spectral Library (THN-PSL) (Lundock et al., 2009), which provides a large coherent dataset of uncalibrated data taken with the same telescope. Our analysis shows contamination of part of that dataset, as discussed in sections 2.1 and 2.2; therefore we only include a subset of their data in our catalog (see discussion 4.3). This paper provides the first catalog of calibrated spectra (Fig. 1) and geometric albedos (Fig. 2) of 19 bodies in our Solar System, representative of a wide range of object types: all 8 planets, 9 moons (representing icy, rocky, and gaseous moons), and 2 dwarf planets (Ceres in the Asteroid belt and Pluto in the Kuiper belt). This catalog is available through the Carl Sagan Institute 1 to enable comparative planetology beyond our Solar System. Several teams have shown that photometric colors of planetary bodies can be used to initially distinguish between icy, rocky, and gaseous surface types (Krissansen-Totton et al., 2016; Cahoy et al., 2010; Lundock et al., 2009; Traub, 2003) and that models of habitable worlds lie in a certain color space (Krissansen-Totton et al., 2016; Hegde & Kaltenegger, 2013; Traub, 2003). We expand these earlier analyses from a smaller sample of Solar System objects to the 19 Solar System bodies in our catalog, which represent the diversity of bodies in our Solar System.
In addition, we explore the influence of spectral resolution on the characterization of planets in a color-color diagram by creating low-resolution versions of our data. Using the derived albedos, we also explore how the colors of analog planets would change if they were orbiting other host stars. Section 2 of this paper describes our methods to identify contamination in the THN-PSL data, to derive calibrated spectra, albedos, and colors from the un-calibrated THN-PSL data, and how we model the colors of the objects around the Sun and other host stars. Section 3 presents our results, Section 4 discusses our catalog, and Section 5 summarizes our findings. METHOD We first discuss our analysis of the THN-PSL data and how we identified contaminated data in detail, then discuss how we derived spectra and albedo from the uncontaminated data. Finally, we discuss spectra and albedo from other data sources for our catalog. Calibrating the Spectra of Solar System Bodies from the THN-PSL The THN-PSL is a collection of 38 spectra of 18 Solar System objects observed over the course of several months in 2008. The spectrum of one of the objects, Callisto, was contaminated and could not be re-observed, while the spectrum for Pluto in the database is a composite spectrum of both Pluto and Charon. We analyzed the data for the 16 remaining Solar System objects for additional contamination and found 6 apparently contaminated objects among them, leaving 10 objects in the database that do not appear contaminated. Their albedos are similar to published values in the literature over the wavelength range for which such data are available. We show the derived albedos for both contaminated and uncontaminated data from the database in Fig. 3, compared to available values from the literature for these bodies. The THN-PSL data were taken in 2008 using the TRISPEC instrument mounted on the Kanata Telescope at the Higashi-Hiroshima observatory.
TRISPEC (Watanabe et al., 2005) splits light into one visible channel and two near-infrared channels, giving a wavelength range of 0.45-2.5 µm. The optical band covered 0.45-0.9 µm and had a resolution of R = λ/Δλ = 138. The first IR channel has a coverage from 0.9-1.85 µm and had a resolution of R = 142. The second IR channel has a coverage from 1.85-2.5 µm and had a resolution of R = 360. Note that the slit subtends 4.5 arcseconds by 7 arcmin, meaning that spectra for larger bodies such as Saturn and Jupiter were not disk integrated (see discussion). As discussed in the original paper, all spectra are unreliable below 0.47 µm and between 0.9-1.0 µm from a dichroic coating problem with the beam splitters. Near 1.4 µm and 1.8 µm the Earth's water absorption degrades the quality, and beyond 2.4 µm thermal contamination is an issue. These wavelength regions are grayed out in all relevant figures in our paper but do not influence our color analysis, due to the choice of filters. The raw data available for download includes all data points. The THN-PSL paper discusses several initial observations of moons that were contaminated with light from their host planet, rendering their spectra inaccurate (080505 Callisto, 081125 Dione, 080506 Io, and 080506 Rhea). These objects (with the exception of Callisto) were observed again and the extra light was removed in a different fashion to more accurately correct the spectra (Lundock et al., 2009). Callisto was not re-observed and therefore the THN-PSL Callisto data remained contaminated (Fig. 3). The fluxes of the published THN-PSL observations were not calibrated but arbitrarily normalized to the value of 1 at 0.7 µm. This makes the dataset generally useful for comparing the colors of the uncontaminated objects, as shown in the original paper, but limits the data's usefulness as a reference for extrasolar planet observations because geometric albedos can only be derived from calibrated spectra.
The conversion factors used in the original publication were not available (Ramsey Lundock, private communication). However, in addition to the V magnitude, the THN-PSL gives the color differences V-R, R-I, R-J, J-K, and H-Ks for each observation, providing the R, I, and J magnitudes. Therefore, we used the published V, R, I, and J magnitudes to derive the conversion factor for each spectrum to match the published color magnitudes and to calibrate the THN-PSL observations. We define the conversion factor $k$ such that $F = k F_{norm}$, where $F_{norm}$ and $F$ are the normalized and absolute spectra respectively. Adapting the method outlined in Fukugita et al. (1995), the magnitude in a single band, using the filter response $S(\lambda)$ and the spectrum of Vega $F_{Vega}(\lambda)$, is given by equation (1): $m = -2.5 \log_{10}\left[\int F(\lambda) S(\lambda)\, d\lambda \big/ \int F_{Vega}(\lambda) S(\lambda)\, d\lambda\right]$ (1). The spectrum of Vega (Bohlin, 2014) 2 as well as the filter responses are the same as in the THN-PSL publication and are shown in Fig. 4 and Fig. 5 respectively. The filters we used are V (Johnson & Morgan, 1953); R and I (Bessell, 1979; Cousins, 1976); J, H, K, and Ks (Tokunaga et al., 2002). [Footnote 2: www.stsci.edu/hst/observatory/crds/calspec.html (alpha_lyr_stis_008.fits)] Since the THN-PSL paper recorded the V, R, I, and J color magnitudes for each object, we derive the conversion factor needed to reproduce each magnitude and average them to obtain $\bar{k}$. For example, substituting $k_V F_{norm} = F$ into equation (1) gives equation (2), and isolating $k_V$ yields equation (3): $m_V = -2.5 \log_{10}\left[k_V \int F_{norm} S_V\, d\lambda \big/ \int F_{Vega} S_V\, d\lambda\right]$ (2); $k_V = 10^{-m_V/2.5} \int F_{Vega}(\lambda) S_V(\lambda)\, d\lambda \big/ \int F_{norm}(\lambda) S_V(\lambda)\, d\lambda$ (3); $\bar{k} = (k_V + k_R + k_I + k_J)/4$ (4). We used this method to calibrate the THN-PSL data for each object. When comparing the coefficient of variation (CV) of the conversion factors for each body, we found that the data showed two distinct groups, one with a CV greater than 14% and another with a CV smaller than 6%. We use that distinction to accept the conversion factor for uncontaminated spectra (CV < 6%) over the different filter bins. If the CV value was in the other group (CV > 14%), the data is flagged as contaminated and not used in our catalog.
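The calibration procedure in Eqs. (1)-(4) can be sketched in a few lines. This is an illustrative reconstruction under the paper's definitions, not the authors' code; all function and variable names are our own.

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def band_magnitude(wl, flux, filt_resp, vega_flux):
    """Vega magnitude through one filter, Eq. (1); all arrays share the grid wl."""
    num = _trapz(flux * filt_resp, wl)
    den = _trapz(vega_flux * filt_resp, wl)
    return -2.5 * np.log10(num / den)

def conversion_factor(wl, f_norm, m_pub, filt_resp, vega_flux):
    """k for one band, Eq. (3): scale the normalized spectrum so that
    k * f_norm reproduces the published magnitude m_pub."""
    m_norm = band_magnitude(wl, f_norm, filt_resp, vega_flux)
    return 10.0 ** (-0.4 * (m_pub - m_norm))

def average_k(ks):
    """Mean conversion factor, Eq. (4), plus its coefficient of variation;
    the paper flags an observation as contaminated when CV > 14%."""
    ks = np.asarray(ks, float)
    k_bar = ks.mean()
    return k_bar, ks.std() / k_bar
```

In practice one would evaluate `conversion_factor` once per band (V, R, I, J) against the published magnitudes and then apply `average_k` to obtain the single scale factor and its CV.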
The nature of this contamination is unclear; it could be photometric error during the observation, excess light from the host planet, or other effects that influenced the observations. The values calculated for $k_V$, $k_R$, $k_I$, $k_J$, $\bar{k}$, and the CV for each observation are given in Table A1. Albedos of Solar System Bodies We then derive the geometric albedo from the calibrated spectra as a second part of our analysis (see Table 1 and Table 2 for references) by dividing the observed flux of the Solar System bodies by the solar flux and accounting for the observation geometry as given in equation (5) (de Vaucouleurs, 1964): $A_g = \frac{F_{body}}{F_{sun}} \left(\frac{d\, a_b}{R_b\, a_\oplus}\right)^2 \frac{1}{\Phi(\alpha)}$ (5), where $d$ is the separation between Earth and the body, $a_b$ the distance between the Sun and the body at the time of observation, and $a_\oplus$ the semi-major axis of Earth. $F_{sun}$ and $F_{body}$ are the fluxes from the Sun seen from Earth and the body seen from Earth respectively, $R_b$ is the radius of the body being observed, and $\Phi(\alpha)$ is the value of the phase function at the point in time the observation was taken. For $F_{sun}$, we used the standard STIS Sun spectrum (Bohlin et al., 2001) 3 shown in Fig. 4. If the geometric albedo exceeds 1, the data is flagged as contaminated and not used in our catalog. Note that we also compared the spectra that were flagged as contaminated in this 2-step analysis with the available data and models from other groups (Fig. 3). All flagged spectra show a strong difference in albedo from these bodies as observed by other teams, supporting our analysis method (see Fig. 3). Using colors to characterize planets We use a standard astronomy tool, a color-color diagram, to analyze whether we can distinguish Solar System bodies based on their colors and what effect resolution and filter choice have on this analysis.
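Equation (5) translates directly into code. The following is a minimal sketch under the stated geometry (distances in AU, body radius in km; names are ours, not the paper's):

```python
AU_KM = 1.495978707e8  # kilometers per astronomical unit

def geometric_albedo(f_body, f_sun_earth, d_au, a_b_au, r_b_km, phase):
    """Eq. (5): A_g = (F_body/F_sun) * (d * a_b / (R_b * a_earth))^2 / Phi(alpha).

    d and a_b are given in AU, R_b in km. With a_earth = 1 AU the ratio
    a_b/a_earth is already dimensionless, so only d needs converting to km.
    """
    geometry = (d_au * AU_KM * a_b_au / r_b_km) ** 2
    albedo = (f_body / f_sun_earth) * geometry / phase
    return albedo  # values > 1 are flagged as contaminated in the paper
```

Applied per wavelength bin, this turns a calibrated flux ratio into the geometric albedo curves of Fig. 2.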
Several teams have shown that photometric colors of planetary bodies can be used to initially distinguish between icy, rocky, and gaseous surface types (Krissansen-Totton et al., 2016; Cahoy et al., 2010; Lundock et al., 2009; Traub, 2003). [Footnote 3: www.stsci.edu/hst/observatory/crds/calspec.html (sun_reference_stis_002.fits)] We calculated the colors from high and low-resolution spectra to mimic early results from exoplanet observations, as well as explored the effect of spectral resolution on the colors and their interpretation. The error for colors derived from the THN-PSL data was calculated by adding the errors used by Lundock et al., 2009 and the error of 6% in the $\bar{k}$ value accumulated through the conversion process. This gives $\Delta(R-J) = \pm 0.34$ and $\Delta(J-K) = \pm 0.28$ for the error values. We reduce the high-resolution data of R = 138-360 to R = 8 in order to mimic colors that are generated from low-resolution spectra, as shown in Fig. 6. The colors at high resolutions were used to determine the best color-color combination for surface and atmospheric characterization, a process that was repeated for colors derived from low-resolution spectra. We also explored how to characterize Solar System analog planets around other host stars using their colors by placing the bodies at an equivalent orbital distance around different host stars (F0V, G0V, M0V, and M9V). We used stellar spectra for the host stars from the Castelli and Kurucz Atlas (Castelli & Kurucz, 2004) 4 and the PHOENIX library (Husser et al., 2013) 5 (Fig. 4). As a first order approximation, we have assumed that the albedo of the object would not change under this new incoming stellar flux (see discussion). FIG. 1.
Spectra for 19 Solar System bodies: Ceres, Dione, Earth, Jupiter, Moon, Neptune, Rhea, Saturn, Titan, Uranus (albedos calculated in this paper based on un-calibrated data by Lundock et al., 2009), Callisto, Enceladus (Filacchione et al., 2012), Europa, Ganymede, Io, Mars (McCord & Westphal, 1971), Mercury (Mallama, 2017), Pluto (Protopapa et al., 2008), and Venus (Meadows 2006 (theoretical); Pollack et al. 1978 (observation)). Items are arranged by body type then by distance from the Sun. [Reference key for Fig. 3, partially recoverable: e (Meadows, 2006), g (Karkoschka, 1998), h (Lane & Irvine, 1973), i (McCord & Westphal, 1971), j (Mallama, 2017), k (Fink & Larson, 1979), l (Protopapa et al., 2008), n (Cassini VIMS - NASA PDS); per-panel legend labels and axis residue omitted.] RESULTS A spectra and albedo catalog of a diverse set of Solar System objects We assembled a reference catalog of 19 bodies in our Solar System as a baseline for comparison to upcoming exoplanet observations. To provide a wide range of Solar System bodies in our catalog we compiled and analyzed data from un-calibrated and calibrated spectra of previously published disk-integrated observations. FIG. 4. Reference spectra used for calibration (Sun and Vega) and model spectra used for host stars at 1 AU (F0V, G0V, K0V, M0V, M9V). Vega was multiplied by 10^13 to fit on the same plot.
Our catalog contains spectra and geometric albedo of the 8 planets: Mercury (Mallama, 2017), Venus (Meadows, 2006; Pollack et al., 1978), Earth (Lundock et al., 2009), Mars (McCord & Westphal, 1971), Jupiter, Saturn, Uranus, and Neptune (Lundock et al., 2009); 9 moons: Io, Callisto, Europa, Ganymede, Enceladus 6 (Filacchione et al., 2012), Dione, Rhea, the Moon, and Titan (Lundock et al., 2009); and 2 dwarf planets: Ceres (Lundock et al., 2009) and Pluto (Protopapa et al., 2008). For the 8 planets of the Solar System, 9 moons (Callisto, Dione, Europa, Ganymede, Io, the Moon, Rhea, Titan), and 2 dwarf planets (Ceres and Pluto) we present the absolute fluxes in Fig. 1 and the geometric albedos in Fig. 2. Contaminated spectra in the THN-PSL dataset When we derived the geometric albedo from the calibrated THN-PSL spectra as the second part of our analysis, we found that 6 objects (Io, Europa, Ganymede, Mercury, Mars, and Venus) display geometric albedos exceeding 1, indicating that the measurements are contaminated (see Table 2, Fig. 3). We compared the albedo of these six observations to previously published values in the literature (Mallama, 2017; Meadows, 2006; Spencer et al., 1995; Buratti & Veverka, 1983; Pollack et al., 1978; Fanale et al., 1974; McCord & Westphal, 1971) and found substantial differences over the wavelengths covered by the different teams (Fig. 3). We list the 7 bodies with contaminated THN-PSL measurements in Table 2. Spectra not flagged as contaminated in the THN-PSL dataset Table 1 lists the spectra of the 10 bodies from the THN-PSL database, which were not flagged as contaminated and are part of our catalog, as well as the Pluto-Charon spectrum. It shows the properties we used to calculate their albedos, once we un-normalized the uncalibrated data, as well as references to previously published albedos. [Footnote 6: Data available on NASA's Planetary Data Archive: v1640517972_1, v1640518173_1, v1640518374_1]
Note that we did not use the THN-PSL Pluto-Charon spectrum in our analysis because it is not a Pluto spectrum. Instead we use the spectrum for Pluto published by two teams (Protopapa et al., 2008) that covers the wavelength range required for our analysis. We show both spectra in Fig. 3 for completeness. We compared the derived albedo of the 10 bodies from the THN-PSL database, which were not flagged as contaminated, against disk-integrated spectra and albedo from observations or models in the literature for the wavelengths available. Our derived albedos are in qualitative agreement with previously published data (Fig. 3) for Ceres; Dione and Rhea (Cassini VIMS); Earth (Meadows, 2006); the Moon (Lane & Irvine, 1973); Jupiter, Saturn, Uranus, Neptune, and Titan (Karkoschka, 1998; Fink & Larson, 1979). We simulated their absolute fluxes with the same observation geometry as the THN-PSL spectra to be able to compare them (Fig. 3). Note that small changes are likely due to observation geometry as well as changes in the atmospheres over the time between observations. Giant planets have daily variations in brightness (Belton et al., 1981). For completeness we include the THN-PSL observation of the combined spectrum of Pluto and Charon and compare it to the albedo of Pluto (Protopapa et al., 2008). We averaged several Cassini VIMS observations together and used them as references for Rhea and Dione 7. [Footnote 7: Data available on NASA's PDS: Rhea - v1498350281_1, v1579258039_1, v1579259351_1; Dione - v1549192526_1, v1549192731_1, v1549193961_1] Using color-color diagrams to initially characterize Solar System bodies To qualify the Solar System objects in terms of extrasolar planet observables, we consider whether they are gaseous, icy, or rocky bodies and do not distinguish between moons and planets. Thus, Titan and Venus are both gaseous bodies in our analysis since only their atmosphere is being observed at this wavelength range. FIG. 6.
An example of a reduced resolution spectrum compared to its high-resolution observations. Fig. 7 shows the spectra as well as the colors for the three subcategories in our catalog. The top panel shows gaseous bodies: Jupiter, Saturn, Uranus, Neptune, Venus, Titan. The middle panel shows rocky bodies: Mars, Mercury, Io, Ceres, Earth, and the Moon. The bottom panel shows icy bodies: Ganymede, Dione, Rhea, Callisto, Pluto, Europa, and Enceladus. Each surface type occupies its own color space in the diagram. To explore how the resolution of the available spectra, and thus the observation time available, would influence this classification, we reduced the spectral resolution for all spectra to R = λ/Δλ of 8. FIG. 7. Spectra and color-color diagrams for gaseous, rocky and icy bodies of the Solar System. Previously published data was used for bodies that were contaminated in the THN-PSL, following the references in Fig. A1. [Panel axis labels and color-color legend residue for the Gas, Rock, and Ice panels omitted.] We find that the derived colors of the Solar System bodies do not shift substantially (Fig.
8), showing that colors derived from high and low-resolution spectra provide similar capabilities for first order color characterization of a Solar System object. While a slight shift occurs in the color-color diagram, the three different Solar System surface types (gaseous, rocky and icy) can still be distinguished, showing that colors from low-resolution spectra can be used for first order characterization of bodies in our Solar System. We chose a lower resolution of R = 8 since the bin width near the K-band becomes larger than the K-band filter itself at lower resolutions. Bandwidth is directly proportional to the amount of light collected by a telescope and thus the time needed for observation. If low-resolution spectra could initially characterize a planet, exoplanets could be prioritized for time-intensive high-resolution follow-up observations from their colors. At a lower resolution, a higher signal to noise ratio is required to achieve the same distinguishability as an observation at high resolution. The ratio of the integral uncertainties of two spectra at different resolutions, $\Delta_i$ and $\Delta_j$, is proportional to the number of bins being integrated over, $N$, and the measurement uncertainty of each bin, $\sigma$, as shown in equation 6: $\Delta_i / \Delta_j = \sqrt{N_i}\,\sigma_i \big/ \sqrt{N_j}\,\sigma_j$ (6). We explored different filter combinations to best distinguish between icy, gaseous and rocky bodies. We find that R-J versus J-K colors distinguish the bodies best (see also Krissansen-Totton et al., 2016; Cahoy et al., 2010; Lundock et al., 2009; Traub, 2003). [Fig. 8 legend residue (Full Res. vs. R=8 symbols for Gas, Ice, Rock) omitted.] If only a smaller wavelength range is available, such as V through H or V through I, Fig. 8 shows which alternate filter combinations can still separate the surface types. However, Fig. 8 shows that a wider wavelength range improves the characterization of surfaces for Solar System objects substantially. The success of using this method to characterize the Solar System decreases with narrower wavelength coverage. Long wavelengths (J and K band) especially help distinguish different kinds of Solar System bodies (Fig. 8). To characterize all bodies in the Solar System it is important to have wavelength coverage of the visible and near IR at a resolution that distinguishes each band. Colors of Solar System analog bodies orbiting different host stars To provide observers with the color space where Solar System analog exoplanets could be found, we use the albedos shown in Fig. 2 to explore the colors of similar bodies orbiting different host stars. For airless bodies the albedo is a direct surface measurement, therefore that assumption should be valid for similar surface composition. For objects with substantial atmospheres that can be influenced by stellar radiation, individual models are needed to assess whether the albedo of a system's bodies would notably change due to the different host star flux. Note that Earth's albedo would not change significantly from F0V to M9V host stars in the wavelength range considered here (see Rugheimer et al., 2013, 2015). Fig. 9 shows the colors of the Solar System analog bodies orbiting other host stars. Because their albedo is assumed to be constant, the shift closely mimics the shift in colors of the host star.
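The reduction of the spectra to R = 8 described above can be sketched as a constant-resolving-power binning, where bin edges grow geometrically so that each bin has width Δλ = λ/R. This is our illustrative version, not the authors' pipeline:

```python
import numpy as np

def rebin_constant_R(wl, flux, R=8, wl_min=0.45, wl_max=2.5):
    """Average a spectrum into logarithmically spaced bins of constant
    resolving power R (bin width dlambda = lambda / R), mimicking a
    low-resolution observation over 0.45-2.5 microns."""
    edges = [wl_min]
    while edges[-1] < wl_max:
        edges.append(edges[-1] * (1.0 + 1.0 / R))  # next edge at lambda*(1+1/R)
    edges = np.asarray(edges)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (wl >= lo) & (wl < hi)
        if sel.any():  # skip bins with no spectral points
            centers.append(0.5 * (lo + hi))
            means.append(flux[sel].mean())
    return np.asarray(centers), np.asarray(means)
```

Colors can then be synthesized from the low-resolution output exactly as from the full-resolution spectrum, which is how the R = 8 points in the color-color comparison would be produced.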
For hotter host stars the colors shift to a bluer section of the color-color diagram (F0V). For cooler host stars the colors shift toward a redder portion of the color space (M0V and M9V). This provides insights for observers into where the divisions in color space of rocky, icy, and gaseous bodies lie depending on the host star's spectral class. DISCUSSION Change in colors of gaseous planets Some gaseous bodies in the THN-PSL with multiple spectra (Uranus, Neptune, and Titan) show variations in their colors larger than the error (Fig. 7). Gaseous bodies are known to vary in brightness over timescales shorter than the time between these observations (Belton et al., 1981), consistent with the THN-PSL data. This indicates that any subdivision of gaseous bodies would be challenging from their colors alone. However, the K-band is also more susceptible to photometric error, as discussed in the THN-PSL paper (Lundock et al., 2009), which could add to the observed differences. Multiple uncontaminated observations across the same wavelengths for rocky or icy bodies are not available in the literature; therefore we cannot assess whether a spread in colors also exists for rocky or icy bodies, independent of viewing geometry. Non-disk-integrated spectra of some objects Due to the finite field of view of the TRISPEC instrument, the observations of the Earth, Moon, Jupiter, and Saturn were not disk integrated. A disk-integrated spectrum is preferred because it averages the light from the entire body instead of from a small region of its surface. The spectrograph slit was centered on the planet and aligned longitudinally for Jupiter and Saturn, making the spectra as representative of the entire surface as possible. When comparing their spectra to other sources, the spectra show a good match to disk-integrated spectra (Karkoschka, 1998; Fink & Larson, 1979).
This could not be done for the Moon and the Earthshine observations, leading to variations in their spectra from previously published data. Spectra derived from the THN-PSL dataset We have used several spectra of planets and moons from the THN-PSL dataset that did not appear contaminated in our analysis. The contamination of 6 objects in the THN-PSL database raises questions about the viability of the spectra in this database in general. We compared the 10 bodies that we used in our catalog, which were not flagged as contaminated, against disk-integrated spectra and albedo from observations or models in the literature. These observations or models were not available for the whole wavelength range, thus we could not compare the full wavelength range; however, the range covered shows qualitative agreement with previously published data (Fig. 3) and thus we have included the spectra and albedos we derived from the un-calibrated THN-PSL data in our analysis. [Fig. 9 per-panel legend residue (Gas, Ice, Rock symbols for the four host-star panels) omitted.] For Earth and the Moon, time variability of the spectra can be explained because observations of the Earth and the Moon were not disk integrated, due to the spectrographic slit as discussed in 4.2. Note that for most Solar System objects, reliable disk-averaged spectra for different times are lacking, which are observations that would be useful for future exoplanet comparisons. Similarity of the color of water and rock The primarily liquid water surface of the Earth is unique in the Solar System; however, this is not apparent in the color-color diagrams (Fig. 7, 8, and 9).
This is because water and rock share a similar, relatively flat, albedo over the 0.5-2.5 µm wavelength range. This specific color-color degeneracy for rock and water can be broken if shorter wavelength observations are available (see also Krissansen-Totton et al., 2016). Color of CO2 atmospheres appears similar to icy surfaces Venus has the interesting position of being a rocky planet that has a gaseous appearance but lies amongst the colors of the icy bodies in the color-color diagrams. This is due to Venus having a primarily CO2 atmosphere, which provides a similarly sloped albedo as ice in this wavelength range. This shows the limits of initial characterization through a color-color diagram. It will make habitability assessments from colors alone very difficult for terrestrial planets, especially at the edges of their habitable zones, since CO2 is likely to be present. Estimates of the effective stellar flux that reaches the planet or moon could help to disentangle the ice/CO2 degeneracy on the inner edge of the Habitable Zone. On the outer edge of the Habitable Zone both surface types should be present, CO2-rich atmospheres as well as icy bodies, therefore higher resolution spectra will be needed to break such degeneracy. Spotting the absence of methane in a gas planet's colors The absence of methane in Venus' atmosphere makes it distinguishable from the other gaseous objects in our Solar System in the color-color diagrams. More information about the atmospheric composition of exoplanets and exomoons would be needed before we can assess whether we could derive similar inferences for other planetary systems. Colors of objects that are made of 'dirty snow' In the color-color diagrams, Ganymede and Callisto fall in the region between rocky and icy bodies due to their high amount of 'dirty snow' compared to the other bodies in the icy body category. Given the error bars in their colors, these two bodies could be placed in either the rocky or icy categories.
Such rocky-icy bodies are anticipated in other planetary systems as well and should lie in the color space between the icy and rocky bodies, as in our Solar System. CONCLUSIONS We present a catalog of spectra and geometric albedos for 19 Solar System bodies, which are representative of the types of surfaces found throughout the Solar System for wavelengths from 0.45-2.5 microns. This catalog provides a baseline for comparison of exoplanet observations to the most closely studied bodies in our Solar System. The data used and created by this paper are available for download through the Carl Sagan Institute 8. We show the utility of a color-color diagram to distinguish between rocky, icy, and gaseous bodies in our Solar System for colors derived from high as well as low-resolution spectra (Fig. 7 and 8) and to initially characterize extrasolar planets and moons. The spectra, albedo and colors presented in this catalog can be used to prioritize time-intensive follow up spectral observations of extrasolar planets and moons with current and next generation facilities like the Extremely Large Telescopes (ELTs). Assuming an unchanged albedo, Solar System body analog exoplanets shift their position in a color-color diagram following the color change of the host stars (Fig. 9). Detailed spectroscopic characterization will be necessary to confirm the provisional categorization from the broadband photometry suggested here, which is only based on planets and moons of our own Solar System. Planetary science broke new ground in the 70s and 80s with spectral measurements for Solar System bodies. Exoplanet science will see a similar renaissance in the near future, when we will be able to compare spectra of a wide range of exoplanets to the catalog of bodies in our Solar System. FIG. 2.
FIG. 2. Geometric albedos for 19 Solar System bodies: Ceres, Dione, Earth, Jupiter, Moon, Neptune, Rhea, Saturn, Titan, Uranus (albedos calculated in this paper based on un-calibrated data by Lundock et al., 2009), Callisto

FIG. 5. Standard filters used for flux calibration and color calculations. Gray bands show the wavelength range where the observed fluxes from the THN-PSL are not reliable.

FIG. 8. Comparison of colors calculated using low-resolution (filled symbols, R = 8) versus high-resolution (non-filled symbols, R = 138, 142 and 360 for THN-PSL data) spectra for 19 Solar System bodies around the Sun. Each panel shows a different filter combination and the symbols represent the three surface types: gaseous (square), icy (triangle) and rocky (circle).

FIG. 9. Colors of Solar System bodies around different host stars. Here we show the colors of Solar System bodies for an F0V (upper-left), a G0V (upper-right), an M0V (lower-left), and an M9V (lower-right) host star.

FIG. 3. A comparison of geometric albedos for the Solar System bodies in our catalog between published values and the albedos calculated from the THN-PSL data. THN-PSL-based albedos are denoted with solid lines for uncontaminated data, and with an asterisk and a gray line if contaminated. References for comparison albedos are listed in the key following Tables 1 and 2.

[Figure panels: Venus, Jupiter, Mercury, Earth, Saturn, Titan, Moon, Mars, Uranus, Neptune, Ceres, Io, Europa, Ganymede, Callisto, Pluto, Dione, Rhea, Enceladus; x-axis: λ (µm)]

Table 1. Parameters for the 10 Solar System bodies from the THN-PSL used to calculate the calibrated fluxes and albedos. The reference columns give the sources used for the phase function and albedo. † To obtain the proper geometric albedo for this Earthshine observation a factor of 2.38E5 is needed. * The Pluto-Charon spectrum is added for completeness.

Name | Obs. Date | V Mag. | d (AU) | a_b (AU) | R_b (km) | α (deg.) | Φ(α) | Phase Ref. | Albedo Ref.
Ceres | 11/25/08 | 8.40 | 2.405 | 2.558 | 470 | 23 | 0.34(5) | a | a
Dione | 5/5/08 | 10.40 | 8.97 | 9.298 | 560 | 6 | 0.88(3) | b | l,v
Earth | 11/21/08 | -2.50 | 0.0026† | 0.988† | 6378 | 70 | 0.5 | c | m,n
Jupiter | 5/7/08 | -2.40 | 4.68 | 5.199 | 71492 | 10 | 0.91(8) | d | o
Moon | 11/21/08 | -9.30 | 0.0026 | 0.988 | 1738 | 108 | 0.05(1) | e | e
Neptune 1 | 5/7/08 | 7.90 | 30.14 | 30.04 | 24766 | 2 | 1 | f | o,p
Neptune 2 | 11/20/08 | 7.90 | 30.145 | 30.03 | 24766 | 2 | 1 | f | o,p
Neptune 3 | 11/25/08 | 7.90 | 30.23 | 30.03 | 24766 | 2 | 1 | f | o,p
Neptune 4 | 11/26/08 | 7.90 | 30.247 | 30.03 | 24766 | 2 | 1 | f | o,p
Pluto* | 5/11/08 | 15.00 | 30.72 | 31.455 | 1150 | 1 | 1 | | q
Rhea | 11/25/08 | 9.90 | 9.591 | 9.36 | 764 | 6 | 0.87(2) | b | l,v
Saturn 1 | 5/5/08 | 1.00 | 8.97 | 9.296 | 60268 | 6 | 0.76(3) | d | o,p
Saturn 2 | 11/19/08 | 1.20 | 9.685 | 9.356 | 60268 | 6 | 1 | d | o,p
Saturn 3 | 11/19/08 | 1.20 | 9.685 | 9.356 | 60268 | 6 | 0.76(3) | d | o,p
Saturn 4 | 11/22/08 | 1.20 | 9.6385 | 9.359 | 60268 | 6 | 0.76(3) | d | o,p
Titan 1 | 5/5/08 | 8.40 | 8.97 | 9.3 | 2575 | 6 | 0.98(3) | g | o,p
Titan 2 | 5/6/08 | 8.40 | 8.97 | 9.3 | 2575 | 6 | 0.98(3) | g | o,p
Titan 3 | 11/24/08 | 8.60 | 9.5947 | 9.364 | 2575 | 6 | 0.98(3) | g | o,p
Uranus 1 | 5/11/08 | 5.90 | 20.66 | 20.097 | 25559 | 2 | 1 | f | o,p
Uranus 2 | 11/20/08 | 5.80 | 19.742 | 20.097 | 25559 | 3 | 1 | f | o,p

Table 2. Data for the Solar System bodies from the THN-PSL dataset that were contaminated based on the shape of their calculated geometric albedo. * Note that the authors state that the Callisto data is contaminated.

Name | Obs. Date | V Mag. | d (AU) | a_b (AU) | R_b (km) | α (deg.) | Φ(α) | Phase Ref. | Albedo Ref.
Callisto* | 5/5/08 | 6.30 | 4.73 | 5.214 | 2410 | 10 | 0.60(2) | h | r,s
Europa | 5/7/08 | 5.60 | 4.69 | 5.195 | 1565 | 10 | 0.88(5) | i,b | i,r,s
Ganymede | 11/26/08 | 5.40 | 5.7518 | 5.12066 | 2634 | 8 | 0.80(5) | h | r,s
Io | 11/26/08 | 5.80 | 5.7556 | 5.12348 | 1821 | 8 | 0.87(5) | j | s,t
Mars | 5/12/08 | 1.30 | 1.68 | 1.6676 | 3397 | 35 | 0.58(5) | d | u,n
Mercury | 5/11/08 | 0.00 | 0.99 | 0.37547 | 2440 | 96 | 0.11(5) | d,k | k,x
Venus | 11/20/08 | -4.20 | 1.077 | 0.72556 | 6052 | 63 | 0.4(1) | d | n,m

Table A1. Calculated k values for each band and their average for each observation in the THN-PSL. The CV and albedo (Fig. 3) were used to determine the reliability of each observation.
Name | Obs. Date | k_V | k_R | k_I | k_J | k (mean) | StDev | CV

Uncontaminated (CV < 6% and albedo < 1):
Ceres | 11/25/08 | 8.27E-16 | 8.27E-16 | 8.45E-16 | 8.36E-16 | 8.34E-16 | 8.44E-18 | 1.01%
Dione | 5/5/08 | 1.34E-16 | 1.34E-16 | 1.35E-16 | 1.34E-16 | 1.34E-16 | 6.75E-19 | 0.50%
Earth | 11/21/08 | 1.48E-11 | 1.48E-11 | 1.50E-11 | 1.50E-11 | 1.49E-11 | 1.14E-13 | 0.77%
Jupiter | 5/7/08 | 2.17E-11 | 2.16E-11 | 2.08E-11 | 2.19E-11 | 2.15E-11 | 5.03E-13 | 2.34%
Moon | 11/21/08 | 1.49E-08 | 1.49E-08 | 1.54E-08 | 1.51E-08 | 1.51E-08 | 2.27E-10 | 1.50%
Neptune 1 | 5/7/08 | 5.28E-16 | 5.38E-16 | 4.89E-16 | 5.25E-16 | 5.20E-16 | 2.14E-17 | 4.11%
Neptune 2 | 11/20/08 | 4.44E-16 | 4.53E-16 | 4.33E-16 | 4.42E-16 | 4.43E-16 | 8.44E-18 | 1.90%
Neptune 3 | 11/25/08 | 2.96E-16 | 3.02E-16 | 2.86E-16 | 2.97E-16 | 2.95E-16 | 6.51E-18 | 2.20%
Neptune 4 | 11/26/08 | 7.80E-16 | 7.94E-16 | 7.46E-16 | 7.76E-16 | 7.74E-16 | 2.00E-17 | 2.59%
Pluto | 5/11/08 | 2.67E-18 | 2.67E-18 | 2.70E-18 | 2.71E-18 | 2.69E-18 | 2.10E-20 | 0.78%
Rhea | 11/25/08 | 2.72E-16 | 2.71E-16 | 2.73E-16 | 2.73E-16 | 2.72E-16 | 1.14E-18 | 0.42%
Saturn 1 | 5/5/08 | 8.29E-13 | 8.31E-13 | 7.86E-13 | 8.49E-13 | 8.24E-13 | 2.70E-14 | 3.28%
Saturn 2 | 11/19/08 | 1.11E-12 | 1.10E-12 | 1.05E-12 | 1.13E-12 | 1.10E-12 | 3.44E-14 | 3.14%
Saturn 3 | 11/19/08 | 1.00E-12 | 9.97E-13 | 9.57E-13 | 1.02E-12 | 9.94E-13 | 2.68E-14 | 2.70%
Saturn 4 | 11/22/08 | 7.57E-13 | 7.52E-13 | 7.21E-13 | 7.62E-13 | 7.48E-13 | 1.81E-14 | 2.42%
Titan 1 | 5/5/08 | 1.12E-15 | 1.13E-15 | 1.09E-15 | 1.13E-15 | 1.12E-15 | 1.79E-17 | 1.60%
Titan 2 | 5/6/08 | 1.74E-15 | 1.72E-15 | 1.70E-15 | 1.76E-15 | 1.73E-15 | 2.47E-17 | 1.43%
Titan 3 | 11/24/08 | 1.17E-15 | 1.16E-15 | 1.12E-15 | 1.18E-15 | 1.16E-15 | 2.45E-17 | 2.11%
Uranus 1 | 5/11/08 | 3.04E-15 | 3.12E-15 | 2.76E-15 | 3.00E-15 | 2.98E-15 | 1.53E-16 | 5.12%
Uranus 2 | 11/20/08 | 3.30E-15 | 3.37E-15 | 3.19E-15 | 3.26E-15 | 3.28E-15 | 7.48E-17 | 2.28%

Contaminated (albedo > 1):
Callisto | 5/5/08 | 7.38E-15 | 7.36E-15 | 6.90E-15 | 7.55E-15 | 7.30E-15 | 2.78E-16 | 3.81%
Europa | 5/7/08 | 2.47E-14 | 2.46E-14 | 2.58E-14 | 2.48E-14 | 2.49E-14 | 5.57E-16 | 2.24%
Ganymede | 11/26/08 | 4.20E-14 | 4.17E-14 | 4.52E-14 | 4.26E-14 | 4.29E-14 | 1.62E-15 | 3.79%
Io | 11/26/08 | 1.26E-14 | 1.25E-14 | 1.41E-14 | 1.29E-14 | 1.31E-14 | 7.45E-16 | 5.70%
Mars | 5/12/08 | 1.59E-12 | 1.57E-12 | 1.73E-12 | 1.60E-12 | 1.62E-12 | 7.56E-14 | 4.67%
Mercury | 5/11/08 | 7.18E-12 | 7.12E-12 | 8.02E-12 | 7.31E-12 | 7.41E-12 | 4.16E-13 | 5.61%
Venus | 11/20/08 | 1.40E-10 | 1.39E-10 | 1.43E-10 | 1.39E-10 | 1.40E-10 | 2.08E-12 | 1.49%

Contaminated (CV > 14%):
Earth | 5/11/08 | 1.84E-11 | 1.79E-11 | 1.62E-11 | 2.26E-11 | 1.88E-11 | 2.71E-12 | 14.43%
Moon | 11/21/08 | 2.08E-08 | 1.84E-08 | 2.10E-08 | 3.04E-08 | 2.26E-08 | 5.33E-09 | 23.55%
Uranus | 5/7/08 | 3.28E-15 | 3.22E-15 | 2.60E-15 | 7.30E-16 | 2.46E-15 | 1.19E-15 | 48.52%

Data: www.carlsaganinstitute.org/data/
Stellar model spectra: www.stsci.edu/hst/observatory/crds/castelli kurucz atlas.html (F0V, G0V, K0V, M0V); http://phoenix.astro.physik.uni-goettingen.de (M9V)

ACKNOWLEDGMENTS

We thank Gianrico Filacchione, Ramsey Lundock, Erich Karkoschka, Siddharth Hegde, Paul Helfenstein, Steve Squyres, and our reviewers for helpful discussions and comments. The authors acknowledge support by the Simons Foundation (SCOL # 290357, Kaltenegger), the Carl Sagan Institute, and the NASA/New York Space Grant Consortium (NASA Grant # NNX15AK07H).

AUTHOR DISCLOSURE STATEMENT

No competing financial interests exist.

REFERENCES

Belton, M.J.S., Wallace, L. & Howard, S. (1981). The periods of Neptune - Evidence for atmospheric motions. Icarus 46: 263-274.
Bessell, M.S. (1979). UBVRI photometry. II - The Cousins VRI system, its temperature and absolute flux calibration, and relevance for two-dimensional photometry. PASP 91: 589-607.
Bohlin, R. (2014). Hubble Space Telescope CALSPEC Flux Standards: Sirius (and Vega). AJ 147: 127.
Bohlin, R., Dickinson, M. & Calzetti, D. (2001). Spectrophotometric Standards from the Far-Ultraviolet to the Near-Infrared: STIS and NICMOS Fluxes. AJ 122: 2118-2128.
Buratti, B. & Veverka, J. (1983). Voyager photometry of Europa. Icarus 55: 93-110.
Buratti, B. & Veverka, J. (1984). Voyager photometry of Rhea, Dione, Tethys, Enceladus and Mimas. Icarus 58: 254-264.
Cahoy, K.L., Marley, M.S. & Fortney, J.J. (2010). Exoplanet Albedo Spectra and Colors as a Function of Planet Phase, Separation, and Metallicity. ApJ 724: 189-214.
Castelli, F. & Kurucz, R. (2004). New Grids of ATLAS9 Model Atmospheres. arXiv:astro-ph/0405087.
Cousins, A. (1976). Standard Stars for VRI Photometry with S25 Response Photocathodes. Monthly Notes of the Astronomical Society of South Africa 35: 70.
de Vaucouleurs, G. (1964). Geometric and Photometric Parameters of the Terrestrial Planets. Icarus 3: 187-235.
Dyudina, U., Zhang, X., Li, L., Kopparla, P., Ingersoll, A.P., Dones, L., … Yung, Y.L. (2016). Reflected Light Curves, Spherical and Bond Albedos of Jupiter- and Saturn-like Exoplanets. ApJ 822: 76.
Fanale, F., Johnson, T. & Matson, D. (1974). Io - A surface evaporite deposit. Science 186: 922-925.
Filacchione, G., Capaccioni, F., Ciarniello, M., Clark, R.N., Cuzzi, J.N., Nicholson, P.D., … Flamini, E. (2012). Saturn's icy satellites and rings investigated by Cassini-VIMS: III - Radial compositional variability. Icarus 220(2): 1064-1096.
Fink, U. & Larson, H. (1979). The infrared spectra of Uranus, Neptune, and Titan from 0.8 to 2.5 microns. ApJ 233: 1021-1040.
Fukugita, M., Shimasaku, K. & Ichikawa, T. (1995). Galaxy Colors in Various Photometric Band Systems. PASP 107: 945.
Goode, P., Qiu, J., Yurchyshyn, V., Hickey, J., Chu, M., Kolbe, E., … Koonin, S.E. (2001). Earthshine observations of the Earth's reflectance. Geophys. Res. Lett. 28: 1671-1674.
Hegde, S. & Kaltenegger, L. (2013). Colors of Extreme Exo-Earth Environments. Astrobiology 13: 47-56.
Husser, T., Wende-von Berg, S., Dreizler, S., Homeier, D., Reiners, A., Barman, T. & Hauschildt, P. (2013). A new extensive library of PHOENIX stellar atmospheres and synthetic spectra. A&A 553: A6.
Irvine, W.M., Simon, T., Menzel, D.H., Pikoos, C. & Young, A.T. (1968). Multicolor Photoelectric Photometry of the Brighter Planets. III. Observations from Boyden Observatory. AJ 73: 807.
Johnson, H.L. & Morgan, W.W. (1953). Fundamental stellar photometry for standards of spectral type on the revised system of the Yerkes spectral atlas. ApJ 117: 313.
Kaltenegger, L., Selsis, F., Fridlund, M., Lammer, H., Beichman, C., Danchi, W., … White, G.J. (2010). Deciphering Spectral Fingerprints of Habitable Exoplanets. Astrobiology 10: 89-102.
Karkoschka, E. (1998). Methane, Ammonia, and Temperature Measurements of the Jovian Planets and Titan from CCD-Spectrophotometry. Icarus 133: 134-146.
Kreidberg, L., Bean, J.L., Désert, J.-M., Benneke, B., Deming, D., Stevenson, K.B., … Homeier, D. (2014). Clouds in the atmosphere of the super-Earth exoplanet GJ1214b. Nature 505(7481): 69-72.
Krissansen-Totton, J., Schwieterman, E.W., Charnay, B., Arney, G., Robinson, T.D., Meadows, V. & Catling, D.C. (2016). Is the Pale Blue Dot Unique? Optimized Photometric Bands for Identifying Earth-like Exoplanets. ApJ 817: 31.
Lane, A. & Irvine, W. (1973). Monochromatic phase curves and albedos for the lunar disk. AJ 78: 267.
Lorenzi, V., Pinilla-Alonso, N., Licandro, J., Cruikshank, D.P., Grundy, W.M., Binzel, R.P. & Emery, J.P. (2016). The spectrum of Pluto, 0.40-0.93 µm - I. Secular and longitudinal distribution of ices and complex organics. A&A 585: A131.
Lundock, R., Ichikawa, T., Okita, H., Kurita, K., Kawabata, K.S., Uemura, M., … Kino, M. (2009). Tohoku-Hiroshima-Nagoya planetary spectra library: a method for characterizing planets in the visible to near infrared. A&A 507: 1649-1658.
Mallama, A. (2017). The Spherical Bolometric Albedo of Planet Mercury. ArXiv e-prints.
Mallama, A., Wang, D. & Howard, R.A. (2002). Photometry of Mercury from SOHO/LASCO and Earth. The Phase Function from 2 to 170 deg. Icarus 155: 253-264.
McCord, T. & Westphal, J. (1971). Mars: Narrow-Band Photometry, from 0.3 to 2.5 Microns, of Surface Regions during the 1969 Apparition. ApJ 168: 141.
Meadows, V.S. (2006). Modelling the Diversity of Extrasolar Terrestrial Planets. In C. Aime & F. Vakili (eds.), IAU Colloq. 200: Direct Imaging of Exoplanets: Science & Techniques.
Mesa, D., Vigan, A., D'Orazi, V., Ginski, C., Desidera, S., Bonnefoy, M., … Zurlo, A. (2016). Characterizing HR 3549 B using SPHERE. A&A 593: A119.
Noll, K.S., Roush, T.L., Cruikshank, D.P., Johnson, R.E. & Pendleton, Y.J. (1997). Detection of ozone on Saturn's satellites Rhea and Dione. Nature 388: 45-47.
Pollack, J., Rages, K., Baines, K., Bergstralh, J., Wenkert, D. & Danielson, G. (1986). Estimates of the bolometric albedos and radiation balance of Uranus and Neptune. Icarus 65: 442-466.
Pollack, J., Strecker, D., Witteborn, F., Erickson, E. & Baldwin, B. (1978). Properties of the clouds of Venus, as inferred from airborne observations of its near-infrared reflectivity spectrum. Icarus 34: 28-45.
Protopapa, S., Boehnhardt, H., Herbst, T.M., Cruikshank, D.P., Grundy, W.M., Merlin, F. & Olkin, C.B. (2008). Surface characterization of Pluto and Charon by L and M band spectra. A&A 490(1): 365-375.
Reddy, V., Li, J.-Y., Gary, B., Sanchez, J., Stephens, R., Megna, R., … Hoffmann, M. (2015). Photometric properties of Ceres from telescopic observations using Dawn Framing Camera color filters. Icarus 260: 332-345.
Rugheimer, S., Kaltenegger, L., Sasselov, D. & Segura, A. (2015). Characterizing Pale Blue Dots Around FGKM Stars. In AAS/Division for Extreme Solar Systems Abstracts (Vol. 3).
Rugheimer, S., Kaltenegger, L., Zsom, A., Segura, A. & Sasselov, D. (2013). Spectral Fingerprints of Earth-like Planets Around FGK Stars. Astrobiology 13: 251-269.
Simonelli, D.P. & Veverka, J. (1984). Voyager disk-integrated photometry of Io. Icarus 59: 406-425.
Sing, D.K., Fortney, J.J., Nikolov, N., Wakeford, H.R., Kataria, T., Evans, T.M., … Wilson, P.A. (2016). A continuum from clear to cloudy hot-Jupiter exoplanets without primordial water depletion. Nature 529: 59-62.
Snellen, I.A.G., de Mooij, E.J.W. & Burrows, A. (2010). Bright optical day-side emission from extrasolar planet CoRoT-2b. A&A 513: A76.
Spencer, J. (1987). The Surfaces of Europa, Ganymede, and Callisto: an Investigation Using Voyager IRIS Thermal Infrared Spectra. PhD thesis, The University of Arizona.
Spencer, J., Calvin, W. & Person, M. (1995). CCD Spectra of the Galilean Satellites: Molecular Oxygen on Ganymede. J. Geophys. Res. 100: 19049-19056.
Squyres, S. & Veverka, J. (1981). Voyager photometry of surface features on Ganymede and Callisto. Icarus 46: 137-155.
Tokunaga, A., Simons, D. & Vacca, W. (2002). The Mauna Kea Observatories Near-Infrared Filter Set. II. Specifications for a New JHKL'M' Filter Set for Infrared Astronomy. PASP 114: 180-186.
Tomasko, M. & Smith, P. (1982). Photometry and polarimetry of Titan - Pioneer 11 observations and their implications for aerosol properties. Icarus 51: 65-95.
Traub, W. (2003). The Colors of Extrasolar Planets. In D. Deming & S. Seager (eds.), Scientific Frontiers in Research on Extrasolar Planets (Vol. 294).
Watanabe, M., Nakaya, H., Yamamuro, T., Zenno, T., Ishii, M., Okada, M., … Chrysostomou, A. (2005). TRISPEC: A Simultaneous Optical and Near-Infrared Imager, Spectrograph, and Polarimeter. PASP 117: 870-884.

References for Tables 1 and 2: a (Reddy et al., 2015), b (Buratti & Veverka, 1984), c (Goode et al., 2001), d (Irvine et al., 1968), e (Lane & Irvine, 1973), f (Pollack et al., 1986), g (Tomasko & Smith, 1982), h (Squyres & Veverka, 1981), i (Buratti & Veverka, 1983), j (Simonelli & Veverka, 1984), k (Mallama et al., 2002), l (Noll et al., 1997), m (Kaltenegger et al., 2010), n (Meadows, 2006), o (Karkoschka, 1998), p (Fink & Larson, 1979), q (Lorenzi et al., 2016; Protopapa et al., 2008), r (Spencer, 1987), s (Spencer et al., 1995), t (Fanale et al., 1974), u (McCord & Westphal, 1971), v (Cassini VIMS - NASA PDS), w (Pollack et al., 1978), x (Mallama, 2017)
Person Search in Videos with One Portrait Through Visual and Temporal Links

Qingqiu Huang (CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong)
Wentao Liu (Department of Computer Science and Technology, Tsinghua University; SenseTime Research)
Dahua Lin (CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong) [email protected]

DOI: 10.1007/978-3-030-01261-8_26 · arXiv:1807.10510 · Corpus ID: 51864534
PDF: https://arxiv.org/pdf/1807.10510v1.pdf

Abstract: In real-world applications, e.g. law enforcement and video retrieval, one often needs to search a certain person in long videos with just one portrait. This is much more challenging than the conventional settings for person re-identification, as the search may need to be carried out in environments different from where the portrait was taken. In this paper, we aim to tackle this challenge and propose a novel framework, which takes into account the identity invariance along a tracklet, thus allowing person identities to be propagated via both visual and temporal links. We also develop a novel scheme called Progressive Propagation via Competitive Consensus, which significantly improves the reliability of the propagation process. To promote the study of person search, we construct a large-scale benchmark, which contains 127K manually annotated tracklets from 192 movies. Experiments show that our approach remarkably outperforms mainstream person re-id methods, raising the mAP from 42.16% to 62.27%.
Keywords: person search, portrait, visual and temporal links, Progressive Propagation, Competitive Consensus

1 Introduction

Searching persons in videos is frequently needed in real-world scenarios. To catch a wanted criminal, the police may have to go through thousands of hours of videos collected from multiple surveillance cameras, probably with just a single portrait. To find the movie shots featured by a popular star, the retrieval system has to examine many hour-long films, with just a few facial photos as the references.
In applications like these, the reference photos are often taken in an environment that is very different from the target environments where the search is conducted. As illustrated in Figure 1, such settings are very challenging. Even state-of-the-art recognition techniques would find it difficult to reliably identify all occurrences of a person, facing the dramatic variations in pose, makeup, clothing, and illumination.

Fig. 1: Person re-id differs significantly from the person search task. The first row shows a typical example of person re-id from the MARS dataset [44], where the reference and the targets are captured under similar conditions. The second row shows an example from our person search dataset CSM, where the reference portrait is dramatically different from the targets, which vary significantly in pose, clothing, and illumination.

It is noteworthy that two related problems, namely person re-identification (re-id) and person recognition in albums, have drawn increasing attention from the research community. However, they are substantially different from the problem of person search with one portrait, which we aim to tackle in this work. Specifically, in typical settings of person re-id [44,22,38,45,13,8,16], the queries and the references in the gallery set are usually captured under similar conditions, e.g. from different cameras along a street, and within a short duration. Even though some queries can be subject to issues like occlusion and pose changes, they can still be identified via other visual cues, e.g. clothing. For person recognition in albums [43], one is typically given a diverse collection of gallery samples, which may cover a wide range of conditions and can therefore be directly matched to various queries. Hence, for both problems, the references in the gallery are often good representatives of the targets, and therefore methods based on visual cues can perform reasonably well [22,1,4,3,39,44,43,15,14].
On the contrary, our task is to bridge a single portrait with a highly diverse set of samples, which is much more challenging and requires new techniques that go beyond visual matching. To tackle this problem, we propose a new framework that propagates labels through both visual and temporal links. The basic idea is to take advantage of the identity invariance along a person trajectory: all person instances along a continuous trajectory in a video should belong to the same identity. The connections induced by tracklets, which we refer to as temporal links, are complementary to the visual links based on feature similarity. For example, a trajectory can sometimes cover a wide range of facial images that cannot be easily associated based on visual similarity. With both visual and temporal links incorporated, our framework can form a large connected graph, thus allowing identity information to be propagated over a very diverse collection of instances.

While the combination of visual and temporal links provides a broad foundation for identity propagation, it remains very challenging to carry out the propagation reliably over a large real-world dataset. As we begin with only a single portrait, a few wrong labels during propagation can result in catastrophic errors downstream. Actually, our empirical study shows that conventional schemes like linear diffusion [47,46] even lead to substantially worse results. To address this issue, we develop a novel scheme called Progressive Propagation via Competitive Consensus, which performs the propagation prudently, spreading a piece of identity information only when there is high certainty.

To facilitate research on this problem setting, we construct a dataset named Cast Search in Movies (CSM), which contains 127K tracklets of 1218 cast identities from 192 movies. The identities of all the tracklets are manually annotated. Each cast identity also comes with a reference portrait.
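As a concrete, hedged illustration of the idea, the sketch below implements one plausible reading of Progressive Propagation via Competitive Consensus: for each identity, an unlabeled node takes the maximum of affinity-weighted votes from its neighbors rather than their sum (the competitive consensus), and in each round only the most confident fraction of unlabeled nodes receives a label (the progressive part). The function name, the single combined affinity matrix, and the per-round freezing fraction are assumptions for illustration, not the authors' released implementation:

```python
import numpy as np

def progressive_propagation(W, labels, n_ids, n_rounds=10, frac=0.1):
    """Sketch of progressive label propagation with competitive consensus.

    W:      (n, n) nonnegative affinity matrix (visual + temporal links)
    labels: length-n int array, identity in [0, n_ids) for anchors, -1 if unknown
    """
    n = len(labels)
    P = np.zeros((n, n_ids))          # per-node identity beliefs
    fixed = labels >= 0
    P[fixed, labels[fixed]] = 1.0
    for _ in range(n_rounds):
        free = np.where(~fixed)[0]
        if len(free) == 0:
            break
        # Competitive consensus: per-identity MAX over neighbor votes,
        # instead of the sum used by linear diffusion.
        scores = np.max(W[free][:, :, None] * P[None, :, :], axis=1)
        conf = scores.max(axis=1)
        # Progressive step: freeze only the most confident fraction this round.
        k = max(1, int(np.ceil(frac * len(free))))
        order = np.argsort(-conf)[:k]
        top = free[order]
        winners = scores[order].argmax(axis=1)
        P[top] = 0.0
        P[top, winners] = 1.0
        fixed[top] = True
    return P.argmax(axis=1)

# Toy graph: nodes 0 and 1 are labeled anchors; nodes 2 and 3 are tracklets.
W = np.array([[0., 0., .9, .1],
              [0., 0., .1, .8],
              [.9, .1, 0., 0.],
              [.1, .8, 0., 0.]])
print(progressive_propagation(W, np.array([0, 1, -1, -1]), n_ids=2))  # -> [0 1 0 1]
```

Compared with summing, taking a per-identity max prevents many weak, noisy links from outvoting one strong link, and freezing only high-confidence nodes each round keeps early mistakes from cascading, which matches the prudent, certainty-first behavior described above.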
The benchmark is very challenging, where the person instances for each identity varies significantly in makeup, pose, clothing, illumination, and even age. On this benchmark, our approach get 63.49% and 62.27% mAP under two settings, Comparing to the 53.33% and 42.16% mAP of the conventional visual-matching method, it shows that only matching by visual cues can not solve this problem well, and our proposed framework -Progressive Propagation via Competitive Consensus can significantly raise the performance. In summary, the main contributions of this work lie in four aspects: (1) We systematically study the problem of person search in videos, which often arises in real-world practice, but remains widely open in research. (2) We propose a framework, which incorporates both the visual similarity and the identity invariance along a tracklet, thus allowing the search to be carried out much further. (3) We develop the Progressive Propagation via Competitive Consensus scheme, which significantly improves the reliability of propagation. (4) We construct a dataset Cast Search in Movies (CSM) with 120K manually annotated tracklets to promote the study on this problem. Related Work Person Re-id. Person re-id [41,6,7], which aims to match pedestrian images (or tracklets) from different cameras within a short period, has drawn much attention in the research community. Many datasets [44,22,38,45,13,8,16] have been proposed to promote the research of re-id. However, the videos are captured by just several cameras in nearby locations within a short period. For example, the Airport [16] dataset is captured in an airport from 8 a.m. to 8 p.m. in one day. So the instances of the same identities are usually similar enough to identify by visual appearance although with occlusion and pose changes. Based on such characteristic of the data, most of the re-id methods focus on how to match a query and a gallery instance by visual cues. 
In early works, the matching process was split into feature design [11,9,26,27] and metric learning [28,17,23]. Recently, many deep learning based methods have been proposed to handle the matching problem jointly. Li et al. [22] and Ahmed et al. [1] designed siamese-based networks which employ a binary verification loss to train the parameters. Ding et al. [4] and Cheng et al. [3] exploit a triplet loss to train more discriminative features. Xiao et al. [39] and Zheng et al. [44] proposed to learn features by classifying identities. Although the feature learning methods of re-id can be adopted for the Person Search with One Portrait problem, the tasks are substantially different: in person search the query and the gallery have a huge visual-appearance gap, which makes one-to-one matching fail. Person Recognition in Photo Album. Person recognition [24,43,15,19,14] is another related problem, which usually focuses on persons in photo albums. It aims to recognize the identities of the queries given a set of labeled persons in the gallery. Zhang et al. [43] proposed a Pose Invariant Person Recognition method (PIPER), which combines three types of visual recognizers based on ConvNets, respectively on face, full body, and poselet-level cues. The PIPA dataset published in [43] has been widely adopted as a standard benchmark to evaluate person recognition methods. Oh et al. [15] evaluated the effectiveness of different body regions, and used a weighted combination of the scores obtained from different regions for recognition. Li et al. [19] proposed a multi-level contextual model, which integrates person-level, photo-level and group-level contexts. However, person recognition is also quite different from the person search problem we tackle in this paper, since the query and gallery samples of the same identity are still similar in visual appearance, and the methods mostly focus on recognition by visual cues and context. Person Search.
Several works focus on the person search problem. Xiao et al. [40] proposed a person search task which aims to find the corresponding instances in gallery images without bounding box annotations. The associated data is similar to that in re-id; the key difference is that the bounding box is unavailable in this task. It can be seen as a task combining pedestrian detection and person re-id. Other works search for persons with different modalities of data, such as language-based [21] and attribute-based [35,5] queries, which target application scenarios different from the portrait-based problem we tackle in this paper. Label Propagation. Label propagation (LP) [47,46], also known as Graph Transduction [37,30,32], is widely used as a semi-supervised learning method. It relies on the idea of building a graph in which nodes are data points (labeled and unlabeled) and the edges represent similarities between points, so that labels can propagate from labeled points to unlabeled points. Different kinds of LP-based approaches have been proposed for face recognition [18,48], semantic segmentation [33], object detection [36], and saliency detection [20] in the computer vision community. In this paper, we develop a novel LP-based approach called Progressive Propagation via Competitive Consensus, which differs from conventional LP in two respects: (1) propagating by competitive consensus rather than linear diffusion, and (2) iterating in a progressive manner. Cast Search in Movies Dataset While there have been a number of public datasets for person re-id [44,22,38,45,13,8,16] and album-based person recognition [43], a dataset for our task, namely person search with a single portrait, remains lacking. In this work, we constructed a large-scale dataset, Cast Search in Movies (CSM), for this task.
CSM comprises a query set that contains the portraits of 1,218 cast (the actors and actresses) and a gallery set that contains 127K tracklets (with 11M person instances) extracted from 192 movies. We compare CSM with other datasets for person re-id and person recognition in Table 1. CSM is significantly larger, with 6 times more tracklets and 11 times more instances than MARS [44], the largest person re-id dataset to our knowledge. Moreover, CSM has a much wider range of tracklet durations (from 1 to 4686 frames) and instance sizes (from 23 to 557 pixels in height). Figure 2 shows several example tracklets as well as their corresponding portraits, which are very diverse in pose, illumination, and clothing. It can be seen that the task is very challenging. Query Set. For each movie in CSM, we acquired the cast list from IMDB. For movies with more than 10 cast, we keep only the top 10 according to the IMDB order, which covers the main characters of most movies. In total, we obtained 1,218 cast, which we refer to as the credited cast. For each credited cast, we downloaded a portrait from either its IMDB or TMDB homepage, which serves as the query portrait in CSM. Gallery Set. We obtained the tracklets in the gallery set through five steps: 1. Detecting shots. A movie is composed of a sequence of shots. Given a movie, we first detected the shot boundaries using a fast shot segmentation technique [2,34], resulting in 200K shots in total across all movies. For each shot, we selected 3 frames as the keyframes.
We partitioned all the keyframes into a training set and a testing set by a ratio of 7:3. We then finetuned a Faster-RCNN [29] pre-trained on MSCOCO [25] on the training set. On the testing set, the detector gets around 91% mAP, which is good enough for tracklet generation. 4. Generating tracklets. With the person detector described above, we performed per-frame person detection over all the frames. By concatenating the bounding boxes across frames with IoU > 0.7 within each shot, we obtained 127K tracklets from the 192 movies. 5. Annotating identities. Finally, we manually annotated the identities of all the tracklets. Particularly, each tracklet is annotated as one of the credited cast or as "others". Note that the identities of the tracklets in each movie are annotated independently to ensure high annotation quality with a reasonable budget. Hence, being labeled as "others" means that the tracklet does not belong to any credited cast of the corresponding movie. Methodology In this work, we aim to develop a method to find all the occurrences of a person in a long video, e.g. a movie, with just a single portrait. The challenge of this task lies in the vast gap of visual appearance between the portrait (query) and the candidates in the gallery. Our basic idea is to tackle this problem by leveraging the inherent identity invariance along a person tracklet and propagating the identities among instances via both visual and temporal links. The visual and temporal links are complementary, and the use of both types allows identities to be propagated much further than using either type alone. However, how to propagate over a large, diverse, and noisy dataset reliably remains a very challenging problem, considering that we begin with just a small number of labeled samples (the portraits).
The key to overcoming this difficulty is to be prudent, propagating only the information we are certain about. To this end, we propose a new propagation framework called Progressive Propagation via Competitive Consensus, which can effectively identify confident labels in a competitive way. Graph Formulation The propagation is carried out over a graph of person instances. Specifically, the propagation graph is constructed as follows. Suppose there are C cast in the query set and M tracklets in the gallery set, and the length of the k-th tracklet (denoted by $\tau_k$) is $n_k$, i.e. it contains $n_k$ instances. The cast portraits and all the instances along the tracklets are treated as graph nodes. Hence, the graph contains $N = C + \sum_{k=1}^{M} n_k$ nodes. In particular, the identities of the C cast portraits are known, and the corresponding nodes are referred to as labeled nodes, while the other nodes are called unlabeled nodes. The propagation framework aims to propagate the identities from the labeled nodes to the unlabeled nodes through both visual and temporal links between them. The visual links are based on feature similarity. For each instance (say the i-th), we can extract a feature vector, denoted by $v_i$. Each visual link is associated with an affinity value: the affinity between two instances $v_i$ and $v_j$ is defined to be their cosine similarity, $w_{ij} = v_i^\top v_j / (\|v_i\| \cdot \|v_j\|)$. Generally, a higher affinity value $w_{ij}$ indicates that $v_i$ and $v_j$ are more likely to be from the same identity. The temporal links capture the identity invariance along a tracklet, i.e. all instances along a tracklet should share the same identity. In this framework, we treat the identity invariance as a hard constraint, which is enforced via a competitive consensus mechanism. For two tracklets with lengths $n_k$ and $n_l$, there can be $n_k \cdot n_l$ links between their nodes. Among all these links, the strongest one, i.e. the link between the most similar pair, best reflects the visual similarity.
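As a small illustration, the cosine affinity and the selection of the most similar cross-tracklet pair can be sketched as follows (a sketch with our own variable names, not the paper's implementation):

```python
import numpy as np

# Sketch of the visual affinities: the affinity between two instance features
# is their cosine similarity, and for each pair of tracklets only the single
# strongest cross-link would be kept. All names are illustrative.

def cosine_affinity(V):
    """Pairwise w_ij = v_i^T v_j / (||v_i|| * ||v_j||) for the rows of V."""
    U = V / np.linalg.norm(V, axis=1, keepdims=True)
    return U @ U.T

def strongest_link(feats_a, feats_b):
    """Return (i, j, w): the most similar instance pair across two tracklets."""
    A = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    B = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    W = A @ B.T
    i, j = np.unravel_index(np.argmax(W), W.shape)
    return int(i), int(j), float(W[i, j])
```

Keeping only the strongest pair per tracklet pair reduces the number of visual links from $n_k \cdot n_l$ to one, which is the reduction the text motivates next.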
Hence, we keep only the strongest link for each pair of tracklets, as shown in Figure 4, which makes the propagation more reliable and efficient. Also, thanks to the temporal links, such a reduction does not compromise the connectivity of the whole graph. As illustrated in Figure 4, the visual and temporal links are complementary. The former allows the identity information to be propagated among those instances that are similar in appearance, while the latter allows the propagation along a continuous trajectory, in which the instances can look significantly different. With only visual links, we can obtain clusters in the feature space. With only temporal links, we only have isolated tracklets. However, with both types of links incorporated, we can construct a more connected graph, which allows the identities to be propagated much further. Propagating via Competitive Consensus Each node of the graph is associated with a probability vector $p_i \in \mathbb{R}^C$, which will be iteratively updated as the propagation proceeds. To begin with, we set the probability vector for each labeled node to be a one-hot vector indicating its label, and initialize all others to be zero vectors. Due to the identity invariance along tracklets, we enforce all nodes along a tracklet $\tau_k$ to share the same probability vector, denoted by $p_{\tau_k}$. At each iteration, we traverse all tracklets and update their associated probability vectors one by one. Linear Diffusion. Linear diffusion is the most widely used propagation scheme, where a node updates its probability vector by taking a linear combination of those of its neighbors.
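In generic form, such a diffusion update can be sketched as follows (a minimal illustration with assumed names; the paper's variant, Eq. 1, normalizes the affinities over a tracklet's neighborhood):

```python
import numpy as np

# Minimal sketch of a linear-diffusion update: a node's probability vector
# becomes the affinity-weighted average of its neighbors'. Illustrative names.

def diffusion_update(weights, neighbor_probs):
    """weights: (n,) non-negative affinities; neighbor_probs: (n, C) rows p_j."""
    alpha = np.asarray(weights, dtype=float)
    alpha = alpha / alpha.sum()                 # normalized coefficients
    return alpha @ np.asarray(neighbor_probs)   # weighted average of neighbors
```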
In our setting with identity invariance, the linear diffusion scheme can be expressed as follows:

$p^{(t+1)}_{\tau_k} = \sum_{j \in N(\tau_k)} \alpha_{kj}\, p^{(t)}_j$, with $\alpha_{kj} = \tilde{w}_{kj} \big/ \sum_{j' \in N(\tau_k)} \tilde{w}_{kj'}$.  (1)

Here, $N(\tau_k) = \bigcup_{i \in \tau_k} N_i$ is the set of all visual neighbors of the instances in $\tau_k$, and $\tilde{w}_{kj}$ is the affinity of a neighbor node $j$ to the tracklet $\tau_k$. Due to the constraint that there is only one visual link between two tracklets (see Sec. 4.1), each neighbor $j$ is connected to just one of the nodes in $\tau_k$, and $\tilde{w}_{kj}$ is set to the affinity between the neighbor $j$ and that node. However, we found that the linear diffusion scheme yields poor performance in our experiments, even far worse than the naive visual matching method. An important reason for the poor performance is that errors are mixed into the updated probability vector and then propagated to other nodes. This can cause catastrophic errors downstream, especially in a real-world dataset filled with noise and challenging cases. Competitive Consensus. To tackle this problem, it is crucial to improve the reliability and propagate only the most confident information. In particular, we should trust only those neighbors that provide strong evidence, instead of simply taking the weighted average of all neighbors. Following this intuition, we develop a novel scheme called competitive consensus. When updating $p_{\tau_k}$, the probability vector for the tracklet $\tau_k$, we first collect the strongest evidence supporting each identity c from all the neighbors in $N(\tau_k)$, as

$\eta_k(c) = \max_{j \in N(\tau_k)} \alpha_{kj} \cdot p^{(t)}_j(c)$,  (2)

where the normalized coefficient $\alpha_{kj}$ is defined in Eq. (1). Intuitively, an identity is strongly supported for $\tau_k$ if one of its neighbors assigns a high probability to it.
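The evidence-collection step of Eq. (2) can be sketched as follows (illustrative names; we assume the neighbors' probability vectors are stacked into a matrix):

```python
import numpy as np

# Sketch of Eq. (2): for each identity c, keep the strongest weighted support
# alpha_kj * p_j(c) over the tracklet's neighbors. Names are illustrative.

def collect_evidence(alphas, neighbor_probs):
    """alphas: (n,) normalized coefficients; neighbor_probs: (n, C) rows p_j."""
    contrib = alphas[:, None] * neighbor_probs   # alpha_kj * p_j(c)
    return contrib.max(axis=0)                   # eta_k(c), one value per class
```

Taking the max (rather than the sum of Eq. 1) is what makes a single strongly supporting neighbor sufficient.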
Next, we turn the evidence for individual identities into a probability vector via a tempered softmax function:

$p^{(t+1)}_{\tau_k}(c) = \exp(\eta_k(c)/T) \big/ \sum_{c'=1}^{C} \exp(\eta_k(c')/T)$.  (3)

Here, T is a temperature that controls how much the probabilities concentrate on the strongest identity. In this scheme, all identities compete for high probability values in $p^{(t+1)}_{\tau_k}$ by collecting the strongest support from the neighbors. This allows the strongest identity to stand out. Competitive consensus can be considered a coordinate ascent method to solve Eq. (4), where we introduce a binary variable $z^{(c)}_{kj}$ to indicate whether the j-th neighbor is a trustable source for class c for the k-th tracklet:

$\max \; \sum_{c=1}^{C} p^{(c)}_{\tau_k} \sum_{j \in N(\tau_k)} \alpha_{kj} z^{(c)}_{kj} p^{(c)}_j + \sum_{c=1}^{C} H(p^{(c)}_{\tau_k}) \quad \text{s.t.} \; \sum_{j \in N(\tau_k)} z^{(c)}_{kj} = 1.$  (4)

Here, H is the entropy. The constraint means that one trustable source is selected for each class c and tracklet k. Figure 5 illustrates how linear diffusion and our competitive consensus work. Experiments on CSM also show that competitive consensus significantly improves the performance on the person search problem. Progressive Propagation In conventional label propagation, the labels of all the nodes are updated until convergence, which can be prohibitively expensive when the graph contains a large number of nodes. For the person search problem, however, this is unnecessary: when we are very confident about the identity of a certain instance, we don't have to keep updating it. Motivated by this observation, we propose a progressive propagation scheme to accelerate the propagation process. At each iteration, we fix the labels for a certain fraction of nodes that have the highest confidence, where the confidence is defined as the maximum probability value in $p_i$. We found empirically that a simple freezing schedule, e.g. adding 10% of the instances to the label-frozen set at each iteration, already brings notable benefits to the propagation process.
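A sketch of the tempered softmax of Eq. (3), together with a simple per-iteration freezing step for the progressive scheme, might look like the following (a minimal illustration with assumed names; the 10% schedule follows the text above):

```python
import numpy as np

# Sketch of Eq. (3) (tempered softmax over the evidence vector) and of a
# progressive freezing step: after each iteration, the most confident
# unfrozen tracklets have their labels fixed. Names are illustrative.

def tempered_softmax(eta, T=0.1):
    z = np.exp((eta - eta.max()) / T)   # stable; equal to exp(eta/T)/sum(...)
    return z / z.sum()

def freeze_most_confident(probs, frozen, ratio=0.1):
    """probs: (M, C) tracklet probabilities; frozen: boolean mask over tracklets."""
    conf = probs.max(axis=1)                  # confidence = max probability
    conf[frozen] = -1.0                       # already-frozen tracklets are skipped
    k = max(1, int(ratio * len(probs)))
    idx = np.argsort(-conf)[:k]               # top-k most confident tracklets
    new_frozen = frozen.copy()
    new_frozen[idx] = True
    return new_frozen
```

A small T makes the update concentrate sharply on the strongest identity, matching the observation in Sec. 5.3 that smaller temperatures perform better.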
Note that the progressive scheme not only reduces the computational cost but also improves propagation accuracy. The reason is that, without freezing, the noisy and uncertain nodes keep affecting all the other nodes, which can sometimes cause additional errors. Experiments in Sec. 5.3 show more details. Experiments Evaluation protocol and metrics of CSM The 192 movies in CSM are partitioned into training (train), validation (val) and testing (test) sets. Statistics of these sets are shown in Table 2. Note that there is no overlap between the cast of different sets, i.e. the cast in the testing set do not appear in training and validation. This ensures the reliability of the testing results. Under the Person Search with One Portrait setting, one should rank all the tracklets in the gallery given a query. For this task, we use mean Average Precision (mAP) as the evaluation metric. We also report the recall of the tracklet identification results in terms of R@k. Here, we rank the identities for each tracklet according to their probabilities; R@k is the fraction of tracklets for which the correct identity is listed within the top k results. We consider two test settings in the CSM benchmark, named "search cast in a movie" (IN) and "search cast across all movies" (ACROSS). In the "IN" setting, the gallery consists of just the tracklets from one movie, including the tracklets of the credited cast and those of "others". In the "ACROSS" setting, the gallery comprises all the tracklets of credited cast in the testing set. We exclude the tracklets of "others" in the "ACROSS" setting because, as mentioned in Sec. 3, "others" only means that a tracklet does not belong to any credited cast of that particular movie, not of all the movies in the dataset. Table 3 shows the query/gallery sizes of each setting. Implementation Details We use two kinds of visual features in our experiments.
The first one is the IDE feature [44], widely used in person re-id. The IDE descriptor is a CNN feature of the whole person instance, extracted by a ResNet-50 [12] which is pre-trained on ImageNet [31] and finetuned on the training set of CSM. The second one is the face feature, extracted by a ResNet-101 trained on MS-Celeb-1M [10]. For each instance, we extract its IDE feature and the face feature of the face region, which is detected by a face detector [42]. All the visual similarities in the experiments are calculated as cosine similarity between the visual features. From the results in Table 4, we can see that: (1) Even with a very powerful CNN trained on a large-scale dataset, matching portraits and candidates by visual cues cannot solve the person search problem well, due to the large gap in visual appearance between the portraits and the candidates. Although face features are generally more stable than IDE features, they fail when the faces are invisible, which is very common in real-world videos like movies. (2) Label propagation with linear diffusion gets very poor results, even worse than the matching-based methods. (3) Our approach raises the performance by a considerable margin. Particularly, the performance gain is especially remarkable in the more challenging "ACROSS" setting (62.27 with ours vs. 42.16 with the visual matching method). Here k indicates the number of neighbors to receive information from; when k = 1, it reduces to taking only the maximum, which is what we use in PPCC. Performances obtained with different k are shown in Fig. 6. (2) We also study the softmax in Eq. (3) and compare results for different temperatures. The results are also shown in Fig. 6. Clearly, using a smaller softmax temperature significantly boosts the performance. This study supports what we claimed when designing Competitive Consensus: we should propagate only the most confident information in this task. Analysis on Progressive Propagation.
Here we show the comparison between our progressive updating scheme and the conventional scheme that updates all the nodes at each iteration. For progressive propagation, we try two kinds of freezing mechanisms: (1) Step scheme: we set the freezing ratio of each iteration, and the ratio is raised step by step; more specifically, the freezing ratio r is set to r = 0.5 + 0.1 × iter in our experiments. (2) Threshold scheme: we set a threshold, and each time we freeze the nodes whose maximum probability for a particular identity is greater than the threshold. In our experiments, the threshold is set to 0.5. The results are shown in Table 5, from which we can see the effectiveness of the progressive scheme. Case Study. We show some samples that are correctly searched in different iterations in Fig. 7. We can see that the easy cases, which usually come with clear frontal faces, are identified at the beginning. After iterative propagation, the information reaches harder samples. At the end of the propagation, even some very hard samples, which are non-frontal, blurred, occluded and under extreme illumination, can be assigned the right identity. Conclusion In this paper, we studied a new problem named Person Search in Videos with One Portrait, which is challenging but practical in the real world. To promote research on this problem, we constructed a large-scale dataset, CSM, which contains 127K tracklets of 1,218 cast from 192 movies. To tackle this problem, we proposed a new framework that incorporates both visual and temporal links for identity propagation, with a novel Progressive Propagation via Competitive Consensus scheme. Both quantitative and qualitative studies show the challenges of the problem and the effectiveness of our approach. Acknowledgement This work is partially supported by the Big Data Collaboration Research grant from SenseTime Group (CUHK Agreement No.
TS1610626), the General Research Fund (GRF) of Hong Kong (No. 14236516).

Fig. 2: Examples of the CSM dataset. In each row, the photo on the left is the query portrait and the following tracklets are ground-truth tracklets of that cast in the gallery.

Fig. 3: Statistics of the CSM dataset. (a) the tracklet number distribution over movies; (b) the tracklet number of each movie, both credited cast and "others"; (c) the distribution of tracklet number over cast; (d) the distribution of length (frames) over tracklets; (e) the distribution of height (px) over tracklets.

2. Annotating bounding boxes on keyframes. We then manually annotated the person bounding boxes on keyframes and obtained around 700K bounding boxes. 3. Training a person detector. We trained a person detector with the annotated bounding boxes.

Fig. 4: Visual links and temporal links in our graph. We keep only the strongest link for each pair of tracklets; the two kinds of links are complementary.

Fig. 5: An example showing the difference between competitive consensus and linear diffusion. There are four nodes, with their probability vectors shown by their sides. Labels are propagated from the left nodes to the right node, but two of its neighbor nodes are noise. The calculation processes of linear diffusion and competitive consensus are shown on the right side: in a graph with much noise, our competitive consensus, which propagates only the most confident information, is more robust.

Fig. 6: mAP for different settings of competitive consensus: comparison between different temperatures (T) of the softmax and different settings of k (in top-k average).

Fig. 7: Some samples that are correctly searched in different iterations.

Table 1: Comparing CSM with related datasets.

Dataset    | CSM    | MARS [44] | iLIDS [38] | PRID [13] | Market [45] | PSD [40]   | PIPA [43]
task       | search | re-id     | re-id      | re-id     | re-id       | det.+re-id | recog.
type       | video  | video     | video      | video     | image       | image      | image
identities | 1,218  | 1,261     | 300        | 200       | 1,501       | 8,432      | 2,356
tracklets  | 127K   | 20K       | 600        | 400       | -           | -          | -
instances  | 11M    | 1M        | 44K        | 40K       | 32K         | 96K        | 63K

Table 2: train/val/test splits of CSM.

      | movies | cast  | tracklets | credited tracklets
train | 115    | 739   | 79K       | 47K
val   | 19     | 147   | 15K       | 8K
test  | 58     | 332   | 32K       | 18K
total | 192    | 1,218 | 127K      | 73K

Table 3: query/gallery sizes.

setting        | query | gallery
IN (per movie) | 6.4   | 560.5
ACROSS         | 332   | 17,927

Table 4: Results on CSM under the two test settings.

         |              IN               |            ACROSS
         | mAP   R@1   R@3   R@5         | mAP   R@1   R@3   R@5
FACE     | 53.33 76.19 91.11 96.34       | 42.16 53.15 61.12 64.33
IDE      | 17.17 35.89 72.05 88.05       | 1.67  1.68  4.46  6.85
FACE+IDE | 53.71 74.99 90.30 96.08       | 40.43 49.04 58.16 62.10
LP       | 8.19  39.70 70.11 87.34       | 0.37  0.41  1.60  5.04
PPCC-v   | 62.37 84.31 94.89 98.03       | 59.58 63.26 74.89 78.88
PPCC-vt  | 63.49 83.44 94.40 97.92       | 62.27 62.54 73.86 77.44

5.3 Results on CSM
We set up four baselines for comparison: (1) FACE: matching the portrait with the tracklets in the gallery by face-feature similarity; we use the mean feature of all the instances in a tracklet to represent it. (2) IDE: similar to FACE, except that the IDE features are used rather than the face features. (3) IDE+FACE: combining face similarity and IDE similarity for matching, with weights 0.8 and 0.2 respectively. (4) LP: conventional label propagation with linear diffusion over both visual and temporal links; specifically, we use face similarity for the visual links between portraits and candidates, and IDE similarity for the visual links between candidates. We also consider two settings of the proposed Progressive Propagation via Competitive Consensus method: (5) PPCC-v, using only visual links, and (6) PPCC-vt, the full configuration with both visual and temporal links.
Analysis on Competitive Consensus. To show the effectiveness of Competitive Consensus, we study different settings of the scheme in two aspects: (1) The max in Eq.
(2) can be relaxed to a top-k average.

Table 5: Results of different updating schemes.

             |              IN               |            ACROSS
             | mAP   R@1   R@3   R@5         | mAP   R@1   R@3   R@5
Conventional | 60.54 76.64 91.63 96.70       | 57.42 54.60 63.31 66.41
Threshold    | 62.51 81.04 93.61 97.48       | 61.20 61.54 72.31 76.01
Step         | 63.49 83.44 94.40 97.92       | 62.27 62.54 73.86 77.44

Code and data at http://qqhuang.cn/projects/eccv18-person-search/
arXiv:1807.10510v1 [cs.CV] 27 Jul 2018

References
[1] Ahmed, E., Jones, M., Marks, T.K.: An improved deep learning architecture for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3908-3916 (2015)
[2] Apostolidis, E., Mezaris, V.: Fast shot segmentation combining global and local visual descriptors. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 6583-6587. IEEE (2014)
[3] Cheng, D., Gong, Y., Zhou, S., Wang, J., Zheng, N.: Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1335-1344 (2016)
[4] Ding, S., Lin, L., Wang, G., Chao, H.: Deep feature learning with relative distance comparison for person re-identification. Pattern Recognition 48(10), 2993-3003 (2015)
[5] Feris, R., Bobbitt, R., Brown, L., Pankanti, S.: Attribute-based people search: Lessons learnt from a practical surveillance system. In: Proceedings of International Conference on Multimedia Retrieval. p. 153. ACM (2014)
[6] Gheissari, N., Sebastian, T.B., Hartley, R.: Person reidentification using spatiotemporal appearance. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. vol. 2, pp. 1528-1535. IEEE (2006)
[7] Gong, S., Cristani, M., Yan, S., Loy, C.C.: Person re-identification. Springer (2014)
[8] Gou, M., Karanam, S., Liu, W., Camps, O., Radke, R.J.: DukeMTMC4ReID: A large-scale multi-camera person re-identification dataset. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017)
[9] Gray, D., Tao, H.: Viewpoint invariant pedestrian recognition with an ensemble of localized features. In: European Conference on Computer Vision. pp. 262-275. Springer (2008)
[10] Guo, Y., Zhang, L., Hu, Y., He, X., Gao, J.: MS-Celeb-1M: Challenge of recognizing one million celebrities in the real world. Electronic Imaging 2016(11), 1-6 (2016)
[11] Hamdoun, O., Moutarde, F., Stanciulescu, B., Steux, B.: Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences. In: Second ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2008). pp. 1-6. IEEE (2008)
[12] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778 (2016)
[13] Hirzer, M., Beleznai, C., Roth, P.M., Bischof, H.: Person re-identification by descriptive and discriminative classification. In: Scandinavian Conference on Image Analysis. pp. 91-102. Springer (2011)
[14] Huang, Q., Xiong, Y., Lin, D.: Unifying identification and context learning for person recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2217-2225 (2018)
[15] Joon Oh, S., Benenson, R., Fritz, M., Schiele, B.: Person recognition in personal photo collections. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3862-3870 (2015)
[16] Karanam, S., Gou, M., Wu, Z., Rates-Borras, A., Camps, O., Radke, R.J.: A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets. arXiv preprint arXiv:1605.09653 (2016)
[17] Koestinger, M., Hirzer, M., Wohlhart, P., Roth, P.M., Bischof, H.: Large scale metric learning from equivalence constraints. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 2288-2295. IEEE (2012)
[18] Kumar, V., Namboodiri, A.M., Jawahar, C.: Face recognition in videos by label propagation. In: 2014 22nd International Conference on Pattern Recognition (ICPR). pp. 303-308. IEEE (2014)
[19] Li, H., Brandt, J., Lin, Z., Shen, X., Hua, G.: A multi-level contextual model for person recognition in photo albums. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1297-1305 (2016)
[20] Li, H., Lu, H., Lin, Z., Shen, X., Price, B.: Inner and inter label propagation: salient object detection in the wild. IEEE Transactions on Image Processing 24(10), 3176-3186 (2015)
[21] Li, S., Xiao, T., Li, H., Zhou, B., Yue, D., Wang, X.: Person search with natural language description. In: Proc. CVPR (2017)
[22] Li, W., Zhao, R., Xiao, T., Wang, X.: DeepReID: Deep filter pairing neural network for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 152-159 (2014)
[23] Liao, S., Hu, Y., Zhu, X., Li, S.Z.: Person re-identification by local maximal occurrence representation and metric learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2197-2206 (2015)
[24] Lin, D., Kapoor, A., Hua, G., Baker, S.: Joint people, event, and location recognition in personal photo collections using cross-domain context. In: European Conference on Computer Vision. pp. 243-256. Springer (2010)
[25] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: European Conference on Computer Vision. pp. 740-755. Springer (2014)
[26] Ma, B., Su, Y., Jurie, F.: Local descriptors encoded by Fisher vectors for person re-identification. In: European Conference on Computer Vision. pp. 413-422. Springer (2012)
[27] Ma, B., Su, Y., Jurie, F.: Covariance descriptor based on bio-inspired features for person re-identification and face verification. Image and Vision Computing 32(6-7), 379-390 (2014)
[28] Prosser, B.J., Zheng, W.S., Gong, S., Xiang, T., Mary, Q.: Person re-identification by support vector ranking. In: BMVC. vol. 2, p. 6 (2010)
[29] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems.
7Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detec- tion with region proposal networks. In: Advances in neural information processing systems. pp. 91-99 (2015) 7 Transfer learning in a transductive setting. M Rohrbach, S Ebert, B Schiele, Advances in neural information processing systems. 4Rohrbach, M., Ebert, S., Schiele, B.: Transfer learning in a transductive setting. In: Advances in neural information processing systems. pp. 46-54 (2013) 4 Imagenet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, International Journal of Computer Vision. 115312Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recog- nition challenge. International Journal of Computer Vision 115(3), 211-252 (2015) 12 Learning transferrable representations for unsupervised domain adaptation. O Sener, H O Song, A Saxena, S Savarese, Advances in Neural Information Processing Systems. 4Sener, O., Song, H.O., Saxena, A., Savarese, S.: Learning transferrable represen- tations for unsupervised domain adaptation. In: Advances in Neural Information Processing Systems. pp. 2110-2118 (2016) 4 Real-time semantic segmentation with label propagation. R Sheikh, M Garbade, J Gall, European Conference on Computer Vision. Springer4Sheikh, R., Garbade, M., Gall, J.: Real-time semantic segmentation with label propagation. In: European Conference on Computer Vision. pp. 3-14. Springer (2016) 4 Temporal video segmentation to scenes using high-level audiovisual features. P Sidiropoulos, V Mezaris, I Kompatsiaris, H Meinedo, M Bugalho, I Trancoso, IEEE Transactions on Circuits and Systems for Video Technology. 2186Sidiropoulos, P., Mezaris, V., Kompatsiaris, I., Meinedo, H., Bugalho, M., Tran- coso, I.: Temporal video segmentation to scenes using high-level audiovisual fea- tures. 
IEEE Transactions on Circuits and Systems for Video Technology 21(8), 1163-1177 (2011) 6 Deep attributes driven multi-camera person re-identification. C Su, S Zhang, J Xing, W Gao, Q Tian, European conference on computer vision. Springer4Su, C., Zhang, S., Xing, J., Gao, W., Tian, Q.: Deep attributes driven multi-camera person re-identification. In: European conference on computer vision. pp. 475-491. Springer (2016) 4 Detecting temporally consistent objects in videos through object class label propagation. S Tripathi, S Belongie, Y Hwang, T Nguyen, Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on. IEEE4Tripathi, S., Belongie, S., Hwang, Y., Nguyen, T.: Detecting temporally consistent objects in videos through object class label propagation. In: Applications of Com- puter Vision (WACV), 2016 IEEE Winter Conference on. pp. 1-9. IEEE (2016) 4 Graph transduction via alternating minimization. J Wang, T Jebara, S F Chang, Proceedings of the 25th international conference on Machine learning. the 25th international conference on Machine learningACM4Wang, J., Jebara, T., Chang, S.F.: Graph transduction via alternating minimiza- tion. In: Proceedings of the 25th international conference on Machine learning. pp. 1144-1151. ACM (2008) 4 Person re-identification by discriminative selection in video ranking. T Wang, S Gong, X Zhu, S Wang, IEEE transactions. 38125Wang, T., Gong, S., Zhu, X., Wang, S.: Person re-identification by discriminative selection in video ranking. IEEE transactions on pattern analysis and machine intelligence 38(12), 2501-2514 (2016) 2, 3, 5 Learning deep feature representations with domain guided dropout for person re-identification. T Xiao, H Li, W Ouyang, X Wang, Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on. 24Xiao, T., Li, H., Ouyang, W., Wang, X.: Learning deep feature representations with domain guided dropout for person re-identification. 
In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on. pp. 1249-1258. IEEE (2016) 2, 4 Joint detection and identification feature learning for person search. T Xiao, S Li, B Wang, L Lin, X Wang, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE45Xiao, T., Li, S., Wang, B., Lin, L., Wang, X.: Joint detection and identification feature learning for person search. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3376-3385. IEEE (2017) 4, 5 Keeping track of humans: Have i seen this person before? In: Robotics and Automation. W Zajdel, Z Zivkovic, B Krose, ICRA 2005. Proceedings of the 2005 IEEE International Conference on. IEEE3Zajdel, W., Zivkovic, Z., Krose, B.: Keeping track of humans: Have i seen this person before? In: Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on. pp. 2081-2086. IEEE (2005) 3 Joint face detection and alignment using multitask cascaded convolutional networks. K Zhang, Z Zhang, Z Li, Y Qiao, IEEE Signal Processing Letters. 231012Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23(10), 1499-1503 (2016) 12 Beyond frontal faces: Improving person recognition using multiple cues. N Zhang, M Paluri, Y Taigman, R Fergus, L Bourdev, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition25Zhang, N., Paluri, M., Taigman, Y., Fergus, R., Bourdev, L.: Beyond frontal faces: Improving person recognition using multiple cues. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4804-4813 (2015) 2, 4, 5 Mars: A video benchmark for large-scale person re-identification. L Zheng, Z Bie, Y Sun, J Wang, C Su, S Wang, Q Tian, European Conference on Computer Vision. 
Springer2113, 4Zheng, L., Bie, Z., Sun, Y., Wang, J., Su, C., Wang, S., Tian, Q.: Mars: A video benchmark for large-scale person re-identification. In: European Conference on Computer Vision. pp. 868-884. Springer (2016) 2, 3, 4, 5, 11 Scalable person reidentification: A benchmark. L Zheng, L Shen, L Tian, S Wang, J Wang, Q Tian, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer Vision25Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., Tian, Q.: Scalable person re- identification: A benchmark. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1116-1124 (2015) 2, 3, 5 Learning with local and global consistency. D Zhou, O Bousquet, T N Lal, J Weston, B Schölkopf, Advances in neural information processing systems. 34Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Schölkopf, B.: Learning with local and global consistency. In: Advances in neural information processing systems. pp. 321-328 (2004) 3, 4 Learning from labeled and unlabeled data with label propagation. X Zhu, Z Ghahramani, 34Zhu, X., Ghahramani, Z.: Learning from labeled and unlabeled data with label propagation (2002) 3, 4 Person identity label propagation in stereo videos. O Zoidi, A Tefas, N Nikolaidis, I Pitas, IEEE Transactions on Multimedia. 1654Zoidi, O., Tefas, A., Nikolaidis, N., Pitas, I.: Person identity label propagation in stereo videos. IEEE Transactions on Multimedia 16(5), 1358-1368 (2014) 4
[]